<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<title>Octopus blog</title>
<subtitle>Site description.</subtitle>
<link href="https://octopus.com/blog/feed.xml" rel="self" />
<link href="https://octopus.com" />
<id>https://octopus.com/blog/feed.xml</id>
<updated>2025-04-16T00:00:00.000Z</updated>
<entry>
<title>Introducing Kubernetes Live Object Status</title>
<link href="https://octopus.com/blog/kubernetes-live-object-status" />
<id>https://octopus.com/blog/kubernetes-live-object-status</id>
<published>2025-04-16T00:00:00.000Z</published>
<updated>2025-09-15T00:00:00.000Z</updated>
<summary>The Octopus Kubernetes monitor is the next major expansion of capabilities of the Octopus Kubernetes agent.</summary>
<author>
<name>Nick Josevski, Octopus Deploy</name>
</author>
<content type="html"><![CDATA[<p>Kubernetes is rapidly becoming the dominant platform for hosting and running applications.</p>
<p>At Octopus, we want to provide the best experience for deploying and monitoring your applications on Kubernetes.</p>
<p>To make your deployments to Kubernetes simpler, faster, and safer, Octopus has a deployment target called the Kubernetes agent. The Kubernetes agent is a small, lightweight application you install into your Kubernetes cluster.</p>
<p>Kubernetes Live Object Status—our Kubernetes monitor—is the next major expansion of this deployment agent’s capabilities.</p>
<h2 id="post-deployment-monitoring">Post-deployment monitoring</h2>
<p>Monitoring is an important part of the deployment process and is even more important for Kubernetes. It gives you confidence that your applications are running as expected.</p>
<p>When troubleshooting an unhealthy app, you often need to use a combination of tools and login credentials to figure out what’s going wrong. That can be quite fiddly, especially with Kubernetes. Your Octopus Deploy instance already has these credentials and awareness of your applications, similar to runbooks. We’re aiming to make Octopus the first port of call as you and your team continuously deliver software.</p>
<p>We roll up the combined status for the objects in a given release, per environment, into a single live status.</p>
<p><img src="/blog/img/kubernetes-live-object-status/klos-project-view.png" alt="Project view with live status"><em>Project view with live status</em></p>
<h3 id="detailed-resource-inspection">Detailed resource inspection</h3>
<p>Dive into the details of each Kubernetes resource to understand how your deployment has been configured and monitor your application itself.</p>
<p>Kubernetes Live Object Status gives you a quick view of all the Kubernetes resources included in your application. You can then dig into each resource to view its manifests, events, and application logs.</p>
<p><img src="/blog/img/kubernetes-live-object-status/live-status-drawer-manifest.png" alt="Kubernetes resource manifest"><em>Kubernetes resource manifest</em></p>
<h2 id="if-youre-already-using-the-kubernetes-agent">If you’re already using the Kubernetes agent</h2>
<p>If you already use the Kubernetes agent, your upgrade path will be simple.</p>
<h3 id="upgrading-your-agents-to-the-version-containing-the-monitor">Upgrading your agents to the version containing the monitor</h3>
<p>We’re working on a one-click upgrade process you can access in Octopus Deploy.</p>
<p>If you can’t wait until then, you can upgrade existing Kubernetes agents by running a Helm command on your cluster. <a href="https://octopus.com/docs/kubernetes/live-object-status/installation#upgrading-an-existing-kubernetes-agent">See our documentation for all the details</a>.</p>
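<p>For reference, the upgrade is a standard Helm command. The sketch below is illustrative only and assumes the chart’s public OCI location; the release name, namespace, and version are placeholders, and the documentation linked above has the exact command and values for your installation:</p>
<pre><code># Illustrative sketch: release name and namespace are placeholders.
# See the linked docs for the exact command for your agent installation.
helm upgrade octopus-agent \
  oci://registry-1.docker.io/octopusdeploy/kubernetes-agent \
  --namespace octopus-agent \
  --reuse-values \
  --atomic
</code></pre>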
<h2 id="new-to-using-octopus-for-kubernetes-deployments">New to using Octopus for Kubernetes deployments?</h2>
<p>After you install the agent, it registers itself with Octopus Server as a new deployment target. This lets you deploy your applications and manifests into that cluster, without the need for workers, external credentials, or custom tooling. All new installations of the agent will have the monitor enabled.</p>
<h3 id="installing-the-agent">Installing the agent</h3>
<p>The Kubernetes agent is packaged and installed via a Helm chart. This makes managing the agent simple and automated installation easy.</p>
<p>The Kubernetes monitoring component comes along for the ride. <a href="https://octopus.com/docs/kubernetes/live-object-status/installation">See our docs for detailed instructions</a>.</p>
<p><img src="/blog/img/kubernetes-live-object-status/kubernetes-agent-wizard-config.png" alt="Kubernetes agent wizard configuration options"><em>Kubernetes agent configuration options</em></p>
<h2 id="new-to-octopus-deploy-entirely">New to Octopus Deploy entirely?</h2>
<p>How exciting! Welcome to scalable, simple, and safe Kubernetes CD with Octopus.</p>
<p>Octopus is one user-friendly tool for developers to deploy, verify, and troubleshoot their apps. Platform engineers use this powerful tool to fully automate Continuous Delivery, manage configuration templates, and implement compliance, security, and auditing best practices.</p>
<p>We empower your teams to spend less time managing and troubleshooting Kubernetes deployments and more time shipping new features to improve your software.</p>
<p>Octopus models environments out-of-the-box and reduces the need for custom scripting. You define your deployment process once and reuse it for all your environments. You can go to production confidently as your process has already worked in other environments.</p>
<p>If you’re interested in trying it out, sign up for a <a href="https://octopus.com/start">free 30-day trial</a>.</p>
<h3 id="getting-started-with-the-agent-and-monitor">Getting started with the agent and monitor</h3>
<p>The Octopus Kubernetes agent targets are a mechanism for executing Kubernetes steps and monitoring application health from inside the target Kubernetes cluster, rather than via an external API connection.</p>
<p>Like the Octopus Tentacle, the Kubernetes agent is a small, lightweight application that’s installed into the target Kubernetes cluster.</p>
<p>You install the Kubernetes agent using Helm via the octopusdeploy/kubernetes-agent chart. For the complete details, see our docs about <a href="https://octopus.com/docs/kubernetes/targets/kubernetes-agent#installing-the-kubernetes-agent">installing the Kubernetes agent</a>.</p>
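<p>As a rough sketch, and again assuming the chart’s public OCI location, an installation looks something like the command below. The release name, namespace, and values file are placeholders; the in-app wizard generates the exact command, including the server URL, bearer token, and target name for your space:</p>
<pre><code># Illustrative sketch only: the Octopus wizard generates the real command and values.
# agent-values.yaml stands in for the server URL, bearer token, and target name it supplies.
helm upgrade --install --atomic \
  --create-namespace --namespace octopus-agent-my-agent \
  --values agent-values.yaml \
  my-agent \
  oci://registry-1.docker.io/octopusdeploy/kubernetes-agent
</code></pre>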
<h3 id="when-can-i-use-it">When can I use it?</h3>
<div class="info"><p>Kubernetes Live Object Status is now generally available and recommended for production use for Octopus Cloud and self-hosted customers running Octopus Server 2025.3 or later.</p><p>Support for Octopus Server running in high availability clusters is not yet available, but will be coming in the next self-hosted release.</p></div>
<p>Kubernetes Live Object Status is available now as an Early Access Preview (EAP) in Octopus Cloud! If you don’t see the feature available, please reach out and we can fast-track your cloud instance getting this release.</p>
<p>Remember this is an opt-in upgrade for existing Octopus agents installed on your cluster(s). <a href="https://octopus.com/docs/kubernetes/live-object-status/installation#upgrading-an-existing-kubernetes-agent">See this documentation page for all the details</a>.</p>
<p><img src="/blog/img/kubernetes-live-object-status/kubernetes-agent-wizard-config.png" alt="Kubernetes agent as deployment targets"><em>Kubernetes agent as deployment targets</em></p>
<h2 id="how-we-built-kubernetes-live-object-status">How we built Kubernetes Live Object Status</h2>
<p>To handle a potentially large flow of new data coming into Octopus Server, a separate, non-disruptive web host runs alongside the main host. This level of isolation gives us confidence that this is an additive feature, and that if there are performance complications, they’ll be isolated and managed with minimal impact on Octopus Server’s regular operations.</p>
<p>The cluster-based monitoring capability uses two values to identify the incoming request:</p>
<ul>
<li>The client certificate thumbprint</li>
<li>An installation ID in the request headers</li>
</ul>
<p>Octopus Server uses a long-lived bearer token as a shared secret for authentication. The token is generated when the monitoring capability is installed in the cluster and registers with Octopus Server. Customers can rotate this token, and it’s only valid for use on the gRPC endpoint.</p>
<p>This allowed us to build gRPC services to handle the data flowing from the monitoring agent in the Kubernetes clusters. <a href="https://grpc.io/">gRPC</a> is a modern open-source high-performance remote procedure call (RPC) framework. This is the first time we’re using gRPC as part of an Octopus feature.</p>
<p>In the cluster, alongside the Octopus Kubernetes agent, we have this new component that’s responsible for the monitoring aspect. It sits in the cluster and monitors the deployed resources, pumping relevant live-status data back out over gRPC to Octopus Deploy.</p>
<p>As we also run Octopus Deploy in Kubernetes for our Octopus Cloud customers, we have a new nginx-based ingress configuration to help with partitioning and scalability. To find out more, have a look at <a href="https://www.youtube.com/watch?v=DH7YDySEPHU">how we use Kubernetes for Octopus Cloud</a>.</p>
<h3 id="written-in-go">Written in Go</h3>
<p>This is the first large-scale feature our team has built in <a href="https://go.dev/">Golang</a> at Octopus. It gives us access to a large set of great libraries built for Kubernetes, such as the Helm packages and the Argo GitOps engine. Our team got an expertise uplift from the Codefresh engineers, who are now part of Octopus.</p>
<p>The GitOps engine is a flexible library with enough configuration and extension points for us to save very specific information on a per-resource basis. This helps us get the right information out of the cluster and back to Octopus. Go is also the de facto programming language for Kubernetes.</p>
<p>We’re exploring options to open-source parts of our implementation. Stay tuned for when that’s all decided, as we’ll have a follow-up blog post. The likely first step will be making the source available for inspection. This is part of offering more transparency into the tools we’re asking customers to run in the security context of their clusters.</p>
<h2 id="whats-coming-next">What’s coming next</h2>
<p>Today’s release is the EAP. The list below represents capabilities we think are worth adding next (though it’s not the complete list). If you have thoughts and opinions, please reach out to us in the comments section below or on our <a href="https://octopus.com/slack">community Slack</a>.</p>
<ul>
<li>Terraform-based setup</li>
<li>Support Kubernetes API targets</li>
<li>Octopus HA (multi-node server) support</li>
<li>Custom health checks</li>
<li>Orphan and drift detection</li>
</ul>
<h3 id="this-looks-cool-but-what-if-i-dont-deploy-to-kubernetes">This looks cool, but what if I don’t deploy to Kubernetes?</h3>
<p>Currently, there are no plans to extend this beyond Kubernetes deployments. Please let us know where and why you’d like to use this monitoring capability.</p>
<h2 id="let-us-know-your-thoughts">Let us know your thoughts</h2>
<p>We’re excited to see how you use this monitoring feature. Please let us know in the comments section below or on our <a href="https://octopus.com/slack">community Slack</a> what new opportunities this opens up for your application delivery objectives.</p>
<p>Happy deployments!</p>]]></content>
</entry>
<entry>
<title>Troubleshooting common Octopus Deploy issues</title>
<link href="https://octopus.com/blog/troubleshooting-common-octopus-deploy-issues" />
<id>https://octopus.com/blog/troubleshooting-common-octopus-deploy-issues</id>
<published>2025-09-11T00:00:00.000Z</published>
<updated>2025-09-11T00:00:00.000Z</updated>
<summary>Exploring common issues in Octopus Deploy and ways to resolve them.</summary>
<author>
<name>Donny Bell, Octopus Deploy</name>
</author>
<content type="html"><![CDATA[<p>Octopus Deploy provides a powerful, flexible platform for automating deployments and runbook execution. However, there are times when you may encounter challenges that require troubleshooting. In this post, we’ll walk through some of the most common issues that users may run into and provide guidance on how to resolve them.</p>
<h2 id="tentacle-communication-issues">Tentacle communication issues</h2>
<p>Deployment targets and workers running the Octopus Tentacle agent are key to Octopus infrastructure. If Tentacles cannot communicate with your Octopus Server, deployments and runbooks will fail. Some of the issues we commonly see when setting up a new Tentacle agent include: firewall restrictions, network connectivity issues, SSL offloading, and misconfigured certificates.</p>
<p>Our <a href="https://octopus.com/docs/infrastructure/deployment-targets/tentacle/troubleshooting-tentacles">Troubleshooting Tentacles documentation</a> is a comprehensive guide for troubleshooting Tentacle communication issues. It covers:</p>
<ul>
<li>Verifying that Tentacle services are running</li>
<li>Checking firewall rules</li>
<li>Ensuring correct thumbprints are configured</li>
<li>Debugging connectivity with Octopus diagnostic tools</li>
</ul>
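<p>For example, a quick first check for a trust failure is to compare the thumbprint each side expects. The sketch below assumes the default Windows install location; adjust the path for your environment:</p>
<pre><code># Print the certificate thumbprint this Tentacle presents (default install path shown).
"C:\Program Files\Octopus Deploy\Tentacle\Tentacle.exe" show-thumbprint

# Compare the output against the thumbprint recorded for the deployment target
# or worker under Infrastructure in the Octopus Web Portal.
</code></pre>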
<h2 id="calamari-issues-and-antivirus-exclusions">Calamari issues and antivirus exclusions</h2>
<p>You may occasionally run into an error message in your logs that includes:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="text"><code><span class="line"><span>Bootstrapper did not return the bootstrapper service message</span></span></code></pre>
<p>This normally indicates that antivirus or other security software is interfering with an Octopus task (such as a deployment or runbook).</p>
<p>Octopus tasks are powered by <a href="https://octopus.com/docs/octopus-rest-api/calamari">Calamari</a>, a lightweight deployment bootstrapper invoked for each deployment or runbook step. It’s automatically installed and updated as needed in the <em>Tools</em> folder of the <em>Tentacle home directory</em>. Additionally, steps for a given task are processed in a temporary folder inside the Work folder, which also resides in the Tentacle home directory.</p>
<p>Sometimes, antivirus or endpoint protection software can lock or quarantine files in these folders, causing deployments to fail.
To prevent this, we recommend working with your security team to add exclusions as necessary for these directories. For additional information, please review our <a href="https://octopus.com/docs/security/hardening-octopus#configure-malware-protection">Hardening Octopus documentation</a>.</p>
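<p>As a rough example, on a Windows Tentacle using the default home directory, the exclusions you’d request typically cover folders like these (the paths will differ if you changed the home directory during installation):</p>
<pre><code># Assumes the default Tentacle home directory of C:\Octopus; adjust to match your setup.
C:\Octopus\Tools   (Calamari and other tooling Octopus installs)
C:\Octopus\Work    (temporary per-task working folders)
</code></pre>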
<h2 id="polling-tentacles-over-port-443-https">Polling tentacles over port 443 (HTTPS)</h2>
<p>In some environments, firewall policies can make it difficult or impossible to open additional ports. <a href="https://octopus.com/docs/infrastructure/deployment-targets/tentacle/polling-tentacles-over-port-443">Octopus supports configuring Polling Tentacles over port 443</a>, which allows communication through a port that is typically already allowed in enterprise networks.</p>
<p>This option simplifies network configuration and can reduce the setup burden in restrictive environments. It also gives Octopus instances that communicate with other organizations or network environments a path for Tentacle communication that may otherwise not be possible.</p>
<h2 id="variable-snapshots-in-projects-and-runbooks">Variable snapshots in Projects and Runbooks</h2>
<p>Octopus variables seem simple, but they’re a common point of confusion that we often help our users with. Octopus leverages <a href="https://octopus.com/docs/releases">project releases</a> and <a href="https://octopus.com/docs/runbooks/runbook-publishing">runbook snapshots</a> to preserve an immutable set of information that makes deployments and runbooks repeatable and predictable.</p>
<p>You can view the variable values associated with a release by selecting the [Show Snapshot] option in the Variable Snapshot section of a release or runbook. This can be a helpful step for confirming the variable values for a given release or runbook.</p>
<figure>
<p><img src="/blog/img/troubleshooting-common-octopus-deploy-issues/light-image1.png" alt="Octopus Deploy UI showing a project release and the steps to update the variables for that release"></p>
</figure>
<p><a href="https://octopus.com/docs/projects/variables">Project variables and associated library variable set variables</a> are captured in a snapshot when a release or runbook snapshot is created. In order for variable updates to take effect, you must also do the following in an associated project or runbook:</p>
<p>For projects:</p>
<ul>
<li>Create a new release so that the variable snapshot updates, or</li>
<li>Update an existing release’s variable snapshot</li>
</ul>
<p>For runbooks:</p>
<ul>
<li>Create and publish a new runbook snapshot</li>
</ul>
<div class="hint"><p>The exception to the above is changes to Tenant variables.</p><p>From our <a href="https://octopus.com/docs/tenants/tenant-variables#tenant-variables-and-snapshots">Tenant Variables documentation</a>:</p><blockquote>
<p><em>[…] we don’t take a snapshot of tenant variables. This enables you to add new tenants at any time and deploy to them without creating a new release. This means any changes you make to tenant-variables will take immediate effect.</em></p>
</blockquote></div>
<h2 id="debugging-variables-with-variable-logging">Debugging variables with variable logging</h2>
<p>When deployments or runbooks don’t behave as expected, variable issues are a common culprit. Octopus provides the ability to debug variables by <a href="https://octopus.com/docs/support/how-to-turn-on-variable-logging-and-export-the-task-log">enabling variable logging and viewing the raw task log</a>.</p>
<p>By turning on variable logging, you can:</p>
<ul>
<li>Inspect the evaluated values of your variables</li>
<li>Verify scoping and precedence rules</li>
<li>Export raw task logs for detailed review</li>
<li>Save significant troubleshooting time when debugging complex variable configurations</li>
</ul>
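<p>Turning variable logging on is itself done with variables: you add the two debugging variables from the documentation linked above to your project, then create a new release or snapshot so they take effect. They make task logs much more verbose, so set them to False or remove them when you’re done:</p>
<pre><code>OctopusPrintVariables           = True
OctopusPrintEvaluatedVariables  = True
</code></pre>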
<p>Alternatively, you may now enable Debug Mode for Octopus deployments and runbooks. For project deployments, this option is available on the “deploy” screen:</p>
<figure>
<p><img src="/blog/img/troubleshooting-common-octopus-deploy-issues/light-image2.png" alt="Octopus Deploy UI showing a project release and the steps to enable or disable debug mode"></p>
</figure>
<p>When running a runbook, you must click the <strong><code>Show advanced</code></strong> button to reveal Debug mode:</p>
<figure>
<p><img src="/blog/img/troubleshooting-common-octopus-deploy-issues/light-image3.png" alt="Octopus Deploy UI showing a runbook snapshot and the steps to enable or disable debug mode"></p>
</figure>
<h2 id="resources-for-custom-api-scripts">Resources for custom API scripts</h2>
<p><a href="https://octopus.com/docs/octopus-rest-api">Octopus Deploy features a powerful REST API</a>. Many Octopus users extend their automation by writing custom scripts that interact with Octopus programmatically. You can find <a href="https://octopus.com/docs/octopus-rest-api/examples">API examples in our documentation</a>. We also offer a <a href="https://github.com/OctopusDeploy/OctopusDeploy-Api/tree/master/REST">public GitHub repository</a> with many scripts that may fit your needs as written or provide a good baseline to iterate and customize for your needs.</p>
<p>If you can’t find what you need or would like additional inspiration, our <a href="https://octopus.com/slack">Octopus Community Slack channel</a> is a great place to interact with other Octopus users and Octopus employees who can help!</p>
<h2 id="conclusion">Conclusion</h2>
<p>Octopus Deploy is a powerful deployment tool that can handle many complex and scaled scenarios. If you need additional help, <a href="https://octopus.com/support">contact Octopus Support</a>.</p>]]></content>
</entry>
<entry>
<title>Your IDP needs DDD</title>
<link href="https://octopus.com/blog/your-idp-needs-ddd" />
<id>https://octopus.com/blog/your-idp-needs-ddd</id>
<published>2025-09-09T00:00:00.000Z</published>
<updated>2025-09-09T00:00:00.000Z</updated>
<summary>As Platform Engineering grows into a movement at scale, we need to revisit the past and apply some lessons from domain-driven design to our internal developer platforms (IDPs).</summary>
<author>
<name>Steve Fenton, Octopus Deploy</name>
</author>
<content type="html"><![CDATA[<p>It has been more than two decades since Eric Evans published his book on domain-driven design (DDD). The idea was to create software designed after the business domain, using the same language and mental models people used outside of the software team.</p>
<p>We don’t talk about domain-driven design much these days. But as George Santayana said: “Those who cannot remember the past are condemned to repeat it.”</p>
<p>As Platform Engineering grows into a movement at scale, we need to revisit the past and apply some lessons from domain-driven design to our internal developer platforms (IDPs). Otherwise, we continually step on rakes to learn why they shouldn’t be left on the lawn.</p>
<p>There’s a host of interesting interconnected ideas in domain-driven design, but one that resonates with Platform Engineering is the concept of a core domain.</p>
<h2 id="what-core-domains-are">What core domains are</h2>
<p>When you build software, there are several areas where your innovation, opinions, and solutions create unique value for your organization. You also need many things that don’t add much value, but your software isn’t viable without them.</p>
<p>Let’s use a pizza restaurant as an example. If you sell pizza, you want to make it easy for customers to choose what they want to eat and have it delivered. To complete the process, you need to look up their address and take a payment.</p>
<p>Core domains are the areas where you want to do something different that will give you a competitive edge. For your pizza company, that might be how you present the menu, collect customizations, and offer deals and rewards.</p>
<p>Non-core domains, also called generic domains, are areas where innovation and differentiation make little difference, or where doing things differently may even be undesirable. Customers expect address lookup and payment to work the way they do elsewhere. They don’t want you to be innovative here, as it makes your software harder to use.</p>
<p>So, a core domain is something unique or special to your organization. It’s essential to your business’s existence, and it’s where you should invest the most. It’s something you want to do so well that it’s hard for your competitors to copy.</p>
<h2 id="the-problem-of-generic-domains">The problem of generic domains</h2>
<p>When you spend time on generic domains, you direct time, attention, energy, and investment away from the areas that impact your organization most. Generic domains have limited value because they don’t benefit from doing something different or unique. The pizza company will never create such an excellent payment flow that you’d choose their offering over a competitor who offers better customization.</p>
<p>Generic domains can be just as complex as your core domains, which means they can consume large amounts of investment. Suppose there’s a commercial provider of an offering in your generic domain space. In that case, they’ll be treating it as a core domain and innovating the space, which puts additional pressure on you to invest to avoid falling behind user expectations.</p>
<p>Organizations that avoid the generic domain trap can outpace their competitors as they spend more time working on features that will make them stand out.</p>
<h2 id="how-to-tame-generic-domains">How to tame generic domains</h2>
<p>There’s an easy way to avoid the generic domain trap. Domain-driven design provides a pattern for managing them, which recognizes the asymmetry in the value gained by investment in core domains versus generic domains.</p>
<p>Instead of reducing costs on paper, the goal is to minimize the real cost of working on generic domains: Lost value.</p>
<figure><p><img src="/blog/img/your-idp-needs-ddd/taming-generic-domains.png" alt="Domain-driven design prefers to buy off the shelf, then falls back to isolation, outsourcing, and minimalism"></p></figure>
<p>You should work through this list from the top and choose the earliest exit available.</p>
<ol>
<li><strong>Off-the-shelf</strong>: Look for existing software or services that address the generic domain. In particular, look to use:
<ol>
<li>Commercial products and software-as-a-service offerings where the provider treats it as a core domain. Their innovation and support will ensure the generic domain doesn’t become an anchor dragging you back.</li>
<li>Open source tools that are robust and maintained, and where the overheads of adopting and updating them are low.</li>
</ol>
</li>
<li><strong>Isolation</strong>: Where you have to build custom code for a generic domain, encapsulate and isolate it. Placing it behind a well-defined interface minimizes the impact on your core domain and lets you switch it out if an off-the-shelf solution emerges later.</li>
<li><strong>Outsourcing</strong>: While outsourcing your core domain can cause problems, outsourcing generic domains helps control the cost and distraction of the work. You can define the interface and have an outsourced team focus on the implementation details.</li>
<li><strong>Minimalism</strong>: When no other option is available, create a simple minimalist solution that meets the immediate need. Don’t over-engineer the generic domain or add features you don’t need. Be reluctant to iterate on the solution, and keep your eyes and ears open for when someone creates a software product or service you can use to replace it.</li>
</ol>
<h2 id="analyzing-build-versus-buy">Analyzing build versus buy</h2>
<p>Platform Teams who want to create the most significant force multiplier for the developers they serve need to protect their focus on fitting the tools to the organization. To do that, they need to identify and eliminate areas where their skills are wasted.</p>
<p>A crucial part of this optimization process is performing a solid build versus buy analysis, which should factor in the initial development cost, ongoing maintenance and support costs, and the opportunity cost of diverting resources away from the core domains.</p>
<p>Returning to the pizza restaurant, offloading the address lookup to a vendor that commonly provides this feature will mean users are familiar with how it works. The vendor will dedicate more attention to improving the user experience of their tool, and when the vendor introduces innovations, they appear commonly enough that users accept them.</p>
<p>Similarly, you’d want to offload payments to a fast and secure payment provider, so they work like they do on other sites and keep pace with developments like 3D Secure, tokenization, card security codes, multi-factor authorization, and biometrics. These industry innovations would have forced your developers to revisit this generic domain many times just to keep pace with the baseline.</p>
<h2 id="why-this-is-crucial-for-platform-engineering">Why this is crucial for Platform Engineering</h2>
<p>Domain-driven design (DDD) tells us to optimize our development efforts where they will have the most impact on the organization’s success. You shouldn’t over-invest in non-core domains, as they have limited business value and drain resources. Building custom solutions for non-core domains brings unnecessary complexity and maintenance burden, and for what? To build something that you could have bought or outsourced.</p>
<p>The software industry is about to re-learn the lessons that led to the discovery of domain-driven design. Industry-wide, we are dedicating thousands of developers to building the same thing. Not a simple minimalist solution to fill a gap left by commercial products, software-as-a-service, or open source software, but a million giant platforms that are out of date before they’ve been pushed to production.</p>
<p>Internal developer platforms sink unless platform teams can shift weight down to underlying tools. By transferring ballast to commercial products or open source tools, the platform team can get back to agility and create simple, minimal solutions that handle real gaps in toolchains caused by truly bespoke needs.</p>
<p>And it’s these very needs that are missed with behemoth platforms. An organization that needed to go an extra two miles on security may adopt Platform Engineering to tailor a truly robust solution to their security needs. As that platform grows and accumulates additional custom features, the focus on security is lost, and the investment is wasted building and maintaining code that doesn’t solve a unique need for the organization.</p>
<p>But it’s not all doom and gloom for Platform Engineers. Commercial and open source tools can rescue them from this inevitability by providing features that make it easy to shift the weight down and keep the platform light enough to float.</p>
<h2 id="float-on-our-platform-hub">Float on our Platform Hub</h2>
<p>That’s where Platform Hub comes in. By adding features platform teams need, like process templates, project templates, and policies, platform teams can transfer the effort down to Octopus and lighten their platform by removing thousands of lines of bespoke templating code.</p>
<p>Platform Teams can get back to focusing on the unique needs that make their platform vital to their organization. They will benefit from our innovative mechanisms for template management and policies, which go well beyond the attack of the template clones and the synchronization conflicts that platform teams report with their bespoke solutions.</p>
<p>Happy deployments!</p>]]></content>
</entry>
<entry>
<title>Focus on your end users when creating AI workloads</title>
<link href="https://octopus.com/blog/focus-on-end-users-for-ai" />
<id>https://octopus.com/blog/focus-on-end-users-for-ai</id>
<published>2025-09-04T00:00:00.000Z</published>
<updated>2025-09-04T00:00:00.000Z</updated>
<summary>Why it is important to focus on helping end users above all else when creating AI workloads.</summary>
<author>
<name>Bob Walker, Octopus Deploy</name>
</author>
<content type="html"><![CDATA[<p>Recently, I attended a conference targeted at CIOs, CTOs, and VPs of Technology. As expected, there were many sessions on AI and how it can help companies be more efficient. The example given was the well-known use of AI in the hiring process; using AI as gatekeepers to quickly weed out all the unqualified candidates. “Your human resources people won’t have to wade through so many CVs and phone screens!”</p>
<p>That use case improves the efficiency of human resources or your people team. But that efficiency comes at the cost of the end users, the people you are trying to hire. <em>Everyone hates</em> how AI is used in hiring processes today. Phrases like dystopian and Orwellian are common. In this article, I’ll discuss why focusing on both your AI feature’s beneficiary and end users is essential.</p>
<h2 id="beneficiary-user-vs-end-user">Beneficiary User vs. End User</h2>
<p>A beneficiary user is a person who benefits from leveraging AI. The end user is the person who will use an AI feature to accomplish a specific task.</p>
<p>Returning to the hiring process, the beneficiary user is responsible for wading through CVs and performing the initial phone screen. The end user is the person submitting their CV. The person in charge of going through CVs benefits from AI by offloading the repetitive work of screening unqualified candidates. Imagine a job posting for a senior .NET developer, but 30% of CVs submitted only include project manager experience. You might think I’m exaggerating, but you’d be surprised. As a former hiring manager who had to wade through CVs, I was shocked by how many people were “CV Bombing” - applying for as many positions as possible.</p>
<p>Looking at Octopus Deploy, the beneficiary of our AI Assistant is the platform engineer. The end user is the developer who uses the assistant to accomplish a particular task. For example, you can ask the Octopus AI Assistant why a deployment or runbook run failed. The AI Assistant will look at the failure message, and using our knowledge base, our docs, and the web, will come up with a reason why the failure occurred and suggestions on how to fix it. Assuming the suggestion is correct, the developer can quickly self-serve a solution without involving the platform engineer. The platform engineer benefits because they can focus on high-value tasks instead of helping debug a specific deployment failure. If the platform engineer didn’t know the answer, they’d go through our docs or do a Google search.</p>
<p>Now that we understand the two kinds of users, let’s examine what happens when a person is both the beneficiary and the end user.</p>
<h2 id="learning-the-wrong-lessons-from-the-success-of-chatgpt">Learning the wrong lessons from the success of ChatGPT</h2>
<p>ChatGPT and similar tools are unique because their users are both the beneficiary and the end user.</p>
<p>One of the many benefits of ChatGPT is that it is an evolution of search engines. Before ChatGPT, you did a Google search, which returned a list of results. The search engine ranked the results for you, using complex algorithms to find the best results based on its internal ranking system. A cottage SEO (Search Engine Optimization) industry sprang up to help sites rank higher. ChatGPT changed that by providing you with answers curated from the content of many websites.</p>
<p>For common questions, with many sources agreeing on the same answer, the results between Google and ChatGPT are close. ChatGPT is not infallible; once, it insisted that Omaha, Nebraska, was 29 nautical miles from Chicago, Illinois. Google can be more accurate, but that is a result of maturity. They’ve had 25 years to improve and iterate their search results algorithm.</p>
<p>ChatGPT is popular because of the interface. It is very similar to the Google Search box. The results are where they differ. ChatGPT collates information and generates an answer that is easy to read. In addition, Google Searches are very transactional: search, get a result, move on with your day. With ChatGPT, the sessions are interactive. You can ask additional questions, and ChatGPT remembers the entire conversation.</p>
<p>I’m only focused on ChatGPT’s question-and-answer aspect here. I know it can do so much more, including generating content and images, composing songs, and more.</p>
<p>Unfortunately, companies seem insistent on learning the wrong lessons when analyzing popular trends. They see that “people like prompts and providing answers or content to them. Let’s do that for [insert use case here]!”</p>
<h2 id="an-awful-user-experience-and-its-impact">An awful user experience and its impact</h2>
<p>That wrong lesson has its roots in computer graphic adventure games from the 1980s/early 1990s.</p>
<p>My first computer game was <a href="https://en.wikipedia.org/wiki/Space_Quest_III">Space Quest III</a> from Sierra. Like computer games of that era, I typed in commands to get the on-screen character to act. There was no help guide or tutorial. I had to figure it out. My brother and I spent <em>weeks</em> trying to escape the first area. We had to find the magic set of commands to execute in a specific sequence in specific areas.</p>
<p>Last year, I started the multi-month process of changing banks from a regional to a national bank. The national bank offered a high-yield savings account, while the regional bank didn’t. I had to call the national bank a few times. They have followed the latest trend in phone support. Dial the number, and a human-sounding voice asks you what you need help with. Too often, the response is “I’m sorry, I didn’t get that” or “I didn’t understand.” I needed to know the magic phrase to get help. There was no clear escape hatch to get to an operator.</p>
<p>Their online AI help agent was no better. The AI help agent was trained on their public documents, and if the answer wasn’t in the documents, it couldn’t help me. Often, it referred me back to their support line, creating an endless cycle of frustration.</p>
<p>That experience was so bad that I went back to the regional bank. They proudly promote that you’ll talk to a real person when calling for help. I would rather lose thousands of dollars over many years than deal with the national bank’s awful AI-based help system.</p>
<p>I’m not the only one who hates talking to AI chatbots for support. The Commonwealth Bank of Australia (CBA) <a href="https://www.abc.net.au/news/2025-08-21/cba-backtracks-on-ai-job-cuts-as-chatbot-lifts-call-volumes/105679492">recently reversed its decision</a> to eliminate jobs after introducing an AI-powered help due to poor customer experience.</p>
<h2 id="augmenting-the-end-user-experience">Augmenting the end user experience</h2>
<p>The problem is that, just like humans, AI makes mistakes. Without appropriate settings, it will insist that it is correct. Where humans and AI differ is that AI is “book smart” but not “street smart,” while people can be both. Humans use a combination of experience and acquired knowledge to make decisions; they learn and evolve, while AI needs to be retrained. The best analogy came from Neil deGrasse Tyson in a <a href="https://www.youtube.com/watch?v=BYizgB2FcAQ">recent interview with Hasan Minhaj</a>: think of AI as Albert Einstein locked in a box. It is a sensory deprivation tank, where he knows nothing of the outside world. Someone asks him random questions, and he responds with his current knowledge. He has no context beyond the knowledge he acquired before going into the box.</p>
<p>As a result, AI struggles with complex decisions. It doesn’t do well when something is outside the expected parameters. In a recent study from <a href="https://arxiv.org/pdf/2412.14161">Carnegie Mellon University and Duke University</a>, AI agents were correct only 30 to 35 percent of the time on multi-step tasks, and the results depended on the model used, with GPT-4o achieving an 8.6% success rate. In a <a href="https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf">recent study</a> by Apple, many popular LRMs (Large Reasoning Models) couldn’t handle puzzles (Tower of Hanoi, Checker Jumping, Block World, and River Crossing) once the number of pieces increased beyond simple examples. Today’s AI still has to undergo many more evolutions to become similar to <a href="https://en.wikipedia.org/wiki/J.A.R.V.I.S.">Tony Stark’s Jarvis</a> in the <abbr title="Marvel Cinematic Universe">MCU</abbr>.</p>
<p>I’m not against using AI. Far from it. However, it’s essential to understand its limitations when designing an end-user experience.</p>
<p>We’ve been very methodical in finding the proper use cases for our AI Assistant. We looked at how AI could augment the user experience. That means the Octopus AI Assistant <strong>is not</strong> intended to replace the current end-user interface. That would result in a sub-par experience, the opposite of augmentation.</p>
<p>The challenge we wanted to solve was surfacing the correct information for the users at the right time. We wanted to let the user ask AI for help and not annoy them with unwanted pop-ups or suggestions. We didn’t want to create Clippy 2.0 in the product.</p>
<p>Knowing that, our four use cases for the AI Assistant are:</p>
<ol>
<li><strong>Deployment Failure Analyzer</strong>: Read the logs of a failed deployment and offer suggestions to fix the issue.</li>
<li><strong>Tier-0 Support</strong>: Provide answers to end-users for common Octopus-related questions. For example, “summarize this deployment process” or “what’s a project?”</li>
<li><strong>Best Practices Analyzer</strong>: Using Octopus Deploy’s strong opinions, review the user’s instance to find areas for improvement.</li>
<li><strong>Prompt-Based Project Creation</strong>: Using templates provided by Octopus Deploy, create a new project to deploy to specific deployment targets.</li>
</ol>
<p>Interestingly, you don’t need AI for the first three items. I can take a deployment failure, do a Google search, and likely produce similar results. Or, I can use our Octopus linting tool, <a href="https://octopus.com/blog/octolint-best-practices">Octolint</a>, for best practices. AI short-cuts all of that by collating the information and surfacing it to the user. It enables self-service for the end user.</p>
<p>But just as necessary, if the AI assistant can’t help, users can still ask their DevOps or Platform Engineers for help.</p>
<p>That is very different from using AI in hiring or AI-based help agents. They are replacement end-user interfaces. They don’t augment the user experience. Instead, they act as pseudo-gatekeepers in front of the hiring managers and support staff. They focus only on reducing the load for the beneficiary users, most likely as a way for companies to cut costs or keep demand for additional headcount down. Unless you know someone at the hiring company or the magic phrase for the AI agent-based help, there are no alternatives.</p>
<p>But end users hate that experience. I believe that is one of the main reasons why <a href="https://www.ibm.com/thought-leadership/institute-business-value/en-us/c-suite-study/ceo">IBM found</a> that only 25% of AI initiatives have delivered the expected ROI over the past few years.</p>
<h2 id="considerations-for-the-end-user-experience">Considerations for the end user experience</h2>
<p>When designing the <a href="https://octopus.com/use-case/ai-assistant">Octopus AI assistant</a>, we started with multiple questions about augmenting the end-user experience. We didn’t want to “sprinkle AI” into the product and claim we had an AI strategy.</p>
<ol>
<li>What problem is the AI feature attempting to solve for the end user?</li>
<li>What is the fallback when the AI feature encounters an unknown use case?</li>
<li>What is an acceptable level of accuracy for the AI feature?</li>
<li>If the response is wrong, what is the escalation process for the end user?</li>
<li>How will the functionality be discovered?</li>
</ol>
<p>The answers for the deployment failure functionality of the AI Assistant are:</p>
<ol>
<li>Often, failures result from an incorrect configuration, a transient error, a bug in the script, a permissions issue, or some other common problem. In many cases, the cause is outside the direct control of Octopus. Surface the information to the user to enable them to self-serve the fix and decrease the recovery time.</li>
<li>Provide a generic answer and encourage the user to contact Octopus Support or their internal experts.</li>
<li>Reasonable accuracy is expected. Various conditions outside the control of Octopus Deploy can cause errors. Provide multiple suggestions using publicly available documentation. If none work, encourage the user to escalate to a human.</li>
<li>If the response doesn’t help, provide a link to Octopus Support or to contact their internal experts. In either case, they will escalate to a human.</li>
<li>When navigating to a failed deployment or runbook run, the Octopus AI Assistant will provide a suggestion that the user can click on to get the answer.</li>
</ol>
<p>The focus has been “How can we take what we have and make it better?”, not “How can we ensure that Platform or DevOps engineers are never bothered again?”</p>
<h2 id="conclusion">Conclusion</h2>
<p>When an AI feature has a beneficiary user and end user, focus on providing a fantastic experience for the end user. Augment the end-user experience. But assume that at some point the AI will be incorrect (just like a person is incorrect), and offer a clear escalation path. Despite the many advances in AI, experienced people can handle complex scenarios much better. When the end-user isn’t considered, and the only focus is “improving the bottom line,” it creates an inferior replacement for an existing experience. End users will only put up with so much before they decide to change.</p>]]></content>
</entry>
<entry>
<title>How organizations measure Platform Engineering</title>
<link href="https://octopus.com/blog/how-organizations-measure-platform-engineering" />
<id>https://octopus.com/blog/how-organizations-measure-platform-engineering</id>
<published>2025-09-02T00:00:00.000Z</published>
<updated>2025-09-02T00:00:00.000Z</updated>
<summary>One of the areas we explored in the Platform Engineering Snapshot was how organizations measure their internal developer platforms, with results that varied from technical measures to not collecting any metrics at all.</summary>
<author>
<name>Steve Fenton, Octopus Deploy</name>
</author>
<content type="html"><![CDATA[<p>Our Platform Engineering Snapshot is coming soon, containing insights, strategies, and real-world data on how organizations adopt and succeed with Platform Engineering. We’ll also launch a survey to deepen our understanding of the patterns, challenges, and future direction of successful platform building.</p>
<p>One of the areas we explored in the Platform Engineering Snapshot was how organizations measure their internal developer platforms, with results that varied from technical measures to not collecting any metrics at all.</p>
<p>This post looks at which metrics were popular and how you can use them to structure your measurement approach to avoid the zero-metric illusion.</p>
<h2 id="measurement-is-essential">Measurement is essential</h2>
<p>At the start of a <a href="https://octopus.com/devops/platform-engineering/">Platform Engineering</a> initiative, the immediate problems take precedence, and measurement becomes an afterthought. This leaves platform teams without a pre-platform baseline, which makes it challenging to demonstrate the platform’s impact.</p>
<p>The lack of measurement also increases the risk of platforms adding features that don’t align with the organization’s core motivations. For instance, a platform team might focus on standardization, only to discover that the primary driver for investing in Platform Engineering was to improve developer experience.</p>
<p>Platforms often fail not because they are inherently bad, but because they target the wrong personas and attempt to solve the wrong problems.</p>
<p>Even successful platforms struggle with ongoing justification and optimization when they lack clear goals, a robust measurement system that reflects those goals, and baseline data. The platform team may find that leadership challenges continued investment if they can’t show the platform’s value.</p>
<p>Therefore, it is crucial to measure platform performance against its established goals. But how are organizations currently approaching this?</p>
<h2 id="what-organizations-measure">What organizations measure</h2>
<p>The survey found organizations focused on 3 key areas for measurement: software delivery, operational performance, and user experience. The variety of metrics likely reflects the variety of contexts that platforms can assist with, though it’s also evidence that metric systems are skewed to <a href="https://octopus.com/blog/productivity-delusion">the elusive <em>productivity problem</em></a> rather than true developer experience and engagement.</p>
<h3 id="software-delivery-metrics">Software delivery metrics</h3>
<p>The software delivery metrics assess the speed, quality, and throughput of development teams using the platform. They often align with DORA measurements, highlighting how platforms are seen as a route to shipping software more often and at higher quality. The metrics encompass throughput (e.g., deployment frequency, build time, features delivered) and stability (e.g., change failure rate, build success rate).</p>
<ul>
<li>Deployment frequency</li>
<li>Deployment times</li>
<li>Change failure rate</li>
<li>Recovery time</li>
<li>Build success rate</li>
<li>Build time</li>
<li>Features delivered</li>
</ul>
<p>While valuable for understanding delivery performance, these metrics primarily reflect downstream outcomes rather than the platform’s direct contribution to developer productivity. They are most effective when measured before and after platform adoption to demonstrate improvement.</p>
<h3 id="operational-performance-metrics">Operational performance metrics</h3>
<p>Metrics for operational performance track the cost, performance, and efficiency of applications that use the platform. They combine traditional infrastructure monitoring (e.g., reliability, error rates, system performance) with resource optimization (e.g., usage efficiency, cost management). Project count is a basic adoption indicator, but it lacks the context of the addressable market size.</p>
<ul>
<li>Reliability</li>
<li>Error rates</li>
<li>System performance</li>
<li>Resource usage</li>
<li>Infrastructure cost</li>
<li>Project count</li>
</ul>
<p>These metrics are crucial for demonstrating that the platform is operationally sound and cost-effective, but they don’t necessarily indicate developer value or ease of use.</p>
<h3 id="user-experience-metrics">User experience metrics</h3>
<p>These metrics directly measure how developers and teams perceive and interact with the platform, treating internal developers as customers. They focus on satisfaction and onboarding journeys. <a href="https://octopus.com/devops/metrics/platform-satisfaction/">Net promoter score (NPS)</a> offers benchmarkable sentiment data, while user satisfaction provides broader feedback. Onboarding time is critical as it represents the first impression and adoption barrier.</p>
<ul>
<li>User satisfaction</li>
<li>Net promoter score (NPS)</li>
<li>Onboarding time</li>
</ul>
<p>These metrics are vital for platforms run as products with optional adoption, indicating whether the platform is compelling enough for developers to choose and continue using voluntarily.</p>
<figure><p><img src="/blog/img/how-organizations-measure-platform-engineering/success-metrics.png" alt="Popular metrics include deployment frequency, reliability, build success rate, deployment times, and user satisfaction"></p></figure>
<p>Most organizations prioritize technical and delivery metrics, with fewer focusing on user experience or business outcomes. This suggests a risk that platform teams are measuring what’s easy to collect rather than what truly demonstrates business value. The heavy emphasis on technical metrics indicates that many organizations still measure platforms like infrastructure rather than products serving internal customers.</p>
<h2 id="how-measurement-improves-platform-performance">How measurement improves platform performance</h2>
<p>For metrics to improve platform performance, you need to measure multiple dimensions, not just one. Internal developer platforms offer many benefits, so the measurement system should cover all platform goals.</p>
<p>We found organizations that measure more dimensions were more likely to be successful. A single metric gives you a one-in-three chance of creating a successful platform, while 2 metrics make it 50/50. Organizations measuring 6 or more metrics were most likely to be successful.</p>
<p>Platform sponsors may expect you to deliver technical reliability, boost developer productivity, offer a positive user experience, manage costs, encourage adoption, and align with business objectives. Relying on just a handful of metrics cannot adequately capture the interplay of all these dimensions.</p>
<figure><p><img src="/blog/img/how-organizations-measure-platform-engineering/success-by-metric-count.png" alt="With a single metric, only a third of platforms achieve their goals, but platforms with 6 or more metrics have a 75% success rate"></p></figure>
<h2 id="breaking-the-success-illusion">Breaking the success illusion</h2>
<p>The data shows a zero-metric high-success effect. This occurs when organizations that don’t collect any concrete measures of the Platform Engineering effort report high success rates. This phenomenon comes from two very different situations masquerading as the same outcome.</p>
<p>There may be easy and obvious success criteria the platform can address, such as a critical and evident problem where success is undeniable without formal measurement. Where the effectiveness is apparent, formal measurements may be unnecessary. A more likely explanation is that there’s an illusion of success caused by a lack of measurement.</p>
<p>Without concrete metrics, platform teams can focus on outputs (e.g., features built, no outages) rather than actual outcomes (e.g., increased developer productivity, achievement of business goals). Stakeholders may mistake activity for impact, especially if the team is busy and there are no apparent failures.</p>
<p>This also helps explain the dramatic increase in success rates observed when moving from one metric to three or more. Teams relying on a single metric might choose one that doesn’t capture holistic success, leading to a false sense of achievement. However, when multiple dimensions are measured, maintaining illusions becomes difficult. For instance, you can’t claim unqualified success if deployment frequency is high but Net Promoter Score (NPS) is low.</p>
<p>The data also suggests a dangerous middle ground where minimal measurement can create more problems than no measurement. It provides false confidence without the comprehensive feedback necessary for genuine improvement.</p>
<figure><p><img src="/blog/img/how-organizations-measure-platform-engineering/success-illusion.png" alt="When organizations don't measure their Platform Engineering initiative, they operate under the illusion of success"></p></figure>
<h2 id="using-monk-metrics">Using MONK metrics</h2>
<p><a href="https://octopus.com/devops/metrics/monk-metrics/">MONK metrics</a> offer a balanced approach to measuring Platform Engineering success, combining external validation and internal alignment. This framework is more adaptable than purely technical metrics, yet still provides a standardized basis for comparison and improvement.</p>
<p>The MONK metrics are:</p>
<ul>
<li>Market share</li>
<li>Onboarding times</li>
<li>Net Promoter Score (NPS)</li>
<li>Key customer metrics</li>
</ul>
<p>The first three, market share, onboarding times, and NPS, form a benchmarkable trio. These metrics allow for comparison against industry standards and are broadly applicable to Platform Engineering initiatives, serving as a starting point to understand high performance in other organizations. For example, if your onboarding times are slower than those of your industry peers, you can investigate their methods and implement similar improvements.</p>
<p>Including “key customer metrics” is crucial for preventing the measurement framework from losing touch with business realities. Organizations kick off Platform Engineering to solve specific organizational problems, but too often success is measured using generic technical metrics that don’t reflect the original investment’s purpose. Instead, it’s essential to translate the motivations behind adopting Platform Engineering into key customer metrics so you can track progress towards the platform’s objectives effectively.</p>
<p>MONK metrics address the common issue of platform teams optimizing for metrics that appear favorable but don’t contribute to actual business value. For instance, if you introduce Platform Engineering to reduce time-to-market, measures of infrastructure uptime or build failures fail to track tangible improvements in delivery flow.</p>
<h2 id="lack-of-measurement-makes-your-platform-vulnerable">Lack of measurement makes your platform vulnerable</h2>
<p>Without measurement, platform teams end up in a vulnerable position. They can’t demonstrate their value when budget discussions come up, they can’t identify what’s working versus what needs improvement, and they can’t make compelling cases for continued investment or expansion.</p>
<p>The absence of data also hinders course correction. Platform Engineering involves complex trade-offs between developer experience, operational efficiency, security, and cost. Without clear metrics, teams may prioritize visible or politically expedient solutions over those that deliver actual value.</p>
<p>MONK metrics offer a practical solution. They are accessible, enabling organizations to begin measuring with minimal tooling or data infrastructure. You can gather basic market share data through surveys, track onboarding times simply, and get NPS using lightweight feedback tools.</p>
<p>Their benchmarkable nature also helps address the common “what’s good enough?” dilemma. Instead of abstractly debating whether a two-day onboarding time is acceptable, teams can compare their performance to similar organizations.</p>
<p>Happy deployments!</p>]]></content>
</entry>
<entry>
<title>Platform Engineering and woodworking</title>
<link href="https://octopus.com/blog/platform-engineering-and-woodworking" />
<id>https://octopus.com/blog/platform-engineering-and-woodworking</id>
<published>2025-08-26T00:00:00.000Z</published>
<updated>2025-08-26T00:00:00.000Z</updated>
<summary>What is something that woodworkers, blacksmiths, and programmers have in common? One answer is that practitioners of these crafts have the unique ability to make their own tools.</summary>
<author>
<name>Paul Stovell, Octopus Deploy</name>
</author>
<content type="html"><![CDATA[<p>What is something that woodworkers, blacksmiths, and programmers have in common? One answer is that practitioners of these crafts have the unique ability to make their own tools.</p>
<p>In fact, making your own tools is so essential to these crafts that tool-making is part of their training and tradition. Hammers, tongs, and chisels are among the first items an apprentice blacksmith learns to make. The first projects for an amateur woodworker are often to build a workbench, a circular saw guide, sawhorses, or a cross-cut sled. It’s not uncommon to detour midway through a production project into making a tool to assist with the project, and that detour is treated as a natural part of the work.</p>
<p>Programmers are, perhaps, the ultimate tool-makers. Every programmer has a collection of scripts and utility programs they’ve built to make small tasks more productive. Octopus Deploy started life as a collection of automation scripts I would take from project to project in my consulting days. Tool-making isn’t just a way to gain productivity; it’s also immensely satisfying. It gives us extensive control over our work environment: unlike most professions, we don’t need to put up with bad tools, because we can always make our own.</p>
<h2 id="when-making-the-tool-consumes-the-project">When making the tool consumes the project</h2>
<p>Tool-making is never free, however. In woodworking or blacksmithing, it consumes raw materials and time. For programmers, we don’t have a raw material constraint (except perhaps coffee, the raw material that programmers turn into working code!), but we do have a time constraint. And the complexity of software and the optimistic nature of programmers mean that we often underestimate how difficult a particular tool can be to make.</p>
<p>For example, a programmer may need to complete a task that takes 5 minutes, like resetting the local test data they use in the application they are working on. And they might perform that task 5 or 6 times a month. So it’s quite common for good programmers to take an hour or so to create a script or a small utility program to automate the task. Even if the ROI calculation means the time they save automating isn’t going to be paid back for a long time, it can still be a smart thing to do if it results in less context switching, better accuracy, or is simply more satisfying.</p>
<p>As organizations scale their Continuous Delivery practices, this craft approach can lead to an unexpected problem: the tool-building starts consuming more resources than the actual work. Teams that began by automating a few deployment scenarios find themselves spending months building elaborate internal platforms to handle every edge case across dozens of applications. A deployment pipeline that started as a simple script to push a web application to production grows into a complex system supporting microservices, legacy mainframes, mobile apps, and that one critical application written in a programming language nobody wants to touch.</p>
<p>The irony is striking. The organization wants to create valuable software for its customers, but engineering teams are spending more and more time maintaining their custom-built tooling instead of creating features. Like a woodworker who spends more time perfecting their jigs than crafting furniture, these teams have lost sight of what they’re really trying to accomplish.</p>
<h2 id="platform-engineering-and-developer-experience">Platform Engineering and developer experience</h2>
<p>Platform Engineering is downstream of developer experience, even when it’s not the primary motivation for building an internal developer platform. Removing barriers to let developers improve the flow of value benefits the whole organization. Getting software into production and running that software in production is hard, and it wastes time for each team to reinvent solutions to this from first principles.</p>
<p>The best platform teams never lose sight of that developer experience. They build thin platforms that let the innovation of underlying tools accelerate their pace, so their platform can remain market-leading over the long term and at a lower cost than chasing that innovation with a custom-built platform. We often refer to internal developer platforms as the glue. What if we changed that to think of these platforms as the minimal jig that helps fit the professional-grade saws and drills to the shape the organization needs?</p>
<p>The platform, like the jig, should never become the build. When it does, it gets in the way and slows developers down instead of being their force multiplier.</p>
<h2 id="platform-hub-shifting-complexity-away-from-platform-engineers">Platform Hub: Shifting complexity away from Platform Engineers</h2>
<p>We built Platform Hub to lighten the load for platform builders. When they need to support Continuous Delivery at scale, Platform Hub tackles the complexity of template management and policy guardrails. The traditional approach is to create a template and clone it across all your applications, then face the nightmare of keeping hundreds of copies in sync as requirements evolve.</p>
<p>Platform Hub solves the cloning problem by keeping a single versioned template connected to all the places you use it. You can roll out non-breaking changes automatically and track the progress of significant updates. You can even require crucial steps like security scanning using policies, so you know all your deployments meet your standards.</p>
<p>By handling this complexity below the platform level, your teams can focus on what matters: shaping the tools to fit your organization’s specific needs while letting professional-grade tooling handle the heavy lifting. Less time building jigs; more time creating value.</p>
<p>Happy deployments!</p>]]></content>
</entry>
<entry>
<title>Migrating Octopus projects to Terraform with Octoterra</title>
<link href="https://octopus.com/blog/importing-terraform-projects" />
<id>https://octopus.com/blog/importing-terraform-projects</id>
<published>2025-08-25T00:00:00.000Z</published>
<updated>2025-08-25T00:00:00.000Z</updated>
<summary>Learn how to bring existing Octopus projects under Terraform management with Octoterra.</summary>
<author>
<name>Matthew Casperson, Octopus Deploy</name>
</author>
<content type="html"><![CDATA[<p>With the release of <a href="https://registry.terraform.io/providers/OctopusDeploy/octopusdeploy/latest/docs">version 1 of the Octopus Terraform Provider</a>, DevOps teams can now manage their Octopus resources using a fully supported Terraform based Infrastructure as Code (IaC) solution.</p>
<p>New teams populating their first Octopus spaces can take advantage of the Terraform provider from the outset. However, it is arguably established teams with existing Octopus projects that will benefit the most from IaC capabilities. But how do you migrate existing Octopus projects to Terraform?</p>
<p>In this post we will cover how the Octoterra tool can bring existing projects under Terraform management.</p>
<h2 id="what-is-octoterra">What is Octoterra?</h2>
<p><a href="https://github.com/OctopusSolutionsEngineering/OctopusTerraformExport">Octoterra</a> is a CLI tool that scans an existing Octopus space and converts the projects, runbooks, and other resources into Terraform configuration files.</p>
<p>A new feature in Octoterra generates Bash and PowerShell scripts to reimport the existing resources into the state file of the exported Terraform configuration. This allows teams to:</p>
<ol>
<li>Export existing Octopus resources into Terraform configuration files.</li>
<li>Import the configuration of existing resources into the Terraform state file associated with the exported configuration.</li>
<li>Manage the existing resources using Terraform going forward.</li>
</ol>
<h2 id="exporting-an-existing-octopus-project">Exporting an existing Octopus project</h2>
<p>Octoterra is distributed as self-contained binaries for Windows, Linux, and macOS, available from the <a href="https://github.com/OctopusSolutionsEngineering/OctopusTerraformExport/releases">GitHub releases page</a>.</p>
<p>You can also run Octoterra as a Docker container, which is often the simplest way to get started. The following command will run Octoterra in a Docker container, exporting the configuration of an existing Octopus project into the current directory:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="bash"><code><span class="line"><span style="color:#795E26">docker</span><span style="color:#A31515"> run</span><span style="color:#0000FF"> -v</span><span style="color:#001080"> $PWD</span><span style="color:#A31515">:/tmp/octoexport</span><span style="color:#0000FF"> --rm</span><span style="color:#A31515"> ghcr.io/octopussolutionsengineering/octoterra</span><span style="color:#EE0000"> \</span></span>
<span class="line"><span style="color:#000000">-url </span><span style="color:#A31515">https://instance.octopus.app</span><span style="color:#EE0000"> \</span></span>
<span class="line"><span style="color:#000000">-space </span><span style="color:#A31515">Spaces-##</span><span style="color:#EE0000"> \</span></span>
<span class="line"><span style="color:#000000">-apiKey </span><span style="color:#A31515">API-xxxx</span><span style="color:#EE0000"> \</span></span>
<span class="line"><span style="color:#000000">-projectName </span><span style="color:#A31515">"My Project"</span><span style="color:#EE0000"> \</span></span>
<span class="line"><span style="color:#000000">-lookupProjectDependencies </span><span style="color:#EE0000">\</span></span>
<span class="line"><span style="color:#000000">-generateImportScripts </span><span style="color:#EE0000">\</span></span>
<span class="line"><span style="color:#000000">-dest </span><span style="color:#A31515">/tmp/octoexport</span></span></code></pre>
<p>The Octoterra arguments from this example are:</p>
<ul>
<li><code>-url</code>: The URL of the Octopus server.</li>
<li><code>-space</code>: The ID of the Octopus space containing the project.</li>
<li><code>-apiKey</code>: The API key to authenticate with the Octopus server.</li>
<li><code>-projectName</code>: The name of the project to export.</li>
<li><code>-lookupProjectDependencies</code>: Enables the use of data sources to look up the IDs of space-level resources such as environments, lifecycles, and variables. This exports the project as a self-contained Terraform configuration referencing existing space-level resources by name.</li>
<li><code>-generateImportScripts</code>: Generates Bash and PowerShell scripts to locate and import the existing resources into the Terraform state file.</li>
<li><code>-dest</code>: The destination directory to write the exported Terraform configuration files and import scripts. When using a Docker container, this directory must be mounted as a volume to allow the container to write files to the host filesystem.</li>
</ul>
<p>For this post we’ll export a simple project called “My Project” that has a single deployment step running a PowerShell script and a channel called HotFix. The project also has a variable scoped to an environment and a channel.</p>
<p><img src="/blog/_astro/process-steps.ty0Y7JtZ_1z4GNq.webp" alt="Octopus deployment project steps screenshot" loading="lazy" decoding="async" fetchpriority="auto" width="3840" height="2150"></p>
<p><img src="/blog/_astro/project-variables.BZYnDYfU_iqGM7.webp" alt="Octopus project variables screenshot" loading="lazy" decoding="async" fetchpriority="auto" width="3840" height="2150"></p>
<p>Once the Terraform configuration files have been generated, run the following command in the <code>space_population</code> directory to initialize the directory containing the exported Terraform configuration files:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="bash"><code><span class="line"><span style="color:#795E26">terraform</span><span style="color:#A31515"> init</span></span></code></pre>
<p>Because we have supplied the <code>-generateImportScripts</code> argument, Octoterra will generate Bash and PowerShell scripts to reimport the project into the Terraform state file. All the script file names start with the prefix <code>import_</code>. In addition, two scripts called <code>import.sh</code> and <code>import.ps1</code> are generated which run all the other import scripts, providing a convenient way to import all the resources in the exported configuration.</p>
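<p>For reference, each generated import script essentially looks up the ID of an existing resource and passes it to <code>terraform import</code>. The following is a minimal hand-written sketch of the same idea, using the project resource address and ID that appear in the plan output later in this post; the generated scripts perform the equivalent lookup against the Octopus API for you:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="bash"><code><span class="line"><span># Manually import the existing Octopus project into the Terraform state.</span></span>
<span class="line"><span># "octopusdeploy_project.project_my_project" is the resource address from the</span></span>
<span class="line"><span># exported configuration; "Projects-9441" is the ID of the existing project.</span></span>
<span class="line"><span>terraform import \</span></span>
<span class="line"><span>  -var=octopus_apikey=API-xxxx \</span></span>
<span class="line"><span>  -var=octopus_server=https://instance.octopus.app \</span></span>
<span class="line"><span>  -var=octopus_space_id=Spaces-## \</span></span>
<span class="line"><span>  octopusdeploy_project.project_my_project Projects-9441</span></span></code></pre>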
<p>If running on Linux or macOS, the scripts must be made executable before they can be run:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="bash"><code><span class="line"><span style="color:#795E26">chmod</span><span style="color:#A31515"> +x</span><span style="color:#0000FF"> *</span><span style="color:#A31515">.sh</span></span></code></pre>
<p>The import scripts can then be run to import the existing resources into the Terraform state file. The following command runs the Bash script to import the existing resources:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="bash"><code><span class="line"><span style="color:#795E26">./import.sh</span><span style="color:#A31515"> API-xxx</span><span style="color:#A31515"> https://instance.octopus.app</span><span style="color:#A31515"> Spaces-##</span></span></code></pre>
<p>If running on Windows, the PowerShell script can be run using the following command:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="powershell"><code><span class="line"><span style="color:#000000">.\import.ps1 API-xxx https://instance.octopus.app Spaces-#</span><span style="color:#008000">#</span></span></code></pre>
<p>You may need to allow PowerShell to run unsigned scripts by setting the <a href="https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_execution_policies?view=powershell-7.5">execution policy</a>.</p>
<p>The import scripts first attempt to find the matching resources in the target space by name, and if found, import the resources into the Terraform state file using the <code>terraform import</code> command.</p>
<div class="hint"><p>In this example we have used the default local state. Production environments should use <a href="https://developer.hashicorp.com/terraform/language/state/remote">remote state</a>, such as an S3 bucket or Azure Storage Account, to ensure the Terraform state is stored securely and can be accessed by all team members.</p></div>
<h2 id="checking-the-terraform-state">Checking the Terraform state</h2>
<p>Once the project is imported, you can run <code>terraform plan</code> to see any differences between the generated Terraform configuration files and the imported state:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="bash"><code><span class="line"><span style="color:#795E26">terraform</span><span style="color:#A31515"> plan</span><span style="color:#EE0000"> \</span></span>
<span class="line"><span style="color:#0000FF"> -var=octopus_apikey=API-xxxxx</span><span style="color:#EE0000"> \</span></span>
<span class="line"><span style="color:#0000FF"> -var=octopus_server=https://instance.octopus.app</span><span style="color:#EE0000"> \</span></span>
<span class="line"><span style="color:#0000FF"> -var=octopus_space_id=Spaces-</span><span style="color:#000000">###</span></span></code></pre>
<p>There are <a href="https://github.com/OctopusDeploy/terraform-provider-octopusdeploy/issues">open issues in the Terraform provider</a> at the time of writing that lead to the plan showing differences between the generated configuration and the imported state. For example, the <code>octopusdeploy_variable</code> resources may report that fields like <code>is_editable</code>, <code>is_sensitive</code>, and <code>value</code> will be added, even though these are no-op changes.</p>
<p>Future releases of the Terraform provider and Octoterra will resolve these import differences, but for now, it is expected that the plan will show some differences between the generated configuration and the imported state.</p>
<p>That said, the primary purpose of the import scripts is to allow the exported Terraform configuration to be reapplied to the existing Octopus resources while avoiding errors about resources already existing in the target space.</p>
<p>This is an example of the plan output from my sample project showing the differences:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="plaintext"><code><span class="line"><span>Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:</span></span>
<span class="line"><span> ~ update in-place</span></span>
<span class="line"><span></span></span>
<span class="line"><span>Terraform will perform the following actions:</span></span>
<span class="line"><span></span></span>
<span class="line"><span> # octopusdeploy_project.project_my_project will be updated in-place</span></span>
<span class="line"><span> ~ resource "octopusdeploy_project" "project_my_project" {</span></span>
<span class="line"><span> id = "Projects-9441"</span></span>
<span class="line"><span> name = "My Project"</span></span>
<span class="line"><span> # (18 unchanged attributes hidden)</span></span>
<span class="line"><span></span></span>
<span class="line"><span> + connectivity_policy {</span></span>
<span class="line"><span> + allow_deployments_to_no_targets = true</span></span>
<span class="line"><span> + exclude_unhealthy_targets = false</span></span>
<span class="line"><span> + skip_machine_behavior = "None"</span></span>
<span class="line"><span> + target_roles = []</span></span>
<span class="line"><span> }</span></span>
<span class="line"><span></span></span>
<span class="line"><span> + versioning_strategy {</span></span>
<span class="line"><span> + template = "#{Octopus.Version.LastMajor}.#{Octopus.Version.LastMinor}.#{Octopus.Version.NextPatch}"</span></span>
<span class="line"><span> }</span></span>
<span class="line"><span> }</span></span>
<span class="line"><span></span></span>
<span class="line"><span> # octopusdeploy_variable.my_project_project_test_variable_1 will be updated in-place</span></span>
<span class="line"><span> ~ resource "octopusdeploy_variable" "my_project_project_test_variable_1" {</span></span>
<span class="line"><span> id = "e1b1bb15-d61e-d241-316d-651e495b46e1"</span></span>
<span class="line"><span> + is_editable = true</span></span>
<span class="line"><span> + is_sensitive = false</span></span>
<span class="line"><span> name = "Project.Test.Variable"</span></span>
<span class="line"><span> + value = "whatever"</span></span>
<span class="line"><span> # (4 unchanged attributes hidden)</span></span>
<span class="line"><span></span></span>
<span class="line"><span> # (1 unchanged block hidden)</span></span>
<span class="line"><span> }</span></span>
<span class="line"><span></span></span>
<span class="line"><span>Plan: 0 to add, 2 to change, 0 to destroy.</span></span>
<span class="line"><span>╷</span></span>
<span class="line"><span>│ Warning: Block Deprecated</span></span>
<span class="line"><span>│ </span></span>
<span class="line"><span>│ with octopusdeploy_project.project_my_project,</span></span>
<span class="line"><span>│ on project_project_my_project.tf line 36, in resource "octopusdeploy_project" "project_my_project":</span></span>
<span class="line"><span>│ 36: resource "octopusdeploy_project" "project_my_project" {</span></span>
<span class="line"><span>│ </span></span>
<span class="line"><span>│ octopusdeploy_project.versioning_strategy is deprecated in favor of resource octopusdeploy_project_versioning_strategy. See</span></span>
<span class="line"><span>│ https://oc.to/deprecation-tfp-project-versioning-strategy for more info and migration guidance.</span></span></code></pre>
<p>It is good practice to run <code>terraform plan</code> after importing the resources to ensure that the Terraform configuration files match the state of the Octopus resources, and to manually review the generated configuration files to confirm they are correct.</p>
<h2 id="backing-up-your-octopus-instance">Backing up your Octopus instance</h2>
<p>Before making any changes to production resources, it is highly recommended that you <a href="https://octopus.com/docs/administration/data/backup-and-restore">back up the Octopus database</a>. While changes to projects are tracked in the Octopus audit log, having a backup of the database allows you to restore the state of your Octopus instance if something goes wrong.</p>
<p>You may also consider testing the export and import process on a test instance of Octopus running a copy of your production data, or on a cloned project. This allows you to verify the export and import process works as expected before applying it to your production projects.</p>
<h2 id="applying-the-terraform-configuration">Applying the Terraform configuration</h2>
<p>Once you are satisfied that the plan does not show any unexpected changes, you can run <code>terraform apply</code> to apply the configuration:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="bash"><code><span class="line"><span style="color:#795E26">terraform</span><span style="color:#A31515"> apply</span><span style="color:#EE0000"> \</span></span>
<span class="line"><span style="color:#0000FF"> -var=octopus_apikey=API-xxxxx</span><span style="color:#EE0000"> \</span></span>
<span class="line"><span style="color:#0000FF"> -var=octopus_server=https://instance.octopus.app</span><span style="color:#EE0000"> \</span></span>
<span class="line"><span style="color:#0000FF"> -var=octopus_space_id=Spaces-</span><span style="color:#000000">###</span></span></code></pre>
<p>This will apply the Terraform configuration back to Octopus. We expect this operation to make no changes to the project, as the exported configuration matches the state of the project in Octopus.</p>
<p>Once the apply operation is complete, the project is managed by Terraform, and you can make any future changes to the project by editing and reapplying the Terraform configuration.</p>
<h2 id="dealing-with-sensitive-variables">Dealing with sensitive variables</h2>
<p>Octoterra reads the state of Octopus projects and variables via the API. The API does not export the values of sensitive variables, so Octoterra cannot include sensitive values in the exported Terraform configuration files.</p>
<p>To manage sensitive variables, you can either:</p>
<ul>
<li>Pass the value of sensitive variables as Terraform variables when running <code>terraform apply</code>, as Octoterra creates Terraform variables to define the values of all exported sensitive variables (see the sketch after this list).</li>
<li>Exclude sensitive variables from the exported configuration and manage them separately in Octopus.</li>
</ul>
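<p>As a sketch of the first option, the sensitive value is supplied at apply time. The variable name below is hypothetical; check the variable definitions Octoterra generated to find the actual name:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="bash"><code><span class="line"><span># "project_test_secret" is a hypothetical variable name; use the name Octoterra</span></span>
<span class="line"><span># generated for your sensitive Octopus variable.</span></span>
<span class="line"><span>terraform apply \</span></span>
<span class="line"><span>  -var=octopus_apikey=API-xxxxx \</span></span>
<span class="line"><span>  -var=octopus_server=https://instance.octopus.app \</span></span>
<span class="line"><span>  -var=octopus_space_id=Spaces-### \</span></span>
<span class="line"><span>  -var=project_test_secret=my-secret-value</span></span></code></pre>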
<p>To exclude variables from the exported configuration, you can use the <code>-excludeProjectVariable</code> argument when running Octoterra. This argument can be passed multiple times to exclude multiple variables from the exported configuration.</p>
<p>For example, if the project had a sensitive variable called <code>Project.Test.Secret</code> that we wished to exclude from the exported configuration, we would run the following command:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="bash"><code><span class="line"><span style="color:#795E26">docker</span><span style="color:#A31515"> run</span><span style="color:#0000FF"> -v</span><span style="color:#001080"> $PWD</span><span style="color:#A31515">:/tmp/octoexport</span><span style="color:#0000FF"> --rm</span><span style="color:#A31515"> ghcr.io/octopussolutionsengineering/octoterra</span><span style="color:#EE0000"> \</span></span>
<span class="line"><span style="color:#0000FF"> -url</span><span style="color:#A31515"> https://instance.octopus.app</span><span style="color:#EE0000"> \</span></span>
<span class="line"><span style="color:#0000FF"> -space</span><span style="color:#A31515"> Spaces-##</span><span style="color:#EE0000"> \</span></span>
<span class="line"><span style="color:#0000FF"> -apiKey</span><span style="color:#A31515"> API-xxxx</span><span style="color:#EE0000"> \</span></span>
<span class="line"><span style="color:#0000FF"> -projectName</span><span style="color:#A31515"> "My Project"</span><span style="color:#EE0000"> \</span></span>
<span class="line"><span style="color:#0000FF"> -lookupProjectDependencies</span><span style="color:#EE0000"> \</span></span>
<span class="line"><span style="color:#0000FF"> -generateImportScripts</span><span style="color:#EE0000"> \</span></span>
<span class="line"><span style="color:#0000FF"> -excludeProjectVariable</span><span style="color:#A31515"> Project.Test.Secret</span><span style="color:#EE0000"> \</span></span>
<span class="line"><span style="color:#0000FF"> -dest</span><span style="color:#A31515"> /tmp/octoexport</span></span></code></pre>
<p>This will exclude the <code>Project.Test.Secret</code> variable from the exported configuration, meaning it is not managed by Terraform, allowing you to manage it separately in the Octopus UI.</p>
<h2 id="making-manual-changes-to-the-exported-configuration">Making manual changes to the exported configuration</h2>
<p>The exported Terraform configuration files can be manually edited to apply any customizations or address any issues you may find. The beauty of Terraform and the Octopus Terraform provider is that the configuration files use an open, documented, and editable format.</p>
<p>You have complete control and ownership of the configuration once it is exported, and are free to make any changes you need.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Customers with existing Octopus projects can now place them under Terraform management using the Octoterra tool. By exporting the projects to their equivalent Terraform configuration, importing the state, and applying the configuration, Octopus projects can be effectively migrated in-place with minimal disruption.</p>]]></content>
</entry>
<entry>
<title>Rebalancing buy versus build With AI</title>
<link href="https://octopus.com/blog/rebalancing-buy-vs-build-with-ai" />
<id>https://octopus.com/blog/rebalancing-buy-vs-build-with-ai</id>
<published>2025-08-22T00:00:00.000Z</published>
<updated>2025-08-22T00:00:00.000Z</updated>
<summary>AI is putting its thumb on the scales of buy vs build. This post details how AI has influenced how we approach buy vs build decisions at Octopus, along with providing a specific example of a recent decision made that was influenced by AI.</summary>
<author>
<name>Andrew Best, Octopus Deploy</name>
</author>
<content type="html"><![CDATA[<p>I’ve currently been using Claude Code a lot in my day to day work. There are a lot of AI hot takes floating around the tech industry right now. One way you can cut through those hot takes is to observe the impact it is having <em>where you are</em>.</p>
<p>At Octopus I am surrounded by some of the very best software engineers in the industry. I value and leverage their opinions continuously in my own work, and they always improve the quality of my execution and my decision-making.</p>
<p>We’ve gone to some effort recently to ensure all of our engineers have an AI tool available to them that they can use reflexively in their day-to-day work. There is likely an entire post to be written on that process, how we’ve approached it, and the impact we are seeing, but I’ll reserve that for another time.</p>
<p>Much of the teeth gnashing and hyperbole around AI tools centers on the fear that they will replace human software engineers: that they are evolving in a way that leads us toward some sort of singularity where AI growth becomes uncontrollable and irreversible, replacing humans and altering the course of history.</p>
<p>But if you observe <em>really good engineers</em> working with AI, you don’t see them despondent. You don’t see them belligerently avoiding the tools, espousing the values of hand-wrought code and human toil. The response spectrum I see goes from humorous observation when the tools occasionally and inevitably go off the rails, through to delight and awe when they create solutions at a fraction of the effort or cost it would have taken previously.</p>
<p>Steve Jobs proposed <em>computers are like a bicycle for the mind</em>. This insight has recently been adapted as <em>AI is like a motorcycle for the mind</em>. This is what I see when great engineers get their hands on these tools and start using them in anger. Once engineers start building an intuition for where the tools are strongest, and how they can accelerate or completely eliminate toil and tedious tasks, or make tasks that were previously too costly now surmountable and achievable, you see them eagerly leaning into these tools, realizing how much <em>better</em> they make the craft of building software.</p>
<h2 id="buy-versus-build">Buy versus build</h2>
<p>We’ve had a policy in place at Octopus for some time for making buy vs build decisions.</p>
<blockquote>
<p>If solving a problem is NOT core to our business, we are strongly inclined to buy a solution rather than spend the time and effort building one.</p>
</blockquote>
<p>However, there is a caveat to this policy. Often we run into problems of this shape, but <em>there are no good options that exist in the market</em> to buy.</p>
<p>This might mean that the options that exist don’t solve the problem well. Or they don’t solve it well in our context. Or they are not economically feasible - it might be a small problem to us, but solutions are priced for enterprise-grade versions of the problem.</p>
<p>This leaves us in a tricky situation. Either we overpay to solve the problem, we try to ignore the pain it causes, or we spend quite a bit of time hacking together a solution, which also won’t usually have a positive cost vs value outcome.</p>
<p>I ran into this problem recently.</p>
<p>I’m heading up the effort to bring AI capabilities into Octopus, helping our teams solve problems that were previously unsolvable using the same LLM technology our engineers are enjoying in their daily work. We are hiring by the way: <a href="https://octopus.com/company/careers">octopus.com/careers</a>.</p>
<p>Building solutions that integrate LLMs, typically called agents, requires new tooling to support new feedback loops that aren’t found in existing software systems. Core to these new feedback loops are evals.</p>
<p>Evals are human-driven, LLM-assisted feedback loops that help you assess and manage the quality of your agent’s outputs. They are your unit tests for non-deterministic workloads. They give you confidence that your agent is delivering quality outputs across a broad range of inputs. If you want to learn more about evals, go read <a href="https://hamel.dev/">Hamel Husain’s</a> posts from the past two years - he is the authority on evals.</p>
<p>There are solutions to this problem in the market at the moment. I’ve evaluated a handful of them, including <a href="https://www.langchain.com/langsmith">LangSmith</a> and <a href="https://www.braintrust.dev/">Braintrust</a>. All of them have a number of drawbacks <em>in my particular context</em>. Common to these were:</p>
<ul>
<li>You only get a first-class experience with them if you are using Python (or occasionally TypeScript)</li>
<li>They have opinionated workflows for iterating on prompt and agent development that don’t take first-class function calling into account. They want to be walled gardens for agent development</li>
</ul>
<p>These problems are both showstoppers for me. I’m developing agents with .NET, using <a href="https://github.com/microsoft/semantic-kernel">Semantic Kernel</a>. And the agent is deeply integrated with Octopus’s core domain, giving the agent access to fine-grained capabilities that help it fulfil its duties.</p>
<p>What I need from an eval tool is:</p>
<ul>
<li>It must work purely by ingesting OpenTelemetry traces, without additional instrumentation</li>
<li>It must give me a way to create eval sessions of arbitrary size driven from existing common automated test tooling (like XUnit or Jest)</li>
<li>It must exercise the agent as-built, with full access to the functions it can call within Octopus to fulfill its responsibilities.</li>
<li>It must provide me a productive interface for human assessment of trace sessions that emphasizes readability and rapid review</li>
</ul>
<p>Later, I’ll want to graduate into managing golden datasets of labelled traces and automating evals leveraging LLM-as-judge. But for now, the requirements above are what matter.</p>
<p>Previously I’d be in a bind at this point. I’d buy one of these tools and deal with the shortcomings. Or I’d do something cheap and cheerful, like export traces to disk, parse them into CSV, and do evals in spreadsheets. Or I’d spend a week or so building the starts of a tool that would suit my needs.</p>
<p>AI has put its thumb on the scales of the buy vs build conversation though. It is now <em>much more viable</em> to build bespoke tools to solve problems than it was previously. What previously might have taken days or weeks, now typically takes minutes or hours.</p>
<h2 id="building-an-ai-eval-tool">Building an AI eval tool</h2>
<p>I chose to build my own AI eval tool for this reason - it now seemed like a viable path to take. I’d build it primarily via Claude Code, and tweak and finesse it as I went along.</p>
<p>I’d already scaffolded some automated tests around our agent that drove it with a set of representative sample inputs gathered from production scenarios, and the agent emitted OpenTelemetry traces that contained the details we would want to evaluate. I used XUnit for these tests, but the approach is technology-agnostic; you could do the same with JUnit or Jest.</p>
<p>I also added an <a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/fileexporter">OpenTelemetry file exporter</a> to our local otel collector to ensure I could capture the traces on disk and feed them to my eval tool.</p>
<p>One of the most important things you can do when building tools with AI is to use a tech stack you are comfortable with, and often one that aligns with where you will use the tool. In my case I chose to use <a href="https://dotnet.microsoft.com/en-us/apps/aspnet/web-apps/blazor">Blazor</a>, a lightweight web framework for .NET. The benefit of this is that the tool could live alongside the agent and its automated tests in our solution, and could be developed within and executed by the same IDE environment.</p>
<p>With those pieces in place, the solution came together over a two-hour session.</p>
<p>One of the keys to success with AI tools is managing context. The best way to do this is to break work down into chunks.</p>
<p>One of the first jobs I needed to do was to select and upload an OpenTelemetry jsonl file, then read it into memory and parse it into a data structure I would use for evaluation and storage. I broke this down into the following steps:</p>
<ul>
<li>Build a UI to select and upload a file, and log its contents on the backend</li>
<li>Parse the uploaded file into a particular data structure, called Sessions. I gave Claude the input file structure via example, and defined the target model by describing it in words, and how the inputs would map to the model. It would then write each individual session to disk</li>
<li>Display a list of the uploaded sessions in the UI, and allow the user to click on one and open up a session details page, using the session ID as a route. Stub out the details page</li>
</ul>
<p>By using Claude’s plan mode through each significant increment, adjusting the plan if required, then <code>Shift+Tab</code>-ing into auto apply mode, I quickly built out an application that satisfied my requirements.</p>
<h2 id="scalable-solutions">Scalable solutions</h2>
<p>I previously mentioned that I’ll eventually need a more sophisticated tool - something that might start looking like a platform for managing datasets, ingesting production telemetry, and automating the assessment of production agent outputs using LLM-as-judge. This won’t be on day 1 or 2. It might not even be on day 50. But it will likely be the case by the time day 300 rolls around.</p>
<p>At this point the tool will become critical infrastructure, and need ongoing love and attention to ensure it fulfills its purpose in our delivery pipeline.</p>
<p>At that point I’ll be revisiting the buy vs build decision - it’s likely I’ll go to a significant amount of effort to find a production-grade solution that will provide a stable and reliable platform to underpin the quality of our AI features. One that does not weigh down our team with an ongoing platform maintenance burden.</p>
<p>But for now? I’m unblocked and can move forward confidently without burning more cycles trying to fit square pegs in round holes.</p>
<h2 id="conclusion">Conclusion</h2>
<p>There are a few great things about vibe-coding your own tools:</p>
<ul>
<li>You can get results in less time than it would take you to discover tools in the market that solve your problem, set up trials of them, and integrate and test their effectiveness.</li>
<li>You can tailor them to your context and needs. No more fitting your problem to their solution.</li>
<li>Once you are done with it, or your needs exceed what you’re willing to build into the tool, you can throw it away. You’ve only spent a handful of hours on it, not days or weeks. It is not your precious. There is no sunk cost. You can discard it when you’re done.</li>
</ul>]]></content>
</entry>
<entry>
<title>Supply chain security with GitHub Actions and Octopus Deploy</title>
<link href="https://octopus.com/blog/supply-chain-security-with-github-and-octopus-deploy" />
<id>https://octopus.com/blog/supply-chain-security-with-github-and-octopus-deploy</id>
<published>2025-08-12T00:00:00.000Z</published>
<updated>2025-08-12T00:00:00.000Z</updated>
<summary>Learn how to enact supply chain security using GitHub Actions and Octopus Deploy.</summary>
<author>
<name>Bob Walker, Octopus Deploy</name>
</author>
<content type="html"><![CDATA[<p>In May 2021, in response to a series of high-profile cyber attacks, <a href="https://www.federalregister.gov/documents/2021/05/17/2021-10460/improving-the-nations-cybersecurity">President Biden issued Executive Order 14028</a> to improve the nation’s cybersecurity. A big reason for this was the <a href="https://www.gao.gov/blog/solarwinds-cyberattack-demands-significant-federal-and-private-sector-response-infographic">SolarWinds supply chain attack</a> in which Russian state-sponsored hackers compromised the Orion software platform. Section 4 specifically addressed that attack:</p>
<blockquote>
<p>The development of commercial software often lacks transparency, sufficient focus on the ability of the software to resist attack, and adequate controls to prevent tampering by malicious actors. There is a pressing need to implement more rigorous and predictable mechanisms for ensuring that products function securely, and as intended.</p>
</blockquote>
<p>Since 2021, we’ve seen a lot of new functionality to enable supply chain security. In this blog post, I will walk you through improving your supply chain security by leveraging GitHub, GitHub Actions, and Octopus Deploy.</p>
<h2 id="disclaimer">Disclaimer</h2>
<p>I’m under no illusions that this article will be the be-all and end-all solution for supply chain security. The solutions presented here will not cover every avenue of attack. They use new functionality added to both platforms since 2021, pre-existing functionality, and common-sense configurations.</p>
<p>A great third-party resource is Supply-chain Levels for Software Artifacts, or <a href="https://slsa.dev/">SLSA</a> (pronounced “salsa”). For additional information, consult your CISO (Chief Information Security Officer), security team, or other company experts. As with anything security-related, check your company’s policies to ensure compliance.</p>
<h2 id="nomenclature">Nomenclature</h2>
<p>This article will use relatively new terms in software deployment pipelines. I’ve included the definitions below to make it easier to follow along.</p>
<ul>
<li><strong>SBOM</strong> - Software Bill of Materials - a list of all the third-party libraries (and their third-party libraries) used to create the build artifact (container, .zip files, jar files, etc.).</li>
<li><strong>Provenance</strong> - the record of who created the software change, how it was modified and built, and what inputs went into it. It shows how the build artifact was built.</li>
<li><strong>Attestation</strong> - A cryptographically verifiable statement that asserts something about an artifact, specifically its Provenance. It is similar to the notary seal on a document. It doesn’t show the whole process, but it certifies its validity.</li>
</ul>
<p>SBOMs, Provenance, and Attestations are intertwined. Think of it like a cake.</p>
<ul>
<li>SBOMs are the ingredient list.</li>
<li>Provenance is the recipe and kitchen log (who cooked it, when, and with which tools).</li>
<li>Attestation is a signed certificate that proves the ingredient list, recipe, and cooking process are trustworthy.</li>
</ul>
<h2 id="supply-chain-security-is-more-than-sboms-provenance-and-attestation">Supply chain security is more than SBOMs, Provenance, and Attestation</h2>
<p>SBOMs, Provenance, and Attestations are new concepts to the typical software deployment pipeline. It is tempting to focus solely on them, but that is like only worrying about the tires on a car. There is much more to it. RBAC controls, branch protection policies, audit log streaming to SIEM, key vaults, approvals from ITSM, and authentication/authorization for cloud accounts, to name a few.</p>
<h2 id="responsibilities-differences-between-github-and-octopus-deploy">Responsibilities differences between GitHub and Octopus Deploy</h2>
<p>Clear boundaries between tooling in the deployment pipeline are essential. It sets expectations within your organization, avoids overlapping effort, and enables the tooling to be used as designed.</p>
<p>I’ve seen many instances where tooling is misused because it supports a simple use case. For example, approvals should be handled via ITSM tooling. But Octopus Deploy has the <a href="https://octopus.com/docs/projects/built-in-step-templates/manual-intervention-and-approvals">manual intervention step</a>. It was initially designed to let a deployment perform an act (generate a delta report), pause, and let someone review before proceeding. But for years, users tried to use it for deployment approvals. They wanted rules such as “the person who made the change can’t approve it.”</p>
<p>The responsibilities of GitHub and Octopus Deploy in a secure pipeline are as follows:</p>
<div class="table-wrap">
<table><thead><tr><th>GitHub </th><th>Octopus Deploy </th></tr></thead><tbody><tr><td>Branch Protection Policies </td><td>Environmental progression and release orchestration </td></tr><tr><td>Pull Request workflow </td><td>Centralized dashboard for deployment status and latest version </td></tr><tr><td>Linting, static code analysis, and vulnerability scanning </td><td>Creating and tracking approvals in ITSM tooling </td></tr><tr><td>Automated Testing (unit tests, integration tests, etc.) </td><td>RBAC and separation of duties for production deployments </td></tr><tr><td>Creating and publishing build artifacts (packages, containers, etc.)</td><td>Ingesting SBOMs and verifying Attestations </td></tr><tr><td>Generating SBOMs and Attestations </td><td>Environmental modeling of infrastructure </td></tr><tr><td>Calculating version numbers </td><td>Authentication and authorization to deployment targets and cloud providers</td></tr><tr><td>Creating releases in Octopus Deploy </td><td>Creating and destroying ephemeral environments </td></tr></tbody></table></div>
<h2 id="example-pipeline">Example Pipeline</h2>
<p>All the screenshots and scripts come from a real-world example I put together for demos and webinars. All the source code, builds, and deployment process are stored in a <a href="https://github.com/BobJWalker/Trident">public repo on GitHub</a> as well as my <a href="https://bobjwalker.octopus.app">publicly available Octopus Cloud instance</a>. You can log in as a guest.</p>
<p>As you can see, my GitHub workflow builds, tests, scans the code, generates the attestations, and then hands over to Octopus Deploy.</p>
<p><img src="/blog/img/supply-chain-security-with-github-and-octopus-deploy/github-action-run.png" alt="The latest GitHub action run from the example repo"></p>
<p>The deployment process will pull secrets from an Azure key vault, attach the SBOM as a deployment artifact, verify the attestations of all the build artifacts, build a database delta report, check the report, and finally deploy the database and website changes.</p>
<p><img src="/blog/img/supply-chain-security-with-github-and-octopus-deploy/octopus-deploy-sample-deployment.png" alt="The latest Octopus Deploy deployment from the example repo"></p>
<p>Supply chain security doesn’t mean slowing down the pipeline. You’ll notice many steps in the build workflow can run in parallel to speed up the overall process. A fast pipeline is just as important as a secure pipeline. Developers will hesitate to check in if it takes two hours to build, test, and scan the code. From the logs, you can see it took 7 minutes to build and deploy the application: 3 minutes for the GitHub Actions workflow and 4 minutes for Octopus Deploy.</p>
<h2 id="deployment-pipeline-rules">Deployment Pipeline Rules</h2>
<p>I created the deployment pipeline with the following rules:</p>
<ol>
<li>At least one person must review every change that can impact <code>Production</code>, be it source code, database schema, or production deployment.</li>
<li>All changes must be made in a branch and merged via a pull request. The <code>main</code> branch must always be deployable.</li>
<li>Builds must run anytime source code changes.</li>
<li>All unit tests and third-party vulnerability scans must run for every build.</li>
<li>All builds must generate attestations for their build artifacts.</li>
<li>All deployments must verify the attestations before making changes.</li>
<li>All interactions between systems (GitHub -> Octopus, Octopus -> Azure, etc.) must use OIDC whenever possible.</li>
<li>When OIDC is impossible, store any secrets/passwords in a key vault that enables secret rotation.</li>
</ol>
<h2 id="github-configuration">GitHub Configuration</h2>
<p>The focus for GitHub will be on securing code from tampering and minimizing third-party vulnerabilities. As such, the scope of the GitHub configuration extends beyond the GitHub build action.</p>
<h3 id="branch-rulesets">Branch rulesets</h3>
<p>To ensure a review occurs for any change that impacts the product, the <code>main</code> or primary branch should have an <a href="https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/managing-rulesets/about-rulesets">appropriate ruleset</a> that enforces the following:</p>
<ul>
<li>All commits are required to be made on a separate branch.</li>
<li>A pull request with at least one approval is required.</li>
<li>A code scanning result is required for critical alerts.</li>
</ul>
<p>Consult with your security team or CISO for other appropriate settings for your company.</p>
<h3 id="built-in-code-scanning">Built-in code scanning</h3>
<p>Use the scanning GitHub provides to find problems proactively. These include:</p>
<ul>
<li>Enabling <a href="https://docs.github.com/en/code-security/getting-started/dependabot-quickstart-guide">Dependabot</a> to alert you of vulnerabilities that impact your dependencies.</li>
<li>Enabling <a href="https://docs.github.com/en/code-security/secret-scanning/introduction/about-secret-scanning">Secret Scanning</a> to alert you if a secret is accidentally checked into version control.</li>
</ul>
<p>These settings will proactively find problems in your codebase instead of waiting for a build to run.
</p>
<h3 id="general-github-actions-settings">General GitHub Actions settings</h3>
<p>Before getting into SBOMs and attestations, use the following with your GitHub Actions.</p>
<ul>
<li>Use <a href="https://docs.github.com/en/actions/how-tos/write-workflows/choose-what-workflows-do/use-secrets">Secrets and Variables</a> instead of storing them directly in the GitHub action.</li>
<li>Use <a href="https://octopus.com/docs/octopus-rest-api/openid-connect/github-actions">OIDC</a> to login to Octopus Deploy. Do not use log-lived API keys. When configuring the service account user in Octopus Deploy, you must provide a subject for the OIDC authentication to work. On my instance, I’m a little more relaxed about the subjects.
<ul>
<li>Pull Request Subject: <code>repo:BobJWalker/*:pull_request</code> - accept any connection from any pull request workflow in any BobJWalker repo.</li>
<li>Builds: <code>repo:BobJWalker/*:ref:refs/heads/*</code> - accept any connection from any branch build in any BobJWalker repo.</li>
</ul>
</li>
<li><a href="https://docs.github.com/en/actions/reference/workflows-and-actions/events-that-trigger-workflows">Trigger builds</a> for <code>main</code> or primary branch, <code>hotfix</code> and <code>feature</code> branches, and manually.
Set up filters for specific folders or files to avoid a no-op build. For example, my build workflow triggers are:</li>
</ul>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="yaml"><code><span class="line"><span style="color:#0000FF">on</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#800000"> push</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#800000"> branches</span><span style="color:#000000">: </span></span>
<span class="line"><span style="color:#000000"> - </span><span style="color:#0000FF">main</span></span>
<span class="line"><span style="color:#000000"> - </span><span style="color:#0000FF">'feature/**'</span></span>
<span class="line"><span style="color:#000000"> - </span><span style="color:#0000FF">'features/**'</span></span>
<span class="line"><span style="color:#000000"> - </span><span style="color:#0000FF">'hotfix/**'</span></span>
<span class="line"><span style="color:#800000"> paths</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#000000"> - </span><span style="color:#0000FF">'src/**'</span></span>
<span class="line"><span style="color:#000000"> - </span><span style="color:#0000FF">'db/**'</span></span>
<span class="line"><span style="color:#000000"> - </span><span style="color:#0000FF">'k8s/**'</span></span>
<span class="line"><span style="color:#000000"> - </span><span style="color:#0000FF">'.github/workflows/build.yml'</span><span style="color:#008000"> ## This is in here so I get a new build with each build change I make </span></span>
<span class="line"><span style="color:#800000"> workflow_dispatch</span><span style="color:#000000">:</span></span></code></pre>
<h3 id="calculating-versions">Calculating Versions</h3>
<p>Believe it or not, calculating the version number was the most challenging part of this entire process. I’m using <a href="https://semver.org/">SemVer</a> and want to remain close to the rules.</p>
<ul>
<li>Builds from the <code>main</code> branch would use: <code>{Major}.{Minor}.{Patch}</code>, e.g. <code>6.16.26</code>. The <code>main</code> branch is the only thing that can go to <code>Production</code>.</li>
<li>Builds from any other branch would use: <code>{Major}.{Minor}.{Patch}-{EscapedBranchName}.{CommitsSinceVersionSource}</code>, e.g. <code>6.16.27-feature-singleton.1</code>. The escaped branch name would include <code>feature</code> or <code>hotfix</code>. The code on these branches is pre-release and should only go to <code>Development</code> for testing.</li>
<li>The first check-in to a non-main branch would auto increment the <code>{Patch}</code> using the version from the <code>main</code> branch. After that, only the <code>{CommitsSinceVersionSource}</code> would be incremented for the non-main branch check-ins. By default, all changes must be backward compatible.</li>
<li>Increasing the <code>{Major}.{Minor}</code> would be a manual process accomplished using commit messages, e.g., <code>+semver: major</code> for significant increments and <code>+semver: minor</code> for minor increments. The developer is making a deliberate decision to make a non-backward-compatible change, which they document via a git commit.</li>
</ul>
<p>Of the tools I tried, I kept returning to <a href="https://gitversion.net/docs/">gitversion</a>. It met 90% of my needs with no additional configuration. However, it doesn’t work well with dynamic version formats based on branch names. The result is a step with a small amount of business logic and output variables.</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="yaml"><code><span class="line"><span style="color:#800000">runs-on</span><span style="color:#000000">: </span><span style="color:#0000FF">ubuntu-latest</span></span>
<span class="line"><span style="color:#800000">name</span><span style="color:#000000">: </span><span style="color:#0000FF">Determine version</span></span>
<span class="line"><span style="color:#800000">outputs</span><span style="color:#000000">: </span></span>
<span class="line"><span style="color:#800000"> sem_ver</span><span style="color:#000000">: </span><span style="color:#0000FF">${{ steps.determine_version.outputs.AssemblySemFileVer }}</span></span>
<span class="line"><span style="color:#800000">steps</span><span style="color:#000000">: </span></span>
<span class="line"><span style="color:#000000"> - </span><span style="color:#800000">name</span><span style="color:#000000">: </span><span style="color:#0000FF">Set environment variable based on branch</span></span>
<span class="line"><span style="color:#800000"> id</span><span style="color:#000000">: </span><span style="color:#0000FF">set_env_var</span></span>
<span class="line"><span style="color:#800000"> run</span><span style="color:#000000">: </span><span style="color:#AF00DB">|</span></span>
<span class="line"><span style="color:#0000FF"> echo "GITHUB_REF: $GITHUB_REF"</span></span>
<span class="line"><span style="color:#0000FF"> echo "GITHUB_HEAD_REF: $GITHUB_HEAD_REF"</span></span>
<span class="line"><span style="color:#0000FF"> BRANCH_NAME="${GITHUB_REF#refs/heads/}"</span></span>
<span class="line"><span style="color:#0000FF"> echo "Branch detected: $BRANCH_NAME"</span></span>
<span class="line"><span style="color:#0000FF"> </span></span>
<span class="line"><span style="color:#0000FF"> if [ "$BRANCH_NAME" = "main" ]; then </span></span>
<span class="line"><span style="color:#0000FF"> echo "GIT_VERSION_INCREMENT=Patch" >> $GITHUB_ENV </span></span>
<span class="line"><span style="color:#0000FF"> echo "GIT_VERSION_MODE=ContinuousDeployment" >> $GITHUB_ENV </span></span>
<span class="line"><span style="color:#0000FF"> echo "GIT_VERSION_FORMAT={Major}.{Minor}.{Patch}" >> $GITHUB_ENV </span></span>
<span class="line"><span style="color:#0000FF"> else </span></span>
<span class="line"><span style="color:#0000FF"> echo "GIT_VERSION_INCREMENT=Patch" >> $GITHUB_ENV </span></span>
<span class="line"><span style="color:#0000FF"> echo "GIT_VERSION_MODE=ContinuousDelivery" >> $GITHUB_ENV</span></span>
<span class="line"><span style="color:#0000FF"> echo "GIT_VERSION_FORMAT={Major}.{Minor}.{Patch}-{EscapedBranchName}.{CommitsSinceVersionSource}" >> $GITHUB_ENV</span></span>
<span class="line"><span style="color:#0000FF"> fi</span></span>
<span class="line"><span style="color:#000000"> - </span><span style="color:#800000">uses</span><span style="color:#000000">: </span><span style="color:#0000FF">actions/checkout@v1</span></span>
<span class="line"><span style="color:#800000"> with</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#800000"> fetch-depth</span><span style="color:#000000">: </span><span style="color:#0000FF">'0'</span><span style="color:#000000"> </span></span>
<span class="line"><span style="color:#000000"> - </span><span style="color:#800000">name</span><span style="color:#000000">: </span><span style="color:#0000FF">Install GitVersion</span></span>
<span class="line"><span style="color:#800000"> uses</span><span style="color:#000000">: </span><span style="color:#0000FF">gittools/actions/gitversion/setup@v1</span></span>
<span class="line"><span style="color:#800000"> with</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#800000"> versionSpec</span><span style="color:#000000">: </span><span style="color:#098658">6.0.5</span></span>
<span class="line"><span style="color:#000000"> - </span><span style="color:#800000">id</span><span style="color:#000000">: </span><span style="color:#0000FF">determine_version</span></span>
<span class="line"><span style="color:#800000"> name</span><span style="color:#000000">: </span><span style="color:#0000FF">Determine Version</span></span>
<span class="line"><span style="color:#800000"> uses</span><span style="color:#000000">: </span><span style="color:#0000FF">gittools/actions/gitversion/execute@v1</span></span>
<span class="line"><span style="color:#800000"> with</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#800000"> additionalArguments</span><span style="color:#000000">: </span><span style="color:#0000FF">/overrideconfig assembly-file-versioning-format=${{ env.GIT_VERSION_FORMAT }} /overrideconfig increment=${{ env.GIT_VERSION_INCREMENT }} /overrideconfig mode=${{ env.GIT_VERSION_MODE }} /overrideconfig update-build-number=true</span></span></code></pre>
<h3 id="trivy">Trivy</h3>
<p>My tool of choice for vulnerability scanning is <a href="https://trivy.dev/latest/">Trivy</a>, an open-source security scanner that can scan third-party package references and container images, and generate SBOMs.</p>
<h4 id="trivy-source-code-scanning">Trivy source code scanning</h4>
<p>Trivy provides an action to scan third-party package references in the source code and upload the results to GitHub. For my .NET application, Trivy uses the <code>packages.lock.json</code> files. .NET generates that file automatically during restore when you add <code><RestorePackagesWithLockFile>true</RestorePackagesWithLockFile></code> to the project file.</p>
<p>For example:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="xml"><code><span class="line"><span style="color:#800000"><PropertyGroup></span></span>
<span class="line"><span style="color:#800000"> <TargetFramework></span><span style="color:#000000">net9.0</span><span style="color:#800000"></TargetFramework></span></span>
<span class="line"><span style="color:#800000"> <RootNamespace></span><span style="color:#000000">Trident.Web</span><span style="color:#800000"></RootNamespace></span></span>
<span class="line"><span style="color:#800000"> <VersionPrefix></span><span style="color:#000000">6.16</span><span style="color:#800000"></VersionPrefix></span></span>
<span class="line"><span style="color:#800000"> <StartupObject></span><span style="color:#000000">Trident.Web.Program</span><span style="color:#800000"></StartupObject></span></span>
<span class="line"><span style="color:#800000"> <RestorePackagesWithLockFile></span><span style="color:#000000">true</span><span style="color:#800000"></RestorePackagesWithLockFile></span><span style="color:#000000"> </span></span>
<span class="line"><span style="color:#800000"></PropertyGroup></span></span></code></pre>
<p>Once the <code>packages.lock.json</code> exists, only three steps are needed to scan and upload the results to GitHub. Trivy will report unfixed vulnerabilities, which creates a lot of noise. I configured the action to ignore unfixed vulnerabilities but stop the build if a fixable vulnerability is found.</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="yaml"><code><span class="line"><span style="color:#000000">- </span><span style="color:#800000">name</span><span style="color:#000000">: </span><span style="color:#0000FF">Checkout code</span></span>
<span class="line"><span style="color:#800000"> uses</span><span style="color:#000000">: </span><span style="color:#0000FF">actions/checkout@v4</span></span>
<span class="line"></span>
<span class="line"><span style="color:#000000">- </span><span style="color:#800000">name</span><span style="color:#000000">: </span><span style="color:#0000FF">Run Trivy vulnerability scanner on repo</span></span>
<span class="line"><span style="color:#800000"> uses</span><span style="color:#000000">: </span><span style="color:#0000FF">aquasecurity/trivy-action@0.32.0</span></span>
<span class="line"><span style="color:#800000"> with</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#800000"> scan-type</span><span style="color:#000000">: </span><span style="color:#0000FF">'fs'</span></span>
<span class="line"><span style="color:#800000"> ignore-unfixed</span><span style="color:#000000">: </span><span style="color:#0000FF">true</span><span style="color:#008000"> # Prevent unfixed results from being flagged</span></span>
<span class="line"><span style="color:#800000"> exit-code</span><span style="color:#000000">: </span><span style="color:#0000FF">'1'</span><span style="color:#008000"> # Stop the build if a fixable vulnerability is discovered</span></span>
<span class="line"><span style="color:#800000"> format</span><span style="color:#000000">: </span><span style="color:#0000FF">'sarif'</span></span>
<span class="line"><span style="color:#800000"> output</span><span style="color:#000000">: </span><span style="color:#0000FF">'trivy-results.sarif'</span></span>
<span class="line"><span style="color:#800000"> severity</span><span style="color:#000000">: </span><span style="color:#0000FF">'LOW,MEDIUM,HIGH,CRITICAL'</span><span style="color:#008000"> # Change the severity levels to match company policy</span></span>
<span class="line"></span>
<span class="line"><span style="color:#000000">- </span><span style="color:#800000">name</span><span style="color:#000000">: </span><span style="color:#0000FF">Upload Trivy scan results to GitHub Security tab</span></span>
<span class="line"><span style="color:#800000"> uses</span><span style="color:#000000">: </span><span style="color:#0000FF">github/codeql-action/upload-sarif@v3</span></span>
<span class="line"><span style="color:#800000"> with</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#800000"> sarif_file</span><span style="color:#000000">: </span><span style="color:#0000FF">'trivy-results.sarif'</span></span></code></pre>
<p>It is best to consult your company policies to determine the best approach (a report-only sketch follows this list):</p>
<ul>
<li>Report all vulnerabilities but allow the build to proceed.</li>
<li>Only report vulnerabilities that have been fixed and stop the build if one is found.</li>
<li>Only report vulnerabilities that have been fixed but allow the build to proceed.</li>
</ul>
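<p>As an example, the first option, report everything but allow the build to proceed, boils down to scanning with a zero exit code and without ignoring unfixed findings. Here is a minimal sketch using the Trivy CLI directly; the same settings map to the action’s <code>exit-code</code> and <code>ignore-unfixed</code> inputs:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="bash"><code># Report every vulnerability (fixed or not) but never fail the build
trivy fs --severity LOW,MEDIUM,HIGH,CRITICAL --exit-code 0 --format sarif --output trivy-results.sarif .</code></pre>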
<h4 id="trivy-container-scanning">Trivy container scanning</h4>
<p>You should also scan any containers you create for vulnerabilities. Your image might be built on a base image with known CVEs. Unfortunately, you have to wait until the container is built before scanning it, so you’ll need to be strategic about when you build the container versus when you publish it to your container registry.</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="yaml"><code><span class="line"><span style="color:#000000">- </span><span style="color:#800000">name</span><span style="color:#000000">: </span><span style="color:#0000FF">build website container</span></span>
<span class="line"><span style="color:#800000"> id</span><span style="color:#000000">: </span><span style="color:#0000FF">build_container</span></span>
<span class="line"><span style="color:#800000"> working-directory</span><span style="color:#000000">: </span><span style="color:#0000FF">src</span></span>
<span class="line"><span style="color:#800000"> run</span><span style="color:#000000">: </span><span style="color:#AF00DB">|</span><span style="color:#CD3131"> </span></span>
<span class="line"><span style="color:#0000FF"> docker build -f "./Trident.Web/Dockerfile" --build-arg APP_VERSION=${{ needs.prep.outputs.sem_Ver }} --tag ${{ vars.DOCKER_HUB_REPO }}:${{ needs.prep.outputs.sem_Ver }} --tag ${{ vars.DOCKER_HUB_REPO }}:latest . </span></span>
<span class="line"></span>
<span class="line"><span style="color:#000000">- </span><span style="color:#800000">name</span><span style="color:#000000">: </span><span style="color:#0000FF">Run Trivy vulnerability scanner on docker container</span></span>
<span class="line"><span style="color:#800000"> uses</span><span style="color:#000000">: </span><span style="color:#0000FF">aquasecurity/trivy-action@0.32.0</span></span>
<span class="line"><span style="color:#800000"> with</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#800000"> image-ref</span><span style="color:#000000">: </span><span style="color:#0000FF">'${{ vars.DOCKER_HUB_REPO }}:${{ needs.prep.outputs.sem_Ver }}'</span></span>
<span class="line"><span style="color:#800000"> vuln-type</span><span style="color:#000000">: </span><span style="color:#0000FF">'os,library'</span></span>
<span class="line"><span style="color:#800000"> ignore-unfixed</span><span style="color:#000000">: </span><span style="color:#0000FF">true</span><span style="color:#008000"> # Prevent unfixed results from being flagged</span></span>
<span class="line"><span style="color:#800000"> exit-code</span><span style="color:#000000">: </span><span style="color:#0000FF">'1'</span><span style="color:#008000"> # Stop the build if a fixable vulnerability is discovered</span></span>
<span class="line"><span style="color:#800000"> format</span><span style="color:#000000">: </span><span style="color:#0000FF">'sarif'</span></span>
<span class="line"><span style="color:#800000"> output</span><span style="color:#000000">: </span><span style="color:#0000FF">'trivy-image-results.sarif'</span><span style="color:#000000"> </span></span>
<span class="line"><span style="color:#800000"> severity</span><span style="color:#000000">: </span><span style="color:#0000FF">'LOW,MEDIUM,HIGH,CRITICAL'</span><span style="color:#008000"> # Change the severity levels to match company policy</span></span>
<span class="line"></span>
<span class="line"><span style="color:#000000">- </span><span style="color:#800000">name</span><span style="color:#000000">: </span><span style="color:#0000FF">Upload Trivy scan results to GitHub Security tab</span></span>
<span class="line"><span style="color:#800000"> uses</span><span style="color:#000000">: </span><span style="color:#0000FF">github/codeql-action/upload-sarif@v3</span></span>
<span class="line"><span style="color:#800000"> with</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#800000"> sarif_file</span><span style="color:#000000">: </span><span style="color:#0000FF">'trivy-image-results.sarif'</span></span>
<span class="line"></span>
<span class="line"><span style="color:#000000">- </span><span style="color:#800000">name</span><span style="color:#000000">: </span><span style="color:#0000FF">Login to Docker Hub</span></span>
<span class="line"><span style="color:#800000"> uses</span><span style="color:#000000">: </span><span style="color:#0000FF">docker/login-action@v2</span></span>
<span class="line"><span style="color:#800000"> with</span><span style="color:#000000">: </span></span>
<span class="line"><span style="color:#800000"> username</span><span style="color:#000000">: </span><span style="color:#0000FF">${{ secrets.DOCKERHUB_USERNAME }}</span></span>
<span class="line"><span style="color:#800000"> password</span><span style="color:#000000">: </span><span style="color:#0000FF">${{ secrets.DOCKERHUB_PAT }}</span></span>
<span class="line"><span style="color:#000000">- </span><span style="color:#800000">name</span><span style="color:#000000">: </span><span style="color:#0000FF">push docker image</span></span>
<span class="line"><span style="color:#800000"> working-directory</span><span style="color:#000000">: </span><span style="color:#0000FF">src</span></span>
<span class="line"><span style="color:#800000"> id</span><span style="color:#000000">: </span><span style="color:#0000FF">push_docker_image</span></span>
<span class="line"><span style="color:#800000"> run</span><span style="color:#000000">: </span><span style="color:#AF00DB">|</span></span>
<span class="line"><span style="color:#0000FF"> docker push ${{ vars.DOCKER_HUB_REPO }}:${{ needs.prep.outputs.sem_Ver }}</span></span>
<span class="line"><span style="color:#0000FF"> docker push ${{ vars.DOCKER_HUB_REPO }}:latest</span></span>
<span class="line"></span>
<span class="line"><span style="color:#0000FF"> dockerSha=$(docker manifest inspect ${{ vars.DOCKER_HUB_REPO }}:${{ needs.prep.outputs.sem_Ver }} -v | jq -r '.Descriptor.digest')</span></span>
<span class="line"><span style="color:#0000FF"> echo "Docker sha is $dockerSha" </span></span>
<span class="line"><span style="color:#0000FF"> echo "TRIDENT_DOCKER_SHA=$dockerSha" >> $GITHUB_OUTPUT</span></span></code></pre>
<h3 id="trivy-sbom-generation-packaging-and-publishing">Trivy SBOM generation, packaging, and publishing</h3>
<p>Trivy can generate SBOMs from the same <code>packages.lock.json</code> files (along with other package reference files) used to scan for vulnerabilities. For my deployment pipeline, I’m putting the SBOM into a .zip file and uploading it to Octopus Deploy so I can add it as a deployment artifact. If anyone asks for the SBOM, I can go directly to the production deployment and download it for them. I’m also publishing build information for this SBOM, so I have a record of all the commits that are part of this deployment.</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="yaml"><code><span class="line"><span style="color:#800000">runs-on</span><span style="color:#000000">: </span><span style="color:#0000FF">ubuntu-latest</span><span style="color:#000000"> </span></span>
<span class="line"><span style="color:#800000">permissions</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#008000"> # Add any additional permissions your job requires here</span></span>
<span class="line"><span style="color:#800000"> id-token</span><span style="color:#000000">: </span><span style="color:#0000FF">write</span><span style="color:#008000"> # This is required to obtain the OIDC Token for Octopus Deploy</span></span>
<span class="line"><span style="color:#800000">steps</span><span style="color:#000000">: </span></span>
<span class="line"><span style="color:#000000"> - </span><span style="color:#800000">name</span><span style="color:#000000">: </span><span style="color:#0000FF">Checkout the code for SBOM</span></span>
<span class="line"><span style="color:#800000"> uses</span><span style="color:#000000">: </span><span style="color:#0000FF">actions/checkout@v1</span></span>
<span class="line"><span style="color:#800000"> with</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#800000"> fetch-depth</span><span style="color:#000000">: </span><span style="color:#0000FF">'0'</span><span style="color:#000000"> </span></span>
<span class="line"><span style="color:#000000"> </span></span>
<span class="line"><span style="color:#000000"> - </span><span style="color:#800000">name</span><span style="color:#000000">: </span><span style="color:#0000FF">Run Trivy in GitHub SBOM mode and submit results to Dependency Graph</span></span>
<span class="line"><span style="color:#800000"> uses</span><span style="color:#000000">: </span><span style="color:#0000FF">aquasecurity/trivy-action@0.32.0</span></span>
<span class="line"><span style="color:#800000"> with</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#800000"> scan-type</span><span style="color:#000000">: </span><span style="color:#0000FF">'fs'</span></span>
<span class="line"><span style="color:#800000"> format</span><span style="color:#000000">: </span><span style="color:#0000FF">'github'</span></span>
<span class="line"><span style="color:#800000"> output</span><span style="color:#000000">: </span><span style="color:#0000FF">'dependency-results.sbom.json'</span></span>
<span class="line"><span style="color:#800000"> scan-ref</span><span style="color:#000000">: </span><span style="color:#0000FF">'.'</span></span>
<span class="line"><span style="color:#800000"> github-pat</span><span style="color:#000000">: </span><span style="color:#0000FF">${{ secrets.GITHUB_TOKEN }}</span></span>
<span class="line"></span>
<span class="line"><span style="color:#000000"> - </span><span style="color:#800000">name</span><span style="color:#000000">: </span><span style="color:#0000FF">Package SBOM</span></span>
<span class="line"><span style="color:#800000"> id</span><span style="color:#000000">: </span><span style="color:#A31515">"sbom_package"</span></span>
<span class="line"><span style="color:#800000"> uses</span><span style="color:#000000">: </span><span style="color:#0000FF">OctopusDeploy/create-zip-package-action@v3</span></span>
<span class="line"><span style="color:#800000"> with</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#800000"> package_id</span><span style="color:#000000">: </span><span style="color:#0000FF">Trident.SBOM</span></span>
<span class="line"><span style="color:#800000"> version</span><span style="color:#000000">: </span><span style="color:#A31515">"${{ needs.prep.outputs.sem_Ver }}"</span><span style="color:#008000"> # the version comes from an earlier step </span></span>
<span class="line"><span style="color:#800000"> base_path</span><span style="color:#000000">: </span><span style="color:#A31515">"./"</span><span style="color:#000000"> </span></span>
<span class="line"><span style="color:#800000"> files</span><span style="color:#000000">: </span><span style="color:#A31515">"dependency-results.sbom.json"</span></span>
<span class="line"><span style="color:#800000"> output_folder</span><span style="color:#000000">: </span><span style="color:#0000FF">packaged</span><span style="color:#000000"> </span></span>
<span class="line"><span style="color:#000000"> </span></span>
<span class="line"><span style="color:#000000"> - </span><span style="color:#800000">name</span><span style="color:#000000">: </span><span style="color:#0000FF">Create the Subject Checksum file for Attestation Build Provenance</span><span style="color:#000000"> </span></span>
<span class="line"><span style="color:#800000"> id</span><span style="color:#000000">: </span><span style="color:#0000FF">determine_sbom_hash</span><span style="color:#000000"> </span></span>
<span class="line"><span style="color:#800000"> shell</span><span style="color:#000000">: </span><span style="color:#0000FF">pwsh</span><span style="color:#000000"> </span></span>
<span class="line"><span style="color:#800000"> run</span><span style="color:#000000">: </span><span style="color:#AF00DB">|</span></span>
<span class="line"><span style="color:#0000FF"> $packageHash = Get-FileHash -path "packaged/Trident.SBOM.${{ needs.prep.outputs.sem_Ver }}.zip" -Algorithm SHA256</span></span>
<span class="line"><span style="color:#0000FF"> $hashToSave = $packageHash.Hash </span></span>
<span class="line"><span style="color:#0000FF"> Write-Host "The SBOM package hash is $hashToSave"</span></span>
<span class="line"><span style="color:#0000FF"> </span></span>
<span class="line"><span style="color:#0000FF"> "SBOM_HASH=$hashToSave" | Out-File -FilePath $env:GITHUB_OUTPUT -Append</span></span>
<span class="line"><span style="color:#0000FF"> </span></span>
<span class="line"><span style="color:#000000"> - </span><span style="color:#800000">name</span><span style="color:#000000">: </span><span style="color:#0000FF">Login to Octopus Deploy 🐙</span></span>
<span class="line"><span style="color:#800000"> uses</span><span style="color:#000000">: </span><span style="color:#0000FF">OctopusDeploy/login@v1</span></span>
<span class="line"><span style="color:#800000"> with</span><span style="color:#000000">: </span></span>
<span class="line"><span style="color:#800000"> server</span><span style="color:#000000">: </span><span style="color:#0000FF">${{ vars.OCTOPUS_SERVER_URL }}</span></span>
<span class="line"><span style="color:#800000"> service_account_id</span><span style="color:#000000">: </span><span style="color:#0000FF">${{ secrets.OCTOPUS_OIDC_SERVICE_ACCOUNT_ID }}</span></span>
<span class="line"><span style="color:#000000"> - </span><span style="color:#800000">name</span><span style="color:#000000">: </span><span style="color:#0000FF">Push packages to Octopus 🐙</span></span>
<span class="line"><span style="color:#800000"> uses</span><span style="color:#000000">: </span><span style="color:#0000FF">OctopusDeploy/push-package-action@v3</span></span>
<span class="line"><span style="color:#800000"> with</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#800000"> server</span><span style="color:#000000">: </span><span style="color:#0000FF">${{ vars.OCTOPUS_SERVER_URL }}</span></span>
<span class="line"><span style="color:#800000"> space</span><span style="color:#000000">: </span><span style="color:#0000FF">${{ vars.OCTOPUS_SPACE }}</span></span>
<span class="line"><span style="color:#800000"> packages</span><span style="color:#000000">: </span><span style="color:#AF00DB">|</span></span>
<span class="line"><span style="color:#0000FF"> packaged/Trident.SBOM.${{ needs.prep.outputs.sem_Ver }}.zip # the version comes from an earlier step</span></span>
<span class="line"><span style="color:#000000"> - </span><span style="color:#800000">name</span><span style="color:#000000">: </span><span style="color:#0000FF">Push build information to Octopus 🐙</span></span>
<span class="line"><span style="color:#800000"> uses</span><span style="color:#000000">: </span><span style="color:#0000FF">OctopusDeploy/push-build-information-action@v3</span></span>
<span class="line"><span style="color:#800000"> with</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#800000"> packages</span><span style="color:#000000">: </span><span style="color:#AF00DB">|</span><span style="color:#CD3131"> </span></span>
<span class="line"><span style="color:#0000FF"> Trident.SBOM </span></span>
<span class="line"><span style="color:#800000"> version</span><span style="color:#000000">: </span><span style="color:#A31515">"${{ needs.prep.outputs.sem_Ver }}"</span><span style="color:#008000"> # the version comes from an earlier step</span></span>
<span class="line"><span style="color:#800000"> server</span><span style="color:#000000">: </span><span style="color:#0000FF">${{ vars.OCTOPUS_SERVER_URL }}</span></span>
<span class="line"><span style="color:#800000"> space</span><span style="color:#000000">: </span><span style="color:#0000FF">${{ vars.OCTOPUS_SPACE }}</span></span></code></pre>
<h3 id="create-attestations-in-github-actions">Create Attestations in GitHub Actions</h3>
<p>This sample workflow builds three artifacts.</p>
<ol>
<li>SBOM Package <code>Trident.SBOM</code></li>
<li>Database Schema Package <code>Trident.Database.DBUp</code></li>
<li>Website Container <code>bobjwalker99/trident</code></li>
</ol>
<p>I wanted a single attestation covering all three build artifacts. To do that, I needed to get the <code>SHA256</code> hash for each artifact and combine them into a single file to send to the action.</p>
<p>In the example above, you likely saw the following for the container:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="bash"><code><span class="line"><span style="color:#001080">dockerSha</span><span style="color:#000000">=$(</span><span style="color:#795E26">docker</span><span style="color:#A31515"> manifest</span><span style="color:#A31515"> inspect</span><span style="color:#000000"> ${{ </span><span style="color:#001080">vars</span><span style="color:#000000">.</span><span style="color:#001080">DOCKER_HUB_REPO</span><span style="color:#000000"> }</span><span style="color:#A31515">}:</span><span style="color:#000000">${{ </span><span style="color:#001080">needs</span><span style="color:#000000">.</span><span style="color:#001080">prep</span><span style="color:#000000">.</span><span style="color:#001080">outputs</span><span style="color:#000000">.</span><span style="color:#001080">sem_Ver</span><span style="color:#000000"> }</span><span style="color:#A31515">}</span><span style="color:#0000FF"> -v</span><span style="color:#000000"> | </span><span style="color:#795E26">jq</span><span style="color:#0000FF"> -r</span><span style="color:#A31515"> '.Descriptor.digest'</span><span style="color:#000000">)</span></span>
<span class="line"><span style="color:#795E26">echo</span><span style="color:#A31515"> "Docker sha is </span><span style="color:#001080">$dockerSha</span><span style="color:#A31515">"</span><span style="color:#000000"> </span></span>
<span class="line"><span style="color:#795E26">echo</span><span style="color:#A31515"> "TRIDENT_DOCKER_SHA=</span><span style="color:#001080">$dockerSha</span><span style="color:#A31515">"</span><span style="color:#000000"> >> </span><span style="color:#001080">$GITHUB_OUTPUT</span></span></code></pre>
<p>For the packages I used:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="bash"><code><span class="line"><span style="color:#001080">$packageHash</span><span style="color:#000000"> = Get-FileHash -path </span><span style="color:#A31515">"packaged/Trident.SBOM.${{ </span><span style="color:#001080">needs</span><span style="color:#A31515">.</span><span style="color:#001080">prep</span><span style="color:#A31515">.</span><span style="color:#001080">outputs</span><span style="color:#A31515">.</span><span style="color:#001080">sem_Ver</span><span style="color:#A31515"> }}.zip"</span><span style="color:#000000"> -Algorithm SHA256</span></span>
<span class="line"><span style="color:#001080">$hashToSave</span><span style="color:#000000"> = </span><span style="color:#001080">$packageHash</span><span style="color:#000000">.Hash </span></span>
<span class="line"><span style="color:#795E26">Write-Host</span><span style="color:#A31515"> "The SBOM package hash is </span><span style="color:#001080">$hashToSave</span><span style="color:#A31515">"</span></span>
<span class="line"><span style="color:#000000"> </span></span>
<span class="line"><span style="color:#795E26">"SBOM_HASH=</span><span style="color:#001080">$hashToSave</span><span style="color:#795E26">"</span><span style="color:#000000"> | </span><span style="color:#795E26">Out-File</span><span style="color:#0000FF"> -FilePath</span><span style="color:#001080"> $env</span><span style="color:#A31515">:GITHUB_OUTPUT</span><span style="color:#0000FF"> -Append</span></span></code></pre>
<p>I next needed to dump all of those hashes to a <code>.txt</code> file for the attestation action to consume. The result is:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="yaml"><code><span class="line"><span style="color:#800000">runs-on</span><span style="color:#000000">: </span><span style="color:#0000FF">ubuntu-latest</span></span>
<span class="line"><span style="color:#800000">permissions</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#800000"> id-token</span><span style="color:#000000">: </span><span style="color:#0000FF">write</span></span>
<span class="line"><span style="color:#800000"> attestations</span><span style="color:#000000">: </span><span style="color:#0000FF">write</span><span style="color:#008000"> # Required to publish attestations </span></span>
<span class="line"><span style="color:#800000">steps</span><span style="color:#000000">: </span></span>
<span class="line"><span style="color:#000000"> - </span><span style="color:#800000">name</span><span style="color:#000000">: </span><span style="color:#0000FF">Create the Subject Checksum file for Provenance</span><span style="color:#000000"> </span></span>
<span class="line"><span style="color:#800000"> shell</span><span style="color:#000000">: </span><span style="color:#0000FF">pwsh</span><span style="color:#000000"> </span></span>
<span class="line"><span style="color:#800000"> run</span><span style="color:#000000">: </span><span style="color:#AF00DB">|</span><span style="color:#CD3131"> </span></span>
<span class="line"><span style="color:#0000FF"> $cleanedPackageSha = $("${{ needs.build_and_publish_database.outputs.database_hash }}" -replace "sha256:", "").Trim()</span></span>
<span class="line"><span style="color:#0000FF"> $cleanedSbomSha = $("${{ needs.sbom.outputs.sbom_hash }}" -replace "sha256:", "").Trim()</span></span>
<span class="line"><span style="color:#0000FF"> $cleanedImageSha = $("${{ needs.build_and_publish_website.outputs.website_hash }}" -replace "sha256:", "").Trim()</span></span>
<span class="line"></span>
<span class="line"><span style="color:#0000FF"> $imageSubject = "${{ vars.DOCKER_HUB_REPO }}:${{ needs.prep.outputs.sem_Ver }}".Trim()</span></span>
<span class="line"><span style="color:#0000FF"> $packageSubject = "Trident.Database.DbUp.${{ needs.prep.outputs.sem_Ver }}.zip".Trim()</span></span>
<span class="line"><span style="color:#0000FF"> $sbomSubject = "Trident.SBOM.${{ needs.prep.outputs.sem_Ver }}.zip".Trim()</span></span>
<span class="line"></span>
<span class="line"><span style="color:#0000FF"> Write-Host "The website information is $cleanedImageSha $imageSubject"</span></span>
<span class="line"><span style="color:#0000FF"> Write-Host "The database information is $cleanedPackageSha $packageSubject"</span></span>
<span class="line"><span style="color:#0000FF"> Write-Host "The SBOM information is $cleanedSbomSha $sbomSubject"</span></span>
<span class="line"></span>
<span class="line"><span style="color:#0000FF"> $subjectText = @"</span></span>
<span class="line"><span style="color:#0000FF"> $cleanedImageSha $imageSubject</span></span>
<span class="line"><span style="color:#0000FF"> $cleanedPackageSha $packageSubject</span></span>
<span class="line"><span style="color:#0000FF"> $cleanedSbomSha $sbomSubject</span></span>
<span class="line"><span style="color:#0000FF"> "@</span></span>
<span class="line"></span>
<span class="line"><span style="color:#0000FF"> Write-Host "Creating the checksums file"</span></span>
<span class="line"><span style="color:#0000FF"> New-Item -Path . -Name "subject.checksums.txt" -ItemType "File" -Value $subjectText </span></span>
<span class="line"><span style="color:#000000"> - </span><span style="color:#800000">name</span><span style="color:#000000">: </span><span style="color:#0000FF">Generate Attestation from Provenance</span></span>
<span class="line"><span style="color:#800000"> uses</span><span style="color:#000000">: </span><span style="color:#0000FF">actions/attest-build-provenance@v2</span></span>
<span class="line"><span style="color:#800000"> id</span><span style="color:#000000">: </span><span style="color:#0000FF">websiteattest</span></span>
<span class="line"><span style="color:#800000"> with</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#800000"> subject-checksums</span><span style="color:#000000">: </span><span style="color:#0000FF">subject.checksums.txt</span></span></code></pre>
<p>The resulting attestation is:</p>
<p><img src="/blog/img/supply-chain-security-with-github-and-octopus-deploy/attestation-in-github.png" alt="attestation in GitHub generated from an action"></p>
<h3 id="handing-over-to-octopus-deploy">Handing over to Octopus Deploy</h3>
<p>In my Octopus Deploy instance, all feature and hotfix branches are deployed to the <code>Development</code> environment, while the <code>main</code> or primary branch is deployed to <code>Test</code> -> <code>Staging</code> -> <code>Production</code>. That allows me to get feedback on the short-lived branches while keeping <code>main</code> always in a deployable state.</p>
<p>The challenge is that the GitHub Action needs to pick the channel based on the branch that triggered the workflow. It is easy to do, but the syntax is a little goofy. It ends up being: <code>channel: ${{ github.ref == 'refs/heads/main' && vars.OCTOPUS_RELEASE_CHANNEL || vars.OCTOPUS_FEATURE_BRANCH_CHANNEL }}</code>, which says: when on <code>main</code>, use the release channel; otherwise, use the channel that deploys to <code>Development</code>.</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="yaml"><code><span class="line"><span style="color:#800000">permissions</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#008000"> # Add any additional permissions your job requires here</span></span>
<span class="line"><span style="color:#800000"> id-token</span><span style="color:#000000">: </span><span style="color:#0000FF">write</span><span style="color:#008000"> # This is required to obtain the OIDC Token for Octopus Deploy </span></span>
<span class="line"><span style="color:#800000">steps</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#000000"> - </span><span style="color:#800000">name</span><span style="color:#000000">: </span><span style="color:#0000FF">Login to Octopus Deploy 🐙</span></span>
<span class="line"><span style="color:#800000"> uses</span><span style="color:#000000">: </span><span style="color:#0000FF">OctopusDeploy/login@v1</span></span>
<span class="line"><span style="color:#800000"> with</span><span style="color:#000000">: </span></span>
<span class="line"><span style="color:#800000"> server</span><span style="color:#000000">: </span><span style="color:#0000FF">${{ vars.OCTOPUS_SERVER_URL }}</span></span>
<span class="line"><span style="color:#800000"> service_account_id</span><span style="color:#000000">: </span><span style="color:#0000FF">${{ secrets.OCTOPUS_OIDC_SERVICE_ACCOUNT_ID }}</span><span style="color:#000000"> </span></span>
<span class="line"><span style="color:#000000"> - </span><span style="color:#800000">name</span><span style="color:#000000">: </span><span style="color:#0000FF">Create and deploy release in Octopus 🐙</span></span>
<span class="line"><span style="color:#800000"> uses</span><span style="color:#000000">: </span><span style="color:#0000FF">OctopusDeploy/create-release-action@v3</span></span>
<span class="line"><span style="color:#800000"> with</span><span style="color:#000000">:</span></span>
<span class="line"><span style="color:#800000"> server</span><span style="color:#000000">: </span><span style="color:#0000FF">${{ vars.OCTOPUS_SERVER_URL }}</span></span>
<span class="line"><span style="color:#800000"> space</span><span style="color:#000000">: </span><span style="color:#0000FF">${{ vars.OCTOPUS_SPACE }}</span></span>
<span class="line"><span style="color:#800000"> project</span><span style="color:#000000">: </span><span style="color:#0000FF">${{ vars.OCTOPUS_PROJECT_NAME }}</span></span>
<span class="line"><span style="color:#800000"> channel</span><span style="color:#000000">: </span><span style="color:#0000FF">${{ github.ref == 'refs/heads/main' && vars.OCTOPUS_RELEASE_CHANNEL || vars.OCTOPUS_FEATURE_BRANCH_CHANNEL }}</span></span>
<span class="line"><span style="color:#800000"> package_version</span><span style="color:#000000">: </span><span style="color:#A31515">"${{ needs.prep.outputs.sem_Ver }}"</span><span style="color:#000000"> </span></span>
<span class="line"><span style="color:#800000"> release_number</span><span style="color:#000000">: </span><span style="color:#A31515">"${{ needs.prep.outputs.sem_Ver }}"</span><span style="color:#000000"> </span></span>
<span class="line"><span style="color:#800000"> git_ref</span><span style="color:#000000">: </span><span style="color:#0000FF">${{ (github.ref_type == 'tag' && github.event.repository.default_branch ) || (github.head_ref || github.ref) }}</span></span>
<span class="line"><span style="color:#800000"> git_commit</span><span style="color:#000000">: </span><span style="color:#0000FF">${{ github.event.after || github.event.pull_request.head.sha }}</span><span style="color:#000000"> </span></span></code></pre>
<h2 id="octopus-deploy-configuration">Octopus Deploy Configuration</h2>
<p>Octopus Deploy is the tooling that changes <code>Production</code>, impacting your users and customers. Because of that, this section spends a lot of time on ensuring that only authorized changes to the application and its deployment configuration make it to <code>Production</code>.</p>
<h3 id="permissions">Permissions</h3>
<p>We want to divide permissions between Platform/DevOps engineers and Developers. The Platform/DevOps engineer is the producer; they are experts in the tooling, deployment targets, and company policies. The developer is the consumer; they are experts in their application. The producers create all the pieces necessary for the consumer to use. Consumers can modify settings specific to their application. Producers make sure all pipelines are compliant.</p>
<p>Below is a list of permissions to illustrate the difference between Producers and Consumers.</p>
<div class="table-wrap">
<table><thead><tr><th>Permission</th><th>DevOps / Platform Engineer (Producer)</th><th>Developer (Consumer)</th></tr></thead><tbody><tr><td>Create Projects</td><td>Yes</td><td>No</td></tr><tr><td>Create and modify Variable Sets</td><td>Yes</td><td>No</td></tr><tr><td>Create and modify Environments and Lifecycles</td><td>Yes</td><td>No</td></tr><tr><td>Create and modify cloud accounts, feeds, and GitHub accounts</td><td>Yes</td><td>No</td></tr><tr><td>Projects - Configure Version Control and branch protection policies</td><td>Yes</td><td>No</td></tr><tr><td>Projects - Create and modify channels</td><td>Yes</td><td>No</td></tr><tr><td>Projects - Modify ITSM settings</td><td>Yes</td><td>No</td></tr><tr><td>Projects - Modify guided failure settings</td><td>Yes</td><td>No</td></tr><tr><td>Projects - Modify runbooks</td><td>Yes - must be done in a branch and submitted via a PR</td><td>Yes - must be done in a branch and submitted via a PR</td></tr><tr><td>Projects - Modify deployment process</td><td>Yes - must be done in a branch and submitted via a PR</td><td>Yes - must be done in a branch and submitted via a PR</td></tr><tr><td>Projects - Modify variables</td><td>Yes - must be done in a branch and submitted via a PR</td><td>Yes - must be done in a branch and submitted via a PR</td></tr></tbody></table></div>
<p>To accomplish that separation, you will need to:</p>
<ol>
<li>Configure <a href="https://octopus.com/docs/projects/version-control">project version control</a> for each project.</li>
<li>Configure branch protection policies for the <code>main</code> or primary branch in project version control.</li>
<li>Configure RBAC for your users (see below).</li>
</ol>
<h3 id="rbac-in-octopus">RBAC In Octopus</h3>
<p>For producers (Platform/DevOps Engineers), you’ll need to create a custom role called <code>Platform Engineer</code>. For Consumers (developers), you’ll need to create a custom role called <code>Developer</code>.</p>
<p>The specific permissions for each role will be:</p>
<div class="table-wrap">
<table><thead><tr><th>Permission</th><th>Platform Engineers</th><th>Developer</th></tr></thead><tbody><tr><td>AccountCreate</td><td>Yes</td><td>No</td></tr><tr><td>AccountDelete</td><td>Yes</td><td>No</td></tr><tr><td>AccountEdit</td><td>Yes</td><td>No</td></tr><tr><td>AccountView</td><td>Yes</td><td>Yes</td></tr><tr><td>ActionTemplateCreate</td><td>Yes</td><td>No</td></tr><tr><td>ActionTemplateDelete</td><td>Yes</td><td>No</td></tr><tr><td>ActionTemplateEdit</td><td>Yes</td><td>No</td></tr><tr><td>ActionTemplateView</td><td>Yes</td><td>Yes</td></tr><tr><td>ArtifactCreate</td><td>Yes</td><td>No</td></tr><tr><td>ArtifactDelete</td><td>Yes</td><td>No</td></tr><tr><td>ArtifactEdit</td><td>Yes</td><td>No</td></tr><tr><td>ArtifactView</td><td>Yes</td><td>Yes</td></tr><tr><td>BuiltInFeedPush</td><td>Yes</td><td>Yes</td></tr><tr><td>CertificateView</td><td>Yes</td><td>Yes</td></tr><tr><td>DefectReport</td><td>Yes</td><td>Yes</td></tr><tr><td>DefectResolve</td><td>Yes</td><td>Yes</td></tr><tr><td>DeploymentView</td><td>Yes</td><td>Yes</td></tr><tr><td>EnvironmentCreate</td><td>Yes</td><td>No</td></tr><tr><td>EnvironmentDelete</td><td>Yes</td><td>No</td></tr><tr><td>EnvironmentEdit</td><td>Yes</td><td>No</td></tr><tr><td>EnvironmentView</td><td>Yes</td><td>Yes</td></tr><tr><td>EventView</td><td>Yes</td><td>Yes</td></tr><tr><td>FeedEdit</td><td>Yes</td><td>No</td></tr><tr><td>FeedView</td><td>Yes</td><td>Yes</td></tr><tr><td>GitCredentialEdit</td><td>Yes</td><td>No</td></tr><tr><td>GitCredentialView</td><td>Yes</td><td>Yes</td></tr><tr><td>InsightsReportCreate</td><td>Yes</td><td>No</td></tr><tr><td>InsightsReportDelete</td><td>Yes</td><td>No</td></tr><tr><td>InsightsReportEdit</td><td>Yes</td><td>No</td></tr><tr><td>InsightsReportView</td><td>Yes</td><td>Yes</td></tr><tr><td>InterruptionView</td><td>Yes</td><td>Yes</td></tr><tr><td>InterruptionViewSubmitResponsible</td><td>Yes</td><td>Yes</td></tr><tr><td>LibraryVariableSetCreate</td><td>Yes</td><td>No</td></tr><tr><td>LibraryVariableSetDelete</td><td>Yes</td><td>No</td></tr><tr><td>LibraryVariableSetEdit</td><td>Yes</td><td>No</td></tr><tr><td>LibraryVariableSetView</td><td>Yes</td><td>Yes</td></tr><tr><td>LifecycleCreate</td><td>Yes</td><td>No</td></tr><tr><td>LifecycleDelete</td><td>Yes</td><td>No</td></tr><tr><td>LifecycleEdit</td><td>Yes</td><td>No</td></tr><tr><td>LifecycleView</td><td>Yes</td><td>Yes</td></tr><tr><td>MachineCreate</td><td>Yes</td><td>No</td></tr><tr><td>MachineDelete</td><td>Yes</td><td>No</td></tr><tr><td>MachineEdit</td><td>Yes</td><td>No</td></tr><tr><td>MachineView</td><td>Yes</td><td>Yes</td></tr><tr><td>MachinePolicyCreate</td><td>Yes</td><td>No</td></tr><tr><td>MachinePolicyDelete</td><td>Yes</td><td>No</td></tr><tr><td>MachinePolicyEdit</td><td>Yes</td><td>No</td></tr><tr><td>MachinePolicyView</td><td>Yes</td><td>Yes</td></tr><tr><td>ProcessEdit</td><td>Yes</td><td>Yes</td></tr><tr><td>ProcessView</td><td>Yes</td><td>Yes</td></tr><tr><td>ProjectCreate</td><td>Yes</td><td>No</td></tr><tr><td>ProjectDelete</td><td>Yes</td><td>No</td></tr><tr><td>ProjectEdit</td><td>Yes</td><td>No</td></tr><tr><td>ProjectView</td><td>Yes</td><td>Yes</td></tr><tr><td>ProjectGroupCreate</td><td>Yes</td><td>No</td></tr><tr><td>ProjectGroupDelete</td><td>Yes</td><td>No</td></tr><tr><td>ProjectGroupEdit</td><td>Yes</td><td>No</td></tr><tr><td>ProjectGroupView</td><td>Yes</td><td>Yes</td></tr><tr><td>ProxyCreate</td><td>Yes</td><td>No</td></tr><tr><td>ProxyDelete</td><td>Yes</td><td>No</td></tr><tr><td>ProxyEdit</td><td>Yes</td><td>No</td></tr
><tr><td>ProxyView</td><td>Yes</td><td>Yes</td></tr><tr><td>ReleaseCreate</td><td>Yes</td><td>Yes</td></tr><tr><td>ReleaseDelete</td><td>Yes</td><td>No</td></tr><tr><td>ReleaseView</td><td>Yes</td><td>Yes</td></tr><tr><td>RetentionAdminister</td><td>Yes</td><td>No</td></tr><tr><td>RunbookEdit</td><td>Yes</td><td>Yes</td></tr><tr><td>RunbookRunView</td><td>Yes</td><td>Yes</td></tr><tr><td>RunbookView</td><td>Yes</td><td>Yes</td></tr><tr><td>SubscriptionCreate</td><td>Yes</td><td>No</td></tr><tr><td>SubscriptionDelete</td><td>Yes</td><td>No</td></tr><tr><td>SubscriptionEdit</td><td>Yes</td><td>No</td></tr><tr><td>SubscriptionView</td><td>Yes</td><td>Yes</td></tr><tr><td>TagSetCreate</td><td>Yes</td><td>No</td></tr><tr><td>TagSetDelete</td><td>Yes</td><td>No</td></tr><tr><td>TagSetEdit</td><td>Yes</td><td>No</td></tr><tr><td>TargetTagAdminister</td><td>Yes</td><td>No</td></tr><tr><td>TargetTagView</td><td>Yes</td><td>Yes</td></tr><tr><td>TaskCancel</td><td>Yes</td><td>No</td></tr><tr><td>TaskCreate</td><td>Yes</td><td>No</td></tr><tr><td>TaskView</td><td>Yes</td><td>Yes</td></tr><tr><td>TenantView</td><td>Yes</td><td>Yes</td></tr><tr><td>TriggerCreate</td><td>Yes</td><td>Yes</td></tr><tr><td>TriggerDelete</td><td>Yes</td><td>Yes</td></tr><tr><td>TriggerEdit</td><td>Yes</td><td>Yes</td></tr><tr><td>TriggerView</td><td>Yes</td><td>Yes</td></tr><tr><td>TeamCreate</td><td>Yes</td><td>No</td></tr><tr><td>TeamDelete</td><td>Yes</td><td>No</td></tr><tr><td>TeamEdit</td><td>Yes</td><td>No</td></tr><tr><td>TeamView</td><td>Yes</td><td>Yes</td></tr><tr><td>VariableEdit</td><td>Yes</td><td>Yes</td></tr><tr><td>VariableEditUnscoped</td><td>Yes</td><td>Yes</td></tr><tr><td>VariableView</td><td>Yes</td><td>Yes</td></tr><tr><td>VariableViewUnscoped</td><td>Yes</td><td>Yes</td></tr><tr><td>WorkerEdit</td><td>Yes</td><td>No</td></tr><tr><td>WorkerView</td><td>Yes</td><td>Yes</td></tr><tr><td>System Perm - DeploymentFreezeAdminister</td><td>Yes</td><td>No</td></tr><tr><td>System Permission - SpaceCreate</td><td>Yes</td><td>No</td></tr><tr><td>System Permission - SpaceDelete</td><td>Yes</td><td>No</td></tr><tr><td>System Permission - SpaceEdit</td><td>Yes</td><td>No</td></tr><tr><td>System Permission - SpaceView</td><td>Yes</td><td>No</td></tr><tr><td>System Perm - PlatformHubEdit</td><td>Yes</td><td>No</td></tr><tr><td>System Perm - PlatformHubView</td><td>Yes</td><td>No</td></tr><tr><td>System Permission - TaskCancel</td><td>Yes</td><td>No</td></tr><tr><td>System Permission - TaskCreate</td><td>Yes</td><td>No</td></tr><tr><td>System Permission - TeamCreate</td><td>Yes</td><td>No</td></tr><tr><td>System Permission - TeamDelete</td><td>Yes</td><td>No</td></tr><tr><td>System Permission - TeamEdit</td><td>Yes</td><td>No</td></tr><tr><td>System Permission - TeamView</td><td>Yes</td><td>Yes</td></tr><tr><td>System Permission - UserInvite</td><td>Yes</td><td>Yes</td></tr><tr><td>System Permission - UserRoleView</td><td>Yes</td><td>Yes</td></tr><tr><td>System Permission - UserView</td><td>Yes</td><td>Yes</td></tr></tbody></table></div>
<p>Once those roles are created, create the appropriate teams and assign them those roles. Do not scope them to any environment or tenant.</p>
<p>The new roles detailed above provide the appropriate editing capabilities within Octopus Deploy. They purposely exclude creating deployments and runbook runs; those are granted through the built-in roles below, which you’ll likely want to scope to the appropriate environments or tenants.</p>
<p>Common scenarios we see:</p>
<ul>
<li>Developers can deploy any project to <code>Development</code> and <code>Test</code>, but not <code>Production</code>.</li>
<li>Developers can deploy specific projects to <code>Development</code>, <code>Test</code>, and <code>Production</code>. But no other projects.</li>
<li>Release managers or web admins can deploy to <code>Production</code>.</li>
</ul>
<div class="table-wrap">
<table><thead><tr><th>Role</th><th>Platform Engineers</th><th>Developer</th></tr></thead><tbody><tr><td>Deployment Creator</td><td>Yes - no scoping</td><td>Yes - scoped to specific environments, tenants, or projects</td></tr><tr><td>Runbook Consumer</td><td>Yes - no scoping</td><td>Yes - scoped to specific environments, tenants, or projects</td></tr><tr><td>Tenant Manager (if leveraging multi-tenancy)</td><td>Yes - no scoping</td><td>Yes - scoped to specific environments, tenants, or projects</td></tr></tbody></table></div>
<p>The Developer team on my instance is configured to allow them to deploy to <code>Development</code>, <code>Test</code>, and <code>Staging</code> while allowing them to run runbooks on any environment.</p>
<p><img src="/blog/img/supply-chain-security-with-github-and-octopus-deploy/team-user-role-assignment.png" alt="Developer team user role assignment"></p>
<h3 id="cloud-accounts-and-third-party-key-vaults">Cloud Accounts and Third-Party Key Vaults</h3>
<p>In my example deployment pipeline, I’m storing secrets inside <a href="https://azure.microsoft.com/en-us/products/key-vault">Azure Key Vault</a> instead of using Octopus Deploy’s <a href="https://octopus.com/docs/projects/variables/sensitive-variables">sensitive variables</a>. My primary reason is that Azure Key Vault is focused on storing secrets and doesn’t try to be anything else. It offers secret-management features, such as secret rotation and versioning, that Octopus Deploy doesn’t provide.</p>
<p><a href="https://octopus.com/blog/using-azure-key-vault-with-octopus">This blog post</a> walks you through configuring Azure Key Vault with Octopus Deploy. The primary difference between my configuration and that configuration is that my Azure Account in Octopus Deploy uses <a href="https://octopus.com/docs/infrastructure/accounts/openid-connect">OIDC</a>.</p>
<p><img src="/blog/img/supply-chain-security-with-github-and-octopus-deploy/azure-account-with-oidc.png" alt="Azure account in Octopus Deploy using OIDC"></p>
<p>With Third-Party Key Vaults and OIDC, I aim to eliminate Octopus Deploy from the secret-storing business altogether.</p>
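<p>At deployment time, a script step (or the step template described in the post linked above) pulls each secret just before it is needed. A rough sketch with the Azure CLI, where the vault and secret names are hypothetical:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="bash"><code># Fetch a connection string from Azure Key Vault during the deployment
# (vault and secret names are illustrative)
CONNECTION_STRING=$(az keyvault secret show \
  --vault-name trident-keyvault \
  --name trident-database-connection \
  --query value \
  --output tsv)</code></pre>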
<h3 id="lifecycles">Lifecycles</h3>
<p><a href="https://octopus.com/docs/releases/lifecycles">Lifecycles</a> are among Octopus Deploy’s most misused features.</p>
<p>The most common misconfiguration I see is:</p>
<ul>
<li>Default lifecycle: <code>Development</code> -> <code>Test</code> -> <code>Staging</code> -> <code>Production</code></li>
<li>Hotfix lifecycle: <code>Staging</code> -> <code>Production</code></li>
</ul>
<p>The common reason behind that configuration is “we need a clear path to production in the event of an emergency.” While a valid point, the primary problem with that configuration is that <code>Development</code> is included in a path to production. <code>Development</code> should be used for fast feedback on work in progress. At no point should code deployed to <code>Development</code> ever be promoted to <code>Production</code>; it must first go through a pull request/approval workflow, which is at the core of supply chain security.</p>
<p>The recommended configuration is:</p>
<ul>
<li>Default Lifecycle: <code>Development</code></li>
<li>Release Lifecycle: <code>Test</code> -> <code>Staging</code> -> <code>Production</code></li>
</ul>
<p><img src="/blog/img/supply-chain-security-with-github-and-octopus-deploy/lifecycles-in-octopus.png" alt="lifecycles in Octopus Deploy"></p>
<p>The Default lifecycle deploys every branch except <code>main</code> (or your primary branch) to <code>Development</code> only. The Release lifecycle is reserved for <code>main</code> or the primary branch. With this configuration, only approved code (remember, we configured the branch ruleset in GitHub) is deployed to <code>Production</code>.</p>
<h3 id="itsm-integration">ITSM Integration</h3>
<p>Octopus Deploy supports ServiceNow and Jira Service Management for <a href="https://octopus.com/docs/approvals">ITSM approvals</a>. If you are using those services and have an Enterprise tier license, you should configure that integration as soon as possible.</p>
<p>Our ITSM integration will create a change request and wait until it reaches the appropriate state. Octopus will not even start the deployment until that state is reached.</p>
<p><img src="/blog/img/supply-chain-security-with-github-and-octopus-deploy/service-now-enabled.png" alt="ITSM enabled in Octopus"></p>
<p>Not all environments require ITSM approval, so we require you to enable ITSM per environment. Because you can turn off/on ITSM integration, I recommend restricting <code>EnvironmentEdit</code> permissions to Platform Engineers.</p>
<p><img src="/blog/img/supply-chain-security-with-github-and-octopus-deploy/environment-with-itsm-enabled.png" alt="Production environment with ITSM enabled"></p>
<h3 id="github-octopus-deploy-integration">GitHub Octopus Deploy Integration</h3>
<p>If you are using Octopus Deploy cloud, configure the <a href="https://octopus.com/docs/projects/version-control/github">Octopus Deploy GitHub Application</a>. This allows you to connect to GitHub repositories without providing a PAT. You can still control the connection to specific repositories and organizations.</p>
<p><img src="/blog/img/supply-chain-security-with-github-and-octopus-deploy/github-octopus-deploy-app-integration.png" alt="Octopus Deploy GitHub app integration"></p>
<h3 id="build-information-and-issue-tracking-integration">Build Information and Issue Tracking Integration</h3>
<p>Octopus Deploy <a href="https://octopus.com/docs/releases/issue-tracking">natively integrates</a> with JIRA, GitHub Issue Tracker, and Azure DevOps Issue Tracker. If you are using JIRA Cloud, we also provide the capability to update the deployment status of JIRA Tickets.</p>
<p><img src="/blog/img/supply-chain-security-with-github-and-octopus-deploy/JIRA-cloud-update.png" alt="JIRA Cloud deployment integration"></p>
<p>That requires the configuration of JIRA Integration with Octopus. Our documentation provides a <a href="https://octopus.com/docs/releases/issue-tracking/jira">step-by-step guide</a> for configuration. The result looks like this:</p>
<p><img src="/blog/img/supply-chain-security-with-github-and-octopus-deploy/jira-integration-config.png" alt="JIRA Integration configuration"></p>
<p>If you recall, earlier the GitHub Action published build information for the SBOM. Octopus parses the commit messages in that build information to look for issue references. My commit messages were prefixed with <code>TRID-43</code>, which Octopus then linked to that ticket in JIRA.</p>
<p><img src="/blog/img/supply-chain-security-with-github-and-octopus-deploy/parsing-the-commit-message.png" alt="parsing the commit message from build information"></p>
<h3 id="project-settings">Project Settings</h3>
<p>We want to configure the Octopus Deploy project to follow these rules:</p>
<ol>
<li>Any changes to the deployment process, variables, and runbooks require approval.</li>
<li>Only the <code>main</code> or primary branch can be deployed to <code>Production</code>.</li>
<li>Only <code>pre-release</code> versions can be deployed to <code>Development</code>; <code>Production</code> only accepts packages without <code>pre-release</code> versions.</li>
<li>If applicable, ITSM integration is enabled.</li>
</ol>
<p><strong>Important:</strong> The <code>ProjectEdit</code> permission was restricted to Platform Engineers because it can modify a project’s Project Version Control and ITSM Provider configuration. Specifically, anyone with that permission can turn off branch protection policies and ITSM requirements. What used to be a permission for changing a project’s name and icon has grown into a powerful setting. Be mindful of who is granted access to it.</p>
<h4 id="configure-project-version-control">Configure Project Version Control</h4>
<p>We want to use the same pull-request approval process for the deployment process, variables, and runbook updates as we use for source code. To do that, configure project version control for your project, and while doing so, configure the branch protection policy for the <code>main</code> or primary branch. In the Octopus Deploy UI, you’ll then see notifications that the <code>main</code> or primary branch is protected and that changes must be made on another branch and merged in.</p>
<p><img src="/blog/img/supply-chain-security-with-github-and-octopus-deploy/project-version-control-settings.png" alt="project version control settings"></p>
<p>I also recommend using the same git repository as your source code for the following reasons:</p>
<ul>
<li>The code repository should store the source code, how it is built, its dependencies, how it is hosted, and how it is deployed. That allows you to reproduce production in local and testing environments quickly.</li>
<li>Often, underlying code changes, such as migrating to Kubernetes, require an update to the build and deployment process. All the necessary changes to migrate to Kubernetes can be made in the same branch and merged using the same pull request process.</li>
</ul>
<p><strong>Important:</strong> Anyone with the <code>ProjectEdit</code> permission can change the branch protection policy setting. Using the recommended roles from earlier will limit that permission to Platform Engineers.</p>
<h4 id="configure-channels">Configure Channels</h4>
<p>Now that version control is enabled, we can configure two channels for our two lifecycles. We will include version and branch rules in those channels.</p>
<ul>
<li>Default Channel
<ul>
<li>Lifecycle: Default Lifecycle</li>
<li>Package Version Rules: select the steps that deploy packages and enter <code>^[^\+].*</code> in the pre-release tag field. That ensures only packages with a pre-release tag can be deployed to <code>Development</code>.</li>
<li>Branch Protection Rules: enter a pattern that allows any branch except the <code>main</code> or primary branch. For my instance, I use <code>[feature|hotfix]*/*</code>.</li>
</ul>
</li>
<li>Release Channel
<ul>
<li>Lifecycle: Release Lifecycle</li>
<li>Package Version Rules: select the steps that deploy packages and enter <code>^(|\+.*)$</code> in the pre-release tag field. That blocks any package with a pre-release tag from being selected.</li>
<li>Branch Protection Rules: enter <code>main</code> into the branches field. That ensures only the <code>main</code> branch can be used.</li>
</ul>
</li>
</ul>
<p>With these rules in place, only artifacts and processes from the <code>main</code> branch will be used for <code>Production</code> deployments.</p>
<p><img src="/blog/img/supply-chain-security-with-github-and-octopus-deploy/channels-configuration.png" alt="channel configuration with version and branch rules"></p>
<h4 id="itsm">ITSM</h4>
<p>On the ITSM Providers screen, check the <code>Change Controlled</code> checkbox and select the appropriate ITSM provider connection. You can also select specific runbooks to be change controlled on this screen.</p>
<p><img src="/blog/img/supply-chain-security-with-github-and-octopus-deploy/itsm-provider.png" alt="project itsm provider screen"></p>
<p><strong>Important:</strong> Anyone with the <code>ProjectEdit</code> permission can change the ITSM setting. Using the recommended roles from earlier will limit that permission to Platform Engineers.</p>
<h3 id="deployment-process">Deployment Process</h3>
<p>The final piece of our supply chain security workflow is to add the appropriate steps to pull secrets from Azure Key Vault, publish the SBOM as a deployment artifact, and verify the attestations from GitHub.</p>
<h4 id="pulling-secrets-from-azure-key-vault">Pulling secrets from Azure Key Vault</h4>
<p>The deployment process uses a <a href="https://library.octopus.com/step-templates/6f59f8aa-b2db-4f7a-b02d-a72c13d386f0/actiontemplate-azure-key-vault-retrieve-secrets">community library step template</a> to interact with Azure Key Vault.</p>
<p>I recommend using the execution container <a href="https://hub.docker.com/r/octopuslabs/azure-workertools">octopuslabs/azure-workertools</a>, which has all the required CLIs to pull secrets from the key vault.</p>
<p>The step pulls one or more secrets. It uses the format <code>[Secret Name] | [Output Variable Name]</code> to determine which secrets to pull. For example:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="bash"><code><span class="line"><span style="color:#795E26">azure-sql-server</span><span style="color:#000000"> | </span><span style="color:#795E26">SQLServerName</span></span>
<span class="line"><span style="color:#795E26">azure-sql-password</span><span style="color:#000000"> | </span><span style="color:#795E26">SQLUserPassword</span></span>
<span class="line"><span style="color:#795E26">azure-sql-username</span><span style="color:#000000"> | </span><span style="color:#795E26">SQLUserName</span></span>
<span class="line"><span style="color:#795E26">octopus-api-key</span><span style="color:#000000"> | </span><span style="color:#795E26">OctopusApiKey</span></span>
<span class="line"><span style="color:#795E26">github-token</span><span style="color:#000000"> | </span><span style="color:#795E26">GitHubToken</span></span>
<span class="line"><span style="color:#795E26">sumo-logic-url</span><span style="color:#000000"> | </span><span style="color:#795E26">SumoLogicUrl</span></span></code></pre>
<p>The output variable name is how you can reference the secrets in subsequent steps. For example, the SQL Server Name can be accessed in subsequent steps by looking for the <code>Octopus.Action[Azure Key Vault - Retrieve Secrets].Output.SQLServerName</code> variable.</p>
<p><img src="/blog/img/supply-chain-security-with-github-and-octopus-deploy/azure-key-vault-pulling-secrets.png" alt="Azure key vault step in a deployment process"></p>
<h4 id="publishing-the-sbom-as-a-deployment-artifact">Publishing the SBOM as a deployment artifact</h4>
<p>We can use pre-existing functionality to publish the SBOM as a deployment artifact. Add a script step to the process, then add a package reference. Ensure the package reference extracts the package during the deployment.</p>
<p><img src="/blog/img/supply-chain-security-with-github-and-octopus-deploy/sbom-package-reference.png" alt="package reference that extracts a package"></p>
<p>The script only needs to find the <code>.json</code> file and publish it as a deployment artifact. The following is the PowerShell script I used in the deployment pipeline.</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="powershell"><code><span class="line"><span style="color:#001080">$extractedPath</span><span style="color:#000000"> = </span><span style="color:#001080">$OctopusParameters</span><span style="color:#000000">[</span><span style="color:#A31515">"Octopus.Action.Package[Template.SBOM.Artifact].ExtractedPath"</span><span style="color:#000000">]</span></span>
<span class="line"><span style="color:#001080">$OctopusEnvironmentName</span><span style="color:#000000"> = </span><span style="color:#001080">$OctopusParameters</span><span style="color:#000000">[</span><span style="color:#A31515">"Octopus.Environment.Name"</span><span style="color:#000000">]</span></span>
<span class="line"></span>
<span class="line"><span style="color:#001080">$sbomFiles</span><span style="color:#000000"> = </span><span style="color:#795E26">Get-ChildItem</span><span style="color:#000000"> -Path </span><span style="color:#001080">$extractedPath</span><span style="color:#000000"> -Filter </span><span style="color:#A31515">"*.json"</span><span style="color:#000000"> -Recurse</span></span>
<span class="line"></span>
<span class="line"><span style="color:#AF00DB">foreach</span><span style="color:#000000"> (</span><span style="color:#001080">$sbom</span><span style="color:#AF00DB"> in</span><span style="color:#001080"> $sbomFiles</span><span style="color:#000000">)</span></span>
<span class="line"><span style="color:#000000">{</span></span>
<span class="line"><span style="color:#795E26"> Write-Host</span><span style="color:#A31515"> "Attaching </span><span style="color:#0000FF">$(</span><span style="color:#001080">$sbom</span><span style="color:#795E26">.FullName</span><span style="color:#0000FF">)</span><span style="color:#A31515"> as an artifact"</span></span>
<span class="line"><span style="color:#795E26"> New-OctopusArtifact</span><span style="color:#000000"> -Path </span><span style="color:#001080">$sbom</span><span style="color:#795E26">.FullName</span><span style="color:#000000"> -Name </span><span style="color:#A31515">"</span><span style="color:#001080">$OctopusEnvironmentName</span><span style="color:#A31515">.SBOM.JSON"</span></span>
<span class="line"></span>
<span class="line"><span style="color:#AF00DB"> break</span></span>
<span class="line"><span style="color:#000000">} </span></span></code></pre>
<h4 id="verifying-attestations">Verifying attestations</h4>
<p>We will use the <code>gh attestation verify</code> CLI command as the <a href="https://cli.github.com/manual/gh_attestation_verify">means to verify the attestations</a>. We’ve provided <code>octopuslabs/github-workertools</code> as an <a href="https://hub.docker.com/r/octopuslabs/github-workertools">execution container</a> so you don’t have to worry about downloading and installing the CLI.</p>
<p>That command computes a SHA256 hash of the deployment artifact and then looks for attestations matching that hash in your repository or organization. If the artifact has been tampered with, the attestation verification will fail because no matching hash will be found. For example, here is the API endpoint my process attempted to hit on a failed attestation verification.</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="text"><code><span class="line"><span>https://api.github.com/repos/BobJWalker/Trident/attestations/sha256:d96a5d80bc2d7ce427dc1f543d7caedf4ea014a397cae0140909dfa306a48b1b?per_page=30&predicate_type=https://slsa.dev/provenance/v1</span></span></code></pre>
<p>After the CLI finds the attestation for the artifact, it performs an additional series of verifications. You can find more information about how it verifies the attestation in <a href="https://docs.github.com/en/actions/how-tos/secure-your-work/use-artifact-attestations/use-artifact-attestations">GitHub’s docs</a>.</p>
<p>For package attestation verification, you can use our <a href="https://library.octopus.com/step-templates/3c76dffc-b524-438f-b04d-f1a103bdbfc7/actiontemplate-verify-github-attestation">community step template</a>.</p>
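<p>Under the hood, the verification for a file-based package is roughly the following (the artifact path and repository are placeholders; the step template wraps the same CLI call):</p>
<pre data-language="powershell"><code># Verify a downloaded package artifact against the attestations published for the repository
$artifactPath = ".\Trident.Database.2025.1.0.zip"
gh attestation verify "$artifactPath" --repo "bobjwalker/trident"
if ($LASTEXITCODE -ne 0)
{
    Write-Error "The attestation for $artifactPath could not be verified"
}</code></pre>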
<p>For Docker containers, you’ll need to authenticate to the container registry. Once you do that, you can run a script similar to this:</p>
<pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="powershell"><code><span class="line"><span style="color:#001080">$packageVersion</span><span style="color:#000000"> = </span><span style="color:#001080">$OctopusParameters</span><span style="color:#000000">[</span><span style="color:#A31515">"Octopus.Action.Package[YOUR CONTAINER].PackageVersion"</span><span style="color:#000000">]</span></span>
<span class="line"><span style="color:#001080">$packageName</span><span style="color:#000000"> = </span><span style="color:#001080">$OctopusParameters</span><span style="color:#000000">[</span><span style="color:#A31515">"Octopus.Action.Package[YOUR CONTAINER].PackageId"</span><span style="color:#000000">]</span></span>
<span class="line"></span>
<span class="line"><span style="color:#001080">$GHToken</span><span style="color:#000000"> = </span><span style="color:#A31515">"YOUR GITHUB TOKEN"</span></span>
<span class="line"><span style="color:#001080">$image</span><span style="color:#000000"> = </span><span style="color:#A31515">"oci://</span><span style="color:#0000FF">$(</span><span style="color:#001080">$packageName</span><span style="color:#0000FF">)</span><span style="color:#A31515">$:</span><span style="color:#0000FF">$(</span><span style="color:#001080">$packageVersion</span><span style="color:#0000FF">)</span><span style="color:#A31515">$"</span></span>
<span class="line"><span style="color:#001080">$repo</span><span style="color:#000000"> = </span><span style="color:#A31515">"bobjwalker/trident"</span></span>
<span class="line"></span>
<span class="line"><span style="color:#001080">$env:GITHUB_TOKEN</span><span style="color:#000000">=</span><span style="color:#001080">$GHToken</span></span>
<span class="line"><span style="color:#001080">$attestation</span><span style="color:#000000">=gh attestation verify </span><span style="color:#A31515">"</span><span style="color:#001080">$image</span><span style="color:#A31515">"</span><span style="color:#000000"> --repo </span><span style="color:#001080">$repo</span><span style="color:#000000"> --format json</span></span>
<span class="line"><span style="color:#AF00DB">if</span><span style="color:#000000"> (</span><span style="color:#001080">$LASTEXITCODE</span><span style="color:#000000"> -ne </span><span style="color:#098658">0</span><span style="color:#000000">)</span></span>
<span class="line"><span style="color:#000000">{</span></span>
<span class="line"><span style="color:#795E26"> Write-Error</span><span style="color:#A31515"> "The attestation for </span><span style="color:#001080">$image</span><span style="color:#A31515"> could not be verified"</span></span>
<span class="line"><span style="color:#000000">}</span></span>
<span class="line"></span>
<span class="line"><span style="color:#795E26">Write-Highlight</span><span style="color:#A31515"> "</span><span style="color:#001080">$image</span><span style="color:#A31515"> successfully passed attestation verification"</span></span>
<span class="line"><span style="color:#795E26">Write-Verbose</span><span style="color:#001080"> $attestation</span></span>
<span class="line"><span style="color:#001080">$artifactVerified</span><span style="color:#000000"> = </span><span style="color:#0000FF">$true</span></span>
<span class="line"></span>
<span class="line"><span style="color:#795E26">Write-Host</span><span style="color:#A31515"> "Writing the attest output to </span><span style="color:#001080">$packageName</span><span style="color:#A31515">.</span><span style="color:#001080">$OctopusEnvironmentName</span><span style="color:#A31515">.attestation.json"</span></span>
<span class="line"><span style="color:#795E26">New-Item</span><span style="color:#000000"> -Name </span><span style="color:#A31515">"</span><span style="color:#001080">$packageName</span><span style="color:#A31515">.</span><span style="color:#001080">$OctopusEnvironmentName</span><span style="color:#A31515">.attestation.json"</span><span style="color:#000000"> -ItemType </span><span style="color:#A31515">"File"</span><span style="color:#000000"> -Value </span><span style="color:#001080">$attestation</span></span>
<span class="line"><span style="color:#795E26">New-OctopusArtifact</span><span style="color:#000000"> -Path </span><span style="color:#A31515">"</span><span style="color:#001080">$packageName</span><span style="color:#A31515">.</span><span style="color:#001080">$OctopusEnvironmentName</span><span style="color:#A31515">.attestation.json"</span><span style="color:#000000"> -Name </span><span style="color:#A31515">"</span><span style="color:#001080">$packageName</span><span style="color:#A31515">.</span><span style="color:#001080">$OctopusEnvironmentName</span><span style="color:#A31515">.attestation.json"</span></span></code></pre>
<h2 id="conclusion">Conclusion</h2>
<p>There will always be a healthy tension between security and usability. You can enact security policies that replicate the computer vault from the first Mission: Impossible film, but they aren’t very usable at scale. At the other end of the spectrum, you can make everyone an admin with no red tape, but that isn’t very secure. Finding the “right” amount of security is always a challenge. My goal with this article was to provide low-effort recommendations that are as non-intrusive as possible but substantially increase supply chain security. My guiding “north star” for this pipeline was <a href="https://slsa.dev/spec/v1.2-rc1/build-track-basics#build-l3">SLSA level 3</a>.</p>
<p>Despite the changes, when the rules are followed (pull request required, build on every check-in, always verify attestations, etc.), there should be no noticeable impact on the developer experience. It’s only when someone attempts to bypass the rules that they encounter friction.</p>]]></content>
</entry>
<entry>
<title>Changes to the Octopus C# client library open source repository</title>
<link href="https://octopus.com/blog/changes-to-octopus-csharp-client-repository" />
<id>https://octopus.com/blog/changes-to-octopus-csharp-client-repository</id>
<published>2025-08-05T00:00:00.000Z</published>
<updated>2025-08-05T00:00:00.000Z</updated>
<summary>The Octopus C# client library is moving into our monorepo</summary>
<author>
<name>Orion Edwards, Octopus Deploy</name>
</author>
<content type="html"><![CDATA[<p>We are changing how we develop the Octopus C# Client Library by bringing it into our private monorepo alongside Octopus Server. Key changes include:</p>
<ul>
<li><strong>Streamlined development</strong>: Client library code will be managed alongside Octopus Server, rather than separately.</li>
<li><strong>Read-Only Repository</strong>: The public GitHub repository will become a read-only mirror, with no direct community contributions or pull requests accepted.</li>
<li><strong>Automated Synchronization</strong>: Client changes by Octopus developers will automatically sync to the public repository.</li>
<li><strong>Resumption of GitHub Releases</strong>: We will resume publishing GitHub releases with release notes to keep users informed.</li>
</ul>
<p>We designed these changes to reduce the friction in the development process, ensure the library stays up-to-date, and enhance the overall quality for our customers.</p>
<h2 id="what-is-the-octopus-c-client-library">What is the Octopus C# Client Library?</h2>
<p>Octopus Server is <a href="https://octopus.com/docs/octopus-rest-api">built API-first</a>, enabling customers to perform advanced operations tailored to their specific environment and needs.</p>
<p>You can read the <a href="https://octopus.com/docs/octopus-rest-api">API documentation</a>, and we maintain OpenAPI definitions that describe its structure. You can use it in any programming language or system that sends JSON over HTTP. We provide open-source client libraries for C#, TypeScript, and Go, which take care of the HTTP and JSON for you, providing a better developer experience for integrators.</p>
<h2 id="the-current-state-of-the-library">The current state of the library</h2>
<p>The best way to use the C# client library is to import the <a href="https://www.nuget.org/packages/Octopus.Server.Client">Octopus.Server.Client</a> package from NuGet. We keep it up to date, including adding access to new features we add to Octopus Server.</p>
<p>The library’s home is the <a href="https://github.com/OctopusDeploy/OctopusClients">public GitHub repository</a>. You can find the source code and information about compiling it there if you want to work with it.</p>
<p>The C# client library is Open Source, under the <a href="https://github.com/OctopusDeploy/OctopusClients/blob/master/LICENSE.txt">Apache 2.0 license</a>. You can currently fork it, create pull requests (and view in-progress pull requests raised by software developers at Octopus), and raise issues.</p>
<h2 id="changes-to-the-way-we-operate">Changes to the way we operate</h2>
<p>Internally, we have a private <a href="https://en.wikipedia.org/wiki/Monorepo">monorepo</a> which includes the Octopus Server codebase, and various other supporting tools.</p>
<p>We will bring the C# client library code into the monorepo, so that Octopus software developers can make changes to Server and Client together in lockstep.</p>
<p>The <a href="https://github.com/OctopusDeploy/OctopusClients">OctopusClients</a> GitHub repository will become a read-only mirror of the internal monorepo code. When Octopus developers make changes to the client library code in the monorepo, an automated process (using Google’s Open Source <a href="https://github.com/google/copybara">Copybara</a> tool) will synchronize them to the public repository.</p>
<h2 id="why-are-we-making-these-changes">Why are we making these changes?</h2>
<p>We use the Octopus C# client library as part of our internal tooling and test suite for Octopus Server. This helps us ensure it is up to date and functioning correctly.</p>
<p>When Octopus software developers change the Octopus Server API, they must make a corresponding change to the Client codebase. Under the current model with two repositories, this results in two disconnected pull requests, which often must be coordinated. This coordination becomes more difficult if multiple people or teams are making API changes simultaneously, and as the company has grown over the years, it has become an increasing problem.</p>
<p>The friction created by managing two pull requests and the separation of focus across the two repositories had some unintended consequences:</p>
<ul>
<li>Sometimes developers forgot to update the client codebase or deferred the work. This resulted in the client lagging behind the server, making it less useful.</li>
<li>While we release packages to NuGet every time there is a change <em>(there have been <a href="https://www.nuget.org/packages/Octopus.Server.Client">46 updates so far in 2025</a>)</em>, we haven’t created a release in GitHub, nor have we issued any release notes, since 2022.</li>
<li>The client has not enjoyed the same level of tooling support as the server does internally; for example it currently uses an old version of NUnit for testing, which is overdue for an update.</li>
</ul>
<p>By bringing the client codebase into the monorepo, we will reduce this friction and be able to provide a higher quality, more up-to-date library for our customers.</p>
<p>Making the public client repository a read-only mirror hinders community contributions. However, the client is stable, and the most recent community pull request was in 2022. Balanced against the benefits of bringing the code into the monorepo, these changes will result in a better overall outcome for the C# client library.</p>
<p>If you have forked the client and have some changes you’d like to see incorporated into the Octopus official client, you can share these with our <a href="https://octopus.com/support">support team</a>, or via our <a href="https://octopus.com/community">Community Slack</a>.</p>
<h3 id="whats-changing">What’s Changing</h3>
<ul>
<li>In-progress pull requests from Octopus developers will no longer be visible in the public repo.</li>
<li>Developers at Octopus cannot make direct changes to the public repo. It will be a strict mirror.</li>
<li>We will no longer accept pull requests in the OctopusClients repository.</li>
<li>We will no longer accept new Issues in the OctopusClients repository. Please send bug reports or enhancement requests to our <a href="https://octopus.com/support">support team</a>.</li>
<li>We will resume creating GitHub releases with release notes for the client.</li>
</ul>
<h3 id="what-isnt-changing">What isn’t Changing</h3>
<ul>
<li>The OctopusClients repository will remain public</li>
<li>The code will remain open under the Apache 2.0 license</li>
<li>You will still be able to fork the repository and make changes</li>
<li>You can still compile the code directly from the public repository; we have set up internal safety nets to prevent dependencies on non-public tooling that could break this.</li>
<li>The code itself will remain unchanged throughout this transition</li>
</ul>
<h3 id="when-is-this-happening">When is this happening?</h3>
<p>We have already made the technical changes to move the code and synchronize changes from the monorepo. The policy change around pull request acceptance will take effect from Monday, August 18, 2025. After this date, we will close any existing pull requests and incomplete branches on the public git repository.</p>
<h2 id="changes-to-other-client-libraries">Changes to other client libraries</h2>
<p>The Go and TypeScript client libraries are unaffected. We have plans to bring them into the monorepo in the future, but we have no specific timelines to share.</p>
<p>If you have any concerns or feedback, please get in touch with our <a href="https://octopus.com/support">helpful support team</a>.</p>
<p>Happy deployments!</p>]]></content>
</entry>
<entry>
<title>Modernizing the Process Editor for greater control over complex processes</title>
<link href="https://octopus.com/blog/process-editor-release" />
<id>https://octopus.com/blog/process-editor-release</id>
<published>2025-07-31T00:00:00.000Z</published>
<updated>2025-07-31T00:00:00.000Z</updated>
<summary>Design updates to the Process Editor UI</summary>
<author>
<name>Jasmin Wong, Octopus Deploy</name>
</author>
<content type="html"><![CDATA[<p>When you think of deployments in Octopus, you probably think of steps: the sequence of tasks that orchestrate your releases.</p>
<p>Whether you’re configuring a health check or defining complex rolling deployments, the process editor is at the center of it all.</p>
<p>At Octopus, the design team has been working to modernize and improve the process editor experience.</p>
<p><video src="/blog/video/process-editor/before-after.mp4" width="750" height="400" controls autoplay loop muted></video></p>
<h2 id="why-are-we-doing-this">Why are we doing this?</h2>
<p>Deployment processes are more than lists of actions. They’re structured workflows that define how applications move through environments.</p>
<p>As Octopus has grown, so have our customers’ deployment needs. There are rolling steps, nested groups, parallel paths, and more to consider. Over time, the editor’s interface hasn’t kept pace with these developments.</p>
<p>Customers have told us:</p>
<ul>
<li>It’s hard to scan steps and find what they’re looking for</li>
<li>They need to distinguish between parent and child steps, and what’s in a rolling step</li>
<li>They want to understand their deployment process at a glance</li>
</ul>
<p>The design team at Octopus got together to resolve these problems. Over a few sessions, we sketched out ideas and set out to improve a few key problematic areas.</p>
<p>We identified the process editor as the first and most obvious area for improvement, and broke this down further into two key goals: reviewing a process and editing a process.</p>
<p><video src="/blog/video/process-editor/process-ui-highlights.mp4" width="750" height="400" controls autoplay loop muted></video></p>
<h2 id="modernizing-the-ui">Modernizing the UI</h2>
<p>We’ve redesigned the process overview and sidebar UI in the process editor to improve readability, structure, and control. Here’s what’s new:</p>
<ul>
<li><strong>Less noise, more focus</strong>: We’ve reduced visual clutter and introduced a clearer hierarchy. Icons, grouping, and more immediate interactions make it easier to quickly scan and edit your process sequence.</li>
<li><strong>Grouped views for parent and rolling steps</strong>: Parent-child and rolling steps are now grouped. This makes complex workflows easier to follow and edit without losing your place. These groups can also be collapsed, helping you control the amount of information you see.</li>
<li><strong>A consistent, modern interface</strong>: We’ve updated the styling to align with the broader Octopus design system. Expect better contrast, spacing, and alignment for a more streamlined editing experience.</li>
</ul>
<h2 id="why-is-this-important">Why is this important?</h2>
<p>The process editor is a critical part of how teams build and maintain their deployments. This update is designed to reduce friction, help you move faster, and give you greater control over complex processes.</p>
<p>Whether you’re onboarding a new team member or managing dozens of steps, the editor now scales more gracefully with your needs.</p>
<p>This update is the first step toward modernizing the process editor experience. Next up, we’re looking at ways to further customize how you view your deployment processes.</p>
<p>As always, we’re open to your feedback.</p>
<h2 id="what-to-expect">What to expect</h2>
<p>Cloud customers can expect the new design to land on 1 August 2025.</p>
<p>For self-hosted customers, this change will be reflected in Octopus Server Release 2025.3, when you upgrade to that version.</p>]]></content>
</entry>
<entry>
<title>AI deployments best practices</title>
<link href="https://octopus.com/blog/ai-deployments" />
<id>https://octopus.com/blog/ai-deployments</id>
<published>2025-07-30T00:00:00.000Z</published>
<updated>2025-07-30T00:00:00.000Z</updated>
<summary>AI deployments present some unique challenges for the DevOps team. And yet, existing DevOps best practices still apply.</summary>
<author>
<name>Matthew Casperson, Octopus Deploy</name>
</author>
<content type="html"><![CDATA[<p>AI has generated a lot of excitement and criticism ever since ChatGPT highlighted the potential and limitations of generative AI. DevOps teams have a unique challenge developing and deploying AI platforms because they’re tasked with realizing the value of AI and delivering it to customers safely, reliably, and predictably.</p>
<p>Fortunately, most of the best practices we’ve adopted for deploying and maintaining software still apply to those responsible for AI platforms. Much like database deployments, which have always been tightly coupled to application deployments, deploying AI models and other AI artifacts benefits from tight integration with the DevOps lifecycle.</p>
<p>The atomic elements of AI deployments are just files to be moved around, API calls to hosting platforms, IaC to manage infrastructure, and human-based workflows to coordinate, test, and authorize changes, all of which are solved problems. While AI presents novel social and ethical challenges, the largest technical challenges when deploying AI platforms are applying existing processes, like transferring packages, to AI-specific services, like <a href="https://huggingface.co/">HuggingFace</a>, or replicating security practices, such as CVE scanning, for AI models.</p>
<p>AI teams may also have the luxury of working on greenfield projects, which allows them to design their DevOps processes from the ground up, taking advantage of trends like Platform Engineering and focusing on DevEx.</p>
<p>In this post, we explore the practices, ideals, and compromises that DevOps teams spinning up new AI projects must consider as they lay the foundation of their DevOps lifecycle.</p>
<h2 id="it-starts-and-ends-with-devex">It starts and ends with DevEx</h2>
<p>DevOps started with an insight as obvious as it was brilliant: what if instead of separate development and operations teams with distinct management hierarchies, measured on unique outcomes, and communicating through tickets, we had one team reporting to a shared management hierarchy measured on shared outcomes focused on delivering value to customers?</p>
<p>Over the years, though, DevOps has come to encompass so much that it has almost no meaning. To say you have a DevOps job has about as much meaning as saying you have a sales, finance, or management job. DevOps’s nature is to consume as much responsibility as it can (demonstrated by all the Dev-Insert-Something-Here-Ops paradigms that have evolved), often without any consideration for who has to deliver this growing sphere of responsibility. The frustration of DevOps team members with the inability to answer the question “What am I not responsible for?” has led to the rise of DevEx.</p>
<p>As noted in the paper <a href="https://queue.acm.org/detail.cfm?id=3595878">DevEx: What Actually Drives Productivity</a>, DevEx has 3 dimensions:</p>
<ul>
<li>Flow state</li>
<li>Cognitive load</li>
<li>Feedback loops</li>
</ul>
<p><img src="/blog/img/ai-deployments/devex.png" alt=""></p>
<p>DevOps teams can support all 3 dimensions by ensuring every team member can confidently answer the question, “What am I not responsible for?” Keeping this question in mind as you develop your DevOps processes is a powerful strategy for maximizing the best that DevOps has to offer without inadvertently building a system that only functions because everyone is responsible for everything all the time.</p>
<h2 id="devex-as-a-service">DevEx as a Service</h2>
<p>Platform Engineering is one of the most effective ways to answer the question, “What am I not responsible for?” At its core, Platform Engineering, and specifically, the Internal Developer Platform (IDP) that is the interface between the platform and DevOps teams, must satisfy 3 requirements:</p>
<ol>
<li>Provide a repository of architectural decisions.</li>
<li>Enable architectural decisions to be implemented at scale.</li>
<li>Define feedback processes that ensure architectural decisions are updated over time.</li>
</ol>
<p><img src="/blog/img/ai-deployments/idp.png" alt=""></p>
<p>In this context, we refer to the book <a href="https://www.amazon.com/Objects-Components-Frameworks-UML-Catalysis/dp/0201310120">Objects, Components, and Frameworks With UML: The Catalysis Approach</a> by Desmond D’Souza and Alan Wills for this definition of “architecture”:</p>
<blockquote>
<p>The set of design decisions about any system (or smaller component) that keeps its implementors and maintainers from exercising needless creativity.</p>
</blockquote>
<p>In other words, architectural decisions answer the question, “What am I not responsible for?” by providing DevOps teams with golden pipelines, common tools, processes, and best practices. These serve as the foundation for building valuable solutions to meaningful problems.</p>
<p>When the goal of Platform Engineering is to deliver improved DevEx, the end result is <a href="https://octopus.com/publications/devex-as-a-service">DevEx as a Service (DEaaS)</a>.</p>
<p>The architectural decisions maintained by your DEaaS implementation will adopt one of 3 responsibility models.</p>
<p><strong>Customer responsibility</strong> (or eventual inconsistency) occurs when a customer takes a copy of an artifact capturing architectural decisions and then owns it.</p>
<p><img src="/blog/img/ai-deployments/customer-responsibility-model.png" alt=""></p>
<p><strong>Shared responsibility</strong> (or eventual consistency), where the DEaaS team and customers collaborate on shared artifacts.</p>
<p><img src="/blog/img/ai-deployments/shared-responsibility-model.png" alt=""></p>
<p><strong>Central responsibility</strong> (or enforced consistency), where the DEaaS team owns the artifacts and exposes controlled interfaces to customers.</p>
<p><img src="/blog/img/ai-deployments/central-responsibility-model.png" alt=""></p>
<p>Bringing this back to the question, “What am I not responsible for?”:</p>
<ul>
<li>The customer responsibility model makes the DEaaS team responsible for creating artifacts, while customers are responsible for editing and maintaining artifacts after they take ownership.</li>
<li>The shared responsibility model makes the DEaaS team responsible for creating artifacts and providing a process for customers to contribute updates, while customers have the responsibility (and sometimes the obligation) to improve the artifacts.</li>
<li>The central responsibility model makes the DEaaS team responsible for creating and maintaining artifacts. Customers can use the artifacts but are not responsible for maintaining them.</li>
</ul>
<p>The responsibility models are each subject to constraints around who can edit artifacts, how artifacts are maintained over time, and how many artifacts can be managed by the DEaaS team. These constraints are captured by the responsibility triad, where artifacts maintained by the DEaaS team can optimize for any 2 of the 3 concerns:</p>
<p><img src="/blog/img/ai-deployments/optimize-for-any-two.png" alt=""></p>
<p>When implemented correctly, every DevOps team member can clearly identify what they are and are not responsible for when consuming artifacts provided by the DEaaS team. This lets them focus on building valuable solutions to meaningful problems.</p>
<p>This is reinforced by Meryem Arik, co-founder of TitanML, who noted in her talk <a href="https://www.infoq.com/presentations/llm-deployment/">Navigating LLM Deployment: Tips, Tricks, and Techniques</a> that:</p>
<blockquote>
<p>Deployment is really hard, so it’s better if you deploy once, you have one team managing deployment, and then you maintain that, rather than having teams individually doing that deployment, because then each team individually has to discover that this is a good tradeoff to make. What this allows is it allows the rest of the org to focus on that application development while the infrastructure is taken care of.</p>
</blockquote>
<h2 id="the-10-pillars-of-pragmatic-deployments">The 10 pillars of pragmatic deployments</h2>
<p>There are several common non-functional requirements associated with the DevOps lifecycle that DEaaS teams must consider as they decide which architectural decisions to share with the DevOps teams. These have been grouped into the 10 pillars of pragmatic deployments.</p>
<p>AI developers are held to a high standard, with AWS noting that governance, defined as “Incorporating best practices into the AI supply chain, including providers and deployers,” is a <a href="https://aws.amazon.com/ai/responsible-ai/">core dimension of responsible AI</a>.</p>
<p><strong>Repeatable deployments</strong> ensure that DevOps teams can deploy new features and fixes in an automated and consistent manner. While it may be necessary to involve some human decision-making before software gets deployed to production, you must automate the low-level work involved in deployments. <a href="https://cloud.google.com/architecture/framework/perspectives/ai-ml/operational-excellence">Google’s AI and ML perspective: Operational excellence</a> documentation notes that “Automation enables seamless, repeatable, and error-free model development and deployment.”</p>
<p><strong>Verifiable deployments</strong> bake testing into the deployment process. Testing has added significance when deploying AI applications because GenAI models are non-deterministic by design, which means every deployment is effectively slightly broken all the time. Tests are crucial for ensuring that AI applications meet minimum requirements while catching the subtle and hard to diagnose bugs that can arise from seemingly simple changes to underlying models. Google calls this out with “Test, Test, Test” as part of their <a href="https://ai.google/responsibility/responsible-ai-practices/">responsible AI practices</a>, saying developers should “Conduct integration tests to understand how individual ML components interact with other parts of the overall system.”</p>
<p><strong>Seamless deployments</strong> use strategies like blue/green and canary deployments to enable progressive delivery and facilitate quick recoveries. Like verifiable deployments, seamless deployments are important for AI applications because it can often be difficult to fully understand the outcome of every model change. Rolling changes out slowly or having the ability to quickly revert a deployment can reduce the risk of unintended changes negatively affecting your customers. <a href="https://cloud.google.com/architecture/framework/perspectives/ai-ml/operational-excellence">Google’s AI and ML perspective: Operational excellence</a> documentation advises teams to “Implement phased release approaches such as canary deployments or A/B testing, for safe and controlled model releases.”</p>
<p><strong>Recoverable deployments</strong> are related to seamless deployments and focus on how quickly a team can roll forward or backward to restore a production service. Fortunately, AI platforms don’t tend to rely heavily on persistent state, meaning deployments can be easily rolled back. The post <a href="https://aws.amazon.com/blogs/machine-learning/achieve-operational-excellence-with-well-architected-generative-ai-solutions-using-amazon-bedrock/">Achieve operational excellence with well-architected generative AI solutions using Amazon Bedrock</a> notes that “Automated deployment techniques together with smaller, incremental changes reduces the blast radius and allows for faster reversal when failures occur.”</p>
<p><strong>Visible deployments</strong> let DevOps teams answer questions like “What is the state of production?” and “What has changed since the last deployment?” <a href="https://www.microsoft.com/en-us/haxtoolkit/workbook/">Microsoft’s Human AI Experience (HAX) workbook</a> lists “Notify users about major changes” as one of the guidelines, which requires a good understanding of how the production environment has changed between deployments.</p>
<p><strong>Measurable deployments</strong> are essential to measuring the performance of a DevOps team against common metrics like those <a href="https://dora.dev/guides/dora-metrics-four-keys/">popularized by DORA</a>. Capturing these metrics in the deployment process ensures they are consistent and reliable. AI deployments, in particular, must be able to quickly adapt to new models, as noted by Meryem Arik in her talk <a href="https://www.infoq.com/presentations/llm-deployment/">Navigating LLM Deployment: Tips, Tricks, and Techniques</a> saying “Build as if you’re going to replace the models within 12 months, because you will,” and “In fact, if you check out the Hugging Face, which is where all of these models are stored, if you check out their leaderboard of open-source models, the top model changes almost every week.”</p>
<p><strong>Auditable deployments</strong> let DevOps teams track what changes were made, by whom, and when the change was made. CSIRO calls out “supply chain accountability” as a core part of <a href="https://www.csiro.au/en/research/technology-space/ai/responsible-ai">their responsible AI research</a>.</p>
<p><strong>Standardized deployments</strong>, or golden pipelines, are a common architectural decision the DEaaS team shares. Golden pipelines define common deployment steps and encapsulate business requirements like security scanning, manual approvals, deployment windows, and notifications. Modeling deployments and resources using Infrastructure as Code (IaC) allows them to be recreated at scale. <a href="https://cloud.google.com/architecture/framework/perspectives/ai-ml/operational-excellence">Google’s AI and ML perspective: Operational excellence</a> documentation advises teams to “Manage your infrastructure as code (IaC). This approach enables efficient version control, quick rollbacks when necessary, and repeatable deployments.”</p>
<p><strong>Maintainable deployments</strong> automate day-2 maintenance and ad-hoc tasks like downloading log files, restarting services, performing backups, or applying updates. Teams responsible for managed AI services will also benefit from automating processes like adjusting token limits.</p>
<p><strong>Coordinated deployments</strong> let DevOps teams synchronize the deployment of 2 or more applications that are coupled to each other. They also expose business processes, such as approvals with ITSM platforms, to ensure deployments are performed in conjunction with other business units and all appropriate stakeholders have given their approval. <a href="https://www.microsoft.com/en-us/ai/principles-and-approach">Microsoft’s Responsible AI principles and approach</a> calls out accountability as a key requirement, asking, “How can we create oversight so that humans can be accountable and in control?”</p>
<p>The 10 pillars of pragmatic deployments represent architectural decisions to bake into the DevOps lifecycle. Understanding which are important to your team and how they support AI best practices removes a manual decision your DevOps team needs to make as they deliver new features to your customers.</p>
<h2 id="summary">Summary</h2>
<p>It’s not surprising that AI and DevOps best practices overlap. All the major cloud providers emphasize automation, testing, supply chain accountability, human oversight, and transparency, all supported by a robust DevOps lifecycle.</p>
<p>Consistently delivering a high-quality product means best practices cannot be opt-in. Define best practices as architectural decisions that are implemented at scale throughout your DevOps teams and refined over time. This allows DevOps teams to focus on creating valuable solutions to meaningful problems rather than being burdened with menial tasks.</p>
<p>DEaaS provides a framework for thinking about how these architectural decisions allow DevOps team members to answer the question “What am I not responsible for?”, which in turn supports good DevEx, with the 10 pillars of pragmatic deployments listing the non-functional requirements found in a robust DevOps lifecycle.</p>
<p>Happy deployments!</p>]]></content>
</entry>
<entry>
<title>Azure private networking for Octopus Cloud</title>
<link href="https://octopus.com/blog/azure-private-networking-octopus-deploy" />
<id>https://octopus.com/blog/azure-private-networking-octopus-deploy</id>
<published>2025-07-29T00:00:00.000Z</published>
<updated>2025-07-29T00:00:00.000Z</updated>
<summary>Enterprise security meets deployment convenience. Discover how Azure Private Endpoints for Octopus Deploy Cloud eliminate the trade-off between private networking and managed SaaS platforms.</summary>
<author>
<name>Matthew Allford, Octopus Deploy</name>
</author>
<content type="html"><![CDATA[<p>Your security team wants private connectivity between tools you use, but your organization loves the convenience of cloud-hosted applications that help the business run. Octopus Cloud with Azure Private Endpoints gives you both.</p>
<p>This functionality allows your teams and tools to access Octopus Deploy over Azure private network connections that bypass the public internet, reducing your attack surface and keeping your network traffic within your boundaries, without sacrificing any of the convenience and operational benefits our fully managed cloud offering provides.</p>
<h2 id="the-case-for-private-connectivity">The case for private connectivity</h2>
<p>Accessing cloud-hosted tools usually requires a trade-off: choose between the convenience of managed SaaS platforms and the security of private networking. Accessing SaaS platforms over the public internet is a valid approach for many customers. It meets their security and risk tolerance requirements and is a standard way of operating. Octopus Cloud is <a href="https://octopus.com/docs/octopus-cloud#secure-and-compliant-out-of-the-box">secure and compliant out of the box</a> with internationally recognized security standards.</p>
<p>However, public internet access can become particularly challenging for organizations in regulated industries like financial services, healthcare, or government, where strict requirements aim to prevent traffic from traversing public networks, especially for tools that touch production systems. That convenient SaaS deployment tool doesn’t look so convenient for these organizations anymore.</p>
<p>Some organizations prefer private connectivity by default as part of their broader security strategy. While zero-trust principles can certainly work with public internet connections through well-implemented authentication and encryption, many teams find that private network paths provide an additional layer of defense and better align with their risk management approach.</p>
<p>Companies with significant on-premises infrastructure encounter additional complexity. Secure connectivity between private networks and cloud services is needed, but traditional solutions often involve complex VPN setups that create operational overhead and potential failure points.</p>
<p>Beyond compliance, many teams value controlling their critical traffic routes. When deploying software that runs your business, you want private infrastructure, predictable network paths, and a reduced attack surface to provide that extra assurance.</p>
<p>For Octopus Deploy, these factors typically mean customers choose to deploy and manage Octopus Server on their own infrastructure, secured by their private network. While this is a valid approach, for customers who don’t want the overhead of managing and running Octopus Server, we’ve introduced functionality to attach an Azure Private Endpoint to your Octopus Cloud instance, routing traffic to it over your private network infrastructure.</p>
<h2 id="inbound-access-to-octopus-cloud">Inbound access to Octopus Cloud</h2>
<p>The primary use case uses Microsoft’s Private Link Service to create a private endpoint in your Azure virtual network that connects directly to your Octopus Cloud instance. This connection creates a secure tunnel that allows your teams and applications to interact with Octopus Deploy without touching the public internet.</p>
<p><img src="/blog/img/azure-private-networking-octopus-deploy/azure-private-endpoint-inbound.png" alt="Diagram showing network connectivity from a customers' Azure virtual network to an Octopus Cloud instance using a private endpoint"></p>
<ul>
<li>Your Azure subscription hosts a private endpoint in your chosen virtual network and subnet</li>
<li>This private endpoint connects directly to Octopus Deploy’s private link service for your Octopus Cloud instance</li>
<li>All communication between your environment and Octopus Cloud travels over Azure’s private backbone. This includes communication from private networks connected to your Azure network infrastructure, like other public clouds, or on-premises networks for hybrid network setups.</li>
</ul>
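<p>For teams automating their network setup, creating the private endpoint is a standard Azure operation. The following is a rough sketch with the Azure CLI; the resource names are placeholders, and the private link service resource ID and connection approval flow come from Octopus when you enable the feature, so treat this as an assumption rather than a copy-paste recipe:</p>
<pre data-language="powershell"><code># Create a private endpoint in your virtual network that points at the Octopus private link service
az network private-endpoint create `
  --resource-group "my-network-rg" `
  --name "octopus-cloud-pe" `
  --vnet-name "my-vnet" `
  --subnet "private-endpoints" `
  --private-connection-resource-id "&lt;private-link-service-resource-id-from-octopus&gt;" `
  --connection-name "octopus-cloud-connection" `
  --manual-request true</code></pre>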
<h2 id="outbound-access-from-octopus-cloud">Outbound access from Octopus Cloud</h2>
<p>We’re also exploring secure outbound connectivity from your Octopus Cloud instance to resources in your private network infrastructure.</p>
<p>The diagram below illustrates this scenario using Sonatype Nexus Repository as an example, but you can substitute any service that Octopus Deploy needs to access. This could be a package repository, configuration management system, or any other tool that resides in your Azure virtual network, or connects through your Azure network to other private infrastructure in different clouds or on-premises environments.</p>
<p><img src="/blog/img/azure-private-networking-octopus-deploy/azure-private-endpoint-outbound.png" alt="Diagram showing network connectivity from an Octopus Cloud instance to a customers' Azure virtual network, to access privately accessible resources"></p>
<p>This configuration unlocks many different scenarios, with one example being:</p>
<ul>
<li>Your development team pushes code to your private repositories</li>
<li>Your CI system builds artifacts and stores them in your privately accessible package manager (like Sonatype Nexus, JFrog Artifactory, or others)</li>
<li>When running a release, Octopus Deploy securely fetches artifacts from your private services using outbound private network connectivity</li>
<li>All traffic between Octopus Cloud and your private resources stays within Azure’s backbone or your connected private networks</li>
</ul>
<h2 id="interested-to-learn-more">Interested to learn more?</h2>
<p>Whether you need private connectivity for compliance reasons or as part of your broader security strategy, Azure Private Endpoints for Octopus Cloud deliver enterprise-grade security without sacrificing functionality or the operational advantages of Octopus Cloud.</p>
<p>This functionality brings private networking to our cloud-hosted platform, available to Octopus Cloud Enterprise tier customers. Professional tier customers with private connectivity needs should also reach out - we’re always interested in understanding your requirements.</p>
<p>Happy Deployments!</p>]]></content>
</entry>
<entry>
<title>Transparent hiring: a fairer, more competitive approach</title>
<link href="https://octopus.com/blog/transparent-hiring-approach" />
<id>https://octopus.com/blog/transparent-hiring-approach</id>
<published>2025-07-24T00:00:00.000Z</published>
<updated>2025-07-24T00:00:00.000Z</updated>
<summary>This post explores how transparent hiring practices, including upfront compensation details and clear interview processes, benefit both candidates and hiring teams by building trust and setting clear expectations.</summary>
<author>
<name>Arnold Harry, Octopus Deploy</name>
</author>
<content type="html"><![CDATA[<p>At the start of my recruitment career, transparency wasn’t a hot topic. Now, it’s one of the main reasons people apply for a role. As part of the Talent Acquisition team at Octopus Deploy, I’ve seen how being open about compensation, interview processes, and company values changes the experience for candidates and our hiring teams.</p>
<p>It’s no longer a nice-to-have; people expect transparency.</p>
<h2 id="whats-driving-the-shift">What’s driving the shift</h2>
<p>Over the past few years, I’ve noticed a growing appetite for honesty and clarity, particularly for early-career job seekers. Adobe’s 2023 <a href="https://blog.adobe.com/en/publish/2023/01/24/adobes-future-workforce-study-reveals-what-next-generation-workforce-looking-for-in-workplace">Future Workforce Study</a> backs this up, with 85% of new graduates saying they’re less likely to apply for a role if the salary range isn’t listed. Several candidates have told me they applied for jobs because they saw a salary range listed and appreciated the transparency in our public Handbook.</p>
<p>As part of a much broader cultural shift that has only accelerated in a post-pandemic world, people are looking for more than just a job title. They want to understand who they’re working for, how they work, and whether their values are shared. This has pushed employers to adapt their hiring strategies.</p>
<h2 id="how-we-approach-transparency-at-octopus">How we approach transparency at Octopus</h2>
<p>At Octopus Deploy, we remove the guesswork early on. We’ve found that when people have more information up front, the process is smoother, expectations are clearer, and trust is easier to build.</p>
<p>Here’s what that looks like:</p>
<ul>
<li>We include salary bands in our job ads</li>
<li>Candidates can access our public <a href="https://handbook.octopus.com/">handbook</a>, which outlines how we work</li>
<li>We share an overview of the interview process so they know what to expect</li>
<li>We answer questions openly and authentically</li>
</ul>
<p>This approach establishes alignment early and saves time for hiring managers by preventing awkward surprises later in the process.</p>
<h2 id="a-deeper-dive">A deeper dive</h2>
<p>In addition to what I’ve outlined above, we continue to raise the bar in our approach to compensation clarity and inclusive hiring practices.</p>
<p>Compensation, often a taboo subject, is vital to Octopus Deploy’s overall picture of transparency, fairness, and equity. We address this through our compensation philosophy and an internal website, Octo-comp, where all Octonauts can access the salary tables for each job family. Our career maps provide clear expectations, enabling more productive conversations around performance and career progression.</p>
<p>Beyond compensation transparency, we strive to build diverse candidate pipelines and reduce bias in our hiring processes. A genuine effort is made to:</p>
<ul>
<li>Assemble diverse interview panels</li>
<li>Create interview processes to suit all populations</li>
<li>Look for how someone ‘adds’ instead of ‘fits’ our culture</li>
<li>Build an inclusive culture so diverse Octonauts can thrive</li>
</ul>
<h2 id="is-it-worth-it">Is it worth it?</h2>
<p>It took time and effort to create our Handbook and evolve our hiring processes into their current form. However, the return on investment has been excellent, as we continue to attract exceptional candidates and have had over 90% of our offers accepted. Considering the time it takes to get one candidate to the offer stage, every accepted offer has a significant impact.</p>
<h2 id="a-final-thought">A final thought</h2>
<p>In a market where top candidates have more choice than ever, transparency isn’t just a competitive edge; it’s becoming a baseline expectation.</p>
<h2 id="continue-the-conversation">Continue the conversation</h2>
<p>How do you practice transparent recruitment, and have you seen a shift in what candidates expect?</p>
<p>If you’re curious about how we do things at Octopus Deploy, check out our <a href="https://octopus.com/company/careers">Careers page</a> to see our current opportunities and learn more.</p>]]></content>
</entry>
<entry>
<title>The productivity delusion</title>
<link href="https://octopus.com/blog/productivity-delusion" />
<id>https://octopus.com/blog/productivity-delusion</id>
<published>2025-07-22T00:00:00.000Z</published>
<updated>2025-07-22T00:00:00.000Z</updated>
<summary>Find out why measuring productivity is a fast-track to failure and what to do instead.</summary>
<author>
<name>Steve Fenton, Octopus Deploy</name>
</author>
<content type="html"><![CDATA[<p>Over the past decade, research has dramatically shaped how organizations view software delivery. While there were studies before this, the relationship between organizations building software and the researchers looking at the industry has grown closer.</p>
<p>Teams and organizations have developed the skill to assess research and apply the findings to their practice, and they are taking a more experimental approach to their improvement process.</p>
<p>However, one huge stumbling block has become the most common wrong turn in the <a href="https://octopus.com/devops/history/">modern software delivery era</a>: Productivity.</p>
<h2 id="productivity-is-a-collective-delusion">Productivity is a collective delusion</h2>
<p>Productivity isn’t a tangible thing. It’s not something you can, or even should, measure. The lack of tangible meaning in the word productivity is why it’s such a marketing default. If you don’t know the concrete benefits of your product or service, you can skip the hard work and say it “increases productivity”.</p>
<p>We can think of productivity like an anamorphic sculpture, a technique used in the perceptual art movement. When artists create an anamorphic sculpture, they design it to show different forms when viewed from specific angles. Brooklyn sculptor <a href="https://en.wikipedia.org/wiki/Michael_Murphy_(sculptor)">Michael Murphy</a> creates three-dimensional art installations that fill a whole room, such as <a href="https://vimeo.com/266241166"><em>perceptual shift</em></a>, which looks like a cone made from spheres of various sizes, but when viewed from just the right place, it’s a pop-art style eye.</p>
<p>When you use the term “productivity”, everyone in the room imagines a different picture because they all view productivity from a different perspective. This makes the term dangerously unspecific for discussing the real world.</p>
<p>It’s sometimes valid to talk about whether you <em>feel</em> productive. The term is a proxy for some beneficial things, like whether you could work on the right things and complete tasks without unnecessary interruptions. But feeling productive is a measure of your personal experience, not a measure of any tangible benefit to your team or organization.</p>
<p>In fact, <em>feeling</em> productive turns out to be a poor indicator of <em>being</em> productive.</p>
<h2 id="feeling-versus-being-productive">Feeling versus being productive</h2>
<p>The best way of exploring the relationship between feeling and being productive is to look at a myth that researchers busted about multitasking.</p>
<p>Traditionally, people believed multitasking was a skill. It appeared as a requirement in job ads, and people often bragged about their multitasking ability. This illusion crumbled when psychologists studied the impact of multitasking and task switching.</p>
<p>The <a href="https://www.apa.org/pubs/journals/releases/xhp274763.pdf">research</a> found that multitasking directly damaged the speed, accuracy, and quality of work. On top of this, they found that the people who thought they were great at multitasking performed the worst in real life. This inverse relationship between perceptions of productivity and the reality of specific measures is crucial because we lean on this feeling when we judge new tools and techniques.</p>
<p>The only cure to this perceptual error is to measure, and the only way to measure productivity is to convert it into concrete expectations.</p>
<h2 id="how-to-measure-productivity">How to measure productivity</h2>
<p>The answer to measuring productivity is already starting to become clear. You must reach into the mists of vagueness and pull out the concrete measures you want to impact.</p>
<p>The process of converting productivity into meaningful measures is contextual. That means you must ask, “What do we mean by <em>productivity</em>?” You may find a single thing or many things that complete your picture of productivity, but the process will clarify your goals.</p>
<p>Say you plan to introduce a tech tool that automatically scans your code for vulnerabilities. It’s tempting to say it will make the team more productive. You’ll iterate towards a meaningful picture by repeatedly asking what productivity means in the context of a code scanning tool.</p>
<p>The most common first step in translating productivity into measures is “speed”. However, you shouldn’t settle for this unless you’re wholly convinced that you’re working on your end-to-end value stream’s constraint. For the code scanning example, the tool will likely do something you’re not doing now. That means it won’t make things faster; it will, in fact, marginally slow your build pipeline.</p>
<p>So what is the benefit? You’ll likely increase the frequency of this type of check from “never” or “once a year” to every time you run the software build process. You may increase quality and reduce rework later on. Or, the tool might detect vulnerabilities that you were previously missing when you relied solely on code reviews.</p>
<p>So, let’s say you measure the frequency of security scans, the number of vulnerabilities identified (and, hopefully, fixed), and the time between the introduction and remediation of a vulnerability. This paints a vivid picture of real progress. This might be what you meant by “productivity”, in which case, you’ve improved the clarity of the goals and aligned everyone’s expectations. More commonly, you’ll realize it’s not productivity you wanted when you asked for the tool.</p>
<h2 id="theres-no-need-to-type-faster">There’s no need to type faster</h2>
<p>There’s an industry obsession with speeding up developers, which wasn’t helped by some of the Agile strap-lines, like “twice the work in half the time”. I firmly believe successful teams have an alternative strategy that’s far better, which I call:</p>
<blockquote>
<p>Twice the impact with half the work</p>
</blockquote>
<p>To achieve this, you have to stop myopically fixating on programming speed. Instead, you need to observe how work gets all the way from idea to value. Imagine you just realized how to create a feature that would disrupt your market and bring crowds of customers to your organization. How long would it be before you started working on this idea?</p>
<p>In many organizations, the answer is weeks or months. They are sitting on an idea that could dramatically increase their market share, but lack the mechanism to re-plan. So, they deliver a sequence of old ideas that won’t move the needle.</p>
<p>When you compare these systemic delays, they outweigh programming time by orders of magnitude. Yet organizations are still obsessed with making developers type faster.</p>
<p>The reality is that most developer tools aren’t about speeding up development. Auto-complete tools likely speed up coding, but their real benefit is reducing the low-value information developers need to keep in their heads. The same goes for learning how to touch type, using test automation, or using an AI assistant.</p>
<p>The speed gain from these tools and techniques is modest at best, but it’s also not the measurable benefit you should seek, because these localized speed improvements don’t roll up to the value stream.</p>
<p>This makes it crucial to consider the real benefits you want from adopting a technique or tool: you need to understand your goals to be able to measure, and you need to measure to avoid the perception trap.</p>
<p>Happy deployments!</p>]]></content>
</entry>
<entry>
<title>Deploying LLMs with Octopus</title>
<link href="https://octopus.com/blog/deploying-llms-with-octopus" />
<id>https://octopus.com/blog/deploying-llms-with-octopus</id>
<published>2025-07-16T00:00:00.000Z</published>
<updated>2025-07-16T00:00:00.000Z</updated>
<summary>LLMs are now a common component of modern applications, but deploying them with Kubernetes requires careful consideration. This article explores the challenges and best practices for deploying LLMs in a Kubernetes environment.</summary>
<author>
<name>Matthew Casperson, Octopus Deploy</name>
</author>
<content type="html"><![CDATA[<p>Large Language Models (LLMs) are becoming an increasingly common component in modern applications, powering everything from chatbots to code generation tools. However, DevOps teams must address several challenges to ensure LLMs are deployed reliably and efficiently. In this post, we’ll explore some key considerations for deploying LLMs, how Docker helps address these challenges, and how Octopus empowers DevOps teams to deploy LLMs with confidence.</p>
<h2 id="challenges-of-deploying-llms">Challenges of deploying LLMs</h2>
<p>In the video <a href="https://youtu.be/VzRYQuwVRn0?t=770">Prebuilt Docker Images for Inference in Azure Machine Learning</a>, Shivani Sambare noted that:</p>
<blockquote>
<p>We have interacted with many of our customers and data scientists and what we know [is] once you have trained a machine learning model getting that model to production is key, especially having the right environments.</p>
<p>Environments encapsulates all your python packages and software setting around your scoring script and hence having the right dependency when you deploy your machine learning model is very important.</p>
</blockquote>
<p><a href="https://learn.microsoft.com/en-us/azure/machine-learning/concept-prebuilt-docker-images-inference?view=azureml-api-2">Prebuilt docker images</a> provides a solution to these requirements with a consistent environment, resulting in images that can be run locally, deployed to Azure Container Instances, or run in Kubernetes.</p>
<p>AWS uses Docker to execute and deploy LLMs via their <a href="https://aws.amazon.com/ai/machine-learning/containers/">Deep Learning Containers</a>. Emily Webber highlighted the benefits that these images provided in her talk <a href="https://youtu.be/qAFUQwTFnkY?t=512">AWS Deep Learning Containers Deep Dive</a>, including:</p>
<ul>
<li>Avoiding time-consuming builds of base images</li>
<li>The ability to run containers anywhere</li>
<li>The standardization of your machine learning development, training, tuning, and deployment environments</li>
<li>Taking advantage of the latest optimizations and performance improvements</li>
</ul>
<p>NVIDIA supplies Docker images for hosting LLMs with NVIDIA Inference Microservices (NIM), <a href="https://developer.nvidia.com/blog/securely-deploy-ai-models-with-nvidia-nim/">noting that these containers provide</a>:</p>
<blockquote>
<ul>
<li>No external dependencies: You’re in control of the model and its execution environment</li>
<li>Data privacy: Your sensitive data never leaves your infrastructure</li>
<li>Validated models: You get these models as intended by their authors</li>
<li>Optimized runtimes: Accelerated, optimized, and trusted container runtimes</li>
</ul>
</blockquote>
<p>There’s also the statistic from <a href="https://portworx.com/wp-content/uploads/2024/06/The-Voice-of-Kubernetes-Experts-Report-2024.pdf">The Voice of Kubernetes Experts Report 2024</a> showing that 54% of companies are running AI/ML workloads on Kubernetes.</p>
<p>These industry examples demonstrate how Docker images provide a solid foundation for deploying LLMs, ensuring that the environment is consistent and reproducible across different development and deployment stages.</p>
<p>But building a Docker image is just the first step. DevOps teams are then responsible for deploying and maintaining these images and their containers in production environments. This is where Octopus Deploy comes in.</p>
<h2 id="repeatable-deployments">Repeatable deployments</h2>
<p>To understand why repeatable deployments are important, consider the <a href="https://github.com/NVIDIA/nim-deploy/tree/main">nim-deploy</a> repository from NVIDIA, which:</p>
<blockquote>
<p>Is designed to provide reference architectures and best practices for production-grade deployments and product integrations</p>
</blockquote>
<p>The repo contains instructions and sample configuration files to deploy NIMs to multiple platforms. The instructions involve dozens of steps requiring multiple CLI tools, environment variables, configuration files, and credentials.</p>
<p>It is not feasible to expect every DevOps engineer to remember all of these steps, and even if they could, deployments would be error-prone. This is where Octopus shines. By creating repeatable deployment processes, Octopus allows LLMs to be deployed with the click of a button or automatically in response to an external trigger.</p>
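<p>To make this concrete, the sketch below shows the kind of Kubernetes manifest such a deployment process can apply on your behalf. It’s a minimal illustration only: the image name, namespace, and secret are hypothetical placeholders rather than values from the nim-deploy repository.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-inference
  namespace: ai-workloads              # hypothetical namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: llm-inference
  template:
    metadata:
      labels:
        app: llm-inference
    spec:
      containers:
        - name: llm
          image: registry.example.com/llm-serving:1.2.0   # placeholder LLM serving image
          ports:
            - containerPort: 8000
          resources:
            limits:
              nvidia.com/gpu: 1        # ask the cluster for a GPU
          envFrom:
            - secretRef:
                name: llm-api-credentials                  # hypothetical secret for model registry access
</code></pre>
<p>In Octopus, values like the image version, namespace, and credentials would typically come from the release and the target environment rather than being hard-coded in the manifest.</p>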
<p><a href="https://cloud.google.com/architecture/framework/perspectives/ai-ml/operational-excellence">Google’s AI and ML perspective: Operational excellence</a> documentation notes that:</p>
<blockquote>
<p>Automation enables seamless, repeatable, and error-free model development and deployment.</p>
</blockquote>
<p>With Octopus, everything required to deploy an LLM image is defined in a repeatable deployment process that references the details of Kubernetes clusters defined as <a href="https://octopus.com/docs/infrastructure/deployment-targets">deployment targets</a>, using credentials stored as <a href="https://octopus.com/docs/infrastructure/accounts">accounts</a>, and consuming images from a container registry via <a href="https://octopus.com/docs/octopus-rest-api/examples/feeds">feeds</a>. The deployment process is then repeated across <a href="https://octopus.com/docs/infrastructure/environments">environments</a>, ensuring internal environments such as Development or QA provide the same experience as the Production environment.</p>
<p>This ensures not only that the LLMs run with a consistent set of libraries and dependencies (which is how Azure defines an environment in terms of its prebuilt Docker images), but also that the resulting Kubernetes deployments are consistent across the infrastructure used for internal development, testing, and production (which is how Octopus defines an environment).</p>
<p><img src="/blog/img/deploying-llms-with-octopus/deployment-process.png" alt="A screenshot of the deployment process"></p>
<h2 id="visibility-and-monitoring">Visibility and monitoring</h2>
<p>What is the state of the Production environment?</p>
<p>This is a critical question for DevOps teams, but one that can be difficult to answer. When deployments are done manually, the only way to know what version of your LLM is providing responses to your customers is to manually inspect the Kubernetes cluster or ask the individual who performed the deployment.</p>
<p>Octopus makes it trivial to understand the state of production by providing a centralized dashboard showing the current version of each LLM deployed to each environment.</p>
<p><img src="/blog/img/deploying-llms-with-octopus/dashboard-deployment-status.png" alt="A screenshot of the dashboard showing deployments"></p>
<p>The <a href="https://octopus.com/docs/kubernetes/live-object-status">Kubernetes Live Object Status (KLOS)</a> feature goes a step further displaying the current state of the resources in the cluster. KLOS gives DevOps teams confidence that their LLMs are running as expected, and if there are any issues, they can quickly identify which resources are affected.</p>
<p><img src="/blog/img/deploying-llms-with-octopus/dashboard-live-object-status.png" alt="A screenshot of the dashboard showing the Kubernetes live object status"></p>
<h2 id="testing-and-validation">Testing and validation</h2>
<p>One of the challenges with LLMs is that their behavior and performance are often subjective. Safety and QA teams may need to validate LLMs by running several sample prompts and reviewing the responses. This kind of testing requires an LLM to be deployed and available in a test environment.</p>
<p>It is also essential to understand how the LLM interacts with other system components, such as databases, APIs, and other services. This kind of integration testing is best performed in a private environment that closely resembles production.</p>
<p>Google calls this out with “Test, Test, Test” as part of their <a href="https://ai.google/responsibility/responsible-ai-practices/">responsible AI practices</a>, saying developers should:</p>
<blockquote>
<p>Conduct integration tests to understand how individual ML components interact with other parts of the overall system.</p>
</blockquote>
<p>Octopus supports LLM testing by ensuring teams can deploy to private production-like environments, such as a staging or QA environment, where the LLM can be tested in conjunction with other system components.</p>
<h2 id="auditing-and-compliance">Auditing and compliance</h2>
<p>AI has been embraced by many organizations subject to regulatory requirements. From <a href="https://summitsydney.awslivestream.com/cop201/live/">banks using GenAI to improve their architectures</a> to <a href="https://summitsydney.awslivestream.com/fsi202/live/">healthcare providers integrating LLMs into their call centers</a>, GenAI and LLMs are being embedded into critical systems of some of the most important industries in the world.</p>
<p>It is, therefore, critical that organizations can track the changes to their production LLMs, including when they were deployed, what version was deployed, and who performed the deployment. CSIRO calls out “supply chain accountability” as a core part of their <a href="https://www.csiro.au/en/research/technology-space/ai/responsible-ai">responsible AI research</a>.</p>
<p>Manual deployments make this kind of auditing all but impossible. Even if it were possible to reverse engineer the changes made to a production system, it would be a time-consuming and error-prone process.</p>
<p>Octopus provides auditing out-of-the-box, allowing teams to see the history of deployments, including who performed the deployment and when. Auditing is treated as a cross-cutting concern applied to every action taken in Octopus, giving teams confidence that they can track changes to their LLMs and ensure compliance with regulatory requirements.</p>
<p><img src="/blog/img/deploying-llms-with-octopus/audit-logs.png" alt="A screenshot of the audit log showing deployments"></p>
<h2 id="incremental-deployments-rollbacks-and-recovery">Incremental deployments, rollbacks, and recovery</h2>
<p>Even with the best planning and testing, things can go wrong when deploying LLMs. A new version of an LLM may not perform as expected, or it may introduce regressions that affect the system’s performance.</p>
<p>The post <a href="https://aws.amazon.com/blogs/machine-learning/achieve-operational-excellence-with-well-architected-generative-ai-solutions-using-amazon-bedrock/">Achieve operational excellence with well-architected generative AI solutions using Amazon Bedrock</a> notes that:</p>
<blockquote>
<p>Automated deployment techniques together with smaller, incremental changes reduces the blast radius and allows for faster reversal when failures occur.</p>
</blockquote>
<p>There are several deployment strategies that can be used to mitigate these risks, including:</p>
<ul>
<li><a href="https://octopus.com/docs/deployments/patterns/blue-green-deployments-with-octopus">Blue/green deployments</a>, which involve deploying a new version of an LLM alongside the existing version and then switching traffic to the new version once it has been validated.</li>
<li><a href="https://octopus.com/docs/deployments/patterns/canary-deployments-with-octopus">Canary releases</a>, which involve deploying a new version of an LLM to a small subset of users and then gradually increasing the number of users as the new version is validated.</li>
</ul>
<p>In the event of a failed deployment or a regression, it is important to quickly roll back to a previous version of the LLM. By capturing all the values used to deploy the LLM as a release, Octopus makes it easy to redeploy a previous version of the LLM.</p>
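<p>As a rough illustration of the blue/green pattern above, traffic on Kubernetes can be moved by pointing a Service at the labels of the newly validated version, and pointing it back acts as an immediate rollback. A minimal sketch with hypothetical labels:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: llm-inference
spec:
  selector:
    app: llm-inference
    track: green          # flip between "blue" and "green" to switch traffic
  ports:
    - port: 80
      targetPort: 8000
</code></pre>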
<h2 id="managing-infrastructure-and-supporting-services">Managing infrastructure and supporting services</h2>
<p>LLMs are just one part of a larger system including API gateways for traffic management and security, file storage for model files, identity providers to manage authentication, and, of course, the Kubernetes clusters that run the LLMs. All these components require the same repeatable deployments, visibility, monitoring, auditing, testing, and recovery capabilities that are so important for LLMs.</p>
<p><a href="https://cloud.google.com/architecture/framework/perspectives/ai-ml/operational-excellence">Google’s AI and ML perspective: Operational excellence</a> documentation advises teams to:</p>
<blockquote>
<p>Manage your infrastructure as code (IaC). This approach enables efficient version control, quick rollbacks when necessary, and repeatable deployments.</p>
</blockquote>
<p>Octopus provides a unified platform for managing all of these components, supporting Infrastructure as Code (IaC) tools such as <a href="https://octopus.com/docs/deployments/terraform">Terraform</a>, <a href="https://octopus.com/docs/runbooks/runbook-examples/azure/resource-groups">ARM templates</a>, and <a href="https://octopus.com/docs/deployments/aws/cloudformation">CloudFormation</a> to deploy and manage the underlying infrastructure.</p>
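<p>As a small, hedged example of the infrastructure side, a CloudFormation template (one of the IaC formats listed above) can declare the Kubernetes cluster itself. The role ARN and subnet IDs below are placeholders:</p>
<pre><code>AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal EKS cluster for LLM workloads (illustrative sketch only)
Resources:
  LlmCluster:
    Type: AWS::EKS::Cluster
    Properties:
      Name: llm-cluster
      RoleArn: arn:aws:iam::123456789012:role/eks-cluster-role   # placeholder IAM role
      ResourcesVpcConfig:
        SubnetIds:
          - subnet-aaaa1111                                      # placeholder subnets
          - subnet-bbbb2222
</code></pre>
<p>Octopus can run a template like this as a step in the same project or in a runbook, so the cluster and the LLM running on it are managed side by side.</p>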
<h2 id="orchestrating-and-approving-deployments">Orchestrating and approving deployments</h2>
<p>The decision to deploy an LLM to production is often not made by the DevOps team alone. It may require approval from stakeholders such as product owners, security teams, or compliance officers. This requires approval workflows that ensure the right people are involved in the decision-making process.</p>
<p><a href="https://www.microsoft.com/en-us/ai/principles-and-approach">Microsoft’s Responsible AI principles and approach</a> calls out accountability as a key requirement, asking:</p>
<blockquote>
<p>How can we create oversight so that humans can be accountable and in control?</p>
</blockquote>
<p>Octopus provides <a href="https://octopus.com/docs/projects/built-in-step-templates/manual-intervention-and-approvals">manual intervention steps</a> that pause a deployment until it is approved by the appropriate stakeholders. And for those teams that use ITSM tools such as <a href="https://octopus.com/docs/approvals/servicenow">ServiceNow</a> or <a href="https://octopus.com/docs/approvals/jira-service-management">Jira Service Management</a>, Octopus can block deployments until a ticket or change request is approved as part of a larger change management process.</p>
<p><img src="/blog/img/deploying-llms-with-octopus/approvals.png" alt="Octopus manual intervention step"></p>
<h2 id="day-2-operations-and-maintenance">Day 2 operations and maintenance</h2>
<p>Getting an LLM into production is just the beginning. Once deployed, ongoing maintenance and support are required to ensure it performs as expected. This can include tasks such as:</p>
<ul>
<li>Restarting services if they become unresponsive</li>
<li>Applying security patches to the underlying infrastructure</li>
<li>Collecting and analyzing logs to identify performance issues</li>
<li>Backing up data and recovering in the event of a failure</li>
</ul>
<p>Octopus Runbooks let teams define and execute these maintenance tasks in a consistent and repeatable manner. Runbooks can be configured to run the same steps available in the deployment process, access all the same credentials, and interact with the same infrastructure. Runbooks can then be run in any environment to perform maintenance and ad-hoc tasks. This ensures that the same steps are followed every time, reducing the risk of human error and ensuring that the LLM and any supporting infrastructure remain healthy.</p>
<p><img src="/blog/img/deploying-llms-with-octopus/runbooks.png" alt="A screenshot of Octopus runbooks"></p>
<h2 id="conclusion">Conclusion</h2>
<p>Docker has emerged as a key enabler for deploying LLMs, providing a consistent and reproducible environment for running these complex models. However, creating a Docker image is just the first step. DevOps teams must also consider how to deploy, monitor, and maintain these images in production environments.</p>
<p>Octopus provides a comprehensive platform for deploying LLMs, addressing the challenges of repeatable deployments, visibility and monitoring, testing and validation, auditing and compliance, incremental deployments, and day 2 operations. This allows teams to automate and scale the entire lifecycle of LLMs, ensuring that they can deliver reliable and performant AI solutions to their customers.</p>]]></content>
</entry>
<entry>
<title>The reality of GitOps application recreation</title>
<link href="https://octopus.com/blog/the-reality-of-gitops-application-recreation" />
<id>https://octopus.com/blog/the-reality-of-gitops-application-recreation</id>
<published>2025-07-09T00:00:00.000Z</published>
<updated>2025-07-09T00:00:00.000Z</updated>
<summary>GitOps promises recreatable applications, but 54% of teams say 'partially.' Why confidence dips before soaring and what complete recreation actually requires.</summary>
<author>
<name>Matthew Allford, Octopus Deploy</name>
</author>
<content type="html"><![CDATA[<p>Your application code is in Git, and you’ve adopted GitOps principles, so you can recreate it anywhere, anytime, right? If you’re like many teams, the honest answer is “sort of”. While GitOps promises the ability to recreate your application from version control, the reality is often more nuanced as you consider the holistic view of deploying and running your applications.</p>
<p>Our <a href="https://octopus.com/publications/state-of-gitops-report">State of GitOps report</a> reveals some fascinating insights about recreatable applications. As part of the survey, we asked this question:</p>
<blockquote>
<p>Can you recreate the application from the configuration files stored in version control, for example, to recover from a disaster or to create a new test environment?</p>
</blockquote>
<p>Overall, 52% of respondents feel confident they can recreate their applications from version control. For high-performing GitOps teams, this jumps to 70%. But what caught my attention is that “partially recreatable” was a significant response across GitOps maturity levels and the most common response among teams still developing their GitOps practices. It suggests that while most teams can handle their core applications, they struggle with recreating the complete environment stack.</p>
<p><img src="/blog/img/the-reality-of-gitops-application-recreation/recreate-from-version-control.png" alt="Bar chart showing responses to ‘can you recreate the application from configuration files stored in version control?’ across groups with different GitOps maturity scores."></p>
<p>Interestingly, our data reveals that teams often become less confident about recreation as they advance their GitOps practices, only to see confidence surge at higher maturity levels. This pattern likely reflects a natural discovery process. As teams dive deeper into GitOps, they uncover dependencies and complexities they didn’t initially realize existed.</p>
<p>So, what does complete application recreation entail? And why do so many teams find themselves in that “partially recreatable” category?</p>
<h2 id="the-real-world-value-of-recreatable-applications">The real-world value of recreatable applications</h2>
<p>Recreatable applications don’t make sense for every organization or application, and arguably could be viewed as a nice-to-have rather than a necessity. However, recreating your applications from Git can solve real problems teams face at different scales and in different circumstances.</p>
<p>When disasters occur, having your entire application stack defined in code means restoration becomes a deployment rather than a scramble. You can rebuild from your Git repository instead of hunting through documentation or recreating manual configurations. The same capability proves valuable during cloud provider migrations driven by acquisitions, regulatory requirements, or strategic business decisions, where recreation becomes a non-trivial but controlled transition rather than a risky lift-and-shift operation.</p>
<p>Of course, your application is just one piece of the puzzle. You’ll still need to consider the surrounding infrastructure, networking, and foundational services that enable your application to run in the target environment. While mature teams often manage this infrastructure through tools like Terraform and Crossplane, getting to that level of complete recreation from Git requires thoughtful planning and infrastructure provisioning processes.</p>
<p>Operational efficiency improves when you can create new test environments on demand with minimal overhead. Whether you’re testing critical fixes, running performance tests against production-like infrastructure, or validating new features, the ability to spin up identical environments quickly and tear them down when finished reduces both time and infrastructure costs.</p>
<p>Recreation provides auditable proof for regulated industries that infrastructure and deployment processes are fully documented and reproducible. If you need to satisfy compliance frameworks that require demonstrable change control, recreatable applications help you meet audit requirements for deployment consistency and provide evidence that you can rebuild systems according to documented specifications.</p>
<h2 id="the-maturity-journey">The maturity journey</h2>
<p>Our survey data’s confidence curve tells a story about how teams learn and adopt GitOps. Rather than steady upward progress, there’s an initial dip in confidence before teams reach far higher levels of certainty about their recreation capabilities.</p>
<p>This pattern might highlight the natural learning process. Pre- and low-adopters will typically approach GitOps from an application-down perspective and focus on getting their manifests into Git repositories. The initial confidence may come from successfully deploying applications this way and feeling like they’ve “solved” recreation.</p>
<p>However, as teams mature, reality sets in when they add more applications to their GitOps processes and try to recreate complete environments. They likely discover their core applications are relatively easy to recreate, but the surrounding environment may not be. Infrastructure provisioning may be required, whether manual or automated, and teams may not have accounted for external dependencies yet.</p>
<p>You need backup and restore strategies for stateful components like databases that go beyond what you define in Kubernetes manifests. While the application might be recreatable from Git, the data likely isn’t. You’ll need to consider whether to replicate databases across infrastructure (increasing cost and complexity), configure automated backup and restore processes, or accept that provisioning new environments requires data restoration as a separate step.</p>
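<p>One common approach, sketched below with hypothetical names and credentials, is a scheduled backup job that pushes database dumps to durable storage so a freshly recreated environment can restore its data as a separate, deliberate step:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-backup
spec:
  schedule: "0 2 * * *"                 # nightly at 2am
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: postgres:16        # uses pg_dump from the official image
              command: ["/bin/sh", "-c"]
              args:
                - 'pg_dump "$DATABASE_URL" | gzip > /backups/app-$(date +%F).sql.gz'
              envFrom:
                - secretRef:
                    name: app-db-credentials    # hypothetical secret providing DATABASE_URL
              volumeMounts:
                - name: backups
                  mountPath: /backups
          volumes:
            - name: backups
              persistentVolumeClaim:
                claimName: backup-storage       # hypothetical PVC; object storage is another option
</code></pre>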
<p>This explains why “partially recreatable” was a popular response in our survey across all GitOps maturity levels. Most teams can handle their core applications but struggle with the complete environment stack. As low and medium-maturity teams adopt foundational GitOps practices, they discover the full scope of what requires management, decreasing their confidence in complete recreation.</p>
<h2 id="what-to-consider-with-complete-recreation">What to consider with complete recreation</h2>
<p>Success stories do exist. Teams using infrastructure provisioning tools like Crossplane, Terraform, and mature GitOps workflows have achieved recreatable applications. But getting there requires stepping back and considering what recreation means from a holistic perspective.</p>
<p>Your application manifests assume that the underlying infrastructure exists, but something must first create that foundation. Many organizations still manage infrastructure provisioning through separate Terraform workflows, creating a gap where recreation often breaks down. While infrastructure provisioning may be mostly automated, unless integrated with your GitOps workflows, you must step away and use another platform to create the infrastructure first.</p>
<p>Recreation means recreating the configuration and access to secrets in the new environment. Your GitOps process needs strategies for safely managing environment-specific values, API keys, and certificates.</p>
<p>Teams can easily recreate applications but can’t recreate data the same way. You need strategies for database backups, data replication, or accepting that data restoration happens separately from application recreation.</p>
<p>External services, APIs, or legacy systems your applications depend on often fall outside what GitOps can recreate. Your recreation strategy needs to account for these dependencies, whether through service discovery, configuration updates, or fallback mechanisms. Additionally, can your target environment support the infrastructure you rely on? Not all cloud services are available in every region, especially during disasters.</p>
<p>The 70% of high performers who have achieved confidence in their recreation capabilities have worked through these considerations. They prove that complete recreation is possible and likely find it worthwhile, but it requires treating it as a comprehensive system design challenge rather than just putting YAML in Git.</p>
<h2 id="the-path-forward">The path forward</h2>
<p>While the journey from partial to complete recreation involves discovering complexities you didn’t know existed, 70% of high-performing teams that achieved this capability prove it’s possible and worthwhile.</p>
<p>The key is treating recreation as a systematic challenge rather than an afterthought. Whether you’re just starting your GitOps journey or working through the complexities that come with maturity, understanding what complete recreation entails helps you make informed decisions about where to invest your efforts.</p>
<p>For more insights into how teams across different maturity levels approach GitOps and application recreation, download our complete <a href="https://octopus.com/publications/state-of-gitops-report">State of GitOps report</a> to see the full research findings and implementation patterns.</p>
<p>Happy deployments!</p>]]></content>
</entry>
<entry>
<title>Improved control over package retention</title>
<link href="https://octopus.com/blog/retention-improvements" />
<id>https://octopus.com/blog/retention-improvements</id>
<published>2025-06-26T00:00:00.000Z</published>
<updated>2025-06-26T00:00:00.000Z</updated>
<summary>Optimize efficiency with our latest package retention enhancements</summary>
<author>
<name>Michelle O'Brien, Octopus Deploy</name>
</author>
<content type="html"><![CDATA[<p>Retention isn’t the most glamorous part of the deployment process, but it is critical to get right. When storage is efficiently used, it can improve deployment times and performance. To help achieve this, we’ve made two improvements to package retention, with more to come.</p>
<p>These changes are available to cloud customers now and will be included in the server release 2025.3.</p>
<h2 id="recent-improvements-and-why-they-matter-to-you">Recent improvements (and why they matter to you)</h2>
<ol>
<li>Package caching</li>
<li>Decoupling release and package retention</li>
</ol>
<h3 id="package-caching">Package caching</h3>
<p>Package cache retention currently runs by default when the target machine hits less than 20% storage. These retention rules can be problematic for users with both small and large amounts of disk space:</p>
<ul>
<li>On smaller machines, large packages involved in deployments may be deleted prematurely due to storage constraints, causing deployment failures or delays when subsequent deployments need to re-download them</li>
<li>Machines with larger disk space may experience more infrequent cache clearing, accumulating obsolete deployment files that degrade performance or consume space needed for higher-priority files</li>
</ul>
<p><img src="/blog/img/retention-improvements/package-cache.png" alt="Default Machine Policy page showing where package cache retention can be set"></p>
<p>Now, within the default machine policies, you can choose between letting Octopus set the default or keeping a specific number of packages. For your larger machines, you can ensure the package cache doesn’t get too big, and for smaller machines, you can set a sensible default based on your deployment patterns.</p>
<h3 id="decoupling-release-and-package-retention">Decoupling release and package retention</h3>
<p>Lifecycle policies control the retention of releases and the associated packages. Customers with frequent deployments or large packages typically require shorter retention periods to stay within storage limits. Tightening retention policies reduces the number of deployments kept, which limits your ability to audit or troubleshoot old failed deployments.</p>
<p><img src="/blog/img/retention-improvements/decoupling.png" alt="Built-in Package Repo settings where you can now set your updated package retention policy"></p>
<p>We’ve added new flexibility to give you better control over your package retention. You can now choose between two approaches: keeping packages for all retained releases and runbooks (our current default), or only keeping packages for releases visible on your dashboard.</p>
<p>The dashboard-only option is a game-changer for storage management: it keeps relevant deployment history without the overhead of keeping the associated packages. While this means some older releases won’t be redeployable after their packages get cleaned up, we’ve found customers rarely need to redeploy those old releases. What you usually want is to keep more of your release history at your fingertips.</p>
<p>It’s all about giving you the right balance between storage efficiency and the information you need.</p>
<h3 id="whats-next">What’s Next?</h3>
<p>Our next iteration of this project will focus on centralizing where you view all retention policies. We aim to provide better context and the ability to standardize retention policies based on your organization’s guidelines.
If you have examples of where retention inefficiencies have cost you time or money, please <a href="https://roadmap.octopus.com/c/77-centralise-and-improve-retention-across-octopus">submit your examples here</a>. The more information we have, the better solutions we can build.</p>
<p>Happy deployments!</p>]]></content>
</entry>
<entry>
<title>Help shape Ephemeral Environments</title>
<link href="https://octopus.com/blog/ephemeral-environments" />
<id>https://octopus.com/blog/ephemeral-environments</id>
<published>2025-06-23T00:00:00.000Z</published>
<updated>2025-06-23T00:00:00.000Z</updated>
<summary>Learn about Ephemeral Environments, coming soon to Octopus, and help us shape the feature.</summary>
<author>
<name>Harriet Alexander, Octopus Deploy</name>
</author>
<content type="html"><![CDATA[<p>In the fast-paced world of software development, few phrases provoke as much frustration as “it works on my machine”. It’s the ultimate debugging roadblock. Your code works flawlessly in your local environment, but everything breaks when it merges. But what if there was a way to ensure your code wasn’t just “working on your machine” but on an environment that mirrors your production environment before you merge?</p>
<p>In this post, I introduce our solution—Ephemeral Environments.</p>
<h2 id="what-are-ephemeral-environments">What are Ephemeral Environments?</h2>
<p>We’ve been hard at work developing Ephemeral Environments, tailored for modern development workflows. These environments are automatically generated and linked to feature branches created through pull requests (PRs). They offer a temporary space for testing code changes without affecting the main lifecycle environments.</p>
<p>The primary goal is to <em>shift left</em> in the development process: identifying issues earlier when they are easier and less costly to address. This then minimizes surprises later in the workflow.</p>
<h2 id="how-ephemeral-environments-will-work">How Ephemeral Environments will work</h2>
<ul>
<li>Feature branch-based: When you create a pull request (PR) from your feature branch, the ephemeral environment automatically spins up (the sketch after this list shows the general pattern).</li>
<li>Temporary by design: After you close or merge the PR, the environment spins down in a hassle-free manner.</li>
<li>Designed for early feedback: The feature will provide developers and collaborators with early integration testing, UI validation, and fast feedback loops.</li>
</ul>
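<p>The sketch below is a generic illustration of this pattern using a CI workflow, with a hypothetical deploy script and teardown command. It is not how the upcoming Octopus feature is implemented; it simply shows the pull-request-driven lifecycle the feature is designed to automate for you:</p>
<pre><code># Generic sketch only; deploy.sh and the teardown command are placeholders
name: pr-environment
on:
  pull_request:
    types: [opened, synchronize, reopened, closed]

jobs:
  preview:
    if: github.event.action != 'closed'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Create or update the PR environment
        run: ./deploy.sh "pr-${{ github.event.number }}"   # placeholder deploy script
  teardown:
    if: github.event.action == 'closed'
    runs-on: ubuntu-latest
    steps:
      - name: Remove the PR environment when the PR closes or merges
        run: echo "Tearing down pr-${{ github.event.number }}"   # placeholder teardown command
</code></pre>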
<h2 id="why-you-should-get-excited">Why you should get excited</h2>
<p>Catching issues earlier in the development lifecycle leads to better software, faster delivery, and happier teams. Here’s why we’re excited about Ephemeral Environments—and why we think you will be, too:</p>
<ul>
<li>Fewer environment bottlenecks: Don’t waste time waiting for staging slots or shared resources. Every feature branch gets its own environment.</li>
<li>Increased confidence in merges: Know your code works before merging into the main branch, so there are fewer surprises and regressions.</li>
<li>Seamless feedback loops: Ephemeral Environments are dynamic and easy to share. This increases collaboration across teams and stakeholders.</li>
</ul>
<h2 id="have-your-say-and-get-early-access-to-ephemeral-environments">Have your say and get early access to Ephemeral Environments</h2>
<p>We’re building this feature to make pull request workflows smarter, faster, and more reliable—but we’d love your feedback.</p>
<p>Whether you want to sign up for a product demo, participate in our alpha testing, or simply get notified when early access is available, we want to hear from you.</p>
<h3 id="whats-in-it-for-you">What’s in it for you?</h3>
<ul>
<li>Get early access: Use the Ephemeral Environments feature before it’s widely available.</li>
<li>Influence design: Your feedback will directly shape how we refine the feature, ensuring it meets real-world workflows.</li>
</ul>
<h2 id="how-to-sign-up">How to sign up</h2>
<p><a href="https://octopusdeploy.typeform.com/to/ZOia9Aje">Register your interest via our form</a>.</p>
<p>We’re hoping to release Ephemeral Environments to all instances later this year.</p>
<p>Happy deployments!</p>]]></content>
</entry>
<entry>
<title>The State of GitOps report: Exploring effective GitOps</title>
<link href="https://octopus.com/blog/announcing-the-first-state-of-gitops-report" />
<id>https://octopus.com/blog/announcing-the-first-state-of-gitops-report</id>
<published>2025-06-17T00:00:00.000Z</published>
<updated>2025-06-17T00:00:00.000Z</updated>
<summary>Key insights from the first State of GitOps report based on 660 survey responses. Learn how high-performing teams achieve better software delivery, increased reliability, and stronger security through 6 essential GitOps practices.</summary>
<author>
<name>Steve Fenton, Octopus Deploy</name>
</author>
<content type="html"><![CDATA[<p>We’re thrilled to announce the release of the <a href="https://octopus.com/publications/state-of-gitops-report">State of GitOps report</a>. This report is the first to explore how practitioners apply GitOps concepts in the real world. Based on data from 660 survey responses and interviews with a panel of experts and practitioners, our goal was to understand what “good” GitOps looks like, explore different adoption styles, and analyze whether GitOps delivers the expected benefits.</p>
<p>Combining version control, developer practices, and an automated reconciliation loop, GitOps can deliver a secure and auditable way to drive system state with human-readable files. While teams with well-established GitOps practices are seeing a return on their investment, those who haven’t achieved a sufficient depth and breadth of adoption are struggling to beat the j-curve to get the benefits. This is where the State of GitOps report can help.</p>
<h2 id="four-key-findings">Four key findings</h2>
<p>Our research explored various aspects of GitOps adoption and its impact, and we’ve uncovered 4 key findings:</p>
<ol>
<li><strong>Better software delivery:</strong> High-performing GitOps teams demonstrated higher software delivery performance, as measured by the DORA 4 key metrics (change failure rate, time to recover, deployment frequency, and lead time for changes).</li>
<li><strong>Increased reliability:</strong> These high-performing teams also reported the best reliability records, based on user satisfaction, meeting uptime targets, and avoiding slowdowns and outages.</li>
<li><strong>Security and compliance:</strong> We found a clear link between GitOps maturity (how many practices teams adopt) and achieving security and compliance benefits.</li>
<li><strong>Adoption is increasing:</strong> Most organizations (93%) plan to continue or increase their GitOps adoption, indicating strong confidence in the approach.</li>
</ol>
<p>The report delves into the nuances of adoption, distinguishing between ‘breadth’ (the extent across production systems) and ‘depth’ (how many practices are implemented and how well).</p>
<p>We found that GitOps is most often used for application or service deployments (79%), application configurations (73%), and infrastructure (57%). We also challenge the idea that GitOps is <em>only</em> for Kubernetes, with 26% of organizations applying it to other technology stacks.</p>
<h2 id="the-gitops-model">The GitOps model</h2>
<p>A significant part of the report introduces the GitOps Model, outlining the 6 practices we found necessary for successful adoption and positive outcomes:</p>
<ol>
<li>Declarative desired state</li>
<li>Human readable format</li>
<li>Responsive code review</li>
<li>Version control</li>
<li>Automatic pull</li>
<li>Continuous reconciliation</li>
</ol>
<p><img src="/blog/img/announcing-the-first-state-of-gitops-report/gitops-model.png" alt="The 6 GitOps practices drive DevOps outcomes for software delivery, reliability, and wellbeing"></p>
<p>We developed a GitOps score based on how closely organizations align with these practices and found organizations with higher scores are most likely to obtain the benefits of GitOps. This means teams with higher scores were significantly more likely to report:</p>
<ul>
<li>Increased security</li>
<li>Prevention of configuration drift</li>
<li>Improved auditability</li>
<li>Easier compliance</li>
<li>Reduced elevated access</li>
</ul>
<p>The ability to recreate applications from version control is also strongly linked to higher scores.</p>
<h2 id="gitops-practices-and-devops-outcomes">GitOps practices and DevOps outcomes</h2>
<p>Beyond specific GitOps benefits, the report also examines the relationship between GitOps and broader DevOps outcomes, like software delivery performance, reliability, and even wellbeing. Higher GitOps scores correlate positively with these outcomes.</p>
<p>It’s essential to be aware of the potential “j-curve” effect when adopting new practices like GitOps. You might see an initial dip in performance as you introduce new skills and practices, but sticking with it leads to significant long-term gains. The 6 GitOps practices are mutually supportive; leaving one out can create gaps in the effectiveness of others.</p>
<p>Of course, adopting GitOps isn’t without its challenges. The report identifies potential “trip hazards”, like the risk of accidental resource deletion, leaking secrets in version control, overloading version control systems, and gaps in deployment processes or access controls. Understanding these pitfalls and adopting protective measures, like dry-runs, approval workflows, secret management tools, and robust access control, is crucial.</p>
<p>The State of GitOps Report is a comprehensive baseline for where GitOps stands today. It offers valuable insights into the practices that drive success and helps you understand how to improve your own outcomes.</p>
<p>We encourage you to <a href="https://octopus.com/publications/state-of-gitops-report">dive into the full report</a> to explore the findings, understand the GitOps model, and identify areas for continuous improvement on your GitOps journey.</p>
<p>Happy deployments!</p>]]></content>
</entry>
</feed>