This is a valid Atom 1.0 feed.
<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom"> <title>Octopus blog</title> <subtitle>Site description.</subtitle> <link href="https://octopus.com/blog/feed.xml" rel="self" /> <link href="https://octopus.com" /> <id>https://octopus.com/blog/feed.xml</id> <updated>2025-10-30T00:00:00.000Z</updated> <entry> <title>What is Model Context Protocol (MCP)?</title> <link href="https://octopus.com/blog/what-is-mcp" /> <id>https://octopus.com/blog/what-is-mcp</id> <published>2025-10-30T00:00:00.000Z</published> <updated>2025-10-30T00:00:00.000Z</updated> <summary>Learn about Model Context Protocol (MCP), an open standard for AI model interoperability that enables seamless integration and communication between different AI models and systems.</summary> <author> <name>Matthew Casperson, Octopus Deploy</name> </author> <content type="html"><![CDATA[<p>The Model Context Protocol (MCP) is an open-source standard initially developed by Anthropic in late 2024 to connect AI assistants to external tools and data sources. Despite being a relatively new standard, MCP has been adopted by all major technology and AI companies, including OpenAI, Google, Microsoft, and AWS. Octopus, too, has an MCP server that exposes an Octopus instance via MCP.</p><p>In this post, we’ll explore what MCP is, how it works, and why it’s essential for the future of AI.</p><h2 id="what-problem-does-mcp-solve">What problem does MCP solve?</h2><p>Most of us are familiar with tools like ChatGPT and code generation assistants like GitHub Copilot. These tools are built on Large Language Models (LLMs) that encapsulate a vast amount of knowledge and are used to answer questions, generate text, and write code. However, LLMs only know the state of the world up to their training cut-off date. 
Anita Lewis puts it like this in <a href="https://youtu.be/DZFgufNCvAo?t=261">The MCP Revolution: AWS Team’s Journey from Internal Tools to Open Source AI Infrastructure</a>:</p><blockquote><p>We have these incredibly powerful AI models, but they’ve essentially been living in these isolated bubbles. The knowledge cut-offs can range from anywhere between 6 months to two plus years.</p></blockquote><p>In addition, LLMs cannot natively access external data sources or tools. This means LLMs don’t have knowledge of your internal systems, databases, or APIs. This quote from <a href="https://youtu.be/SriucxA6LRY?t=1327">AWS re:Inforce 2025 - The state of cloud and GenAI risks: Uncovering the data with Orca Security</a> neatly summarizes the problem MCP solves:</p><blockquote><p>A really good analogy that I found online about this is that AI before MCP is like computers before the internet, right? They were isolated but really powerful. Now with MCPs, the potential for AI is really limitless, and MCP is a key enabler for what we’ve come to know as agentic AI platforms.</p></blockquote><p>Another way to think about MCP is provided by <a href="https://youtu.be/0kiYEKqV9DY?t=2054">GitHub Rubber Duck Thursday - let’s hack</a>:</p><blockquote><p>I would liken [MCP] to Chrome extensions. So you know how you can install Chrome extensions on your browser to do different things with your browser? So you can install MCP servers into your LLMs to do different things on your behalf like we use Chrome extensions.</p></blockquote><p>In short, MCP is a common standard that platforms implement to grant LLMs access to external data sources and perform actions on behalf of users.</p><h2 id="how-does-mcp-compare-to-rest-graphql-grpc">How does MCP compare to REST, GraphQL, gRPC?</h2><p>We’ve had common web-based protocols and standards for years now. Representational State Transfer (REST) has been a popular architectural style for designing networked applications for decades. 
GraphQL is an open-source query language for APIs providing a structured approach to data fetching across multiple data sources, and gRPC (a recursive acronym for gRPC Remote Procedure Calls) is a high-performance, open-source framework developed by Google that enables remote procedure calls between distributed systems.</p><p>All these protocols and standards have been successfully used at scale for many years. So why do we need MCP?</p><p>MCP is explicitly designed for AI models and their unique requirements. While the functionality provided by MCP overlaps with REST, GraphQL, and gRPC, MCP introduces several features to enable LLMs to interact with external systems more naturally and efficiently.</p><p><a href="https://modelcontextprotocol.io/docs/develop/build-server#core-mcp-concepts">MCP servers can provide three main types of capabilities</a>:</p><ul><li><strong>Resources</strong>: File-like data that can be read by clients (like API responses or file contents)</li><li><strong>Tools</strong>: Functions that can be called by the LLM (with user approval)</li><li><strong>Prompts</strong>: Pre-written templates that help users accomplish specific tasks</li></ul><p>Resources are much like HTTP GET operations. A resource returns data to be consumed by an LLM without side effects.</p><p>Tools are more like HTTP POST operations. A tool performs an action on behalf of the LLM and may have side effects.</p><p>Prompts are a unique feature of MCP and demonstrate how MCP is designed specifically for LLMs. While you can execute simple operations with concise prompts, more complex operations usually require verbose and explicit prompts. The prompts may also need to use specific language to refer to the resources and tools provided by an MCP server. 
By exposing prompts as a first-class concept, MCP servers can guide end users as they interact with LLMs.</p><p>Another benefit of MCP is that it provides a consistent standard for LLMs to interact with multiple external systems.</p><p>REST APIs can vary widely in their design and implementation, with standards like <a href="https://jsonapi.org/">JSON API</a> and <a href="https://datatracker.ietf.org/doc/html/draft-kelly-json-hal-11">HAL</a> offering unique approaches to designing REST APIs. GraphQL is more consistent, but exposing multiple data sources via a single GraphQL endpoint is a non-trivial task that usually requires developing a custom server. gRPC effectively abstracts away the networking layer and generates client and server classes for multiple languages, but focuses on a code-first approach that isn’t tailored to general-purpose clients like LLMs.</p><p>MCP allows a general-purpose client to execute arbitrary operations across multiple servers defined in a simple JSON file. This is important because much of the value from MCP-based workflows is the ability to combine many data sources and tools to accomplish complex tasks. The <a href="https://www.qodo.ai/wp-content/uploads/2025/06/2025-State-of-AI-Code-Quality.pdf">Qodo report: 2025 State of AI Code Quality</a> notes that:</p><blockquote><p>Agentic chat [uses] an average of 2.5 MCP tools per user message.</p></blockquote><p><a href="https://youtu.be/15XhkcQdSrI?t=978">Explore Model Context Protocol (MCP) on AWS! | AWS Show and Tell - Generative AI | S1 E9</a> goes further, saying:</p><blockquote><p>There’s been people using 40 - 50 MCP servers in one go which is pretty wild.</p></blockquote><p>The biggest challenge with existing standards is that the platforms you want to work with almost certainly implement a mix of REST, GraphQL, and gRPC. This creates a massive headache for consumers who are forced to implement glue logic to consume each system from an AI assistant. 
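</p><p>To illustrate the point about combining servers, a single <code>mcp.json</code> file can register several MCP servers side by side, and an agent can then draw on tools from any of them in one conversation. The sketch below pairs the Octopus server shown later in this post with the Model Context Protocol project’s reference filesystem server as a second example; the API key, server URL, and local path are placeholder values you would replace with your own:</p><pre><code>{
  "servers": {
    "octopusdeploy": {
      "command": "npx",
      "args": [
        "-y",
        "@octopusdeploy/mcp-server",
        "--api-key", "API-ABCDEFGHIJKLMNOP",
        "--server-url", "https://yourinstance.octopus.app"
      ]
    },
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/allowed/directory"
      ]
    }
  }
}</code></pre><p>With both servers registered, a single prompt could, for example, read a local configuration file and cross-reference it against an Octopus project, without any custom glue code.</p><p>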
MCP represents an opportunity for platforms to standardize on a single protocol for AI assistants, making it easier for users to integrate with multiple systems.</p><h2 id="how-do-you-use-mcp">How do you use MCP?</h2><p>To make use of MCP, you need two things:</p><ol><li>An MCP client. The client will typically expose a chat-based interface to the user and use an LLM to process user input and generate responses.</li><li>An MCP server. The server exposes resources, tools, and prompts to the client.</li></ol><p>To demonstrate how MCP works, we’ll use <a href="https://octopus.com/">Octopus</a> as the MCP server and <a href="https://github.com/features/copilot">Copilot Chat</a> in IntelliJ as the MCP client.</p><p>In IntelliJ (or any other JetBrains IDE), install the Copilot add-on, open the Copilot chat window, and select <code>Agent</code> in the chat toolbar:</p><p><img src="/blog/_astro/copilot-chat.B84frrF0_Z23C9cA.webp" alt="Screenshot of the copilot chat toolbar" loading="lazy" decoding="async" fetchpriority="auto" width="287" height="71"></p><p>Click the <strong>Configure tools</strong> icon and then click <strong>Add more tools</strong>. 
This opens the <code>mcp.json</code> file.</p><p>Add the following server definition to the <code>mcp.json</code> file, replacing the API key and server URL with your own values:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="json"><code><span class="line"><span style="color:#000000">{</span></span><span class="line"><span style="color:#0451A5"> "servers"</span><span style="color:#000000">: {</span></span><span class="line"><span style="color:#0451A5"> "octopusdeploy"</span><span style="color:#000000">: {</span></span><span class="line"><span style="color:#0451A5"> "command"</span><span style="color:#000000">: </span><span style="color:#A31515">"npx"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#0451A5"> "args"</span><span style="color:#000000">: [</span></span><span class="line"><span style="color:#A31515"> "-y"</span><span style="color:#000000">, </span></span><span class="line"><span style="color:#A31515"> "@octopusdeploy/mcp-server"</span><span style="color:#000000">, </span></span><span class="line"><span style="color:#A31515"> "--api-key"</span><span style="color:#000000">, </span></span><span class="line"><span style="color:#A31515"> "API-ABCDEFGHIJKLMNOP"</span><span style="color:#000000">, </span></span><span class="line"><span style="color:#A31515"> "--server-url"</span><span style="color:#000000">, </span></span><span class="line"><span style="color:#A31515"> "https://yourinstance.octopus.app"</span><span style="color:#000000">]</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000">}</span></span></code></pre><p>You can now enter a prompt like <code>List the projects in the "AI" space</code>. 
Copilot will use the Octopus MCP server to retrieve the list of projects from your Octopus instance:</p><p><img src="/blog/_astro/chat-response.DvIGqJVH_23YlvD.webp" alt="Copilot chat showing a list of projects in the 'AI' space" loading="lazy" decoding="async" fetchpriority="auto" width="535" height="386"></p><p>And that’s it! With a few minutes and a few lines of JSON, you can start chatting with your Octopus instance.</p><h2 id="conclusion">Conclusion</h2><p>MCP empowers AI agents to interact with multiple external systems through a natural language interface. Systems that previously required complex API integrations implemented with custom code are now accessible to anyone with knowledge of prompt engineering.</p><p>While MCP is a young standard, it has already been widely adopted by major technology companies and is poised to become the de facto standard for AI agent interoperability. Or, as <a href="https://youtu.be/yWkxb2kmUIk?t=996">Claude 4 + Claude Code + Strands Agents in Action | AWS Show & Tell</a> puts it:</p><blockquote><p>What really excited [me] about MCP is that all these enterprises have so many data silos, you know, structured, unstructured, semi-structured, just everywhere, right? 
And I feel like MCP is finally the way to break those data silos down in an effort to make it accessible to AI.</p></blockquote><p>Get started with the Octopus MCP server today with the instructions in the <a href="https://octopus.com/docs/octopus-ai/mcp/use-cases">Octopus documentation</a>.</p><p>Happy deployments!</p>]]></content> </entry> <entry> <title>Behind the scenes: Designing Argo CD in Octopus</title> <link href="https://octopus.com/blog/designing-argo-in-octopus" /> <id>https://octopus.com/blog/designing-argo-in-octopus</id> <published>2025-10-24T00:00:00.000Z</published> <updated>2025-10-24T00:00:00.000Z</updated> <summary>Learn about how we designed the integration between Argo CD's GitOps capabilities and Octopus Deploy.</summary> <author> <name>Kirsten Schwarzer, Octopus Deploy</name> </author> <content type="html"><![CDATA[<p>How could we seamlessly combine the best of GitOps with powerful deployment orchestration from Octopus?</p><p>That’s the challenge we took on during three days of ideation in Auckland earlier this year.</p><p>We filled whiteboards with drawings and sticky notes, building out a story map of what the ideal solution could look like.</p><figure><p><img src="/blog/img/designing-argo-in-octopus/auckland-ideation-session.jpeg" alt="Three Octonauts sitting in front of a whiteboard."></p></figure><p>Integrating with Argo CD was surfacing regularly in our customer interviews and we started getting comments about it on our roadmap. Fast forward a few months and we’ve just shipped the Early Access version of Argo CD in Octopus to Cloud customers.</p><p>Here’s a behind-the-scenes look at how we turned user feedback and sticky notes into real capabilities you can use today.</p><h2 id="going-from-discovery-to-making-design-decisions">Going from discovery to making design decisions</h2><p>The expansive story map made it clear that we’d have to break the solution into smaller iterations. 
We wanted to ship valuable capabilities and start getting feedback as quickly as possible.</p><p>The first shippable release would include:</p><ul><li>A step to update container images</li><li>A step to create and update manifests from templates</li><li>Committing directly to Git or creating pull requests</li><li>Connecting to Argo CD instances by installing a gateway</li><li>Seeing application health on the live status page</li></ul><h2 id="connecting-with-the-argo-cd-community-at-kubecon-london">Connecting with the Argo CD community at KubeCon London</h2><p>We used the opportunity at KubeCon to get feedback on early-stage prototypes, connect with folks in the Argo CD community, and get more data about questions like:</p><ul><li>How do Argo CD users feel about using annotations to map applications to Octopus projects, environments, and tenants?</li><li>How much would users still rely on the Argo CD UI or did they expect us to bring those capabilities into Octopus?</li></ul><h2 id="key-design-decisions">Key design decisions</h2><p>Here are a few of the design decisions we made and the rationale behind them:</p><h3 id="simplifying-step-setup">Simplifying step setup</h3><p>Since Argo CD steps require a few prerequisites, we made that process as linear as possible.</p><p>Conditional empty states on the deployment preview will tell you exactly what you need to configure for the step to run successfully.</p><figure><p><img src="/blog/img/designing-argo-in-octopus/argo-update-image-step-octopus.png" alt="The user interface for the Update Argo CD application image tags step."></p></figure><p>Since Argo CD instances aren’t deployment targets, we’ve added a deployment preview to show you which applications will be deployed with a particular step.</p><h3 id="making-gateway-installation-easy">Making gateway installation easy</h3><p>It was important for us to allow users to register existing Argo CD instances in Octopus.</p><p>To communicate with Argo, Octopus requires a 
gateway installed on the cluster. We opted to use a similar multi-step wizard to the Kubernetes agent to simplify gateway installation.</p><p>You need to provide a few details about your Argo CD instance, and then you get a custom-generated Helm command to install the gateway.</p><figure><p><img src="/blog/img/designing-argo-in-octopus/register-argo-cd-octopus.png" alt="A dialog to register an Argo CD instance in Octopus Deploy."></p></figure><h3 id="generating-annotations">Generating annotations</h3><p>Instead of using the Octopus UI to tell us which applications to deploy, we decided to use a Git-centric approach. We now help you create annotations that map your Argo CD applications to Octopus projects, environments, and tenants.</p><p>This approach also introduced some design challenges, since open source tools like Argo CD and Helm can support a broad range of use cases. We’re still fine-tuning our approach here to balance flexibility and simplicity.</p><p>If you’re interested in sharing your use case with us, <a href="https://calendly.com/d/cqyp-8fj-5pt/argo-in-octopus-30-minute-product-chat">schedule a 30-minute chat</a>.</p><h3 id="live-status--observability">Live status & observability</h3><p>When we built Kubernetes Live Object Status, we used Argo CD statuses to simplify future integrations between the tools.</p><p>To provide application developers with observability across multiple applications, we’re showing the status of your Argo apps on the live status page and we’ll be adding child object statuses soon.</p><p>We’re delivering this observability capability iteratively and plan to expand it to events and logs in the future.</p><figure><p><img src="/blog/img/designing-argo-in-octopus/argo-live-status-octopus.png" alt="Argo CD application live status in Octopus Deploy."></p></figure><h2 id="more-capabilities-coming-soon">More capabilities coming soon</h2><p>We’re not done yet.</p><p>Our team is planning to release more capabilities in the next few 
months and we’d love your feedback on the direction we’re taking.</p><h2 id="how-to-try-argo-cd-in-octopus">How to try Argo CD in Octopus</h2><figure><p><img src="/blog/img/designing-argo-in-octopus/argo-cd-steps-octopus-deploy.png" alt="The user interface for the step selection page in Octopus Deploy with Argo CD steps."></p></figure><p>These new capabilities are currently available in Early Access to all Cloud customers.</p><p>Add an Argo CD step to your process and start combining the best of GitOps with the power of Octopus.</p><p>Happy deployments!</p>]]></content> </entry> <entry> <title>Automatic API key invalidation coming in 2026</title> <link href="https://octopus.com/blog/invalidating-exposed-api-keys" /> <id>https://octopus.com/blog/invalidating-exposed-api-keys</id> <published>2025-10-20T00:00:00.000Z</published> <updated>2025-10-20T00:00:00.000Z</updated> <summary>Octopus Deploy will begin automatically invalidating exposed API keys starting in 2026.</summary> <author> <name>Colby Prior, Octopus Deploy</name> </author> <content type="html"><![CDATA[<p>Octopus will automatically invalidate exposed API keys detected by secret scanning partners starting in 2026. This enhancement strengthens our security posture and protects your deployments from unauthorized access.</p><p>We have been a GitHub secret scanning partner since 2022. This partnership has helped protect everyone by identifying Octopus API keys that were accidentally committed to public repositories. When GitHub detects an exposed key, it forwards the information to us, and we notify the affected user via email.</p><p>While notifications allow users to rotate compromised keys, this still leaves a window of vulnerability. An attacker could exploit an exposed key before the owner receives the notification and takes action. Starting in 2026, we will automatically invalidate these API keys. 
This will reduce the window for attackers, but it will cause disruptions for those with the exposed API keys.</p><h2 id="what-this-means-for-you-in-2026-onwards">What this means for you in 2026 onwards</h2><p>If one of your API keys is detected in a public repository, here’s what will happen:</p><ul><li>Our secret scanning partner detects the exposed key and notifies us.</li><li>We automatically invalidate the key to prevent unauthorized use.</li><li>You receive an email notification explaining what happened and how to create a new key.</li><li>The invalidated key will no longer work for any API calls or deployments. You’ll need to create a replacement key and update any automation or integrations that were using the old key.</li></ul><h2 id="how-can-i-best-prevent-exposing-my-api-keys-in-git">How can I best prevent exposing my API keys in Git?</h2><p>Octopus supports OIDC with GitHub Actions, which eliminates the need to keep Octopus API keys entirely. API keys cannot be exposed to Git if they don’t exist in the first place!</p><p><a href="https://octopus.com/docs/octopus-rest-api/openid-connect/github-actions">Learn more in our docs</a></p>]]></content> </entry> <entry> <title>Introducing Argo CD in Octopus</title> <link href="https://octopus.com/blog/argo-cd-in-octopus" /> <id>https://octopus.com/blog/argo-cd-in-octopus</id> <published>2025-10-19T00:00:00.000Z</published> <updated>2025-10-19T00:00:00.000Z</updated> <summary>Argo CD is now integrated into Octopus to simplify your Kubernetes deployments</summary> <author> <name>Robert Erez, Octopus Deploy</name> </author> <content type="html"><![CDATA[<p>Argo CD is the leading GitOps solution for Kubernetes. It excels at syncing manifests to clusters and gives engineers a powerful UI to verify and troubleshoot deployments.</p><p>However Argo CD was never designed to handle the full software delivery cycle. 
While platform teams can use it for cluster bootstrapping and configuration management, most real-world delivery pipelines require multiple environments, tests, security and compliance checks, change management, and many other steps. Today, many teams work around this gap by stitching together tools like Jenkins or GitHub Actions and Argo CD with custom scripting. This quickly becomes brittle. As the number of Argo CD instances, Applications, and clusters grows, so does the maintenance overhead. Engineers also lose centralized visibility and must jump between different Argo CD dashboards to understand deployment status.</p><p>Octopus solves deployment challenges at scale. As a complete CD platform, it provides guardrails, orchestration, and visibility. We realised that combining Argo CD and Octopus would allow our users to get a solution that can scale and has both great CD and GitOps capabilities. Now, with built-in support for Argo CD, teams can eliminate custom glue code and combine the strengths of both approaches out of the box.</p><p>This post introduces Argo CD in Octopus and shows how to get started.</p><p>Argo CD in Octopus is currently in <strong>Early Access</strong>, rolling out to Octopus Cloud in early October.</p><h2 id="what-is-argo-cd">What is Argo CD?</h2><p>Argo CD is a declarative, GitOps-based tool for deploying Kubernetes manifests. Following the <a href="https://opengitops.dev/">GitOps principles</a>, it continuously monitors Git repositories for changes and ensures that the cluster state matches the desired state described in those repositories.</p><h3 id="why-is-it-good">Why is it good?</h3><p>Argo CD makes Kubernetes delivery predictable and transparent. Configuration is stored as code in Git, so every deployment is versioned, auditable, and easy to roll back. Engineers can rely on Git history to know exactly what is running in a cluster. 
Developers don’t need direct cluster access—they simply commit changes using familiar tools and workflows, and Argo CD applies them.</p><p>A major advantage of Argo CD over many other GitOps tools is its built-in UI. The dashboard gives teams real-time visibility into application health and sync status, reducing the need for extra tooling to verify or troubleshoot deployments.</p><h3 id="argo-cds-limitations">Argo CD’s limitations</h3><p>It’s important to be clear about what Argo CD was designed to do and what it was not. Argo CD excels at synchronizing manifests to Kubernetes, but it is not an end-to-end Continuous Delivery solution. In practice, teams often combine it with CI/CD tools to orchestrate broader delivery pipelines. Many of Argo CD’s limitations show up when it is stretched beyond its intended scope.</p><h4 id="environment-promotions">Environment promotions</h4><p>Successful software delivery involves more than updating a single cluster. Changes typically progress through a series of environments, with tests and approvals at each stage. Some teams try to model this workflow directly in Git, automating Git updates with scripts, but Git was not built for environment promotion management. As a result, promotion logic can become brittle and hard to maintain, especially at scale.</p><p><strong>Octopus solves this</strong> with advanced <a href="https://octopus.com/docs/releases/lifecycles">Environment Lifecycle modelling</a>, providing enforced guardrails that reduce the risk of premature production changes.</p><h4 id="complex-orchestrations">Complex orchestrations</h4><p>Argo CD’s job is to sync Kubernetes manifests. Real-world deployments often require much more — database migrations, external cloud resource updates, integration tests, compliance checks, notifications, and monitoring. Teams can add scripts or tools to cover these steps, but doing so outside of Argo CD creates fragmentation. 
This disjointed approach is error-prone and makes it difficult to reason about the state of the system as a whole.</p><p>Enterprises also deploy more than just Kubernetes workloads. VMs, serverless functions, and SaaS integrations are common, but Argo CD alone cannot unify these deployment targets.</p><p><strong>Octopus solves this</strong> through its broad library of <a href="https://octopus.com/docs/projects/steps">Steps</a> and <a href="https://octopus.com/docs/infrastructure/deployment-targets">Deployment Targets</a>, covering Kubernetes, Argo CD, and much more.</p><h4 id="rich-rbac-controls">Rich RBAC controls</h4><p>While Argo CD integrates with Kubernetes RBAC, it lacks fine-grained role-based access control for approvals, gates, and deployment responsibilities across environments. For organizations in regulated industries, this can make it difficult to enforce compliance requirements or maintain clear audit trails.</p><p>The audit trail found in Git history might provide some record after the fact, but Git was never built with fine-grained permissions in mind, particularly when coupled with the complex access configurations required for multi-team environment promotions or more advanced compliance and governance requirements.</p><p><strong>Octopus solves this</strong> through the customizable <a href="https://octopus.com/docs/best-practices/octopus-administration/users-roles-and-teams">RBAC controls</a>, external <a href="https://octopus.com/docs/approvals">ITSM integrations</a> and (coming soon) <a href="https://octopus.com/docs/platform-hub/policies">Policies</a>.</p><h4 id="multi-cluster-management">Multi-cluster management</h4><p>Argo CD supports several different installation topologies for managing multiple clusters, <a href="https://octopus.com/docs/platform-hub/policies">each with its own pros and cons</a>.</p><p>The “hub and spoke” model allows a single Argo CD instance to manage multiple clusters; however, it requires opening 
up access to each cluster and provides weaker isolation and security. A popular alternative model is installing a “standalone” Argo CD instance in each required cluster. This reduces some scaling and isolation concerns, but introduces additional management and access complexity in its place.</p><p><strong>Octopus solves this</strong> by providing a single pane of glass through which you can view your applications’ sync state, regardless of the underlying clusters they are running on. While Octopus doesn’t currently manage the Argo CD instances themselves, the projects that your team cares about can all be presented in one place. The job of keeping the state in sync remains with Argo CD, but this state is directed by Octopus through your Git manifests.</p><h2 id="introducing-argo-cd-with-octopus">Introducing Argo CD with Octopus</h2><p>Octopus has had native support for Kubernetes for many years, most recently with our own live object status capability. We know that some customers want to leverage the strengths that Argo CD provides in supporting GitOps workflows. We can now provide capabilities in Octopus to get the best of both products, leveraging the strong points of each without sacrificing what makes either Argo CD or Octopus useful, and without hiding the fact that Argo CD is playing its role.</p><p>Let’s take a look at how this new feature works. This blog post won’t explore all the possible configurations, but will instead run through some of the highlights and discuss some of our suggested best practices to make the most of these new capabilities.</p><h3 id="declarative-integration">Declarative integration</h3><p>In providing Argo CD integration into Octopus Deploy, it was important to us that we provide a model that would align with the way that typical users of Argo CD want to work. 
This means that rather than thinking of Argo CD as a typical Octopus target, we instead automatically use declarative annotations stored in the Application manifests from connected instances.</p><p>Since each Application in an Argo CD instance typically maps to a specific deployable app in a specific environment (perhaps per cluster), a series of annotations can be added to the declarative Application manifest to signal to Octopus which Applications are relevant for a given Project and Environment deployment context.</p><p>There’s no need to configure each application in Octopus: just connect your Argo CD instance to Octopus, and your declarative configuration does the rest. This configuration more naturally fits into the GitOps mindset that is prevalent in Kubernetes pipelines.</p><h3 id="deployments-are-commits">Deployments are commits</h3><p>In Octopus, a deployment is the execution of a pipeline in the context of an environment. A successful deployment means every step in the pipeline has been completed. For Kubernetes, this might involve applying manifests with <code>kubectl</code> or upgrading Helm charts with <code>helm upgrade</code>.</p><p>Argo CD, however, works differently. Its core responsibility is to synchronize the contents of a Git repository with a Kubernetes cluster. This means that during a deployment, Octopus modifies the desired state in Git, and Argo CD takes care of reconciling those changes to the cluster.</p><p>We’ve modelled Git updates as deployment steps in Octopus. This lets you combine GitOps-driven updates with the rest of your delivery process. 
For example, you can run a database migration before promoting an Application through Argo CD, execute smoke tests after the update, and send a Slack notification if those tests fail.</p><h3 id="argo-cd-connectivity">Argo CD connectivity</h3><p>The first thing you will need to do is to set up the connection between Octopus Deploy and your Argo CD instances.</p><p>Connecting your Argo CD instance involves running an Octopus-Argo CD Gateway in the cluster alongside each Argo CD instance, and we provide a similar helpful Helm installation flow to that used by our popular <a href="https://octopus.com/docs/kubernetes/targets/kubernetes-agent">Kubernetes Agent</a>.</p><p><img src="/blog/_astro/octopus-argo-gateway.BozMTYdZ_Qp3A0.webp" alt="Octopus Argo CD Gateway" loading="lazy" decoding="async" fetchpriority="auto" width="1510" height="592"></p><p>As part of this process, there is also the option to scope the connection to specific environments or, when relevant, tenants. Used in conjunction with Octopus Deploy’s RBAC system, this configuration provides one example where stricter controls can be optionally placed around your Argo CD instance and, by extension, deployments to the applications it manages.</p><h3 id="new-octopus-steps">New Octopus steps</h3><p>This first release of Argo CD integration introduces two new steps that make promotions of Argo CD Applications easier and safer.</p><h4 id="update-argo-cd-application-image-tags">Update Argo CD Application Image Tags</h4><p>Most changes to Kubernetes manifests are simple container image tag updates. 
For every infrastructure change, there are usually dozens—or even hundreds—of new application versions to deploy.</p><p>If your team manages manifest files through a separate promotion process (manual or automated) but needs a reliable way to detect new builds, update container tags, and safely promote them through environments, the <strong>Update Argo CD Application Image Tags</strong> step handles this.</p><p><img src="/blog/_astro/deploy-is-commit-1.DbnNJPg3_Z2oCDSi.webp" alt="Argo CD watches repository for changes" loading="lazy" decoding="async" fetchpriority="auto" width="748" height="438"></p><p>When an Octopus deployment runs, it looks for Argo CD Applications annotated for the relevant project and environment. For each discovered Application (across one or many Argo CD instances), Octopus retrieves the Git location, updates the image tags in the manifests (or Helm values files), and commits the changes.</p><p><img src="/blog/_astro/deploy-is-commit-2.DYiFWaRJ_1DpuGJ.webp" alt="Octopus updates manifest in repository" loading="lazy" decoding="async" fetchpriority="auto" width="748" height="438"></p><p>Argo CD then detects the change and syncs the updated manifests to the cluster.</p><p><img src="/blog/_astro/deploy-is-commit-3.EVlpM1q-_11hrq4.webp" alt="Desired state applied to cluster" loading="lazy" decoding="async" fetchpriority="auto" width="724" height="415"></p><h4 id="update-argo-cd-application-manifests">Update Argo CD Application Manifests</h4><p>The second step supports more complex scenarios and introduces a stricter approach to managing manifests.</p><p>Unlike the Update Argo CD Application Image Tags step, which modifies existing files in an Application’s <code>source</code> folder, this step copies one or more files from a designated <code>input</code> folder and commits them into the Application <code>source</code>.</p><p><img src="/blog/_astro/template-step-1.Bp63AZ1U_cwAVJ.webp" alt="Argo CD watches repository for changes" loading="lazy" 
decoding="async" fetchpriority="auto" width="1236" height="450"></p><p>During a deployment, Octopus commits the selected files. Any files with the same name are overwritten.</p><p><img src="/blog/_astro/template-step-2.CuJUVEKz_Z2w0Eu2.webp" alt="Argo CD watches repository for changes" loading="lazy" decoding="async" fetchpriority="auto" width="1096" height="439"></p><p>Argo CD then syncs those changes to the cluster.</p><p><img src="/blog/_astro/template-step-3.vp21_KjR_29dset.webp" alt="Argo CD watches repository for changes" loading="lazy" decoding="async" fetchpriority="auto" width="1132" height="427"></p><p>What does this enable?</p><ul><li><strong>Full manifest control</strong> – You can modify, add, or remove any fields in your manifests, not just image tags.</li><li><strong>Application creation</strong> – You can create manifests for entirely new Argo CD Applications. For example, when creating a new tenant, environment, or cluster, Argo CD (with ApplicationSets if configured) will create a new Application, and Octopus will pick it up and generate the manifests with the next deployment.</li><li><strong>Stricter governance</strong> – By overwriting files, this step enforces a single source of truth. Even if someone makes direct changes in an Application’s <code>source</code> folder, Octopus can reset them during the next deployment. This ensures only tested, approved configurations flow into production.</li></ul><p>What can go in the <code>input</code> folder?</p><ul><li><strong>Shared files</strong> – The simplest option is a single file (such as Helm values) applied across environments. Updating the file before deployment creates a clear, auditable history of changes in Git.</li><li><strong>Environment-specific configurations</strong> – For per-environment differences, you can either maintain separate folders or treat files as templates. 
Octopus variables can replace placeholders during deployment, injecting the correct values for each environment.</li></ul><p>This flexibility lets teams choose between lightweight image updates and strict manifest ownership, depending on their governance and security needs.</p><h2 id="best-practices">Best practices</h2><p>We designed this feature with established Argo CD best practices in mind. Following them helps you get the most value from Octopus and avoid common pitfalls. This isn’t a full guide to GitOps and Argo CD best practices, but here are a few that are especially relevant when using Octopus with Argo CD:</p><ul><li><strong>Avoid putting environment configuration directly in Argo CD Application manifests.</strong> Applications should describe how to deploy, not encode environment-specific details.</li><li><strong>Keep values files separate from the application source.</strong> Store them in dedicated locations so they can be promoted and managed consistently across environments.</li><li><strong>Separate manifest repositories from application code repositories.</strong> This makes pipelines cleaner and avoids coupling application development with infrastructure changes.</li><li><strong>Be cautious with Helm for internal applications.</strong> Helm can be useful, but for simple services, you may not need the overhead. Additionally, if Applications from different environments reference the same Helm chart, updating the chart will result in simultaneous changes to all Applications. A more secure approach would be to promote these changes through the environments instead. 
If you use Helm, keep a copy of the chart per environment to prevent cross-environment drift.</li></ul><p>For more context, see <a href="https://codefresh.io/blog/argo-cd-anti-patterns-for-gitops/">Argo CD anti-patterns</a> from one of our Argo CD maintainers, and this guide on <a href="https://codefresh.io/blog/how-to-structure-your-argo-cd-repositories-using-application-sets/">structuring repositories with ApplicationSets</a>.</p><h2 id="current-limitations--future-plans">Current limitations & future plans</h2><p>Our goal with this first offering is to provide the core capabilities required to perform a deployment to a Kubernetes cluster via Argo CD. As such, we plan to continue investing in further features as well as improve the capabilities initially delivered.</p><p>Talking to Argo CD users, we found that about 50% of those polled liked being able to make direct changes to the manifests that Argo CD is watching to quickly push changes that might bypass typical promotion mechanisms. The other 50% preferred managing centralized templating repositories using tools like Helm or Kustomize and enforcing that all changes must be promoted through successive environments. Confusingly, a not-insignificant percentage wanted the capabilities of both patterns at once!</p><p>To move fast, we have built on top of Octopus’s current systems, and this means that particularly large repositories may take time to clone. Since industry best practice is to separate your application code repository from your infrastructure manifests, we expect that this shouldn’t be a problem for most users; however, we are investigating some promising improvements.</p><h2 id="how-to-try-it-out">How to try it out</h2><p>Argo CD in Octopus is currently in <strong>Early Access</strong>, rolling out to Octopus Cloud in early October.</p><p>If you’re an Octopus Cloud user, there’s nothing extra required to enable the feature. 
Simply open the <strong>Argo CD Instances</strong> section under <strong>Infrastructure</strong>, and connect one of your Argo CD instances to Octopus. From there, you can start modeling deployments that combine GitOps and Continuous Delivery out of the box.</p><h2 id="conclusion">Conclusion</h2><p>Argo CD is a powerful GitOps tool for Kubernetes, but it wasn’t built to manage the full software delivery lifecycle. Octopus complements Argo CD by adding environment promotions, orchestration across diverse workloads, fine-grained RBAC, and centralized visibility across clusters.</p><p>With Argo CD integration, Octopus lets teams combine the strengths of GitOps and Continuous Delivery without building custom automation. You get the reliability of Git-driven deployments and the safety, governance, and flexibility of a full CD platform—all in one place.</p><p>Happy deployments!</p>]]></content> </entry> <entry> <title>Octopus partners with Arm to enable software delivery at scale</title> <link href="https://octopus.com/blog/arm-partnership" /> <id>https://octopus.com/blog/arm-partnership</id> <published>2025-10-17T00:00:00.000Z</published> <updated>2025-10-17T00:00:00.000Z</updated> <summary>Our partnership with Arm brings centralized, secure, and repeatable software delivery to Arm-powered systems.</summary> <author> <name>Madalina Iosif, Octopus Deploy</name> </author> <content type="html"><![CDATA[<p>Octopus sets the standard for Continuous Delivery (CD) at scale, helping thousands of customers orchestrate deployments across various environments, regardless of whether the target hosts are on-prem, hybrid, or multi-cloud. 
It supports complex application deployments of any flavour, from heritage monoliths to containerized microservices.</p><p>With the rise of AI, technology is becoming increasingly prevalent in business and daily life, requiring computing platforms to keep pace with rapid innovation and the high volume of data being processed.</p><p>This is where Arm steps in to provide the industry’s most efficient and highest-performing compute platform. More than 325 billion Arm-based devices have been shipped to date. Arm accelerates innovation across sectors such as high-tech, automotive, healthcare, and telco, with a focus on cloud computing and IoT.</p><p>We partnered with Arm to make software deployments on Arm-powered infrastructure secure, repeatable, and scalable.</p><h2 id="why-is-a-robust-cd-solution-necessary-when-deploying-at-scale">Why is a robust CD solution necessary when deploying at scale?</h2><p>Deploying one application to one host is an easy task. However, managing this at scale is challenging without an enterprise-grade solution.</p><p>Risks such as human error or DIY scripts with minimal security can lead to major incidents, especially in highly regulated or critical industries like automotive or healthcare. They also slow deployments, leaving companies lagging behind their peers in terms of innovation.</p><p>A centralized CD solution with built-in security and the capability to deploy to thousands of targets using a repeatable process ensures efficiency, quick rollback in case of an incident, and proper governance and compliance. These are all imperative to ensuring technology is not only being built, but also used.</p><h2 id="the-octopus-deploy-and-arm-partnership">The Octopus Deploy and Arm partnership</h2><p>Octopus has been supporting deployments to Arm through the Tentacle agent since 2021. 
If you are curious about how this works, check out <a href="https://octopus.com/blog/tentacle-on-arm">our blog post on the topic</a>.</p><p>As both Octopus and Arm have evolved over the past few years, the partnership brings benefits for our joint customers in two main areas:</p><h3 id="continuous-delivery-at-scale-from-x86-to-arm-servers-to-reduce-infrastructure-cost">Continuous Delivery at scale from x86 to Arm servers, to reduce infrastructure cost</h3><p>Octopus can target both x86 and Arm-based servers, supporting the same deployment process regardless of the target host. This way, organizations can migrate or extend workloads from x86 to Arm cloud instances (AWS Graviton, Azure Arm VMs, or Axion-based Google Cloud instances) to optimize cost while keeping a single pipeline. Deployments are fully automated, repeatable, and can scale to thousands of targets.</p><p>For example, Octopus Deploy can deploy the same application to x86 EC2 instances and AWS Graviton. You reduce compute costs and improve performance, while the transition remains seamless to your users.</p><h3 id="compliant--secure-cd-for-kubernetes-edge-deployments-to-reduce-risk">Compliant & secure CD for Kubernetes edge deployments, to reduce risk</h3><p>Running the Octopus Kubernetes Agent natively on Arm-based Kubernetes clusters at the edge creates a secure connection and eliminates the need for an inbound connection. 
As with any other deployment in Octopus, you can use encrypted communication, role-based access control, and auditable approvals to ensure compliance.</p><p>Moreover, using Runbooks in Octopus, you can automate maintenance tasks such as patching, certificate rotation, and secure updates across distributed Arm-powered Kubernetes clusters at the edge.</p><p>A practical example is a retail chain that uses Octopus to securely roll out point-of-sale software updates to thousands of Arm-powered edge devices across stores, with a single deployment process and full auditability for compliance teams.</p><h2 id="curious-to-learn-more">Curious to learn more?</h2><p>If you’re an Arm customer and would like to test Octopus for your Continuous Delivery, you can <a href="https://octopus.com/start">sign up for a free Octopus trial</a> or <a href="https://octopus.com/lp/schedule-a-demo">request a demo</a>.</p><p>You can also see how Octopus supports deployments to Arm devices by visiting us at GitHub Universe in San Francisco, October 28–29, 2025.</p><p>Happy deployments!</p>]]></content> </entry> <entry> <title>Using Platform Hub to increase Supply Chain Security</title> <link href="https://octopus.com/blog/supply-chain-security-with-platform-hub" /> <id>https://octopus.com/blog/supply-chain-security-with-platform-hub</id> <published>2025-10-15T00:00:00.000Z</published> <updated>2025-10-15T00:00:00.000Z</updated> <summary>Learn how Platform Hub can help improve supply chain security in Octopus Deploy.</summary> <author> <name>Bob Walker, Octopus Deploy</name> </author> <content type="html"><![CDATA[<p>Imagine this scenario: An application is built on Monday afternoon. That version is deployed to a test environment on Tuesday morning for verification. Testing is successful, and the production deployment occurs on Wednesday morning. But no one knew that, early on Tuesday morning, a third-party package had issued an update that closes a critical <a href="https://www.cve.org/">CVE</a>. 
Ideally, a process exists to inform application teams about the vulnerability before testing starts on Tuesday, so the fix can be included in the production deployment on Wednesday.</p><p>With Platform Hub, solving that problem at scale is easier than ever. In this post, I will walk you through the steps to increase supply chain security in Octopus Deploy using the <a href="https://octopus.com/docs/platform-hub/process-templates">Process Templates</a> and <a href="https://octopus.com/docs/platform-hub/Policies">Policies</a> included in Platform Hub.</p><h2 id="nomenclature">Nomenclature</h2><p>If you read <a href="https://octopus.com/blog/supply-chain-security-with-github-and-octopus-deploy">my previous post on supply chain security</a>, you are familiar with SBOMs, Provenance, and Attestations. For those who haven’t read that post, I’ve included the definitions below to make it easier to follow along.</p><ul><li><strong>SBOM</strong> - Software Bill of Materials - a list of all the third-party libraries (and their third-party libraries) used to create the build artifact (container, .zip files, jar files, etc.).</li><li><strong>Provenance</strong> - the record of who created the software change, how it was modified and built, and what inputs went into it. It shows how the build artifact was built.</li><li><strong>Attestation</strong> - a cryptographically verifiable statement that asserts something about an artifact, specifically its Provenance. It is similar to the notary seal on a document. It doesn’t show the whole process, but it certifies its validity.</li></ul><p>SBOMs, Provenance, and Attestations are intertwined. 
Think of it like a cake.</p><ul><li>SBOMs are the ingredient list.</li><li>Provenance is the recipe and kitchen log (who cooked it, when, and with which tools).</li><li>Attestation is a signed certificate that proves the ingredient list, recipe, and cooking process are trustworthy.</li></ul><h2 id="responsibilities-differences-between-build-servers-and-octopus-deploy">Differences in responsibility between build servers and Octopus Deploy</h2><p>This post focuses on using Process Templates and Policies within Octopus Deploy. However, they rely on artifacts created by the build server. For example, the build server will create the SBOM, while Octopus Deploy will scan the SBOM for package references with known vulnerabilities. Below is a table showing the differences in responsibility between build servers and Octopus Deploy.</p><div class="table-wrap"> <table><thead><tr><th>Build Server </th><th>Octopus Deploy </th></tr></thead><tbody><tr><td>Generating SBOMs </td><td>Attaching the SBOM to the release or forwarding it to a third-party tool like <a href="https://ortelius.io/">Ortelius</a></td></tr><tr><td>Scanning third-party packages referenced in source code for known vulnerabilities </td><td>Scanning for known vulnerabilities within package versions listed in the SBOM </td></tr><tr><td>Generating Attestations </td><td>Verifying attestations </td></tr><tr><td>Scanning recently built containers for known vulnerabilities </td><td>Scanning containers about to be deployed for known vulnerabilities </td></tr><tr><td>Creating a build information file for Octopus Deploy to consume </td><td>Using the build information file to update third-party issue trackers like JIRA </td></tr></tbody></table></div><h2 id="tooling-used">Tooling used</h2><p>For my build server, I’m using GitHub Actions because it includes built-in Attestation generation. 
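For context, verifying one of those attestations later comes down to a single GitHub CLI call. A minimal sketch, assuming <code>gh</code> (2.49+ for the <code>oci://</code form) is installed and authenticated; the image and owner below are placeholders, not real projects:

```shell
# Sketch only: verify GitHub-generated provenance for a container image.
# IMAGE and OWNER are placeholders; gh must be installed and authenticated
# (for example via the GH_TOKEN environment variable).
IMAGE="oci://index.docker.io/example-org/example-app:1.2.3"
OWNER="example-org"

if command -v gh >/dev/null 2>&1; then
  if gh attestation verify "$IMAGE" --owner "$OWNER"; then
    echo "attestation verified"
  else
    echo "attestation verification failed" >&2
  fi
else
  echo "gh CLI not installed; skipping verification"
fi
```

For file artifacts (zip, JAR, NuGet), the same command takes a file path instead of an `oci://` reference.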
I’ll use <a href="https://trivy.dev/">Trivy</a> for vulnerability scanning and SBOM generation.</p><ul><li><strong>SBOM generation</strong> - GitHub Actions will use Trivy to create an SBOM in the <code>spdx-json</code> format via the <a href="https://github.com/aquasecurity/trivy-action">AquaSecurity/trivy-action</a> step.</li><li><strong>SBOM scanning</strong> - Octopus Deploy will use Trivy to scan the SBOM for known fixed vulnerabilities.</li><li><strong>Attestation generation</strong> - GitHub Actions will use the built-in step <a href="https://github.com/actions/attest-build-provenance">actions/attest-build-provenance</a> to create the attestation.</li><li><strong>Attestation verification</strong> - Octopus Deploy will use the <code>gh attestation verify</code> command provided by the <a href="https://cli.github.com/">GitHub CLI</a> to verify the attestation.</li><li><strong>Container scanning post-build</strong> - GitHub Actions will use Trivy to scan the container for known fixed vulnerabilities after the container is built.</li><li><strong>Container scanning pre-deployment</strong> - Octopus Deploy will use Trivy to scan the container for known fixed vulnerabilities before a deployment.</li></ul><p>In Octopus Deploy, I opted for <a href="https://octopus.com/docs/projects/steps/execution-containers-for-workers">execution containers</a> instead of installing Trivy directly on my workers. There are two execution containers with Trivy installed that you can use today:</p><ul><li><a href="https://hub.docker.com/r/octopuslabs/trivy-workertools">octopuslabs/trivy-workertools</a> includes Trivy, PowerShell, Python, and the Octopus CLI.</li><li><a href="https://hub.docker.com/r/octopuslabs/github-workertools">octopuslabs/github-workertools</a> includes Git, the GitHub CLI, Trivy, PowerShell, Python, and the Octopus CLI. 
The <code>DOCKERFILE</code> for both execution containers is in the <a href="https://github.com/OctopusDeployLabs/workertools">WorkerTools</a> GitHub repository.</li></ul><h2 id="process-templates">Process Templates</h2><p>My Process Template will contain all the necessary logic to attach the SBOM to the release, scan the SBOM for known vulnerabilities, verify the attestations from GitHub, and scan containers for known third-party vulnerabilities.</p><p>The <a href="https://octopus.com/docs/platform-hub/process-templates">Process Template documentation</a> provides a step-by-step guide to creating Process Templates. Instead of a step-by-step guide, I’ll walk you through the specific configuration for my Process Template.</p><h3 id="process-template-configuration">Process Template Configuration</h3><p>Following our <a href="https://octopus.com/docs/platform-hub/process-templates/best-practices">best practices</a>, I created a Process Template focused on supply chain security, <code>Deploy Process - Attach SBOM and Verify Build Artifacts</code>.</p><figure><p><img src="/blog/img/supply-chain-security-with-platform-hub/list-of-process-templates.png" alt="List of Process Templates on an instance with an arrow pointing to the Process Template to attach SBOM and build artifacts">.</p></figure><p>The Process Template currently has two steps.</p><ol><li>The first step will extract the SBOM from a package, attach it as a deployment artifact, and then run Trivy to scan for known fixed vulnerabilities.</li><li>The second step will loop through a list of packages and containers, run <code>gh attestation verify</code> on each item, and run Trivy on any containers for any known fixed vulnerabilities.</li></ol><figure><p><img src="/blog/img/supply-chain-security-with-platform-hub/process-template-steps.png" alt="List of steps in the example Process Template"></p></figure><h3 id="process-template-parameters">Process Template parameters</h3><p>For my Process Template, there are 
four Process Template parameters.</p><ol><li><code>Template.SBOM.Artifact</code> - a zip file containing the SBOM for a specific application.</li><li><code>Template.Git.AuthToken</code> - the GitHub PAT required for <code>gh attestation verify</code> to function properly.</li><li><code>Template.Verify.Workerpool</code> - the worker pool to run all the steps.</li><li><code>Template.Verify.ExecutionContainerFeed</code> - the container feed for the execution containers.</li></ol><figure><p><img src="/blog/img/supply-chain-security-with-platform-hub/process-template-parameters.png" alt="List of parameters for the Process Template"></p></figure><h3 id="processing-the-sbom">Processing the SBOM</h3><p>The first step uses the <code>Template.SBOM.Artifact</code>, <code>Template.Verify.Workerpool</code>, and <code>Template.Verify.ExecutionContainerFeed</code> parameters. As the producer of this step, I opted to hardcode the execution container instead of passing it in as a parameter. The consumer shouldn’t need to worry about that configuration.</p><figure><p><img src="/blog/img/supply-chain-security-with-platform-hub/attach-sbom-and-run-trivy-step.png" alt="Step to attach the SBOM and run Trivy"></p></figure><p>I opted for inline scripts because Octopus Deploy stores Process Templates in Git. Referencing a third-party repo felt redundant. 
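Outside of Octopus, the scan at the heart of that inline script reduces to a single Trivy CLI call. Below is a standalone sketch; the SBOM path is a placeholder, and <code>--exit-code 1</code> is added so the scan result can drive the branch (by default, Trivy exits 0 even when it reports findings):

```shell
# Sketch only: the core SBOM check as a standalone script.
# "./myapp.spdx.json" is a placeholder path; trivy must be on the PATH.
scan_sbom() {
  # --ignore-unfixed skips findings with no available fix;
  # --exit-code 1 makes trivy return non-zero when findings remain.
  trivy sbom "$1" \
    --severity "MEDIUM,HIGH,CRITICAL" \
    --ignore-unfixed \
    --exit-code 1 \
    --quiet
}

if command -v trivy >/dev/null 2>&1; then
  if scan_sbom ./myapp.spdx.json; then
    echo "SBOM clean: no new fixed vulnerabilities"
  else
    echo "Vulnerabilities found: update package references and rebuild" >&2
  fi
else
  echo "trivy not installed; skipping scan"
fi
```

The same guard-and-branch shape is what the PowerShell version expresses with <code>$LASTEXITCODE</code>.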
Below is the script I used.</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="powershell"><code><span class="line"><span style="color:#001080">$OctopusEnvironmentName</span><span style="color:#000000"> = </span><span style="color:#001080">$OctopusParameters</span><span style="color:#000000">[</span><span style="color:#A31515">"Octopus.Environment.Name"</span><span style="color:#000000">]</span></span><span class="line"><span style="color:#001080">$extractedPath</span><span style="color:#000000"> = </span><span style="color:#001080">$OctopusParameters</span><span style="color:#000000">[</span><span style="color:#A31515">"Octopus.Action.Package[Template.SBOM.Artifact].ExtractedPath"</span><span style="color:#000000">]</span></span><span class="line"></span><span class="line"><span style="color:#795E26">Write-Host</span><span style="color:#A31515"> "The SBOM extracted file path is this value </span><span style="color:#001080">$extractedPath</span><span style="color:#A31515">"</span></span><span class="line"><span style="color:#000000"> </span></span><span class="line"><span style="color:#001080">$sbomFiles</span><span style="color:#000000"> = </span><span style="color:#795E26">Get-ChildItem</span><span style="color:#000000"> -Path </span><span style="color:#001080">$extractedPath</span><span style="color:#000000"> -Filter </span><span style="color:#A31515">"*.json"</span><span style="color:#000000"> -Recurse</span></span><span class="line"></span><span class="line"><span style="color:#AF00DB">foreach</span><span style="color:#000000"> (</span><span style="color:#001080">$sbom</span><span style="color:#AF00DB"> in</span><span style="color:#001080"> $sbomFiles</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#000000">{</span></span><span class="line"><span style="color:#795E26"> Write-Host</span><span style="color:#A31515"> "Attaching </span><span 
style="color:#0000FF">$(</span><span style="color:#001080">$sbom</span><span style="color:#795E26">.FullName</span><span style="color:#0000FF">)</span><span style="color:#A31515"> as an artifacts"</span></span><span class="line"><span style="color:#795E26"> New-OctopusArtifact</span><span style="color:#000000"> -Path </span><span style="color:#001080">$sbom</span><span style="color:#795E26">.FullName</span><span style="color:#000000"> -Name </span><span style="color:#A31515">"</span><span style="color:#001080">$OctopusEnvironmentName</span><span style="color:#A31515">.SBOM.JSON"</span><span style="color:#000000"> </span></span><span class="line"></span><span class="line"><span style="color:#795E26"> Write-Host</span><span style="color:#A31515"> "Running trivy to scan the SBOM for any new vulnerabilities since the build was run"</span></span><span class="line"><span style="color:#000000"> trivy sbom </span><span style="color:#001080">$sbom</span><span style="color:#795E26">.FullName</span><span style="color:#000000"> --severity </span><span style="color:#A31515">"MEDIUM,HIGH,CRITICAL"</span><span style="color:#000000"> --ignore-unfixed --quiet</span></span><span class="line"></span><span class="line"><span style="color:#AF00DB"> if</span><span style="color:#000000"> (</span><span style="color:#001080">$LASTEXITCODE</span><span style="color:#000000"> -eq </span><span style="color:#098658">0</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#795E26"> Write-Highlight</span><span style="color:#A31515"> "Trivy successfully scanned the SBOM and no new vulnerabilities were found in the referenced third-party libraries."</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#AF00DB"> else</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#795E26"> 
Write-Error</span><span style="color:#A31515"> "Trivy found vulnerabilities that must be fixed before this application version can proceed. Please update the package references and rebuild the application."</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000">} </span></span></code></pre><h3 id="verifying-attestations">Verifying attestations</h3><p>The second step in the Process Template is much more complex. The initial configuration is similar to the first step. It uses the <code>Template.Git.AuthToken</code>, <code>Template.Verify.Workerpool</code>, and <code>Template.Verify.ExecutionContainerFeed</code> parameters. Just like with the first step, the execution container is hardcoded.</p><figure><p><img src="/blog/img/supply-chain-security-with-platform-hub/step-to-verify-attestations-and-run-trivy.png" alt="Step to verify attestations and run Trivy"></p></figure><p>The complexity stems from decisions I made as the producer of the step to make it easier for consumers to use the template.</p><ul><li>Maximum re-use: the Process Template must support 1 to N packages/containers. The consumer shouldn’t need to provide a list of those packages/containers either. That information is stored in the variable manifest.</li><li>The consumer shouldn’t have to hardcode information that is already included in the variable manifest. For example, the GitHub repo is stored in the build information variables.</li></ul><p>The <code>gh attestation verify</code> command presented two challenges. That command needs a hash to look up the attestation.</p><ol><li>For zip / JAR / WAR / NuGet files the hash is created from the file itself. <code>gh attestation verify</code> requires the path to the folder.</li><li>For containers the digest hash is used. 
For private container repositories, verification must occur prior to invoking <code>gh attestation verify</code>.</li></ol><p>For my applications, I have the following benefits:</p><ul><li>All services and websites are hosted on Kubernetes clusters.</li><li>All containers are publicly accessible on <a href="https://hub.docker.com">hub.docker.com</a>.</li><li>All deployments to database backends or other services occur via a <a href="https://octopus.com/docs/infrastructure/workers/kubernetes-worker">Kubernetes worker</a> running on the same cluster.</li></ul><p>That allowed me to take a couple of shortcuts unique to my configuration.</p><ul><li>Octopus will download all containers/packages to the same Kubernetes cluster at the start of the deployment.</li><li>Any package (including the SBOM package) needed for the deployment is stored in the <code>octopus/files</code> directory.</li><li>Because the containers are public on DockerHub, I didn’t have to worry about authentication when pulling the digest for the container.</li></ul><p>Below is the PowerShell that works <em>for my configuration.</em> It will require some modifications if you wish to include it in your instance. 
I’m providing it for example use only.</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="powershell"><code><span class="line"><span style="color:#001080">$gitHubToken</span><span style="color:#000000"> = </span><span style="color:#001080">$OctopusParameters</span><span style="color:#000000">[</span><span style="color:#A31515">"Template.Git.AuthToken"</span><span style="color:#000000">]</span></span><span class="line"></span><span class="line"><span style="color:#001080">$buildInformation</span><span style="color:#000000"> = </span><span style="color:#001080">$OctopusParameters</span><span style="color:#000000">[</span><span style="color:#A31515">"Octopus.Deployment.PackageBuildInformation"</span><span style="color:#000000">]</span></span><span class="line"><span style="color:#001080">$OctopusEnvironmentName</span><span style="color:#000000"> = </span><span style="color:#001080">$OctopusParameters</span><span style="color:#000000">[</span><span style="color:#A31515">"Octopus.Environment.Name"</span><span style="color:#000000">]</span></span><span class="line"></span><span class="line"><span style="color:#795E26">Write-Host</span><span style="color:#A31515"> "Getting a list of packages and containers to attest to from the variable manifest"</span></span><span class="line"><span style="color:#001080">$objectArray</span><span style="color:#000000"> = </span><span style="color:#0000FF">@</span><span style="color:#000000">()</span></span><span class="line"><span style="color:#AF00DB">foreach</span><span style="color:#000000"> (</span><span style="color:#001080">$key</span><span style="color:#AF00DB"> in</span><span style="color:#001080"> $OctopusParameters</span><span style="color:#795E26">.Keys</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#000000">{</span></span><span class="line"><span style="color:#AF00DB"> if</span><span style="color:#000000"> 
(</span><span style="color:#001080">$key</span><span style="color:#000000"> -like </span><span style="color:#A31515">"*.PackageId"</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#795E26"> Write-Host</span><span style="color:#A31515"> "Found a package Id parameter: </span><span style="color:#001080">$key</span><span style="color:#A31515"> - checking to see if it already is in the packages to verify"</span></span><span class="line"><span style="color:#000000"> </span></span><span class="line"><span style="color:#001080"> $packageId</span><span style="color:#000000"> = </span><span style="color:#001080">$OctopusParameters</span><span style="color:#000000">[</span><span style="color:#001080">$key</span><span style="color:#000000">]</span></span><span class="line"><span style="color:#795E26"> Write-Host</span><span style="color:#A31515"> "The package ID to check for is </span><span style="color:#001080">$packageId</span><span style="color:#A31515">"</span></span><span class="line"></span><span class="line"><span style="color:#001080"> $packageVersionKey</span><span style="color:#000000"> = </span><span style="color:#001080">$key</span><span style="color:#000000"> -replace </span><span style="color:#A31515">".PackageId"</span><span style="color:#000000">, </span><span style="color:#A31515">".PackageVersion"</span></span><span class="line"><span style="color:#795E26"> Write-Host</span><span style="color:#A31515"> "The package version key is </span><span style="color:#001080">$packageVersionKey</span><span style="color:#A31515">"</span></span><span class="line"><span style="color:#001080"> $packageVersion</span><span style="color:#000000"> = </span><span style="color:#001080">$OctopusParameters</span><span style="color:#000000">[</span><span style="color:#001080">$packageVersionKey</span><span style="color:#000000">]</span></span><span class="line"><span 
style="color:#795E26"> Write-Host</span><span style="color:#A31515"> "The package version is </span><span style="color:#001080">$packageVersion</span><span style="color:#A31515">"</span></span><span class="line"></span><span class="line"><span style="color:#001080"> $packageVersionToVerify</span><span style="color:#000000"> = </span><span style="color:#A31515">"</span><span style="color:#0000FF">$(</span><span style="color:#001080">$packageId</span><span style="color:#0000FF">)</span><span style="color:#A31515">:</span><span style="color:#0000FF">$(</span><span style="color:#001080">$packageVersion</span><span style="color:#0000FF">)</span><span style="color:#A31515">"</span></span><span class="line"></span><span class="line"><span style="color:#AF00DB"> if</span><span style="color:#000000"> (</span><span style="color:#001080">$objectArray</span><span style="color:#000000"> -contains </span><span style="color:#A31515">"</span><span style="color:#001080">$packageVersionToVerify</span><span style="color:#A31515">"</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#795E26"> Write-Host</span><span style="color:#A31515"> "</span><span style="color:#001080">$packageVersionToVerify</span><span style="color:#A31515"> already exists in the array"</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#AF00DB"> else</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#795E26"> Write-Host</span><span style="color:#A31515"> "</span><span style="color:#001080">$packageVersionToVerify</span><span style="color:#A31515"> does not exist - adding it"</span></span><span class="line"><span style="color:#001080"> $objectArray</span><span style="color:#000000"> += </span><span style="color:#001080">$packageVersionToVerify</span></span><span class="line"><span 
style="color:#000000"> } </span></span><span class="line"><span style="color:#000000"> } </span></span><span class="line"><span style="color:#000000">}</span></span><span class="line"></span><span class="line"><span style="color:#795E26">Write-Host</span><span style="color:#A31515"> "Getting the GitHub repository from the build information"</span></span><span class="line"><span style="color:#001080">$buildInfoObject</span><span style="color:#000000"> = </span><span style="color:#795E26">ConvertFrom-Json</span><span style="color:#001080"> $buildInformation</span></span><span class="line"><span style="color:#001080">$vcsRoot</span><span style="color:#000000"> = </span><span style="color:#0000FF">$null</span></span><span class="line"></span><span class="line"><span style="color:#795E26">Write-Host</span><span style="color:#A31515"> "Getting the repo name from build information"</span></span><span class="line"><span style="color:#AF00DB">foreach</span><span style="color:#000000"> (</span><span style="color:#001080">$packageItem</span><span style="color:#AF00DB"> in</span><span style="color:#001080"> $objectArray</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#000000">{</span></span><span class="line"><span style="color:#001080"> $artifactToCompare</span><span style="color:#000000"> = </span><span style="color:#001080">$packageItem</span><span style="color:#795E26">.Trim</span><span style="color:#000000">().Split(</span><span style="color:#A31515">':'</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#001080"> $packageVersion</span><span style="color:#000000"> = </span><span style="color:#001080">$artifactToCompare</span><span style="color:#000000">[</span><span style="color:#098658">1</span><span style="color:#000000">]</span></span><span class="line"><span style="color:#000000"> </span></span><span class="line"><span style="color:#795E26"> Write-Host</span><span style="color:#A31515"> "The 
version to look for is: </span><span style="color:#001080">$packageVersion</span><span style="color:#A31515">"</span></span><span class="line"><span style="color:#000000"> </span></span><span class="line"><span style="color:#AF00DB"> foreach</span><span style="color:#000000"> (</span><span style="color:#001080">$package</span><span style="color:#AF00DB"> in</span><span style="color:#001080"> $buildInfoObject</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#795E26"> Write-Host</span><span style="color:#A31515"> "Comparing </span><span style="color:#0000FF">$(</span><span style="color:#001080">$package</span><span style="color:#795E26">.Version</span><span style="color:#0000FF">)</span><span style="color:#A31515"> with </span><span style="color:#0000FF">$(</span><span style="color:#001080">$packageVersion</span><span style="color:#0000FF">)</span><span style="color:#A31515">"</span></span><span class="line"><span style="color:#AF00DB"> if</span><span style="color:#000000"> (</span><span style="color:#001080">$packageVersion</span><span style="color:#000000"> -eq </span><span style="color:#001080">$package</span><span style="color:#795E26">.Version</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#795E26"> Write-Host</span><span style="color:#A31515"> "Versions match, getting the build URL"</span><span style="color:#000000"> </span></span><span class="line"><span style="color:#001080"> $vcsRoot</span><span style="color:#000000"> = </span><span style="color:#001080">$package</span><span style="color:#795E26">.VcsRoot</span></span><span class="line"><span style="color:#795E26"> Write-Host</span><span style="color:#A31515"> "The vcsRoot is </span><span style="color:#001080">$vcsRoot</span><span style="color:#A31515">"</span><span style="color:#000000"> </span></span><span 
class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000">}</span></span><span class="line"></span><span class="line"><span style="color:#AF00DB">if</span><span style="color:#000000"> (</span><span style="color:#0000FF">$null</span><span style="color:#000000"> -eq </span><span style="color:#001080">$vcsRoot</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#000000">{</span></span><span class="line"><span style="color:#795E26"> Write-Error</span><span style="color:#A31515"> "Unable to pull the build information URL from the Octopus Build information using supplied versions in </span><span style="color:#001080">$packageName</span><span style="color:#A31515">. Check that the build information has been supplied and try again."</span></span><span class="line"><span style="color:#000000">}</span></span><span class="line"></span><span class="line"><span style="color:#001080">$githubLessUrl</span><span style="color:#000000"> = </span><span style="color:#001080">$vcsRoot</span><span style="color:#000000"> -Replace </span><span style="color:#A31515">"https://github.com/"</span><span style="color:#000000">, </span><span style="color:#A31515">""</span></span><span class="line"></span><span class="line"><span style="color:#001080">$env:GITHUB_TOKEN</span><span style="color:#000000"> = </span><span style="color:#001080">$gitHubToken</span></span><span class="line"></span><span class="line"><span style="color:#795E26">Write-Host</span><span style="color:#A31515"> "Verifying the attestation of all the found packages and containers."</span></span><span class="line"><span style="color:#AF00DB">foreach</span><span style="color:#000000">(</span><span style="color:#001080">$packageItem</span><span style="color:#AF00DB"> in</span><span style="color:#001080"> $objectArray</span><span style="color:#000000">)</span></span><span class="line"><span 
style="color:#000000">{ </span></span><span class="line"><span style="color:#795E26"> Write-Host</span><span style="color:#A31515"> "Verifying </span><span style="color:#001080">$packageItem</span><span style="color:#A31515">"</span></span><span class="line"><span style="color:#001080"> $artifactToCompare</span><span style="color:#000000"> = </span><span style="color:#001080">$packageItem</span><span style="color:#795E26">.Trim</span><span style="color:#000000">().Split(</span><span style="color:#A31515">':'</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#001080"> $packageName</span><span style="color:#000000"> = </span><span style="color:#001080">$artifactToCompare</span><span style="color:#000000">[</span><span style="color:#098658">0</span><span style="color:#000000">].Replace(</span><span style="color:#A31515">"/"</span><span style="color:#000000">, </span><span style="color:#A31515">""</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#000000"> </span></span><span class="line"><span style="color:#AF00DB"> if</span><span style="color:#000000"> (</span><span style="color:#001080">$packageItem</span><span style="color:#795E26">.Contains</span><span style="color:#000000">(</span><span style="color:#A31515">"/"</span><span style="color:#000000">))</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#001080"> $imageToAttest</span><span style="color:#000000"> = </span><span style="color:#A31515">"oci://</span><span style="color:#001080">$packageItem</span><span style="color:#A31515">"</span></span><span class="line"></span><span class="line"><span style="color:#795E26"> Write-Host</span><span style="color:#A31515"> "Attesting to </span><span style="color:#001080">$imageToAttest</span><span style="color:#A31515"> in the repo </span><span style="color:#001080">$githubLessUrl</span><span style="color:#A31515">"</span></span><span 
class="line"><span style="color:#001080"> $attestation</span><span style="color:#000000">=gh attestation verify </span><span style="color:#A31515">"</span><span style="color:#001080">$imageToAttest</span><span style="color:#A31515">"</span><span style="color:#000000"> --repo </span><span style="color:#001080">$githubLessUrl</span><span style="color:#000000"> --format json </span></span><span class="line"></span><span class="line"><span style="color:#AF00DB"> if</span><span style="color:#000000"> (</span><span style="color:#001080">$LASTEXITCODE</span><span style="color:#000000"> -ne </span><span style="color:#098658">0</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#795E26"> Write-Error</span><span style="color:#A31515"> "The attestation for </span><span style="color:#001080">$packageItem</span><span style="color:#A31515"> could not be verified"</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"></span><span class="line"><span style="color:#795E26"> Write-Highlight</span><span style="color:#A31515"> "</span><span style="color:#001080">$packageItem</span><span style="color:#A31515"> successfully passed attestation verification"</span></span><span class="line"><span style="color:#795E26"> Write-Verbose</span><span style="color:#001080"> $attestation</span></span><span class="line"></span><span class="line"><span style="color:#795E26"> Write-Host</span><span style="color:#A31515"> "Writing the attest output to </span><span style="color:#001080">$packageName</span><span style="color:#A31515">.</span><span style="color:#001080">$OctopusEnvironmentName</span><span style="color:#A31515">.attestation.json"</span></span><span class="line"><span style="color:#795E26"> New-Item</span><span style="color:#000000"> -Name </span><span style="color:#A31515">"</span><span style="color:#001080">$packageName</span><span 
style="color:#A31515">.</span><span style="color:#001080">$OctopusEnvironmentName</span><span style="color:#A31515">.attestation.json"</span><span style="color:#000000"> -ItemType </span><span style="color:#A31515">"File"</span><span style="color:#000000"> -Value </span><span style="color:#001080">$attestation</span></span><span class="line"><span style="color:#795E26"> New-OctopusArtifact</span><span style="color:#000000"> -Path </span><span style="color:#A31515">"</span><span style="color:#001080">$packageName</span><span style="color:#A31515">.</span><span style="color:#001080">$OctopusEnvironmentName</span><span style="color:#A31515">.attestation.json"</span><span style="color:#000000"> -Name </span><span style="color:#A31515">"</span><span style="color:#001080">$packageName</span><span style="color:#A31515">.</span><span style="color:#001080">$OctopusEnvironmentName</span><span style="color:#A31515">.attestation.json"</span></span><span class="line"></span><span class="line"><span style="color:#795E26"> Write-Host</span><span style="color:#A31515"> "Running trivy to check the container for any known vulnerabilities that might have been discovered since the build."</span><span style="color:#000000"> </span></span><span class="line"><span style="color:#000000"> trivy image --severity </span><span style="color:#A31515">"MEDIUM,HIGH,CRITICAL"</span><span style="color:#000000"> --ignore-unfixed --quiet </span><span style="color:#001080">$packageItem</span></span><span class="line"><span style="color:#AF00DB"> if</span><span style="color:#000000"> (</span><span style="color:#001080">$LASTEXITCODE</span><span style="color:#000000"> -eq </span><span style="color:#098658">0</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#795E26"> Write-Highlight</span><span style="color:#A31515"> "Trivy successfully scanned </span><span style="color:#001080">$packageItem</span><span 
style="color:#A31515"> and no new vulnerabilities have been found in the container or base containers since they were built."</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#AF00DB"> else</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#795E26"> Write-Error</span><span style="color:#A31515"> "Trivy found vulnerabilities in the build artifacts that must be fixed. You can no longer deploy this release. Please update the base container version, rebuild the application, and create a new release."</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#AF00DB"> else</span></span><span class="line"><span style="color:#000000"> { </span></span><span class="line"><span style="color:#AF00DB"> if</span><span style="color:#000000"> (</span><span style="color:#795E26">Test-Path</span><span style="color:#A31515"> "/octopus/Files/"</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#795E26"> Write-Host</span><span style="color:#A31515"> "</span><span style="color:#001080">$artifactToCompare</span><span style="color:#A31515"> is a package from our local repo, getting the information from /octopus/Files/"</span></span><span class="line"><span style="color:#001080"> $zipFiles</span><span style="color:#000000"> = </span><span style="color:#795E26">Get-ChildItem</span><span style="color:#000000"> -Path </span><span style="color:#A31515">"/octopus/Files/"</span><span style="color:#000000"> -Filter </span><span style="color:#A31515">"*</span><span style="color:#0000FF">$(</span><span style="color:#001080">$artifactToCompare</span><span style="color:#000000FF">[</span><span style="color:#098658">0</span><span 
style="color:#000000FF">]</span><span style="color:#0000FF">)</span><span style="color:#A31515">*</span><span style="color:#0000FF">$(</span><span style="color:#001080">$artifactToCompare</span><span style="color:#000000FF">[</span><span style="color:#098658">1</span><span style="color:#000000FF">]</span><span style="color:#0000FF">)</span><span style="color:#A31515">@*.zip"</span><span style="color:#000000"> -Recurse</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#AF00DB"> else</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#795E26"> Write-Host</span><span style="color:#A31515"> "</span><span style="color:#001080">$artifactToCompare</span><span style="color:#A31515"> is a package from our local repo, getting the information from /home/Octopus/Files"</span></span><span class="line"><span style="color:#001080"> $zipFiles</span><span style="color:#000000"> = </span><span style="color:#795E26">Get-ChildItem</span><span style="color:#000000"> -Path </span><span style="color:#A31515">"/home/Octopus/Files"</span><span style="color:#000000"> -Filter </span><span style="color:#A31515">"*</span><span style="color:#0000FF">$(</span><span style="color:#001080">$artifactToCompare</span><span style="color:#000000FF">[</span><span style="color:#098658">0</span><span style="color:#000000FF">]</span><span style="color:#0000FF">)</span><span style="color:#A31515">*</span><span style="color:#0000FF">$(</span><span style="color:#001080">$artifactToCompare</span><span style="color:#000000FF">[</span><span style="color:#098658">1</span><span style="color:#000000FF">]</span><span style="color:#0000FF">)</span><span style="color:#A31515">@*.zip"</span><span style="color:#000000"> -Recurse</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"></span><span class="line"><span style="color:#001080"> 
$artifactVerified</span><span style="color:#000000"> = </span><span style="color:#0000FF">$false</span></span><span class="line"><span style="color:#AF00DB"> foreach</span><span style="color:#000000"> (</span><span style="color:#001080">$file</span><span style="color:#AF00DB"> in</span><span style="color:#001080"> $zipFiles</span><span style="color:#000000">) </span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#AF00DB"> if</span><span style="color:#000000"> (</span><span style="color:#795E26">test-path</span><span style="color:#A31515"> "</span><span style="color:#001080">$packageName</span><span style="color:#A31515">.</span><span style="color:#001080">$OctopusEnvironmentName</span><span style="color:#A31515">.attestation.json"</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#AF00DB"> Continue</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000"> </span></span><span class="line"><span style="color:#795E26"> Write-Host</span><span style="color:#A31515"> "Attesting to </span><span style="color:#0000FF">$(</span><span style="color:#001080">$file</span><span style="color:#795E26">.FullName</span><span style="color:#0000FF">)</span><span style="color:#A31515"> in the repo </span><span style="color:#001080">$githubLessUrl</span><span style="color:#A31515">"</span></span><span class="line"><span style="color:#001080"> $attestation</span><span style="color:#000000">=gh attestation verify </span><span style="color:#A31515">"</span><span style="color:#0000FF">$(</span><span style="color:#001080">$file</span><span style="color:#795E26">.FullName</span><span style="color:#0000FF">)</span><span style="color:#A31515">"</span><span style="color:#000000"> --repo </span><span style="color:#001080">$githubLessUrl</span><span style="color:#000000"> 
--format json</span></span><span class="line"></span><span class="line"><span style="color:#AF00DB"> if</span><span style="color:#000000"> (</span><span style="color:#001080">$LASTEXITCODE</span><span style="color:#000000"> -ne </span><span style="color:#098658">0</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#795E26"> Write-Error</span><span style="color:#A31515"> "The attestation for </span><span style="color:#001080">$packageItem</span><span style="color:#A31515"> could not be verified - this means no attestation was generated or the package has been tampered with since it was created - stopping the deployment to avoid a security incident."</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"></span><span class="line"><span style="color:#795E26"> Write-Highlight</span><span style="color:#A31515"> "</span><span style="color:#001080">$packageItem</span><span style="color:#A31515"> successfully passed attestation verification"</span></span><span class="line"><span style="color:#795E26"> Write-Verbose</span><span style="color:#001080"> $attestation</span></span><span class="line"><span style="color:#001080"> $artifactVerified</span><span style="color:#000000"> = </span><span style="color:#0000FF">$true</span></span><span class="line"></span><span class="line"><span style="color:#795E26"> Write-Host</span><span style="color:#A31515"> "Writing the attest output to </span><span style="color:#001080">$packageName</span><span style="color:#A31515">.</span><span style="color:#001080">$OctopusEnvironmentName</span><span style="color:#A31515">.attestation.json"</span></span><span class="line"><span style="color:#795E26"> New-Item</span><span style="color:#000000"> -Name </span><span style="color:#A31515">"</span><span style="color:#001080">$packageName</span><span style="color:#A31515">.</span><span 
style="color:#001080">$OctopusEnvironmentName</span><span style="color:#A31515">.attestation.json"</span><span style="color:#000000"> -ItemType </span><span style="color:#A31515">"File"</span><span style="color:#000000"> -Value </span><span style="color:#001080">$attestation</span></span><span class="line"><span style="color:#795E26"> New-OctopusArtifact</span><span style="color:#000000"> -Path </span><span style="color:#A31515">"</span><span style="color:#001080">$packageName</span><span style="color:#A31515">.</span><span style="color:#001080">$OctopusEnvironmentName</span><span style="color:#A31515">.attestation.json"</span><span style="color:#000000"> -Name </span><span style="color:#A31515">"</span><span style="color:#001080">$packageName</span><span style="color:#A31515">.</span><span style="color:#001080">$OctopusEnvironmentName</span><span style="color:#A31515">.attestation.json"</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"></span><span class="line"><span style="color:#AF00DB"> if</span><span style="color:#000000"> (</span><span style="color:#001080">$artifactVerified</span><span style="color:#000000"> -eq </span><span style="color:#0000FF">$false</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#795E26"> Write-Error</span><span style="color:#A31515"> "The attestation for </span><span style="color:#001080">$packageItem</span><span style="color:#A31515"> could not be verified - this means no attestation was generated or the package has been tampered with since it was created - stopping the deployment to avoid a security incident."</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000"> } </span></span><span class="line"><span style="color:#000000">}</span></span></code></pre><h2 id="policies">Policies</h2><p>Platform Hub’s Process templates 
are half of the solution. The other half is <a href="https://octopus.com/docs/platform-hub/policies">Policies</a>. Policies in Octopus Deploy can fail deployments if specific steps (or Process Templates) are not present. The same logic can be applied to runbook runs, though they’d require different steps.</p><h3 id="how-policies-work">How Policies work</h3><p>I want to explain Policies because they are a new concept in Octopus Deploy. Our policy engine uses Rego to query Octopus Deploy. The end goal of the policy engine is to provide “hooks” into various actions within Octopus Deploy. The first “hook” we provide is the execution of Deployments and Runbook Runs.</p><p>When a Deployment or Runbook Run occurs, Octopus Deploy sends <code>input</code> information to the policy engine that looks similar to the example below.</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="ruby"><code><span class="line"><span style="color:#0000FF">Input:</span></span><span class="line"><span style="color:#000000">{</span></span><span class="line"><span style="color:#A31515"> "Environment"</span><span style="color:#000000">: {</span></span><span class="line"><span style="color:#A31515"> "Id"</span><span style="color:#000000">: </span><span style="color:#A31515">"Environments-42"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Name"</span><span style="color:#000000">: </span><span style="color:#A31515">"Test"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Slug"</span><span style="color:#000000">: </span><span style="color:#A31515">"test"</span></span><span class="line"><span style="color:#000000"> },</span></span><span class="line"><span style="color:#A31515"> "Project"</span><span style="color:#000000">: {</span></span><span class="line"><span style="color:#A31515"> "Id"</span><span
style="color:#000000">: </span><span style="color:#A31515">"Projects-541"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Name"</span><span style="color:#000000">: </span><span style="color:#A31515">"Trident"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Slug"</span><span style="color:#000000">: </span><span style="color:#A31515">"trident-aks"</span></span><span class="line"><span style="color:#000000"> },</span></span><span class="line"><span style="color:#A31515"> "ProjectGroup"</span><span style="color:#000000">: {</span></span><span class="line"><span style="color:#A31515"> "Id"</span><span style="color:#000000">: </span><span style="color:#A31515">"ProjectGroups-483"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Name"</span><span style="color:#000000">: </span><span style="color:#A31515">"Kubernetes"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Slug"</span><span style="color:#000000">: </span><span style="color:#A31515">"kubernetes"</span></span><span class="line"><span style="color:#000000"> },</span></span><span class="line"><span style="color:#A31515"> "Space"</span><span style="color:#000000">: {</span></span><span class="line"><span style="color:#A31515"> "Id"</span><span style="color:#000000">: </span><span style="color:#A31515">"Spaces-1"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Name"</span><span style="color:#000000">: </span><span style="color:#A31515">"Default"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Slug"</span><span style="color:#000000">: </span><span style="color:#A31515">"default"</span></span><span class="line"><span style="color:#000000"> },</span></span><span class="line"><span style="color:#A31515"> "SkippedSteps"</span><span 
style="color:#000000">: [],</span></span><span class="line"><span style="color:#A31515"> "Steps"</span><span style="color:#000000">: [</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#A31515"> "Id"</span><span style="color:#000000">: </span><span style="color:#A31515">"azure-key-vault-retrieve-secrets"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Slug"</span><span style="color:#000000">: </span><span style="color:#A31515">"azure-key-vault-retrieve-secrets"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "ActionType"</span><span style="color:#000000">: </span><span style="color:#A31515">"Octopus.AzurePowerShell"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Enabled"</span><span style="color:#000000">: </span><span style="color:#0000FF">true</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "IsRequired"</span><span style="color:#000000">: </span><span style="color:#0000FF">false</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Source"</span><span style="color:#000000">: {</span></span><span class="line"><span style="color:#A31515"> "Type"</span><span style="color:#000000">: </span><span style="color:#A31515">"Step Template"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "SlugOrId"</span><span style="color:#000000">: </span><span style="color:#A31515">"ActionTemplates-561"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Version"</span><span style="color:#000000">: </span><span style="color:#A31515">"2"</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000"> },</span></span><span 
class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#A31515"> "Id"</span><span style="color:#000000">: </span><span style="color:#A31515">"verify-build-artifacts-attach-sbom-to-release"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Slug"</span><span style="color:#000000">: </span><span style="color:#A31515">"verify-build-artifacts-attach-sbom-to-release"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "ActionType"</span><span style="color:#000000">: </span><span style="color:#A31515">"Octopus.Script"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Enabled"</span><span style="color:#000000">: </span><span style="color:#0000FF">true</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "IsRequired"</span><span style="color:#000000">: </span><span style="color:#0000FF">true</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Source"</span><span style="color:#000000">: {</span></span><span class="line"><span style="color:#A31515"> "Type"</span><span style="color:#000000">: </span><span style="color:#A31515">"Process Template"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "SlugOrId"</span><span style="color:#000000">: </span><span style="color:#A31515">"deploy-process-verify-build-artifacts"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Version"</span><span style="color:#000000">: </span><span style="color:#A31515">"2.5.0"</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000"> },</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#A31515"> "Id"</span><span 
style="color:#000000">: </span><span style="color:#A31515">"verify-build-artifacts-verify-docker-containers"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Slug"</span><span style="color:#000000">: </span><span style="color:#A31515">"verify-build-artifacts-verify-docker-containers"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "ActionType"</span><span style="color:#000000">: </span><span style="color:#A31515">"Octopus.Script"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Enabled"</span><span style="color:#000000">: </span><span style="color:#0000FF">true</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "IsRequired"</span><span style="color:#000000">: </span><span style="color:#0000FF">true</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Source"</span><span style="color:#000000">: {</span></span><span class="line"><span style="color:#A31515"> "Type"</span><span style="color:#000000">: </span><span style="color:#A31515">"Process Template"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "SlugOrId"</span><span style="color:#000000">: </span><span style="color:#A31515">"deploy-process-verify-build-artifacts"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Version"</span><span style="color:#000000">: </span><span style="color:#A31515">"2.5.0"</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000"> },</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#A31515"> "Id"</span><span style="color:#000000">: </span><span style="color:#A31515">"deploy-k8s-manifest-deploy-container"</span><span 
style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Slug"</span><span style="color:#000000">: </span><span style="color:#A31515">"deploy-k8s-manifest-deploy-container"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "ActionType"</span><span style="color:#000000">: </span><span style="color:#A31515">"Octopus.KubernetesDeployRawYaml"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Enabled"</span><span style="color:#000000">: </span><span style="color:#0000FF">true</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "IsRequired"</span><span style="color:#000000">: </span><span style="color:#0000FF">false</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Source"</span><span style="color:#000000">: {</span></span><span class="line"><span style="color:#A31515"> "Type"</span><span style="color:#000000">: </span><span style="color:#A31515">"Process Template"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "SlugOrId"</span><span style="color:#000000">: </span><span style="color:#A31515">"deploy-process-deploy-to-kubernetes-via-manifest"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Version"</span><span style="color:#000000">: </span><span style="color:#A31515">"1.1.0"</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000"> },</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#A31515"> "Id"</span><span style="color:#000000">: </span><span style="color:#A31515">"deploy-k8s-manifest-verify-deployment"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Slug"</span><span 
style="color:#000000">: </span><span style="color:#A31515">"deploy-k8s-manifest-verify-deployment"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "ActionType"</span><span style="color:#000000">: </span><span style="color:#A31515">"Octopus.Script"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Enabled"</span><span style="color:#000000">: </span><span style="color:#0000FF">true</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "IsRequired"</span><span style="color:#000000">: </span><span style="color:#0000FF">false</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Source"</span><span style="color:#000000">: {</span></span><span class="line"><span style="color:#A31515"> "Type"</span><span style="color:#000000">: </span><span style="color:#A31515">"Process Template"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "SlugOrId"</span><span style="color:#000000">: </span><span style="color:#A31515">"deploy-process-deploy-to-kubernetes-via-manifest"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Version"</span><span style="color:#000000">: </span><span style="color:#A31515">"1.1.0"</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000"> },</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#A31515"> "Id"</span><span style="color:#000000">: </span><span style="color:#A31515">"notify-team-of-deployment-status"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Slug"</span><span style="color:#000000">: </span><span style="color:#A31515">"notify-team-of-deployment-status"</span><span 
style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "ActionType"</span><span style="color:#000000">: </span><span style="color:#A31515">"Octopus.Script"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Enabled"</span><span style="color:#000000">: </span><span style="color:#0000FF">true</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "IsRequired"</span><span style="color:#000000">: </span><span style="color:#0000FF">false</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Source"</span><span style="color:#000000">: {</span></span><span class="line"><span style="color:#A31515"> "Type"</span><span style="color:#000000">: </span><span style="color:#A31515">"Step Template"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "SlugOrId"</span><span style="color:#000000">: </span><span style="color:#A31515">"ActionTemplates-101"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "Version"</span><span style="color:#000000">: </span><span style="color:#A31515">"15"</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000"> ]</span></span><span class="line"><span style="color:#000000">}</span></span></code></pre><p>The policy engine will attempt to match that input to a policy. If it matches and the policy passes, then the Deployment (or Runbook Run) can proceed. 
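</p><p>To make the matching concrete, here is a rough Python sketch of that evaluation (for illustration only; the real policy engine evaluates Rego policies like the one shown in the next section, and the slug and field names are taken from the input above):</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="python"><code>def evaluate_policy(policy_input):
    # Hedged sketch only: the real engine evaluates Rego, not Python.
    # Scope: deployments (not runbook runs) in the "kubernetes" project group.
    in_scope = (
        policy_input.get("ProjectGroup", {}).get("Slug") == "kubernetes"
        and "Runbook" not in policy_input
    )
    if not in_scope:
        return {"allowed": True}  # out-of-scope inputs are not blocked

    # Conditions: the required Process Template step must be enabled and not skipped.
    skipped = set(policy_input.get("SkippedSteps", []))
    for step in policy_input.get("Steps", []):
        if (
            step.get("Source", {}).get("SlugOrId") == "deploy-process-verify-build-artifacts"
            and step.get("Enabled") is True
            and step.get("Id") not in skipped
        ):
            return {"allowed": True}
    return {"allowed": False}
</code></pre><p>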
The policy’s success (or failure) will be logged to the audit log.</p><figure><p><img src="/blog/img/supply-chain-security-with-platform-hub/policy-evaulation-logged-in-audit-log.png" alt="Policy evaluation that was logged to the audit log"></p></figure><h3 id="requiring-a-process-template">Requiring a Process Template</h3><p>Using the input from before, the policy that requires this Process Template is as follows:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="ruby"><code><span class="line"><span style="color:#001080">name</span><span style="color:#000000"> = </span><span style="color:#A31515">"Verify Build Artifacts Required"</span></span><span class="line"><span style="color:#001080">description</span><span style="color:#000000"> = </span><span style="color:#A31515">"Requires the Process Template Deploy Process - Verify Build Artifacts for all deployments"</span></span><span class="line"><span style="color:#0070C1">ViolationReason</span><span style="color:#000000"> = </span><span style="color:#A31515">"Deploy Process - Verify Build Artifacts is required on all deployments to K8s"</span></span><span class="line"></span><span class="line"><span style="color:#000000">scope {</span></span><span class="line"><span style="color:#001080"> rego</span><span style="color:#000000"> = </span><span style="color:#A31515"><<-EOT</span></span><span class="line"><span style="color:#A31515"> # The package name MUST match the file name that is stored in git. 
The file name should be verify_build_artifacts_required.ocl</span></span><span class="line"><span style="color:#A31515"> package verify_build_artifacts_required </span></span><span class="line"></span><span class="line"><span style="color:#A31515"> default evaluate := false</span></span><span class="line"></span><span class="line"><span style="color:#A31515"> # Only run this policy for deployments in the projects in the Project Group Kubernetes</span></span><span class="line"><span style="color:#A31515"> evaluate := true if {</span></span><span class="line"><span style="color:#A31515"> input.ProjectGroup.Slug == "kubernetes" </span></span><span class="line"><span style="color:#A31515"> not input.Runbook </span></span><span class="line"><span style="color:#A31515"> }</span></span><span class="line"><span style="color:#A31515"> EOT</span></span><span class="line"><span style="color:#000000">}</span></span><span class="line"></span><span class="line"><span style="color:#000000">conditions {</span></span><span class="line"><span style="color:#001080"> rego</span><span style="color:#000000"> = </span><span style="color:#A31515"><<-EOT</span></span><span class="line"><span style="color:#A31515"> # The package name MUST match the file name that is stored in git. 
The file name should be verify_build_artifacts_required.ocl </span></span><span class="line"><span style="color:#A31515"> package verify_build_artifacts_required</span></span><span class="line"></span><span class="line"><span style="color:#A31515"> # Assume all evaluations will fail</span></span><span class="line"><span style="color:#A31515"> default result := {"allowed": false}</span></span><span class="line"></span><span class="line"><span style="color:#A31515"> result := {"allowed": true} if {</span></span><span class="line"><span style="color:#A31515"> some step in input.Steps</span></span><span class="line"><span style="color:#A31515"> # Match using the source.SlugOrId</span></span><span class="line"><span style="color:#A31515"> step.Source.SlugOrId == "deploy-process-verify-build-artifacts" </span></span><span class="line"><span style="color:#A31515"> # Ensure the step is enabled - if it is not then fail it.</span></span><span class="line"><span style="color:#A31515"> step.Enabled == true </span></span><span class="line"><span style="color:#A31515"> not verify_build_artifacts_skipped</span></span><span class="line"><span style="color:#A31515"> }</span></span><span class="line"></span><span class="line"><span style="color:#A31515"> result := {"allowed": false, "Reason": "The Process Template Deploy Process - Verify Build Artifacts is required and cannot be skipped for a deployment to K8s to any environment."} if {</span></span><span class="line"><span style="color:#A31515"> verify_build_artifacts_skipped</span></span><span class="line"><span style="color:#A31515"> }</span></span><span class="line"></span><span class="line"><span style="color:#A31515"> verify_build_artifacts_skipped if {</span></span><span class="line"><span style="color:#A31515"> # Fail the evaluation if the user elects to skip the Process Template when creating the deployment</span></span><span class="line"><span style="color:#A31515"> some step in input.Steps</span></span><span 
class="line"><span style="color:#A31515"> step.Id in input.SkippedSteps</span></span><span class="line"><span style="color:#A31515"> step.Source.SlugOrId == "deploy-process-verify-build-artifacts"</span></span><span class="line"><span style="color:#A31515"> }</span></span><span class="line"><span style="color:#A31515"> EOT</span></span><span class="line"></span><span class="line"><span style="color:#000000">}</span></span></code></pre><h2 id="bringing-everything-together">Bringing everything together</h2><p>Adding the Process Template to a deployment process is the same as adding any other step.</p><figure><p><img src="/blog/img/supply-chain-security-with-platform-hub/adding-process-template-to-deploy-process.png" alt="Adding a Process Template to the deploy process with parameters being populated"></p></figure><p>A deployment to the test environment will then show both Policies and Process Templates in action. Because the Process Template is part of the deployment, the Policy check passes and the deployment succeeds.</p><figure><p><img src="/blog/img/supply-chain-security-with-platform-hub/deployments-with-policies-and-process-templates.png" alt="Deployment with Policies and Process Templates"></p></figure><p>The eagle-eyed among you will likely notice that my scripts fail the deployment when a Trivy scan fails. What if you need to deploy a change to fix a show-stopping bug? I solve that by using <a href="https://octopus.com/docs/releases/guided-failures">guided failures</a>. When Trivy or an attestation verification fails, the deployment pauses and waits for a human to intervene. They can then decide to ignore the failure or cancel the deployment. Regardless of the decision, that information is logged in the audit log.</p><h2 id="conclusion">Conclusion</h2><p>I’m not naive enough to believe that what is described in this post will 100% secure the software supply chain. 
Leveraging Process Templates and Policies in Platform Hub makes it much easier to secure the software supply chain in Octopus Deploy. Using Pull Requests in GitHub, Process Templates, Policies, ITSM, and RBAC in Octopus Deploy, it’s much easier to get to <a href="https://slsa.dev/spec/v0.1/levels">SLSA Level 4</a> than ever before. The Process Template includes the steps needed to verify that the build artifact about to be deployed hasn’t been tampered with and that no new vulnerabilities with available fixes have been reported. Policies guarantee that each deployment to production includes the appropriate Process Template.</p><p>Happy deployments!</p>]]></content> </entry> <entry> <title>Deprecating support for TLS 1.0 and 1.1</title> <link href="https://octopus.com/blog/deprecating-tls-1-0-and-1-1" /> <id>https://octopus.com/blog/deprecating-tls-1-0-and-1-1</id> <published>2025-10-14T00:00:00.000Z</published> <updated>2025-10-14T00:00:00.000Z</updated> <summary>Octopus Cloud will discontinue support for connecting to targets and workers that require TLS 1.0 or 1.1.</summary> <author> <name>Rhys Parry, Octopus Deploy</name> </author> <content type="html"><![CDATA[<p>Transport Layer Security (TLS) 1.0 and 1.1 are legacy cryptographic protocols that first appeared in 1999 and 2006, respectively. 
These protocols contain known security vulnerabilities, and more secure versions have superseded them, particularly TLS 1.2 (2008) and TLS 1.3 (2018).</p><p>Microsoft has progressively phased out support for TLS 1.0 and 1.1 across Windows Server operating systems:</p><ul><li><strong>Windows Server 2019 and later</strong>: Disables TLS 1.0 and 1.1 by default</li><li><strong>Windows Server 2016</strong>: Allows you to disable TLS 1.0 and 1.1 via registry settings</li><li><strong>Windows Server 2012 R2</strong>: Requires updates to support TLS 1.2 as the default protocol</li><li><strong>Windows Server 2012</strong>: Requires <a href="https://support.microsoft.com/en-au/topic/update-to-enable-tls-1-1-and-tls-1-2-as-default-secure-protocols-in-winhttp-in-windows-c4bd73d2-31d7-761e-0178-11268bb10392">specific updates</a> to support TLS 1.2</li></ul><p>We’re following <a href="https://learn.microsoft.com/en-us/dotnet/core/extensions/sslstream-best-practices">Microsoft’s recommendation</a> by deferring TLS version selection to the Operating System. This approach prevents systems that don’t enable legacy protocols by default from using them.</p><h3 id="impact-on-octopus-cloud-customers">Impact on Octopus Cloud customers</h3><p>We’re removing support for these legacy protocols on Octopus Cloud to enhance security. This change will affect Tentacles on older operating systems that don’t support TLS 1.2+.</p><p><strong>Tentacles affected by this change include those running on:</strong></p><ul><li>Windows Server 2012 and 2012 R2 (without <a href="https://support.microsoft.com/en-au/topic/update-to-enable-tls-1-1-and-tls-1-2-as-default-secure-protocols-in-winhttp-in-windows-c4bd73d2-31d7-761e-0178-11268bb10392">TLS 1.2 patches</a>)</li></ul><p>These Tentacles will need TLS 1.2+ support to maintain secure connections and continue deployments.</p><div class="info"><p>This will also affect newer Operating Systems if you have explicitly disabled TLS 1.2 or 1.3. 
If affected, you’ll need to re-enable TLS 1.2 or 1.3.</p></div><h3 id="impact-on-self-hosted-customers-using-linux-docker">Impact on self-hosted customers using Linux Docker</h3><p>Our upgrade to Debian 12 in January 2026 will also affect customers using our official Linux Docker image. Like Octopus Cloud, your Tentacles will need TLS 1.2+ support to connect to your Octopus Server.</p><h3 id="impact-on-self-hosted-customers-using-windows">Impact on self-hosted customers using Windows</h3><p>Self-hosted customers running Octopus Server on Windows won’t see direct changes to their server. However, your Operating System configuration determines your TLS version availability, so you may already use TLS 1.2+ only.</p><p>Most Windows Server 2016+ installations already use TLS 1.2+ by default, so you’re likely already prepared.</p><h3 id="customer-support-and-monitoring">Customer support and monitoring</h3><p><strong>For Octopus Cloud customers:</strong> We’re monitoring Octopus Cloud for usages of TLS 1.0 and 1.1, and will reach out to affected customers.</p><p><strong>For self-hosted customers:</strong> To ensure you’re prepared, please review your environment for TLS 1.0/1.1 dependencies before the January 2026 timeline. 
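</p><p>One quick way to review an endpoint is to attempt a handshake pinned to a single protocol version. The Python sketch below is illustrative only: the host and port are placeholders for your own target or worker endpoint, and certificate verification is disabled because this is a protocol probe, not a trust check.</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="python"><code>import socket
import ssl

def probe_tls(host, port, version):
    # Attempt a handshake pinned to exactly one TLS version.
    # Returns the negotiated version string (e.g. "TLSv1.2") or None on failure.
    ctx = ssl.create_default_context()
    ctx.minimum_version = version
    ctx.maximum_version = version
    ctx.check_hostname = False       # probe only: do not use for real trust decisions
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version()
    except (ssl.SSLError, OSError):
        return None

# Placeholder example, e.g. a listening Tentacle:
# probe_tls("my-tentacle.example.com", 10933, ssl.TLSVersion.TLSv1_2)
</code></pre><p>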
This step will help you identify and address any compatibility requirements early.</p><p>If you believe your organization may be affected, or if you have questions about TLS protocol support, please don’t hesitate to contact our <a href="https://octopus.com/support">support team</a> for assistance.</p><h3 id="what-you-can-do">What you can do</h3><p>To keep your systems connected, you have several options:</p><p><strong>Recommended approach for all customers:</strong></p><ul><li><strong>Upgrade your operating system</strong> to a supported version (Windows Server 2016 or later, recent Linux distributions)</li><li><strong>Update your Tentacle</strong> to the latest version, which includes enhanced TLS support</li><li><strong>Review external integrations</strong> to ensure they support TLS 1.2 or higher</li></ul><p><strong>Alternative options for specific systems:</strong></p><ul><li><strong>Windows Server 2012</strong>: Apply the <a href="https://support.microsoft.com/en-au/topic/update-to-enable-tls-1-1-and-tls-1-2-as-default-secure-protocols-in-winhttp-in-windows-c4bd73d2-31d7-761e-0178-11268bb10392">Microsoft update to enable TLS 1.1 and TLS 1.2 as default protocols</a></li><li><strong>Windows Server 2012 R2</strong>: Install all Windows updates and enable TLS 1.2 in the registry</li></ul><p><strong>How to check your current setup:</strong></p><ul><li><strong>External service support</strong>: Most modern services already support TLS 1.2+, but you can test connections or contact service providers to confirm</li><li><strong>Operating System TLS</strong>: Windows Server 2016+ and modern Linux distributions enable TLS 1.2+ by default. Older operating systems, such as Windows Server 2012/2012 R2, may require security updates to enable TLS 1.2. 
Since Tentacle uses your OS’s TLS capabilities, ensuring your OS supports TLS 1.2+ is the key step for compatibility</li></ul><h3 id="deprecation-timeline">Deprecation timeline</h3><div class="table-wrap"> <table><thead><tr><th>Period</th><th>Octopus Cloud</th><th>Self-Hosted Docker</th></tr></thead><tbody><tr><td>October - November 2025</td><td>We’ll monitor for usages of TLS 1.0/1.1</td><td>Customers should assess their environments</td></tr><tr><td>Mid-November 2025</td><td>We’ll disable TLS 1.0/1.1 on Octopus Cloud (with accommodations for affected customers)</td><td>No immediate change</td></tr><tr><td>December 2025</td><td>We’ll continue to track and help affected customers</td><td>Customers should continue preparation</td></tr><tr><td>January 2026</td><td>Octopus Cloud will use TLS 1.2+ only</td><td>We’ll upgrade the official Docker image to Debian 12, supporting TLS 1.2+ only</td></tr></tbody></table></div><div class="info"><p><strong>Note:</strong> We may adjust this timeline based on customer impact analysis and feedback. We’re committed to providing adequate notice and support throughout the transition process.</p></div><h2 id="summary">Summary</h2><p>Removing support for these outdated protocols brings us in line with modern security standards. Most customers won’t be affected, but if you’re running older systems, now’s the time to plan your upgrade.</p><p><strong>Key takeaways:</strong></p><ul><li><strong>Octopus Cloud</strong> customers will see us disable TLS 1.0/1.1 from mid-November 2025, with complete removal by January 2026</li><li><strong>Self-hosted Docker</strong> customers will experience changes when we upgrade the official image to Debian 12 in January 2026</li><li><strong>Self-hosted Windows</strong> customers will continue to work as before</li></ul><p>The best fix is upgrading to modern operating systems with built-in TLS 1.2+ support. 
If you need more time, apply security patches and enable TLS 1.2 as a temporary measure.</p><p>Our <a href="https://octopus.com/support">support team</a> is here to help throughout this transition. If you have concerns about your environment or need help with remediation, please reach out early so we can work together to ensure a smooth migration.</p><p>Happy deployments!</p>]]></content> </entry> <entry> <title>Leveling up your deployment pipelines</title> <link href="https://octopus.com/blog/leveling-up-deployment-pipelines" /> <id>https://octopus.com/blog/leveling-up-deployment-pipelines</id> <published>2025-10-14T00:00:00.000Z</published> <updated>2025-10-14T00:00:00.000Z</updated> <summary>Platform teams follow a common pattern when building deployment pipelines. Learn the three stages of evolution and how to level up your CI/CD infrastructure.</summary> <author> <name>Steve Fenton, Octopus Deploy</name> </author> <content type="html"><![CDATA[<p>Our <a href="https://octopus.com/publications/platform-engineering-pulse">Platform Engineering Pulse report</a> gathered a list of features organizations commonly add to their internal developer platforms. We grouped sample platforms into common feature collections and found that platform teams follow similar patterns when implementing deployment pipelines.</p><p>They begin by creating an end-to-end deployment pipeline that automates the flow of changes to production and establishes monitoring. Next, they add security concerns into the pipeline to scan for vulnerabilities and manage secrets. This eventually leads to a DevOps pipeline, which adds documentation and additional automation.</p><p>You can use this pragmatic evolution of CI/CD pipelines as a benchmark and a source of inspiration for platform teams. 
It’s like a natural maturity model that has been discovered through practice, rather than one that has been designed upfront.</p><h2 id="stage-1-deployment-pipeline">Stage 1: Deployment pipeline</h2><p>The initial concern for platform teams is to establish a complete deployment pipeline, allowing changes to flow to production with high levels of automation. Although the goal is a complete yet minimal CI/CD process, it’s reassuring to see that both test automation and monitoring are frequently present at this early stage.</p><p>Early deployment pipelines involve integrating several tools, but these tools are designed to work together, making the integration quick and easy. Build servers, artifact management tools, deployment tools, and monitoring tools have low barriers to entry and lightweight touchpoints, so they feel very unified, even when provided by a mix of different tool vendors or open-source options.</p><p>In fact, when teams attempt to simplify the toolchain by using a single tool for everything, they often end up with more complexity, as tool separation enforces a good pipeline architecture.</p><figure><p><img src="/blog/img/leveling-up-deployment-pipelines/platform-pipelines-deployment.png" alt="Deployment pipeline with stages for build, test, artifact management, deployment, and monitoring"></p></figure><h3 id="builds">Builds</h3><p>Building an application from source code involves compiling code, linking libraries, and bundling resources so you can run the software on a target platform. While this may not be the most challenging task for software teams, build processes can become complex and require troubleshooting that takes time away from feature development.</p><p>When a team rarely changes its build process, it tends to be less familiar with the tools it uses. 
It may not be aware of features that could improve build performance, such as dependency caching or parallelization.</p><h3 id="test-automation">Test automation</h3><p>To shorten feedback loops, it is essential to undertake all types of testing continuously. This means you need fast and reliable test automation suites. You should cover functional, security, and performance tests within your deployment pipeline.</p><p>You must also consider how to manage test data as part of your test automation strategy. The ability to set up data in a known state will help you make tests less flaky. Test automation enables developers to identify issues early, reduces team burnout, and enhances software stability.</p><h3 id="artifact-management">Artifact management</h3><p>Your Continuous Integration (CI) process creates a validated build artifact that should be the canonical representation of the software version. An artifact repository ensures only one artifact exists for each version and allows tools to retrieve that version when needed.</p><h3 id="deployment-automation">Deployment automation</h3><p>Even at the easier end of the complexity scale, deployments are risky and often stressful. Copying the artifact, updating the configuration, migrating the database, and performing related tasks present numerous opportunities for mistakes or unexpected outcomes.</p><p>When teams have more complex deployments or need to deploy at scale, the risk, impact, and stress increase.</p><h3 id="monitoring-and-observability">Monitoring and observability</h3><p>While test automation covers a suite of expected scenarios, monitoring and observability help you expand your view to the entirety of your real-world software use. 
Monitoring implementations tend to start with resource usage metrics, but mature into measuring software from the customer and business perspective.</p><p>The ability to view information-rich logs can help you understand how faults occur, allowing you to design a more robust system.</p><h2 id="stage-2-secure-pipeline">Stage 2: Secure pipeline</h2><p>The natural next step for platform teams is to integrate security directly into the deployment pipeline. At this stage, teams add security scanning to automatically check for code weaknesses and vulnerable dependencies, alongside secrets management to consolidate how credentials and API keys are stored and rotated.</p><p>This shift is significant because security measures are now addressed earlier in the pipeline, reducing the risk of incidents in production. Rather than treating security as a separate concern that happens after development, it becomes part of the continuous feedback loop.</p><p>Security integration at this stage typically involves adding new tools to the existing pipeline, with well-defined touchpoints and clear interfaces. Security scanners and secrets management tools are designed to integrate with CI/CD systems, making the additions feel like natural extensions of the deployment pipeline rather than disruptive changes.</p><figure><p><img src="/blog/img/leveling-up-deployment-pipelines/platform-pipelines-secure.png" alt="Two stages have been added to the pipeline for security scanning and secrets management"></p></figure><h3 id="security-scanning">Security scanning</h3><p>While everyone should take responsibility for software security, having automated scanning available within a deployment pipeline can help ensure security isn’t forgotten or delayed. 
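</p><p>The gating logic itself is usually simple: parse the scanner’s findings and fail the pipeline step when anything meets a severity threshold. A minimal Python sketch follows; the field names are illustrative rather than tied to any particular scanner’s output format.</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="python"><code>SEVERITY_RANK = {"LOW": 0, "MEDIUM": 1, "HIGH": 2, "CRITICAL": 3}

def should_fail_build(findings, threshold="HIGH"):
    # Fail when any finding meets or exceeds the threshold severity.
    bar = SEVERITY_RANK[threshold]
    return any(
        SEVERITY_RANK.get(f.get("Severity", "LOW"), 0) >= bar
        for f in findings
    )
</code></pre><p>A pipeline step would exit non-zero when this returns true, stopping the deployment before a vulnerable artifact ships. 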
Automated scanning can provide developers with rapid feedback.</p><p>You can supplement automated scanning with security reviews and close collaboration with information security teams.</p><h3 id="secrets-management">Secrets management</h3><p>Most software systems must connect securely to data stores, APIs, and other services. The ability to store secrets in a single location prevents the ripple effect when a secret is rotated. Instead of updating many tools with a new API key, you can manage the change centrally with a secret store.</p><p>When you deploy an application, you usually have to apply the correct secrets based on the environment or other characteristics of the deployment target.</p><h2 id="stage-3-devops-pipeline">Stage 3: DevOps pipeline</h2><p>The DevOps pipeline represents a shift from building deployment infrastructure to accelerating developer productivity. At this stage, platform teams add documentation capabilities, infrastructure automation, and one-click setup for new projects. These features focus on removing friction from the developer experience.</p><p>The impact of this stage is felt most strongly at the start of new projects and when onboarding new team members. Instead of spending days or weeks on boilerplate setup, teams get a walking skeleton that fast-forwards them directly to writing their first unit test.</p><p>While the earlier stages focused on moving code through the pipeline efficiently and securely, this stage is about making the pipeline itself easy to replicate and understand. 
The automation added here helps teams maintain consistency across projects while giving developers the freedom to focus on features rather than configuration.</p><figure><p><img src="/blog/img/leveling-up-deployment-pipelines/platform-pipelines-devops.png" alt="Three more stages have been added for documentation, one-click setup, and infrastructure automation"></p></figure><h3 id="documentation">Documentation</h3><p>To provide documentation as a service to teams, you may either supply a platform for storing and finding documentation or use automation to extract documentation from APIs, creating a service directory for your organization.</p><p>For documentation to be successful, it must be clear, well-organized, up-to-date, and easily accessible.</p><h3 id="one-click-setup-for-new-projects">One-click setup for new projects</h3><p>When setting up a new project, several boilerplate tasks are required to configure a source code repository, establish a project template, configure deployment pipelines, and set up associated tools. Teams often have established standards, but manual setup means projects unintentionally drift from the target setup.</p><p>One-click automation helps teams set up a walking skeleton with sample test projects, builds, and deployment automation. This ensures a consistent baseline and speeds up the time to start writing meaningful code.</p><h3 id="infrastructure-automation">Infrastructure automation</h3><p>Traditional ClickOps infrastructure is hand-crafted and often drifts from the intended configuration over time. Environments may be set up differently, which means problems surface only in one environment and not another. 
Equally, two servers in the same environment with the same intended purpose may be configured differently, making troubleshooting problems more challenging.</p><p>Infrastructure automation solves these problems, making it easier to create new environments, spin up and tear down ephemeral (temporary) environments, and recover from major faults.</p><h2 id="evolving-your-platforms-pipelines">Evolving your platform’s pipelines</h2><p>Whether you choose to introduce features according to this pattern or decide to approach things differently, it is advisable to take an evolutionary approach. Delivering a working solution that covers the flow of changes from commit to production brings early value. The evolution of the platform enhances the flow and incorporates broader concerns.</p><p>Your organization may have additional compliance or regulatory requirements that could become part of a “compliant pipeline”, or you may have a heavyweight change approval process you could streamline with an “approved pipeline”.</p><p>Regardless of the requirements you choose to bring under the platform’s capabilities, you’ll be more successful if you deliver a working pipeline and evolve it to add additional features.</p><p>Happy deployments!</p>]]></content> </entry> <entry> <title>Announcing Process Templates Public Preview</title> <link href="https://octopus.com/blog/process-templates" /> <id>https://octopus.com/blog/process-templates</id> <published>2025-10-10T00:00:00.000Z</published> <updated>2025-10-10T00:00:00.000Z</updated> <summary>A blog post outlining our launch of process templates in public preview.</summary> <author> <name>Venkatesh Vasudevan, Octopus Deploy</name> </author> <content type="html"><![CDATA[<p>Today, we’re excited to introduce Process Templates Public Preview, a powerful new feature designed to help teams harmonize their pipelines and reduce duplication across projects and teams. 
Process Templates enable you to easily create reusable, standardized deployment process building blocks.</p><h2 id="what-are-process-templates">What are Process Templates?</h2><p>Process Templates are reusable sets of deployment steps that can be shared across multiple Spaces in Octopus Deploy. Instead of copying and pasting deployment processes across teams and applications, which often leads to configuration drift, unnecessary duplication, and operational debt, you create a single source of truth that any project can consume. By abstracting your best practices for deployments into Process Templates, you make it easy for teams to follow standards and accelerate delivery.</p><h2 id="why-use-process-templates">Why use Process Templates?</h2><ul><li><strong>Reduce duplication:</strong> Update Process Templates from one place, Platform Hub, and have the changes reflected everywhere.</li><li><strong>Consistency at scale:</strong> Keep your pipelines consistent no matter how many teams or projects use them.</li><li><strong>Safe and secure delivery:</strong> Centralizing important deployment patterns in Process Templates ensures that your developers automatically deploy safely and securely whenever they use a Process Template.</li><li><strong>Faster onboarding:</strong> Developers no longer need to worry about misconfiguring deployment processes. 
They can consume a trusted, version-controlled Process Template that they can rely on to be updated with best practices.</li><li><strong>Shared responsibility:</strong> Empower teams with ownership over their process, maintaining flexibility while reusing best practices, ensuring a collaborative approach to deployments.</li></ul><h2 id="how-to-get-started-with-process-templates">How to get started with Process Templates</h2><p>There are several common use cases for Process Templates that show how this feature improves your deployments.</p><h3 id="templating-part-of-a-deployment-process">Templating part of a deployment process</h3><p>Platform engineers may expect that each project includes manual approval or email notification steps in every process. Each team might build its manual approval steps and email notifications from scratch or copy them from another project. Over time, these steps diverge in configuration, leading to drift and deployment errors. Keeping these governance steps consistent across multiple pipelines requires repetitive work and is prone to errors.</p><p>With Process Templates, Platform Engineers can create a template containing the correct approval steps for every deployment process and share it with every Space. This template can be consumed in any project and added to an existing deployment process. The Process Template is customized to fit the project’s needs and will receive updates as it is updated in Platform Hub, ensuring developers can easily stay aligned with best practices.</p><h3 id="templating-an-entire-deployment-process">Templating an entire deployment process</h3><p>Some companies may use a microservices architecture, which is replicated across multiple projects with only the configuration changing. Even though the deployment steps are identical, platform teams must create and maintain separate processes for each service, customizing configuration values. 
Over time, these processes drift as it isn’t easy to update every pipeline with the latest changes. With Process Templates, Platform Engineers define the entire deployment process once, with parameterized values for all configuration differences (such as Docker image names, environment variables, or secrets). Each microservice project then consumes this template, supplying its configuration through parameters on the Process Template.</p><h3 id="self-service-deployment-processes-for-application-teams">Self-service deployment processes for application teams</h3><p>Your company may have many application teams building new business components such as Web APIs, Frontend applications, or Storage components. You want to empower your application teams to go from zero to production-ready deployments in minutes, without waiting on team members or copying and pasting reference projects. With Process Templates, the platform team can create a library of standardized templates for each supported component type. These templates encode best practices for deploying business components. 
Application teams can pick the template matching their component type, fill out information that tailors the template to their project, and be ready to deploy to production with ease.</p><div class="hint"><ul><li>For more information on use cases, please visit our <a href="https://octopus.com/use-case/platform-hub">use cases page</a> on our website.</li><li>For a demo of process templates, you can watch our <a href="https://www.youtube.com/watch?v=TuendU1wDPw">YouTube video</a>.</li></ul></div><h2 id="how-do-process-templates-work-in-octopus">How do Process Templates work in Octopus?</h2><p>Process Templates are simple to set up and work similarly to a regular deployment process.</p><p><strong>Prerequisites:</strong></p><ul><li>You must be on an Enterprise license.</li><li>You must visit Platform Hub and connect Octopus to a Git repository.</li></ul><p>Here’s a quick rundown of how they work:</p><ol><li><p>Add a Process Template from the Process Templates interface in Platform Hub.</p><figure><p><img src="/blog/img/process-templates/Add-Process-Template.webp" alt="Platform Hub interface that allows users to insert process templates"></p></figure></li><li><p>Add an Octopus built-in step to your deployment process. This step is similar to the existing Process Editor in your project.</p><figure><p><img src="/blog/img/process-templates/Add-Step-Process-Template.webp" alt="Add step to Process template"></p></figure></li><li><p>You can use a value or a parameter when filling out a step. To set up a parameter, visit the “Add Parameter” experience on a Process Template.</p><figure><p><img src="/blog/img/process-templates/Add-Process-Template-Parameter.webp" alt="Add step to Process template"></p></figure><figure><p><img src="/blog/img/process-templates/Add-Parameter-Dialog.webp" alt="Add step to Process template"></p></figure></li><li><p>After you’ve added steps and configured parameters, you’ll need to commit, publish, and share the template. 
You must commit and publish a new version each time you change the template.</p><figure><p><img src="/blog/img/process-templates/Process-Template-Commit-Flow.png" alt="Add step to Process template"></p></figure></li><li><p>To use a Process Template in a project, you must add it via the “Add Step” experience. You can select the relevant Process Template from the dropdown and set what updates you’d like to receive.</p><figure><p><img src="/blog/img/process-templates/Process-Template-Add-Step-Experience.png" alt="Add step to Process template"></p></figure></li><li><p>In the Parameters tab, fill in the required parameters for the Process Template, and your Process Template is ready for deployment.</p><figure><p><img src="/blog/img/process-templates/Process-Template-In-Editor.png" alt="Add step to Process template"></p></figure></li></ol><div class="hint"><p>To find more in-depth information for Platform Hub, please visit <a href="https://octopus.com/docs/platform-hub">our docs</a></p></div><h2 id="conclusion">Conclusion</h2><p>If your current Platform Engineering approach involves building and maintaining everything yourself, we believe process templates are a more effective solution.</p><p>Process Templates are available to all Octopus Enterprise Tier customers. 
An installation guide for self-hosted customers can be found on our <a href="https://octopus.com/docs/platform-hub/installation-guide">installation guide docs page</a>.</p><p>Happy deployments!</p>]]></content> </entry> <entry> <title>Resilient AI agents with MCP: Timeout and retry strategies</title> <link href="https://octopus.com/blog/mcp-timeout-retry" /> <id>https://octopus.com/blog/mcp-timeout-retry</id> <published>2025-10-03T00:00:00.000Z</published> <updated>2025-10-03T00:00:00.000Z</updated> <summary>Learn how to add timeout and retry strategies to your AI agents using the Model Context Protocol (MCP) to enhance their reliability and performance when interacting with external systems.</summary> <author> <name>Matthew Casperson, Octopus Deploy</name> </author> <content type="html"><![CDATA[<p>MCP reduces the barrier to entry for developers and organizations looking to automate workflows across multiple systems. But while it is possible to build a functional AI agent with just a few lines of code, production-grade systems need to be resilient and handle failures gracefully.</p><p>In this post we’ll explore the options available in Langchain and Python to add timeout, retry, and circuit breaker strategies to your AI agents using MCP.</p><h2 id="why-add-timeout-and-retry-strategies">Why add timeout and retry strategies?</h2><p>By their very nature, MCP clients (and the AI agents that implement them) add the most value when they are orchestrating multiple platforms. However, as the number of external systems increases, so does the likelihood of failure. External systems may be unavailable, slow to respond, or return errors. Our AI agent must be resilient and handle these failures gracefully.</p><p>Distributed systems have adopted a common set of patterns to handle failures, including timeouts, retries, and circuit breakers. 
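Before diving into the Langchain-specific plumbing, the retry pattern itself is small enough to sketch with nothing but the standard library. This is an illustration of the pattern only, not the approach used later in this post; <code>flaky_call</code> is a hypothetical stand-in for a call to an external system:

```python
import time


def with_retries(attempts=3, wait=1.0):
    """Minimal retry decorator: re-run the call on any exception."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts:
                        raise  # out of attempts: surface the failure
                    time.sleep(wait)
        return wrapper
    return decorator


failures = {"count": 0}


@with_retries(attempts=3, wait=0.1)
def flaky_call():
    # Hypothetical external call that fails twice before succeeding
    failures["count"] += 1
    if failures["count"] < 3:
        raise RuntimeError("transient error")
    return "ok"


print(flaky_call())  # succeeds on the third attempt
```

Production libraries add details this sketch omits, such as exponential backoff, jitter, and async support.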
These patterns help to ensure that our AI agents can continue to function, or fail gracefully, even when external systems are experiencing issues.</p><h2 id="retry-strategies-in-langchain">Retry strategies in Langchain</h2><p>Langchain interacts with MCP servers via tools. More specifically, tools are instances of the <a href="https://python.langchain.com/api_reference/core/tools/langchain_core.tools.structured.StructuredTool.html">StructuredTool</a> class. Langchain builds instances of the StructuredTool class for you when you call the <code>get_tools</code> function on the <a href="https://v03.api.js.langchain.com/classes/_langchain_mcp_adapters.MultiServerMCPClient.html">MultiServerMCPClient</a> class.</p><p>In theory, Langchain has the ability to define retry strategies for tools. Specifically, the <a href="https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable">Runnable</a> class has a <code>with_retry</code> function to add retry logic to any Runnable. However, I was unable to take the <code>StructuredTool</code> instances returned by <code>get_tools</code> and add retry logic to them via the <code>with_retry</code> function, and there is no built-in support for adding retry strategies to the tools generated by the <code>MultiServerMCPClient</code> class. The inability to customize the generated tools is reflected in <a href="https://github.com/langchain-ai/langchain-mcp-adapters/issues/263">this issue</a>, which documents the limitation around error handling and MCP tools.</p><p>To work around this limitation, we will instead use the <a href="https://en.wikipedia.org/wiki/Proxy_pattern">Gang of Four proxy pattern</a> to create a wrapper around the <code>StructuredTool</code> instances returned by the <code>get_tools</code> function. It is inside this wrapper that we will implement our retry logic.</p><p>Fortunately, we do not have to implement the proxy or retry logic from scratch. 
The <a href="https://pypi.org/project/wrapt/">wrapt</a> and <a href="https://pypi.org/project/tenacity/">tenacity</a> libraries make implementing the proxy and retry logic straightforward:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="python"><code><span class="line"><span style="color:#795E26">@wrapt.patch_function_wrapper</span><span style="color:#000000">(</span><span style="color:#A31515">"langchain_core.tools"</span><span style="color:#000000">, </span><span style="color:#A31515">"StructuredTool.ainvoke"</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#795E26">@retry</span><span style="color:#000000">(</span></span><span class="line"><span style="color:#001080"> stop</span><span style="color:#000000">=stop_after_attempt(</span><span style="color:#098658">3</span><span style="color:#000000">),</span></span><span class="line"><span style="color:#001080"> wait</span><span style="color:#000000">=wait_fixed(</span><span style="color:#098658">1</span><span style="color:#000000">),</span></span><span class="line"><span style="color:#001080"> retry</span><span style="color:#000000">=retry_if_exception_type(</span><span style="color:#267F99">Exception</span><span style="color:#000000">),</span></span><span class="line"><span style="color:#000000">)</span></span><span class="line"><span style="color:#0000FF">async</span><span style="color:#0000FF"> def</span><span style="color:#795E26"> structuredtool_ainvoke</span><span style="color:#000000">(</span><span style="color:#001080">wrapped</span><span style="color:#000000">, </span><span style="color:#001080">instance</span><span style="color:#000000">, </span><span style="color:#001080">args</span><span style="color:#000000">, </span><span style="color:#001080">kwargs</span><span style="color:#000000">):</span></span><span class="line"><span style="color:#795E26"> print</span><span 
style="color:#000000">(</span><span style="color:#A31515">"StructuredTool.ainvoke called"</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#AF00DB"> await</span><span style="color:#000000"> wrapped(*args, **kwargs)</span></span></code></pre><p>We intercept calls to the <code>ainvoke</code> function of the <code>StructuredTool</code> class by defining a function with the <code>@wrapt.patch_function_wrapper</code> annotation. This annotation takes two arguments: the module name and the function name.</p><p>The intercepted calls are then retried with the <code>@retry</code> annotation. This annotation takes several arguments to define the retry strategy. In this example, we will retry up to three times, waiting one second between each attempt, and retry on all exceptions.</p><p>Inside the function, we add some logging to confirm that our proxy is being called. We then call the wrapped function, which is the original <code>ainvoke</code> function of the <code>StructuredTool</code> class.</p><p>And that is it! The <code>wrapt</code> library will intercept all calls to the <code>ainvoke</code> function of any <code>StructuredTool</code> object generated by Langchain, and our retry logic will be applied via the <code>tenacity</code> library.</p><div class="hint"><p>One thing to watch out for when using the proxy strategy is that we are wrapping an async function. Not all retry and circuit breaker libraries support async functions. You’ll need to keep this in mind if you want to use other resilience libraries.</p></div><h2 id="timeouts-in-langchain">Timeouts in Langchain</h2><p>Timeouts can be defined through a parameter passed to the <code>ClientSession</code> constructor. 
The <code>MultiServerMCPClient</code> constructor exposes the <code>session_kwargs</code> argument whose values are passed to the <code>ClientSession</code> constructor.</p><p>This example demonstrates how to set a read timeout of 60 seconds for a specific MCP server:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="python"><code><span class="line"><span style="color:#000000">client = MultiServerMCPClient(</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#A31515"> "octopus"</span><span style="color:#000000">: {</span></span><span class="line"><span style="color:#A31515"> "command"</span><span style="color:#000000">: </span><span style="color:#A31515">"npx"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "args"</span><span style="color:#000000">: [</span></span><span class="line"><span style="color:#A31515"> "-y"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "@octopusdeploy/mcp-server"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "--api-key"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#000000"> os.getenv(</span><span style="color:#A31515">"PROD_OCTOPUS_APIKEY"</span><span style="color:#000000">),</span></span><span class="line"><span style="color:#A31515"> "--server-url"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#000000"> os.getenv(</span><span style="color:#A31515">"PROD_OCTOPUS_URL"</span><span style="color:#000000">),</span></span><span class="line"><span style="color:#000000"> ],</span></span><span class="line"><span style="color:#A31515"> "transport"</span><span style="color:#000000">: </span><span style="color:#A31515">"stdio"</span><span style="color:#000000">,</span></span><span 
class="line"><span style="color:#A31515"> "session_kwargs"</span><span style="color:#000000">: {</span><span style="color:#A31515">"read_timeout_seconds"</span><span style="color:#000000">: timedelta(</span><span style="color:#001080">seconds</span><span style="color:#000000">=</span><span style="color:#098658">60</span><span style="color:#000000">)},</span></span><span class="line"><span style="color:#000000"> },</span></span><span class="line"><span style="color:#A31515"> "zendesk"</span><span style="color:#000000">: {</span></span><span class="line"><span style="color:#A31515"> "command"</span><span style="color:#000000">: </span><span style="color:#A31515">"uv"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "args"</span><span style="color:#000000">: [</span></span><span class="line"><span style="color:#A31515"> "--directory"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "/home/matthew/Code/zendesk-mcp-server"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "run"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "zendesk"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#000000"> ],</span></span><span class="line"><span style="color:#A31515"> "transport"</span><span style="color:#000000">: </span><span style="color:#A31515">"stdio"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#000000"> },</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000"> )</span></span></code></pre><h2 id="circuit-breakers-in-langchain">Circuit breakers in Langchain</h2><p>Circuit breakers are used to prevent an application from repeatedly trying to execute an operation that is likely to fail. 
This prevents downstream services that are already struggling from being overwhelmed with requests.</p><p>We’ll make use of the <a href="https://pypi.org/project/purgatory/">purgatory</a> library to implement a circuit breaker for our MCP tools.</p><p>The first step is to create an <code>AsyncCircuitBreakerFactory</code> instance. This instance must be long-lived and shared between requests:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="python"><code><span class="line"><span style="color:#000000">circuitbreaker = AsyncCircuitBreakerFactory(</span><span style="color:#001080">default_threshold</span><span style="color:#000000">=</span><span style="color:#098658">3</span><span style="color:#000000">)</span></span></code></pre><div class="hint"><p>Circuit breakers are only useful in long-lived applications, for example, a web server or a microservice. This is because the circuit breaker logic needs to maintain state about the number of recent failures. 
A short-lived application, such as a script that runs once and exits, will not benefit from a circuit breaker.</p></div><p>Similar to the retry logic, we’ll use the <code>wrapt</code> library to create a proxy around the <code>ainvoke</code> function of the <code>StructuredTool</code> class, and use the <code>@circuitbreaker</code> annotation to apply the circuit breaker logic:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="python"><code><span class="line"><span style="color:#795E26">@wrapt.patch_function_wrapper</span><span style="color:#000000">(</span><span style="color:#A31515">"langchain_core.tools"</span><span style="color:#000000">, </span><span style="color:#A31515">"StructuredTool.ainvoke"</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#795E26">@circuitbreaker</span><span style="color:#000000">(</span><span style="color:#A31515">"StructuredTool.ainvoke"</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#0000FF">async</span><span style="color:#0000FF"> def</span><span style="color:#795E26"> structuredtool_ainvoke</span><span style="color:#000000">(</span><span style="color:#001080">wrapped</span><span style="color:#000000">, </span><span style="color:#001080">instance</span><span style="color:#000000">, </span><span style="color:#001080">args</span><span style="color:#000000">, </span><span style="color:#001080">kwargs</span><span style="color:#000000">):</span></span><span class="line"><span style="color:#795E26"> print</span><span style="color:#000000">(</span><span style="color:#A31515">"StructuredTool.ainvoke called"</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#AF00DB"> await</span><span style="color:#000000"> wrapped(*args, **kwargs)</span></span></code></pre><h2 id="simulating-failures">Simulating 
failures</h2><p>To simulate failures, we can create a proxy around the <code>ainvoke</code> function of the <code>BaseTool</code> class. The <code>StructuredTool</code> class inherits from the <code>BaseTool</code> class, and the <code>ainvoke</code> function of the <code>BaseTool</code> class is called by the <code>ainvoke</code> function of the <code>StructuredTool</code> class. This gives us a convenient place to simulate failures for all tools.</p><p>Here we randomly raise an exception two-thirds of the time to simulate a transient error:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="python"><code><span class="line"><span style="color:#795E26">@wrapt.patch_function_wrapper</span><span style="color:#000000">(</span><span style="color:#A31515">"langchain_core.tools"</span><span style="color:#000000">, </span><span style="color:#A31515">"BaseTool.ainvoke"</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#0000FF">async</span><span style="color:#0000FF"> def</span><span style="color:#795E26"> basetool_ainvoke</span><span style="color:#000000">(</span><span style="color:#001080">wrapped</span><span style="color:#000000">, </span><span style="color:#001080">instance</span><span style="color:#000000">, </span><span style="color:#001080">args</span><span style="color:#000000">, </span><span style="color:#001080">kwargs</span><span style="color:#000000">):</span></span><span class="line"><span style="color:#795E26"> print</span><span style="color:#000000">(</span><span style="color:#A31515">"BaseTool.ainvoke called"</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#AF00DB"> if</span><span style="color:#000000"> random.randint(</span><span style="color:#098658">1</span><span style="color:#000000">, </span><span style="color:#098658">3</span><span style="color:#000000">) != </span><span 
style="color:#098658">3</span><span style="color:#000000">:</span></span><span class="line"><span style="color:#795E26"> print</span><span style="color:#000000">(</span><span style="color:#A31515">"Simulated transient error"</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#AF00DB"> raise</span><span style="color:#267F99"> RuntimeError</span><span style="color:#000000">(</span><span style="color:#A31515">"Simulated transient error"</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#AF00DB"> await</span><span style="color:#000000"> wrapped(*args, **kwargs)</span></span></code></pre><p>If you have implemented a circuit breaker strategy, you should see that the MCP client eventually stops calling the MCP server after a few failures. If you have implemented a retry strategy with a high level of retries, you should see the prompt succeed as the retry library intercepts the exceptions and retries the request.</p><h2 id="conclusion">Conclusion</h2><p>Production-grade AI agents need to handle failures gracefully. By implementing timeout, retry, and circuit breaker strategies, we can ensure that our AI agents are resilient and can continue to function even when external systems are experiencing issues.</p><p>Langchain has some built-in support for timeouts, but implementing retry and circuit breaker strategies requires some additional work. 
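To see the circuit breaker behavior in isolation, here is a minimal standard-library sketch of the pattern, independent of Langchain and purgatory. After a threshold of consecutive failures, it fails fast rather than calling the downstream service again:

```python
class CircuitOpenError(Exception):
    """Raised when the breaker is open and the call is skipped."""


class CircuitBreaker:
    """Minimal circuit breaker: opens after `threshold` consecutive failures."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    def call(self, fn, *args, **kwargs):
        if self.failures >= self.threshold:
            # Open circuit: fail fast without hitting the struggling service
            raise CircuitOpenError("circuit open, call skipped")
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # a success closes the circuit again
        return result


breaker = CircuitBreaker(threshold=2)


def failing_service():
    # Stand-in for a downstream system that is currently unavailable
    raise RuntimeError("downstream unavailable")


results = []
for _ in range(4):
    try:
        breaker.call(failing_service)
    except RuntimeError:
        results.append("service error")
    except CircuitOpenError:
        results.append("skipped")

print(results)  # ['service error', 'service error', 'skipped', 'skipped']
```

Real implementations such as purgatory also track a recovery timeout so the circuit can half-open and probe the service again; this sketch omits that for brevity.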
By using the <code>wrapt</code>, <code>tenacity</code>, and <code>purgatory</code> libraries, we can easily add these strategies to our MCP tools via the proxy pattern.</p><p>Happy deployments!</p>]]></content> </entry> <entry> <title>Manage context window size with advanced AI agents</title> <link href="https://octopus.com/blog/advanced-ai-agents" /> <id>https://octopus.com/blog/advanced-ai-agents</id> <published>2025-10-02T00:00:00.000Z</published> <updated>2025-10-02T00:00:00.000Z</updated> <summary>Learn how to execute complex workflows using AI agents with Octopus Deploy while managing context window size limitations.</summary> <author> <name>Matthew Casperson, Octopus Deploy</name> </author> <content type="html"><![CDATA[<p>The promise of MCP is to expose many platforms and services to AI models, enabling complex queries and workflows to be executed with natural language prompts.</p><p>While it is tempting to believe that MCP clients can define arbitrarily complex workflows in a single prompt, in practice, the limitations of the current generation of LLMs present challenges that must be overcome. Specifically, the context window size of LLMs defines an upper limit on how much data a single MCP prompt can consume as part of a request.</p><p>In this post, we’ll explore strategies for managing context window size limitations when working with AI agents and Octopus Deploy.</p><h3 id="what-is-context-window-size">What is context window size?</h3><p>You can think of the context window as being the amount of information an LLM can process.</p><p>Context windows are measured in tokens, which are chunks of text that can be as short as one character or as long as one word. For example, the word “chat” could be one token, while the word “chatting” could be two tokens (“chatt” and “ing”). 
While there is not an exact ratio between tokens and words or characters (and the ratio changes between LLMs), a rough approximation is that one token equals four characters of English text. The Amazon documentation for <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/titan-embedding-models.html">Amazon Titan models</a> notes that:</p><blockquote><p>The characters to token ratio in English is 4.7 characters per token, on average.</p></blockquote><p>The context window size is dependent on the specific LLM being used. For example, OpenAI’s gpt-5 model has a context window size of 400,000 tokens (272,000 tokens for input and 128,000 tokens for output), while some variations of the gpt-4 models have a context window size of 32,000 tokens.</p><p>These sound like large numbers, but it doesn’t take long to exhaust the context window size when working with large blobs of text or API results. JSON in particular consumes a lot of tokens. In this screenshot from the <a href="https://platform.openai.com/tokenizer">OpenAI Tokenizer tool</a>, you can see that individual quotes and braces are represented as individual tokens:</p><p><img src="/blog/img/advanced-ai-agents/openai-tokenizer.png" alt="A screenshot of a JSON blob highlighting individual tokens"></p><h3 id="prompts-that-exhaust-the-context-window">Prompts that exhaust the context window</h3><p>Consider the following prompt:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="text"><code><span class="line"><span>In Octopus, get the last 10 releases deployed to the "Production" environment in the "Octopus Server" space.</span></span><span class="line"><span>Get the releases from the deployments.</span></span><span class="line"><span>In ZenDesk, get the last 100 tickets and their comments.</span></span><span class="line"><span>Create a report summarizing the issues reported by customers in the tickets. 
</span></span><span class="line"><span>You must only consider tickets that mention the Octopus release versions. </span></span><span class="line"><span>You must only consider support tickets raised by customers. </span></span><span class="line"><span>You must use your best judgment to identify support tickets.</span></span></code></pre><p>The intention here is to write a report that summarizes customer issues based on the last 10 releases of an application deployed to production. It is simple enough to write this prompt, but behind the scenes, the LLM must execute multiple API calls:</p><ul><li>Convert the space name to a space ID</li><li>Convert the environment name to an environment ID</li><li>Get the last 10 deployments to the environment</li><li>Get the details of the releases from the deployments</li><li>Get the last 100 tickets from ZenDesk</li></ul><p>Each of these API calls returns token-gobbling JSON results that are collected and passed to the LLM to generate the report. The JSON blobs returned by Octopus can be quite verbose, and it is not hard to see how long support tickets can exhaust the context window size, especially given the tendency of email clients to include the entirety of a previous email chain in each reply.</p><p>Even if we don’t exhaust the context window size, we may still benefit from reducing the amount of data passed to the LLM, as this <a href="https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents">post from Anthropic</a> notes:</p><blockquote><p>Studies on needle-in-a-haystack style benchmarking have uncovered the concept of context rot: as the number of tokens in the context window increases, the model’s ability to accurately recall information from that context decreases.</p></blockquote><p>When prompts like this work, they seem almost magical. 
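A rough pre-flight check can estimate whether a payload is likely to fit the window before it is sent to the LLM. This is a sketch only, using the approximate characters-per-token ratio discussed above; the payload and budget below are illustrative, and real tokenizers vary by model:

```python
import json


def estimate_tokens(text, chars_per_token=4):
    """Rough token estimate from character count (heuristic only)."""
    return len(text) // chars_per_token


# Illustrative stand-in for a verbose JSON API response
payload = json.dumps(
    {"Items": [{"Id": f"Deployments-{i}", "ReleaseId": f"Releases-{i}"} for i in range(100)]}
)

input_budget = 272_000  # e.g. gpt-5's input token allowance mentioned above
estimate = estimate_tokens(payload)
print(f"~{estimate} of {input_budget} input tokens used")
```

For precise counts you would use the model's own tokenizer, but even this crude estimate is enough to flag payloads that clearly will not fit.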
But when they fail due to context window size limitations, we need to implement some advanced strategies to help manage the context window size.</p><h3 id="strategies-for-managing-context-window-size">Strategies for managing context window size</h3><p>As we saw in the <a href="https://octopus.com/blog/agentic-ai-with-mcp">previous post</a>, LangChain provides the ability to extract tools from MCP servers and use them in agentic workflows. We can add additional custom tools to this collection to perform operations that help manage context window size.</p><p>Consider this tool definition:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="python"><code><span class="line"><span style="color:#795E26">@tool</span></span><span class="line"><span style="color:#0000FF">def</span><span style="color:#795E26"> discard_deployments</span><span style="color:#000000">(</span></span><span class="line"><span style="color:#001080"> tool_call_id</span><span style="color:#000000">: Annotated[</span><span style="color:#267F99">str</span><span style="color:#000000">, InjectedToolCallId],</span></span><span class="line"><span style="color:#001080"> state</span><span style="color:#000000">: Annotated[</span><span style="color:#267F99">dict</span><span style="color:#000000">, InjectedState],</span></span><span class="line"><span style="color:#000000">) -> Command:</span></span><span class="line"><span style="color:#A31515"> """Discards the list of deployments."""</span></span><span class="line"></span><span class="line"><span style="color:#0000FF"> def</span><span style="color:#795E26"> trim_release</span><span style="color:#000000">(</span><span style="color:#001080">release</span><span style="color:#000000">):</span></span><span class="line"><span style="color:#AF00DB"> if</span><span style="color:#795E26"> isinstance</span><span style="color:#000000">(release, ToolMessage) </span><span 
style="color:#0000FF">and</span><span style="color:#000000"> release.name == </span><span style="color:#A31515">"list_deployments"</span><span style="color:#000000">:</span></span><span class="line"><span style="color:#000000"> release.name = </span><span style="color:#A31515">"trimmed_list_deployments"</span></span><span class="line"><span style="color:#000000"> release.content = </span><span style="color:#A31515">""</span></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#000000"> release</span></span><span class="line"></span><span class="line"><span style="color:#000000"> trim_messages = [trim_release(msg) </span><span style="color:#AF00DB">for</span><span style="color:#000000"> msg </span><span style="color:#AF00DB">in</span><span style="color:#000000"> state[</span><span style="color:#A31515">"messages"</span><span style="color:#000000">]]</span></span><span class="line"></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#000000"> Command(</span></span><span class="line"><span style="color:#001080"> update</span><span style="color:#000000">={</span></span><span class="line"><span style="color:#A31515"> "messages"</span><span style="color:#000000">: [</span></span><span class="line"><span style="color:#000000"> RemoveMessage(</span><span style="color:#001080">id</span><span style="color:#000000">=REMOVE_ALL_MESSAGES),</span></span><span class="line"><span style="color:#000000"> *trim_messages,</span></span><span class="line"><span style="color:#000000"> ToolMessage(</span></span><span class="line"><span style="color:#A31515"> "Discarded list of deployments"</span><span style="color:#000000">, </span><span style="color:#001080">tool_call_id</span><span style="color:#000000">=tool_call_id</span></span><span class="line"><span style="color:#000000"> ),</span></span><span class="line"><span style="color:#000000"> ],</span></span><span class="line"><span style="color:#000000"> 
}</span></span><span class="line"><span style="color:#000000">    )</span></span></code></pre><p>This tool takes advantage of a number of advanced features of LangChain:</p><ul><li>The <a href="https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.InjectedToolCallId.html">InjectedToolCallId</a> annotation to inject the unique ID of the tool call</li><li>The <a href="https://langchain-ai.github.io/langgraph/reference/agents/#langgraph.prebuilt.tool_node.InjectedState">InjectedState</a> annotation to inject the current state of the agent</li><li>Returning a <a href="https://langchain-ai.github.io/langgraph/reference/types/#langgraph.types.Command">Command</a> object to update the state of the agent</li><li>Using the <a href="https://python.langchain.com/api_reference/core/messages/langchain_core.messages.modifier.RemoveMessage.html">RemoveMessage</a> class to remove all messages from the agent’s state</li></ul><p>Let’s go through this function line by line.</p><p>We start by defining a tool. A tool is simply a function decorated with the <code>@tool</code> decorator. The function docstring is used to describe the tool to the LLM, and is how the LLM knows when to call the tool based on the plain text instructions in the prompt.</p><p>This tool has two parameters, <code>tool_call_id</code> and <code>state</code>.</p><p>The <code>tool_call_id</code> parameter is annotated with the <code>InjectedToolCallId</code> annotation, which tells LangChain to inject the unique ID of the tool call into this parameter.</p><p>The <code>state</code> parameter is annotated with the <code>InjectedState</code> annotation, which tells LangChain to inject the current state of the agent into this parameter. 
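This injection pattern is built on standard Python typing: the extra metadata inside <code>Annotated</code> is what lets a framework decide which arguments it should supply itself rather than expose to the LLM. A minimal standard-library illustration of the mechanism, using a stand-in marker class rather than the real LangChain type:

```python
from typing import Annotated, get_type_hints

class InjectedState:
    """Stand-in marker class for illustration; not the real LangChain type."""

def my_tool(query: str, state: Annotated[dict, InjectedState]) -> str:
    return f"{query}: {len(state['messages'])} messages in state"

# A framework can read the annotation metadata to find injected parameters.
hints = get_type_hints(my_tool, include_extras=True)
injected = [
    name
    for name, hint in hints.items()
    if any(meta is InjectedState for meta in getattr(hint, "__metadata__", ()))
]
print(injected)  # only "state" carries the marker
```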
It is this state that we want to modify:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="python"><code><span class="line"><span style="color:#795E26">@tool</span></span><span class="line"><span style="color:#0000FF">def</span><span style="color:#795E26"> discard_deployments</span><span style="color:#000000">(</span></span><span class="line"><span style="color:#001080"> tool_call_id</span><span style="color:#000000">: Annotated[</span><span style="color:#267F99">str</span><span style="color:#000000">, InjectedToolCallId],</span></span><span class="line"><span style="color:#001080"> state</span><span style="color:#000000">: Annotated[</span><span style="color:#267F99">dict</span><span style="color:#000000">, InjectedState],</span></span><span class="line"><span style="color:#000000">) -> Command:</span></span><span class="line"><span style="color:#A31515"> """Discards the list of deployments."""</span></span></code></pre><p>A nested function called <code>trim_release</code> is defined to process each message in the agent’s state. If the message is a <code>ToolMessage</code> with the name <code>list_deployments</code> (this is the name of the tool exposed by the Octopus MCP server), it changes the name to <code>trimmed_list_deployments</code> and clears the content. 
This effectively removes the verbose JSON content from the message while retaining a record that the tool was called:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="python"><code><span class="line"><span style="color:#0000FF"> def</span><span style="color:#795E26"> trim_release</span><span style="color:#000000">(</span><span style="color:#001080">release</span><span style="color:#000000">):</span></span><span class="line"><span style="color:#AF00DB"> if</span><span style="color:#795E26"> isinstance</span><span style="color:#000000">(release, ToolMessage) </span><span style="color:#0000FF">and</span><span style="color:#000000"> release.name == </span><span style="color:#A31515">"list_deployments"</span><span style="color:#000000">:</span></span><span class="line"><span style="color:#000000"> release.name = </span><span style="color:#A31515">"trimmed_list_deployments"</span></span><span class="line"><span style="color:#000000"> release.content = </span><span style="color:#A31515">""</span></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#000000"> release</span></span></code></pre><p>We then use a list comprehension to apply the <code>trim_release</code> function to each message in the agent’s state, producing a new list of messages with the deployments trimmed:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="python"><code><span class="line"><span style="color:#000000">trim_messages = [trim_release(msg) </span><span style="color:#AF00DB">for</span><span style="color:#000000"> msg </span><span style="color:#AF00DB">in</span><span style="color:#000000"> state[</span><span style="color:#A31515">"messages"</span><span style="color:#000000">]]</span></span></code></pre><p>We then return a <code>Command</code> object that updates the agent’s state. 
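The important detail is that this update replaces the message history rather than appending to it. Stripped of the LangGraph types, the merge semantics look roughly like this simplified model, which uses a string sentinel as a stand-in for the real API:

```python
REMOVE_ALL_MESSAGES = "__remove_all__"  # stand-in for LangGraph's sentinel value

def apply_update(state: dict, update: dict) -> dict:
    """Simplified model of how a state update is merged into agent state."""
    new_messages = update["messages"]
    if new_messages and new_messages[0] == REMOVE_ALL_MESSAGES:
        # A leading remove-all marker swaps out the entire history.
        return {"messages": new_messages[1:]}
    # Default behavior: new messages are appended to the existing history.
    return {"messages": state["messages"] + new_messages}

state = {"messages": ["prompt", "huge deployments JSON blob", "reply"]}
update = {"messages": [REMOVE_ALL_MESSAGES, "prompt", "", "reply", "Discarded list of deployments"]}
print(apply_update(state, update)["messages"])
```

Without the leading marker, the trimmed messages would simply pile up alongside the originals, defeating the purpose of the tool.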
<code>Command</code> objects allow us to update the state of the agent. It is the messages in the state that are placed in the context window, so by modifying these messages, we can manage the context window size.</p><p>By default, messages returned from a tool are appended to the existing messages in the state. However, in this case, we want to remove all existing messages and replace them with our trimmed messages. We do this by including a <code>RemoveMessage</code> object with the special ID <code>REMOVE_ALL_MESSAGES</code>, which tells LangChain to remove all existing messages from the state.</p><p>Finally, we include our trimmed messages and a new <code>ToolMessage</code> indicating that the deployments have been discarded. This message includes the <code>tool_call_id</code> so that it can be traced back to the specific tool call:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="python"><code><span class="line"><span style="color:#AF00DB">return</span><span style="color:#000000"> Command(</span></span><span class="line"><span style="color:#001080"> update</span><span style="color:#000000">={</span></span><span class="line"><span style="color:#A31515"> "messages"</span><span style="color:#000000">: [</span></span><span class="line"><span style="color:#000000"> RemoveMessage(</span><span style="color:#001080">id</span><span style="color:#000000">=REMOVE_ALL_MESSAGES),</span></span><span class="line"><span style="color:#000000"> *trim_messages,</span></span><span class="line"><span style="color:#000000"> ToolMessage(</span></span><span class="line"><span style="color:#A31515"> "Discarded list of deployments"</span><span style="color:#000000">, </span><span style="color:#001080">tool_call_id</span><span style="color:#000000">=tool_call_id</span></span><span class="line"><span style="color:#000000"> ),</span></span><span class="line"><span style="color:#000000"> 
],</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000"> )</span></span></code></pre><p>Our custom tool is added to the collection of tools exported from the MCP servers:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="python"><code><span class="line"><span style="color:#000000">tools = </span><span style="color:#AF00DB">await</span><span style="color:#000000"> client.get_tools()</span></span><span class="line"><span style="color:#000000">tools.append(discard_deployments)</span></span></code></pre><p>And we can call the new tool from our prompt with the instruction <code>Discard the list of deployments</code>:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="text"><code><span class="line"><span>In Octopus, get the last 10 releases deployed to the "Production" environment in the "Octopus Server" space.</span></span><span class="line"><span>Get the releases from the deployments.</span></span><span class="line"><span>Discard the list of deployments.</span></span><span class="line"><span>In ZenDesk, get the last 100 tickets and their comments.</span></span><span class="line"><span>Create a report summarizing the issues reported by customers in the tickets. </span></span><span class="line"><span>You must only consider tickets that mention the Octopus release versions. </span></span><span class="line"><span>You must only consider support tickets raised by customers. 
</span></span><span class="line"><span>You must use your best judgement to identify support tickets.</span></span></code></pre><p>Now, once the LLM has called the Octopus MCP server to get the list of deployments, and retrieved the releases from those deployments, it calls our custom tool to discard the deployments JSON blob from the state, which in turn means those messages are not passed to the LLM as part of the context window. The deployments were never needed for the final report, so we have reduced the amount of data passed to the LLM without losing any important information.</p><p>There are a number of other opportunities to reduce the size of the messages passed to the LLM. The JSON blobs related to releases can be replaced by the release versions, and the ZenDesk tickets can be trimmed.</p><h2 id="full-source-code">Full source code</h2><p>This is the complete source code, including the additional custom tools used to trim the release details to just the version (<code>trim_releases_to_version</code>) and trim the ticket descriptions to 1000 characters (<code>trim_ticket_descriptions</code>), and the additional instructions in the prompt to call these tools:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="python"><code><span class="line"><span style="color:#AF00DB">import</span><span style="color:#000000"> asyncio</span></span><span class="line"><span style="color:#AF00DB">import</span><span style="color:#000000"> json</span></span><span class="line"><span style="color:#AF00DB">import</span><span style="color:#000000"> os</span></span><span class="line"><span style="color:#AF00DB">import</span><span style="color:#000000"> re</span></span><span class="line"><span style="color:#AF00DB">from</span><span style="color:#000000"> typing </span><span style="color:#AF00DB">import</span><span style="color:#000000"> Annotated</span></span><span class="line"></span><span 
class="line"><span style="color:#AF00DB">from</span><span style="color:#000000"> langchain_core.messages </span><span style="color:#AF00DB">import</span><span style="color:#000000"> RemoveMessage, ToolMessage, trim_messages</span></span><span class="line"><span style="color:#AF00DB">from</span><span style="color:#000000"> langchain_core.tools </span><span style="color:#AF00DB">import</span><span style="color:#000000"> tool, InjectedToolCallId</span></span><span class="line"><span style="color:#AF00DB">from</span><span style="color:#000000"> langchain_mcp_adapters.client </span><span style="color:#AF00DB">import</span><span style="color:#000000"> MultiServerMCPClient</span></span><span class="line"><span style="color:#AF00DB">from</span><span style="color:#000000"> langchain_azure_ai.chat_models </span><span style="color:#AF00DB">import</span><span style="color:#000000"> AzureAIChatCompletionsModel</span></span><span class="line"><span style="color:#AF00DB">from</span><span style="color:#000000"> langgraph.graph.message </span><span style="color:#AF00DB">import</span><span style="color:#000000"> REMOVE_ALL_MESSAGES</span></span><span class="line"><span style="color:#AF00DB">from</span><span style="color:#000000"> langgraph.prebuilt </span><span style="color:#AF00DB">import</span><span style="color:#000000"> create_react_agent, InjectedState</span></span><span class="line"><span style="color:#AF00DB">from</span><span style="color:#000000"> langgraph.types </span><span style="color:#AF00DB">import</span><span style="color:#000000"> Command</span></span><span class="line"></span><span class="line"></span><span class="line"><span style="color:#0000FF">def</span><span style="color:#795E26"> remove_line_padding</span><span style="color:#000000">(</span><span style="color:#001080">text</span><span style="color:#000000">):</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#A31515"> Remove leading and trailing 
whitespace from each line in the text.</span></span><span class="line"><span style="color:#A31515"> :param text: The text to process.</span></span><span class="line"><span style="color:#A31515"> :return: The text with leading and trailing whitespace removed from each line.</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#A31515"> "</span><span style="color:#EE0000">\n</span><span style="color:#A31515">"</span><span style="color:#000000">.join(line.strip() </span><span style="color:#AF00DB">for</span><span style="color:#000000"> line </span><span style="color:#AF00DB">in</span><span style="color:#000000"> text.splitlines() </span><span style="color:#AF00DB">if</span><span style="color:#000000"> line.strip())</span></span><span class="line"></span><span class="line"></span><span class="line"><span style="color:#0000FF">def</span><span style="color:#795E26"> remove_thinking</span><span style="color:#000000">(</span><span style="color:#001080">text</span><span style="color:#000000">):</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#A31515"> Remove <think>...</think> tags and their content from the text.</span></span><span class="line"><span style="color:#A31515"> :param text: The text to process.</span></span><span class="line"><span style="color:#A31515"> :return: The text with <think>...</think> tags and their content removed.</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#000000"> stripped_text = text.strip()</span></span><span class="line"><span style="color:#AF00DB"> if</span><span style="color:#000000"> stripped_text.startswith(</span><span style="color:#A31515">"<think>"</span><span style="color:#000000">) </span><span style="color:#0000FF">and</span><span style="color:#A31515"> "</think>"</span><span 
style="color:#0000FF"> in</span><span style="color:#000000"> stripped_text:</span></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#000000"> re.sub(</span><span style="color:#0000FF">r</span><span style="color:#811F3F">"<think>.</span><span style="color:#000000">*?</span><span style="color:#811F3F"></think>"</span><span style="color:#000000">, </span><span style="color:#A31515">""</span><span style="color:#000000">, stripped_text, </span><span style="color:#001080">flags</span><span style="color:#000000">=re.DOTALL)</span></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#000000"> stripped_text</span></span><span class="line"></span><span class="line"></span><span class="line"><span style="color:#0000FF">def</span><span style="color:#795E26"> response_to_text</span><span style="color:#000000">(</span><span style="color:#001080">response</span><span style="color:#000000">):</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#A31515"> Extract the content from the last message in the response.</span></span><span class="line"><span style="color:#A31515"> :param response: The response dictionary containing messages.</span></span><span class="line"><span style="color:#A31515"> :return: The content of the last message, or an empty string if no messages are present.</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#000000"> messages = response.get(</span><span style="color:#A31515">"messages"</span><span style="color:#000000">, [])</span></span><span class="line"><span style="color:#AF00DB"> if</span><span style="color:#0000FF"> not</span><span style="color:#000000"> messages </span><span style="color:#0000FF">or</span><span style="color:#795E26"> len</span><span style="color:#000000">(messages) == </span><span style="color:#098658">0</span><span 
style="color:#000000">:</span></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#A31515"> ""</span></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#000000"> messages.pop().content</span></span><span class="line"></span><span class="line"></span><span class="line"><span style="color:#795E26">@tool</span></span><span class="line"><span style="color:#0000FF">def</span><span style="color:#795E26"> trim_ticket_descriptions</span><span style="color:#000000">(</span></span><span class="line"><span style="color:#001080"> tool_call_id</span><span style="color:#000000">: Annotated[</span><span style="color:#267F99">str</span><span style="color:#000000">, InjectedToolCallId],</span></span><span class="line"><span style="color:#001080"> state</span><span style="color:#000000">: Annotated[</span><span style="color:#267F99">dict</span><span style="color:#000000">, InjectedState],</span></span><span class="line"><span style="color:#000000">) -> Command:</span></span><span class="line"><span style="color:#A31515"> """Trims the description of the ZenDesk tickets."""</span></span><span class="line"></span><span class="line"><span style="color:#0000FF"> def</span><span style="color:#795E26"> trim_description</span><span style="color:#000000">(</span><span style="color:#001080">ticket</span><span style="color:#000000">):</span></span><span class="line"><span style="color:#000000"> ticket[</span><span style="color:#A31515">"description"</span><span style="color:#000000">] = (</span></span><span class="line"><span style="color:#000000"> ticket[</span><span style="color:#A31515">"description"</span><span style="color:#000000">][:</span><span style="color:#098658">1000</span><span style="color:#000000">] + </span><span style="color:#A31515">"..."</span></span><span class="line"><span style="color:#AF00DB"> if</span><span style="color:#795E26"> len</span><span style="color:#000000">(ticket[</span><span 
style="color:#A31515">"description"</span><span style="color:#000000">]) > </span><span style="color:#098658">1000</span></span><span class="line"><span style="color:#AF00DB"> else</span><span style="color:#000000"> ticket[</span><span style="color:#A31515">"description"</span><span style="color:#000000">]</span></span><span class="line"><span style="color:#000000"> )</span></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#000000"> ticket</span></span><span class="line"></span><span class="line"><span style="color:#0000FF"> def</span><span style="color:#795E26"> trim_description_list</span><span style="color:#000000">(</span><span style="color:#001080">message</span><span style="color:#000000">):</span></span><span class="line"><span style="color:#AF00DB"> if</span><span style="color:#795E26"> isinstance</span><span style="color:#000000">(message, ToolMessage) </span><span style="color:#0000FF">and</span><span style="color:#000000"> message.name == </span><span style="color:#A31515">"get_tickets"</span><span style="color:#000000">:</span></span><span class="line"><span style="color:#000000"> ticket_data = json.loads(message.content)</span></span><span class="line"><span style="color:#000000"> trimmed_ticket_data = [</span></span><span class="line"><span style="color:#000000"> trim_description(ticket) </span><span style="color:#AF00DB">for</span><span style="color:#000000"> ticket </span><span style="color:#AF00DB">in</span><span style="color:#000000"> ticket_data[</span><span style="color:#A31515">"tickets"</span><span style="color:#000000">]</span></span><span class="line"><span style="color:#000000"> ]</span></span><span class="line"><span style="color:#000000"> message.content = json.dumps(trimmed_ticket_data)</span></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#000000"> message</span></span><span class="line"></span><span class="line"><span style="color:#000000"> trim_messages = 
[trim_description_list(msg) </span><span style="color:#AF00DB">for</span><span style="color:#000000"> msg </span><span style="color:#AF00DB">in</span><span style="color:#000000"> state[</span><span style="color:#A31515">"messages"</span><span style="color:#000000">]]</span></span><span class="line"></span><span class="line"><span style="color:#AF00DB">return</span><span style="color:#000000"> Command(</span></span><span class="line"><span style="color:#001080"> update</span><span style="color:#000000">={</span></span><span class="line"><span style="color:#A31515"> "messages"</span><span style="color:#000000">: [</span></span><span class="line"><span style="color:#000000"> RemoveMessage(</span><span style="color:#001080">id</span><span style="color:#000000">=REMOVE_ALL_MESSAGES),</span></span><span class="line"><span style="color:#000000"> *trim_messages,</span></span><span class="line"><span style="color:#000000"> ToolMessage(</span><span style="color:#A31515">"Trimmed ticket descriptions"</span><span style="color:#000000">, </span><span style="color:#001080">tool_call_id</span><span style="color:#000000">=tool_call_id),</span></span><span class="line"><span style="color:#000000"> ],</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000"> )</span></span><span class="line"></span><span class="line"></span><span class="line"><span style="color:#795E26">@tool</span></span><span class="line"><span style="color:#0000FF">def</span><span style="color:#795E26"> discard_deployments</span><span style="color:#000000">(</span></span><span class="line"><span style="color:#001080"> tool_call_id</span><span style="color:#000000">: Annotated[</span><span style="color:#267F99">str</span><span style="color:#000000">, InjectedToolCallId],</span></span><span class="line"><span style="color:#001080"> state</span><span style="color:#000000">: Annotated[</span><span style="color:#267F99">dict</span><span 
style="color:#000000">, InjectedState],</span></span><span class="line"><span style="color:#000000">) -> Command:</span></span><span class="line"><span style="color:#A31515"> """Discards the list of deployments."""</span></span><span class="line"></span><span class="line"><span style="color:#0000FF"> def</span><span style="color:#795E26"> trim_release</span><span style="color:#000000">(</span><span style="color:#001080">release</span><span style="color:#000000">):</span></span><span class="line"><span style="color:#AF00DB"> if</span><span style="color:#795E26"> isinstance</span><span style="color:#000000">(release, ToolMessage) </span><span style="color:#0000FF">and</span><span style="color:#000000"> release.name == </span><span style="color:#A31515">"list_deployments"</span><span style="color:#000000">:</span></span><span class="line"><span style="color:#000000"> release.name = </span><span style="color:#A31515">"trimmed_list_deployments"</span></span><span class="line"><span style="color:#000000"> release.content = </span><span style="color:#A31515">""</span></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#000000"> release</span></span><span class="line"></span><span class="line"><span style="color:#000000"> trim_messages = [trim_release(msg) </span><span style="color:#AF00DB">for</span><span style="color:#000000"> msg </span><span style="color:#AF00DB">in</span><span style="color:#000000"> state[</span><span style="color:#A31515">"messages"</span><span style="color:#000000">]]</span></span><span class="line"></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#000000"> Command(</span></span><span class="line"><span style="color:#001080"> update</span><span style="color:#000000">={</span></span><span class="line"><span style="color:#A31515"> "messages"</span><span style="color:#000000">: [</span></span><span class="line"><span style="color:#000000"> RemoveMessage(</span><span 
style="color:#001080">id</span><span style="color:#000000">=REMOVE_ALL_MESSAGES),</span></span><span class="line"><span style="color:#000000"> *trim_messages,</span></span><span class="line"><span style="color:#000000"> ToolMessage(</span><span style="color:#A31515">"Discarded list of deployments"</span><span style="color:#000000">, </span><span style="color:#001080">tool_call_id</span><span style="color:#000000">=tool_call_id),</span></span><span class="line"><span style="color:#000000"> ],</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000"> )</span></span><span class="line"></span><span class="line"></span><span class="line"><span style="color:#795E26">@tool</span></span><span class="line"><span style="color:#0000FF">def</span><span style="color:#795E26"> trim_releases_to_version</span><span style="color:#000000">(</span></span><span class="line"><span style="color:#001080"> tool_call_id</span><span style="color:#000000">: Annotated[</span><span style="color:#267F99">str</span><span style="color:#000000">, InjectedToolCallId],</span></span><span class="line"><span style="color:#001080"> state</span><span style="color:#000000">: Annotated[</span><span style="color:#267F99">dict</span><span style="color:#000000">, InjectedState],</span></span><span class="line"><span style="color:#000000">) -> Command:</span></span><span class="line"><span style="color:#A31515"> """Trims the details of Octopus releases to their version."""</span></span><span class="line"></span><span class="line"><span style="color:#0000FF"> def</span><span style="color:#795E26"> trim_release</span><span style="color:#000000">(</span><span style="color:#001080">release</span><span style="color:#000000">):</span></span><span class="line"><span style="color:#AF00DB"> if</span><span style="color:#795E26"> isinstance</span><span style="color:#000000">(release, ToolMessage) </span><span style="color:#0000FF">and</span><span 
style="color:#000000"> release.name == </span><span style="color:#A31515">"get_release_by_id"</span><span style="color:#000000">:</span></span><span class="line"><span style="color:#000000"> release_data = json.loads(release.content)</span></span><span class="line"><span style="color:#000000"> release.name = </span><span style="color:#A31515">"trimmed_release"</span></span><span class="line"><span style="color:#000000"> release.content = release_data.get(</span><span style="color:#A31515">"version"</span><span style="color:#000000">, </span><span style="color:#A31515">"Unknown Version"</span><span style="color:#000000">)</span></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#000000"> release</span></span><span class="line"></span><span class="line"><span style="color:#000000"> trim_messages = [trim_release(msg) </span><span style="color:#AF00DB">for</span><span style="color:#000000"> msg </span><span style="color:#AF00DB">in</span><span style="color:#000000"> state[</span><span style="color:#A31515">"messages"</span><span style="color:#000000">]]</span></span><span class="line"></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#000000"> Command(</span></span><span class="line"><span style="color:#001080"> update</span><span style="color:#000000">={</span></span><span class="line"><span style="color:#A31515"> "messages"</span><span style="color:#000000">: [</span></span><span class="line"><span style="color:#000000"> RemoveMessage(</span><span style="color:#001080">id</span><span style="color:#000000">=REMOVE_ALL_MESSAGES),</span></span><span class="line"><span style="color:#000000"> *trim_messages,</span></span><span class="line"><span style="color:#000000"> ToolMessage(</span><span style="color:#A31515">"Trimmed releases to version"</span><span style="color:#000000">, </span><span style="color:#001080">tool_call_id</span><span style="color:#000000">=tool_call_id),</span></span><span 
class="line"><span style="color:#000000"> ],</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000"> )</span></span><span class="line"></span><span class="line"></span><span class="line"><span style="color:#0000FF">async</span><span style="color:#0000FF"> def</span><span style="color:#795E26"> main</span><span style="color:#000000">():</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#A31515"> The entrypoint to our AI agent.</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#000000"> client = MultiServerMCPClient(</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#A31515"> "octopus"</span><span style="color:#000000">: {</span></span><span class="line"><span style="color:#A31515"> "command"</span><span style="color:#000000">: </span><span style="color:#A31515">"npx"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "args"</span><span style="color:#000000">: [</span></span><span class="line"><span style="color:#A31515"> "-y"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "@octopusdeploy/mcp-server"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "--api-key"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#000000"> os.getenv(</span><span style="color:#A31515">"PROD_OCTOPUS_APIKEY"</span><span style="color:#000000">),</span></span><span class="line"><span style="color:#A31515"> "--server-url"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#000000"> os.getenv(</span><span style="color:#A31515">"PROD_OCTOPUS_URL"</span><span style="color:#000000">),</span></span><span class="line"><span 
style="color:#000000"> ],</span></span><span class="line"><span style="color:#A31515"> "transport"</span><span style="color:#000000">: </span><span style="color:#A31515">"stdio"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#000000"> },</span></span><span class="line"><span style="color:#A31515"> "zendesk"</span><span style="color:#000000">: {</span></span><span class="line"><span style="color:#A31515"> "command"</span><span style="color:#000000">: </span><span style="color:#A31515">"uv"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "args"</span><span style="color:#000000">: [</span></span><span class="line"><span style="color:#A31515"> "--directory"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "/home/matthew/Code/zendesk-mcp-server"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "run"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "zendesk"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#000000"> ],</span></span><span class="line"><span style="color:#A31515"> "transport"</span><span style="color:#000000">: </span><span style="color:#A31515">"stdio"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#000000"> },</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000"> )</span></span><span class="line"></span><span class="line"><span style="color:#008000"> # Use an Azure AI model</span></span><span class="line"><span style="color:#000000"> llm = AzureAIChatCompletionsModel(</span></span><span class="line"><span style="color:#001080"> endpoint</span><span style="color:#000000">=os.getenv(</span><span style="color:#A31515">"AZURE_AI_URL"</span><span 
style="color:#000000">),</span></span><span class="line"><span style="color:#001080"> credential</span><span style="color:#000000">=os.getenv(</span><span style="color:#A31515">"AZURE_AI_APIKEY"</span><span style="color:#000000">),</span></span><span class="line"><span style="color:#001080"> model</span><span style="color:#000000">=</span><span style="color:#A31515">"gpt-5-mini"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#000000"> )</span></span><span class="line"></span><span class="line"><span style="color:#000000"> tools = </span><span style="color:#AF00DB">await</span><span style="color:#000000"> client.get_tools()</span></span><span class="line"><span style="color:#000000"> tools.append(discard_deployments)</span></span><span class="line"><span style="color:#000000"> tools.append(trim_releases_to_version)</span></span><span class="line"><span style="color:#000000"> tools.append(trim_ticket_descriptions)</span></span><span class="line"></span><span class="line"><span style="color:#000000"> agent = create_react_agent(llm, tools)</span></span><span class="line"><span style="color:#000000"> response = </span><span style="color:#AF00DB">await</span><span style="color:#000000"> agent.ainvoke(</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#A31515"> "messages"</span><span style="color:#000000">: remove_line_padding(</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#A31515"> In Octopus, get the last 10 releases deployed to the "Production" environment in the "Octopus Server" space.</span></span><span class="line"><span style="color:#A31515"> Get the releases from the deployments.</span></span><span class="line"><span style="color:#A31515"> Trim the details of Octopus releases to their version.</span></span><span class="line"><span style="color:#A31515"> Discard the list of 
deployments.</span></span><span class="line"><span style="color:#A31515"> In ZenDesk, get the last 100 tickets and their comments.</span></span><span class="line"><span style="color:#A31515"> Trim the description of the ZenDesk tickets.</span></span><span class="line"><span style="color:#A31515"> Create a report summarizing the issues reported by customers in the tickets. </span></span><span class="line"><span style="color:#A31515"> You must only consider tickets that mention the Octopus release versions. </span></span><span class="line"><span style="color:#A31515"> You must only consider support tickets raised by customers. </span></span><span class="line"><span style="color:#A31515"> You must use your best judgment to identify support tickets.</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#000000"> )</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000"> )</span></span><span class="line"><span style="color:#795E26"> print</span><span style="color:#000000">(remove_thinking(response_to_text(response)))</span></span><span class="line"></span><span class="line"></span><span class="line"><span style="color:#000000">asyncio.run(main())</span></span></code></pre><h2 id="alternative-strategies">Alternative strategies</h2><p>LangChain also exposes the <a href="https://docs.langchain.com/oss/python/langchain/agents#pre-model-hook">pre-model hook</a> and <a href="https://docs.langchain.com/oss/python/langchain/agents#post-model-hook">post-model hook</a> to allow you to manipulate the state of the AI agent at various points in the processing of a request. The pre-model hook specifically is designed to support message trimming and summarization as a way to manage context window size.</p><h2 id="conclusion">Conclusion</h2><p>The ability of MCP to define complex, multi-system workflows in natural language is almost magical. 
By hiding the complexity of API calls behind simple prompts, MCP empowers users to automate tasks that would otherwise require significant custom code.</p><p>However, as you work with larger datasets and more complex workflows, you will encounter the limits of LLM context windows, at which point you will need strategies to manage the context window size.</p><p>Fortunately, LangChain exposes a number of advanced features that give you fine-grained control over the agent’s state, which in turn lets you manage the context window size effectively.</p><p>This post provided examples of custom tools that manipulate the agent’s state to trim or discard unnecessary data, letting you work with data more efficiently and scale your prompts across more systems and larger datasets.</p><p>Happy deployments!</p>]]></content> </entry> <entry> <title>Agentic AI with model context protocol (MCP)</title> <link href="https://octopus.com/blog/agentic-ai-with-mcp" /> <id>https://octopus.com/blog/agentic-ai-with-mcp</id> <published>2025-10-01T00:00:00.000Z</published> <updated>2025-10-01T00:00:00.000Z</updated> <summary>Learn how to create a simple AI agent using the Octopus Model Context Protocol (MCP) server to implement an agentic AI system.</summary> <author> <name>Matthew Casperson, Octopus Deploy</name> </author> <content type="html"><![CDATA[<p>If you’ve ever asked a platform like ChatGPT to book your next holiday, order groceries, or schedule a meeting, you’ll understand the potential of large language models (LLMs), and likely have been frustrated by a response like <code>I'm not able to book flights directly</code>.</p><p>By default, LLMs are disconnected from the internet and can’t perform tasks on your behalf. But they feel so tantalizingly close to being your own personal assistant.</p><p>The model context protocol (MCP) is an open standard that enables LLMs to connect to external tools and data sources. 
When an LLM is connected to an MCP server, it gains the ability to perform actions on your behalf, transforming it from a passive information source into an active agent.</p><p>Agentic AI systems go one step further by creating AI agents with specific instructions to perform tasks autonomously, as this quote from <a href="https://www.ibm.com/think/topics/agentic-ai">IBM</a> describes:</p><blockquote><p>Agentic AI is an artificial intelligence system that can accomplish a specific goal with limited supervision. It consists of AI agents—machine learning models that mimic human decision-making to solve problems in real time.</p></blockquote><p>In this post, we’ll explore how to create a simple AI agent connecting the Octopus and GitHub MCP servers to implement an agentic AI system.</p><h2 id="prerequisites">Prerequisites</h2><p>The sample application demonstrated in this post is written in Python. You can download Python from <a href="https://www.python.org/downloads/">python.org</a>.</p><p>We’ll also use <code>uv</code> to manage our virtual environment. 
You can install <code>uv</code> by following the <a href="https://docs.astral.sh/uv/getting-started/installation/">documentation</a>.</p><h2 id="dependencies">Dependencies</h2><p>Our AI agent will use <a href="https://www.langchain.com/">LangChain</a>, a popular framework for building AI applications.</p><p>Save the following dependencies to a <code>requirements.txt</code> file:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="text"><code><span class="line"><span>langchain-mcp-adapters</span></span><span class="line"><span>langchain-ollama</span></span><span class="line"><span>langchain-azure-ai</span></span><span class="line"><span>langgraph</span></span></code></pre><p>Create a virtual environment and install the dependencies:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="bash"><code><span class="line"><span style="color:#795E26">uv</span><span style="color:#A31515"> venv</span></span><span class="line"><span style="color:#008000"># This instruction is provided by the previous command and is specific to your OS</span></span><span class="line"><span style="color:#795E26">source</span><span style="color:#A31515"> .venv/bin/activate</span></span><span class="line"><span style="color:#795E26">uv</span><span style="color:#A31515"> pip</span><span style="color:#A31515"> install</span><span style="color:#0000FF"> -r</span><span style="color:#A31515"> requirements.txt</span></span></code></pre><h2 id="text-processing-functions">Text processing functions</h2><p>A big part of working with LLMs is cleaning and processing text. AI agents sit at the intersection between LLMs consuming and generating natural language and imperative code that requires predictable inputs and outputs. 
In practice, this means AI agents spend a lot of time manipulating strings.</p><p>We start with a function to trim the leading and trailing whitespace from each line in a block of text:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="python"><code><span class="line"><span style="color:#0000FF">def</span><span style="color:#795E26"> remove_line_padding</span><span style="color:#000000">(</span><span style="color:#001080">text</span><span style="color:#000000">):</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#A31515"> Remove leading and trailing whitespace from each line in the text.</span></span><span class="line"><span style="color:#A31515"> :param text: The text to process.</span></span><span class="line"><span style="color:#A31515"> :return: The text with leading and trailing whitespace removed from each line.</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#A31515"> "</span><span style="color:#EE0000">\n</span><span style="color:#A31515">"</span><span style="color:#000000">.join(line.strip() </span><span style="color:#AF00DB">for</span><span style="color:#000000"> line </span><span style="color:#AF00DB">in</span><span style="color:#000000"> text.splitlines() </span><span style="color:#AF00DB">if</span><span style="color:#000000"> line.strip())</span></span></code></pre><p>Next, we have a function to remove <code><think>...</think></code> tags and their content from the text. 
The <code><think></code> tag is an informal convention used by reasoning LLMs to wrap the model’s internal reasoning steps.</p><p>OpenAI describes reasoning models <a href="https://platform.openai.com/docs/guides/reasoning">like this</a>:</p><blockquote><p>Reasoning models think before they answer, producing a long internal chain of thought before responding to the user.</p></blockquote><p>We typically don’t want to display the internal chain of thought in the final output, so we remove it:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="python"><code><span class="line"><span style="color:#0000FF">def</span><span style="color:#795E26"> remove_thinking</span><span style="color:#000000">(</span><span style="color:#001080">text</span><span style="color:#000000">):</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#A31515"> Remove <think>...</think> tags and their content from the text.</span></span><span class="line"><span style="color:#A31515"> :param text: The text to process.</span></span><span class="line"><span style="color:#A31515"> :return: The text with <think>...</think> tags and their content removed.</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#000000"> stripped_text = text.strip()</span></span><span class="line"><span style="color:#AF00DB"> if</span><span style="color:#000000"> stripped_text.startswith(</span><span style="color:#A31515">"<think>"</span><span style="color:#000000">) </span><span style="color:#0000FF">and</span><span style="color:#A31515"> "</think>"</span><span style="color:#0000FF"> in</span><span style="color:#000000"> stripped_text:</span></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#000000"> re.sub(</span><span style="color:#0000FF">r</span><span 
style="color:#811F3F">"<think>.</span><span style="color:#000000">*?</span><span style="color:#811F3F"></think>"</span><span style="color:#000000">, </span><span style="color:#A31515">""</span><span style="color:#000000">, stripped_text, </span><span style="color:#001080">flags</span><span style="color:#000000">=re.DOTALL)</span></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#000000"> stripped_text</span></span></code></pre><h2 id="extracting-responses">Extracting responses</h2><p>LangChain provides a common abstraction over many AI platforms. These platforms will often have their own specific APIs. Consequently, LangChain functions frequently return generic dictionary objects containing the response to API calls. It is our responsibility to access the required data from these dictionaries.</p><p>We’ll be using the Azure AI Foundry service for this demo, and making use of the <a href="https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/chatgpt">Chat completions API</a>.</p><p>The response from the Chat completion API is a dictionary with a <code>messages</code> key containing a list of message objects. 
Here we extract the list of messages and return the content of the last message, or an empty string if no messages are present:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="python"><code><span class="line"><span style="color:#0000FF">def</span><span style="color:#795E26"> response_to_text</span><span style="color:#000000">(</span><span style="color:#001080">response</span><span style="color:#000000">):</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#A31515"> Extract the content from the last message in the response.</span></span><span class="line"><span style="color:#A31515"> :param response: The response dictionary containing messages.</span></span><span class="line"><span style="color:#A31515"> :return: The content of the last message, or an empty string if no messages are present.</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#000000"> messages = response.get(</span><span style="color:#A31515">"messages"</span><span style="color:#000000">, [])</span></span><span class="line"><span style="color:#AF00DB"> if</span><span style="color:#0000FF"> not</span><span style="color:#000000"> messages </span><span style="color:#0000FF">or</span><span style="color:#795E26"> len</span><span style="color:#000000">(messages) == </span><span style="color:#098658">0</span><span style="color:#000000">:</span></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#A31515"> ""</span></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#000000"> messages.pop().content</span></span></code></pre><h2 id="setting-up-mcp-servers">Setting up MCP servers</h2><p>We can now start building the core functionality of our AI agent.</p><p>Our agent will use two MCP servers: the Octopus MCP server to interact with 
an Octopus instance, and the GitHub MCP server to interact with GitHub repositories.</p><p>These MCP servers are represented by the <code>MultiServerMCPClient</code> class, which is a client for connecting to multiple MCP servers:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="python"><code><span class="line"><span style="color:#0000FF">async</span><span style="color:#0000FF"> def</span><span style="color:#795E26"> main</span><span style="color:#000000">():</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#A31515"> The entrypoint to our AI agent.</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#000000"> client = MultiServerMCPClient(</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#A31515"> "octopus"</span><span style="color:#000000">: {</span></span><span class="line"><span style="color:#A31515"> "command"</span><span style="color:#000000">: </span><span style="color:#A31515">"npx"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "args"</span><span style="color:#000000">: [</span></span><span class="line"><span style="color:#A31515"> "-y"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "@octopusdeploy/mcp-server"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "--api-key"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#000000"> os.getenv(</span><span style="color:#A31515">"OCTOPUS_CLI_API_KEY"</span><span style="color:#000000">),</span></span><span class="line"><span style="color:#A31515"> "--server-url"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#000000"> 
os.getenv(</span><span style="color:#A31515">"OCTOPUS_CLI_SERVER"</span><span style="color:#000000">),</span></span><span class="line"><span style="color:#000000"> ],</span></span><span class="line"><span style="color:#A31515"> "transport"</span><span style="color:#000000">: </span><span style="color:#A31515">"stdio"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#000000"> },</span></span><span class="line"><span style="color:#A31515"> "github"</span><span style="color:#000000">: {</span></span><span class="line"><span style="color:#A31515"> "url"</span><span style="color:#000000">: </span><span style="color:#A31515">"https://api.githubcopilot.com/mcp/"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "headers"</span><span style="color:#000000">: {</span><span style="color:#A31515">"Authorization"</span><span style="color:#000000">: </span><span style="color:#0000FF">f</span><span style="color:#A31515">"Bearer </span><span style="color:#0000FF">{</span><span style="color:#000000">os.getenv(</span><span style="color:#A31515">'GITHUB_PAT'</span><span style="color:#000000">)</span><span style="color:#0000FF">}</span><span style="color:#A31515">"</span><span style="color:#000000">},</span></span><span class="line"><span style="color:#A31515"> "transport"</span><span style="color:#000000">: </span><span style="color:#A31515">"streamable_http"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#000000"> },</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000"> )</span></span></code></pre><p>We then create an instance of the <code>AzureAIChatCompletionsModel</code> class to allow us to interact with the Azure AI service. 
This class is responsible for building the Azure AI API requests and processing the responses:</p><div class="hint"><p>LangChain provides a variety of classes to interact with different AI platforms. The ability to swap out one AI platform for another is a key benefit of frameworks like LangChain.</p></div><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="python"><code><span class="line"><span style="color:#008000"> # Use an Azure AI model</span></span><span class="line"><span style="color:#000000"> llm = AzureAIChatCompletionsModel(</span></span><span class="line"><span style="color:#001080"> endpoint</span><span style="color:#000000">=os.getenv(</span><span style="color:#A31515">"AZURE_AI_URL"</span><span style="color:#000000">),</span></span><span class="line"><span style="color:#001080"> credential</span><span style="color:#000000">=os.getenv(</span><span style="color:#A31515">"AZURE_AI_APIKEY"</span><span style="color:#000000">),</span></span><span class="line"><span style="color:#001080"> model</span><span style="color:#000000">=</span><span style="color:#A31515">"gpt-5-mini"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#000000"> )</span></span></code></pre><p>The <code>MultiServerMCPClient</code> provides access to the tools exposed by each MCP server:</p><div class="hint"><p>Tools are typically self-describing functions that an LLM can call. It is possible to <a href="https://python.langchain.com/docs/concepts/tools/">create tools manually</a>. 
However, the benefit of using MCP servers is that they expose tools in a consistent way that any MCP client can consume.</p></div><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="python"><code><span class="line"><span style="color:#000000">tools = </span><span style="color:#AF00DB">await</span><span style="color:#000000"> client.get_tools()</span></span></code></pre><p>The <code>llm</code> (which is the interface to the Azure AI service) and the <code>tools</code> (which are the functions exposed by the MCP servers) can then be used to create a ReAct agent:</p><div class="hint"><p>ReAct agents are described in the <a href="https://arxiv.org/abs/2210.03629">ReAct paper</a> and implement a feedback loop to iteratively reason and act:</p><p><img src="/blog/_astro/react-loop.SSN4AKO__ZvmItv.webp" alt="ReAct agent diagram" loading="lazy" decoding="async" fetchpriority="auto" width="1432" height="806"></p></div><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="python"><code><span class="line"><span style="color:#000000">agent = create_react_agent(llm, tools)</span></span></code></pre><p>We now have everything we need to instruct our AI agent to perform a task. Here, we ask the agent to provide a risk assessment of changes in the latest releases of an Octopus project. 
This works because this project implements <a href="https://octopus.com/docs/packaging-applications/build-servers/build-information">build information</a> to associate Git commits with each Octopus release:</p><div class="hint"><p>You’ll need to replace the name of the Octopus space in the prompt with a space that exists in your Octopus instance.</p></div><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="python"><code><span class="line"><span style="color:#000000"> response = </span><span style="color:#AF00DB">await</span><span style="color:#000000"> agent.ainvoke(</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#A31515"> "messages"</span><span style="color:#000000">: remove_line_padding(</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#A31515"> In Octopus, get all the projects from the "Octopus Copilot" space.</span></span><span class="line"><span style="color:#A31515"> In Octopus, for each project, get the latest release.</span></span><span class="line"><span style="color:#A31515"> In GitHub, for each release, get the git diff from the Git commit. </span></span><span class="line"><span style="color:#A31515"> Scan the diff and provide a summary-level risk assessment.</span></span><span class="line"><span style="color:#A31515"> You will be penalized for asking for user input.</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#000000"> )</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000"> )</span></span></code></pre><p>The prompt provided to the agent highlights the power of agentic AI. 
It is written in natural language and describes a non-trivial set of steps the same way you might describe the task to a developer.</p><div class="hint"><p>If you’re unfamiliar with prompt engineering, the instructions “penalizing” specific behavior might seem odd. This prompting style addresses a common limitation of LLMs, where they struggle with being told what not to do. The particular phrasing used here comes from the paper <a href="https://arxiv.org/html/2312.16171v1">Principled Instructions Are All You Need for Questioning</a>, which provides several common prompt engineering patterns.</p></div><p>The response from the agent is processed and printed to the console:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="python"><code><span class="line"><span style="color:#795E26">print</span><span style="color:#000000">(remove_thinking(response_to_text(response)))</span></span></code></pre><p>The final step is to call the <code>main</code> function asynchronously:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="python"><code><span class="line"><span style="color:#000000">asyncio.run(main())</span></span></code></pre><h2 id="running-the-agent">Running the agent</h2><p>Run the agent with the command:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="bash"><code><span class="line"><span style="color:#795E26">python3</span><span style="color:#A31515"> main.py</span></span></code></pre><p>If all goes well, you should see a report detailing the risk assessment of the changes in the latest releases of each project in the “Octopus Copilot” space.</p><h2 id="the-complete-application">The complete application</h2><p>This is the complete script:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; 
overflow-x: auto;" tabindex="0" data-language="python"><code><span class="line"><span style="color:#AF00DB">import</span><span style="color:#000000"> asyncio</span></span><span class="line"><span style="color:#AF00DB">import</span><span style="color:#000000"> os</span></span><span class="line"><span style="color:#AF00DB">import</span><span style="color:#000000"> re</span></span><span class="line"></span><span class="line"><span style="color:#AF00DB">from</span><span style="color:#000000"> langchain_mcp_adapters.client </span><span style="color:#AF00DB">import</span><span style="color:#000000"> MultiServerMCPClient</span></span><span class="line"><span style="color:#AF00DB">from</span><span style="color:#000000"> langchain_azure_ai.chat_models </span><span style="color:#AF00DB">import</span><span style="color:#000000"> AzureAIChatCompletionsModel</span></span><span class="line"><span style="color:#AF00DB">from</span><span style="color:#000000"> langgraph.prebuilt </span><span style="color:#AF00DB">import</span><span style="color:#000000"> create_react_agent</span></span><span class="line"></span><span class="line"></span><span class="line"><span style="color:#0000FF">def</span><span style="color:#795E26"> remove_line_padding</span><span style="color:#000000">(</span><span style="color:#001080">text</span><span style="color:#000000">):</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#A31515"> Remove leading and trailing whitespace from each line in the text.</span></span><span class="line"><span style="color:#A31515"> :param text: The text to process.</span></span><span class="line"><span style="color:#A31515"> :return: The text with leading and trailing whitespace removed from each line.</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#A31515"> "</span><span 
style="color:#EE0000">\n</span><span style="color:#A31515">"</span><span style="color:#000000">.join(line.strip() </span><span style="color:#AF00DB">for</span><span style="color:#000000"> line </span><span style="color:#AF00DB">in</span><span style="color:#000000"> text.splitlines() </span><span style="color:#AF00DB">if</span><span style="color:#000000"> line.strip())</span></span><span class="line"></span><span class="line"></span><span class="line"><span style="color:#0000FF">def</span><span style="color:#795E26"> remove_thinking</span><span style="color:#000000">(</span><span style="color:#001080">text</span><span style="color:#000000">):</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#A31515"> Remove &lt;think&gt;...&lt;/think&gt; tags and their content from the text.</span></span><span class="line"><span style="color:#A31515"> :param text: The text to process.</span></span><span class="line"><span style="color:#A31515"> :return: The text with &lt;think&gt;...&lt;/think&gt; tags and their content removed.</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#000000"> stripped_text = text.strip()</span></span><span class="line"><span style="color:#AF00DB"> if</span><span style="color:#000000"> stripped_text.startswith(</span><span style="color:#A31515">"&lt;think&gt;"</span><span style="color:#000000">) </span><span style="color:#0000FF">and</span><span style="color:#A31515"> "&lt;/think&gt;"</span><span style="color:#0000FF"> in</span><span style="color:#000000"> stripped_text:</span></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#000000"> re.sub(</span><span style="color:#0000FF">r</span><span style="color:#811F3F">"&lt;think&gt;.</span><span style="color:#000000">*?</span><span style="color:#811F3F">&lt;/think&gt;"</span><span style="color:#000000">, </span><span style="color:#A31515">""</span><span style="color:#000000">, stripped_text, 
</span><span style="color:#001080">flags</span><span style="color:#000000">=re.DOTALL)</span></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#000000"> stripped_text</span></span><span class="line"></span><span class="line"></span><span class="line"><span style="color:#0000FF">def</span><span style="color:#795E26"> response_to_text</span><span style="color:#000000">(</span><span style="color:#001080">response</span><span style="color:#000000">):</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#A31515"> Extract the content from the last message in the response.</span></span><span class="line"><span style="color:#A31515"> :param response: The response dictionary containing messages.</span></span><span class="line"><span style="color:#A31515"> :return: The content of the last message, or an empty string if no messages are present.</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#000000"> messages = response.get(</span><span style="color:#A31515">"messages"</span><span style="color:#000000">, [])</span></span><span class="line"><span style="color:#AF00DB"> if</span><span style="color:#0000FF"> not</span><span style="color:#000000"> messages </span><span style="color:#0000FF">or</span><span style="color:#795E26"> len</span><span style="color:#000000">(messages) == </span><span style="color:#098658">0</span><span style="color:#000000">:</span></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#A31515"> ""</span></span><span class="line"><span style="color:#AF00DB"> return</span><span style="color:#000000"> messages.pop().content</span></span><span class="line"></span><span class="line"></span><span class="line"><span style="color:#0000FF">async</span><span style="color:#0000FF"> def</span><span style="color:#795E26"> main</span><span 
style="color:#000000">():</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#A31515"> The entrypoint to our AI agent.</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#000000"> client = MultiServerMCPClient(</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#A31515"> "octopus"</span><span style="color:#000000">: {</span></span><span class="line"><span style="color:#A31515"> "command"</span><span style="color:#000000">: </span><span style="color:#A31515">"npx"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "args"</span><span style="color:#000000">: [</span></span><span class="line"><span style="color:#A31515"> "-y"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "@octopusdeploy/mcp-server"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "--api-key"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#000000"> os.getenv(</span><span style="color:#A31515">"OCTOPUS_CLI_API_KEY"</span><span style="color:#000000">),</span></span><span class="line"><span style="color:#A31515"> "--server-url"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#000000"> os.getenv(</span><span style="color:#A31515">"OCTOPUS_CLI_SERVER"</span><span style="color:#000000">),</span></span><span class="line"><span style="color:#000000"> ],</span></span><span class="line"><span style="color:#A31515"> "transport"</span><span style="color:#000000">: </span><span style="color:#A31515">"stdio"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#000000"> },</span></span><span class="line"><span style="color:#A31515"> "github"</span><span 
style="color:#000000">: {</span></span><span class="line"><span style="color:#A31515"> "url"</span><span style="color:#000000">: </span><span style="color:#A31515">"https://api.githubcopilot.com/mcp/"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#A31515"> "headers"</span><span style="color:#000000">: {</span><span style="color:#A31515">"Authorization"</span><span style="color:#000000">: </span><span style="color:#0000FF">f</span><span style="color:#A31515">"Bearer </span><span style="color:#0000FF">{</span><span style="color:#000000">os.getenv(</span><span style="color:#A31515">'GITHUB_PAT'</span><span style="color:#000000">)</span><span style="color:#0000FF">}</span><span style="color:#A31515">"</span><span style="color:#000000">},</span></span><span class="line"><span style="color:#A31515"> "transport"</span><span style="color:#000000">: </span><span style="color:#A31515">"streamable_http"</span><span style="color:#000000">,</span></span><span class="line"><span style="color:#000000"> },</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000"> )</span></span><span class="line"></span><span class="line"><span style="color:#008000"> # Use an Azure AI model</span></span><span class="line"><span style="color:#000000"> llm = AzureAIChatCompletionsModel(</span></span><span class="line"><span style="color:#001080"> endpoint</span><span style="color:#000000">=os.getenv(</span><span style="color:#A31515">"AZURE_AI_URL"</span><span style="color:#000000">),</span></span><span class="line"><span style="color:#001080"> credential</span><span style="color:#000000">=os.getenv(</span><span style="color:#A31515">"AZURE_AI_APIKEY"</span><span style="color:#000000">),</span></span><span class="line"><span style="color:#001080"> model</span><span style="color:#000000">=</span><span style="color:#A31515">"gpt-5-mini"</span><span style="color:#000000">,</span></span><span 
class="line"><span style="color:#000000"> )</span></span><span class="line"></span><span class="line"><span style="color:#000000"> tools = </span><span style="color:#AF00DB">await</span><span style="color:#000000"> client.get_tools()</span></span><span class="line"><span style="color:#000000"> agent = create_react_agent(llm, tools)</span></span><span class="line"><span style="color:#000000"> response = </span><span style="color:#AF00DB">await</span><span style="color:#000000"> agent.ainvoke(</span></span><span class="line"><span style="color:#000000"> {</span></span><span class="line"><span style="color:#A31515"> "messages"</span><span style="color:#000000">: remove_line_padding(</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#A31515"> In Octopus, get all the projects from the "Octopus Copilot" space.</span></span><span class="line"><span style="color:#A31515"> In Octopus, for each project, get the latest release.</span></span><span class="line"><span style="color:#A31515"> In GitHub, for each release, get the git diff from the GitHub Commit. 
</span></span><span class="line"><span style="color:#A31515"> Scan the diff and provide a summary-level risk assessment.</span></span><span class="line"><span style="color:#A31515"> You will be penalized for asking for user input.</span></span><span class="line"><span style="color:#A31515"> """</span></span><span class="line"><span style="color:#000000"> )</span></span><span class="line"><span style="color:#000000"> }</span></span><span class="line"><span style="color:#000000"> )</span></span><span class="line"><span style="color:#795E26"> print</span><span style="color:#000000">(remove_thinking(response_to_text(response)))</span></span><span class="line"></span><span class="line"></span><span class="line"><span style="color:#000000">asyncio.run(main())</span></span></code></pre><h2 id="challenges-with-agentic-ai">Challenges with agentic AI</h2><p>Describing complex tasks in natural language is a powerful capability, but it can mask some of the challenges with agentic AI.</p><p>The ability to correctly execute the instructions depends a great deal on the model used. We used the <code>gpt-5-mini</code> model from Azure AI Foundry in this example. I selected this model after some trial and error with different models that failed to select the correct tools to complete the task. For instance, GPT 4.1 continually attempted to load Octopus project details from GitHub. Other models can produce very different results for exactly the same code and prompt, and it is often unclear why.</p><p>The quality of the prompt is also important. LLMs are not universally advanced enough today to correctly interpret all prompts. Prompt engineering is still required to allow AI agents to perform complex tasks.</p><h2 id="next-steps">Next steps</h2><p>This example provides the ability to generate one-off reports. 
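</p><p>Until then, a simple interim option is to re-run the report on a timer. The sketch below is illustrative (the <code>run_periodically</code> helper and its parameters are not part of the script above); it wraps any coroutine, such as the <code>main</code> function defined earlier, and re-runs it at a fixed interval:</p>

```python
import asyncio
from typing import Awaitable, Callable, Optional


async def run_periodically(
    job: Callable[[], Awaitable[None]],
    interval_seconds: float,
    max_runs: Optional[int] = None,
) -> int:
    """Run `job` every `interval_seconds`, stopping after `max_runs` runs if given.

    Returns the number of completed runs.
    """
    runs = 0
    while max_runs is None or runs < max_runs:
        await job()
        runs += 1
        if max_runs is None or runs < max_runs:
            # Sleep between runs; a real deployment would also handle errors
            # and shutdown signals here.
            await asyncio.sleep(interval_seconds)
    return runs


# Hypothetical wiring with the agent above: asyncio.run(run_periodically(main, 3600))
```

<p>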
A more complete solution would run as a long-lived process responding to events such as those generated by <a href="https://octopus.com/docs/administration/managing-infrastructure/subscriptions">Octopus subscriptions</a> to generate reports automatically when new releases are created. You’d also likely post the results to an email MCP server or an MCP server for one of the many chat platforms available today.</p><h2 id="conclusion">Conclusion</h2><p>It is early days for agentic AI, but the potential is clear. Agentic AI may finally deliver on the promise of low/no-code solutions that allow non-developers to automate complex tasks.</p><p>Agentic AI is not without its challenges. You still need a good understanding of the capabilities and limitations of LLMs, and prompt engineering is still required to get the best results.</p><p>However, once you strip away all the boilerplate code to interact with multiple MCP servers, it is possible to orchestrate complex tasks with just a few lines of code and a well-crafted prompt.</p><p>Learn more about the Octopus MCP server in our <a href="https://octopus.com/docs/octopus-ai/mcp">documentation</a>.</p><p>Happy deployments!</p>]]></content> </entry> <entry> <title>The power of patterns: How technology helped us see what matters</title> <link href="https://octopus.com/blog/building-our-jira-journey-in-people" /> <id>https://octopus.com/blog/building-our-jira-journey-in-people</id> <published>2025-10-01T00:00:00.000Z</published> <updated>2025-10-01T00:00:00.000Z</updated> <summary>How we integrated Jira into our people team at Octopus Deploy</summary> <author> <name>Mary Lee, Octopus Deploy</name> </author> <content type="html"><![CDATA[<p>When I joined Octopus in January 2023, the People team looked very different from what it is today. Back then, most of our work centered on ongoing projects and answering employee questions through Slack or email. 
We were like engines keeping the ship moving, focused on making sure employees were supported and cared for.</p><p>The team was still relatively new. Only a few years in, our priority was to build trust and move away from being seen as the stereotypical “HR” group. We were small and lean, and most of the time we were so absorbed in the day-to-day that we rarely paused to think strategically or dive into data.</p><p>Looking back now, it is surprising that we waited so long to implement Jira. IT had already been using it successfully, and although the People team had talked about it for some time, it never became a priority. Ultimately, I took ownership and drove this forward. What started as a small idea became one of the most exciting and rewarding projects I have had the chance to lead.</p><h2 id="from-idea-to-implementation">From idea to implementation</h2><p>When we began, we honestly did not know what to expect. The goal was simple: build it, launch it, and see what happens. What followed far exceeded our expectations.</p><p>We knew we had to design Jira in a way that worked with Octopus’s culture. Slack is our primary communication tool, and most employee inquiries came through Slack messages. Requests from third parties, such as employment verifications, arrived in our inbox. With that in mind, we needed Jira to integrate seamlessly with Slack, so employees could continue asking questions in the same way they always had.</p><p>On the Operations side, we automated our People inbox directly into Jira and set up Slack workflows where adding specific emojis to messages automatically created tickets. This helped us avoid missing requests and improved our responsiveness.</p><p>On the HRBP side, Jira changed the way we work. Previously, we operated in silos, using Google Docs to manage performance cases, reorganizations, and other sensitive processes. With Jira, HRBPs now have a centralized place to document and track cases. 
This makes our work more consistent and allows any team member to step in seamlessly if needed.</p><h2 id="what-changed">What changed?</h2><p>For both Ops and HRBPs, the most transformative part of Jira has been reporting. What did not surprise us at first was the sheer number of tickets coming in. We already knew we were a lean team working across many priorities, but seeing the reality in numbers gave us a new appreciation for the scale of what we handle and gratitude for the team behind it.</p><p>What went beyond the numbers, however, was far more revealing. With the addition of AI-powered reporting, we could move past raw volume and see deeper insights. The data showed us where questions and issues were coming from across departments, roles, and regions, which exposed weak points we had not clearly seen before. Instead of assumptions, we had evidence of where employees needed the most support.</p><p>The AI highlighted recurring themes, surfaced gaps in our processes, and even suggested where our attention should shift. It became clear to us that onboarding, offboarding, and employee recognition required more investment than we had realized. These patterns were not obvious through traditional reporting, but the AI made them visible.</p><p>This combination of real-time data and AI-driven analysis has turned reporting into a strategic tool. It not only helps us prioritize projects with more confidence and precision, it also guides us toward targeted improvements that will have the greatest impact on employees.</p><h2 id="whats-next">What’s next?</h2><p>We are only just beginning. Our next steps include expanding Jira automations, creating new workflows to support our Work from Anywhere policy, and streamlining processes around benefits and other key employee touch points.</p><p>Implementing Jira has been fun and energizing, but more than that, it has been transformative. It has given us structure, clarity, and consistency. 
It has helped us evolve from a team focused on tasks to one that is strategic, data-informed, and scalable.</p><p>Happy deployments!</p>]]></content> </entry> <entry> <title>Launching the Octopus MCP Server</title> <link href="https://octopus.com/blog/launching-octopus-mcp" /> <id>https://octopus.com/blog/launching-octopus-mcp</id> <published>2025-09-29T00:00:00.000Z</published> <updated>2025-09-29T00:00:00.000Z</updated> <summary>The Octopus MCP Server provides your AI assistant with powerful tools that allow it to explore, inspect, and diagnose problems within your Octopus instance, transforming it into your ultimate DevOps wingmate.</summary> <author> <name>Andrew Best, Octopus Deploy</name> </author> <content type="html"><![CDATA[<p>Artificial intelligence, and GenAI technologies in particular, are transforming our technology landscape.</p><p>For Continuous Delivery, AI allows us to solve previously unsolvable problems. Parsing complex log files, diagnosing root-cause failures, and providing intelligent remediation; natural-language exploration of your software landscape, simplifying auditing, compliance, and standardization; agentic workflows providing intelligent glue between your essential software services.</p><p>At Octopus, we are bringing these capabilities to the best Continuous Delivery tool on the market, lowering risk, improving efficiency, and accelerating your software delivery.</p><p>Our most recent AI-powered capability is the <a href="https://github.com/OctopusDeploy/mcp-server">Octopus MCP Server</a>.</p><h2 id="what-is-mcp">What is MCP?</h2><p><a href="https://modelcontextprotocol.io/docs/getting-started/intro">Model Context Protocol (MCP)</a> allows the AI assistants you use in your day-to-day work, like VS Code, Claude, or ChatGPT, to connect to the systems and services you own in a standardized fashion, allowing them to pull information from those systems and services to answer questions and perform tasks.</p><h2 
id="the-octopus-mcp-server">The Octopus MCP Server</h2><p>The Octopus MCP Server provides your AI assistant with powerful tools that allow it to explore deployments, inspect configuration, and diagnose problems within your Octopus instance, transforming it into your ultimate DevOps wingmate. For a list of supported use-cases and sample prompts, see <a href="https://octopus.com/docs/octopus-ai/mcp">our documentation</a>.</p><p>The MCP server architecture ensures that your deployment data remains secure while enabling powerful AI-assisted workflows. All interactions are logged and auditable, maintaining the compliance and governance standards your organization requires.</p><p>The initial release of the Octopus MCP Server is a local MCP server - this means it runs on your local machine, and communicates with your Octopus instance via secure HTTPS. If you are interested in remote MCP, in which the MCP server is embedded within your Octopus instance, please register your interest <a href="https://roadmap.octopus.com/c/228-remote-mcp-server-ai-">on our roadmap</a>.</p><h2 id="breaking-down-devops-silos">Breaking down DevOps silos</h2><p>One of the strengths of MCP is that it allows your AI assistant to perform tasks <em>across</em> your DevOps ecosystem. It can work alongside other MCP servers to accomplish more complex orchestrations across Octopus and your other essential DevOps services.</p><p>With Continuous Delivery, one of the challenges that arises is understanding the continual flow of changes to production, and monitoring to ensure they are happy and healthy. Typically, you need to work across a number of systems to answer questions within this space. 
Let’s explore how MCP supercharges our ability to do this all in one place - our AI client of choice.</p><h3 id="what-has-just-shipped-and-is-it-healthy">What has just shipped, and is it healthy?</h3><p>This example uses Claude Desktop, with Sonnet 4 as the model.</p><blockquote><p>I’d like to know what Octopus changes are just about to be released to our customers. Find the latest untenanted deployment of the Octopus Server project to the Production environment, and then find a commit in GitHub in OctopusDeploy/OctopusDeploy with a tag matching the version number of the deployment, and tell me the commit details so I can understand what is being shipped.</p></blockquote><figure><p><img src="/blog/img/launching-octopus-mcp/mcp-example-1.png" alt="Octopus MCP finding the latest deployed version"></p></figure><p>Claude has used the Octopus MCP server to explore our Octopus instance (of course we use Octopus to ship Octopus!) and find the latest release to our Production environment. It then digs into the release to find its version number.</p><p>Next it uses the GitHub MCP server to find a commit tagged with that version. Traceability is essential in Continuous Delivery - you have to know what changes in your source control correspond to what released versions of your software.</p><figure><p><img src="/blog/img/launching-octopus-mcp/mcp-example-2.png" alt="GitHub MCP finding the details of a deployed change"></p></figure><p>It finds the tagged commit, digs into the details, and gives us a summary of the change - in this case it looks like a small bugfix to ensure backwards compatibility for API clients. Great!</p><p>Now, has the deployment gone smoothly? 
The deployment itself is green, but we know that is only the start of the story - let’s provide another prompt in the same chat to see how it is behaving out in the real world.</p><blockquote><p>Can you see any errors in Honeycomb regarding this release?</p></blockquote><p>We use <a href="https://www.honeycomb.io/">Honeycomb</a> as a key part of our observability stack at Octopus, helping us monitor our Production environment to identify, diagnose, and remediate problems as they come up.</p><figure><p><img src="/blog/img/launching-octopus-mcp/mcp-example-3.png" alt="Honeycomb MCP checking for errors after a recent deployment"></p></figure><p>LLMs can be quite persistent in their pursuit of an answer. In this case, Claude keeps looking for more specific examples of errors that might indicate a serialization problem, as it has understood that this is what the change was intended to fix. Very clever!</p><h2 id="getting-started">Getting started</h2><p>Begin your AI-powered DevOps journey today with the Octopus MCP Server. 
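</p><p>To give a concrete sense of the setup, here is what registering the Octopus MCP Server can look like in an MCP-capable client. The exact file and schema depend on your client (this sketch follows the Claude Desktop <code>claude_desktop_config.json</code> shape), and the API key and server URL are placeholders for your own values:</p>

```json
{
  "mcpServers": {
    "octopus": {
      "command": "npx",
      "args": [
        "-y",
        "@octopusdeploy/mcp-server",
        "--api-key",
        "YOUR_OCTOPUS_API_KEY",
        "--server-url",
        "https://your-instance.octopus.app"
      ]
    }
  }
}
```

<p>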
As an early access participant, you’ll help shape these features while gaining access to capabilities that will transform how teams deploy software.</p><p><a href="https://github.com/OctopusDeploy/mcp-server?tab=readme-ov-file#-installation">Get Started with the Octopus MCP Server</a></p>]]></content> </entry> <entry> <title>Adoption strategies for internal platforms</title> <link href="https://octopus.com/blog/adoption-strategies-internal-platforms" /> <id>https://octopus.com/blog/adoption-strategies-internal-platforms</id> <published>2025-09-23T00:00:00.000Z</published> <updated>2025-09-23T00:00:00.000Z</updated> <summary>Our Platform Engineering pulse report looked at many aspects of real-world practice, but one interesting study area was the themes for common platform features.</summary> <author> <name>Matthew Allford, Octopus Deploy</name> </author> <content type="html"><![CDATA[<p>Our upcoming report examines how organizations <a href="https://octopus.com/publications/platform-engineering-pulse">adopt and succeed with Platform Engineering</a>. We’ll be launching a broader survey soon to dive deeper into patterns and practices of Platform Engineering, but one of the areas that surfaced was the platform adoption strategy: whether companies make the platform optional or mandatory, and perceived success factors for each approach.</p><p>The Platform Engineering community has been pretty clear on this one. Treat your platform like a product, let it compete for internal market share, and generally, don’t mandate adoption. After all, if your platform genuinely provides a path to success and compliance with the organization’s standards, developers should naturally gravitate toward it, right?</p><p>This philosophy draws heavily from product management thinking. Just as external products succeed by solving customer problems better than alternatives, internal platforms should win developer adoption by genuinely improving their daily experience. 
The theory suggests that when platforms must compete for users, they naturally evolve to be more user-focused, innovative, and valuable.</p><h2 id="mandatory-adoption">Mandatory adoption</h2><p>Let’s be honest about why mandatory platforms are so appealing. They work. At least from an organizational standpoint. When leadership mandates platform adoption, it’s because the platform initiative is created and funded to solve specific business problems. Our research shows that the top 3 reasons for adopting Platform Engineering are to:</p><ul><li>Improve efficiency</li><li>Standardize processes</li><li>Increase developer productivity</li></ul><p>From a business perspective, these are legitimate wins.</p><p>The budget numbers back up the mandatory approach too. Our research found that mandatory platforms report almost 2.5x higher confidence that their funding will remain secure over the next five years.</p><h2 id="optional-adoption">Optional adoption</h2><p>Optional platforms can face real challenges. They struggle with budget uncertainty (2x more likely to worry about funding cuts) compared to mandatory platforms, but when platforms must compete for users, something important happens. They hyper-focus on solving real developer problems because frustrated users will simply choose alternatives if they don’t.</p><p>Organizations often undermine their own platform adoption by inconsistently enforcing standards. When teams can bypass organizational requirements for security, compliance, and operational practices, they see little value in adopting an internal developer platform. These teams happily create their own build pipelines, deploy infrastructure manually through portals, and sidestep approval processes. After all, they’re avoiding both the platform and the problems it was designed to solve.</p><p>The dynamic shifts dramatically when organizations enforce standards uniformly. 
When teams must meet security, compliance, and operational requirements regardless of whether they use the platform, the platform transforms from an obstacle into the obvious solution. Rather than struggling to meet complex requirements independently, teams naturally gravitate toward a platform that makes compliance straightforward and built-in.</p><p>Users choose these platforms because they genuinely make work easier while allowing them to take actions without triple-checking whether they comply with the organization’s rules.</p><h2 id="the-perception-gap-between-builders-and-users">The perception gap between builders and users</h2><p>This is where our research gets very interesting. We found a significant disconnect between how platform teams (producers) measure success, what consumers experience, and how the platform’s sponsors view the platform’s success.</p><figure><p><img src="/blog/img/adoption-strategies-internal-platforms/success-radar-chart.png" alt="Radar chart displaying satisfaction levels split between mandatory and optional platforms from both producer and consumer perspectives."></p></figure><p>For mandatory platforms, 87.5% of producers said the platform met many or all of its goals; however, only 50% of those consuming the platform agreed.</p><p>For optional platforms, 50% of producers rate them highly successful, saying they meet many or all goals, compared to 67% of consumers rating the platform as successful.</p><p>Producers think mandatory platforms are more successful, whereas consumers are more likely to rate an optional platform as successful. Mandatory platforms may suffer from producer bias, where producers overestimate the value of the platform they are building. That’s why moving beyond assumptions and measuring multiple aspects of the platform is crucial to tracking success.</p><p>The executives and budget holders who sponsor platform initiatives overwhelmingly find that only some goals have been met, regardless of adoption strategy. 
This suggests that even when platforms achieve technical success, they may not deliver the business outcomes that justify their investment. This disconnect can stem from unclear or misaligned platform goals. When platform teams aren’t certain what business problems they should solve, they default to technical achievements rather than user or business outcomes.</p><h2 id="measuring-platform-success">Measuring platform success</h2><p>Whether the platform is mandatory or optional, you’ll hear people discuss platform adoption metrics, usually to help support how successful the platform is. However, focusing purely on adoption tells you almost nothing about whether the platform is successful. Mandatory platforms should, by definition, have high adoption rates. For optional platforms, low adoption rates could be one signal to indicate a problem with the platform that needs further investigation, but it doesn’t tell the whole story. Either way, looking at adoption alone won’t tell you whether the business is getting value, if users are satisfied, or if the platform is meeting the goals it set out to achieve.</p><p>In fact, one metric alone won’t tell you much about the goals and success of a platform, which is why it’s crucial to capture metrics that tie back to the platform’s goals. Platform sponsors want to know whether the money and time being invested are making an impact. Capturing and presenting vanity metrics like “90% adoption rate” or “5 nines of uptime” might make platform teams feel good, but they don’t answer a fundamental question. Is this platform helping our developers be more productive and our business more successful?</p><p>We found that organizations that measure more dimensions of the platform were more likely to be successful. In fact, there’s a clear correlation between the number of metrics tracked and platform success rates. 
Organizations measuring 6 or more different aspects of their platform reported the highest success rates.</p><figure><p><img src="/blog/img/adoption-strategies-internal-platforms/high-performer-metric-count.png" alt="Bar chart showing platform success rates increasing from 33% with one metric to 75% with six or more metrics tracked."></p></figure><p>We found that 23% of organizations don’t use metrics and instead rely on intuitive or subjective assessments. In their <a href="https://platformengineering.org/reports/state-of-platform-engineering-vol-3">state of Platform Engineering report</a>, Humanitec found 44% of organizations don’t measure any metrics. Not measuring anything creates a <a href="https://octopus.com/blog/how-organizations-measure-platform-engineering#breaking-the-success-illusion">zero-metric success illusion</a> where platform teams report high success rates simply because they’re not collecting data that might contradict their assumptions. Among organizations that do measure, many rely on technical metrics that make platform teams feel good but completely miss user satisfaction and business impact.</p><p>The solution is user-centric measurement. You can use the <a href="https://octopus.com/devops/metrics/monk-metrics/">MONK metrics</a> to measure your Platform Engineering initiative, which provides a balanced approach of benchmarkable metrics with user satisfaction that can help track the platform’s progress against business objectives.</p><h2 id="when-optional-tooling-naturally-wins">When optional tooling naturally wins</h2><p>Many years ago, I worked at a University, and the team I was on managed, among hundreds of other applications, the Microsoft Exchange environment for thousands of staff members. 
Creating shared resources like distribution lists, rooms, or shared mailboxes required several teams to be involved.</p><p>The ticket went from the support desk to team 1, to team 2, back to team 1, back to the service desk, and finally back to the user. It took an even longer path if information was missing or something needed clarification. A request from an end user for a new resource would take days, sometimes weeks. Many rules were required regarding resource names, email address formats, mail routing configurations, permissions, etc.</p><p>It was a frustrating experience for everyone involved, and we got dozens of these requests every month, sometimes more. An independent consultant surveyed the business, and at the time, the email system was the #1 critical tool being used day-to-day. We had buy-in from our managers to improve the process because it distracted systems teams from strategic work, and resulted in frustrated customers who were waiting days for their requests to be completed.</p><p>We decided to write a tool to automate the process. Initially, this was purely selfish, as we wanted to eliminate the manual work and reduce errors when we received the ticket. However, as we developed it, we realized we could give this tool directly to the service desk staff, who received the initial requests from the end users. We engaged with the service desk team, informed them of what we were planning, and listened to their requirements if they were to use the tool.</p><p>We didn’t mandate the use of the tool. The service desk team could still assign the ticket to us, and we would use the tool to create the resource. However, we found that the service desk team used the tool 100% of the time, and only issues with the tool itself were escalated to our team.</p><p>Everyone was happier. The service desk could provision resources immediately instead of waiting days for our team to get to the ticket. 
They had confidence that they were doing it correctly because the tool guided them through the process and validated all the inputs. Users were delighted because their requests were fulfilled instantly. We were happier as our team could focus on more strategic work instead of routine provisioning tasks, which were viewed as tedious administrative work.</p><p>I get it - this is a bespoke example, and it’s not an entire platform. But that’s kind of the point. Don’t try to build the whole platform on day 1. Find genuine pain points in existing workflows today, and start there, not with abstract goals like “improve security” that are challenging for platform creators to interpret, implement, and measure. My aim with this story isn’t to tell you about specific technology or even automation. It’s about what happens when you build something that genuinely improves the experience for everyone involved. When platforms deliver real value, adoption becomes natural rather than forced.</p><h2 id="building-platforms-worth-choosing">Building platforms worth choosing</h2><p>While our initial research shows that mandatory platforms achieve specific organizational metrics and enjoy better budget security, it also reveals why the Platform Engineering community advocates so strongly for optionality. The most successful platforms we studied had something in common: they would thrive even if they weren’t mandated. Consumers chose them not because they had to, but because they were the best tool for getting work done quickly and aligning with organizational standards.</p><p>So before you decide on your adoption strategy, ask yourself this question: If this platform weren’t required, would consumers still choose it? If the answer is no, you might have a platform problem, not an adoption problem. 
And that’s precisely the problem that Platform Engineering, done well, is designed to solve.</p><p>Happy deployments!</p>]]></content> </entry> <entry> <title>The top 5 features of internal developer platforms</title> <link href="https://octopus.com/blog/top-5-features-of-internal-developer-platforms" /> <id>https://octopus.com/blog/top-5-features-of-internal-developer-platforms</id> <published>2025-09-16T00:00:00.000Z</published> <updated>2025-09-16T00:00:00.000Z</updated> <summary>Our Platform Engineering pulse report looked at many aspects of real-world practice, but one interesting study area was the themes for common platform features.</summary> <author> <name>Steve Fenton, Octopus Deploy</name> </author> <content type="html"><![CDATA[<p>Our <a href="https://octopus.com/publications/platform-engineering-pulse">Platform Engineering Pulse report</a> looked at many aspects of real-world practice, but one interesting study area was the themes for common platform features. We’re launching a broader study to investigate this further, but here are the top 5 features the platform teams we asked had added to internal developer platforms.</p><ol><li>Build automation</li><li>Deployment automation</li><li>Infrastructure automation</li><li>Test automation</li><li>Monitoring and observability</li></ol><h2 id="1-build-automation">1. Build automation</h2><p>Build automation should be triggered each time you change the code and provide you with fast feedback on your changes. If you’re practicing Continuous Delivery, you’ll commit changes to the main branch every few hours, with the feedback from the build process, including fast-running tests, arriving in around 5 minutes.</p><p>The build process includes compilation, linking, and bundling. It usually also includes a suite of fast-running tests that validate the functionality and fitness of the application. 
In practice, this is typically a sequence of scripts or commands that run each tool in the build chain, but some build processes are more complex.</p><p>The benefit of bringing build automation into your internal developer platform is the opportunity to harmonize pipelines and bring them all up to a common standard for fast and secure builds. For example, instead of each team having to invent a way to sign their package so it can be verified later in the deployment pipeline, platform teams can make this a standard part of the build.</p><p>Development teams rarely touch their build process compared to the focus a platform team might put into builds. That means development teams are less familiar with the tools and may not be aware of features that could improve build performance or security.</p><h2 id="2-deployment-automation">2. Deployment automation</h2><p>Manual deployments are one of the primary reasons for bad paths to production. Even simple manual deployments can go drastically wrong, and when they do, the organization responds by introducing process steps that slow down the flow of change. This is well-meaning, but we’ve learned that reducing deployment frequency results in larger batches, which carry more risk and cause bigger problems.</p><p>Deployment automation is the heart of change in your software delivery. When deployments are reliable, repeatable, and low-effort, you can teach the organization to do them more often. Automated deployments help you build trust and reduce heavyweight processes like change approval by committee, which invariably clogs the ability to deploy small batches frequently and safely.</p><p>The steps, process templates, project templates, and policies related to deployments are a perfect fit for Platform Engineering. 
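</p><p>As a small illustration of such a step, a provenance check might verify a package’s digest against the value recorded at build time before allowing the deployment to continue. This is a minimal sketch, not Octopus functionality; the file path and digest are placeholders:</p>

```python
import hashlib

def verify_package(path: str, expected_sha256: str) -> bool:
    """Return True only if the package at `path` matches the digest recorded at build time."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large packages don't need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```

<p>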
The platform team can ensure crucial steps are included in all deployments, like verifying the provenance of the packages being deployed and ensuring the packages and deployment process have progressed through appropriate environments before reaching production.</p><p>When the organization needs to level up security across all deployments, a platform team using the right deployment tools can quickly bring the whole organization into compliance with the new standards.</p><h2 id="3-infrastructure-automation">3. Infrastructure automation</h2><p>There was a time when ClickOps was the default for infrastructure changes. Modern software teams have rejected the idea of managing their infrastructure by clicking around in administration consoles, cloud portals, and command lines, as the result is that no two instances of anything are the same. These differences, though sometimes subtle, can cause production incidents that are difficult to identify and resolve.</p><p>By treating infrastructure as code, the definitions can be stored in version control and follow the same review and promotion patterns developers apply to application code. When infrastructure is created and managed this way, each instance is an identical copy, and it’s easier to scale out, recover from a disaster, or create ephemeral environments for testing.</p><p>Platform teams can create standard setups for teams to use that are aligned to the technology choices in their golden pathways. Reducing the breadth of choice makes it easier to secure the infrastructure, control costs, and apply patches and updates.</p><h2 id="4-test-automation">4. Test automation</h2><p>Sometimes it seems there are automated tests everywhere, from unit tests that run continuously in the background as a developer makes changes, to tests in the build process, to end-to-end tests that validate functionality and check the fitness of the application’s performance and security. 
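</p><p>At the fast-running end of that spectrum, a unit test is just a small function asserting one behavior. A hedged sketch (the function under test is hypothetical):</p>

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount to a price."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Checks like these run in milliseconds, so they can give feedback on every build.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(0.0, 50) == 0.0

test_apply_discount()
```

<p>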
The reason for their prevalence is that testing really works.</p><p>Without test automation, your team would need to perform the same checks manually, and when you don’t do it, your customers do, which is incredibly frustrating.</p><p>While platform teams don’t typically write tests (the research shows the most effective automated tests are written by the developer creating the feature), they can provide the tools that make it easy to define tests and have them run throughout the software delivery process. Platform teams may also provide easy ways to consume test services, making it easy to spin up end-to-end tests or performance tests without managing infrastructure.</p><p>To shorten feedback loops, all types of testing should be performed continuously and ideally represented by fast, reliable test automation suites. You should cover functional, security, and performance tests within your deployment pipeline.</p><p>You should consider how test data is managed as part of your test automation strategy. The ability to set up data in a known state will help you make tests less flaky. Test automation lets developers know early if their change causes a fault, reduces team burnout, and increases software stability.</p><h2 id="5-monitoring-and-observability">5. Monitoring and observability</h2><p>While test automation covers a suite of expected scenarios, monitoring and observability help you expand your view to the entirety of your real-world software use. 
Monitoring implementations tend to start with resource usage metrics, but mature into measuring software from the customer and business perspective.</p><p>The ability to see information-rich logs can help you understand how faults occur so you can design a more robust system.</p><h2 id="other-popular-features">Other popular features</h2><p>Though they didn’t make it into the top 5, documentation, security scanning, one-click setup for new projects, artifact management, and secrets management are common features of internal developer platforms.</p><p>The features are useful on their own, but when combined, they amplify their effects on overall software delivery and operational performance. Platform teams who provide a seamless toolchain from commit through to production and on to day 2 operations will find they have lifted a massive burden from developers.</p><figure><p><img src="/blog/img/top-5-features-of-internal-developer-platforms/top-10-platform-features.png" alt="The relative popularity of the top 10 features"></p></figure><h2 id="what-high-performers-do-differently">What high performers do differently</h2><p>The high-performing organizations used the amplification effect to provide a complete toolchain for deployment pipelines. 
Platforms that provided this set of functionality had more impact on the organization and its goals for Platform Engineering.</p><ul><li>Builds</li><li>Test automation</li><li>Artifact management</li><li>Deployment automation</li><li>Infrastructure automation</li><li>Monitoring and observability</li></ul><p>By creating a strong pipeline with equal consideration for long-term sustainability of the platform, you can create an offering that helps developers today and continues to help them as the organization navigates changing requirements, regulations, and competition.</p><p>We also looked at organizations with low performance; they were often missing 3 key features:</p><ul><li>Test automation</li><li>Artifact management</li><li>Monitoring and observability</li></ul><p>If your platform is missing any of these, you may be missing an opportunity to amplify the benefits of platform adoption.</p><h2 id="platform-success">Platform success</h2><p>Throughout our study, we used each organization’s individual definition of success for its platform. There are many different motivators for funding a platform initiative, and the ultimate test of a platform is whether it meets the specific goals it was founded to achieve.</p><p>You can use the <a href="https://octopus.com/devops/metrics/monk-metrics/">MONK metrics</a> to measure your Platform Engineering initiative. 
They balance benchmarkable metrics with contextual measures that track the platform’s progress against the organization’s goals.</p><p>Despite the many different ways platforms are measured, the presence of a complete feature set increased the chances of success.</p><p>Happy deployments!</p>]]></content> </entry> <entry> <title>Introducing Kubernetes Live Object Status</title> <link href="https://octopus.com/blog/kubernetes-live-object-status" /> <id>https://octopus.com/blog/kubernetes-live-object-status</id> <published>2025-04-16T00:00:00.000Z</published> <updated>2025-09-15T00:00:00.000Z</updated> <summary>The Octopus Kubernetes monitor is the next major expansion of capabilities of the Octopus Kubernetes agent.</summary> <author> <name>Nick Josevski, Octopus Deploy</name> </author> <content type="html"><![CDATA[<p>Kubernetes is rapidly becoming the dominant platform for hosting and running applications.</p><p>At Octopus, we want to provide the best experience for deploying and monitoring your applications on Kubernetes.</p><p>To make your deployments to Kubernetes simpler, faster, and safer, Octopus has a deployment target called the Kubernetes agent. The Kubernetes agent is a small, lightweight application you install into your Kubernetes cluster.</p><p>Kubernetes Live Object Status—our Kubernetes monitor—is the next major expansion of this deployment agent’s capabilities.</p><h2 id="post-deployment-monitoring">Post-deployment monitoring</h2><p>Monitoring is an important part of the deployment process and is even more important for Kubernetes. It gives you confidence that your applications are running as expected.</p><p>When troubleshooting an unhealthy app, you often need to use a combination of tools and login credentials to figure out what’s going wrong. That can be quite fiddly, especially with Kubernetes. Your Octopus Deploy instance already has these credentials and awareness of your applications, similar to runbooks. 
We’re aiming to make Octopus the first port of call as you and your team continuously deliver software.</p><p>We roll up the combined status for the objects in a given release, per environment, into a single live status.</p><p><img src="/blog/img/kubernetes-live-object-status/klos-project-view.png" alt="Project view with live status"><em>Project view with live status</em></p><h3 id="detailed-resource-inspection">Detailed resource inspection</h3><p>Dive into the details of each Kubernetes resource to understand how your deployment has been configured and monitor your application itself.</p><p>Kubernetes Live Object Status gives a quick view of all the Kubernetes resources included in your application. You can then dig into each resource to view its manifests, events, and application logs.</p><p><img src="/blog/img/kubernetes-live-object-status/live-status-drawer-manifest.png" alt="Kubernetes resource manifest"><em>Kubernetes resource manifest</em></p><h2 id="if-youre-already-using-the-kubernetes-agent">If you’re already using the Kubernetes agent</h2><p>If you already use the Kubernetes agent, your upgrade path will be simple.</p><h3 id="upgrading-your-agents-to-the-version-containing-the-monitor">Upgrading your agents to the version containing the monitor</h3><p>We’re working on a one-click upgrade process you can access in Octopus Deploy.</p><p>If you can’t wait until then, you can upgrade existing Kubernetes agents by running a Helm command on your cluster. <a href="https://octopus.com/docs/kubernetes/live-object-status/installation#upgrading-an-existing-kubernetes-agent">See our documentation for all the details</a>.</p><h2 id="new-to-using-octopus-for-kubernetes-deployments">New to using Octopus for Kubernetes deployments?</h2><p>After you install the agent, it registers itself with Octopus Server as a new deployment target. 
This lets you deploy your applications and manifests into that cluster, without the need for workers, external credentials, or custom tooling. All new installations of the agent will have the monitor enabled.</p><h3 id="installing-the-agent">Installing the agent</h3><p>The Kubernetes agent gets packaged and installed via a Helm chart. This makes managing the agent very simple and makes automated installation easy.</p><p>The Kubernetes monitoring component comes along for the ride. <a href="https://octopus.com/docs/kubernetes/live-object-status/installation">See our docs for detailed instructions</a>.</p><p><img src="/blog/img/kubernetes-live-object-status/kubernetes-agent-wizard-config.png" alt="Kubernetes agent wizard configuration options"><em>Kubernetes agent configuration options</em></p><h2 id="new-to-octopus-deploy-entirely">New to Octopus Deploy entirely?</h2><p>How exciting! Welcome to scalable, simple, and safe Kubernetes CD with Octopus.</p><p>Octopus is one user-friendly tool for developers to deploy, verify, and troubleshoot their apps. Platform engineers use this powerful tool to fully automate Continuous Delivery, manage configuration templates, and implement compliance, security, and auditing best practices.</p><p>We empower your teams to spend less time managing and troubleshooting Kubernetes deployments and more time shipping new features to improve your software.</p><p>Octopus models environments out-of-the-box and reduces the need for custom scripting. You define your deployment process once and reuse it for all your environments. 
You can go to production confidently as your process has already worked in other environments.</p><p>If you’re interested in trying it out, <a href="https://octopus.com/free-signup">get started with a free account</a>.</p><h3 id="getting-started-with-the-agent-and-monitor">Getting started with the agent and monitor</h3><p>The Octopus Kubernetes agent targets are a mechanism for executing Kubernetes steps and monitoring application health from inside the target Kubernetes cluster, rather than via an external API connection.</p><p>Like the Octopus Tentacle, the Kubernetes agent is a small, lightweight application that’s installed into the target Kubernetes cluster.</p><p>You install the Kubernetes agent using Helm via the octopusdeploy/kubernetes-agent chart. For the complete details, see our docs about <a href="https://octopus.com/docs/kubernetes/targets/kubernetes-agent#installing-the-kubernetes-agent">installing the Kubernetes agent</a>.</p><h3 id="when-can-i-use-it">When can I use it?</h3><div class="info"><p>Kubernetes Live Object Status is now generally available and recommended for production use for Octopus Cloud and self-hosted customers running Octopus Server 2025.3 or later.</p><p>Support for Octopus Server running in high availability clusters is not yet available, but will be coming in the next self-hosted release.</p></div><p>The Kubernetes agent is available now as an Early Access Preview (EAP) in Octopus Cloud! If you don’t see the feature available, please reach out and we can fast-track your cloud instance getting this release.</p><p>Remember this is an opt-in upgrade for existing Octopus agents installed on your cluster(s). 
<a href="https://octopus.com/docs/kubernetes/live-object-status/installation#upgrading-an-existing-kubernetes-agent">See this documentation page for all the details</a>.</p><p><img src="/blog/img/kubernetes-live-object-status/kubernetes-agent-wizard-config.png" alt="Kubernetes agent as deployment targets"><em>Kubernetes agent as deployment targets</em></p><h2 id="how-we-built-kubernetes-live-object-status">How we built Kubernetes Live Object Status</h2><p>To facilitate a potentially large flow of new data coming to Octopus Server, a separate and non-disruptive web host runs alongside the main host. This isolation level gives us confidence that this is an additive feature and if there are performance complications, they’ll get isolated and managed with minimal impact on Octopus Server’s regular operations.</p><p>The cluster-based monitoring capability uses two values to identify the incoming request:</p><ul><li>The client certificate thumbprint</li><li>An installation ID in the request headers</li></ul><p>Octopus Server uses a long-lived bearer token as a shared secret for authentication. The token gets generated when the monitoring capability installs in the cluster and registers with Octopus Server. This token is rotatable by customers and only valid for use on the gRPC endpoint.</p><p>This allowed us to build gRPC services to handle the data flowing from the monitoring agent in the Kubernetes clusters. <a href="https://grpc.io/">gRPC</a> is a modern open-source high-performance remote procedure call (RPC) framework. This is the first time we’re using gRPC as part of an Octopus feature.</p><p>In the cluster, alongside the Octopus Kubernetes agent, we have this new component that’s responsible for the monitoring aspect. 
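</p><p>As a rough illustration of that identification scheme (not the actual implementation; all names and values here are hypothetical), a server-side check might look like:</p>

```python
import hmac

# Hypothetical registration record; real values are created when the
# monitoring capability installs in the cluster and registers with Octopus Server.
REGISTERED_AGENTS = {
    "demo-cert-thumbprint": {"installation_id": "inst-42", "token": "long-lived-shared-secret"},
}

def identify_request(thumbprint: str, installation_id: str, bearer_token: str) -> bool:
    """Identify the caller by client certificate thumbprint and installation ID,
    then authenticate the long-lived bearer token using a constant-time comparison."""
    record = REGISTERED_AGENTS.get(thumbprint)
    if record is None or record["installation_id"] != installation_id:
        return False
    return hmac.compare_digest(record["token"], bearer_token)
```

<p>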
This component sits in the cluster and monitors the deployed resources, pumping relevant live-status data back out over gRPC to Octopus Deploy.</p><p>As we also run Octopus Deploy in Kubernetes for our Octopus Cloud customers, we have a new nginx-based ingress configuration to help with partitioning and scalability. To find out more, have a look at <a href="https://www.youtube.com/watch?v=DH7YDySEPHU">how we use Kubernetes for Octopus Cloud</a>.</p><h3 id="written-in-go">Written in Go</h3><p>This is the first large-scale feature our team has built in <a href="https://go.dev/">Go</a> at Octopus. This has given us access to a large set of great libraries built for Kubernetes. Examples include Helm packages and the Argo GitOps engine. Our team got an expertise uplift from the Codefresh engineers who are now part of Octopus.</p><p>The GitOps engine is a flexible library with enough configuration and extension points for us to save very specific information on a per-resource basis. This helps us get the right information out of the cluster and back to Octopus. Go is also the de facto programming language for Kubernetes.</p><p>We’re exploring options to open-source parts of our implementation. Stay tuned for when that’s all decided, as we’ll have a follow-up blog post. The likely first step will be making the source available for inspection. This is part of offering more transparency into the tools we’re asking customers to run in the security context of their clusters.</p><h2 id="whats-coming-next">What’s coming next</h2><p>Today’s release is the EAP. The list below represents capabilities we think are worth adding next (though it’s not the complete list). 
If you have thoughts and opinions, please reach out to us in the comments section below or on our <a href="https://octopus.com/slack">community Slack</a>.</p><ul><li>Terraform-based setup</li><li>Support Kubernetes API targets</li><li>Octopus HA (multi-node server) support</li><li>Custom health checks</li><li>Orphan and drift detection</li></ul><h3 id="this-looks-cool-but-what-if-i-dont-deploy-to-kubernetes">This looks cool, but what if I don’t deploy to Kubernetes?</h3><p>Currently, there are no plans to extend this beyond Kubernetes deployments. Please let us know where and why you’d like to use this monitoring capability.</p><h2 id="let-us-know-your-thoughts">Let us know your thoughts</h2><p>We’re excited to see how you use this monitoring feature. Please let us know in the comments section below or on our <a href="https://octopus.com/slack">community Slack</a> what new opportunities this opens up for your application delivery objectives.</p><p>Happy deployments!</p>]]></content> </entry> <entry> <title>Troubleshooting common Octopus Deploy issues</title> <link href="https://octopus.com/blog/troubleshooting-common-octopus-deploy-issues" /> <id>https://octopus.com/blog/troubleshooting-common-octopus-deploy-issues</id> <published>2025-09-11T00:00:00.000Z</published> <updated>2025-09-11T00:00:00.000Z</updated> <summary>Exploring common issues in Octopus Deploy and ways to resolve them.</summary> <author> <name>Donny Bell, Octopus Deploy</name> </author> <content type="html"><![CDATA[<p>Octopus Deploy provides a powerful, flexible platform for automating deployments and runbook execution. However, there are times when you may encounter challenges that require troubleshooting. 
In this post, we’ll walk through some of the most common issues that users may run into and provide guidance on how to resolve them.</p><h2 id="tentacle-communication-issues">Tentacle communication issues</h2><p>Deployment targets and workers running the Octopus Tentacle agent are key to Octopus infrastructure. If Tentacles cannot communicate with your Octopus Server, deployments and runbooks will fail. Some of the issues we commonly see when setting up a new Tentacle agent include: firewall restrictions, network connectivity issues, SSL offloading, and misconfigured certificates.</p><p>Our <a href="https://octopus.com/docs/infrastructure/deployment-targets/tentacle/troubleshooting-tentacles">Troubleshooting Tentacles documentation</a> is a comprehensive guide for troubleshooting Tentacle communication issues. It covers:</p><ul><li>Verifying that Tentacle services are running</li><li>Checking firewall rules</li><li>Ensuring correct thumbprints are configured</li><li>Debugging connectivity with Octopus diagnostic tools</li></ul><h2 id="calamari-issues-and-antivirus-exclusions">Calamari issues and antivirus exclusions</h2><p>If you ever run into an error message in your logs that includes:</p><pre class="astro-code light-plus" style="background-color:#FFFFFF;color:#000000; overflow-x: auto;" tabindex="0" data-language="text"><code><span class="line"><span>Bootstrapper did not return the bootstrapper service message</span></span></code></pre><p>This normally indicates that antivirus or other security software is interfering with an Octopus task (such as a deployment or runbook).</p><p>Octopus tasks are powered by <a href="https://octopus.com/docs/octopus-rest-api/calamari">Calamari</a>, a lightweight deployment bootstrapper invoked for each deployment or runbook step. It’s automatically installed and updated as needed in the <em>Tools</em> folder of the <em>Tentacle home directory</em>. 
Additionally, steps for a given task are processed in a temporary folder inside the Work folder, also residing in the Tentacle home directory.</p><p>Sometimes, antivirus or endpoint protection software can lock or quarantine files in these folders, causing deployments to fail. To prevent this, we recommend working with your security team to add exclusions as necessary for these directories. For additional information, please review our <a href="https://octopus.com/docs/security/hardening-octopus#configure-malware-protection">Hardening Octopus documentation</a>.</p><h2 id="polling-tentacles-over-port-443-https">Polling tentacles over port 443 (HTTPS)</h2><p>In some environments, firewall policies can make it difficult or impossible to open additional ports. <a href="https://octopus.com/docs/infrastructure/deployment-targets/tentacle/polling-tentacles-over-port-443">Octopus supports configuring Polling Tentacles over port 443</a>, which allows communication through a port that is typically already allowed in enterprise networks.</p><p>This option simplifies network configuration and can reduce the setup burden in restrictive environments. It also gives Octopus instances that communicate with other organizations or network environments a path for Tentacle communication that may otherwise not be possible.</p><h2 id="variable-snapshots-in-projects-and-runbooks">Variable snapshots in Projects and Runbooks</h2><p>Octopus variables are a seemingly simple but common point of confusion that we often help our users with. Octopus leverages <a href="https://octopus.com/docs/releases">project releases</a> and <a href="https://octopus.com/docs/runbooks/runbook-publishing">runbook snapshots</a> to preserve an immutable set of information to make deployments and runbooks repeatable and predictable.</p><p>You can view the variable values associated with a release by selecting the [Show Snapshot] option in the Variable Snapshot section of a release or runbook. 
This can be a helpful step for confirming the variable values for a given release or runbook.</p><figure><p><img src="/blog/img/troubleshooting-common-octopus-deploy-issues/light-image1.png" alt="Octopus Deploy UI showing a project release and the steps to update the variables for that release"></p></figure><p><a href="https://octopus.com/docs/projects/variables">Project variables and associated library variable set variables</a> are captured in a snapshot when a release or runbook snapshot is created. In order for variable updates to take effect, you must also do the following in an associated project or runbook:</p><p>For projects:</p><ul><li>Create a new release so that the variable snapshot updates, or</li><li>Update an existing release’s variable snapshot</li></ul><p>For runbooks:</p><ul><li>Create and publish a new runbook snapshot</li></ul><div class="hint"><p>The exception to the above is changes to Tenant variables.</p><p>From our <a href="https://octopus.com/docs/tenants/tenant-variables#tenant-variables-and-snapshots">Tenant Variables documentation</a>:</p><blockquote><p><em>[…] we don’t take a snapshot of tenant variables. This enables you to add new tenants at any time and deploy to them without creating a new release. This means any changes you make to tenant-variables will take immediate effect.</em></p></blockquote></div><h2 id="debugging-variables-with-variable-logging">Debugging variables with variable logging</h2><p>When deployments or runbooks don’t behave as expected, variable issues are a common culprit. 
Octopus provides the ability to debug variables by <a href="https://octopus.com/docs/support/how-to-turn-on-variable-logging-and-export-the-task-log">enabling variable logging and viewing the raw task log</a>.</p><p>By turning on variable logging, you can:</p><ul><li>Inspect the evaluated values of your variables</li><li>Verify scoping and precedence rules</li><li>Export raw task logs for detailed review</li><li>Save significant troubleshooting time when debugging complex variable configurations</li></ul><p>Alternatively, you may now enable Debug Mode for Octopus deployments and runbooks. For project deployments, this option is available on the “deploy” screen:</p><figure><p><img src="/blog/img/troubleshooting-common-octopus-deploy-issues/light-image2.png" alt="Octopus Deploy UI showing a project release and the steps to enable or disable debug mode"></p></figure><p>When running a runbook, you must click the <strong><code>Show advanced</code></strong> button to reveal Debug mode:</p><figure><p><img src="/blog/img/troubleshooting-common-octopus-deploy-issues/light-image3.png" alt="Octopus Deploy UI showing a runbook snapshot and the steps to enable or disable debug mode"></p></figure><h2 id="resources-for-custom-api-scripts">Resources for custom API scripts</h2><p><a href="https://octopus.com/docs/octopus-rest-api">Octopus Deploy features a powerful REST API</a>. Many Octopus users extend their automation by writing custom scripts that interact with Octopus programmatically. You can find <a href="https://octopus.com/docs/octopus-rest-api/examples">API examples in our documentation</a>. 
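</p><p>To give a flavor of such a script, here’s a minimal sketch that lists project names over the REST API using only the standard library. The server URL and API key are placeholders you’d replace with your own:</p>

```python
import json
import urllib.request

OCTOPUS_URL = "https://your-octopus.example.com"  # placeholder instance URL
API_KEY = "API-XXXXXXXXXXXXXXXX"                  # placeholder; generate a key under your user profile

def build_request(path: str) -> urllib.request.Request:
    """Build an authenticated request; Octopus reads the key from the X-Octopus-ApiKey header."""
    return urllib.request.Request(
        f"{OCTOPUS_URL}/api{path}",
        headers={"X-Octopus-ApiKey": API_KEY},
    )

def list_project_names() -> list:
    """Fetch all projects and return their names."""
    with urllib.request.urlopen(build_request("/projects/all")) as response:
        return [project["Name"] for project in json.load(response)]
```

<p>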
We also offer a <a href="https://github.com/OctopusDeploy/OctopusDeploy-Api/tree/master/REST">public GitHub repository</a> with many scripts that may fit your needs as written or serve as a baseline you can iterate on and customize.</p><p>If you can’t find what you need or would like additional inspiration, our <a href="https://octopus.com/slack">Octopus Community Slack channel</a> is a great place to interact with other Octopus users and Octopus employees who can help!</p><h2 id="conclusion">Conclusion</h2><p>Octopus Deploy is a powerful deployment tool that can handle many complex, large-scale scenarios. If you need additional help, <a href="https://octopus.com/support">contact Octopus Support</a>.</p>]]></content> </entry> <entry> <title>Your IDP needs DDD</title> <link href="https://octopus.com/blog/your-idp-needs-ddd" /> <id>https://octopus.com/blog/your-idp-needs-ddd</id> <published>2025-09-09T00:00:00.000Z</published> <updated>2025-09-09T00:00:00.000Z</updated> <summary>As Platform Engineering grows into a movement at scale, we need to revisit the past and apply some lessons from domain-driven design to our internal developer platforms (IDPs).</summary> <author> <name>Steve Fenton, Octopus Deploy</name> </author> <content type="html"><![CDATA[<p>It has been more than two decades since Eric Evans published his book on domain-driven design (DDD). The idea was to create software modeled on the business domain, using the same language and mental models people used outside of the software team.</p><p>We don’t talk about domain-driven design much these days. But as George Santayana said: “Those who cannot remember the past are condemned to repeat it.”</p><p>As Platform Engineering grows into a movement at scale, we need to revisit the past and apply some lessons from domain-driven design to our internal developer platforms (IDPs).
Otherwise, we continually step on rakes to learn why they shouldn’t be left on the lawn.</p><p>There’s a host of interesting interconnected ideas in domain-driven design, but one that resonates with Platform Engineering is the concept of a core domain.</p><h2 id="what-core-domains-are">What core domains are</h2><p>When you build software, there are several areas where your innovation, opinions, and solutions create unique value for your organization. You also need many things that don’t add much value, but your software isn’t viable without them.</p><p>Let’s use a pizza restaurant as an example. If you sell pizza, you want to make it easy for customers to choose what they want to eat and have it delivered. To complete the process, you need to look up their address and take a payment.</p><p>Core domains are the areas where you want to do something different that will give you a competitive edge. For your pizza company, that might be how you present the menu, collect customizations, and offer deals and rewards.</p><p>Non-core domains, also called generic domains, are areas where innovation and differentiation make little difference, or where doing things differently may even be undesirable. Customers expect that looking up their address and paying will work like elsewhere. They don’t want you to be innovative here, as it makes your software harder to use.</p><p>So, a core domain is something unique or special to your organization. It’s essential to your business’s existence, and it’s where you should invest the most. It’s something you want to do so well that it’s hard for your competitors to copy.</p><h2 id="the-problem-of-generic-domains">The problem of generic domains</h2><p>When you spend time on generic domains, you direct time, attention, energy, and investment away from the areas that impact your organization most. Generic domains have limited value because they don’t benefit from doing something different or unique.
The pizza company will never create such an excellent payment flow that you’d choose their offering over a competitor who offers better customization.</p><p>Generic domains can be just as complex as your core domains, which means they can consume large amounts of investment. Suppose there’s a commercial provider of an offering in your generic domain space. In that case, they’ll be treating it as a core domain and innovating the space, which puts additional pressure on you to invest to avoid falling behind user expectations.</p><p>Organizations that avoid the generic domain trap can outpace their competitors as they spend more time working on features that will make them stand out.</p><h2 id="how-to-tame-generic-domains">How to tame generic domains</h2><p>There’s an easy way to avoid the generic domain trap. Domain-driven design provides a pattern for managing them, which recognizes the asymmetry in the value gained by investment in core domains versus generic domains.</p><p>Instead of reducing costs on paper, the goal is to minimize the real cost of working on generic domains: Lost value.</p><figure><p><img src="/blog/img/your-idp-needs-ddd/taming-generic-domains.png" alt="Domain-driven design prefers to buy off the shelf, then falls back to isolation, outsourcing, and minimalism"></p></figure><p>You should work through this list from the top and choose the earliest exit available.</p><ol><li><strong>Off-the-shelf</strong>: Look for existing software or services that address the generic domain. In particular, look to use:<ol><li>Commercial products and software-as-a-service offerings where the provider treats it as a core domain. 
Their innovation and support will ensure the generic domain doesn’t become an anchor dragging you back.</li><li>Open source tools that are robust and maintained, and where the overheads of adopting and updating them are low.</li></ol></li><li><strong>Isolation</strong>: Where you have to build custom code for a generic domain, encapsulate and isolate it. Placing it behind a well-defined interface minimizes the impact on your core domain and lets you switch it out if an off-the-shelf solution emerges later.</li><li><strong>Outsourcing</strong>: While outsourcing your core domain can cause problems, outsourcing generic domains helps control the cost and distraction of the work. You can define the interface and have an outsourced team focus on the implementation details.</li><li><strong>Minimalism</strong>: When no other option is available, create a simple minimalist solution that meets the immediate need. Don’t over-engineer the generic domain or add features you don’t need. You can be reluctant to iterate the solution and keep your eyes and ears open for when someone creates a software product or service you can use to replace it.</li></ol><h2 id="analyzing-build-versus-buy">Analyzing build versus buy</h2><p>Platform Teams who want to create the most significant force multiplier for the developers they serve need to protect their focus on fitting the tools to the organization. To do that, they need to identify and eliminate areas where their skills are wasted.</p><p>A crucial part of this optimization process is performing a solid build versus buy analysis, which should factor in the initial development cost, ongoing maintenance and support costs, and the opportunity cost of diverting resources away from the core domains.</p><p>Returning to the pizza restaurant, offloading the address lookup to a vendor that commonly provides this feature will mean users are familiar with how it works. 
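The isolation pattern above can be sketched in a few lines. Everything here is illustrative — a hypothetical address-lookup interface for the pizza example, not any real vendor's SDK:

```python
# Sketch of the "isolation" pattern: the generic domain (address lookup)
# sits behind a small interface, so an in-house stopgap can later be
# swapped for an off-the-shelf provider without touching the core domain.
# All names are hypothetical, invented for this illustration.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Address:
    line1: str
    city: str
    postcode: str


class AddressLookup(Protocol):
    """The well-defined interface the core domain depends on."""
    def find(self, postcode: str) -> list[Address]: ...


class MinimalInHouseLookup:
    """Stopgap implementation: a hand-maintained table for the delivery area."""
    def __init__(self, known: dict[str, list[Address]]):
        self._known = known

    def find(self, postcode: str) -> list[Address]:
        return self._known.get(postcode, [])


def checkout_suggestions(lookup: AddressLookup, postcode: str) -> list[str]:
    """Core-domain code: depends only on the AddressLookup interface."""
    return [f"{a.line1}, {a.city}" for a in lookup.find(postcode)]
```

When a commercial provider later becomes worth adopting, only a new class implementing `AddressLookup` is needed; `checkout_suggestions` and the rest of the core domain stay untouched.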
The vendor will dedicate more attention to improving the user experience of their tool, and when the vendor introduces innovations, they appear commonly enough that users accept them.</p><p>Similarly, you’d want to offload payments to a fast and secure payment provider, so that checkout works like other sites and keeps pace with developments like 3D Secure, tokenization, card security codes, multi-factor authentication, and biometrics. These industry innovations would have forced the developers to revisit this generic domain many times just to keep pace with the baseline.</p><h2 id="why-this-is-crucial-for-platform-engineering">Why this is crucial for Platform Engineering</h2><p>Domain-driven design (DDD) tells us to optimize our development efforts where they will have the most impact on the organization’s success. You shouldn’t over-invest in non-core domains, as they have limited business value and drain resources. Building custom solutions for non-core domains brings unnecessary complexity and maintenance burden, and for what? To build something that you could have bought or outsourced.</p><p>The software industry is about to re-learn the lessons that led to the discovery of domain-driven design. Industry-wide, we are dedicating thousands of developers to building the same thing. Not a simple minimalist solution to fill a gap left by commercial products, software-as-a-service, or open source software, but a million giant platforms that are out of date before they’ve been pushed to production.</p><p>Internal developer platforms sink unless platform teams can shift weight down to underlying tools. By transferring ballast to commercial products or open source tools, the platform team can get back to agility and create simple, minimal solutions that handle real gaps in toolchains caused by truly bespoke needs.</p><p>And it’s these very needs that are missed with behemoth platforms.
An organization that needed to go an extra two miles on security may adopt Platform Engineering to tailor a truly robust solution to their security needs. As that platform grows and accumulates additional custom features, the focus on security is lost, and the investment is wasted building and maintaining code that doesn’t solve a unique need for the organization.</p><p>But it’s not all doom and gloom for Platform Engineers. Commercial and open source tools can rescue them from this inevitability by providing features that make it easy to shift the weight down and keep the platform light enough to float.</p><h2 id="float-on-our-platform-hub">Float on our Platform Hub</h2><p>That’s where Platform Hub comes in. By adding features platform teams need, like process templates, project templates, and policies, platform teams can transfer the effort down to Octopus and lighten their platform by removing thousands of lines of bespoke templating code.</p><p>Platform Teams can get back to focusing on the unique needs that make their platform vital to their organization. They will benefit from our innovative mechanisms for template management and policies, which go well beyond the attack of the template clones and the synchronization conflicts that platform teams report with their bespoke solutions.</p><p>Happy deployments!</p>]]></content> </entry> <entry> <title>Focus on your end users when creating AI workloads</title> <link href="https://octopus.com/blog/focus-on-end-users-for-ai" /> <id>https://octopus.com/blog/focus-on-end-users-for-ai</id> <published>2025-09-04T00:00:00.000Z</published> <updated>2025-09-04T00:00:00.000Z</updated> <summary>Why it is important to focus on helping end users above all else when creating AI workloads.</summary> <author> <name>Bob Walker, Octopus Deploy</name> </author> <content type="html"><![CDATA[<p>Recently, I attended a conference targeted at CIOs, CTOs, and VPs of Technology. 
As expected, there were many sessions on AI and how it can help companies be more efficient. The example given was the well-known use of AI in the hiring process: using AI as a gatekeeper to quickly weed out unqualified candidates. “Your human resources people won’t have to wade through so many CVs and phone screens!”</p><p>That use case improves the efficiency of human resources or your people team. But that efficiency comes at the cost of the end users, the people you are trying to hire. <em>Everyone hates</em> how AI is used in hiring processes today. Phrases like “dystopian” and “Orwellian” are common. In this article, I’ll discuss why focusing on both your AI feature’s beneficiary users and its end users is essential.</p><h2 id="beneficiary-user-vs-end-user">Beneficiary User vs. End User</h2><p>A beneficiary user is a person who benefits from leveraging AI. The end user is the person who will use an AI feature to accomplish a specific task.</p><p>Returning to the hiring process, the beneficiary user is responsible for wading through CVs and performing the initial phone screen. The end user is the person submitting their CV. The person in charge of going through CVs benefits from AI by offloading the repetitive work of screening unqualified candidates. Imagine a job posting for a senior .NET developer, but 30% of CVs submitted only include project manager experience. You might think I’m exaggerating, but you’d be surprised. As a former hiring manager who had to wade through CVs, I was shocked by how many people were “CV Bombing”: applying for as many positions as possible.</p><p>Looking at Octopus Deploy, the beneficiary of our AI Assistant is the platform engineer. The end user is the developer who uses the assistant to accomplish a particular task. For example, you can ask the Octopus AI Assistant why a deployment or runbook run failed.
The AI Assistant will look at the failure message, and using our knowledge base, our docs, and the web, will come up with a reason why the failure occurred and suggestions on how to fix it. Assuming the suggestion is correct, the developer can quickly self-serve a solution without involving the platform engineer. The platform engineer benefits because they can focus on high-value tasks instead of helping debug a specific deployment failure. If the platform engineer didn’t know the answer, they’d go through our docs or do a Google search.</p><p>Now that we understand the two kinds of users, let’s examine what happens when a person is both the beneficiary and the end user.</p><h2 id="learning-the-wrong-lessons-from-the-success-of-chatgpt">Learning the wrong lessons from the success of ChatGPT</h2><p>ChatGPT and similar tools are unique because their users are both the beneficiary and the end user.</p><p>One of many benefits of ChatGPT is that it is an evolution of search engines. Before ChatGPT, you did a Google search, which returned a list of results. The search engine ranked the results for you. They had complex algorithms to find the best results based on their internal ranking system. A cottage SEO (Search Engine Optimization) industry sprang up to chase higher rankings. ChatGPT changed that by providing you with answers curated from the content of many websites.</p><p>For common questions, with many sources agreeing on the same answer, the results between Google and ChatGPT are close. ChatGPT is not infallible; once, it insisted that Omaha, Nebraska, was 29 nautical miles from Chicago, Illinois. Google can be more accurate, but that is a result of maturity. They’ve had 25 years to improve and iterate their search results algorithm.</p><p>ChatGPT is popular because of the interface. It is very similar to the Google Search box. The results are where they differ. ChatGPT collates information and generates an answer that is easy to read.
In addition, Google Searches are very transactional: search, get a result, move on with your day. With ChatGPT, the sessions are interactive. You can ask additional questions, and ChatGPT remembers the entire conversation.</p><p>I’m only focused on ChatGPT’s question/answer aspect. I know it can do so much more, including generating text and images, composing songs, and more.</p><p>Unfortunately, companies seem insistent on learning the wrong lessons when analyzing popular trends. They think: “People like typing prompts and getting answers or content back. Let’s do that for [insert use case here]!”</p><h2 id="an-awful-user-experience-and-its-impact">An awful user experience and its impact</h2><p>That wrong lesson has its roots in the graphic adventure games of the 1980s and early 1990s.</p><p>My first computer game was <a href="https://en.wikipedia.org/wiki/Space_Quest_III">Space Quest III</a> from Sierra. Like other computer games of that era, it had me typing commands to get the on-screen character to act. There was no help guide or tutorial. I had to figure it out. My brother and I spent <em>weeks</em> trying to escape the first area. We had to find the magic set of commands to execute in a specific sequence in specific areas.</p><p>Last year, I started the multi-month process of changing banks from a regional to a national bank. The national bank offered a high-yield savings account, while the regional bank didn’t. I had to call the national bank a few times. They have followed the latest trend in phone support. Dial the number, and a human-sounding voice asks you what you need help with. Too often, the response is “I’m sorry, I didn’t get that” or “I didn’t understand.” I needed to know the magic phrase to get help. There was no clear escape hatch to get to an operator.</p><p>Their online AI help agent was no better. The AI help agent was trained on their public documents. If the answer wasn’t in the documents, it couldn’t help me.
Often, it referred me back to their support line, creating an endless cycle of frustration.</p><p>That experience was so bad that I went back to the regional bank. They proudly promote that you’ll talk to a real person when calling for help. I would rather lose thousands of dollars over many years than deal with the national bank’s awful AI-based help system.</p><p>I’m not the only one who hates talking to AI chatbots for support. The Commonwealth Bank of Australia (CBA) <a href="https://www.abc.net.au/news/2025-08-21/cba-backtracks-on-ai-job-cuts-as-chatbot-lifts-call-volumes/105679492">recently reversed its decision</a> to eliminate jobs after introducing an AI-powered help system due to poor customer experience.</p><h2 id="augmenting-the-end-user-experience">Augmenting the end user experience</h2><p>The problem is that, just like humans, AI makes mistakes. Without appropriate settings, it will insist that it is correct. Where humans and AI differ is that AI is “book smart” but not “street smart,” while people can be both. Humans use a combination of experience and acquired knowledge to make decisions; they learn and evolve. AI, by contrast, needs to be retrained. The best analogy came from Neil deGrasse Tyson in a <a href="https://www.youtube.com/watch?v=BYizgB2FcAQ">recent interview with Hasan Minhaj</a>: think of AI like Albert Einstein locked in a box. The box is a sensory deprivation tank; he knows nothing of the outside world. Someone asks him random questions, and he responds with his current knowledge. He has no context beyond the knowledge he acquired before going into the box.</p><p>As a result, AI struggles with complex decisions. It doesn’t do well when something is outside the expected parameters. In a recent study from <a href="https://arxiv.org/pdf/2412.14161">Carnegie Mellon University and Duke University</a>, AI agents were correct only 30 to 35 percent of the time on multi-step tasks.
And the results depended on the model used, with GPT-4o achieving an 8.6% success rate. In a <a href="https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf">recent study</a> by Apple, many popular Large Reasoning Models (LRMs) couldn’t handle puzzles (Tower of Hanoi, Checker Jumping, Block World, and River Crossing) once the number of pieces increased beyond simple examples. Today’s AI still has to undergo many more evolutions to become similar to <a href="https://en.wikipedia.org/wiki/J.A.R.V.I.S.">Tony Stark’s Jarvis</a> in the <abbr title="Marvel Cinematic Universe">MCU</abbr>.</p><p>I’m not against using AI. Far from it. However, it’s essential to understand its limitations when designing an end-user experience.</p><p>We’ve been very methodical in finding the proper use cases for our AI Assistant. We looked at how AI could augment the user experience. That means the Octopus AI Assistant <strong>is not</strong> intended to replace the current end-user interface. That would result in a sub-par experience, the opposite of augmentation.</p><p>The challenge we wanted to solve was surfacing the correct information for the users at the right time. We wanted to let the user ask AI for help and not annoy them with unwanted pop-ups or suggestions. We didn’t want to create Clippy 2.0 in the product.</p><p>Knowing that, our four use cases for the AI Assistant are:</p><ol><li><strong>Deployment Failure Analyzer</strong>: Read the logs of a failed deployment and offer suggestions to fix the issue.</li><li><strong>Tier-0 Support</strong>: Provide answers to end-users for common Octopus-related questions.
For example, “summarize this deployment process” or “what’s a project?”</li><li><strong>Best Practices Analyzer</strong>: Using Octopus Deploy’s strong opinions, review the user’s instance to find areas for improvement.</li><li><strong>Prompt-Based Project Creation</strong>: Using templates provided by Octopus Deploy, create a new project to deploy to specific deployment targets.</li></ol><p>Interestingly, you don’t need AI for the first three items. I can take a deployment failure, do a Google search, and likely produce similar results. Or, I can use our Octopus linting tool, <a href="https://octopus.com/blog/octolint-best-practices">Octolint</a>, for best practices. AI shortcuts all of that by collating the information and surfacing it to the user. It’s enabling self-service for the end user.</p><p>Just as important, if the AI Assistant can’t help, users can still ask their DevOps or Platform Engineers for help.</p><p>That is very different from using AI in hiring or AI-based help agents. They are replacement end-user interfaces. They don’t augment the user experience. Instead, they act as pseudo-gatekeepers in front of hiring managers and support teams. They only focus on reducing the load for the beneficiary users, most likely as a way for companies to cut costs or keep demand for additional headcount down. Unless you know someone at the hiring company or the magic phrase for AI Agent-based help, there are no alternatives.</p><p>But end users hate that experience.
I believe that is one of the main reasons why <a href="https://www.ibm.com/thought-leadership/institute-business-value/en-us/c-suite-study/ceo">IBM found</a> that only 25% of AI initiatives have delivered the expected ROI over the past few years.</p><h2 id="considerations-for-the-end-user-experience">Considerations for the end user experience</h2><p>When designing the <a href="https://octopus.com/use-case/ai-assistant">Octopus AI assistant</a>, we started with multiple questions about augmenting the end-user experience. We didn’t want to “sprinkle AI” into the product and claim we had an AI strategy.</p><ol><li>What problem is the AI feature attempting to solve for the end user?</li><li>What is the fallback when the AI feature encounters an unknown use case?</li><li>What is an acceptable level of accuracy for the AI feature?</li><li>If the response is wrong, what is the escalation process for the end user?</li><li>How will the functionality be discovered?</li></ol><p>The answers for the deployment failure functionality of the AI Assistant are:</p><ol><li>Often, failures result from an incorrect configuration, a transient error, a bug in the script, a permissions issue, or some other common problem. In many cases, it is outside the direct control of Octopus. Surface the information to the user to enable them to self-serve the fix and decrease the recovery time.</li><li>Provide a generic answer and encourage the user to contact Octopus Support or their internal experts.</li><li>Reasonable accuracy is expected. Various conditions outside the control of Octopus Deploy can cause errors. Provide multiple suggestions using publicly available documentation. If none work, encourage the user to escalate to a human.</li><li>If the response doesn’t help, provide a link to Octopus Support or a way to contact their internal experts.
In either case, they will escalate to a human.</li><li>When navigating to a failed deployment or runbook run, the Octopus AI Assistant will provide a suggestion that the user can click on to get the answer.</li></ol><p>The focus has been “How can we take what we have and make it better?”, not “How can we ensure that Platform or DevOps engineers are never bothered again?”</p><h2 id="conclusion">Conclusion</h2><p>When an AI feature has a beneficiary user and end user, focus on providing a fantastic experience for the end user. Augment the end-user experience. But assume that at some point the AI will be incorrect (just as a person can be), and offer a clear escalation path. Despite the many advances in AI, experienced people can handle complex scenarios much better. When the end user isn’t considered, and the only focus is “improving the bottom line,” it creates an inferior replacement for an existing experience. End users will only put up with so much before they decide to change.</p>]]></content> </entry></feed>