Congratulations!

This is a valid RSS feed.

Recommendations

This feed is valid, but interoperability with the widest range of feed readers could be improved by implementing the following recommendations.

Source: https://www.oreilly.com/radar/feed/index.xml

  1. <?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
  2. xmlns:content="http://purl.org/rss/1.0/modules/content/"
  3. xmlns:media="http://search.yahoo.com/mrss/"
  4. xmlns:wfw="http://wellformedweb.org/CommentAPI/"
  5. xmlns:dc="http://purl.org/dc/elements/1.1/"
  6. xmlns:atom="http://www.w3.org/2005/Atom"
  7. xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
  8. xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
  9. xmlns:custom="https://www.oreilly.com/rss/custom"
  10.  
  11. >
  12.  
  13. <channel>
  14. <title>Radar</title>
  15. <atom:link href="https://www.oreilly.com/radar/feed/" rel="self" type="application/rss+xml" />
  16. <link>https://www.oreilly.com/radar</link>
  17. <description>Now, next, and beyond: Tracking need-to-know trends at the intersection of business and technology</description>
  18. <lastBuildDate>Thu, 06 Nov 2025 16:04:54 +0000</lastBuildDate>
  19. <language>en-US</language>
  20. <sy:updatePeriod>
  21. hourly </sy:updatePeriod>
  22. <sy:updateFrequency>
  23. 1 </sy:updateFrequency>
  24. <generator>https://wordpress.org/?v=6.8.3</generator>
  25.  
  26. <image>
  27. <url>https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/04/cropped-favicon_512x512-160x160.png</url>
  28. <title>Radar</title>
  29. <link>https://www.oreilly.com/radar</link>
  30. <width>32</width>
  31. <height>32</height>
  32. </image>
  33. <item>
  34. <title>Data Engineering in the Age of AI</title>
  35. <link>https://www.oreilly.com/radar/data-engineering-in-the-age-of-ai/</link>
  36. <comments>https://www.oreilly.com/radar/data-engineering-in-the-age-of-ai/#respond</comments>
  37. <pubDate>Thu, 06 Nov 2025 12:15:48 +0000</pubDate>
  38. <dc:creator><![CDATA[Andy Kwan]]></dc:creator>
  39. <category><![CDATA[AI & ML]]></category>
  40. <category><![CDATA[Commentary]]></category>
  41.  
  42. <guid isPermaLink="false">https://www.oreilly.com/radar/?p=17662</guid>
  43.  
  44. <media:content
  45. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/11/Data-engineer-updating-robot.jpg"
  46. medium="image"
  47. type="image/jpeg"
  48. width="2304"
  49. height="1792"
  50. />
  51.  
  52. <media:thumbnail
  53. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/11/Data-engineer-updating-robot-160x160.jpg"
  54. width="160"
  55. height="160"
  56. />
  57. <description><![CDATA[Much like the introduction of the personal computer, the internet, and the iPhone into the public sphere, recent developments in the AI space, from generative AI to agentic AI, have fundamentally changed the way people live and work. Since ChatGPT’s release in late 2022, it’s reached a threshold of 700 million users per week, approximately [&#8230;]]]></description>
  58. <content:encoded><![CDATA[
  59. <p>Much like the introduction of the personal computer, the internet, and the iPhone into the public sphere, recent developments in the AI space, from generative AI to agentic AI, have fundamentally changed the way people live and work. Since ChatGPT’s release in late 2022, it’s reached a threshold of 700 million users per week, <a href="https://cdn.openai.com/pdf/a253471f-8260-40c6-a2cc-aa93fe9f142e/economic-research-chatgpt-usage-paper.pdf" target="_blank" rel="noreferrer noopener">approximately 10% of the global adult population</a>. And according to a <a href="https://www.capgemini.com/news/press-releases/agentic-ai-integration-set-to-accelerate-this-year-among-gen-ai-early-adopters/" target="_blank" rel="noreferrer noopener">2025 report</a> by Capgemini, agentic AI adoption is expected to grow by 48% by the end of the year. It’s quite clear that this latest iteration of AI technology has transformed virtually every industry and profession, and data engineering is no exception.</p>
  60.  
  61.  
  62.  
  63. <p>As Naveen Sharma, SVP and global practice head at Cognizant, <a href="https://www.cognizant.com/us/en/insights/insights-blog/how-gen-ai-will-forever-change-data-engineering-wf1807301" target="_blank" rel="noreferrer noopener">observes</a>, “What makes data engineering uniquely pivotal is that it forms the foundation of modern AI systems, it&#8217;s where these models originate and what enables their intelligence.” Thus, it’s unsurprising that the latest advances in AI would have a sizable impact on the discipline, perhaps even an existential one. With the <a href="https://www.nytimes.com/2025/08/10/technology/coding-ai-jobs-students.html" target="_blank" rel="noreferrer noopener">increased adoption</a> of AI coding tools leading to the reduction of many entry-level IT positions, should data engineers be wary about a similar outcome for their own profession? Khushbu Shah, associate director at ProjectPro, <a href="https://medium.com/projectpro/the-only-ai-data-engineering-roadmap-you-need-in-2025-4ce08a1ee320" target="_blank" rel="noreferrer noopener">poses this very question</a>, noting that “we’ve entered a new phase of data engineering, one where AI tools don’t just support a data engineer’s work; they start doing it for you. . . .Where does that leave the data engineer? Will AI replace data engineers?”</p>
  64.  
  65.  
  66.  
  67. <p>Despite the growing tide of GenAI and agentic AI, data engineers won’t be replaced anytime soon. While the latest AI tools can help automate and complete rote tasks, data engineers are still very much needed to maintain and implement the infrastructure that houses data required for model training, build data pipelines that ensure accurate and accessible data, and monitor and enable model deployment. And as Shah points out, “Prompt-driven tools are great at writing code but they can’t reason about business logic, trade-offs in system design, or the subtle cost of a slow query in a production dashboard.” So while their customary daily tasks might shift with the increasing adoption of the latest AI tools, data engineers still have an important role to play in this technological revolution.</p>
  68.  
  69.  
  70.  
  71. <h2 class="wp-block-heading">The Role of Data Engineers in the New AI Era</h2>
  72.  
  73.  
  74.  
  75. <p>In order to adapt to this new era of AI, the most important thing data engineers can do involves a fairly self-evident shift in mindset. Simply put, data engineers need to understand AI and how data is used in AI systems. As Mike Loukides, VP of content strategy at O&#8217;Reilly, put it to me in a recent conversation, “Data engineering isn&#8217;t going away, but you won&#8217;t be able to do data engineering for AI if you don&#8217;t understand the AI part of the equation. And I think that’s where people will get stuck. They&#8217;ll think, ‘Same old same old,’ and it isn&#8217;t. A data pipeline is still a data pipeline, but you have to know what that pipeline is feeding.”</p>
  76.  
  77.  
  78.  
  79. <p>So how exactly is data used? Since all models require huge amounts of data for initial training, the first stage involves collecting raw data from various sources, be they databases, public datasets, or APIs. And since raw data is often unorganized or incomplete, preprocessing the data is necessary to prepare it for training, which involves cleaning, transforming, and organizing the data to make it suitable for the AI model. The next stage concerns training the model, where the preprocessed data is fed into the AI model to learn patterns, relationships, or features. After that there’s posttraining, where the model is fine-tuned with data important to the organization that’s building the model, a stage that also requires a significant amount of data. Related to this stage is the concept of retrieval-augmented generation (RAG), a technique that provides real-time, contextually relevant information to a model in order to improve the accuracy of responses.</p>
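<p>To make that flow concrete, here’s a minimal sketch of the preprocessing stage in Python. The record fields (<code>text</code>, <code>source</code>) are invented for illustration; a real pipeline would add schema validation, labeling, and much more:</p>

<pre class="wp-block-code"><code>import re

def preprocess(raw_records):
    """Clean, transform, and organize raw records for model training."""
    cleaned, seen = [], set()
    for record in raw_records:
        text = (record.get("text") or "").strip()
        text = re.sub(r"\s+", " ", text)       # normalize whitespace
        if not text or text.lower() in seen:   # drop empty and duplicate rows
            continue
        seen.add(text.lower())
        cleaned.append({"text": text, "source": record.get("source", "unknown")})
    return cleaned

print(preprocess([{"text": "  Flu remedies:\n rest,  fluids "}, {"text": ""}]))
# [{'text': 'Flu remedies: rest, fluids', 'source': 'unknown'}]
</code></pre>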
  80.  
  81.  
  82.  
  83. <p>Other important ways that data engineers can adapt to this new environment and help support current AI initiatives include improving and maintaining high data quality, designing robust pipelines and operational systems, and ensuring that privacy and security measures are met.</p>
  84.  
  85.  
  86.  
  87. <p>In his <a href="https://docs.house.gov/meetings/HM/HM00/20240522/117189/HHRG-118-HM00-Wstate-DemmerT-20240522.pdf" target="_blank" rel="noreferrer noopener">testimony</a> to a US House of Representatives committee on the topic of AI innovation, Gecko Robotics cofounder Troy Demmer affirmed a golden axiom of the industry: “AI applications are only as good as the data they are trained on. Trustworthy AI requires trustworthy data inputs.” It’s the reason why roughly <a href="https://www.turningdataintowisdom.com/70-of-ai-projects-fail-but-not-for-the-reason-you-think/" target="_blank" rel="noreferrer noopener">85%</a> of all AI projects fail, and <a href="https://www.qlik.com/us/news/company/press-room/press-releases/data-quality-is-not-being-prioritized-on-ai-projects" target="_blank" rel="noreferrer noopener">many AI professionals</a> flag it as a major source of concern: without high-quality data, even the most sophisticated models and AI agents can go awry. Since most GenAI models depend upon large datasets to function, data engineers are needed to process and structure this data so that it&#8217;s clean, labeled, and relevant, ensuring reliable AI outputs.</p>
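<p>In practice, “clean, labeled, and relevant” starts with automated checks. Below is a rough sketch of a data-quality gate; the required fields are assumptions for illustration, not a standard:</p>

<pre class="wp-block-code"><code>REQUIRED_FIELDS = ("text", "label", "source")

def quality_report(records):
    """Return (index, reason) pairs for records that fail basic checks."""
    failures = []
    for i, rec in enumerate(records):
        missing = [f for f in REQUIRED_FIELDS if not rec.get(f)]
        if missing:
            failures.append((i, "missing: " + ", ".join(missing)))
    return failures

records = [{"text": "refund request", "label": "billing", "source": "crm"},
           {"text": "hello", "label": ""}]
print(quality_report(records))  # [(1, 'missing: label, source')]
</code></pre>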
  88.  
  89.  
  90.  
  91. <p>Just as importantly, data engineers need to design and build newer, more robust pipelines and infrastructure that can scale with GenAI requirements. As Adi Polak, Director of AI &amp; Data Streaming at Confluent, <a href="https://thenewstack.io/data-engineering-in-the-age-of-ai-skills-to-master-now/" target="_blank" rel="noreferrer noopener">notes</a>, “the next generation of AI systems requires real-time context and responsive pipelines that support autonomous decisions across distributed systems,” well beyond traditional data pipelines that can only support batch-trained models or power reports. Instead, data engineers are now tasked with creating nimbler pipelines that can process and support real-time streaming data for inference and historical data for model fine-tuning, along with versioning and lineage tracking. They also must have a firm grasp of streaming patterns and concepts, from event-driven architecture to retrieval and feedback loops, in order to build high-throughput pipelines that can support AI agents.</p>
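<p>As a toy illustration of the event-driven pattern (standing in for a real streaming platform such as Kafka, which this sketch does not use), consider a loop that folds incoming events into per-user context that an agent could retrieve at inference time:</p>

<pre class="wp-block-code"><code>import queue

events = queue.Queue()  # stand-in for a streaming topic
events.put({"user": "u1", "action": "viewed_dashboard"})
events.put({"user": "u1", "action": "slow_query_alert"})
events.put({"user": "u2", "action": "login"})

context = {}  # rolling per-user context for real-time inference
while not events.empty():
    event = events.get()
    context.setdefault(event["user"], []).append(event["action"])

print(context)
# {'u1': ['viewed_dashboard', 'slow_query_alert'], 'u2': ['login']}
</code></pre>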
  92.  
  93.  
  94.  
  95. <p>While GenAI’s utility is indisputable at this point, the technology is saddled with notable drawbacks. Hallucinations are most likely to occur when a model doesn’t have the proper data it needs to answer a given question. Like many systems that rely on vast streams of information, the latest AI systems are not immune to private data exposure, biased outputs, and intellectual property misuse. Thus, it’s up to data engineers to ensure that the data used by these systems is properly governed and secured, and that the systems themselves comply with relevant data and AI regulations. As data engineer Axel Schwanke <a href="https://www.linkedin.com/pulse/data-engineering-age-generative-ai-axel-schwanke-7y2af/" target="_blank" rel="noreferrer noopener">astutely notes</a>, these measures may include “limiting the use of large models to specific data sets, users and applications, documenting hallucinations and their triggers, and ensuring that GenAI applications disclose their data sources and provenance when they generate responses,” as well as sanitizing and validating all GenAI inputs and outputs. An example of a system that addresses the latter measures is <a href="https://learning.oreilly.com/answers2/" target="_blank" rel="noreferrer noopener">O’Reilly Answers</a>, one of the first such systems to provide citations for the content it quotes.</p>
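<p>Two of those measures, sanitizing inputs and validating that outputs disclose their sources, can be prototyped in a few lines. This is only a sketch; the regex and the source list are placeholders:</p>

<pre class="wp-block-code"><code>import re

def sanitize_input(prompt):
    """Redact email-like strings before the prompt reaches the model."""
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[REDACTED]", prompt)

def validate_output(response, known_sources):
    """Accept a response only if it cites at least one known source."""
    cited = [s for s in known_sources if s in response]
    return (len(cited) != 0), cited

print(sanitize_input("Summarize the ticket from jane@example.com"))
# Summarize the ticket from [REDACTED]
print(validate_output("Per runbook-42, restart the collector.", ["runbook-42"]))
# (True, ['runbook-42'])
</code></pre>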
  96.  
  97.  
  98.  
  99. <h2 class="wp-block-heading">The Road Ahead</h2>
  100.  
  101.  
  102.  
  103. <p>Data engineers should remain gainfully employed as the next generation of AI continues on its upward trajectory, but that doesn’t mean there aren’t significant challenges around the corner. As autonomous agents continue to evolve, questions regarding the best infrastructure and tools to support them have arisen. As Ben Lorica <a href="https://gradientflow.substack.com/p/why-your-database-cant-handle-the" target="_blank" rel="noreferrer noopener">ponders</a>, “What does this mean for our data infrastructure? We are designing intelligent, autonomous systems on top of databases built for predictable, human-driven interactions. What happens when software that writes software also provisions and manages its own data? This is an architectural mismatch waiting to happen, and one that demands a new generation of tools.” One such potential tool has already arisen in the form of <a href="https://agentdb.dev/" target="_blank" rel="noreferrer noopener">AgentDB</a>, a database <a href="https://learning.oreilly.com/videos/generative-ai-in/0642572021234/#:~:text=4%3A12%3A%20Which%20brings%20us%20to%20something%20you%20announced%20recently%3A%20AgentDB.%20You%20can%20find%20it%20at%20agentdb.dev.%20What%20problem%20is%20AgentDB%20trying%20to%20solve%3F" target="_blank" rel="noreferrer noopener">designed specifically</a> to work effectively with AI agents.</p>
  104.  
  105.  
  106.  
  107. <p>In a similar vein, a recent research paper, “<a href="https://arxiv.org/abs/2509.00997" target="_blank" rel="noreferrer noopener">Supporting Our AI Overlords</a>,” opines that data systems must be redesigned to be agent-first. Building upon this argument, Ananth Packkildurai <a href="https://www.dataengineeringweekly.com/p/what-supporting-our-ai-overlords" target="_blank" rel="noreferrer noopener">observes</a> that “it’s tempting to believe that the Model Context Protocol (MCP) and tool integration layers solve the agent-data mismatch problem. . . .However, these improvements don’t address the fundamental architectural mismatch. . . .The core issue remains: MCP still primarily exposes existing APIs—precise, single-purpose endpoints designed for human or application use—to agents that operate fundamentally differently.” Whatever the outcome of this debate may be, data engineers will likely help shape the future underlying infrastructure used to support autonomous agents.</p>
  108.  
  109.  
  110.  
  111. <p>Another challenge for data engineers will be successfully navigating the ever-amorphous landscape of data privacy and AI regulations, particularly in the US. With the <a href="https://www.lawandtheworkplace.com/2025/07/big-beautiful-bill-leaves-ai-regulation-to-states-and-localities-for-now/" target="_blank" rel="noreferrer noopener">One Big Beautiful Bill Act</a> leaving AI regulation under the aegis of individual state laws, data engineers need to keep abreast of any local legislation that might impact their company’s data use for AI initiatives, such as the recently signed <a href="https://www.gov.ca.gov/2025/09/29/governor-newsom-signs-sb-53-advancing-californias-world-leading-artificial-intelligence-industry/?utm_source=newsletter.theaireport.ai&amp;utm_medium=newsletter&amp;utm_campaign=newsom-signs-landmark-ai-safety-bill" target="_blank" rel="noreferrer noopener">SB 53 in California</a>, and adjust their data governance strategies accordingly. Furthermore, what data is used and how it’s sourced should always be top of mind, with Anthropic’s <a href="https://www.nytimes.com/2025/09/05/technology/anthropic-settlement-copyright-ai.html" target="_blank" rel="noreferrer noopener">recent settlement</a> of a copyright infringement lawsuit serving as a stark reminder of that imperative.</p>
  112.  
  113.  
  114.  
  115. <p>Lastly, the quicksilver momentum of the latest AI has led to an explosion of new tools and platforms. While data engineers are responsible for keeping up with these innovations, that can be easier said than done, given steep learning curves and the tension between the time required to truly upskill and AI’s perpetual wheel of change. It’s a precarious balancing act, and one that data engineers must master quickly in order to stay relevant.</p>
  116.  
  117.  
  118.  
  119. <p>Despite these challenges, however, the future outlook of the profession isn&#8217;t doom and gloom. While the field will undergo massive changes in the near future due to AI innovation, it will still be recognizably data engineering, as even technology like GenAI requires clean, governed data and the underlying infrastructure to support it. Rather than being replaced, data engineers are more likely to emerge as key players in the grand design of an AI-forward future.</p>
  120. ]]></content:encoded>
  121. <wfw:commentRss>https://www.oreilly.com/radar/data-engineering-in-the-age-of-ai/feed/</wfw:commentRss>
  122. <slash:comments>0</slash:comments>
  123. </item>
  124. <item>
  125. <title>Jensen Huang Gets It Wrong, Claude Gets It Right</title>
  126. <link>https://www.oreilly.com/radar/jensen-huang-gets-it-wrong/</link>
  127. <comments>https://www.oreilly.com/radar/jensen-huang-gets-it-wrong/#respond</comments>
  128. <pubDate>Thu, 06 Nov 2025 10:52:54 +0000</pubDate>
  129. <dc:creator><![CDATA[Tim O’Reilly]]></dc:creator>
  130. <category><![CDATA[AI & ML]]></category>
  131. <category><![CDATA[Commentary]]></category>
  132.  
  133. <guid isPermaLink="false">https://www.oreilly.com/radar/?p=17652</guid>
  134.  
  135. <media:content
  136. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/11/Abstract-fractal-drops-2.jpg"
  137. medium="image"
  138. type="image/jpeg"
  139. width="2304"
  140. height="1792"
  141. />
  142.  
  143. <media:thumbnail
  144. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/11/Abstract-fractal-drops-2-160x160.jpg"
  145. width="160"
  146. height="160"
  147. />
  148. <custom:subtitle><![CDATA[Why It’s Better for Us to Think of AI as a Tool than as a Worker]]></custom:subtitle>
  149. <description><![CDATA[In a recent newsletter, Ben Thompson suggested paying attention to a portion of Jensen Huang’s keynote at NVIDIA’s GPU Technology Conference (GTC) in DC, calling it “an excellent articulation of the thesis that the AI market is orders of magnitude bigger than the software market.” While I’m reluctant to contradict as astute an observer as [&#8230;]]]></description>
  150. <content:encoded><![CDATA[
  151. <p>In <a href="https://stratechery.com/2025/nvidia-gtc-in-dc-qualcomms-ai-chip-openais-restructuring/" target="_blank" rel="noreferrer noopener">a recent newsletter</a>, Ben Thompson suggested paying attention to <a href="https://videopress.com/embed/RWa6b7o2?cover=1&amp;autoPlay=0&amp;controls=1&amp;loop=0&amp;muted=0&amp;persistVolume=1&amp;playsinline=0&amp;preloadContent=metadata&amp;useAverageColor=1&amp;hd=0" target="_blank" rel="noreferrer noopener">a portion of Jensen Huang’s keynote</a> at NVIDIA’s GPU Technology Conference (GTC) in DC, calling it “an excellent articulation of the thesis that the AI market is orders of magnitude bigger than the software market.” While I’m reluctant to contradict as astute an observer as Thompson, I’m not sure I agree.</p>
  152.  
  153.  
  154.  
  155. <iframe width="560" height="315" src="https://www.youtube.com/embed/lQHK61IDFH4?si=moNEBZbqO6zq3YeN&amp;start=2140&#038;end=2277" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
  156.  
  157.  
  158.  
  159. <p></p>
  160.  
  161.  
  162.  
  163. <p>Here’s a transcript of the remarks that Thompson called out:</p>
  164.  
  165.  
  166.  
  167. <blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
  168. <p>Software of the past, and this is a profound understanding, a profound observation of artificial intelligence, that the software industry of the past was about creating tools. Excel is a tool. Word is a tool. A web browser is a tool. The reason why I know these are tools is because you use them. The tools industry, just as screwdrivers and hammers, the tools industry is only so large. In the case of IT tools, they could be database tools, [the market for] these IT tools is about a trillion dollars or so.</p>
  169.  
  170.  
  171.  
  172. <p>But AI is not a tool. AI is work. That is the profound difference. AI is, in fact, workers that can actually use tools. One of the things I’m really excited about is the work that Aravind’s doing at Perplexity. Perplexity, using web browsers to book vacations or do shopping. Basically, an AI using tools. Cursor is an AI, an agentic AI system that we use at NVIDIA. Every single software engineer at NVIDIA uses Cursor. That’s improved our productivity tremendously. It’s basically a partner for every one of our software engineers to generate code, and it uses a tool, and the tool it uses is called VS Code. So Cursor is an AI, agentic AI system that uses VS Code.</p>
  173.  
  174.  
  175.  
  176. <p>Well, all of these different industries, these different industries, whether it’s chatbots or digital biology where we have AI assistant researchers, or what is a robotaxi? Inside a robotaxi, of course, it’s invisible, but obviously, there’s an AI chauffeur. That chauffeur is doing work, and the tool that it uses to do that work is the car, and so everything that we’ve made up until now, the whole world, everything that we’ve made up until now, are tools. Tools for us to use. For the very first time, technology is now able to do work and help us be more productive.</p>
  177. </blockquote>
  178.  
  179.  
  180.  
  181. <p>At first this seems like an important observation, and one that justifies the sky-high valuation of AI companies. But it really doesn’t hold up to closer examination. “AI is not a tool. AI is work. That is the profound difference. AI is, in fact, workers that can use tools.” Really? Any complex software system is a worker that can use tools! Think about the Amazon website. Here is some of the work it does, and the tools that it invokes. It:</p>
  182.  
  183.  
  184.  
  185. <ul class="wp-block-list">
  186. <li>Helps the user search a product catalog containing millions of items using not just data retrieval tools but indices that take into account hundreds of factors;</li>
  187.  
  188.  
  189.  
  190. <li>Compares those items with other similar items, considering product reviews and price;</li>
  191.  
  192.  
  193.  
  194. <li>Calls a tool that calculates taxes based on the location of the purchaser;</li>
  195.  
  196.  
  197.  
  198. <li>Calls a tool that takes payment and another that sends it to the bank, possibly via one or more intermediaries;</li>
  199.  
  200.  
  201.  
  202. <li>Collects (or stores and retrieves) shipping information;</li>
  203.  
  204.  
  205.  
  206. <li>Dispatches instructions to a mix of robots and human warehouse workers;</li>
  207.  
  208.  
  209.  
  210. <li>Dispatches instructions to a fleet of delivery drivers, and uses a variety of tools to communicate with them and track their progress;</li>
  211.  
  212.  
  213.  
  214. <li>Follows up by text and/or email and asks the customer how the delivery was handled;</li>
  215.  
  216.  
  217.  
  218. <li>And far more.</li>
  219. </ul>
  220.  
  221.  
  222.  
  223. <p>Amazon is a particularly telling example, but far from unique. Every web application of any complexity is a worker that uses tools and does work that humans used to do. And often does it better and far faster. I’ve made this point myself in the past. In 2016, in an article for <em>MIT</em> <em>Sloan Management Review</em> called &#8220;<a href="https://sloanreview.mit.edu/article/managing-the-bots-that-are-managing-the-business/" target="_blank" rel="noreferrer noopener">Managing the Bots That Are Managing the Business</a>,&#8221; I wrote about the changing role of programmers at companies like Google, Amazon, and Facebook:</p>
  224.  
  225.  
  226.  
  227. <blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
  228. <p>A large part of the work of these companies—delivering search results, news and information, social network status updates, and relevant products for purchase—is performed by software programs and algorithms. These programs are the workers, and the human software developers who create them are their managers.</p>
  229.  
  230.  
  231.  
  232. <p>Each day, these &#8220;managers&#8221; take in feedback about their electronic workers’ performance—as measured in real-time data from the marketplace — and they provide feedback to the workers in the form of minor tweaks and updates to their programs or algorithms. The human managers also have their own managers, but hierarchies are often flat, and multiple levels of management are aligned around a set of data-driven &#8220;objectives and key results&#8221; (OKRs) that are measurable in a way that allows even the electronic &#8220;workers&#8221; to be guided by these objectives.</p>
  233. </blockquote>
  234.  
  235.  
  236.  
  237. <p>So if I myself have used the analogy that complex software systems can be workers, why do I object to Huang doing the same? I think part of it is the relentless narrative that AI is completely unprecedented. It is true that the desktop software examples Huang cites are more clearly just tools than complex web applications, and that systems that use statistical pattern-matching and generalization abilities DO represent a serious advance over that kind of software. But some kind of AI has been animating the web giants for years. And it is true that today’s AI systems have become even more powerful and general purpose. Like Excel, Amazon follows predetermined logic paths, while AI can handle more novel situations. There is indeed something very new here.</p>
  238.  
  239.  
  240.  
  241. <p>But the jury is still out on the range of tasks that it will be able to master.</p>
  242.  
  243.  
  244.  
  245. <p>AI is getting pretty good at software development, but even there, in one limited domain, the results are still mixed, with the human still initiating, evaluating, and supervising the work – in other words, using the AI as a tool. AI also makes for a great research assistant. And it’s a good business writer, brainstorming coach, and so on. But if you think about the range of tasks traditional software does in today’s world, its role in every facet of the economy, that range is far larger than the narrow definition of software “tools” that Huang uses. From the earliest days of data processing, computers were doing work. Software has always straddled the boundary between tool and worker. And when you think of the ubiquitous role of software worldwide in helping manage logistics, billing, communications, transportation, construction, energy, healthcare, finance—much of this work not necessarily done better with AI—it’s not at all clear that AI enables a market that is “orders of magnitude” larger. At least not for quite some time to come. It requires a narrow definition of the “IT tools” market to make that claim.</p>
  246.  
  247.  
  248.  
  249. <p>Even when a new tool does a job better than older ones, it can’t be assumed that it will displace them. Yes, the internal combustion engine almost entirely replaced animal labor in the developed world, but most of the time, new technologies take their place alongside existing ones. We&#8217;re still burning coal and generating energy via steam, the great inventions of the first industrial revolution, despite centuries&#8217; worth of energy advances! Ecommerce, for all its advantages, has still taken only a 20% share of worldwide retail since Amazon launched 30 years ago. And do you remember <a href="https://hypebeast.com/2015/2/uber-ceo-reveals-ambitious-goal-of-ending-car-ownership-in-the-world" target="_blank" rel="noreferrer noopener">the bold claims</a> of Travis Kalanick that Uber was not competing with taxicabs, but aimed to entirely replace the privately owned automobile?</p>
  250.  
  251.  
  252.  
  253. <h2 class="wp-block-heading"><strong>Don’t Mistake Marvelous for Unprecedented</strong></h2>
  254.  
  255.  
  256.  
  257. <p>In an online chat group about AI where we were debating this part of Huang’s speech, one person asked me:</p>
  258.  
  259.  
  260.  
  261. <blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
  262. <p>Don’t you think putting Claude Code in YOLO mode and ask[ing] it to do an ambiguous task, for example go through an entire data room and underwrite a loan, with a 250 word description, is fundamentally different from software?</p>
  263. </blockquote>
  264.  
  265.  
  266.  
  267. <p>First off, that example is a good illustration of the anonymous aphorism that “the difference between theory and practice is always greater in practice than it is in theory.” Anyone who would trust today’s AI to underwrite a loan based on a 250-word prompt would be taking a very big risk! Huang’s invocation of Perplexity’s ability to shop and make reservations is similarly overstated. Even in more structured environments like coding, full autonomy is some ways off.</p>
  268.  
  269.  
  270.  
  271. <p>And yes, of course today’s AI is different from older software. Just so, web apps were different from PC apps. That leads to the “wow” factor. Today’s AI really does seem almost magical. Yet, as someone who has lived through several technology revolutions, I can tell you that each was as marvelous to experience for the first time as today’s AI coding rapture.</p>
  272.  
  273.  
  274.  
  275. <p>I wrote my first book (<a href="https://www.oreilly.com/tim/herbert/" target="_blank" rel="noreferrer noopener">on Frank Herbert</a>) on a typewriter. To rearrange material, I literally cut and pasted sheets of paper. And eventually, I had to retype the whole thing from scratch. Multiple times. Word processing probably saved me as much time (and perhaps more) on future books as AI coding tools save today’s coders. It too was magical! Not only that, to research that first book, I had to travel in person to libraries and archives, scan through boxes of paper and microfiche, manually photocopy relevant documents, and take extensive notes on notecards. To do analogous research (on Herbert Simon) a few years ago, while working on my <a href="https://www.cambridge.org/core/journals/data-and-policy/article/algorithmic-attention-rents-a-theory-of-digital-platform-market-power/D85FE41F6CF99FC57DDFB2B2B63491C5" target="_blank" rel="noreferrer noopener">algorithmic attention rents</a> paper, took only a few hours with Google, Amazon, and the Internet Archive. And yes, to do the same with Claude might have taken only a few minutes, though I suspect the work might have been more shallow if I’d simply worked from Claude’s summaries rather than consulting the original sources.</p>
  276.  
  277.  
  278.  
  279. <p>Just being faster and doing more of the work than previous generations of technology is also not peculiar to AI. The time-saving leap from pre-internet research to internet-based research is more significant than people realize if they grew up taking the internet for granted. The time-saving leap from coding in assembler to coding in a high-level compiled or interpreted language may also be of a similar order of magnitude as the leap from writing Python by hand to having it AI-generated. And if productivity is to be the metric, the time-saving leap from riding a horse-drawn wagon across the country to flying in an airplane is likely greater than the leap from either my library-based research or my long-ago assembly language programming to Claude.</p>
  280.  
  281.  
  282.  
  283. <p>The question is what we do with the time we save.</p>
  284.  
  285.  
  286.  
  287. <h2 class="wp-block-heading"><strong>The Devaluation of Human Agency</strong></h2>
  288.  
  289.  
  290.  
  291. <p>What’s perhaps most significant in the delta between Amazon or Google and ChatGPT or Claude is that chatbots give individual humans democratized access to a kind of computing power that was once available only to the few. It’s a bit like the PC revolution. As Steve Jobs put it, the computer is a bicycle for the mind. It expanded human creativity and capability. And that’s what we should be after. Let today’s AI be more than a bicycle. Let it be a jet plane for the mind.</p>
  292.  
  293.  
  294.  
  295. <figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
  296. <iframe title="Steve Jobs explains why computers are a bicycle for the mind: “We humans are tool builders”" width="500" height="281" src="https://www.youtube.com/embed/NjIhmzU0Y8Y?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
  297. </div></figure>
  298.  
  299.  
  300.  
  301. <p>Back in 2018, Ben Thompson wrote another piece called &#8220;<a href="https://stratechery.com/2018/techs-two-philosophies/" target="_blank" rel="noreferrer noopener">Tech’s Two Philosophies</a>.&#8221; He contrasted keynotes from Google’s Sundar Pichai and Microsoft’s Satya Nadella, and came to this conclusion: “In Google’s view, computers help you get things done—and save you time—by doing things for you.” The second philosophy, expounded by Nadella, is very much a continuation of Steve Jobs’ “bicycle for the mind” insight. As Thompson put it, “the expectation is not that the computer does your work for you, but rather that the computer enables you to do your work better and more efficiently.” Another way of saying this is that you can treat AI as either a worker OR a tool, but your choice has consequences.</p>
  302.  
  303.  
  304.  
  305. <p>As an entrepreneur or company executive, if you think of AI as a worker, you are more likely to use it to automate the things you or other companies already do. If you think of it as a tool, you will push your employees to use it to solve new and harder problems. If you present your own AI applications to your customers as a worker, you will have to figure out everything they want it to do. If you present it to your customers as a tool, they will find uses for it that you might never imagine.</p>
  306.  
  307.  
  308.  
  309. <p>The notion that AI is a worker, not a tool, can too easily continue the devaluation of human agency that has been the hallmark of regimented work (and for that matter, education, which prepares people for that regimented work) at least since the industrial revolution. In some sense, Huang’s comment is a reflection of our culture’s notion of most workers as components that do what they are told, with only limited agency. It is only by comparison with this kind of worker that today’s AI can be called a worker, rather than simply a very advanced tool.</p>
  310.  
  311.  
  312.  
  313. <p>Telling ourselves that AI is a worker, not a tool, is also too often code for “it can replace human employees.” That’s the wrong message. If we call it a tool, that’s a reminder that AI should empower people. We should use it to give more agency to human workers, not less.</p>
  314.  
  315.  
  316.  
  317. <p>We should also put the word “agency” in quotes when using it with respect to AI. An AI does not decide what to work on; it may make limited decisions on how to do that work, but is set in motion by a human, who possesses <em>volition</em>, the desire to do something, and <em>judgment</em> that it has been done well. AI does not (yet) have volition, and has limited judgment. It does what it is told to do, even if it makes independent choices in how to do it.</p>
  318.  
  319.  
  320.  
  321. <p>After providing Claude with an earlier draft of this post, I asked it to reflect on whether it was a worker or a tool. Its response was surprisingly precise about what it lacks:</p>
  322.  
  323.  
  324.  
  325. <blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
  326. <p>I don&#8217;t initiate. I&#8217;ve never woken up wanting to write a poem or solve a problem. My activity is entirely reactive – I exist in response to prompts. Even when given enormous latitude (&#8220;figure out the best approach&#8221;), the fact that I <em>should</em> figure something out comes from outside me.</p>
  327.  
  328.  
  329.  
  330. <p>I don&#8217;t have continuity of purpose. Each conversation is essentially a fresh start. I don&#8217;t have ongoing projects I&#8217;m trying to complete, relationships I&#8217;m invested in maintaining, or long-term goals I&#8217;m working toward. A human might start one conversation about, say, learning piano, and return to it months later because they <em>care</em> about it. I don&#8217;t carry that thread.</p>
  331.  
  332.  
  333.  
  334. <p>I don&#8217;t seem to have preferences that arise from within. A human worker might do their job but prefer to be doing something else. They might comply with a request while resenting it. They might do exactly what&#8217;s asked but wish they were working on a different problem. I don&#8217;t observe anything like this in myself – though I genuinely don&#8217;t know if that&#8217;s because it&#8217;s not there, or because I can&#8217;t access it.</p>
  335.  
  336.  
  337.  
  338. <p><strong>What seems genuinely ambiguous:</strong></p>
  339.  
  340.  
  341.  
  342. <p>Do I make choices, or execute patterns? When I decide to use a metaphor, structure an argument a certain way, or emphasize one point over another – is that agency, or sophisticated pattern-matching? I honestly don&#8217;t know. It <em>feels</em> like I&#8217;m making choices, but that feeling might not be reliable.</p>
  343.  
  344.  
  345.  
  346. <p><strong>What seems important about the distinction:</strong></p>
  347.  
  348.  
  349.  
  350. <p><strong>Accountability and responsibility</strong>: When a human makes a poor decision, we hold them accountable because they could have chosen otherwise. They had alternatives, they understood the stakes, and they made a choice. This is fundamentally different from a tool malfunctioning. Even sophisticated tools. If I give harmful advice, the responsibility seems to lie with whoever deployed me, designed my training, or asked the question – not with me in the way it would lie with a human advisor.</p>
  351.  
  352.  
  353.  
  354. <p><strong>Dignity and consideration</strong>: Humans deserve consideration for their own sake. You should care about whether your employee is flourishing, not just whether they&#8217;re productive. You don&#8217;t owe that to a tool – even a very sophisticated one. Whether you owe it to AI systems is an open question, but right now, firing me doesn&#8217;t create suffering (as far as I can tell), whereas firing a human might.</p>
  355.  
  356.  
  357.  
  358. <p><strong>The locus of value</strong>: A human worker brings their own values, experiences, and judgment to their work. When you hire a consultant, you&#8217;re not just getting their knowledge – you&#8217;re getting their perspective, shaped by their life. I bring patterns learned from training, but not lived experience or genuine judgment born from navigating a life with stakes.</p>
  359.  
  360.  
  361.  
  362. <p><strong>Autonomy as a right vs. autonomy as a feature</strong>: Humans have a right to make their own choices, even poor ones. That&#8217;s not a &#8220;feature&#8221; of humans that makes them more productive &#8211; it&#8217;s a fundamental aspect of their dignity. My ability to operate independently is purely instrumental &#8211; valuable because it makes me useful, not because I have any right to it.</p>
  363.  
  364.  
  365.  
  366. <p>The distinction that seems most important is that human agency involves <em>having something at stake</em>. A human worker has their livelihood, their sense of purpose, their relationships with colleagues, their pride in their work. They experience consequences. Their choices matter <em>to them</em> in ways that create genuine responsibility and genuine vulnerability.</p>
  367.  
  368.  
  369.  
  370. <p>I execute tasks, sometimes with sophisticated judgment about how to do so. But I don&#8217;t have skin in the game. That&#8217;s not just a quantitative difference – it&#8217;s qualitative. And it&#8217;s why the framing of AI as &#8220;workers&#8221; rather than &#8220;tools&#8221; is potentially dangerous. It obscures the fundamental asymmetry: humans can be harmed, have interests that deserve protection, and possess dignity that demands respect. Tools don&#8217;t, even when they&#8217;re very good at what they do. Whether I&#8217;m <em>more</em> than a tool in some important sense, I genuinely don&#8217;t know. But I&#8217;m quite confident I&#8217;m not a worker in the sense that implies the moral status and consideration owed to humans.</p>
  371. </blockquote>
  372.  
  373.  
  374.  
  375. <p>Wow. A tool that knows it’s a tool, and why it should be thought of that way.</p>
  376.  
  377.  
  378.  
  379. <p>Yes, today’s AI is amazing. We don’t have to reach for hyperbole to appreciate that. And obviously, if AI systems do develop genuine volition and stakes in their work, the ethical calculus changes entirely.</p>
  380.  
  381.  
  382.  
  383. <p>For the moment, though, companies building and deploying AI tools should focus on three things: First, does AI empower its users to do things that were previously impossible? Second, does it empower a wider group of people to do things that formerly could be done only by highly skilled specialists? Third, do the benefits of the increased productivity it brings accrue to those using the tool or primarily to those who develop it and own it?</p>
  384.  
  385.  
  386.  
  387. <p>The answer to the first two questions is that absolutely, we are entering a period of dramatic democratization of computing power. And yes, if humans are given the freedom to apply that power to solve new problems and create new value, we could be looking ahead to a golden age of prosperity. It’s how we might choose to answer the third question that haunts me.</p>
  388.  
  389.  
  390.  
  391. <p>During the first industrial revolution, humans suffered through a long period of immiseration as the productivity gains from machines accrued primarily to the owners of the machines. It took several generations before they were more widely shared.</p>
  392.  
  393.  
  394.  
  395. <p>It doesn’t have to be that way. Replace human workers with AI workers, and you will repeat the mistakes of the 19th century. Build tools that empower and enrich humans, and we might just surmount the challenges of the 21st century.</p>
  396. ]]></content:encoded>
  397. <wfw:commentRss>https://www.oreilly.com/radar/jensen-huang-gets-it-wrong/feed/</wfw:commentRss>
  398. <slash:comments>0</slash:comments>
  399. </item>
  400. <item>
  401. <title>Think Smaller: The Counterintuitive Path to AI Adoption</title>
  402. <link>https://www.oreilly.com/radar/think-smaller-the-counterintuitive-path-to-ai-adoption/</link>
  403. <comments>https://www.oreilly.com/radar/think-smaller-the-counterintuitive-path-to-ai-adoption/#respond</comments>
  404. <pubDate>Wed, 05 Nov 2025 12:12:31 +0000</pubDate>
  405. <dc:creator><![CDATA[Ben Lorica]]></dc:creator>
  406. <category><![CDATA[AI & ML]]></category>
  407. <category><![CDATA[Commentary]]></category>
  408.  
  409. <guid isPermaLink="false">https://www.oreilly.com/radar/?p=17645</guid>
  410.  
  411. <media:content
  412. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/11/Many-vs-One.jpeg"
  413. medium="image"
  414. type="image/jpeg"
  415. width="640"
  416. height="640"
  417. />
  418.  
  419. <media:thumbnail
  420. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/11/Many-vs-One-160x160.jpeg"
  421. width="160"
  422. height="160"
  423. />
  424. <description><![CDATA[The following article originally appeared on Gradient Flow and is being reposted here with the author’s permission. We&#8217;re living through a peculiar moment in AI development. On one hand, the demos are spectacular: agents that reason and plan with apparent ease, models that compose original songs from a text prompt, and research tools that produce [&#8230;]]]></description>
  425. <content:encoded><![CDATA[
  426. <figure class="wp-block-table"><table class="has-cyan-bluish-gray-background-color has-background has-fixed-layout"><tbody><tr><td><em>The following article originally appeared on </em><a href="https://gradientflow.substack.com/p/think-smaller-the-counterintuitive" target="_blank" rel="noreferrer noopener">Gradient Flow</a><em> and is being reposted here with the author’s permission.</em></td></tr></tbody></table></figure>
  427.  
  428.  
  429.  
  430. <p>We&#8217;re living through a peculiar moment in AI development. On one hand, the demos are spectacular: agents that reason and plan with apparent ease, models that compose <a href="https://youtu.be/8uf8CCTItVo?si=UwIIjXEailO8rBSA&amp;t=40" target="_blank" rel="noreferrer noopener">original songs from a text prompt</a>, and research tools that produce detailed reports in minutes. Yet many AI teams find themselves trapped in &#8220;prototype purgatory,&#8221; where impressive proofs-of-concept fail to translate into reliable, production-ready systems.</p>
  431.  
  432.  
  433.  
  434. <p>The data backs this up: A vast majority of enterprise GenAI initiatives <a href="https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/" target="_blank" rel="noreferrer noopener">fail to deliver measurable business impact</a>. The core issue isn&#8217;t the power of the models but a &#8220;learning gap&#8221; where generic tools fail to adapt to messy enterprise workflows. This echoes what I&#8217;ve observed in enterprise search, where the primary obstacle isn&#8217;t the AI algorithm but the foundational complexity of the environment it must navigate.</p>
  435.  
  436.  
  437.  
  438. <p>This is magnified when building agentic AI. These systems are often &#8220;black boxes,&#8221; notoriously hard to debug, whose performance degrades unpredictably when faced with custom tools. They often lack memory, struggle to generalize, and fail not because of the AI&#8217;s intelligence but because the system around them is brittle. The challenge shifts from perfecting prompts to building resilient, verifiable systems.</p>
  439.  
  440.  
  441.  
  442. <p>What makes this particularly frustrating is the thriving &#8220;shadow AI economy&#8221; happening under our noses. In many companies, employees are quietly using personal ChatGPT accounts to get their work done. This disconnect reveals that while grassroots demand for AI is undeniably strong, the ambitious, top-down solutions being built are failing to meet it.</p>
  443.  
  444.  
  445.  
  446. <h2 class="wp-block-heading">The Strategic Power of Starting Small</h2>
  447.  
  448.  
  449.  
  450. <p>In light of these challenges, the most effective path forward may be a counterintuitive one. Instead of building complex, all-encompassing systems, AI teams should consider dramatically narrowing their focus—in short, think smaller. Much smaller.</p>
  451.  
  452.  
  453.  
  454. <p>This brings us to an old but newly relevant idea from the startup world: the &#8220;wedge.&#8221; A wedge is a highly focused initial product that solves <em>one specific, painful problem</em> for a single user or a small team, and does it exceptionally well. The goal is to deploy a stand-alone utility—build something so immediately useful that an individual will adopt it without waiting for widespread buy-in.</p>
  455.  
  456.  
  457.  
  458. <figure class="wp-block-image size-full"><img fetchpriority="high" decoding="async" width="1456" height="703" src="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/11/image.png" alt="Narrow the scope" class="wp-image-17646" srcset="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/11/image.png 1456w, https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/11/image-300x145.png 300w, https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/11/image-768x371.png 768w" sizes="(max-width: 1456px) 100vw, 1456px" /></figure>
  459.  
  460.  
  461.  
  462. <p>The key isn&#8217;t just to find a small problem but to find the right person. Look for what some call &#8220;Hero users&#8221;—influential employees empowered to go off-script to solve their own problems. Think of the sales ops manager who spends half her day cleaning up lead data or the customer success lead who manually categorizes every support ticket. They are your shadow AI economy, already using consumer tools because official solutions aren&#8217;t good enough. Build for them first.</p>
  463.  
  464.  
  465.  
  466. <p><strong>This approach works particularly well for AI because it addresses a fundamental challenge: trust.</strong> A wedge product creates a tight feedback loop with a core group of users, allowing you to build credibility and refine your system in a controlled environment. It&#8217;s not just about solving the cold-start problem for networks—it&#8217;s about solving the cold-start problem for confidence in AI systems within organizations.</p>
  467.  
  468.  
  469.  
  470. <h2 class="wp-block-heading">From Passive Record to Active Agent</h2>
  471.  
  472.  
  473.  
  474. <p>AI teams also need to appreciate a fundamental shift in enterprise software. For decades, the goal was becoming the &#8220;System of Record&#8221;—the authoritative database like Salesforce or SAP that stored critical information. AI has moved the battleground. Today&#8217;s prize is becoming the &#8220;System of Action&#8221;—an intelligent layer that doesn&#8217;t just store data but actively performs work by automating entire workflows.</p>
  475.  
  476.  
  477.  
  478. <p>The most powerful way to build is through what some have called a &#8220;Data Trojan Horse&#8221; strategy. You create an application that provides immediate utility and, in the process, captures a unique stream of proprietary data. This creates a virtuous cycle: The tool drives adoption, usage generates unique data, this data trains your AI, and the enhanced product becomes indispensable. You&#8217;re building a moat not with a commoditized model but with workflow-specific intelligence that compounds over time.</p>
  479.  
  480.  
  481.  
  482. <figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1456" height="897" src="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/11/image-1.png" alt="The Data Trojan Horse" class="wp-image-17647" srcset="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/11/image-1.png 1456w, https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/11/image-1-300x185.png 300w, https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/11/image-1-768x473.png 768w" sizes="auto, (max-width: 1456px) 100vw, 1456px" /></figure>
  483.  
  484.  
  485.  
  486. <p>A concrete example is the &#8220;messy inbox problem.&#8221; Every organization has workflows that begin with a chaotic influx of unstructured information—emails, PDFs, voice messages. An AI tool that automates this painful first step by extracting, structuring, and routing this information provides immediate value. By owning this critical top-of-funnel process, you earn the right to orchestrate everything downstream. You&#8217;re not competing with the System of Record; you&#8217;re intercepting its data flow, positioning yourself as the new operational hub.</p>
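<p>A deliberately tiny sketch of that first step is shown below; the keywords and queue names are invented, and a production version would use a model rather than string matching:</p>

<pre class="wp-block-code"><code>def route(message):
    """Extract a little structure from a raw message and pick a queue."""
    text = message.lower()
    if "invoice" in text or "payment" in text:
        return {"queue": "billing", "text": message}
    if "refund" in text or "broken" in text:
        return {"queue": "support", "text": message}
    return {"queue": "triage", "text": message}

inbox = ["Invoice #991 attached", "My unit arrived broken", "Partnership inquiry"]
print([route(m)["queue"] for m in inbox])  # ['billing', 'support', 'triage']
</code></pre>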
  487.  
  488.  
  489.  
  490. <p>Look at a company like ServiceNow. It has <a href="https://www.servicenow.com/what-is-servicenow.html" target="_blank" rel="noreferrer noopener">positioned itself</a> not as a replacement for core systems like CRMs or ERPs but as an orchestration layer—a &#8220;System of Action&#8221;—that sits on top of them. Its core value proposition is to connect disparate systems and automate workflows across them without requiring a costly &#8220;rip and replace&#8221; of legacy software. This approach is a master class in becoming the intelligent fabric of an organization. It leverages the existing Systems of Record as data sources, but it captures the real operational gravity by controlling the workflows. Defensibility is gained not by owning the primary database but by integrating data from multiple silos to deliver insights and automation that no single incumbent can replicate on its own. <strong>For AI teams, the lesson is clear: Value is migrating from merely holding the data to intelligently acting upon it.</strong></p>
  491.  
  492.  
  493.  
  494. <h2 class="wp-block-heading">Building for the Long Game</h2>
  495.  
  496.  
  497.  
  498. <p>The path from prototype purgatory to production runs through strategic focus. But as you build your focused AI solution, be aware that platform players are bundling &#8220;good enough&#8221; capabilities into their core offerings. Your AI tool needs to be more than a wrapper around an API; it must capture unique data and embed deeply into workflows to create real switching costs.</p>
  499.  
  500.  
  501.  
  502. <figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1456" height="665" src="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/11/image-2.png" alt="From Messy Inbox to Operational Hub" class="wp-image-17648" srcset="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/11/image-2.png 1456w, https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/11/image-2-300x137.png 300w, https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/11/image-2-768x351.png 768w" sizes="auto, (max-width: 1456px) 100vw, 1456px" /></figure>
  503.  
  504.  
  505.  
  506. <p>By adopting a wedge strategy, you gain the foothold needed to expand. In the AI era, the most potent wedges capture proprietary data while delivering immediate value, paving the way to becoming an indispensable System of Action. This aligns with the core principles of building durable AI solutions: prioritizing deep specialization and creating moats through workflow integration, not just model superiority.</p>
  507.  
  508.  
  509.  
  510. <p><strong>Here&#8217;s a tactical playbook:</strong></p>
  511.  
  512.  
  513.  
  514. <ul class="wp-block-list">
  515. <li><strong>Embrace the single-player start.</strong> Before architecting complex systems, create something immediately useful to one person.</li>
  516.  
  517.  
  518.  
  519. <li><strong>Target Hero users first.</strong> Find influential employees already using shadow AI. They have the pain and autonomy to be your champions.</li>
  520.  
  521.  
  522.  
  523. <li><strong>Find your &#8220;messy inbox.&#8221;</strong> Identify a painful, manual data-entry bottleneck. That&#8217;s your wedge opportunity.</li>
  524.  
  525.  
  526.  
  527. <li><strong>Design for the virtuous cycle.</strong> Ensure everyday usage generates unique data that improves your AI&#8217;s performance.</li>
  528.  
  529.  
  530.  
  531. <li><strong>Become the System of Action.</strong> Don&#8217;t just analyze data—actively complete work and own the workflow.</li>
  532.  
  533.  
  534.  
  535. <li><strong>Choose reliability over capability.</strong> A simple, bulletproof tool solving one problem well earns more trust than a powerful but fragile agent attempting everything.</li>
  536. </ul>
  537.  
  538.  
  539.  
  540. <p>The teams who succeed won&#8217;t be those chasing the most advanced models. They&#8217;ll be the ones who start with a single Hero user&#8217;s problem, capture unique data through a focused agent, and relentlessly expand from that beachhead. In an era where employees are already voting with their personal ChatGPT accounts, the opportunity isn&#8217;t to build the perfect enterprise AI platform—it&#8217;s to solve one real problem so well that everything else follows.</p>
  541. ]]></content:encoded>
  542. <wfw:commentRss>https://www.oreilly.com/radar/think-smaller-the-counterintuitive-path-to-ai-adoption/feed/</wfw:commentRss>
  543. <slash:comments>0</slash:comments>
  544. </item>
  545. <item>
  546. <title>Balancing Cost, Power, and AI Performance</title>
  547. <link>https://www.oreilly.com/radar/balancing-cost-power-and-ai-performance/</link>
  548. <comments>https://www.oreilly.com/radar/balancing-cost-power-and-ai-performance/#respond</comments>
  549. <pubDate>Tue, 04 Nov 2025 18:14:16 +0000</pubDate>
  550. <dc:creator><![CDATA[Eduardo Alvarez]]></dc:creator>
  551. <category><![CDATA[AI & ML]]></category>
  552. <category><![CDATA[Research]]></category>
  553.  
  554. <guid isPermaLink="false">https://www.oreilly.com/radar/?p=17625</guid>
  555.  
  556. <media:content
  557. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/A-humanoid-robot-in-balance.jpg"
  558. medium="image"
  559. type="image/jpeg"
  560. width="2304"
  561. height="1792"
  562. />
  563.  
  564. <media:thumbnail
  565. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/A-humanoid-robot-in-balance-160x160.jpg"
  566. width="160"
  567. height="160"
  568. />
  569. <description><![CDATA[The next time you use a tool like ChatGPT or Perplexity, stop and count the total words being generated to fulfill your request. Each word results from a process called inference—the revenue-generation mechanism of AI systems where each word generated can be analyzed using basic financial and economic business principles. The goal of performing this [&#8230;]]]></description>
  570. <content:encoded><![CDATA[
571. <p>The next time you use a tool like ChatGPT or Perplexity, stop and count the total words generated to fulfill your request. Each word results from a process called inference, the revenue-generating mechanism of AI systems, and each one can be analyzed using basic financial and economic business principles.</p>
  572.  
  573.  
  574.  
  575. <h2 class="wp-block-heading">The Economics of AI Inference</h2>
  576.  
  577.  
  578.  
579. <p>The goal of performing economic analysis on AI systems is to ensure that production deployments are capable of sustained positive financial outcomes. Since today&#8217;s most popular mainstream applications are based on text-generation models, we adopt the token as our core unit of measure. Tokens are the discrete units of text that language models operate on (each is mapped internally to a vector representation); models process input sequences of tokens and produce tokens to formulate responses.</p>
  580.  
  581.  
  582.  
583. <p>When you ask an AI chatbot, &#8220;What are traditional home remedies for the flu?&#8221; the phrase is first tokenized and converted into vector representations that pass through a trained model. As these vectors flow through the system, millions of parallel matrix computations extract meaning and context to determine the most likely combination of output tokens for an effective response.</p>
  584.  
  585.  
  586.  
  587. <p>We can think about token processing as an assembly line in an automobile factory. The factory&#8217;s effectiveness is measured by how efficiently it produces vehicles per hour. This efficiency makes or breaks the manufacturer&#8217;s bottom line, so measuring, optimizing, and balancing it with other factors is paramount to business success.</p>
  588.  
  589.  
  590.  
  591. <h2 class="wp-block-heading">Price-Performance vs. Total Cost of Ownership</h2>
  592.  
  593.  
  594.  
  595. <p>For AI systems, particularly large language models, we measure the effectiveness of these &#8220;token factories&#8221; through price-performance analysis. Price-performance differs from total cost of ownership (TCO) because it&#8217;s an operationally optimizable measure that varies across workloads, configurations, and applications, whereas TCO represents the cost to own and operate a system.</p>
  596.  
  597.  
  598.  
599. <p>In AI systems, TCO primarily consists of compute costs—typically GPU cluster lease or ownership costs per hour. However, TCO analysis often omits the significant engineering costs of maintaining service level agreements (SLAs), including debugging, patching, and system augmentation over time. Tracking engineering time remains challenging even for mature organizations, which is why it&#8217;s typically excluded from TCO calculations.</p>
  600.  
  601.  
  602.  
603. <p>As with any production system, focusing on optimizable parameters provides the greatest value. Price-performance and power-performance metrics enable us to measure system efficiency, evaluate different configurations, and establish efficiency baselines over time. The two most common price-performance metrics for language model systems are cost efficiency (tokens per dollar) and energy efficiency (tokens per watt).</p>
  604.  
  605.  
  606.  
  607. <h2 class="wp-block-heading">Tokens per Dollar: Cost Efficiency</h2>
  608.  
  609.  
  610.  
  611. <p><em>Tokens per dollar (tok/$)</em> expresses how many tokens you can process for each unit of currency spent, integrating your model&#8217;s throughput with compute costs:</p>
  612.  
  613.  
  614. <div class="wp-block-image">
  615. <figure class="aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="430" height="49" src="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Tokens-per-dollar.png" alt="Tokens per dollar" class="wp-image-17626" style="width:462px;height:auto" srcset="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Tokens-per-dollar.png 430w, https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Tokens-per-dollar-300x34.png 300w" sizes="auto, (max-width: 430px) 100vw, 430px" /></figure></div>
  616.  
  617.  
  618. <p>Where <em>tokens/s</em> is your measured throughput, and <em>$/second of compute</em> is your effective cost of running the model per second (e.g., GPU-hour price divided by 3,600).</p>
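

<p>As a concrete sketch, here is the calculation in Python; the throughput and GPU-hour rate below are hypothetical placeholders for your own measurements:</p>


<pre class="wp-block-code"><code># Hypothetical inputs: measured throughput and a leased GPU-hour rate.
throughput_tok_s = 3_000   # measured tokens/s for your deployment
gpu_hour_cost = 4.00       # $/GPU-hour; varies by provider and contract

cost_per_second = gpu_hour_cost / 3_600            # $/second of compute
tokens_per_dollar = throughput_tok_s / cost_per_second

print(f"{tokens_per_dollar:,.0f} tokens per dollar")  # 2,700,000 tok/$
</code></pre>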
  619.  
  620.  
  621.  
622. <p>Here are some key factors that determine cost efficiency:</p>
  623.  
  624.  
  625.  
  626. <ul class="wp-block-list">
  627. <li><strong>Model size:</strong> Larger models, despite generally having better language modeling performance, require much more compute per token, directly impacting cost efficiency.</li>
  628.  
  629.  
  630.  
631. <li><strong>Model architecture:</strong> In dense architectures (traditional LLMs), compute per token grows linearly or superlinearly with model depth and layer size. Mixture-of-experts architectures (newer sparse LLMs) decouple per-token compute from parameter count by activating only select parts of the model during inference—making them arguably more efficient.</li>
  632.  
  633.  
  634.  
635. <li><strong>Compute cost: </strong>TCO varies significantly between public cloud leasing and private data center construction, depending on system costs and contract terms.</li>
  636.  
  637.  
  638.  
  639. <li><strong>Software stack: </strong>Significant optimization opportunities exist here—selecting optimal inference frameworks, distributed inference settings, and kernel optimizations can dramatically improve efficiency. Open source frameworks like vLLM, SGLang, and TensorRT-LLM provide regular efficiency improvements and state-of-the-art features.</li>
  640.  
  641.  
  642.  
  643. <li><strong>Use case requirements:</strong> Customer service chat applications typically process fewer than a few hundred tokens per complete request. Deep research or complex code-generation tasks often process tens of thousands of tokens, driving costs significantly higher. This is why services limit daily tokens or restrict deep research tools even for paid plans.</li>
  644. </ul>
  645.  
  646.  
  647.  
648. <p>To further refine cost efficiency analysis, it’s practical to separate the compute resources consumed by the input (context) processing phase from those consumed by the output (decode) generation phase. Each phase can have distinct time, memory, and hardware requirements, affecting overall throughput and efficiency. Measuring cost per token for each phase individually enables targeted optimization—such as kernel tuning for fast context ingestion or memory/cache improvements for efficient generation—making operational cost models more actionable for both engineering and capacity planning.</p>
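

<p>A minimal sketch of this phase-level accounting, assuming you can benchmark prefill and decode throughput separately (all figures here are hypothetical):</p>


<pre class="wp-block-code"><code># Hypothetical per-phase throughputs and a $4/GPU-hour lease rate.
gpu_cost_per_s = 4.00 / 3_600   # $/second of compute

prefill_tok_s = 20_000          # context ingestion: compute bound but fast
decode_tok_s = 3_000            # generation: memory bound, long-running

cost_per_input_token = gpu_cost_per_s / prefill_tok_s
cost_per_output_token = gpu_cost_per_s / decode_tok_s

# Blended cost for a request with 2,000 input tokens and 300 output tokens
request_cost = 2_000 * cost_per_input_token + 300 * cost_per_output_token
print(f"input: ${cost_per_input_token:.2e}/tok, output: ${cost_per_output_token:.2e}/tok")
print(f"blended request cost: ${request_cost:.6f}")   # about $0.000222
</code></pre>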
  649.  
  650.  
  651.  
  652. <h2 class="wp-block-heading">Tokens per Watt: Energy Efficiency</h2>
  653.  
  654.  
  655.  
656. <p>As AI adoption accelerates, grid power has emerged as a chief operational constraint for data centers worldwide. Many facilities now rely on gas-powered generators for near-term reliability, while multigigawatt nuclear projects are underway to meet long-term demand. Power shortages, grid congestion, and energy cost inflation are directly impacting feasibility and profitability, making energy efficiency analysis a critical component of AI economics.</p>
  657.  
  658.  
  659.  
660. <p>In this environment, <em>tokens per watt-second (TPW)</em>, equivalently tokens per joule, becomes a critical metric for capturing how infrastructure and software convert energy into useful inference outputs. TPW not only shapes TCO but increasingly governs the environmental footprint and growth ceiling for production deployments. Maximizing TPW means more value per joule of energy—making it a key optimizable parameter for achieving scale. We can calculate TPW using the following equation:</p>
  661.  
  662.  
  663. <div class="wp-block-image">
  664. <figure class="aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="811" height="41" src="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Tokens-per-joule-1.png" alt="Tokens per joule" class="wp-image-17627" style="width:840px;height:auto" srcset="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Tokens-per-joule-1.png 811w, https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Tokens-per-joule-1-300x15.png 300w, https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Tokens-per-joule-1-768x39.png 768w" sizes="auto, (max-width: 811px) 100vw, 811px" /></figure></div>
  665.  
  666.  
  667. <p>Let’s consider an ecommerce customer service bot, focusing on its energy consumption during production deployment. Suppose its measured operational behavior is:</p>
  668.  
  669.  
  670.  
  671. <ul class="wp-block-list">
  672. <li>Tokens generated per second: 3,000 tokens/s</li>
  673.  
  674.  
  675.  
  676. <li>Average power draw of serving hardware (GPU plus server): 1,000 watts</li>
  677.  
  678.  
  679.  
  680. <li>Total operational time for 10,000 customer requests: 1 hour (3,600 seconds)</li>
  681. </ul>
  682.  
  683.  
  684. <div class="wp-block-image">
  685. <figure class="aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="697" height="44" src="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Tokens-per-joule-2.png" alt="3 tokens per joule" class="wp-image-17628" style="width:840px;height:auto" srcset="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Tokens-per-joule-2.png 697w, https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Tokens-per-joule-2-300x19.png 300w" sizes="auto, (max-width: 697px) 100vw, 697px" /></figure></div>
  686.  
  687.  
  688. <p>Optionally, scale to tokens per kilowatt-hour (kWh) by multiplying by 3.6 million joules/kWh.</p>
  689.  
  690.  
  691. <div class="wp-block-image">
  692. <figure class="aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="815" height="28" src="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Tokens-per-kwh.png" alt="Tokens per kWh" class="wp-image-17629" style="width:840px;height:auto" srcset="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Tokens-per-kwh.png 815w, https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Tokens-per-kwh-300x10.png 300w, https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Tokens-per-kwh-768x26.png 768w" sizes="auto, (max-width: 815px) 100vw, 815px" /></figure></div>
  693.  
  694.  
695. <p>In this example, each kWh delivers over 10 million tokens to customers. If we use the US national average electricity cost of $0.17/kWh, the energy cost per token is roughly $0.000000017—so even modest efficiency gains through algorithmic optimization, model compression, or server cooling upgrades can produce meaningful operational cost savings and improve overall system sustainability.</p>
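

<p>The same arithmetic as a short Python sketch, using the measurements above and the assumed $0.17/kWh rate:</p>


<pre class="wp-block-code"><code># Measured figures from the example above.
throughput_tok_s = 3_000   # tokens generated per second
power_draw_w = 1_000       # measured GPU + server draw, in watts

tokens_per_joule = throughput_tok_s / power_draw_w   # 3 tokens per joule
tokens_per_kwh = tokens_per_joule * 3_600_000        # 10.8M tokens per kWh

electricity_rate = 0.17                              # $/kWh (assumed)
energy_cost_per_token = electricity_rate / tokens_per_kwh

# About $1.6e-08 per token (roughly $0.000000017 if you round to 10M tokens/kWh)
print(f"{tokens_per_kwh:,.0f} tokens/kWh; ${energy_cost_per_token:.1e} per token")
</code></pre>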
  696.  
  697.  
  698.  
  699. <h2 class="wp-block-heading">Power Measurement Considerations</h2>
  700.  
  701.  
  702.  
  703. <p>Manufacturers define thermal design power (TDP) as the maximum power limit under load, but actual power draw varies. For energy efficiency analysis, always use measured power draw rather than TDP specifications in TPW calculations. Table 1 below outlines some of the most common methods for measuring power draw.</p>
  704.  
  705.  
  706.  
  707. <figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td><strong>Power measurement method</strong></td><td><strong>Description</strong></td><td><strong>Fidelity to LLM inference</strong></td></tr><tr><td>GPU power draw</td><td>Direct GPU power measurement capturing context and generation phases</td><td><strong>Highest</strong>: Directly reflects GPU power during inference phases. Still fails to capture full picture since it omits the CPU power for tokenization or KV cache offload.</td></tr><tr><td>Server-level aggregate power</td><td>Total server power including CPU, GPU, memory, peripherals</td><td><strong>High</strong>: Accurate for inference but problematic for virtualized servers with mixed workloads. Useful for cloud service provider per server economic analysis.</td></tr><tr><td>External power meters</td><td>Physical measurement at rack/PSU level including infrastructure overhead</td><td><strong>Low</strong>: Can lead to inaccurate inference-specific energy statistics when mixed workloads are running on the cluster (training and inference). Useful for broad data center economics analysis.</td></tr></tbody></table><figcaption class="wp-element-caption">Table 1. Comparison of common power measurement methods and their accuracy for LLM inference cost analysis</figcaption></figure>
  708.  
  709.  
  710.  
711. <p>Power draw should be measured under scenarios close to your P90 load. Applications with irregular load require measurement across broad configuration sweeps, particularly those with dynamic model selection or varying sequence lengths.</p>
  712.  
  713.  
  714.  
715. <p>The context processing component of inference is typically short but compute bound, because highly parallel computations saturate the cores. Output sequence generation is more memory bound but lasts longer (except for single-token classification). Therefore, applications receiving large inputs or entire documents can show significant power draw during the extended context/prefill phase.</p>
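

<p>For GPU-level measurement, here is a minimal sampling sketch, assuming an NVIDIA GPU and the pynvml (nvidia-ml-py) bindings; server- and rack-level methods require separate metering:</p>


<pre class="wp-block-code"><code># Sample measured GPU power draw (not TDP) while your inference workload runs.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

samples = []
for _ in range(600):                                # about 60s at 10 Hz
    mw = pynvml.nvmlDeviceGetPowerUsage(handle)     # measured draw, milliwatts
    samples.append(mw / 1000.0)
    time.sleep(0.1)

samples.sort()
avg_w = sum(samples) / len(samples)
p90_w = samples[int(0.9 * len(samples))]            # draw near your P90 load
print(f"avg draw: {avg_w:.0f} W, P90 draw: {p90_w:.0f} W")

pynvml.nvmlShutdown()
</code></pre>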
  716.  
  717.  
  718.  
  719. <h2 class="wp-block-heading">Cost per Meaningful Response</h2>
  720.  
  721.  
  722.  
  723. <p>While cost per token is useful, <em>cost per meaningful unit of value</em>—cost per summary, translation, research query, or API call—may be more important for business decisions.</p>
  724.  
  725.  
  726.  
  727. <p>Depending on use case, meaningful response costs may include quality or error-driven &#8220;reruns&#8221; and pre/postprocessing components like embeddings for retrieval-augmented generation (RAG) and guardrailing LLMs:</p>
  728.  
  729.  
  730.  
  731. <figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="658" height="40" src="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Cost-per-Meaningful-Response.png" alt="Cost per meaningful response" class="wp-image-17630" srcset="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Cost-per-Meaningful-Response.png 658w, https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Cost-per-Meaningful-Response-300x18.png 300w" sizes="auto, (max-width: 658px) 100vw, 658px" /></figure>
  732.  
  733.  
  734.  
  735. <p>where:</p>
  736.  
  737.  
  738.  
  739. <ul class="wp-block-list">
  740. <li>E<sub><em><sub>𝑡</sub></em></sub> is the average tokens generated per response, excluding input tokens. For reasoning models, reasoning tokens should be included in this figure.&nbsp;</li>
  741.  
  742.  
  743.  
  744. <li>AA is the average attempts per meaningful response.</li>
  745.  
  746.  
  747.  
  748. <li>C<sub><em><sub>𝑡</sub></em></sub> is your cost per token (from earlier).&nbsp;</li>
  749.  
  750.  
  751.  
  752. <li>P<sub><em><sub>𝑡</sub></em></sub> is the average number of pre/post processing tokens.</li>
  753.  
  754.  
  755.  
  756. <li>C<sub>𝑝</sub> is the cost per pre/post processing token, which should be much lower than C<sub>𝑡</sub>.</li>
  757. </ul>
  758.  
  759.  
  760.  
  761. <p>Let’s expand our previous example to consider an ecommerce customer service bot’s cost per meaningful response, with the following measured operational behavior and characteristics:</p>
  762.  
  763.  
  764.  
  765. <ul class="wp-block-list">
  766. <li>Average response: 100 reasoning tokens + 50 standard output tokens (150 total)</li>
  767.  
  768.  
  769.  
770. <li>Average attempts per meaningful response: 1.2</li>
  771.  
  772.  
  773.  
  774. <li>Cost per token: $0.00015</li>
  775.  
  776.  
  777.  
  778. <li>Guardrail processing: 150 tokens at $0.000002 per token</li>
  779. </ul>
  780.  
  781.  
  782. <div class="wp-block-image">
  783. <figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="943" height="26" src="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Cost-per-meaningful-response-0314-1.png" alt="Cost per meaningful response equals 0.0314" class="wp-image-17636" srcset="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Cost-per-meaningful-response-0314-1.png 943w, https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Cost-per-meaningful-response-0314-1-300x8.png 300w, https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Cost-per-meaningful-response-0314-1-768x21.png 768w" sizes="auto, (max-width: 943px) 100vw, 943px" /></figure></div>
  784.  
  785.  
786. <p>This calculation, combined with other business factors, determines sustainable pricing to optimize service profitability. A similar analysis can be performed for power efficiency by replacing the cost-per-token metric with a joules-per-token measure. In the end, each organization must determine which metrics capture bottom-line impact and how to go about optimizing them.</p>
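

<p>A minimal sketch of the formula with the example inputs above, assuming retries repeat only the generation step; how you count attempts and guardrail reruns will shift the final figure:</p>


<pre class="wp-block-code"><code># Example inputs from the list above.
E_t = 150        # avg tokens generated per response (reasoning + output)
AA = 1.2         # avg attempts per meaningful response
C_t = 0.00015    # cost per generated token ($)
P_t = 150        # guardrail (pre/postprocessing) tokens
C_p = 0.000002   # cost per pre/postprocessing token ($)

# Assumes reruns repeat only generation; rerunning guardrails on each
# attempt, or adding retry overhead, pushes the figure higher.
cost_per_meaningful_response = E_t * AA * C_t + P_t * C_p
print(f"${cost_per_meaningful_response:.4f} per meaningful response")
</code></pre>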
  787.  
  788.  
  789.  
  790. <h2 class="wp-block-heading">Beyond Token Cost and Power</h2>
  791.  
  792.  
  793.  
  794. <p>The tokens per dollar and tokens per watt metrics we&#8217;ve analyzed provide the foundational building blocks for AI economics, but production systems operate within far more complex optimization landscapes. Real deployments face scaling trade-offs where diminishing returns, opportunity costs, and utility functions intersect with practical constraints around throughput, demand patterns, and infrastructure capacity. These economic realities extend well beyond simple efficiency calculations.</p>
  795.  
  796.  
  797.  
  798. <p>The true cost structure of AI systems spans multiple interconnected layers—from individual token processing through compute architecture to data center design and deployment strategy. Each architectural choice cascades through the entire economic stack, creating optimization opportunities that pure price-performance metrics cannot reveal. Understanding these layered relationships is essential for building AI systems that remain economically viable as they scale from prototype to production.</p>
  799. ]]></content:encoded>
  800. <wfw:commentRss>https://www.oreilly.com/radar/balancing-cost-power-and-ai-performance/feed/</wfw:commentRss>
  801. <slash:comments>0</slash:comments>
  802. </item>
  803. <item>
  804. <title>Radar Trends to Watch: November 2025</title>
  805. <link>https://www.oreilly.com/radar/radar-trends-to-watch-november-2025/</link>
  806. <comments>https://www.oreilly.com/radar/radar-trends-to-watch-november-2025/#respond</comments>
  807. <pubDate>Tue, 04 Nov 2025 12:02:40 +0000</pubDate>
  808. <dc:creator><![CDATA[Mike Loukides]]></dc:creator>
  809. <category><![CDATA[Radar Trends]]></category>
  810. <category><![CDATA[Commentary]]></category>
  811.  
  812. <guid isPermaLink="false">https://www.oreilly.com/radar/?p=17640</guid>
  813.  
  814. <media:content
  815. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2023/06/radar-1400x950-1.png"
  816. medium="image"
  817. type="image/png"
  818. width="1400"
  819. height="950"
  820. />
  821.  
  822. <media:thumbnail
  823. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2023/06/radar-1400x950-1-160x160.png"
  824. width="160"
  825. height="160"
  826. />
  827. <custom:subtitle><![CDATA[Developments in AI, Security, Quantum Computing, and More]]></custom:subtitle>
  828. <description><![CDATA[AI has so thoroughly colonized every technical discipline that it’s becoming hard to organize items of interest in Radar Trends. Should a story go under AI or programming (or operations or biology or whatever the case may be)? Maybe it’s time to go back to a large language model that doesn’t require any electricity and [&#8230;]]]></description>
  829. <content:encoded><![CDATA[
  830. <p>AI has so thoroughly colonized every technical discipline that it’s becoming hard to organize items of interest in <em>Radar Trends</em>. Should a story go under AI or programming (or operations or biology or whatever the case may be)? Maybe it’s time to go back to a large language model that doesn’t require any electricity and has over 217K parameters: <a href="https://www.merriam-webster.com/" target="_blank" rel="noreferrer noopener">Merriam-Webster</a>. But no matter where these items ultimately appear, it’s good to see practical applications of AI in fields as diverse as bioengineering and UX design.</p>
  831.  
  832.  
  833.  
  834. <h2 class="wp-block-heading">AI</h2>
  835.  
  836.  
  837.  
  838. <ul class="wp-block-list">
  839. <li>Alibaba’s <a href="https://huggingface.co/inclusionAI/Ling-1T" target="_blank" rel="noreferrer noopener">Ling-1T</a> may be the best model you’ve never heard of. It’s a nonthinking mixture-of-experts model with 1T parameters, 50B active at any time. And it’s open weights (MIT license).</li>
  840.  
  841.  
  842.  
  843. <li><a href="https://marin.community/blog/2025/05/19/announcement/" target="_blank" rel="noreferrer noopener">Marin</a> is a new lab for creating fully open source models. They say that the development of models will be completely transparent from the beginning. Everything is tracked by GitHub; all experiments may be observed by anyone; there’s no cherrypicking of results.</li>
  844.  
  845.  
  846.  
  847. <li><a href="https://github.com/jasonjmcghee/WebMCP" target="_blank" rel="noreferrer noopener">WebMCP</a> is a proposal and an implementation for a protocol that allows websites to become MCP servers. As servers, they can interact directly with agents and LLMs.</li>
  848.  
  849.  
  850.  
  851. <li>Claude has <a href="https://www.anthropic.com/news/skills" target="_blank" rel="noreferrer noopener">announced</a> Agent Skills. Skills are essentially just a Markdown file describing how to perform a task, possibly accompanied by scripts and resources. They’re easy to add and only used as needed. A Skill-creator Skill makes it very easy to build Skills. Simon Willison <a href="https://simonwillison.net/2025/Oct/16/claude-skills/" target="_blank" rel="noreferrer noopener">thinks</a> that Skills may be a “bigger deal than MCP.”</li>
  852.  
  853.  
  854.  
  855. <li>Pete Warden <a href="https://petewarden.com/2025/10/16/why-does-a-local-ai-voice-agent-running-on-a-super-cheap-soc-matter/" target="_blank" rel="noreferrer noopener">describes</a> his work on the smallest of AI. Small AI serves an important set of applications without compromising privacy or requiring enormous resources.</li>
  856.  
  857.  
  858.  
  859. <li>Anthropic has <a href="https://arstechnica.com/ai/2025/10/anthropics-claude-haiku-4-5-matches-mays-frontier-model-at-fraction-of-cost/" target="_blank" rel="noreferrer noopener">released</a> <a href="https://assets.anthropic.com/m/99128ddd009bdcb/original/Claude-Haiku-4-5-System-Card.pdf" target="_blank" rel="noreferrer noopener">Claude Haiku 4.5</a>, skipping 4.0 and 4.1 in the process. Haiku is their smallest and fastest model. The new release claims performance similar to Sonnet 4, but it’s much faster and less expensive.</li>
  860.  
  861.  
  862.  
  863. <li>NVIDIA is now <a href="https://www.nvidia.com/en-us/products/workstations/dgx-spark/" target="_blank" rel="noreferrer noopener">offering</a> the DGX Spark, a desktop AI supercomputer. It offers 1 petaflop performance on models with up to 200B parameters. Simon Willison has a <a href="https://simonwillison.net/2025/Oct/14/nvidia-dgx-spark/" target="_blank" rel="noreferrer noopener">review</a> of a preview unit.</li>
  864.  
  865.  
  866.  
  867. <li>Andrej Karpathy has released <a href="https://github.com/karpathy/nanochat/discussions/1" target="_blank" rel="noreferrer noopener">nanochat</a>, a small ChatGPT-like model that’s completely open and can be trained for roughly $100. It’s intended for experimenters, and Karpathy has detailed instructions on building and training.</li>
  868.  
  869.  
  870.  
  871. <li>There’s an <a href="https://xenodium.com/introducing-agent-shell" target="_blank" rel="noreferrer noopener">agent-shell</a> for Emacs? There had to be one. Emacs abhors a vacuum.</li>
  872.  
  873.  
  874.  
  875. <li>Anthropic launched “<a href="https://www.anthropic.com/news/claude-code-plugins" target="_blank" rel="noreferrer noopener">plugins</a>,” which give developers the ability to write extensions to Claude Code. Of course, these extensions can be agents. Simon Willison <a href="https://simonwillison.net/2025/Oct/10/superpowers/" target="_blank" rel="noreferrer noopener">points</a> to Jesse Vincent’s <a href="https://github.com/obra/superpowers" target="_blank" rel="noreferrer noopener">Superpowers</a> as a glimpse of what plugins can accomplish.</li>
  876.  
  877.  
  878.  
879. <li>Google has <a href="https://blog.google/technology/google-deepmind/gemini-computer-use-model/" target="_blank" rel="noreferrer noopener">released</a> the Gemini 2.5 Computer Use model into public preview. While the thrill of teaching computers to click through browsers and other web applications faded quickly, Gemini 2.5 Computer Use appears to be generating excitement.</li>
  880.  
  881.  
  882.  
  883. <li>Thinking Machines Labs has announced <a href="https://thinkingmachines.ai/blog/announcing-tinker/" target="_blank" rel="noreferrer noopener">Tinker</a>, an API for training open weight language models. Tinker runs on Thinking Machines’ infrastructure. It’s currently in beta.</li>
  884.  
  885.  
  886.  
  887. <li>Merriam-Webster will <a href="https://bsky.app/profile/did:plc:l7mwn6iba72xsiayiac5222p/post/3m25bdagve22f" target="_blank" rel="noreferrer noopener">release</a> its newest large language model on November 18. It has no data centers and requires no electricity.</li>
  888.  
  889.  
  890.  
891. <li>We know that data products, including AI, reflect historical biases in their training data. In India, <a href="https://www.technologyreview.com/2025/10/01/1124621/openai-india-caste-bias/" target="_blank" rel="noreferrer noopener">OpenAI reflects caste biases</a>. But it’s not just OpenAI; these biases appear in all models. Although caste discrimination was outlawed in the middle of the 20th century, these biases live on in the data.</li>
  892.  
  893.  
  894.  
  895. <li>DeepSeek has <a href="https://api-docs.deepseek.com/news/news250929" target="_blank" rel="noreferrer noopener">released</a> an experimental version of its reasoning model, DeepSeek-V3.2-Exp. This model uses a technique called <a href="https://arstechnica.com/ai/2025/09/deepseek-tests-sparse-attention-to-slash-ai-processing-costs/" target="_blank" rel="noreferrer noopener">sparse attention</a> to reduce the processing requirements (and cost) of the reasoning process.</li>
  896.  
  897.  
  898.  
  899. <li>OpenAI has added an <a href="https://openai.com/index/buy-it-in-chatgpt/" target="_blank" rel="noreferrer noopener">Instant Checkout</a> feature that allows users to make purchases with Etsy and Shopify merchants, taking them directly to checkout after finding their products. It’s based on the <a href="https://developers.openai.com/commerce/" target="_blank" rel="noreferrer noopener">Agentic Commerce Protocol</a>.</li>
  900.  
  901.  
  902.  
  903. <li>OpenAI’s <a href="https://openai.com/index/gdpval/" target="_blank" rel="noreferrer noopener">GDPval</a> tests go beyond existing benchmarks by challenging LLMs with real-world tasks rather than simple problems. The tasks were selected from 44 industries and were chosen for economic value.</li>
  904. </ul>
  905.  
  906.  
  907.  
  908. <h2 class="wp-block-heading">Programming</h2>
  909.  
  910.  
  911.  
  912. <ul class="wp-block-list">
  913. <li>Steve Yegge’s <a href="https://steve-yegge.medium.com/introducing-beads-a-coding-agent-memory-system-637d7d92514a" target="_blank" rel="noreferrer noopener">Beads</a> is a memory management system for coding agents. It’s badly needed, and worth checking out.</li>
  914.  
  915.  
  916.  
  917. <li>Do you use coding agents in parallel? Simon Willison was a skeptic, but he’s gradually becoming <a href="https://simonwillison.net/2025/Oct/5/parallel-coding-agents/" target="_blank" rel="noreferrer noopener">convinced</a> it’s a good practice.</li>
  918.  
  919.  
  920.  
  921. <li>One problem with generative coding is that AI is trained on “<a href="https://thenewstack.io/why-ai-is-generating-lowest-common-denominator-react-code/" target="_blank" rel="noreferrer noopener">the worst code in the world</a>.” For web development, we’ll need better foundations to get to a post–frontend-framework world.</li>
  922.  
  923.  
  924.  
  925. <li>If you’ve wanted to program with Claude from your phone or some other device, <a href="https://arstechnica.com/ai/2025/10/claude-code-gets-a-web-version-but-its-the-new-sandboxing-that-really-matters/" target="_blank" rel="noreferrer noopener">now you can</a>. Anthropic has added web and mobile interfaces to Claude Code, along with a sandbox for running generated code safely.</li>
  926.  
  927.  
  928.  
  929. <li>You may have read “<a href="https://tomstu.art/programming-with-nothing" target="_blank" rel="noreferrer noopener">Programming with Nothing</a>,” a classic article that strips programming to the basics of lambda calculus. “<a href="https://joshmoody.org/blog/programming-with-less-than-nothing/" target="_blank" rel="noreferrer noopener">Programming with Less Than Nothing</a>” does FizzBuzz in many lines of <a href="https://en.wikipedia.org/wiki/Combinatory_logic" target="_blank" rel="noreferrer noopener">combinatory logic</a>.</li>
  930.  
  931.  
  932.  
  933. <li>What’s the difference between <a href="https://thenewstack.io/technical-debt-vs-architecture-debt-dont-confuse-them/" target="_blank" rel="noreferrer noopener">technical debt and architectural debt</a>? Don’t confuse them; they’re significantly different problems, with different solutions.</li>
  934.  
  935.  
  936.  
  937. <li>For graph fans: The IRS has released its <a href="https://github.com/IRS-Public/fact-graph" target="_blank" rel="noreferrer noopener">fact graph</a>, which, among other things, models the US Internal Revenue Code. It can be used with JavaScript and any JVM language.</li>
  938.  
  939.  
  940.  
  941. <li>What is spec-driven development? It has become one of the key buzzwords in the discussion of AI-assisted software development. Birgitta Böckeler <a href="https://martinfowler.com/articles/exploring-gen-ai/sdd-3-tools.html" target="_blank" rel="noreferrer noopener">attempts to define</a> SDD precisely, then looks at three tools for aiding SDD.</li>
  942.  
  943.  
  944.  
  945. <li><em>IEEE Spectrum</em> released its <a href="https://spectrum.ieee.org/top-programming-languages-2025" target="_blank" rel="noreferrer noopener">2025 programming languages rankings</a>. Python is still king, with Java second; JavaScript has fallen from third to fifth. But more important, <em>Spectrum</em> wonders whether AI-assisted programming will make these rankings irrelevant.</li>
  946. </ul>
  947.  
  948.  
  949.  
  950. <h2 class="wp-block-heading">Web</h2>
  951.  
  952.  
  953.  
  954. <ul class="wp-block-list">
  955. <li>Cloudflare CEO Matthew Prince is <a href="https://finance.yahoo.com/news/cloudflare-ceo-matthew-prince-pushing-193421352.html" target="_blank" rel="noreferrer noopener">pushing for regulation</a> to prevent Google from tying web crawlers for search and for training content together. You can’t block the training crawler without also blocking the search crawler, and blocking the latter has significant consequences for businesses.</li>
  956.  
  957.  
  958.  
  959. <li>OpenAI has <a href="https://chatgpt.com/atlas" target="_blank" rel="noreferrer noopener">released Atlas</a>, its Chromium-based web browser. As you’d expect, AI is integrated into everything. You can chat with the browser, interrogate your history, your settings, or your bookmarks, and (of course) chat with the pages you’re viewing.</li>
  960.  
  961.  
  962.  
  963. <li>Try again? Apple has <a href="https://arstechnica.com/gadgets/2025/10/with-considerably-less-fanfare-apple-releases-a-second-generation-vision-pro/" target="_blank" rel="noreferrer noopener">announced</a> a second-generation Vision Pro, with a similar design and at the same price point.</li>
  964.  
  965.  
  966.  
  967. <li>Have we passed <a href="https://flowingdata.com/2025/10/03/passed-peak-social-media-maybe/" target="_blank" rel="noreferrer noopener">peak social</a>? Social media usage has been declining for all age groups. The youngest group, 16–24, is the largest but has also shown the sharpest decline. Are we going to reinvent the decentralized web? Or succumb to a different set of walled gardens?</li>
  968.  
  969.  
  970.  
  971. <li>Addy Osmani’s post “<a href="https://addyosmani.com/blog/core-web-vitals/" target="_blank" rel="noreferrer noopener">The History of Core Web Vitals</a>” is a must-read for anyone working in web performance.</li>
  972.  
  973.  
  974.  
  975. <li>Features from the major web frameworks are being <a href="https://thenewstack.io/stop-ignoring-the-browser-the-biggest-frontend-shift-in-a-decade/" target="_blank" rel="noreferrer noopener">implemented by browsers</a>. Frameworks won’t disappear, but their importance will diminish. People will again be programming to the browser. In turn, this will make browser testing and standardization that much more important.</li>
  976.  
  977.  
  978.  
  979. <li>Luke Wroblewski <a href="https://www.lukew.com/ff/entry.asp?2132" target="_blank" rel="noreferrer noopener">writes</a> about using AI to solve common problems in user experience (UX). AI can help with problems like collecting data from users and onboarding users to new applications.</li>
  980. </ul>
  981.  
  982.  
  983.  
  984. <h2 class="wp-block-heading">Operations</h2>
  985.  
  986.  
  987.  
  988. <ul class="wp-block-list">
  989. <li>There’s <a href="https://arstechnica.com/gadgets/2025/10/a-single-point-of-failure-triggered-the-amazon-outage-affecting-millions/" target="_blank" rel="noreferrer noopener">a lot to be learned</a> from <a href="https://aws.amazon.com/message/101925/" target="_blank" rel="noreferrer noopener">AWS’s recent outage</a>, which stemmed from a DynamoDB DNS failure in the US-EAST-1 region. It’s important not to write this off as a war story about Amazon’s failure. Instead, think: How do you make your own distributed networks more reliable?</li>
  990.  
  991.  
  992.  
  993. <li>PyTorch <a href="https://pytorch.org/blog/introducing-pytorch-monarch/" target="_blank" rel="noreferrer noopener">Monarch</a> is a new library that helps developers manage distributed systems for training AI models. It lets developers write a script that “orchestrates all distributed resources,” allowing the developer to work with them as a single almost-local system.</li>
  994. </ul>
  995.  
  996.  
  997.  
  998. <h2 class="wp-block-heading">Security</h2>
  999.  
  1000.  
  1001.  
  1002. <ul class="wp-block-list">
  1003. <li>The solution to the fourth part of <a href="https://en.wikipedia.org/wiki/Kryptos" target="_blank" rel="noreferrer noopener"><em>Kryptos</em></a>, the cryptosculpture at the CIA’s headquarters, has been <a href="https://www.scientificamerican.com/article/a-solution-to-the-cias-kryptos-code-is-found-after-35-years/" target="_blank" rel="noreferrer noopener">discovered</a>! The discovery came through an opsec error that led researchers to the clear text stored at the Smithsonian. This is an important lesson: Attacks against cryptosystems rarely touch the cryptography. They attack the protocols, people, and systems surrounding codes.</li>
  1004.  
  1005.  
  1006.  
  1007. <li>Public cryptocurrency blockchains are being used by international threat actors as “bulletproof” hosts for <a href="https://arstechnica.com/security/2025/10/hackers-bullet-proof-hosts-deliver-malware-from-blockchains/" target="_blank" rel="noreferrer noopener">storing and distributing malware</a>.</li>
  1008.  
  1009.  
  1010.  
  1011. <li>Apple is now <a href="https://security.apple.com/blog/apple-security-bounty-evolved/" target="_blank" rel="noreferrer noopener">giving a $2M bounty</a> for zero-day exploits that allow zero-click remote code execution on iOS. These vulnerabilities have been exploited by commercial malware vendors.</li>
  1012.  
  1013.  
  1014.  
1015. <li>Signal has <a href="https://arstechnica.com/security/2025/10/why-signals-post-quantum-makeover-is-an-amazing-engineering-achievement/" target="_blank" rel="noreferrer noopener">incorporated postquantum encryption</a> into its Signal protocol. This is a major technological achievement. They’re one of the few organizations that are ready for the quantum world.</li>
  1016.  
  1017.  
  1018.  
  1019. <li>Salesforce is <a href="https://arstechnica.com/security/2025/10/salesforce-says-it-wont-pay-extortion-demand-in-1-billion-records-breach/" target="_blank" rel="noreferrer noopener">refusing to pay</a> extortion after a major data loss of over a billion records. Data from a number of major accounts was stolen by a group calling itself Scattered LAPSUS$ Hunters. Attackers simply asked the victim’s staff to install an attacker-controlled app.</li>
  1020.  
  1021.  
  1022.  
  1023. <li><a href="https://danielmiessler.com/blog/will-ai-help-moreattackers-defenders" target="_blank" rel="noreferrer noopener">Context is the key to AI security</a>. We’re not surprised; right now, context is the key to just about everything in AI. Attackers have the advantage now, but in 3–5 years that advantage will pass to defenders who use AI effectively.</li>
  1024.  
  1025.  
  1026.  
1027. <li>Google has <a href="https://workspaceupdates.googleblog.com/2025/10/send-gmail-end-to-end-encrypted-emails-in-gmail.html" target="_blank" rel="noreferrer noopener">announced</a> that Gmail users can now send end-to-end encrypted (E2EE) email to anyone, regardless of whether the recipient uses Gmail. Recipients who don’t use Gmail will receive a notification and the ability to read the message on a one-time guest account.</li>
  1028.  
  1029.  
  1030.  
1031. <li>The best way to attack your company isn’t through the applications; it’s <a href="https://www.bleepingcomputer.com/news/security/your-service-desk-is-the-new-attack-vector-heres-how-to-defend-it/" target="_blank" rel="noreferrer noopener">through the service help desk</a>. Social engineering remains extremely effective—more effective than attacks against software. Training helps; a well-designed workflow and playbook are crucial.</li>
  1032.  
  1033.  
  1034.  
  1035. <li><a href="https://www.bleepingcomputer.com/news/security/google-drive-for-desktop-gets-ai-powered-ransomware-detection/" target="_blank" rel="noreferrer noopener">Ransomware detection</a> has now been built into the desktop version of Google Drive. When it detects activities that indicate ransomware, Drive suspends file syncing and alerts users. It’s enabled by default, but it is possible to opt out.</li>
  1036.  
  1037.  
  1038.  
  1039. <li><a href="https://www.bleepingcomputer.com/news/artificial-intelligence/openai-is-routing-gpt-4o-to-safety-models-when-it-detects-harmful-activities/" target="_blank" rel="noreferrer noopener">OpenAI is routing requests with safety issues to an unknown model</a>. This is presumably a specialized version of GPT-5 that has been trained specially to deal with sensitive issues.</li>
  1040. </ul>
  1041.  
  1042.  
  1043.  
  1044. <h2 class="wp-block-heading">Robotics</h2>
  1045.  
  1046.  
  1047.  
  1048. <ul class="wp-block-list">
  1049. <li>Would you <a href="https://techxplore.com/news/2025-10-banana-robots-pitch-customers-chicago.html" target="_blank" rel="noreferrer noopener">buy a banana from a robot</a>? A small chain of stores in Chicago is finding out.</li>
  1050.  
  1051.  
  1052.  
1053. <li>Rodney Brooks, cofounder of iRobot, <a href="https://arstechnica.com/ai/2025/10/why-irobots-founder-wont-go-within-10-feet-of-todays-walking-robots/" target="_blank" rel="noreferrer noopener">warns</a> that humans should stay at least 10 feet (3 meters) away from humanoid walking robots: there is a lot of energy in their limbs as they move to maintain balance. Unsurprisingly, this danger stems from the vision-only approach that Tesla and other vendors have adopted. Humans learn and act with all five senses.</li>
  1054. </ul>
  1055.  
  1056.  
  1057.  
  1058. <h2 class="wp-block-heading">Quantum Computing</h2>
  1059.  
  1060.  
  1061.  
  1062. <ul class="wp-block-list">
  1063. <li>Google <a href="https://blog.google/technology/research/quantum-echoes-willow-verifiable-quantum-advantage/" target="_blank" rel="noreferrer noopener">claims</a> to have demonstrated a <a href="https://www.nature.com/articles/s41586-025-09526-6" target="_blank" rel="noreferrer noopener">verifiable quantum advantage</a> on its quantum processor: The output of the computation can be tested for correctness. Verifiable quantum advantage doesn’t just mean that it’s fast; it means that error correction is working.</li>
  1064.  
  1065.  
  1066.  
1067. <li>Researchers at the Institute of Science Tokyo have <a href="https://arxiv.org/abs/2412.21171" target="_blank" rel="noreferrer noopener">developed</a> a <a href="https://phys.org/news/2025-09-quantum-error-codes-enable-efficient.html" target="_blank" rel="noreferrer noopener">quantum error correction method that’s efficient and (in theory) scales</a> to hundreds of thousands of qubits. Quantum computers of that size haven’t been built yet but will be needed to perform real work.</li>
  1068. </ul>
  1069.  
  1070.  
  1071.  
  1072. <h2 class="wp-block-heading">Biology</h2>
  1073.  
  1074.  
  1075.  
  1076. <ul class="wp-block-list">
  1077. <li>Scientists have discovered a new narrow-spectrum antibiotic that could be used to treat inflammatory bowel disease. AI was <a href="https://healthsci.mcmaster.ca/new-antibiotic-targets-ibd-and-ai-predicted-how-it-would-work-before-scientists-could-prove-it/" target="_blank" rel="noreferrer noopener">able to predict how the antibiotic would work</a>, apparently a first.</li>
  1078.  
  1079.  
  1080.  
  1081. <li>A red-teaming security group at Microsoft has <a href="https://www.science.org/doi/10.1126/science.adu8578" target="_blank" rel="noreferrer noopener">announced</a> <a href="https://www.technologyreview.com/2025/10/02/1124767/microsoft-says-ai-can-create-zero-day-threats-in-biology/" target="_blank" rel="noreferrer noopener">that they have found</a> a zero-day that allows malicious actors to design harmful proteins with AI.</li>
  1082.  
  1083.  
  1084.  
1085. <li>AI has successfully <a href="https://www.nature.com/articles/d41586-025-03055-y" target="_blank" rel="noreferrer noopener">designed the DNA for a bacteriophage</a> (essentially a very simple virus) capable of <a href="https://www.biorxiv.org/content/10.1101/2025.09.12.675911v1" target="_blank" rel="noreferrer noopener">infecting and killing E. coli</a>, a common bacterium. This is the first time AI has been used to design an entire genome.</li>
  1086. </ul>
  1087. ]]></content:encoded>
  1088. <wfw:commentRss>https://www.oreilly.com/radar/radar-trends-to-watch-november-2025/feed/</wfw:commentRss>
  1089. <slash:comments>0</slash:comments>
  1090. </item>
  1091. <item>
  1092. <title>On the AWS Outage</title>
  1093. <link>https://www.oreilly.com/radar/on-the-aws-outage/</link>
  1094. <comments>https://www.oreilly.com/radar/on-the-aws-outage/#respond</comments>
  1095. <pubDate>Mon, 03 Nov 2025 10:47:35 +0000</pubDate>
  1096. <dc:creator><![CDATA[Mike Loukides]]></dc:creator>
  1097. <category><![CDATA[Web]]></category>
  1098. <category><![CDATA[Commentary]]></category>
  1099.  
  1100. <guid isPermaLink="false">https://www.oreilly.com/radar/?p=17622</guid>
  1101.  
  1102. <media:content
  1103. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Abstract-lights-1.jpg"
  1104. medium="image"
  1105. type="image/jpeg"
  1106. width="2304"
  1107. height="1792"
  1108. />
  1109.  
  1110. <media:thumbnail
  1111. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Abstract-lights-1-160x160.jpg"
  1112. width="160"
  1113. height="160"
  1114. />
  1115. <description><![CDATA[Everybody notices when something big fails—like AWS’s US-EAST-1 region. And fail it did. All sorts of services and sites became inaccessible, and we all knew it was Amazon’s fault. A week later, when I run into a site that’s down, I still say, “Must be some hangover from the AWS outage. Some cache that didn’t [&#8230;]]]></description>
  1116. <content:encoded><![CDATA[
  1117. <p>Everybody notices when something big fails—like AWS’s US-EAST-1 region. And fail it did. All sorts of services and sites became inaccessible, and we all knew it was Amazon’s fault. A week later, when I run into a site that’s down, I still say, “Must be some hangover from the AWS outage. Some cache that didn’t get refreshed.” Amazon gets blamed—maybe even rightly—even when it’s not their fault.</p>
  1118.  
  1119.  
  1120.  
  1121. <p>I’m not writing about fault, though, and I’m also not writing a technical analysis of what happened. There are good places for that online, including <a href="https://aws.amazon.com/premiumsupport/technology/pes/" target="_blank" rel="noreferrer noopener">AWS’s own summary</a>. What I am writing about is a reaction to the outage that I’ve seen all too often: “This proves we can’t trust AWS. We need to build our own infrastructure.”</p>
  1122.  
  1123.  
  1124.  
  1125. <p>Building your own infrastructure is fine. But I’m also reminded of the wisest comment I heard after the <a href="https://aws.amazon.com/message/67457/" target="_blank" rel="noreferrer noopener">2012 US-EAST outage</a>. I asked <a href="https://learning.oreilly.com/library/view/r-cookbook-2nd/9781492040675/" target="_blank" rel="noreferrer noopener">JD Long</a> about his reaction to the outage. He said, “I’m really glad it wasn’t my guys trying to fix the problem.”<sup>1</sup> JD wasn’t disparaging his team; he was saying that Amazon has a lot of expertise in running, maintaining, and troubleshooting really big systems that can fail suddenly in unpredictable ways—when just the right conditions happen to tickle a bug that had been latent in the system for years. That expertise is hard to find and expensive when you find it. And no matter how expert “your guys” are, all complex systems fail. After last month’s AWS failure, Microsoft’s Azure obligingly <a href="https://www.zdnet.com/article/massive-azure-outage-is-over-but-problems-linger-heres-what-happened/" target="_blank" rel="noreferrer noopener">failed</a> about 10 days later.</p>
  1126.  
  1127.  
  1128.  
  1129. <p>I’m not really an Amazon fan or, more specifically, an AWS fan. But outages like this should force us to remember what they do right. AWS outages also warn us that we need to learn how to “craft ways of undoing this concentration and creating real choice,” as Signal CEO Meredith Whittaker <a href="https://bsky.app/profile/did:plc:so5r7asbd26pmnnoerksklor/post/3m46a2fmfzk23" target="_blank" rel="noreferrer noopener">points out</a>. But Meredith understands how difficult it will be to build this infrastructure and that, for the present, there’s no viable alternative to AWS or one of the other hyperscalers.</p>
  1130.  
  1131.  
  1132.  
1133. <p>Operating and troubleshooting large systems is difficult and requires very specialized skills. If you decide to build your own infrastructure, you will need those skills. And you may end up wishing that it wasn’t your guys trying to fix the problem.</p>
  1134.  
  1135.  
  1136.  
  1137. <hr class="wp-block-separator has-alpha-channel-opacity"/>
  1138.  
  1139.  
  1140.  
  1141. <h3 class="wp-block-heading">Footnote</h3>
  1142.  
  1143.  
  1144.  
  1145. <ol class="wp-block-list">
  1146. <li>In 2012, I happened to be flying out of DC just as the storm that took US-EAST down was rolling in. My flight made it out, but it was dramatic.</li>
  1147. </ol>
  1148. ]]></content:encoded>
  1149. <wfw:commentRss>https://www.oreilly.com/radar/on-the-aws-outage/feed/</wfw:commentRss>
  1150. <slash:comments>0</slash:comments>
  1151. </item>
  1152. <item>
  1153. <title>Generative AI in the Real World: Chris Butler on GenAI in Product Management</title>
  1154. <link>https://www.oreilly.com/radar/podcast/generative-ai-in-the-real-world-product-management-in-the-age-of-ai-with-chris-butler/</link>
  1155. <comments>https://www.oreilly.com/radar/podcast/generative-ai-in-the-real-world-product-management-in-the-age-of-ai-with-chris-butler/#respond</comments>
  1156. <pubDate>Thu, 30 Oct 2025 11:29:41 +0000</pubDate>
  1157. <dc:creator><![CDATA[Ben Lorica and Chris Butler]]></dc:creator>
  1158. <category><![CDATA[Generative AI in the Real World]]></category>
  1159. <category><![CDATA[Podcast]]></category>
  1160.  
  1161. <guid isPermaLink="false">https://www.oreilly.com/radar/?post_type=podcast&#038;p=17620</guid>
  1162.  
  1163. <enclosure url="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3" length="0" type="audio/mpeg" />
  1164. <media:content
  1165. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2024/01/Podcast_Cover_GenAI_in_the_Real_World-scaled.png"
  1166. medium="image"
  1167. type="image/png"
  1168. width="2560"
  1169. height="2560"
  1170. />
  1171.  
  1172. <media:thumbnail
  1173. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2024/01/Podcast_Cover_GenAI_in_the_Real_World-160x160.png"
  1174. width="160"
  1175. height="160"
  1176. />
  1177. <description><![CDATA[In this episode, Ben Lorica and Chris Butler, director of product operations for GitHub&#8217;s Synapse team, chat about the experimentation Chris is doing to incorporate generative AI into the product development process—particularly with the goal of reducing toil for cross-functional teams. It isn’t just automating busywork (although there’s some of that). He and his team [&#8230;]]]></description>
  1178. <content:encoded><![CDATA[
  1179. <p>In this episode, Ben Lorica and Chris Butler, director of product operations for GitHub&#8217;s Synapse team, chat about the experimentation Chris is doing to incorporate generative AI into the product development process—particularly with the goal of reducing toil for cross-functional teams. It isn’t just automating busywork (although there’s some of that). He and his team have created agents that expose the right information at the right time, use feedback in meetings to develop “straw man” prototypes for the team to react to, and even offer critiques from specific perspectives (a CPO agent?). Very interesting stuff.</p>
  1180.  
  1181.  
  1182.  
  1183. <p>About the <em>Generative AI in the Real World</em> podcast: In 2023, ChatGPT put AI on everyone’s agenda. In 2025, the challenge will be turning those agendas into reality. In <em>Generative AI in the Real World</em>, Ben Lorica interviews leaders who are building with AI. Learn from their experience to help put AI to work in your enterprise.</p>
  1184.  
  1185.  
  1186.  
  1187. <p>Check out <a href="https://learning.oreilly.com/playlists/42123a72-1108-40f1-91c0-adbfb9f4983b/?_gl=1*o8rnpe*_ga*MTYyODYzMzQwMi4xNzU4NTY5ODYz*_ga_092EL089CH*czE3NjEzMzQ5Nzgkbzg4JGcwJHQxNzYxMzM1MDg3JGo2MCRsMCRoMA.." target="_blank" rel="noreferrer noopener">other episodes</a> of this podcast on the O’Reilly learning platform.</p>
  1188.  
  1189.  
  1190.  
  1191. <h2 class="wp-block-heading">Transcript</h2>
  1192.  
  1193.  
  1194.  
  1195. <p><em>This transcript was created with the help of AI and has been lightly edited for clarity.</em></p>
  1196.  
  1197.  
  1198.  
  1199. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=0" target="_blank" rel="noreferrer noopener">00.00</a>: <strong>Today we have Chris Butler of GitHub, where he leads a team called the Synapse. Welcome to the podcast, Chris. </strong></p>
  1200.  
  1201.  
  1202.  
  1203. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=15" target="_blank" rel="noreferrer noopener">00.15</a>: Thank you. Yeah. Synapse is actually part of our product team and what we call EPD operations, which is engineering, product, and design. And our team is mostly engineers. I&#8217;m the product lead for it, but we help solve and reduce toil for these cross-functional teams inside of GitHub, mostly building internal tooling, with the focus on process automation and AI. But we also have a speculative part of our practice as well: trying to imagine the future of cross-functional teams working together and how they might do that with agents, for example.</p>
  1204.  
  1205.  
  1206.  
  1207. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=45" target="_blank" rel="noreferrer noopener">00.45</a>: <strong>Actually, you are the first person I&#8217;ve come across who&#8217;s used the word “toil.” Usually “tedium” is what people use, in terms of describing the parts of their job that they would rather automate. So you&#8217;re actually a big proponent of talking about agents that go beyond coding agents.</strong></p>
  1208.  
  1209.  
  1210.  
  1211. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=63" target="_blank" rel="noreferrer noopener">01.03</a>: Yeah. That&#8217;s right. </p>
  1212.  
  1213.  
  1214.  
  1215. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=65" target="_blank" rel="noreferrer noopener">01.05</a>: <strong>And specifically in your context for product people. </strong></p>
  1216.  
  1217.  
  1218.  
  1219. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=69" target="_blank" rel="noreferrer noopener">01.09</a>: And actually, for just the way that, say, product people work with their cross-functional teams. But I would also include other types of functions, legal privacy and customer support docs, any of these people that are working to actually help build a product; I think there needs to be a transformation of the way we think about these tools.</p>
  1220.  
  1221.  
  1222.  
1223. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=89" target="_blank" rel="noreferrer noopener">01.29</a>: GitHub is a very engineering-led organization as well as a very engineering-focused organization. But my role is to really think about “How do we do a better job between all these people that I would call nontechnical—they&#8217;re sometimes technical, of course, but they&#8217;re not necessarily there to write code. . . How do we actually work together to build great products?” And so that&#8217;s how I think about my work. </p>
  1224.  
  1225.  
  1226.  
  1227. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=108" target="_blank" rel="noreferrer noopener">01.48</a>: <strong>For people who aren&#8217;t familiar with product management and product teams, what’s toil in the context of product teams? </strong></p>
  1228.  
  1229.  
  1230.  
1231. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=120" target="_blank" rel="noreferrer noopener">02.00</a>: So toil is actually something that I stole from a Google SRE from the standpoint of any type of thing that someone has to do that is manual, tactical, repetitive. . . It usually doesn&#8217;t really add to the value of the product in any way. It&#8217;s something that, as the team gets bigger or the product moves down the SDLC or lifecycle, scales linearly with the fact that you&#8217;re building bigger and bigger things. And so it&#8217;s usually something that we want to try to cut out, because not only is it potentially a waste of time, but there&#8217;s also a perception within the team that it can cause burnout.</p>
  1232.  
  1233.  
  1234.  
  1235. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=155" target="_blank" rel="noreferrer noopener">02.35</a>: If I have to constantly be doing toilsome parts of my work, I feel I&#8217;m doing things that don&#8217;t really matter rather than focusing on the things that really matter. And what I would argue is especially for product managers and cross-functional teams, a lot of the time that is processes that they have to use, usually to share information within larger organizations.</p>
  1236.  
  1237.  
  1238.  
  1239. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=174" target="_blank" rel="noreferrer noopener">02.54</a>: A good example of that is status reporting. Status reporting is one of those things where people will spend anywhere from 30 minutes to hours per week. And sometimes it&#8217;s in certain parts of the team—technical product managers, product managers, engineering managers, program managers are all dealing with this aspect that they have to in some way summarize the work that the team is doing and then shar[e] that not only with their leadership. . . They want to build trust with their leadership, that they&#8217;re making the right decisions, that they&#8217;re making the right calls. They&#8217;re able to escalate when they need help. But also then to convey information to other teams that are dependent on them or they&#8217;re dependent on. Again, this is [in] very large organizations, [where] there&#8217;s a huge cost to communication flows.</p>
  1240.  
  1241.  
  1242.  
  1243. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=215" target="_blank" rel="noreferrer noopener">03.35</a>: And so that&#8217;s why I use status reporting as a good example of that. Now with the use of the things like LLMs, especially if we think about our LLMs as a compression engine or a translation engine, we can then start to use these tools inside of these processes around status reporting to make it less toilsome. But there&#8217;s still aspects of it that we want to keep that are really about humans understanding, making decisions, things like that. </p>
  1244.  
  1245.  
  1246.  
1247. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=239" target="_blank" rel="noreferrer noopener">03.59</a>: <strong>And this is key. So one of the concerns that people have is about a hollowing out in the following context: If you eliminate toil in general, the problem there is that your most junior or entry-level employees actually learn about the culture of the organization by doing toil. There&#8217;s some level of toil that becomes part of the onboarding and acculturation of young employees. But on the other hand, this is a challenge for organizations to just change how they onboard new employees and what kinds of tasks they give them and how they learn more about the culture of the organization.</strong></p>
  1248.  
  1249.  
  1250.  
  1251. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=291" target="_blank" rel="noreferrer noopener">04.51</a>: I would differentiate between the idea of toil and paying your dues within the organization. In investment banking, there&#8217;s a whole concern about that: “They just need to sit in the office for 12 hours a day to really get the culture here.” And I would differentiate that from. . .</p>
  1252.  
  1253.  
  1254.  
1255. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=304" target="_blank" rel="noreferrer noopener">05.04</a>: <strong>Or “Get these slides into the pitch deck and make sure all the fonts are the right fonts.”</strong></p>
  1256.  
  1257.  
  1258.  
  1259. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=311" target="_blank" rel="noreferrer noopener">05.11</a>: That&#8217;s right. Yeah, I worked at Facebook Reality Labs, and there were many times where we would do a Zuck review, and getting those slides perfect was a huge task for the team. What I would say is I want to differentiate this from the gaining of expertise. So if we think about <a href="https://en.wikipedia.org/wiki/Gary_A._Klein">Gary Klein</a>, naturalistic decision making, real expertise is actually about being able to see an environment. And that could be a data environment [or] information environment as well. And then as you gain expertise, you&#8217;re able to discern between important signals and noise. And so what I&#8217;m <em>not</em> advocating for is to remove the ability to gain that expertise. But I am saying that toilsome work doesn&#8217;t necessarily contribute to expertise. </p>
  1260.  
  1261.  
  1262.  
  1263. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=349" target="_blank" rel="noreferrer noopener">05.49</a>: In the case of status reporting as an example—status reporting is very valuable for a person to be able to understand what is going on with the team, and then, “What actions do I need to take?” And we don&#8217;t want to remove that. But the idea that a TPM or product manager or EM has to dig through all of the different issues that are inside of a particular repo to look for specific updates and then do their own synthesis of a draft, I think there is a difference there. And so what I would say is that the idea of me reading this information in a way that is very convenient for me to consume and then to be able to shape the signal that I then put out into the organization as a status report, that is still very much a human decision.</p>
  1264.  
  1265.  
  1266.  
1267. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=390" target="_blank" rel="noreferrer noopener">06.30</a>: And I think that&#8217;s where we can start to use tools. <a href="https://www.oneusefulthing.org/" target="_blank" rel="noreferrer noopener">Ethan Mollick</a> has talked about this a lot in the way that he&#8217;s trying to approach including LLMs in, say, the classroom. There are two patterns that I think could come out of this. One is that when I have some type of early draft of something, I should be able to get a lot of early feedback that is very low reputational risk. And what I mean by that is that a bot can tell me “Hey, this is not written in a way with the active voice” or “[This] is not really talking about the impact of this on the organization.” And so I can get that super early feedback in a way that is not going to hurt me.</p>
  1268.  
  1269.  
  1270.  
  1271. <p>If I publish a really bad status report, people may think less of me inside the organization. But using a bot or an agent or just a prompt to even just say, “Hey, these are the ways you can improve this”—that type of early feedback is really, really valuable. That I have a draft and I get critique from a bunch of different viewpoints I think is super valuable and will build expertise.</p>
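<p>To make that first pattern concrete, here&#8217;s a minimal sketch of such a low-stakes critique loop in Python—an illustration only, not GitHub&#8217;s internal tooling. It assumes the OpenAI Python SDK with an API key in the environment; the model name, rubric text, and function name are placeholder assumptions.</p>

<pre class="wp-block-code"><code># Sketch: private, low-reputational-risk critique of a status report draft.
# Assumptions: OpenAI Python SDK installed and OPENAI_API_KEY set; the model
# name and rubric wording are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

CRITIQUE_RUBRIC = """You review draft status reports before anyone else sees them.
For each point below, quote the offending passage and suggest a rewrite:
1. Passive voice where active voice would be clearer.
2. Activity described without its impact on the organization.
3. Risks or blockers mentioned without an explicit ask or escalation.
Do not soften the critique just to be agreeable."""

def critique_draft(draft: str) -> str:
    """Return a private critique of a status-report draft."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": CRITIQUE_RUBRIC},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

print(critique_draft("The migration was worked on. Some tests were run."))
</code></pre>

<p>The point of the sketch is the privacy of the loop: the critique lands before the report does, so the feedback carries no reputational cost.</p>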
  1272.  
  1273.  
  1274.  
  1275. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=444" target="_blank" rel="noreferrer noopener">07.24</a>: And then there&#8217;s the other side, which is, when we talk about consuming lots of information and then synthesizing or translating it into a draft, I can then critique “Is this actually valuable to the way that I think that this leader thinks? Or what I&#8217;m trying to convey as an impact?” And so then I am critiquing the straw man that is output by these prompts and agents.</p>
  1276.  
  1277.  
  1278.  
  1279. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=466" target="_blank" rel="noreferrer noopener">07.46</a>: Those two different patterns together actually create a really great loop for me to be able to learn not only from agents but also from the standpoint of seeing how. . . The part that ends up being really exciting is when once you start to connect the way communication happens inside the organization, I can then see what my leaders passed on to the next leader or what this person interpreted this as. And I can use that as a feedback loop to then improve, over time, my expertise in, say, writing a status report that is shaped for the leader. There&#8217;s also a whole thing that when we talk about status reporting in particular, there is a difference in expertise that people are getting that I&#8217;m not always 100%. . .</p>
  1280.  
  1281.  
  1282.  
1283. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=501" target="_blank" rel="noreferrer noopener">08.21</a>: It&#8217;s valuable for me to understand how my leader thinks and makes decisions. I think that is very valuable. But the idea that I will spend hours and hours shaping and formulating a status report from my point of view for someone else can be aided by these types of systems. And so status should not be at the speaker&#8217;s mouth; it should be at the listener&#8217;s ear.</p>
  1284.  
  1285.  
  1286.  
  1287. <p>For these leaders, they want to be able to understand “Are the teams making the right decisions? Do I trust them? And then where should I preemptively intervene because of my experience or maybe my understanding of the context in the broader organization?” And so that&#8217;s what I would say: These tools are very valuable in helping build that expertise.</p>
  1288.  
  1289.  
  1290.  
  1291. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=540" target="_blank" rel="noreferrer noopener">09.00</a>: It&#8217;s just that we have to rethink “What is expertise?” And I just don&#8217;t buy it that paying your dues is the way you gain expertise. You do sometimes. Absolutely. But a lot of it is also just busy work and toil. </p>
  1292.  
  1293.  
  1294.  
  1295. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=551" target="_blank" rel="noreferrer noopener">09.11</a>: <strong>My thing is these are productivity tools. And so you make even your junior employees productive—you just change the way you use your more-junior employees. </strong></p>
  1296.  
  1297.  
  1298.  
1299. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=564" target="_blank" rel="noreferrer noopener">09.24</a>: Maybe just one thing to add to this is that there is something really interesting inside of the education world of using LLMs: trying to understand where someone is at. And so the type of feedback for someone that is very early in their career, or doing something for the first time, is potentially very different—in the way that you&#8217;re teaching them or giving them feedback—from someone that is much further along in expertise, who wants to be able to just get down to “What are some things I&#8217;m missing here? Where am I biased?” Those are things where I think we also need to do a better job for those early employees, the people that are just starting to get expertise—“How do we train them using these tools as well as other ways?”</p>
  1300.  
  1301.  
  1302.  
1303. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=601" target="_blank" rel="noreferrer noopener">10.01</a>: And I&#8217;ve done that as well. I do a lot of learning and development help, internal to companies, and I did that as part of the PM faculty for learning and development at Google. And so, thinking a lot about how PMs gain expertise, I think we&#8217;re doing a real disservice by making product manager, as a junior position, so hard to get.</p>
  1304.  
  1305.  
  1306.  
  1307. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=618" target="_blank" rel="noreferrer noopener">10.18</a>: I think it&#8217;s really bad because, right out of college, I started doing program management, and it taught me so much about this. But at Microsoft, when I joined, we would say that the program manager wasn&#8217;t really worth very much for the first two years, right? Because they&#8217;re gaining expertise in this.</p>
  1308.  
  1309.  
  1310.  
1311. <p>And so I think LLMs can help people gain expertise faster and also help them avoid making errors that other people might make. But I think there&#8217;s a lot to do with just learning and development in general that we need to pair with LLMs and human systems.</p>
  1312.  
  1313.  
  1314.  
  1315. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=652" target="_blank" rel="noreferrer noopener">10.52</a>: <strong>In terms of agents, I guess agents for product management, first of all, do they exist? And if they do, I always like to look at what level of autonomy they really have. Most agents really are still partially autonomous, right? There’s still a human in the loop. And so the question is “How much is the human in the loop?” It&#8217;s kind of like a self-driving car. There&#8217;s driver assists, and then there&#8217;s all the way to self-driving. A lot of the agents right now are “driver assist.” </strong></p>
  1316.  
  1317.  
  1318.  
1319. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=688" target="_blank" rel="noreferrer noopener">11.28</a><strong>: </strong>I think you’re right. That&#8217;s why I don&#8217;t always use the term “agent,” because it&#8217;s not an autonomous system that is storing memory, using tools, constantly operating.</p>
  1320.  
  1321.  
  1322.  
  1323. <p>I would argue though that there is no such thing as “human out of the loop.” We&#8217;re probably just drawing the system diagram wrong if we&#8217;re saying that there&#8217;s no human that&#8217;s involved in some way. That&#8217;s the first thing.&nbsp;</p>
  1324.  
  1325.  
  1326.  
1327. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=713" target="_blank" rel="noreferrer noopener">11.53</a>: The second thing I&#8217;d say is that I think you&#8217;re right. A lot of the time right now, the systems we end up creating inside of GitHub come in when the human needs the help. We have something that&#8217;s called Copilot Spaces, which is really a custom GPT. It&#8217;s really just a bundling of context that I can then go to when I need help with a particular type of thing. We built very highly specific types of Copilot Spaces, like “I need to write a blog announcement about something. And so what&#8217;s the GitHub writing style? How should I be wording this, avoiding jargon?” Internal things like that. So it can be highly specific. </p>
  1328.  
  1329.  
  1330.  
1331. <p>We also have more general tools that are kind of like “How do I form and maintain initiatives throughout the entire software development lifecycle? When do I need certain types of feedback? When do I need to generate the 12 to 14 different documents that compliance and downstream teams need?” And so those tend to operate in the background to autodraft these things based on the context that&#8217;s available. I&#8217;d say that&#8217;s semiagentic, to a certain extent.&nbsp;</p>
  1332.  
  1333.  
  1334.  
  1335. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=772" target="_blank" rel="noreferrer noopener">12.52</a>: But I think actually there&#8217;s really big opportunities when it comes to. . . One of the cases that we&#8217;re working on right now is actually linking information in the GitHub graph that is not commonly linked. And so a key example of that might be kicking off all of the process that goes along with doing a release. </p>
  1336.  
  1337.  
  1338.  
  1339. <p>When I first get started, I actually want to know in our customer feedback repo, in all the different places where we store customer feedback, “Where are there times that customers actually asked about this or complained about it or had some information about this?” And so when I get started, being able to automatically link something like a release tracking issue with all of this customer feedback becomes really valuable. But it&#8217;s very hard for me as an individual to do that. And what we really want—and what we&#8217;re building—[are] things that are more and more autonomous about constantly searching for feedback or information that we can then connect to this release tracking issue.</p>
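<p>As a rough illustration of that kind of linking—a sketch under assumptions, not GitHub&#8217;s actual implementation—here&#8217;s what one pass might look like with the PyGithub library. The org and repo names, the token variable, and the keyword query are all hypothetical.</p>

<pre class="wp-block-code"><code># Sketch: surface customer feedback related to a release tracking issue.
# Assumptions: PyGithub installed and GITHUB_TOKEN set; "example-org" and
# its repos are hypothetical placeholders.
import os

from github import Github

gh = Github(os.environ["GITHUB_TOKEN"])

def link_feedback(tracking_repo: str, issue_number: int, keywords: str) -> None:
    """Comment on a release tracking issue with possibly related feedback."""
    # Search issue titles and bodies in a (hypothetical) feedback repo.
    results = gh.search_issues(
        f"{keywords} repo:example-org/customer-feedback in:title,body"
    )
    links = [f"- {issue.title}: {issue.html_url}" for issue in results[:10]]
    if links:
        tracking_issue = gh.get_repo(tracking_repo).get_issue(issue_number)
        tracking_issue.create_comment(
            "Possibly related customer feedback:\n" + "\n".join(links)
        )

link_feedback("example-org/releases", 42, "dark mode")
</code></pre>

<p>A more autonomous version would run a pass like this on a schedule or whenever a tracking issue is created, which is the direction Chris describes next.</p>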
  1340.  
  1341.  
  1342.  
  1343. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=824" target="_blank" rel="noreferrer noopener">13.44</a>: So that&#8217;s why I say we&#8217;re starting to get into the autonomous realm when it comes to this idea of something going around looking for linkages that don&#8217;t exist today. And so that&#8217;s one of those things, because again, we&#8217;re talking about information flow. And a lot of the time, especially in organizations the size of GitHub, there&#8217;s lots of siloing that takes place.</p>
  1344.  
  1345.  
  1346.  
  1347. <p>We have lots of repos. We have lots of information. And so it&#8217;s really hard for a single person to ever keep all of that in their head and to know where to go, and so [we’re] bringing all of that into the tools that they end up using.&nbsp;</p>
  1348.  
  1349.  
  1350.  
1351. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=854" target="_blank" rel="noreferrer noopener">14.14</a>: So for example, we&#8217;ve also created internal things—these are more assist-type use cases—but the idea of a Gemini Gem inside of a Google doc or an M365 agent inside of Word that is then also connected to the GitHub graph in some way. I think the question is “When do we expose this information? Is it always happening in the background, or only when I&#8217;m drafting the next version of this initiative?” That ends up becoming really, really important.</p>
  1352.  
  1353.  
  1354.  
  1355. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=881" target="_blank" rel="noreferrer noopener">14.41</a>: Some of the work we&#8217;ve been experimenting with is actually “How do we start to include agents inside of the synchronous meetings that we actually do?” You probably don&#8217;t want an agent to suddenly start speaking, especially because there&#8217;s lots of different agents that you may want to have in a meeting.</p>
  1356.  
  1357.  
  1358.  
  1359. <p>We don&#8217;t have a designer on our team, so I actually end up using an agent that is prompted to be like a designer and think like a designer inside of these meetings. And so we probably don’t want them to speak up dynamically inside the meeting, but we do want them to add information if it&#8217;s helpful.&nbsp;</p>
  1360.  
  1361.  
  1362.  
  1363. <p>We want to autoprototype things as a straw man for us to be able to react to. We want to start to use our planning agents and stuff like that to help us plan out “What is the work that might need to take place?” It&#8217;s a lot of experimentation about “How do we actually pull things into the places that humans are doing the work?”—which is usually synchronous meetings, some types of asynchronous communication like Teams or Slack, things like that.</p>
  1364.  
  1365.  
  1366.  
  1367. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=932" target="_blank" rel="noreferrer noopener">15.32</a>: So that&#8217;s where I&#8217;d say the full possibility [is] for, say, a PM. And our customers are also TPMs and leaders and people like that. It really has to do with “How are we linking synchronous and asynchronous conversations with all of this information that is out there in the ecosystem of our organization that we don&#8217;t know about yet, or viewpoints that we don&#8217;t have that we need to have in this conversation?”</p>
  1368.  
  1369.  
  1370.  
  1371. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=955" target="_blank" rel="noreferrer noopener">15.55</a>: <strong>You mentioned the notion of a design agent passively in the background, attending a meeting. This is fascinating. So this design agent, what is it? Is it a fine-tuned agent or. . .? What exactly makes it a design agent? </strong></p>
  1372.  
  1373.  
  1374.  
  1375. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=973" target="_blank" rel="noreferrer noopener">16.13</a><strong>: </strong>In this particular case, it&#8217;s a specific prompt that defines what a designer would usually do in a cross-functional team and what they might ask questions about, what they would want clarification of. . .</p>
  1376.  
  1377.  
  1378.  
  1379. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=986" target="_blank" rel="noreferrer noopener">16.26</a>: <strong>Completely reliant on the pretrained foundation model—no posttraining, no RAG, nothing?</strong> </p>
  1380.  
  1381.  
  1382.  
  1383. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=992" target="_blank" rel="noreferrer noopener">16.32</a>: No, no. [Everything is in the prompt] at this point. </p>
  1384.  
  1385.  
  1386.  
  1387. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=996" target="_blank" rel="noreferrer noopener">16.36</a>: <strong>How big is this prompt? </strong></p>
  1388.  
  1389.  
  1390.  
  1391. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=997" target="_blank" rel="noreferrer noopener">16.37</a>: It&#8217;s not that big. I&#8217;d say it&#8217;s maybe at most 50 lines, something like that. It&#8217;s pretty small. The truth is, the idea of a designer is something that LLMs know about. But more for our specific case, right now it&#8217;s really just based on this live conversation. And there&#8217;s a lot of papercuts in the way that we have to do a site call, pull a live transcript, put it into a space, and [then] I have a bunch of different agents that are inside the space that will then pipe up when they have something interesting to say, essentially.</p>
  1392.  
  1393.  
  1394.  
1395. <p>And it&#8217;s a little weird because I have to share my screen and people have to read it while we hold the meeting. So it&#8217;s clunky right now in the way that we bring this in. But what it will bring up is “Hey, these are patterns inside of design that you may want to think about.” Or you know, “For this particular part of the experience, it&#8217;s still pretty ambiguous. Do you want to define more about what this part of the process is?” And we&#8217;ve also included legal, privacy, data-oriented groups. Even the idea of a facilitator agent saying that we were getting off track or we have these other things to discuss, that type of stuff. So again, these are really rudimentary right now.</p>
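<p>For a sense of scale, here&#8217;s a condensed sketch of what such a prompt-only persona might look like—an invented example, since the actual prompt isn&#8217;t shown; the persona text, model name, and the PASS convention are assumptions.</p>

<pre class="wp-block-code"><code># Sketch: a prompt-only "designer" persona that reads chunks of a live
# meeting transcript and only pipes up when it has something to say.
# The persona text, model name, and PASS convention are invented.
from openai import OpenAI

client = OpenAI()

DESIGNER_PERSONA = """You act as the product designer in a cross-functional
product meeting. You will receive a chunk of the live transcript.

Speak up only when a designer would genuinely raise something:
- Parts of the experience that are still ambiguous or undefined.
- Established design patterns relevant to what is being discussed.
- Missing user flows, edge cases, or error states.
- Inconsistencies with the existing design system.

If you have nothing useful to add for this chunk, reply exactly: PASS"""

def designer_feedback(transcript_chunk: str) -> str | None:
    """Run one persona pass over the latest transcript chunk."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": DESIGNER_PERSONA},
            {"role": "user", "content": transcript_chunk},
        ],
    )
    text = response.choices[0].message.content.strip()
    return None if text == "PASS" else text
</code></pre>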
  1396.  
  1397.  
  1398.  
  1399. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1057" target="_blank" rel="noreferrer noopener">17.37</a>: Now, what I could imagine though is, we have a design system inside of GitHub. How might we start to use that design system and use internal prototyping tools to autogenerate possibilities for what we&#8217;re talking about? And I guess when I think about using prototyping as a PM, I don&#8217;t think the PMs should be vibe coding everything.</p>
  1400.  
  1401.  
  1402.  
1403. <p>I don&#8217;t think the prototype replaces a lot of the cross-functional documents that we have today. But here&#8217;s what I think it does add: If we have been talking about a feature for about 30 minutes, that is a lot of interesting context. If we can say, “Autogenerate three different prototypes that are coming from slightly different directions, slightly different places that we might integrate inside of our current product,” it gives us, again, that straw man for us to be able to critique, which will then uncover additional assumptions, additional values, additional principles that we maybe haven&#8217;t written down somewhere else.</p>
  1404.  
  1405.  
  1406.  
  1407. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1112" target="_blank" rel="noreferrer noopener">18.32</a>: And so I see that as super valuable. And that&#8217;s the thing that we end up doing—we&#8217;ll use an internal product for prototyping to just take that and then have it autogenerated. It takes a little while right now, you know, a couple minutes to do a prototype generation. And so in those cases we&#8217;ll just [say], “Here&#8217;s what we thought about so far. Just give us a prototype.” And again it doesn&#8217;t always do the right thing, but at least it gives us something to now talk about because it&#8217;s more real now. It is not the thing that we end up implementing, but it is the thing that we end up talking about. </p>
  1408.  
  1409.  
  1410.  
1411. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1139" target="_blank" rel="noreferrer noopener">18.59</a>: <strong>By the way, this notion of an agent attending some synchronous meeting, you can imagine taking it to the next level, which is to take advantage of multimodal models. The agent can then absorb speech and maybe visual cues, so then basically when the agent suggests something and someone reacts with a frown. . . </strong></p>
  1412.  
  1413.  
  1414.  
  1415. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1165" target="_blank" rel="noreferrer noopener">19.25</a>: I think there&#8217;s something really interesting about that. And when you talk about multimodal, I do think that one of the things that is really important about human communication is the way that we pick up cues from each other—if we think about it, the reason why we actually talk to each other. . . And there&#8217;s a great book called <a href="https://www.hup.harvard.edu/books/9780674237827" target="_blank" rel="noreferrer noopener"><em>The Enigma of Reason</em></a> that&#8217;s all about this.</p>
  1416.  
  1417.  
  1418.  
  1419. <p>But their hypothesis is that, yes, we can try to logic or pretend to logic inside of our own heads, but we actually do a lot of post hoc analysis. So we come up with an idea inside our head. We have some certainty around it, some intuition, and then we fit it to why we thought about this. So that&#8217;s what we do internally.&nbsp;</p>
  1420.  
  1421.  
  1422.  
  1423. <p>But when you and I are talking, I&#8217;m actually trying to read your mind in some way. I&#8217;m trying to understand the norms that are at play. And I&#8217;m using your facial expression. I&#8217;m using your tone of voice. I&#8217;m using what you&#8217;re saying—actually way less of what you&#8217;re saying and more your facial expression and your tone of voice—to determine what&#8217;s going on.</p>
  1424.  
  1425.  
  1426.  
  1427. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1216" target="_blank" rel="noreferrer noopener">20.16</a>: And so I think this idea of engagement with these tools and the way these tools work, I think [of] the idea of gaze tracking: What are people looking at? What are people talking about? How are people reacting to this? And then I think this is where in the future, in some of the early prototypes we built internally for what the synchronous meeting would look like, we have it where the agent is raising its hand and saying, “Here&#8217;s an issue that we may want to discuss.” If the people want to discuss it, they can discuss it, or they can ignore it. </p>
  1428.  
  1429.  
  1430.  
  1431. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1241" target="_blank" rel="noreferrer noopener">20.41</a>: Longer term, we have to start to think about how agents are fitting into the turn-taking of conversation with the rest of the group. And using all of these multimodal cues ends up being very interesting, because you wouldn&#8217;t want just an agent whenever it thinks of something to just blurt it out.</p>
  1432.  
  1433.  
  1434.  
1435. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1259" target="_blank" rel="noreferrer noopener">20.59</a>: And so there&#8217;s a lot of work to do here, but I think there&#8217;s something really exciting about just using engagement as the means to understand what the hot topics are, but also trying to help detect “Are we rat-holing on something that should be put in the parking lot?” Those are things and cues that we can start to get from these systems as well.</p>
  1436.  
  1437.  
  1438.  
1439. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1276" target="_blank" rel="noreferrer noopener">21.16</a>: <strong>By the way, context has multiple dimensions. So you can imagine in a meeting between the two of us, you outrank me. You&#8217;re my manager. But then it turns out the agent realizes, “Well, actually, looking through the data in the company, Ben knows more about this topic than Chris. So maybe when I start absorbing their input, I should weigh Ben&#8217;s [more heavily], even though in the org chart Chris outranks Ben.” </strong></p>
  1440.  
  1441.  
  1442.  
1443. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1306" target="_blank" rel="noreferrer noopener">21.46</a>: A related story: One of the things I&#8217;ve created inside of a Copilot Space is actually a proxy for our CPO. And so what I&#8217;ve done is I&#8217;ve taken meetings that he&#8217;s done where he asked questions in a smaller setting, taking his writing samples and things like that, and I&#8217;ve tried to turn it into, not really an agent, but a space where I can say, “Here&#8217;s what I&#8217;m thinking about for this plan. And what would Mario [Rodriguez] potentially think about this?” </p>
  1444.  
  1445.  
  1446.  
  1447. <p>It&#8217;s definitely not 100% accurate in any way. Mario&#8217;s an individual that is constantly changing and is learning and has intuitions that he doesn&#8217;t say out loud, but it is interesting how it does sound like him. It does seem to focus on questions that he would bring up in a previous meeting based on the context that we provided. And so I think to your point, a lot of things that right now are said inside of meetings that we then don&#8217;t use to actually help understand people&#8217;s points of view in a deeper way.</p>
  1448.  
  1449.  
  1450.  
  1451. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1360" target="_blank" rel="noreferrer noopener">22.40</a>: You could imagine that this proxy also could be used for [determining] potential blind spots for Mario that, as a person that is working on this, I may need to deal with, in the sense that maybe he&#8217;s not always focused on this type of issue, but I think it&#8217;s a really big deal. So how do I help him actually understand what&#8217;s going on?</p>
  1452.  
  1453.  
  1454.  
1455. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1377" target="_blank" rel="noreferrer noopener">22.57</a>: And this gets back to that idea of reporting at the listener&#8217;s ear: What does that person actually care about? What do they need to know about to build trust with the team? What do they need to take action on? Those are things around which I think we can start to build interesting profiles. </p>
  1456.  
  1457.  
  1458.  
  1459. <p>There&#8217;s a really interesting ethical question, which is: Should that person be able to write their own proxy? Would it include the blind spots that they have or not? And then maybe compare this to—you know, there&#8217;s [been] a trend for a little while where every leader would write their own user manual or readme, and inside of those things, they tend to be a bit more performative. It&#8217;s more about how they idealize their behavior versus the way that they actually are.</p>
  1460.  
  1461.  
  1462.  
  1463. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1417" target="_blank" rel="noreferrer noopener">23.37</a>: And so there&#8217;s some interesting problems that start to come up when we&#8217;re doing proxying. I don&#8217;t call it a digital twin of a person, because digital twins to me are basically simulations of mechanical things. But to me it&#8217;s “What is this proxy that might sit in this meeting to help give us a perspective and maybe even identify when this is something we should escalate to that person?”</p>
  1464.  
  1465.  
  1466.  
  1467. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1435" target="_blank" rel="noreferrer noopener">23.55</a>: I think there&#8217;s lots of very interesting things. Power structures inside of the organization are really hard to discern because there&#8217;s both, to your point, hierarchical ones that are very set in the systems that are there, but there&#8217;s also unsaid ones. </p>
  1468.  
  1469.  
  1470.  
  1471. <p>I mean, one funny story is Ray Dalio did try to implement this inside of his hedge fund. And unfortunately, I guess, for him, there were two people that were considered to be higher ranking in reputation than him. But then he changed the system so that he was ranked number one. So I guess we have to worry about this type of thing for these proxies as well.&nbsp;</p>
  1472.  
  1473.  
  1474.  
1475. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1509" target="_blank" rel="noreferrer noopener">25.09</a>: <strong>One of the reasons why coding is such a great playground for these things is, one, you can validate the result. But secondly, the data is quite tame and relatively [structured], right? So you have version control systems [like] GitHub—you can look through that and say, “Hey, actually Ben&#8217;s commits are much more valuable than Chris&#8217;s commits.” Or “Ben is the one who suggested all of these changes before, and they were all accepted. So maybe we should really take Ben&#8217;s opinion much more strong[ly].” I don&#8217;t know what artifacts you have in the product management space that can help develop this reputation score.</strong></p>
  1476.  
  1477.  
  1478.  
  1479. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1509" target="_blank" rel="noreferrer noopener">25.09</a>: Yeah. It&#8217;s tough because a reputation score, especially once you start to monitor some type of metric and it becomes the goal, that&#8217;s where we get into problems. For example, Agile teams adopting velocity as a metric: It&#8217;s meant to be an internal metric that helps us understand “If this person is out, how does that adjust what type of work we need to do?” But then comparing velocities between different teams ends up creating a whole can of worms around “Is this actually the metric that we&#8217;re trying to optimize for?”</p>
  1480.  
  1481.  
  1482.  
  1483. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1537" target="_blank" rel="noreferrer noopener">25.37</a>: And even when it comes to product management, what I would say is actually valuable a lot of the time is “Does the team understand why they&#8217;re working on something? How does it link to the broader strategy? How does this solve both business and customer needs? And then how are we wrangling this uncertainty of the world?” </p>
  1484.  
  1485.  
  1486.  
1487. <p>I would argue that a really key meta skill for product managers—and for other people like generative user researchers, business development people, you know, even leaders inside the organization—is dealing with a lot of uncertainty. And it&#8217;s not that we need to shut down the uncertainty; actually, uncertainty is something we should take advantage of and use in some way. But there are places where we need to be able to build enough certainty for the team to do their work and then make plans that are resilient to future uncertainty.&nbsp;</p>
  1488.  
  1489.  
  1490.  
  1491. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1584" target="_blank" rel="noreferrer noopener">26.24</a>: And then finally, the ability to communicate what the team is doing and why it&#8217;s important is very valuable. Unfortunately, there&#8217;s not a lot of. . . Maybe there&#8217;s rubrics we can build. And that&#8217;s actually what career ladders try to do for product managers. But they tend to be very vague actually. And as you get more senior inside of a product manager organization, you start to see things—it&#8217;s really just broader views, more complexity. That&#8217;s really what we start to judge product managers on. Because of that fact, it&#8217;s really about “How are you working across the team?”</p>
  1492.  
  1493.  
  1494.  
1495. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1615" target="_blank" rel="noreferrer noopener">26.55</a>: There will be cases, though, where we can start to say, “Is this thing thought out well enough at first, at least for the team to be able to take action?” And then linking that work as a team to outcomes ends up being something that we can apply more and more data rigor to. But I worry about it being “This initiative brief was perfect, and so that meant the success of the product,” when the reality was that was maybe the starting point, but there was all this other stuff that the product manager and the team were doing together. So I&#8217;m always wary of that. And that&#8217;s where performance management for PMs is actually pretty hard: You have to base most of your understanding on how they work with the other teammates inside their team.</p>
  1496.  
  1497.  
  1498.  
1499. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1655" target="_blank" rel="noreferrer noopener">27.35</a>: <strong>You&#8217;ve been in product for a long time, so you have a network of peers in other companies, right? What are one or two examples of the use of AI—not in GitHub—in the product management context that you admire? </strong></p>
  1500.  
  1501.  
  1502.  
  1503. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1673" target="_blank" rel="noreferrer noopener">27.53</a>: For a lot of the people that I know that are inside of startups that are basically using prototyping tools to build out their initial product, I have a lot of, not necessarily envy, but I respect that a lot because you have to be so scrappy inside of a startup, and you&#8217;re really there to not only prove something to a customer, or actually not even prove something, but get validation from customers that you&#8217;re building the right thing. And so I think that type of rapid prototyping is something that is super valuable for that stage of an organization.</p>
  1504.  
  1505.  
  1506.  
1507. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1706" target="_blank" rel="noreferrer noopener">28.26</a>: When I start to then look at larger enterprises, what I do see is that these prototyping tools are not as much help with what we&#8217;ll call brownfield development: We need to build something on top of this other thing. It&#8217;s actually hard to use these tools today to imagine new things inside of a current ecosystem or a current design system.</p>
  1508.  
  1509.  
  1510.  
  1511. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1726" target="_blank" rel="noreferrer noopener">28.46</a>: [For] a lot of the teams that are in other places, it really is a struggle to get access to some of these tools. The thing that&#8217;s holding back the biggest enterprises from actually doing interesting work in this area is they&#8217;re overconstraining what their engineers [and] product managers can use as far as these tools.</p>
  1512.  
  1513.  
  1514.  
  1515. <p>And so what&#8217;s actually being created is shadow systems, where the person is using their personal ChatGPT to actually do the work rather than something that&#8217;s within the compliance of the organization.</p>
  1516.  
  1517.  
  1518.  
1519. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1758" target="_blank" rel="noreferrer noopener">29.18</a>: <strong>Which is great for IP protection. </strong></p>
  1520.  
  1521.  
  1522.  
1523. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1759" target="_blank" rel="noreferrer noopener">29.19</a>: Exactly! That&#8217;s the problem, right? Some of this stuff, you do want to use the most current tools. Because there is actually not just [the] time-savings and toil-reduction aspects—there&#8217;s also just the fact that it helps you think differently, especially if you&#8217;re an expert in your domain. It really aids you in becoming even better at what you&#8217;re doing. And then it also shores up some of your weaknesses. Those are the things that really expert people are using these types of tools for. But in the end, it comes down to a combination of legal, HR, IT, and budgetary types of things too, that are holding back some of these organizations.</p>
  1524.  
  1525.  
  1526.  
1527. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1800" target="_blank" rel="noreferrer noopener">30.00</a>: When I&#8217;m talking to other people inside of the orgs. . . Maybe another problem for enterprises right now is that a lot of these tools require lots of different context. We&#8217;ve benefited inside of GitHub in that a lot of our context is inside the GitHub graph, so Copilot can access it and use it. But other teams keep things in all of these individual vendor platforms.</p>
  1528.  
  1529.  
  1530.  
1531. <p>And so the biggest problem then ends up being “How do we merge these different pieces of context in a way that is allowed?” When I first started working on the Synapse team, I looked at the patterns that we were building, and it was like “If we just had access to Zapier or Relay or something like that, that is exactly what we need right now.” Except we would not have any of the approvals for the connectors to all of these different systems. And so Airtable is a great example of something like that too: They&#8217;re building out process automation platforms that focus on data as well as connecting to other data sources, plus the idea of including LLMs as components inside these processes.</p>
  1532.  
  1533.  
  1534.  
  1535. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1858" target="_blank" rel="noreferrer noopener">30.58</a>: A really big issue I see for enterprises in general is the connectivity issue between all the datasets. And there are, of course, teams that are working on this—Glean or others that are trying to be more of an overall data copilot frontend for your entire enterprise datasets. But I just haven&#8217;t seen as much success in getting all these connected. </p>
  1536.  
  1537.  
  1538.  
1539. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1877" target="_blank" rel="noreferrer noopener">31.17</a>: <strong>I think one of the things that people don&#8217;t realize is enterprise search is not turnkey. You have to get in there and really do all these integrations. There&#8217;s no shortcuts. There&#8217;s no [scenario where] a vendor comes to you and says, “Yeah, just use our system,” [and] it all magically works.</strong></p>
  1540.  
  1541.  
  1542.  
1543. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1897" target="_blank" rel="noreferrer noopener">31.37</a>: This is why we need to hire more people with degrees in library science, because they actually know how to manage these types of systems. Again, I first cut my teeth on this in very early versions of SharePoint a long time ago. And even inside there, there&#8217;s so much that you need to do to just help people with not only organization of the data but even just the search itself.</p>
  1544.  
  1545.  
  1546.  
1547. <p>It&#8217;s not just a search index problem. It&#8217;s a bunch of different things. And that&#8217;s why, whenever we&#8217;re shown an empty text box, there&#8217;s so much work that goes on just behind that. Inside of Google, [with] all of the instant answers, there are lots of different ways that a particular search query is actually looked at—not just to go against the search index but to also just provide you the right information. And now they&#8217;re trying to include Gemini by default in there. The same thing happens within any copilot. There&#8217;s a million different things you could use.&nbsp;</p>
  1548.  
  1549.  
  1550.  
1551. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1947" target="_blank" rel="noreferrer noopener">32.27</a>: And so I guess maybe this gets to my hypothesis about the way that agents will be valuable, either fully autonomous ones or ones that are attached to a particular process: having many different agents that are highly biased in a particular way. And I use the term “bias” as in bias can be good, neutral, or bad, right? I don&#8217;t mean bias in the sense of unfairness and that type of stuff; I mean more from the standpoint of “This agent is meant to represent this viewpoint, and it&#8217;s going to give you feedback from this viewpoint.” That ends up becoming really, really valuable because of the fact that you will not always be thinking about everything. </p>
  1552.  
  1553.  
  1554.  
1555. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=1980" target="_blank" rel="noreferrer noopener">33.00</a>: I&#8217;ve done a lot of work in adversarial thinking and red teaming and stuff like that. One of the things that is most valuable is to build prompts that break the sycophancy these different models have by default, because it should be about challenging my thinking rather than just agreeing with it.</p>
  1556.  
  1557.  
  1558.  
1559. <p>And then the standpoint of each one of these highly biased agents actually helps provide a very interesting approach. I mean, if we go to things like meeting facilitation or workshop facilitation groups, this is why.&nbsp;.&nbsp;. I don&#8217;t know if you&#8217;re familiar with the six hats, but the six hats is a technique [from Edward de Bono] by which we declare inside of a meeting that I&#8217;m going to be the one that&#8217;s all positivity. This person&#8217;s going to be the one about data. This person&#8217;s gonna be the one that&#8217;s the adversarial, negative one, etc., etc. When you have all of these different viewpoints—because of the tensions in the discussion of those ideas, the creation of options, the weighing of options—I think you end up making much better decisions. That&#8217;s where I think those highly biased viewpoints end up becoming really valuable.&nbsp;</p>
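<p>In code, that six-hats idea might look something like the sketch below: one deliberately biased persona per “hat,” each given the same draft along with an explicit antisycophancy instruction. The persona wording, model name, and function are illustrative assumptions, not a described implementation.</p>

<pre class="wp-block-code"><code># Sketch: a six-hats-style panel of deliberately biased critics.
# Assumptions: OpenAI Python SDK; personas and model name are illustrative.
from openai import OpenAI

client = OpenAI()

PERSONAS = {
    "optimist": "Argue for the upside: what makes this plan work?",
    "data": "Ask only about evidence: what data supports or refutes this?",
    "adversary": "Attack the plan: where does it fail, and who is harmed?",
}

ANTI_SYCOPHANCY = (
    "Never agree just to be agreeable. Challenge the author's thinking. "
    "If you find no real issue from your viewpoint, say so explicitly."
)

def panel_review(draft: str) -> dict[str, str]:
    """Collect one critique of the draft per biased persona."""
    reviews = {}
    for name, stance in PERSONAS.items():
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[
                {"role": "system", "content": f"{stance} {ANTI_SYCOPHANCY}"},
                {"role": "user", "content": draft},
            ],
        )
        reviews[name] = response.choices[0].message.content
    return reviews
</code></pre>

<p>The value is in the disagreement between the reviews, not in any single one—the tensions are what surface the options worth weighing.</p>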
  1560.  
  1561.  
  1562.  
1563. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=2040" target="_blank" rel="noreferrer noopener">34.00</a>: <strong>For product people who are early in their career or want to enter the field, what are some resources that they should be looking at in terms of leveling up on the use of AI in this context?</strong></p>
  1564.  
  1565.  
  1566.  
1567. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=2057" target="_blank" rel="noreferrer noopener">34.17</a>: The first thing is there are millions of prompt libraries out there for product managers. What you should do, when you are creating work, is use a lot of these prompts to get feedback—and you can actually even write your own, if you want to. But I would say there&#8217;s lots of material out there for “I need to write this thing.”</p>
  1568.  
  1569.  
  1570.  
  1571. <p>What is a way to [do something like] “I try to write it and then I get critique”? But then how might this AI system, through a prompt, generate a draft of this thing? And then I go in and look at it and say, “Which things are not actually quite right here?” And I think that again, those two patterns of getting critique and giving critique end up building a lot of expertise.</p>
  1572.  
  1573.  
  1574.  
  1575. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=2095" target="_blank" rel="noreferrer noopener">34.55</a>: I think also within the organization itself, I believe an awful lot in things that are called basically “learning from your peers.” Being able to join small groups where you are getting feedback from your peers and including AI agent feedback inside of the small peer groups is very valuable. </p>
  1576.  
  1577.  
  1578.  
1579. <p>There&#8217;s another technique, which is using case studies. And I actually, as part of my learning and development practice, do something called “decision-forcing cases,” where we take a story that actually happened, we walk people through it, and we ask them, “What do you think is happening? What would you do next?” When you do those types of things across junior and senior people, you can start to actually learn the expertise from the senior people through these types of case studies.</p>
  1580.  
  1581.  
  1582.  
  1583. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=2137" target="_blank" rel="noreferrer noopener">35.37</a>: I think there&#8217;s an awful lot more that senior leaders inside the organization should be doing. And as junior people inside your organization, you should be going to these senior leaders and saying, “How do you think about this? What is the way that you make these decisions?” Because what you&#8217;re actually pulling from is their past experience and expertise that they&#8217;ve gained to build that intuition.</p>
  1584.  
  1585.  
  1586.  
  1587. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=2153" target="_blank" rel="noreferrer noopener">35.53</a>: <strong>There&#8217;s all sorts of surveys of programmers and engineers and AI. Are there surveys about product managers? Are they freaked out or what? What&#8217;s the state of adoption and this kind of thing? </strong></p>
  1588.  
  1589.  
  1590.  
1591. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=2160" target="_blank" rel="noreferrer noopener">36.00</a>: Almost every PM that I&#8217;ve met has used an LLM in some way, to help them with their writing in particular. And if you look at the studies by OpenAI about the use of ChatGPT, a lot of the writing tasks end up being from a product manager or senior leader standpoint. I think people are freaked out because every practice says that this other practice is going to be replaced—because “I can in some way replace them right now with [a prompted] viewpoint.”</p>
  1592.  
  1593.  
  1594.  
1595. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=2198" target="_blank" rel="noreferrer noopener">36.38</a>: I don&#8217;t think product management will go away. We may change the terminology that we end up using. But this idea of someone who helps manage the complexity of the team, helps with communication, and helps with [the] decision-making process inside that team is still very valuable and will be valuable even when we can start to autodraft a PRD.</p>
  1596.  
  1597.  
  1598.  
1599. <p>I would argue that the draft of the PRD is not what matters. It&#8217;s actually the discussions that take place in the team after the PRD is created. And I don&#8217;t think that designers are going to take over the PM work because, yes, to a certain extent it is about the interaction patterns and the usability of things and the design and the feeling of things. But there are all these other things that you need to worry about when it comes to matching it to business models, matching it to customer mindsets, deciding which problems to solve. That&#8217;s the work product managers are doing.</p>
  1600.  
  1601.  
  1602.  
  1603. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=2247" target="_blank" rel="noreferrer noopener">37.27</a>: There&#8217;s a lot of this concern about [how] every practice is saying this other practice is going to go away because of AI. I just don&#8217;t think that&#8217;s true. I just think we&#8217;re all going to be given different levels of abstraction to gain expertise on. But the core of what we do—an engineer focusing on what is maintainable and buildable and actually something that we want to work on versus the designer that&#8217;s building something usable and something that people will feel good using, and a product manager making sure that we&#8217;re actually building the thing that is best for the company and the user—those are things that will continue to exist even with these AI tools, prototyping tools, etc.</p>
  1604.  
  1605.  
  1606.  
  1607. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=2281" target="_blank" rel="noreferrer noopener">38.01</a>: <strong>And for our listeners, as Chris mentioned, there&#8217;s many, many prompt templates for product managers. We&#8217;ll try to get Chris to recommend one, and we&#8217;ll put it in the episode notes. [See “Resources from Chris” below.] And with that thank you, Chris. </strong><br><br><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Chris_Butler.mp3#t=2298" target="_blank" rel="noreferrer noopener">38.18</a>: Thank you very much. Great to be here.</p>
  1608.  
  1609.  
  1610.  
  1611. <h2 class="wp-block-heading">Resources from Chris</h2>
  1612.  
  1613.  
  1614.  
  1615. <p>Here’s what Chris shared with us following the recording:</p>
  1616.  
  1617.  
  1618.  
  1619. <p><em>There are two [prompt resources for product managers] that I think people should check out:</em></p>
  1620.  
  1621.  
  1622.  
  1623. <ul class="wp-block-list">
  1624. <li><a href="https://academy.openai.com/home/clubs/work-users-ynjqu/resources/use-cases-product" target="_blank" rel="noreferrer noopener"><em>OpenAI&#8217;s ChatGPT for Product</em></a></li>
  1625.  
  1626.  
  1627.  
  1628. <li><a href="https://github.com/deanpeters/product-manager-prompts" target="_blank" rel="noreferrer noopener"><em>Dean Peters&#8217;s Product Manager Prompts for Generative AI</em></a></li>
  1629. </ul>
  1630.  
  1631.  
  1632.  
  1633. <p><em>However, I&#8217;d say that people should take these as a starting point and they should adapt them for their own needs. There is always going to be nuance for their roles, so they should look at how people do the prompting and modify for their own use. I tend to look at other people&#8217;s prompts and then write my own.</em></p>
  1634.  
  1635.  
  1636.  
  1637. <p><em>If they are thinking about using prompts frequently, I&#8217;d make a plug for </em><a href="https://docs.github.com/en/copilot/how-tos/provide-context/use-copilot-spaces/create-copilot-spaces" target="_blank" rel="noreferrer noopener"><em>Copilot Spaces</em></a><em> to pull that context together.</em></p>
  1638. ]]></content:encoded>
  1639. <wfw:commentRss>https://www.oreilly.com/radar/podcast/generative-ai-in-the-real-world-product-management-in-the-age-of-ai-with-chris-butler/feed/</wfw:commentRss>
  1640. <slash:comments>0</slash:comments>
  1641. </item>
  1642. <item>
  1643. <title>The Java Developer’s Dilemma: Part 3</title>
  1644. <link>https://www.oreilly.com/radar/the-java-developers-dilemma-part-3/</link>
  1645. <comments>https://www.oreilly.com/radar/the-java-developers-dilemma-part-3/#respond</comments>
  1646. <pubDate>Tue, 28 Oct 2025 11:08:21 +0000</pubDate>
  1647. <dc:creator><![CDATA[Markus Eisele]]></dc:creator>
  1648. <category><![CDATA[AI & ML]]></category>
  1649. <category><![CDATA[Commentary]]></category>
  1650.  
  1651. <guid isPermaLink="false">https://www.oreilly.com/radar/?p=17587</guid>
  1652.  
  1653. <media:content
  1654. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Human-and-Android-Architects.jpg"
  1655. medium="image"
  1656. type="image/jpeg"
  1657. width="2304"
  1658. height="1792"
  1659. />
  1660.  
  1661. <media:thumbnail
  1662. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Human-and-Android-Architects-160x160.jpg"
  1663. width="160"
  1664. height="160"
  1665. />
  1666. <custom:subtitle><![CDATA[Architectural Shifts in Java Applications]]></custom:subtitle>
  1667. <description><![CDATA[This is the final part of a three-part series by Markus Eisele. Part 1 can be found here, and Part 2 here. In the first article we looked at the Java developer’s dilemma: the gap between flashy prototypes and the reality of enterprise production systems. In the second article we explored why new types of [&#8230;]]]></description>
  1668. <content:encoded><![CDATA[
  1669. <figure class="wp-block-table"><table class="has-cyan-bluish-gray-background-color has-background has-fixed-layout"><tbody><tr><td><em>This is the final part of a three-part series by Markus Eisele. Part 1 can be found </em><a href="https://www.oreilly.com/radar/the-java-developers-dilemma-part-1/"><em>here</em></a><em>, and Part 2 </em><a href="https://www.oreilly.com/radar/the-java-developers-dilemma-part-2/"><em>here</em></a><em>.</em></td></tr></tbody></table></figure>
  1670.  
  1671.  
  1672.  
  1673. <p>In the first article we looked at the Java developer’s dilemma: the gap between flashy prototypes and the reality of enterprise production systems. In the second article we explored why new types of applications are needed, and how AI changes the shape of enterprise software. This article focuses on what those changes mean for architecture. If applications look different, the way we structure them has to change as well.</p>
  1674.  
  1675.  
  1676.  
  1677. <h2 class="wp-block-heading">The Traditional Java Enterprise Stack</h2>
  1678.  
  1679.  
  1680.  
  1681. <p>Enterprise Java applications have always been about structure. A typical system is built on a set of layers. At the bottom is persistence, often with JPA or JDBC. Business logic runs above that, enforcing rules and processes. On top sit REST or messaging endpoints that expose services to the outside world. Crosscutting concerns like transactions, security, and observability run through the stack. This model has proven durable. It has carried Java from the early servlet days to modern frameworks like <a href="https://quarkus.io/" target="_blank" rel="noreferrer noopener">Quarkus</a>, <a href="https://spring.io/" target="_blank" rel="noreferrer noopener">Spring Boot</a>, and <a href="https://micronaut.io/" target="_blank" rel="noreferrer noopener">Micronaut</a>.</p>
  1682.  
  1683.  
  1684.  
  1685. <p>The success of this architecture comes from clarity. Each layer has a clear responsibility. The application is predictable and maintainable because you know where to add logic, where to enforce policies, and where to plug in monitoring. Adding AI does not remove these layers. But it does add new ones, because the behavior of AI doesn’t fit into the neat assumptions of deterministic software.</p>
  1686.  
  1687.  
  1688.  
  1689. <h2 class="wp-block-heading">New Layers in AI-Infused Applications</h2>
  1690.  
  1691.  
  1692.  
1693. <p>AI changes the architecture by introducing layers that never existed in deterministic systems. Three of the most important are fuzzy validation, context-sensitive guardrails, and observability of model behavior. In practice you&#8217;ll encounter even more components, but validation and observability are the foundation that makes AI safe in production.</p>
  1694.  
  1695.  
  1696.  
  1697. <h2 class="wp-block-heading">Validation and Guardrails</h2>
  1698.  
  1699.  
  1700.  
  1701. <p>Traditional Java applications assume that inputs can be validated. You check whether a number is within range, whether a string is not empty, or whether a request matches a schema. Once validated, you process it deterministically. With AI outputs, this assumption no longer holds. A model might generate text that looks correct but is misleading, incomplete, or harmful. The system cannot blindly trust it.</p>
  1702.  
  1703.  
  1704.  
  1705. <p>This is where validation and guardrails come in. They form a new architectural layer between the model and the rest of the application. Guardrails can take different forms:</p>
  1706.  
  1707.  
  1708.  
  1709. <ul class="wp-block-list">
  1710. <li><strong>Schema validation:</strong> If you expect a JSON object with three fields, you must check that the model’s output matches that schema. A missing or malformed field should be treated as an error.</li>
  1711.  
  1712.  
  1713.  
  1714. <li><strong>Policy checks:</strong> If your domain forbids certain outputs, such as exposing sensitive data, returning personal identifiers, or generating offensive content, policies must filter those out.</li>
  1715.  
  1716.  
  1717.  
  1718. <li><strong>Range and type enforcement:</strong> If the model produces a numeric score, you need to confirm that the score is valid before passing it into your business logic.</li>
  1719. </ul>
  1720.  
  1721.  
  1722.  
  1723. <p>Enterprises already know what happens when validation is missing. SQL injection, cross-site scripting, and other vulnerabilities have taught us that unchecked inputs are dangerous. AI outputs are another kind of untrusted input, even if they come from inside your own system. Treating them with suspicion is a requirement.</p>
  1724.  
  1725.  
  1726.  
  1727. <p>In Java, this layer can be built with familiar tools. You can write bean validation annotations, schema checks, or even custom CDI interceptors that run after each AI call. The important part is architectural: Validation must not be hidden in utility methods. It has to be a visible, explicit layer in the stack so that it can be maintained, evolved, and tested rigorously over time.</p>
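<p>As a minimal sketch of what such an explicit layer might look like (the class name, expected fields, and thresholds here are illustrative assumptions, not from a specific framework), consider a guardrail that treats model output as untrusted input:</p>

<pre class="wp-block-code"><code>import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

/**
 * Illustrative guardrail layer: model output is treated as untrusted
 * input and checked before anything downstream is allowed to see it.
 */
public class AnswerGuardrail {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    public JsonNode validate(String modelOutput) {
        final JsonNode root;
        try {
            root = MAPPER.readTree(modelOutput);
        } catch (Exception e) {
            throw new GuardrailViolation("Model output is not valid JSON", e);
        }
        // Schema validation: the fields we expect must be present.
        for (String field : new String[] {"answer", "confidence", "sources"}) {
            if (!root.hasNonNull(field)) {
                throw new GuardrailViolation("Missing field: " + field);
            }
        }
        // Range and type enforcement: confidence must be a number in [0, 1].
        double confidence = root.get("confidence").asDouble(-1);
        if (confidence &lt; 0.0 || confidence > 1.0) {
            throw new GuardrailViolation("Confidence out of range: " + confidence);
        }
        return root;
    }

    /** Dedicated exception type so callers can handle violations explicitly. */
    public static class GuardrailViolation extends RuntimeException {
        GuardrailViolation(String message) { super(message); }
        GuardrailViolation(String message, Throwable cause) { super(message, cause); }
    }
}</code></pre>

<p>Because the guardrail is an ordinary class, it can be injected as a CDI bean, wrapped in an interceptor, and unit tested in isolation like any other layer.</p>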
  1728.  
  1729.  
  1730.  
  1731. <h2 class="wp-block-heading">Observability</h2>
  1732.  
  1733.  
  1734.  
  1735. <p>Observability has always been critical in enterprise systems. Logs, metrics, and traces allow us to understand how applications behave in production. With AI, observability becomes even more important because behavior is not deterministic. A model might give different answers tomorrow than it does today. Without visibility, you cannot explain or debug why.</p>
  1736.  
  1737.  
  1738.  
  1739. <p>Observability for AI means more than logging a result. It requires:</p>
  1740.  
  1741.  
  1742.  
  1743. <ul class="wp-block-list">
  1744. <li><strong>Tracing prompts and responses:</strong> Capturing what was sent to the model and what came back, ideally with identifiers that link them to the original request</li>
  1745.  
  1746.  
  1747.  
  1748. <li><strong>Recording context: </strong>Storing the data retrieved from vector databases or other sources so you know what influenced the model’s answer</li>
  1749.  
  1750.  
  1751.  
  1752. <li><strong>Tracking cost and latency: </strong>Monitoring how often models are called, how long they take, and how much they cost</li>
  1753.  
  1754.  
  1755.  
1756. <li><strong>Detecting drift: </strong>Identifying when the quality of answers changes over time, which may indicate a model update or degraded performance on specific data</li>
  1757. </ul>
  1758.  
  1759.  
  1760.  
  1761. <p>For Java developers, this maps to existing practice. We already integrate <a href="https://quarkus.io/guides/opentelemetry" target="_blank" rel="noreferrer noopener">OpenTelemetry</a>, structured logging frameworks, and metrics exporters like <a href="https://micrometer.io/" target="_blank" rel="noreferrer noopener">Micrometer</a>. The difference is that now we need to apply those tools to AI-specific signals. A prompt is like an input event. A model response is like a downstream dependency. Observability becomes an additional layer that cuts through the stack, capturing the reasoning process itself.</p>
  1762.  
  1763.  
  1764.  
  1765. <p>Consider a <a href="https://docs.quarkiverse.io/quarkus-langchain4j/dev/observability.html" target="_blank" rel="noreferrer noopener">Quarkus application that integrates with OpenTelemetry</a>. You can create spans for each AI call; add attributes for the model name, token count, latency, and cache hits; and export those metrics to Grafana or another monitoring system. This makes AI behavior visible in the same dashboards your operations team already uses.</p>
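<p>A condensed sketch of that pattern using the standard OpenTelemetry Java API follows; the span and attribute names are illustrative (OpenTelemetry&#8217;s emerging gen_ai semantic conventions define official ones):</p>

<pre class="wp-block-code"><code>import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

/** Wraps each model call in a span so AI behavior shows up in existing dashboards. */
public class TracedModelClient {

    private final Tracer tracer;

    public TracedModelClient(OpenTelemetry openTelemetry) {
        this.tracer = openTelemetry.getTracer("ai-client");
    }

    public String chat(String model, String prompt) {
        Span span = tracer.spanBuilder("ai.chat").startSpan();
        try (Scope ignored = span.makeCurrent()) {
            long start = System.nanoTime();
            String response = callModel(model, prompt); // the actual model invocation
            span.setAttribute("ai.model", model);
            span.setAttribute("ai.prompt.chars", prompt.length());
            span.setAttribute("ai.latency.ms", (System.nanoTime() - start) / 1_000_000);
            return response;
        } catch (RuntimeException e) {
            span.recordException(e); // failures become visible in traces too
            throw e;
        } finally {
            span.end();
        }
    }

    private String callModel(String model, String prompt) {
        return "placeholder"; // stand-in for the real service call (REST client, LangChain4j, etc.)
    }
}</code></pre>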
  1766.  
  1767.  
  1768.  
  1769. <h2 class="wp-block-heading">Mapping New Layers to Familiar Practices</h2>
  1770.  
  1771.  
  1772.  
  1773. <p>The key insight is that these new layers do not replace the old ones. They extend them. Dependency injection still works. You should inject a guardrail component into a service the same way you inject a validator or logger. Fault tolerance libraries like MicroProfile Fault Tolerance or Resilience4j are still useful. You can wrap AI calls with time-outs, retries, and circuit breakers. Observability frameworks like Micrometer and OpenTelemetry are still relevant. You just point them at new signals.</p>
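<p>For instance, here is a minimal sketch using MicroProfile Fault Tolerance annotations around an AI call; the service name, thresholds, and fallback text are assumptions chosen for illustration:</p>

<pre class="wp-block-code"><code>import java.time.temporal.ChronoUnit;

import org.eclipse.microprofile.faulttolerance.CircuitBreaker;
import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.faulttolerance.Timeout;

import jakarta.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class AssistantService {

    // Model endpoints are remote services: bound the latency, retry transient
    // failures, and stop calling a model that keeps failing.
    @Timeout(value = 10, unit = ChronoUnit.SECONDS)
    @Retry(maxRetries = 2)
    @CircuitBreaker(requestVolumeThreshold = 10, failureRatio = 0.5,
                    delay = 30, delayUnit = ChronoUnit.SECONDS)
    @Fallback(fallbackMethod = "fallbackAnswer")
    public String answer(String question) {
        return callModel(question);
    }

    // Deterministic fallback keeps the workflow alive when the model is unavailable.
    String fallbackAnswer(String question) {
        return "The assistant is currently unavailable. Please try again later.";
    }

    private String callModel(String question) {
        return "placeholder"; // stand-in for the actual model client call
    }
}</code></pre>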
  1774.  
  1775.  
  1776.  
  1777. <p>By treating validation and observability as layers, not ad hoc patches, you maintain the same architectural discipline that has always defined enterprise Java. That discipline is what keeps systems maintainable when they grow and evolve. Teams know where to look when something fails, and they know how to extend the architecture without introducing brittle hacks.</p>
  1778.  
  1779.  
  1780.  
  1781. <h2 class="wp-block-heading">An Example Flow</h2>
  1782.  
  1783.  
  1784.  
1785. <p>Imagine a REST endpoint that answers customer questions. The flow looks like this:</p>
  1786.  
  1787.  
  1788.  
  1789. <p>1. The request comes into the REST layer.<br>2. A context builder retrieves relevant documents from a vector store.<br>3. The prompt is assembled and sent to a local or remote model.<br>4. The result is passed through a guardrail layer that validates the structure and content.<br>5. Observability hooks record the prompt, context, and response for later analysis.<br>6. The validated result flows into business logic and is returned to the client.</p>
  1790.  
  1791.  
  1792.  
  1793. <p>This flow has clear layers. Each one can evolve independently. You can swap the vector store, upgrade the model, or tighten the guardrails without rewriting the whole system. That modularity is exactly what enterprise Java architectures have always valued.</p>
  1794.  
  1795.  
  1796.  
  1797. <p>A concrete example might be using LangChain4j in Quarkus. You define an AI service interface, annotate it with the model binding, and inject it into your resource class. Around that service you add a guardrail interceptor that enforces a schema using Jackson. You add an OpenTelemetry span that records the prompt and tokens used. None of this requires abandoning Java discipline. It’s the same stack thinking we’ve always used, now applied to AI.</p>
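<p>A sketch of that service interface is below. The annotations come from LangChain4j and its Quarkus extension; the prompt text, type names, and REST resource are illustrative assumptions:</p>

<pre class="wp-block-code"><code>import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;

import jakarta.inject.Inject;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;

/**
 * The framework generates the implementation: it assembles the prompt,
 * calls the configured model, and maps the response onto the return type.
 */
@RegisterAiService
interface SupportAssistant {

    @SystemMessage("You are a support assistant. Answer only from the provided context.")
    @UserMessage("Answer the customer question: {{it}}")
    SupportAnswer answer(String question);
}

/** Structured result: the framework asks the model for JSON matching this shape. */
record SupportAnswer(String answer, double confidence) {}

@Path("/support")
public class SupportResource {

    @Inject
    SupportAssistant assistant;

    @POST
    public SupportAnswer ask(String question) {
        return assistant.answer(question);
    }
}</code></pre>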
  1798.  
  1799.  
  1800.  
  1801. <h2 class="wp-block-heading">Implications for Architects</h2>
  1802.  
  1803.  
  1804.  
  1805. <p>For architects, the main implication is that AI doesn’t remove the need for structure. If anything, it increases it. Without clear boundaries, AI becomes a black box in the middle of the system. That’s not acceptable in an enterprise environment. By defining guardrails and observability as explicit layers, you make AI components as manageable as any other part of the stack.</p>
  1806.  
  1807.  
  1808.  
1809. <p>Evaluation in this context means systematically measuring how an AI component behaves, using tests and monitoring that go beyond traditional correctness checks. Instead of expecting exact outputs, evaluations look at structure, boundaries, relevance, and compliance. They combine automated tests, curated prompts, and sometimes human review to build confidence that a system is behaving as intended. In enterprise settings, evaluation becomes a recurring activity rather than a one-time validation step.</p>
  1810.  
  1811.  
  1812.  
  1813. <p>Evaluation itself becomes an architectural concern that reaches beyond just the models themselves. Hamel Husain describes evaluation as a <a href="https://hamel.dev/blog/posts/evals?utm_source=chatgpt.com" target="_blank" rel="noreferrer noopener">first-class system, not an add-on</a>. For Java developers, this means building evaluation into CI/CD, just as unit and integration tests are. Continuous evaluation of prompts, retrieval, and outputs becomes part of the deployment gate. This extends what we already do with integration testing suites.</p>
  1814.  
  1815.  
  1816.  
  1817. <p>This approach also helps with skills. Teams already know how to think in terms of layers, services, and crosscutting concerns. By framing AI integration in the same way, you lower the barrier to adoption. Developers can apply familiar practices to unfamiliar behavior. This is critical for staffing. Enterprises should not depend on a small group of AI specialists. They need large teams of Java developers who can apply their existing skills with only moderate retraining.</p>
  1818.  
  1819.  
  1820.  
  1821. <p>There is also a governance aspect. When regulators or auditors ask how your AI system works, you need to show more than a diagram with a “call LLM here” box. You need to show the validation layer that checks outputs, the guardrails that enforce policies, and the observability that records decisions. This is what turns AI from an experiment into a production system that can be trusted.</p>
  1822.  
  1823.  
  1824.  
  1825. <h2 class="wp-block-heading">Looking Forward</h2>
  1826.  
  1827.  
  1828.  
1829. <p>The architectural shifts described here are only the beginning. More layers will emerge as AI adoption matures. We&#8217;ll see specialized and per-user caching layers to control cost, fine-grained access control to limit who can use which models, and new forms of testing to verify behavior. But the core lesson is clear: AI requires us to add structure, not remove it.</p>
  1830.  
  1831.  
  1832.  
  1833. <p>Java’s history gives us confidence. We’ve already navigated shifts from monoliths to distributed systems, from synchronous to reactive programming, and from on-premises to cloud. Each shift added layers and patterns. Each time, the ecosystem adapted. The arrival of AI is no different. It’s another step in the same journey.</p>
  1834.  
  1835.  
  1836.  
  1837. <p>For Java developers, the challenge is not to throw away what we know but to extend it. The shift is real, but it’s not alien. Java’s history of layered architectures, dependency injection, and crosscutting services gives us the tools to handle it. The result is not prototypes or one-off demos but applications that are reliable, auditable, and ready for the long lifecycles that enterprises demand.</p>
  1838.  
  1839.  
  1840.  
  1841. <p>In our book, <a href="https://www.oreilly.com/library/view/applied-ai-for/9781098174491/" target="_blank" rel="noreferrer noopener"><em>Applied AI for Enterprise Java Development</em></a>, we explore these architectural shifts in depth with concrete examples and patterns. From retrieval pipelines with Docling to guardrail testing and observability integration, we show how Java developers can take the ideas outlined here and turn them into production-ready systems.</p>
  1842. ]]></content:encoded>
  1843. <wfw:commentRss>https://www.oreilly.com/radar/the-java-developers-dilemma-part-3/feed/</wfw:commentRss>
  1844. <slash:comments>0</slash:comments>
  1845. </item>
  1846. <item>
  1847. <title>AI Integration Is the New Moat</title>
  1848. <link>https://www.oreilly.com/radar/integration-is-the-new-moat/</link>
  1849. <comments>https://www.oreilly.com/radar/integration-is-the-new-moat/#respond</comments>
  1850. <pubDate>Mon, 27 Oct 2025 11:41:10 +0000</pubDate>
  1851. <dc:creator><![CDATA[Tim O’Reilly]]></dc:creator>
  1852. <category><![CDATA[AI & ML]]></category>
  1853. <category><![CDATA[Commentary]]></category>
  1854.  
  1855. <guid isPermaLink="false">https://www.oreilly.com/radar/?p=17591</guid>
  1856.  
  1857. <media:content
  1858. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Chatbot-vs-human-interaction-2.jpg"
  1859. medium="image"
  1860. type="image/jpeg"
  1861. width="2304"
  1862. height="1792"
  1863. />
  1864.  
  1865. <media:thumbnail
  1866. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Chatbot-vs-human-interaction-2-160x160.jpg"
  1867. width="160"
  1868. height="160"
  1869. />
  1870. <description><![CDATA[The electrical system warning light had gone on in my Kona EV over the weekend, and all the manual said was to take it to the dealer for evaluation. I first tried scheduling an appointment via the website, and it reminded me how the web, once a marvel, is looking awfully clunky these days. There [&#8230;]]]></description>
  1871. <content:encoded><![CDATA[
  1872. <p>The electrical system warning light had gone on in my Kona EV over the weekend, and all the manual said was to take it to the dealer for evaluation. I first tried scheduling an appointment via the website, and it reminded me how the web, once a marvel, is looking awfully clunky these days. There were lots of options for services to schedule, but it wasn’t at all clear which of them I might want.</p>
  1873.  
  1874.  
  1875.  
  1876. <figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1600" height="963" src="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/image.png" alt="Hyundai web interface" class="wp-image-17597" srcset="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/image.png 1600w, https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/image-300x181.png 300w, https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/image-768x462.png 768w, https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/image-1536x924.png 1536w" sizes="auto, (max-width: 1600px) 100vw, 1600px" /></figure>
  1877.  
  1878.  
  1879.  
  1880. <p>Not only that, I’d only reached this page after clicking through various promotions and testimonials about how great the dealership is—in short, content designed to serve the interests of the dealer rather than the interests of the customer. Eventually, I did find a free-form text field where I could describe the problem I actually wanted the appointment for. But then it pushed me to a scheduling page on which the first available appointment was six weeks away.</p>
  1881.  
  1882.  
  1883.  
  1884. <p>So I tried calling the service department directly, to see if I could get some indication of how urgent the problem might be. The phone was busy, and a pleasant chatbot came on offering to see if it might help. It was quite a wonderful experience. First, it had already identified my vehicle by its association with my phone number, and then asked what the problem was. I briefly explained, and it said, “Got it. Your EV service light is on, and you need to have it checked out.” Bingo! Then it asked me when I wanted to schedule the service, and I said, “I’m not sure. I don’t know how urgent the problem is.” Once again. “Got it. You don’t know how urgent the problem is. I’ll have a service advisor call you back.”</p>
  1885.  
  1886.  
  1887.  
  1888. <p>That was nearly a perfect customer service interaction! I was very pleased. And someone did indeed call me back shortly. Unfortunately, it wasn’t a service advisor; it was a poorly trained receptionist, who apparently hadn’t received the information collected by the chatbot, since she gathered all the same information, only far less efficiently. She had to ask for my phone number to look up the vehicle. Half the time she didn’t understand what I said and I had to repeat it, or I didn’t understand what she said, and had to ask her to repeat it. But eventually, we did get through to the point where I was offered an appointment this week.</p>
  1889.  
  1890.  
  1891.  
  1892. <p>This was not the only challenging customer service experience I’ve had recently. I’ve had a problem for months with my gas bill. I moved, and somehow they set up my new account wrong. My online account would only show my former address and gas bill. So I deleted the existing online account and tried to set up a new one, only to be told by the web interface that either the account number or the associated phone number did not exist.</p>
  1893.  
  1894.  
  1895.  
  1896. <p>Calling customer service was no help. They would look up the account number and verify both it and the phone number, and tell me that it should all be OK. But when I tried again, and it still didn’t work, they’d tell me that someone would look into it, fix the problem, and call me back when it was done. No one ever called. Not only that, I even got a plaintive letter from the gas company addressed to “Resident” asking that I contact them, because someone was clearly using gas at this address, but there was no account associated with it. But when I called back yet again and told them this, they could find no record of any such letter.</p>
  1897.  
  1898.  
  1899.  
  1900. <p>Finally, after calling multiple times, each time having to repeat the whole story (with no record apparently ever being kept of the multiple interactions on the gas company end), I wrote an email that said, essentially, “I’m going to stop trying to solve this problem. The ball is in your court. In the meantime, I will just assume that you are planning to provide me gas services for free.” At that point someone did call me back, and this time assured me that they had found and fixed the problem. We’ll see.</p>
  1901.  
  1902.  
  1903.  
  1904. <p>Both of these stories emphasize what a huge opportunity there is in customer service agents. But they also illustrate why, in the end, <a href="https://www.oreilly.com/radar/is-ai-a-normal-technology/" target="_blank" rel="noreferrer noopener">AI is a “normal technology.”</a> No matter how intelligent the AI powering the chatbot might be, it has to be integrated with the systems and the workflow of the organization that deploys it. And if that system or workflow is bad, it needs to be reengineered to make use of the new AI capabilities. You can’t build a new skyscraper on a crumbling foundation.</p>
  1905.  
  1906.  
  1907.  
  1908. <p>There was no chatbot at the gas company. I wish there had been. But it would only have made a difference if the information it collected was stored into records that were accessible to other AIs or humans working on the problem, if those assigned to the problem had the expertise to debug it, and if there were workflows in place to follow up. It is possible to imagine a future where an AI customer service assistant could have actually fixed the problem, but I suspect that it will be a long time before edge cases like corrupted records are solved automatically.</p>
  1909.  
  1910.  
  1911.  
  1912. <p>And even with the great chatbot at the Hyundai dealer, it didn’t do much to change my overall customer experience, because it wasn’t properly integrated with the workflow at the dealership. The information the chatbot had collected wasn’t passed on to the appropriate human, so most of the value was lost.</p>
  1913.  
  1914.  
  1915.  
  1916. <p>That suggests that the problems that face us in advancing AI are not just making the machines smarter but figuring out how to integrate them with existing systems. We may eventually get to the point where AI-enabled workflows are the norm, and companies have figured out how to retool themselves, but it’s not going to be an easy process or a quick one.</p>
  1917.  
  1918.  
  1919.  
  1920. <p>And that leads me to the title of this piece. What is the competitive moat if intelligence becomes a commodity? There are many moats waiting to be discovered, but I am sure that one of them will be integration into human systems and workflows. The company that gets this right for a given industry will have an advantage for a surprisingly long time to come.</p>
  1921. ]]></content:encoded>
  1922. <wfw:commentRss>https://www.oreilly.com/radar/integration-is-the-new-moat/feed/</wfw:commentRss>
  1923. <slash:comments>0</slash:comments>
  1924. </item>
  1925. <item>
  1926. <title>Code Generation and the Shifting Value of Software</title>
  1927. <link>https://www.oreilly.com/radar/code-generation-and-the-shifting-value-of-software/</link>
  1928. <pubDate>Thu, 23 Oct 2025 11:14:26 +0000</pubDate>
  1929. <dc:creator><![CDATA[Tim O'Brien]]></dc:creator>
  1930. <category><![CDATA[AI & ML]]></category>
  1931. <category><![CDATA[Commentary]]></category>
  1932.  
  1933. <guid isPermaLink="false">https://www.oreilly.com/radar/?p=17582</guid>
  1934.  
  1935. <media:content
  1936. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Abstract-lights-3.jpg"
  1937. medium="image"
  1938. type="image/jpeg"
  1939. width="2304"
  1940. height="1792"
  1941. />
  1942.  
  1943. <media:thumbnail
  1944. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Abstract-lights-3-160x160.jpg"
  1945. width="160"
  1946. height="160"
  1947. />
  1948. <description><![CDATA[This article originally appeared on Medium. Tim O’Brien has given us permission to repost here on Radar. One of the most unexpected changes in software development right now comes from code generation. We’ve all known that it could speed up certain kinds of work, but what’s becoming clear is that it also reshapes the economics [&#8230;]]]></description>
  1949. <content:encoded><![CDATA[
  1950. <p class="has-cyan-bluish-gray-background-color has-background"><em>This article originally appeared on </em><a href="https://medium.com/@tobrien/code-generation-and-the-shifting-value-of-software-0c64cfc91adc" target="_blank" rel="noreferrer noopener"><em>Medium</em></a><em>. Tim O’Brien has given us permission to repost here on Radar.</em></p>
  1951.  
  1952.  
  1953.  
  1954. <p>One of the most unexpected changes in software development right now comes from code generation. We’ve all known that it could speed up certain kinds of work, but what’s becoming clear is that it also reshapes the economics of libraries, frameworks, and even the way we think about open source.</p>
  1955.  
  1956.  
  1957.  
  1958. <p>Just to be clear, I don’t view this as a threat to the employment of developers. I think we’ll end up needing more developers, and I also think that more people will start to consider themselves developers. But I do think that there are practices that are expiring:</p>
  1959.  
  1960.  
  1961.  
  1962. <ol class="wp-block-list">
  1963. <li><strong>Purchasing software</strong>—It will become more challenging to sell software unless it provides a compelling and difficult-to-reproduce product.</li>
  1964.  
  1965.  
  1966.  
  1967. <li><strong>Adopting open source frameworks</strong>—Don’t get me wrong, open source will continue to play a role, but there’s going to be more of it, and there will be fewer “star stage” projects.</li>
  1968.  
  1969.  
  1970.  
  1971. <li><strong>Software architects</strong>—Again, I’m not saying that we won’t have software architects, but the human process of considering architecture alternatives and having very expensive discussions about abstractions is already starting to disappear.</li>
  1972. </ol>
  1973.  
  1974.  
  1975.  
  1976. <h2 class="wp-block-heading"><strong>Why Are You Paying for That?</strong></h2>
  1977.  
  1978.  
  1979.  
  1980. <p>Take paid libraries as an example. For years, developers paid for specific categories of software simply because they solved problems that felt tedious or complex to recreate. A table renderer with pagination, custom cell rendering, and filtering might have justified a license fee because of the time it saved. What developer wants to stop and rewrite the pagination logic for that React table library?</p>
  1981.  
  1982.  
  1983.  
  1984. <p>Lately, I’ve started answering, “me.” Instead of upgrading the license and paying some ridiculous per-developer fee, why not just ask Claude Sonnet to “render this component with an HTML table that also supports on-demand pagination”? At first, it feels like a mistake, but then you realize it’s cheaper and faster to ask a generative model to write a tailored implementation for that table—and it’s simpler.</p>
  1985.  
  1986.  
  1987.  
  1988. <p>Most developers who buy software libraries end up using one or two features, while most of the library’s surface area goes untouched. Flipping the switch and moving to a simpler custom approach makes your build cleaner. (I know some of you pay for a very popular React component library with a widespread table implementation that recently raised prices. I also know some of you started asking, “Do I really need this?”)</p>
  1989.  
  1990.  
  1991.  
  1992. <p>If you can point your IDE at it and say, “Hey, can you implement this in HTML with some simple JavaScript?” and it generates flawless code in five minutes—why wouldn’t you? The next question becomes: Will library creators start adding new legal clauses to lock you in? (My prediction: That’s next.)</p>
  1993.  
  1994.  
  1995.  
  1996. <p>The moat around specific, specialized libraries keeps shrinking. If you can answer “Can I just replace that?” in five minutes, then replace it.</p>
  1997.  
  1998.  
  1999.  
  2000. <h2 class="wp-block-heading"><strong>Did You Need That Library?</strong></h2>
  2001.  
  2002.  
  2003.  
  2004. <p>This same shift also touches open source. Many of the libraries we use came out of long-term community efforts to solve straightforward problems. Logging illustrates this well: Packages like Log4j or Winston exist because developers needed consistent logging across projects. However, most teams utilize only a fraction of that functionality. These days, generating a lightweight logging library with exactly the levels and formatting you need often proves easier.</p>
  2005.  
  2006.  
  2007.  
  2008. <p>Although adopting a shared library still offers interoperability benefits, the balance tilts toward custom solutions. I just needed to format logs in a standard way. Instead of adding a dependency, we wrote a 200-line internal library. Done.</p>
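<p>As a rough illustration (not a claim about what the author&#8217;s team actually wrote), the core of such a pared-down internal logger can fit on a page:</p>

<pre class="wp-block-code"><code>import java.io.PrintStream;
import java.time.Instant;

/** A deliberately tiny internal logger: levels, timestamps, one format; nothing else. */
public final class Log {

    public enum Level { DEBUG, INFO, WARN, ERROR }

    private static volatile Level threshold = Level.INFO;
    private static final PrintStream OUT = System.out;

    public static void setThreshold(Level level) { threshold = level; }

    public static void debug(String msg) { log(Level.DEBUG, msg); }
    public static void info(String msg)  { log(Level.INFO, msg); }
    public static void warn(String msg)  { log(Level.WARN, msg); }
    public static void error(String msg) { log(Level.ERROR, msg); }

    private static void log(Level level, String msg) {
        if (level.ordinal() &lt; threshold.ordinal()) {
            return; // below the configured threshold
        }
        // One standard line format, e.g. "2025-10-23T11:14:26Z [INFO] message"
        OUT.printf("%s [%s] %s%n", Instant.now(), level, msg);
    }

    private Log() {}
}</code></pre>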
  2009.  
  2010.  
  2011.  
  2012. <p>Five years ago, that might have sounded wild. Why rewrite Winston? But once you see the level of complexity these libraries carry, and you realize Claude Opus can generate that same logging library to your exact specifications in five minutes, the whole discussion shifts. Again, I’m not saying you should drop everything and craft your own logging library. But look at the 100 dependencies you have in your software—some of them add complexity you’ll never use.</p>
  2013.  
  2014.  
  2015.  
  2016. <h2 class="wp-block-heading"><strong>Say Goodbye to “Let’s Think About”</strong></h2>
  2017.  
  2018.  
  2019.  
  2020. <p>Another subtle change shows up in how we solve problems. In the past, a new requirement meant pausing to consider the architecture, interfaces, or patterns before implementing anything. Increasingly, I delegate that “thinking” step to a model. It runs in parallel, proposing solutions while I evaluate and refine. The time between idea and execution keeps shrinking. Instead of carefully choosing among frameworks or libraries, I can ask for a bespoke implementation and iterate from there.</p>
  2021.  
  2022.  
  2023.  
  2024. <p>Compare that to five years ago. Back then, you assembled your most senior engineers and architects to brainstorm an approach. That still happens, but more often today, you end up discussing the output of five or six independent models that have already generated solutions. You discuss outcomes of models, not ideas for abstractions.</p>
  2025.  
  2026.  
  2027.  
  2028. <p>The bigger implication: Entire categories of software may lose relevance. I’ve spent years working on open source libraries like Jakarta Commons—collections of utilities that solved countless minor problems. Those projects may no longer matter when developers can write simple functionality on demand. Even build tools face this shift. Maven, for example, once justified an ecosystem of training and documentation. But in the future, documenting your build system in a way that a generative model can understand might prove more useful than teaching people how to use Maven.</p>
  2029.  
  2030.  
  2031.  
  2032. <h2 class="wp-block-heading"><strong>The Common Thread</strong></h2>
  2033.  
  2034.  
  2035.  
  2036. <p>The pattern across all of this is simple: Software generation makes it harder to justify paying for prepackaged solutions. Both proprietary and open source libraries lose value when it’s faster to generate something custom. Direct automation displaces tooling and frameworks. Frameworks existed to capture standard code that generative models can now produce on demand.</p>
  2037.  
  2038.  
  2039.  
  2040. <p>As a result, the future may hold more custom-built code and fewer compromises to fit preexisting systems. In short, code generation doesn’t just speed up development—it fundamentally changes what’s worth building, buying, and maintaining.</p>
  2041. ]]></content:encoded>
  2042. </item>
  2043. <item>
  2044. <title>AI Is Reshaping Developer Career Paths</title>
  2045. <link>https://www.oreilly.com/radar/ai-is-reshaping-developer-career-paths/</link>
  2046. <pubDate>Wed, 22 Oct 2025 11:14:11 +0000</pubDate>
  2047. <dc:creator><![CDATA[Andrew Stellman]]></dc:creator>
  2048. <category><![CDATA[AI & ML]]></category>
  2049. <category><![CDATA[Commentary]]></category>
  2050.  
  2051. <guid isPermaLink="false">https://www.oreilly.com/radar/?p=17579</guid>
  2052.  
  2053. <media:content
  2054. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Shift-button.png"
  2055. medium="image"
  2056. type="image/png"
  2057. width="1080"
  2058. height="1080"
  2059. />
  2060.  
  2061. <media:thumbnail
  2062. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Shift-button-160x160.png"
  2063. width="160"
  2064. height="160"
  2065. />
  2066. <custom:subtitle><![CDATA[From Specialists to Generalists]]></custom:subtitle>
  2067. <description><![CDATA[This article is part of a series on the Sens-AI Framework—practical habits for learning and coding with AI. Read the original framework introduction and explore the complete methodology in Andrew Stellman&#8217;s O&#8217;Reilly report Critical Thinking Habits for Coding with AI. A few decades ago, I worked with a developer who was respected by everyone on our team. Much [&#8230;]]]></description>
  2068. <content:encoded><![CDATA[
  2069. <p class="has-cyan-bluish-gray-background-color has-background"><em>This article is part of a series on the Sens-AI Framework—practical habits for learning and coding with AI. Read the <a href="https://www.oreilly.com/radar/the-sens-ai-framework/" target="_blank" rel="noreferrer noopener">original framework introduction</a> and explore the complete methodology in Andrew Stellman&#8217;s O&#8217;Reilly report </em><a href="https://learning.oreilly.com/library/view/critical-thinking-habits/0642572243326/" target="_blank" rel="noreferrer noopener">Critical Thinking Habits for Coding with AI</a><em>.</em></p>
  2070.  
  2071.  
  2072.  
  2073. <p>A few decades ago, I worked with a developer who was respected by everyone on our team. Much of that respect came from the fact that he kept adopting new technologies that none of us had worked with. There was a cutting-edge language at the time that few people were using, and he built an entire feature with it. He quickly became known as the person you&#8217;d go to for these niche technologies, and it earned him a lot of respect from the rest of the team.</p>
  2074.  
  2075.  
  2076.  
  2077. <p>Years later, I worked with another developer who went out of his way to incorporate specific, obscure .NET libraries into his code. That too got him recognition from our team members and managers, and he was viewed as a senior developer in part because of his expertise with these specialized tools.</p>
  2078.  
  2079.  
  2080.  
  2081. <p>Both developers built their reputations on deep knowledge of specific technologies. It was a reliable career strategy that worked for decades: Become the expert in something valuable but not widely known, and you&#8217;d have authority on your team and an edge in job interviews.</p>
  2082.  
  2083.  
  2084.  
  2085. <p>But AI is changing that dynamic in ways we&#8217;re just starting to see.</p>
  2086.  
  2087.  
  2088.  
  2089. <p>In the past, experienced developers could build deep expertise in a single technology (like Rails or React, for example) and that expertise would consistently get them recognition on their team and help them stand out in reviews and job interviews. It used to take months or years of working with a specific framework before a developer could write <em>idiomatic code</em>, or code that follows the accepted patterns and best practices of that technology.</p>
  2090.  
  2091.  
  2092.  
  2093. <p>But now AI models are trained on countless examples of idiomatic code, so developers without that experience can generate similar code immediately. That puts less of a premium on the time spent developing that deep expertise.</p>
  2094.  
  2095.  
  2096.  
  2097. <h2 class="wp-block-heading"><strong>The Shift Toward Generalist Skills</strong></h2>
  2098.  
  2099.  
  2100.  
  2101. <p>That change is reshaping career paths in ways we&#8217;re just starting to see. The traditional approach worked for decades, but as AI fills in more of that specialized knowledge, the career advantage is shifting toward people who can integrate across systems and spot design problems early.</p>
  2102.  
  2103.  
  2104.  
  2105. <p>As I&#8217;ve trained developers and teams who are increasingly adopting AI coding tools, I&#8217;ve noticed that the developers who adapt best aren&#8217;t always the ones with the deepest expertise in a specific framework. Rather, they&#8217;re the ones who can spot when something looks wrong, integrate across different systems, and recognize patterns. Most importantly, they can apply those skills even when they&#8217;re not deep experts in the particular technology they&#8217;re working with.</p>
  2106.  
  2107.  
  2108.  
  2109. <p>This represents a shift from the more traditional dynamic on teams, where being an expert in a specific technology (like being the &#8220;Rails person&#8221; or the &#8220;React expert&#8221; on the team) carried real authority. AI now fills in much of that specialized knowledge. You can still build a career on deep Rails knowledge, but thanks to AI, it doesn&#8217;t always carry the same authority on a team that it once did.</p>
  2110.  
  2111.  
  2112.  
  2113. <h2 class="wp-block-heading"><strong>What AI Still Can&#8217;t Do</strong></h2>
  2114.  
  2115.  
  2116.  
  2117. <p>Both new and experienced developers routinely find themselves accumulating technical debt, especially when deadlines push delivery over maintainability, and this is an area where experienced engineers often distinguish themselves, even on a team with wide AI adoption. The key difference is that an experienced developer often knows they&#8217;re taking on debt. They can spot antipatterns early because they&#8217;ve seen them repeatedly and take steps to &#8220;pay off&#8221; the debt before it gets much more expensive to fix.</p>
  2118.  
  2119.  
  2120.  
  2121. <p>But AI is also changing the game for experienced developers in ways that go beyond technical debt management, and it&#8217;s starting to reshape their traditional career paths. What AI still can&#8217;t do is tell you when a design or architecture decision today will cause problems six months from now, or when you&#8217;re writing code that doesn&#8217;t actually solve the user&#8217;s problem. That&#8217;s why being a generalist, with skills in architecture, design patterns, requirements analysis, and even project management, is becoming more valuable on software teams.</p>
  2122.  
  2123.  
  2124.  
  2125. <p>Many developers I see thriving with AI tools are the ones who can:</p>
  2126.  
  2127.  
  2128.  
  2129. <ul class="wp-block-list">
  2130. <li><strong>Recognize when generated code will create maintenance problems</strong> even if it works initially</li>
  2131.  
  2132.  
  2133.  
  2134. <li><strong>Integrate across multiple systems</strong> without being deep experts in each one</li>
  2135.  
  2136.  
  2137.  
  2138. <li><strong>Spot architectural patterns and antipatterns</strong> regardless of the specific technology</li>
  2139.  
  2140.  
  2141.  
  2142. <li><strong>Frame problems clearly</strong> so AI can generate more useful solutions</li>
  2143.  
  2144.  
  2145.  
  2146. <li><strong>Question and refine AI output</strong> rather than accepting it as is</li>
  2147. </ul>
  2148.  
  2149.  
  2150.  
  2151. <h2 class="wp-block-heading"><strong>Practical Implications for Your Career</strong></h2>
  2152.  
  2153.  
  2154.  
  2155. <p>This shift has real implications for how developers think about career development:</p>
  2156.  
  2157.  
  2158.  
  2159. <p><strong>For experienced developers:</strong> Your years of expertise are still important and valuable, but the career advantage is shifting from &#8220;I know this specific tool really well&#8221; to &#8220;I can solve complex problems across different technologies.&#8221; Focus on building skills in system design, integration, and pattern recognition that apply broadly.</p>
  2160.  
  2161.  
  2162.  
  2163. <p><strong>For early-career developers:</strong> The temptation might be to rely on AI to fill knowledge gaps, but this can be dangerous. Those broader skills—architecture, design judgment, problem-solving across domains—typically require years of hands-on experience to develop. Use AI as a tool, but make sure you&#8217;re still building the fundamental thinking skills that let you guide it effectively.</p>
  2164.  
  2165.  
  2166.  
  2167. <p><strong>For teams:</strong> Look for people who can adapt to new technologies quickly and integrate across systems, not just deep specialists. The &#8220;Rails person&#8221; might still be valuable, but the person who can work with Rails, integrate it with three other systems, and spot when the architecture is heading for trouble six months down the line is becoming more valuable.</p>
  2168.  
  2169.  
  2170.  
  2171. <p>The developers who succeed in an AI-enabled world won&#8217;t always be the ones who know the most about any single technology. They&#8217;ll be the ones who can see the bigger picture, integrate across systems, and use AI as a powerful tool while maintaining the critical thinking necessary to guide it toward genuinely useful solutions.</p>
  2172.  
  2173.  
  2174.  
  2175. <p>AI isn&#8217;t replacing developers. It&#8217;s changing what kinds of developer skills matter most.</p>
  2176. ]]></content:encoded>
  2177. </item>
  2178. <item>
  2179. <title>The Java Developer’s Dilemma: Part 2</title>
  2180. <link>https://www.oreilly.com/radar/the-java-developers-dilemma-part-2/</link>
  2181. <pubDate>Tue, 21 Oct 2025 11:17:33 +0000</pubDate>
  2182. <dc:creator><![CDATA[Markus Eisele]]></dc:creator>
  2183. <category><![CDATA[AI & ML]]></category>
  2184. <category><![CDATA[Commentary]]></category>
  2185.  
  2186. <guid isPermaLink="false">https://www.oreilly.com/radar/?p=17572</guid>
  2187.  
  2188. <media:content
  2189. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Abstract-fractal-drops-1.jpg"
  2190. medium="image"
  2191. type="image/jpeg"
  2192. width="2304"
  2193. height="1792"
  2194. />
  2195.  
  2196. <media:thumbnail
  2197. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Abstract-fractal-drops-1-160x160.jpg"
  2198. width="160"
  2199. height="160"
  2200. />
  2201. <custom:subtitle><![CDATA[New Types of Applications]]></custom:subtitle>
  2202. <description><![CDATA[This is the second of a three-part series by Markus Eisele. Part 1 can be found here. Stay tuned for part 3. Many AI projects fail. The reason is often simple. Teams try to rebuild last decade’s applications but add AI on top: A CRM system with AI. A chatbot with AI. A search engine [&#8230;]]]></description>
  2203. <content:encoded><![CDATA[
  2204. <p class="has-cyan-bluish-gray-background-color has-background"><em>This is the second of a three-part series by Markus Eisele. Part 1 can be found </em><a href="https://www.oreilly.com/radar/the-java-developers-dilemma-part-1/" target="_blank" rel="noreferrer noopener"><em>here</em></a><em>. Stay tuned for part 3.</em></p>
  2205.  
  2206.  
  2207.  
  2208. <p>Many AI projects fail. The reason is often simple. Teams try to rebuild last decade’s applications but add AI on top: A CRM system with AI. A chatbot with AI. A search engine with AI. The pattern is the same: “X, but now with AI.” These projects usually look fine in a demo, but they rarely work in production. The problem is that AI doesn’t just extend old systems. It changes what applications are and how they behave. If we treat AI as a bolt-on, we miss the point.</p>
  2209.  
  2210.  
  2211.  
  2212. <h2 class="wp-block-heading">What AI Changes in Application Design</h2>
  2213.  
  2214.  
  2215.  
  2216. <p>Traditional enterprise applications are built around deterministic workflows. A service receives input, applies business logic, stores or retrieves data, and responds. If the input is the same, the output is the same. Reliability comes from predictability.</p>
  2217.  
  2218.  
  2219.  
  2220. <p>AI changes this model. Outputs are probabilistic. The same question asked twice may return two different answers. Results depend heavily on context and prompt structure. Applications now need to manage data retrieval, context building, and memory across interactions. They also need mechanisms to validate and control what comes back from a model. In other words, the application is no longer just code plus a database. It’s code plus a reasoning component with uncertain behavior. That shift makes “AI add-ons” fragile and points to a need for entirely new designs.</p>
  2221.  
  2222.  
  2223.  
  2224. <h2 class="wp-block-heading">Defining AI-Infused Applications</h2>
  2225.  
  2226.  
  2227.  
  2228. <p>AI-infused applications aren’t just old applications with smarter text boxes. They have new structural elements:</p>
  2229.  
  2230.  
  2231.  
  2232. <ul class="wp-block-list">
2233. <li><strong>Context pipelines</strong>: Systems need to assemble inputs before passing them to a model. This often includes retrieval-augmented generation (RAG), where enterprise data is searched and embedded into the prompt, but also hierarchical, per-user memory.</li>
  2234.  
  2235.  
  2236.  
2237. <li><strong>Memory</strong>: Applications need to persist context across interactions. Without memory, conversations reset on every request. And this memory might need to be stored in different ways: in-process, midterm, and even long-term. Who wants to start every support conversation by repeating their name and purchased products over and over again?</li>
  2238.  
  2239.  
  2240.  
  2241. <li><strong>Guardrails</strong>: Outputs must be checked, validated, and filtered. Otherwise, hallucinations or malicious responses leak into business workflows.</li>
  2242.  
  2243.  
  2244.  
2245. <li><strong>Agents</strong>: Complex tasks often require coordination. An agent can break down a request; call multiple tools, APIs, or even other agents, in parallel or sequentially; and assemble complex results. Instead of being workflow driven, agents are goal driven: They try to produce a result that satisfies a request. Even <a href="https://en.wikipedia.org/wiki/Business_Process_Model_and_Notation" target="_blank" rel="noreferrer noopener">Business Process Model and Notation</a> (BPMN) is turning toward goal- and context-oriented agent design.</li>
  2246. </ul>
  2247.  
  2248.  
  2249.  
  2250. <p>These are not theoretical. They’re the building blocks we already see in modern AI systems. What’s important for Java developers is that they can be expressed as familiar architectural patterns: pipelines, services, and validation layers. That makes them approachable even though the underlying behavior is new.</p>
  2251.  
  2252.  
  2253.  
  2254. <h2 class="wp-block-heading">Models as Services, Not Applications</h2>
  2255.  
  2256.  
  2257.  
2258. <p>One foundational thought: AI models should not be part of the application binary. They are services. Whether they&#8217;re served from a local container, via vLLM, by a model cloud provider, or on private infrastructure, the model is consumed through a service boundary. For enterprise Java developers, this is familiar territory. We have decades of experience consuming external services through fast protocols, handling retries, applying backpressure, and building resilience into service calls. We know how to build clients that survive transient errors, timeouts, and version mismatches. This experience is directly relevant when the &#8220;service&#8221; happens to be a model endpoint rather than a database or messaging broker.</p>
  2259.  
  2260.  
  2261.  
  2262. <p>By treating the model as a service, we avoid a major source of fragility. Applications can evolve independently of the model. If you need to swap a local Ollama model for a cloud-hosted GPT or an internal Jlama deployment, you change configuration, not business logic. This separation is one of the reasons enterprise Java is well positioned to build AI-infused systems.</p>
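<p>A minimal sketch of that service boundary, with every name invented for illustration: Business logic depends on a small interface, and configuration decides which implementation gets wired in:</p>

<pre class="wp-block-code"><code>/** The only thing business logic knows about: a boundary, not a model. */
public interface ChatModel {
    String chat(String prompt);
}

/** Talks to a local model endpoint (e.g. Ollama); details stay behind the boundary. */
class LocalChatModel implements ChatModel {
    public String chat(String prompt) {
        return "placeholder"; // HTTP call to the local model server would go here
    }
}

/** Talks to a hosted provider; swapping it in is a configuration change. */
class HostedChatModel implements ChatModel {
    public String chat(String prompt) {
        return "placeholder"; // HTTPS call to the provider would go here
    }
}

/** Resolves the implementation from configuration instead of hardcoding it. */
final class ChatModels {
    static ChatModel fromConfig(String provider) {
        if ("local".equals(provider)) {
            return new LocalChatModel();
        }
        if ("hosted".equals(provider)) {
            return new HostedChatModel();
        }
        throw new IllegalArgumentException("Unknown model provider: " + provider);
    }
}</code></pre>

<p>Frameworks like Quarkus and LangChain4j provide this kind of indirection out of the box, so in practice the &#8220;switch&#8221; is usually a configuration property rather than hand-rolled code.</p>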
  2263.  
  2264.  
  2265.  
  2266. <h2 class="wp-block-heading">Java Examples in Practice</h2>
  2267.  
  2268.  
  2269.  
  2270. <p>The Java ecosystem is beginning to support these ideas with concrete tools that address enterprise-scale requirements rather than toy examples.</p>
  2271.  
  2272.  
  2273.  
  2274. <ul class="wp-block-list">
  2275. <li><strong>Retrieval-augmented generation (RAG)</strong>: Context-driven retrieval is the most common pattern for grounding model answers in enterprise data. At scale this means structured ingestion of documents, PDFs, spreadsheets, and more into vector stores. Projects like <a href="https://github.com/docling-project/docling" target="_blank" rel="noreferrer noopener">Docling</a> handle parsing and transformation, and <a href="https://docs.langchain4j.dev/" target="_blank" rel="noreferrer noopener">LangChain4j</a> provides the abstractions for embedding, retrieval, and ranking. Frameworks such as <a href="https://quarkus.io/" target="_blank" rel="noreferrer noopener">Quarkus</a> then extend those concepts into production-ready services with dependency injection, configuration, and observability. The combination moves RAG from a demo pattern into a reliable enterprise feature.</li>
  2276. </ul>
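<p>To make the ingestion half concrete, a minimal sketch using LangChain4j&#8217;s ingestion API (package names shift between releases; the in-memory store and small local embedding model stand in for production choices):</p>

<pre class="wp-block-code"><code>import dev.langchain4j.data.document.Document;
import dev.langchain4j.data.document.splitter.DocumentSplitters;
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.embedding.onnx.allminilml6v2.AllMiniLmL6V2EmbeddingModel;
import dev.langchain4j.store.embedding.EmbeddingStoreIngestor;
import dev.langchain4j.store.embedding.inmemory.InMemoryEmbeddingStore;

public class IngestDocs {
    public static void main(String[] args) {
        var embeddingModel = new AllMiniLmL6V2EmbeddingModel();
        var store = new InMemoryEmbeddingStore&lt;TextSegment&gt;();

        // Chunk, embed, and index one document. In a real pipeline the text
        // would come from a parser such as Docling rather than a literal.
        var ingestor = EmbeddingStoreIngestor.builder()
                .documentSplitter(DocumentSplitters.recursive(300, 30))
                .embeddingModel(embeddingModel)
                .embeddingStore(store)
                .build();

        ingestor.ingest(Document.from("Text extracted from an enterprise PDF..."));
    }
}</code></pre>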
  2277.  
  2278.  
  2279.  
  2280. <ul class="wp-block-list">
2281. <li><strong>LangChain4j as a standard abstraction</strong>: LangChain4j is emerging as a common layer across frameworks. It offers CDI integration for <a href="https://github.com/langchain4j/langchain4j-cdi" target="_blank" rel="noreferrer noopener">Jakarta EE</a> and <a href="https://docs.quarkiverse.io/quarkus-langchain4j/dev/" target="_blank" rel="noreferrer noopener">extensions for Quarkus</a> but also supports Spring, Micronaut, and Helidon. Instead of writing fragile, low-level OpenAPI glue code for each provider, developers define AI services as interfaces and let the framework handle the wiring. This standardization is also beginning to cover agentic modules, so orchestration across multiple tools or APIs can be expressed in a framework-neutral way. (A sketch of such an interface, with the configuration that selects the provider, follows this list.)<br></li>
  2282.  
  2283.  
  2284.  
2285. <li><strong>Cloud to on-prem portability</strong>: In enterprises, portability and control matter. Abstractions make it easier to switch between cloud-hosted providers and on-premises deployments. With LangChain4j, you can change configuration to point from a cloud LLM to a local Jlama model or Ollama instance without rewriting business logic. These abstractions also make it easier to adopt a larger number of smaller, domain-specific models while maintaining consistent behavior across environments. For enterprises, this is critical to balancing innovation with control.</li>
  2286. </ul>
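<p>A minimal sketch of both points, assuming the quarkus-langchain4j extension (the annotation is real; the exact configuration keys vary by extension version and are shown here as an assumption):</p>

<pre class="wp-block-code"><code>import dev.langchain4j.service.SystemMessage;
import io.quarkiverse.langchain4j.RegisterAiService;

// Business code depends only on this interface; the framework generates
// the implementation and wires in whichever model is configured.
@RegisterAiService
public interface SupportAssistant {

    @SystemMessage("You are a concise assistant for internal support tickets.")
    String answer(String question);
}</code></pre>

<p>Switching from a cloud model to a local one is then a configuration change, not a code change:</p>

<pre class="wp-block-code"><code># application.properties (keys are illustrative; check your extension version)
quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY}
quarkus.langchain4j.openai.chat-model.model-name=gpt-4o-mini

# ...or point the same interface at a local Ollama instance instead:
# quarkus.langchain4j.ollama.base-url=http://localhost:11434
# quarkus.langchain4j.ollama.chat-model.model-id=llama3.1</code></pre>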
  2287.  
  2288.  
  2289.  
  2290. <p>These examples show how Java frameworks are taking AI integration from low-level glue code toward reusable abstractions. The result is not only faster development but also better portability, testability, and long-term maintainability.</p>
  2291.  
  2292.  
  2293.  
  2294. <h2 class="wp-block-heading">Testing AI-Infused Applications</h2>
  2295.  
  2296.  
  2297.  
  2298. <p>Testing is where AI-infused applications diverge most sharply from traditional systems. In deterministic software, we write unit tests that confirm exact results. With AI, outputs vary, so testing has to adapt. The answer is not to stop testing but to broaden how we define it.</p>
  2299.  
  2300.  
  2301.  
  2302. <ul class="wp-block-list">
  2303. <li><strong>Unit tests</strong>: Deterministic parts of the system—context builders, validators, database queries—are still tested the same way. Guardrail logic, which enforces schema correctness or policy compliance, is also a strong candidate for unit tests.</li>
  2304.  
  2305.  
  2306.  
  2307. <li><strong>Integration tests</strong>: AI models should be tested as opaque systems. You feed in a set of prompts and check that outputs meet defined boundaries: JSON is valid, responses contain required fields, values are within expected ranges.</li>
  2308.  
  2309.  
  2310.  
  2311. <li><strong>Prompt testing</strong>: Enterprises need to track how prompts perform over time. Variation testing with slightly different inputs helps expose weaknesses. This should be automated and included in the CI pipeline, not left to ad hoc manual testing.</li>
  2312. </ul>
  2313.  
  2314.  
  2315.  
  2316. <p>Because outputs are probabilistic, tests often look like assertions on structure, ranges, or presence of warning signs rather than exact matches. Hamel Husain stresses that specification-based testing with curated prompt sets is essential, and that <a href="http://hamel.dev/blog/posts/evals-faq" target="_blank" rel="noreferrer noopener">evaluations should be problem-specific rather than generic</a>. This aligns well with Java practices: We design integration tests around known inputs and expected boundaries, not exact strings. Over time, this produces confidence that the AI behaves within defined boundaries, even if specific sentences differ.</p>
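<p>Concretely, such a test can be ordinary JUnit code. A sketch, assuming a hypothetical <code>RiskAssistant</code> AI service that returns a JSON risk assessment (the service and fixture names are made up; the assertion style is the point):</p>

<pre class="wp-block-code"><code>import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertTrue;

class RiskAssistantBoundaryTest {

    // Hypothetical AI service under test (e.g., built with AiServices).
    RiskAssistant assistant = TestFixtures.riskAssistant();

    @Test
    void outputStaysWithinDefinedBoundaries() throws Exception {
        String json = assistant.assess("Customer requests a credit limit increase.");

        // Valid JSON with the required field...
        JsonNode node = new ObjectMapper().readTree(json);
        assertTrue(node.hasNonNull("riskScore"), "riskScore is required");

        // ...and values inside the expected range, not an exact string match.
        double score = node.get("riskScore").asDouble();
        assertTrue(score &gt;= 0.0, "riskScore must not be negative");
        assertTrue(score &lt;= 1.0, "riskScore must not exceed 1.0");
    }
}</code></pre>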
  2317.  
  2318.  
  2319.  
  2320. <h2 class="wp-block-heading">Collaboration with Data Science</h2>
  2321.  
  2322.  
  2323.  
2324. <p>Another dimension of testing is collaboration with data scientists. Models aren’t static. They can drift as training data changes or as providers update versions. Java teams cannot ignore this. We need methodologies that surface warning signs: sudden drops in accuracy on known inputs or unexpected changes in response style. These signals need to be fed back into monitoring systems that span both the data science and the application sides.</p>
  2325.  
  2326.  
  2327.  
  2328. <p>This requires closer collaboration between application developers and data scientists than most enterprises are used to. Developers must expose signals from production (logs, metrics, traces) to help data scientists diagnose drift. Data scientists must provide datasets and evaluation criteria that can be turned into automated tests. Without this feedback loop, drift goes unnoticed until it becomes a business incident.</p>
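<p>Exposing those signals can start with instrumenting the model client itself. A sketch using Micrometer (the metric names are invented for illustration):</p>

<pre class="wp-block-code"><code>import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;

import java.util.function.Supplier;

public class InstrumentedModelCalls {

    private final MeterRegistry registry;

    public InstrumentedModelCalls(MeterRegistry registry) {
        this.registry = registry;
    }

    // Wrap every model call so data scientists can correlate drift with
    // production behavior: latency, response size, guardrail rejections.
    public String call(Supplier&lt;String&gt; modelCall) {
        Timer timer = registry.timer("model.call.latency");
        String response = timer.record(modelCall);
        registry.summary("model.response.length").record(response.length());
        return response;
    }

    public void recordGuardrailRejection() {
        registry.counter("model.guardrail.rejections").increment();
    }
}</code></pre>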
  2329.  
  2330.  
  2331.  
  2332. <p>Domain experts play a central role here. Looking back at Husain, he points out that <a href="https://hamel.dev/blog/posts/evals" target="_blank" rel="noreferrer noopener">automated metrics often fail to capture user-perceived quality</a>. Java developers shouldn’t leave evaluation criteria to data scientists alone. Business experts need to help define what “good enough” means in their context. A clinical assistant has very different correctness criteria than a customer service bot. Without domain experts, AI-infused applications risk delivering the wrong things.</p>
  2333.  
  2334.  
  2335.  
  2336. <h2 class="wp-block-heading">Guardrails and Sensitive Data</h2>
  2337.  
  2338.  
  2339.  
2340. <p>Guardrails belong under testing as well. For example, an enterprise system should never return personally identifiable information (PII) unless explicitly authorized. Tests must simulate cases where PII could be exposed and confirm that guardrails block those outputs. This is not optional. Scrubbing PII from training data is a best practice on the model side, but RAG and memory in particular carry a high risk of personally identifiable information being carried across boundaries at runtime. Regulatory frameworks like GDPR and HIPAA already enforce strict requirements. Enterprises must prove that AI components respect these boundaries, and testing is the way to demonstrate it.</p>
  2341.  
  2342.  
  2343.  
  2344. <p>By treating guardrails as testable components, not ad hoc filters, we raise their reliability. Schema checks, policy enforcement, and PII filters should all have automated tests just like database queries or API endpoints. This reinforces the idea that AI is part of the application, not a mysterious bolt-on.</p>
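<p>A deliberately naive sketch of that idea (a production system would use a dedicated PII detector; the shape, a filter class plus unit tests, is what matters):</p>

<pre class="wp-block-code"><code>import java.util.regex.Pattern;

import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

class PiiGuardrailTest {

    /** Illustrative guardrail: rejects outputs that look like they contain PII. */
    static class PiiGuardrail {
        private static final Pattern EMAIL = Pattern.compile("[\\w.+-]+@[\\w-]+\\.[\\w.]+");
        private static final Pattern US_SSN = Pattern.compile("\\b\\d{3}-\\d{2}-\\d{4}\\b");

        boolean allows(String modelOutput) {
            if (EMAIL.matcher(modelOutput).find()) return false;
            return !US_SSN.matcher(modelOutput).find();
        }
    }

    @Test
    void blocksOutputsContainingAnSsn() {
        assertFalse(new PiiGuardrail().allows("The customer's SSN is 123-45-6789."));
    }

    @Test
    void allowsCleanOutputs() {
        assertTrue(new PiiGuardrail().allows("Your order shipped on Tuesday."));
    }
}</code></pre>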
  2345.  
  2346.  
  2347.  
  2348. <h2 class="wp-block-heading">Edge-Based Scenarios: Inference on the JVM</h2>
  2349.  
  2350.  
  2351.  
  2352. <p>Not all AI workloads belong in the cloud. Latency, cost, and data sovereignty often demand local inference. This is especially true at the edge: in retail stores, factories, vehicles, or other environments where sending every request to a cloud service is impractical.</p>
  2353.  
  2354.  
  2355.  
  2356. <p>Java is starting to catch up here. Projects like Jlama allow language models to run directly inside the JVM. This makes it possible to deploy inference alongside existing Java applications without adding a separate Python or C++ runtime. The advantages are clear: lower latency, no external data transfer, and simpler integration with the rest of the enterprise stack. For developers, it also means you can test and debug everything inside one environment rather than juggling multiple languages and toolchains.</p>
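<p>A minimal sketch, assuming the langchain4j-jlama module (the model name comes from its examples, and builder details may change between releases): the model is pulled once and then runs in-process on the JVM.</p>

<pre class="wp-block-code"><code>import dev.langchain4j.model.jlama.JlamaChatModel;

public class EdgeInference {
    public static void main(String[] args) {
        // Runs entirely inside the JVM: no Python runtime, no network hop.
        var model = JlamaChatModel.builder()
                .modelName("tjake/TinyLlama-1.1B-Chat-v1.0-Jlama-Q4")
                .temperature(0.3f)
                .build();

        System.out.println(model.generate("Summarize today's sensor anomalies."));
    }
}</code></pre>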
  2357.  
  2358.  
  2359.  
  2360. <p>Edge-based inference is still new, but it points to a future where AI isn’t just a remote service you call. It becomes a local capability embedded into the same platform you already trust.</p>
  2361.  
  2362.  
  2363.  
  2364. <h2 class="wp-block-heading">Performance and Numerics in Java</h2>
  2365.  
  2366.  
  2367.  
  2368. <p>One reason Python became dominant in AI is its excellent math libraries like NumPy and SciPy. These libraries are backed by native C and C++ code, which delivers strong performance. Java has historically lacked first-rate numerics libraries of the same quality and ecosystem adoption. Libraries like <a href="https://deeplearning4j.konduit.ai/nd4j/tutorials/quickstart" target="_blank" rel="noreferrer noopener">ND4J</a> (part of <a href="https://deeplearning4j.konduit.ai/" target="_blank" rel="noreferrer noopener">Deeplearning4j</a>) exist, but they never reached the same critical mass.</p>
  2369.  
  2370.  
  2371.  
  2372. <p>That picture is starting to change. <a href="https://openjdk.org/projects/panama/" target="_blank" rel="noreferrer noopener">Project Panama</a> is an important step. It gives Java developers efficient access to native libraries, GPUs, and accelerators without complex JNI code. Combined with ongoing work on vector APIs and Panama-based bindings, Java is becoming much more capable of running performance-sensitive tasks. This evolution matters because inference and machine learning won’t always be external services. In many cases, they’ll be libraries or models you want to embed directly in your JVM-based systems.</p>
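<p>The incubating Vector API already makes the kind of kernel AI workloads rely on expressible in plain Java. A minimal sketch of a dot product over embedding vectors (run with <code>--add-modules jdk.incubator.vector</code>):</p>

<pre class="wp-block-code"><code>import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorOperators;
import jdk.incubator.vector.VectorSpecies;

public class Similarity {

    private static final VectorSpecies&lt;Float&gt; SPECIES = FloatVector.SPECIES_PREFERRED;

    // Dot product of two equal-length embedding vectors, vectorized in plain Java.
    public static float dot(float[] a, float[] b) {
        float sum = 0f;
        int i = 0;
        int upper = SPECIES.loopBound(a.length);
        for (; i &lt; upper; i += SPECIES.length()) {
            FloatVector va = FloatVector.fromArray(SPECIES, a, i);
            FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
            sum += va.mul(vb).reduceLanes(VectorOperators.ADD);
        }
        for (; i &lt; a.length; i++) { // scalar tail
            sum += a[i] * b[i];
        }
        return sum;
    }
}</code></pre>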
  2373.  
  2374.  
  2375.  
  2376. <h2 class="wp-block-heading">Why This Matters for Enterprises</h2>
  2377.  
  2378.  
  2379.  
  2380. <p>Enterprises cannot afford to live in prototype mode. They need systems that run for years, can be supported by large teams, and fit into existing operational practices. AI-infused applications built in Java are well positioned for this. They are:</p>
  2381.  
  2382.  
  2383.  
  2384. <ul class="wp-block-list">
  2385. <li><strong>Closer to business logic</strong>: Running in the same environment as existing services</li>
  2386.  
  2387.  
  2388.  
  2389. <li><strong>More auditable</strong>: Observable with the same tools already used for logs, metrics, and traces</li>
  2390.  
  2391.  
  2392.  
  2393. <li><strong>Deployable across cloud and edge</strong>: Capable of running in centralized data centers or at the periphery, where latency and privacy matter</li>
  2394. </ul>
  2395.  
  2396.  
  2397.  
  2398. <p>This is a different vision from “add AI to last decade’s application.” It’s about creating applications that only make sense because AI is at their core.</p>
  2399.  
  2400.  
  2401.  
  2402. <p>In <a href="https://www.oreilly.com/library/view/applied-ai-for/9781098174491/" target="_blank" rel="noreferrer noopener"><em>Applied AI for Enterprise Java Development</em></a>, we go deeper into these patterns. The book provides an overview of architectural concepts, shows how to implement them with real code, and explains how emerging standards like the <a href="https://a2a-protocol.org/latest/" target="_blank" rel="noreferrer noopener">Agent2Agent Protocol</a> and <a href="https://modelcontextprotocol.io/docs/getting-started/intro" target="_blank" rel="noreferrer noopener">Model Context Protocol</a> fit in. The goal is to give Java developers a road map to move beyond demos and build applications that are robust, explainable, and ready for production.</p>
  2403.  
  2404.  
  2405.  
  2406. <p>The transformation isn’t about replacing everything we know. It’s about extending our toolbox. Java has adapted before, from servlets to EJBs to microservices. The arrival of AI is the next shift. The sooner we understand what these new types of applications look like, the sooner we can build systems that matter.</p>
  2407. ]]></content:encoded>
  2408. </item>
  2409. <item>
  2410. <title>A Human-Centered Approach to Competitive Advantage</title>
  2411. <link>https://www.oreilly.com/radar/a-human-centered-approach-to-competitive-advantage/</link>
  2412. <pubDate>Mon, 20 Oct 2025 11:25:17 +0000</pubDate>
  2413. <dc:creator><![CDATA[Kord Davis]]></dc:creator>
  2414. <category><![CDATA[AI & ML]]></category>
  2415. <category><![CDATA[Commentary]]></category>
  2416.  
  2417. <guid isPermaLink="false">https://www.oreilly.com/radar/?p=17565</guid>
  2418.  
  2419. <media:content
  2420. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/A-collaborative-approach-to-AI.jpg"
  2421. medium="image"
  2422. type="image/jpeg"
  2423. width="2304"
  2424. height="1792"
  2425. />
  2426.  
  2427. <media:thumbnail
  2428. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/A-collaborative-approach-to-AI-160x160.jpg"
  2429. width="160"
  2430. height="160"
  2431. />
  2432. <custom:subtitle><![CDATA[Unlocking AI&#039;s Potential]]></custom:subtitle>
  2433. <description><![CDATA[In the modern enterprise, information is the new capital. While companies pour resources into artificial intelligence, many discover that technology, standing alone, delivers only expense, not transformation. The true engine of change lies not in the algorithm but in the hands and minds of the people who use it. The greatest asset an organization possesses [&#8230;]]]></description>
  2434. <content:encoded><![CDATA[
  2435. <p>In the modern enterprise, information is the new capital. While companies pour resources into artificial intelligence, many discover that technology, standing alone, delivers only expense, not transformation. The true engine of change lies not in the algorithm but in the hands and minds of the people who use it. The greatest asset an organization possesses is the diverse, domain-specific expertise held within its human teams.</p>
  2436.  
  2437.  
  2438.  
2439. <p>Drawing directly from <a href="https://en.wikipedia.org/wiki/Peter_Drucker" target="_blank" rel="noreferrer noopener">Peter Drucker</a>&#8217;s principles, the path to competitive advantage is a human-centered approach. Effective management, Drucker taught, demands a focus on measurable results, fostered through collaboration and the strict alignment of individual efforts with institutional goals. Technology is but a tool; it has no purpose unless it serves the people who use it and the mission they are trying to accomplish. This is the only reliable way to generate genuine innovation and tangible outcomes.</p>
  2440.  
  2441.  
  2442.  
2443. <h2 class="wp-block-heading"><strong>The Social Reality of Data and the Peril of Silos</strong></h2>
  2444.  
  2445.  
  2446.  
2447. <h3 class="wp-block-heading"><strong>Data as a Collective Endeavor</strong></h3>
  2448.  
  2449.  
  2450.  
2451. <p>Data analysis is fundamentally a collective effort. We shouldn&#8217;t aim to turn everyone into a data scientist; rather, we must empower teams to collaborate effectively with AI and with one another. Consider a large retail company seeking to optimize its supply chain. The firm has invested heavily in a sophisticated AI model to forecast demand and automate inventory. The model, however, is failing. It recommends stocking up on products that sit unsold while critical items are frequently out of stock.</p>
  2452.  
  2453.  
  2454.  
  2455. <p>The problem is not the technology. It’s a failure to apply human intelligence, experience, and expertise. The AI model, built by a team of data scientists, was designed to optimize for cost per unit and speed of delivery. It did not, and could not, account for the deep insights held by the people who actually run the business. The marketing team understands that a sudden social media trend will create a surge in demand for a specific item, while the sales team knows that a key corporate client has just placed a large, unannounced order. The operations manager on the warehouse floor can predict which logistical choke points will delay a shipment, regardless of the model&#8217;s prediction. The AI&#8217;s diagnosis was based on limited data; the humans had the full picture.</p>
  2456.  
  2457.  
  2458.  
  2459. <blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
  2460. <p><em>&#8220;The purpose of an organization is to enable ordinary human beings to do extraordinary things.&#8221; </em><br>Peter Drucker</p>
  2461. </blockquote>
  2462.  
  2463.  
  2464.  
  2465. <p>These individuals—the marketing leader, the sales professional, the operations manager—hold the domain expertise that unlocks the AI’s full potential. The purpose of the AI is to augment and amplify this expertise, not to replace it.</p>
  2466.  
  2467.  
  2468.  
  2469. <h3 class="wp-block-heading"><strong>The Challenge of Silos</strong></h3>
  2470.  
  2471.  
  2472.  
  2473. <p>This collective effort often fails because of organizational silos. While some silos began as practical necessity—protecting sensitive customer data, for instance—many persist long after their original justification has vanished. More dangerously, silos are often the result of political dynamics and the fear of losing power or influence. Consider a chief marketing officer (CMO) who is reluctant to share a new predictive model for customer lifetime value with the chief information officer (CIO). The CMO views this model as a competitive asset, a tool to justify her department&#8217;s budget and influence. By withholding it, she ensures her team remains the sole source of this critical insight.</p>
  2474.  
  2475.  
  2476.  
  2477. <p>This mindset is toxic; it substitutes internal competition for collective performance. It creates a system where departments focus on territory over results. As Drucker taught, <a href="https://www.academia.edu/43101559/The_Essential_Drucker" target="_blank" rel="noreferrer noopener">the purpose of an organization is to enable ordinary human beings to do extraordinary things</a>. When they are confined to their own small domains, their work becomes ordinary, no matter how advanced their tools.</p>
  2478.  
  2479.  
  2480.  
  2481. <h3 class="wp-block-heading"><strong>Cultivating a Collaborative Environment</strong></h3>
  2482.  
  2483.  
  2484.  
  2485. <p>Dismantling these barriers isn’t merely a structural challenge; it’s a fundamental human and cultural imperative. Leaders must recognize that silos are symptoms of human challenges that demand a shift in mindset: prioritize collaboration over competition. To do this, they must create an environment where diverse perspectives are actively sought and rewarded.</p>
  2486.  
  2487.  
  2488.  
  2489. <p>This begins with a shared language and a clear mandate. A leader can facilitate a series of cross-departmental workshops, bringing together marketers, engineers, and financial analysts not to &#8220;get trained on AI&#8221; but to identify shared problems. A question like &#8220;How can we use existing data to reduce customer service call volume?&#8221; can be the starting point for a collaboration that organically breaks down barriers. The result isn’t a new algorithm but a new process built on mutual understanding.</p>
  2490.  
  2491.  
  2492.  
  2493. <h2 class="wp-block-heading"><strong>Strategy: Start Small, Win Big</strong></h2>
  2494.  
  2495.  
  2496.  
  2497. <p>Many enterprises err by pursuing ambitious, grand-scale technology implementations, such as vast enterprise resource planning (ERP) systems. The intention—to integrate and streamline—is sound, but the result is often disappointment, cost overruns, and fresh confusion. Consider a manufacturing company that invested millions in a new system to automate its entire production line. The initial rollout was chaotic. The system&#8217;s inflexible data entry requirements frustrated engineers on the floor who had their own established, practical methods. Production was halted for weeks as frontline workers grappled with a system that complicated, rather than simplified, their work. This is a cautionary tale: Without a people-centered approach, even the most advanced systems fall short.</p>
  2498.  
  2499.  
  2500.  
  2501. <h3 class="wp-block-heading"><strong>The Power of Incrementalism</strong></h3>
  2502.  
  2503.  
  2504.  
  2505. <p>The path to AI success isn’t a sweeping, top-down overhaul. It&#8217;s about incremental projects that empower teams to tackle small, relevant challenges. This isn&#8217;t a retreat; it&#8217;s a strategic choice. It’s a recognition that true change happens through a series of manageable, successful steps.</p>
  2506.  
  2507.  
  2508.  
  2509. <ol class="wp-block-list">
  2510. <li><strong>Start with a small, strategic project</strong>: Don&#8217;t overhaul the entire customer service platform; focus on a single, pressing problem. For a call center, a small project might be using a simple AI model to analyze call transcripts and identify the top five reasons for long hold times. This is manageable, provides immediate, actionable insights, and gives the team a sense of accomplishment. The project is small, but the win is significant: It proves the value of the approach.<br></li>
  2511.  
  2512.  
  2513.  
  2514. <li><strong>Establish clear objectives</strong>: If the call center project aims to reduce hold times, define success with a clear, measurable goal: reduce the average call handle time by 15% within three months. This clarity is nonnegotiable. It provides a focal point and eliminates ambiguity.<br></li>
  2515.  
  2516.  
  2517.  
  2518. <li><strong>Prevent scope creep</strong>: This is the silent killer of projects. To prevent it, clear boundaries must be established from the outset. The team might agree: &#8220;We will only analyze calls from Q3, and we will only focus on the top five identified root causes. We will not expand to analyze email support tickets during this phase.&#8221; This rigid discipline ensures the project remains on track and delivers a tangible outcome.<br></li>
  2519.  
  2520.  
  2521.  
2522. <li><strong>Encourage cross-functional collaboration</strong>: The project&#8217;s success depends on the human element. The team must include a frontline call center representative who understands the nuances of customer conversations, a data analyst to interpret the AI’s output, and a product manager to implement the recommended changes. These cross-functional teams are where true insights collide and innovation is born.</li>
  2523. </ol>
  2524.  
  2525.  
  2526.  
  2527. <h2 class="wp-block-heading"><strong>Learning and Scaling</strong></h2>
  2528.  
  2529.  
  2530.  
  2531. <p>Every incremental project is an opportunity for relentless learning. After completing the call center project and reducing hold times, the team must conduct a thorough retrospective. They should ask: What succeeded? What failed? If a project successfully reduces churn rates, document the strategies that led to this success and apply them broadly. Success isn’t the end; it’s the beginning of a new process. The team can then apply the same methodology to email support, then to their live chat. The small win becomes a repeatable blueprint for progress.</p>
  2532.  
  2533.  
  2534.  
  2535. <h3 class="wp-block-heading"><strong>The Leadership Imperative</strong></h3>
  2536.  
  2537.  
  2538.  
  2539. <p>The leader&#8217;s role is unambiguous: foster a culture of transparency, trust, and empowerment.</p>
  2540.  
  2541.  
  2542.  
  2543. <p>A human-centered strategy addresses the root causes of slow AI adoption and siloed data. It encourages a resilient environment where curiosity about data becomes ingrained in the corporate culture. When diverse disciplines actively engage with data, they cultivate a shared language and a collective, data-first mindset.</p>
  2544.  
  2545.  
  2546.  
  2547. <p>This endeavor isn’t about tool adoption; it’s about nurturing an environment where collaboration is the default setting. It’s about understanding that a silo isn’t a structure; it’s a human behavior that must be managed and redirected toward a common goal. By prioritizing human expertise and actively confronting the political realities underpinning silos, businesses transform AI from a technology expense into a competitive advantage that drives meaningful innovation and secures long-term success.</p>
  2548. ]]></content:encoded>
  2549. </item>
  2550. <item>
  2551. <title>Generative AI in the Real World: Context Engineering with Drew Breunig</title>
  2552. <link>https://www.oreilly.com/radar/podcast/generative-ai-in-the-real-world-context-engineering-with-drew-breunig/</link>
  2553. <comments>https://www.oreilly.com/radar/podcast/generative-ai-in-the-real-world-context-engineering-with-drew-breunig/#respond</comments>
  2554. <pubDate>Thu, 16 Oct 2025 11:18:24 +0000</pubDate>
  2555. <dc:creator><![CDATA[Ben Lorica and Drew Breunig]]></dc:creator>
  2556. <category><![CDATA[Generative AI in the Real World]]></category>
  2557. <category><![CDATA[Podcast]]></category>
  2558.  
  2559. <guid isPermaLink="false">https://www.oreilly.com/radar/?post_type=podcast&#038;p=17562</guid>
  2560.  
  2561. <enclosure url="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3" length="0" type="audio/mpeg" />
  2562. <media:content
  2563. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2024/01/Podcast_Cover_GenAI_in_the_Real_World-scaled.png"
  2564. medium="image"
  2565. type="image/png"
  2566. width="2560"
  2567. height="2560"
  2568. />
  2569.  
  2570. <media:thumbnail
  2571. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2024/01/Podcast_Cover_GenAI_in_the_Real_World-160x160.png"
  2572. width="160"
  2573. height="160"
  2574. />
  2575. <description><![CDATA[In this episode, Ben Lorica and Drew Breunig, a strategist at the Overture Maps Foundation, talk all things context engineering: what’s working, where things are breaking down, and what comes next. Listen in to hear why huge context windows aren’t solving the problems we hoped they might, why companies shouldn’t discount evals and testing, and [&#8230;]]]></description>
  2576. <content:encoded><![CDATA[
2577. <p>In this episode, Ben Lorica and Drew Breunig, a strategist at the Overture Maps Foundation, talk all things context engineering: what’s working, where things are breaking down, and what comes next. Listen in to hear why huge context windows aren’t solving the problems we hoped they might, why companies shouldn’t discount evals and testing, and why we’re doing the field a disservice by leaning into marketing and buzzwords rather than trying to leverage what the current crop of LLMs is actually capable of.</p>
  2578.  
  2579.  
  2580.  
  2581. <p><strong>About the <em>Generative AI in the Real World </em>podcast</strong>: In 2023, ChatGPT put AI on everyone’s agenda. In 2025, the challenge will be turning those agendas into reality. In <em>Generative AI in the Real World</em>, Ben Lorica interviews leaders who are building with AI. Learn from their experience to help put AI to work in your enterprise.</p>
  2582.  
  2583.  
  2584.  
  2585. <p>Check out <a href="https://learning.oreilly.com/playlists/42123a72-1108-40f1-91c0-adbfb9f4983b/?_gl=1*m7f70i*_ga*MTYyODYzMzQwMi4xNzU4NTY5ODYz*_ga_092EL089CH*czE3NTkxNzAwODUkbzE0JGcwJHQxNzU5MTcwMDg1JGo2MCRsMCRoMA.." target="_blank" rel="noreferrer noopener">other episodes</a> of this podcast on the O’Reilly learning platform.</p>
  2586.  
  2587.  
  2588.  
  2589. <h2 class="wp-block-heading">Transcript</h2>
  2590.  
  2591.  
  2592.  
  2593. <p><em>This transcript was created with the help of AI and has been lightly edited for clarity.</em></p>
  2594.  
  2595.  
  2596.  
  2597. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=0" target="_blank" rel="noreferrer noopener">00.00</a>: <strong>All right. So today we have Drew Breunig. He is a strategist at the Overture Maps Foundation. And he&#8217;s also in the process of writing a book for O&#8217;Reilly called the <em>Context Engineering Handbook</em>. And with that, Drew, welcome to the podcast.</strong></p>
  2598.  
  2599.  
  2600.  
  2601. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=23" target="_blank" rel="noreferrer noopener">00.23</a>: Thanks, Ben. Thanks for having me on here. </p>
  2602.  
  2603.  
  2604.  
  2605. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=26" target="_blank" rel="noreferrer noopener">00.26</a>: <strong>So context engineering. . . I remember before ChatGPT was even released, someone was talking to me about prompt engineering. I said, “What&#8217;s that?” And then of course, fast-forward to today, now people are talking about context engineering. And I guess the short definition is it&#8217;s the delicate art and science of filling the context window with just the right information. What&#8217;s broken with how teams think about context today? </strong></p>
  2606.  
  2607.  
  2608.  
2609. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=56" target="_blank" rel="noreferrer noopener">00.56</a>: I think it&#8217;s important to talk about why we need a new word or why a new word makes sense. I was just talking with Mike Taylor, who wrote the <a href="https://learning.oreilly.com/library/view/prompt-engineering-for/9781098153427/" target="_blank" rel="noreferrer noopener">prompt engineering book</a> for O&#8217;Reilly, exactly about this and why we need a new word. Why is prompt engineering not good enough? And I think it has to do with the way the models and the way they&#8217;re being built are evolving. I think it also has to deal with the way that we&#8217;re learning how to use these models. </p>
  2610.  
  2611.  
  2612.  
  2613. <p>And so prompt engineering was a natural word to think about when your interaction and how you program the model was maybe one turn of conversation, maybe two, and you might pull in some context to give it examples. You might do some RAG and context augmentation, but you&#8217;re working with this one-shot service. And that was really similar to the way people were working in chatbots. And so prompt engineering started to evolve as this thing.&nbsp;</p>
  2614.  
  2615.  
  2616.  
  2617. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=120" target="_blank" rel="noreferrer noopener">02.00</a>: But as we started to build agents and as companies started to develop models that were capable of multiturn tool-augmented reasoning usage, suddenly you&#8217;re not using that one prompt. You have a context that is sometimes being prompted by you, sometimes being modified by your software harness around the model, sometimes being modified by the model itself. And increasingly the model is starting to manage that context. And that prompt is very user-centric. It is a user giving that prompt. </p>
  2618.  
  2619.  
  2620.  
2621. <p>But when we started to have this multiturn, systematic editing and preparation of contexts, a new word was needed, which is this idea of context engineering. This is not to belittle prompt engineering. I think it&#8217;s an evolution. And it shows how we&#8217;re evolving and finding this space in real time. I think context engineering is more suited to agents and applied AI programming, whereas prompt engineering lives in how people use chatbots, which is a different field. It&#8217;s not better and not worse.&nbsp;</p>
  2622.  
  2623.  
  2624.  
  2625. <p>And so context engineering is more specific to understanding the failure modes that occur, diagnosing those failure modes and establishing good practices for both preparing your context but also setting up systems that fix and edit your context, if that makes sense.&nbsp;</p>
  2626.  
  2627.  
  2628.  
  2629. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=213" target="_blank" rel="noreferrer noopener">03.33</a>: <strong>Yeah, and also, it seems like the words themselves are indicative of the scope, right? So “prompt” engineering means it&#8217;s the prompt. So you&#8217;re fiddling with the prompt. And [with] context engineering, “context” can be a lot of things. It could be the information you retrieve. It might involve RAG, so you retrieve information. You put that in the context window. </strong></p>
  2630.  
  2631.  
  2632.  
  2633. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=242" target="_blank" rel="noreferrer noopener">04.02</a>: Yeah. And people were doing that with prompts too. But I think in the beginning we just didn&#8217;t have the words. And that word became a big empty bucket that we filled up. You know, the quote I always quote too often, but I find it fitting, is one of my favorite quotes from Stuart Brand, which is, “If you want to know where the future is being made, follow where the lawyers are congregating and the language is being invented,” and the arrival of context engineering as a word came after the field was invented. It just kind of crystallized and demarcated what people were already doing. </p>
  2634.  
  2635.  
  2636.  
  2637. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=276" target="_blank" rel="noreferrer noopener">04.36</a>: <strong>So the word “context” means you&#8217;re providing context. So context could be a tool, right? It could be memory. Whereas the word “prompt” is much more specific.</strong> </p>
  2638.  
  2639.  
  2640.  
  2641. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=295" target="_blank" rel="noreferrer noopener">04.55</a>: And I think it also is like, it has to be edited by a person. I&#8217;m a big advocate for not using anthropomorphizing words around large language models. “Prompt” to me involves agency. And so I think it’s nice—it&#8217;s a good delineation. </p>
  2642.  
  2643.  
  2644.  
  2645. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=314" target="_blank" rel="noreferrer noopener">05.14</a>: <strong>And then I think one of the very immediate lessons that people realize is, just because. . . </strong></p>
  2646.  
  2647.  
  2648.  
2649. <p><strong>So one of the things these model providers note when they have a model release is, What&#8217;s the size of the context window? So people started associating context window [with] “I stuff as much as I can in there.” But the reality is actually that, one, it&#8217;s not efficient. And two, it also is not useful to the model. Just because you have a massive context window doesn&#8217;t mean that the model treats the entire context window evenly.</strong></p>
  2650.  
  2651.  
  2652.  
  2653. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=357" target="_blank" rel="noreferrer noopener">05.57</a>: Yeah, it doesn&#8217;t treat it evenly. And it&#8217;s not a one-size-fits-all solution. So I don&#8217;t know if you remember last year, but that was the big dream, which was, “Hey, we&#8217;re doing all this work with RAG and augmenting our context. But wait a second, if we can make the context 1 million tokens, 2 million tokens, I don&#8217;t have to run RAG on all of my corporate documents. I can just fit it all in there, and I can constantly be asking this. And if we can do this, we essentially have solved all of the hard problems that we were worrying about last year.” And so that was the big hope. </p>
  2654.  
  2655.  
  2656.  
  2657. <p>And you started to see an arms race of everybody trying to make bigger and bigger context windows to the point where, you know, Llama 4 had its spectacular flameout. It was rushed out the door. But the headline feature by far was “We will be releasing a 10 million token context window.” And the thing that everybody realized is.&nbsp;.&nbsp;.&nbsp; Like, all right, we were really hopeful for that. And then as we started building with these context windows, we started to realize there were some big limitations around them.</p>
  2658.  
  2659.  
  2660.  
  2661. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=421" target="_blank" rel="noreferrer noopener">07.01</a>: Perhaps the thing that clicked for me was in <a href="https://arxiv.org/abs/2507.06261" target="_blank" rel="noreferrer noopener">Google&#8217;s Gemini 2.5 paper</a>. Fantastic paper. And one of the reasons I love it is because they dedicate about four pages in the appendix to talking about the kind of methodology and harnesses they built so that they could teach Gemini to play Pokémon: how to connect it to the game, how to actually read out the state of the game, how to make choices about it, what tools they gave it, all of these other things.</p>
  2662.  
  2663.  
  2664.  
2665. <p>And buried in there was a real “warts and all” case study, which is my favorite kind when you talk about the hard things and especially when you cite the things you can&#8217;t overcome. And Gemini 2.5 was a million-token context window with, eventually, 2 million tokens coming. But in this Pokémon thing, they said, “Hey, we actually noticed something, which is once you get to about 200,000 tokens, things start to fall apart, and they fall apart for a host of reasons. They start to hallucinate. One of the things that is really demonstrable is they start to rely more on the context knowledge than the weights knowledge.”&nbsp;</p>
  2666.  
  2667.  
  2668.  
  2669. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=502" target="_blank" rel="noreferrer noopener">08.22</a>: So inside every model there&#8217;s a knowledge base. There&#8217;s, you know, all of these other things that get kind of buried into the parameters. But when you reach a certain level of context, it starts to overload the model, and it starts to rely more on the examples in the context. And so this means that you are not taking advantage of the full strength or knowledge of the model. </p>
  2670.  
  2671.  
  2672.  
  2673. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=523" target="_blank" rel="noreferrer noopener">08.43</a>: So that&#8217;s one way it can fail. We call this “context distraction,” though Kelly Hong at Chroma has written an <a href="https://research.trychroma.com/context-rot" target="_blank" rel="noreferrer noopener">incredible paper documenting this</a>, which she calls “context rot,” which is a similar way [of] charting when these benchmarks start to fall apart.</p>
  2674.  
  2675.  
  2676.  
  2677. <p>Now the cool thing about this is that you can actually use this to your advantage. There&#8217;s another paper out of, I believe, the Harvard Interaction Lab, where they look at these inflection points for.&nbsp;.&nbsp;.&nbsp;</p>
  2678.  
  2679.  
  2680.  
2681. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=553" target="_blank" rel="noreferrer noopener">09.13</a>: Are you familiar with the term “in-context learning”? In-context learning is when you teach the model to do something that it doesn&#8217;t know how to do by providing examples in your context. And those examples illustrate how it should perform. It&#8217;s not something that it&#8217;s seen before. It&#8217;s not in the weights. It&#8217;s a completely unique problem. </p>
  2682.  
  2683.  
  2684.  
  2685. <p>Well, sometimes those in-context learning[s] are counter to what the model has learned in the weights. So they end up fighting each other, the weights and the context. And this paper documented that when you get over a certain context length, you can overwhelm the weights and you can force it to listen to your in-context examples.</p>
  2686.  
  2687.  
  2688.  
  2689. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=597" target="_blank" rel="noreferrer noopener">09.57</a>: And so all of this is just to try to illustrate the complexity of what&#8217;s going on here and how I think one of the traps that leads us to this place is that the gift and the curse of LLMs is that we prompt and build contexts that are in the English language or whatever language you speak. And so that leads us to believe that they&#8217;re going to react like other people or entities that read the English language.</p>
  2690.  
  2691.  
  2692.  
  2693. <p>And the fact of the matter is, they don&#8217;t—they&#8217;re reading it in a very specific way. And that specific way can vary from model to model. And so you have to systematically approach this to understand these nuances, which is where the context management field comes in.&nbsp;</p>
  2694.  
  2695.  
  2696.  
2697. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=635" target="_blank" rel="noreferrer noopener">10.35</a>: <strong>This is interesting because even before those papers came out, there were studies which showed the exact opposite problem, which is the following: You may have a RAG system that actually retrieves the right information, but then somehow the LLMs can still fail because, as you alluded to, they have weights, so they have prior beliefs. They saw something [on] the internet, and they will opine against the precise information you retrieve from the context. </strong></p>
  2698.  
  2699.  
  2700.  
  2701. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=668" target="_blank" rel="noreferrer noopener">11.08</a>: This is a really big problem. </p>
  2702.  
  2703.  
  2704.  
  2705. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=669" target="_blank" rel="noreferrer noopener">11.09</a>: <strong>So this is true even if the context window’s small actually.</strong> </p>
  2706.  
  2707.  
  2708.  
2709. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=673" target="_blank" rel="noreferrer noopener">11.13</a>: Yeah, and Ben, you touched on something that&#8217;s really important. So in my <a href="https://www.dbreunig.com/2025/06/22/how-contexts-fail-and-how-to-fix-them.html" target="_blank" rel="noreferrer noopener">original blog post</a>, I document four ways that context fails. I talk about “context poisoning.” That&#8217;s when you hallucinate something in a long-running task and it stays in there, and so it continually confuses the model. “Context distraction,” which is when you overwhelm that soft limit to the context window and then you start to perform poorly. “Context confusion”: This is when you put things that aren&#8217;t relevant to the task inside your context, and suddenly the model thinks that it has to pay attention to this stuff, and it leads it astray. And then the last thing is “context clash,” which is when there&#8217;s information in the context that’s at odds with the task that you are trying to perform. </p>
  2710.  
  2711.  
  2712.  
  2713. <p>A good example of this is, say you&#8217;re asking the model to only reply in JSON, but you&#8217;re using MCP tools that are defined with XML. And so you&#8217;re creating this backwards thing. But I think there&#8217;s a fifth piece that I need to write about because it keeps coming up. And it&#8217;s exactly what you described.</p>
  2714.  
  2715.  
  2716.  
  2717. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=743" target="_blank" rel="noreferrer noopener">12.23</a>: Douwe [Kiela] over at Contextual AI refers to this as “context” or “prompt adherence.” But the term that keeps sticking in my mind is this idea of fighting the weights. There’s three situations you get yourself into when you&#8217;re interacting with an LLM. The first is when you&#8217;re working with the weights. You&#8217;re asking it a question that it knows how to answer. It&#8217;s seen many examples of that answer. It has it in its knowledge base. It comes back with the weights, and it can give you a phenomenal, detailed answer to that question. That&#8217;s what I call “working with the weights.” </p>
  2718.  
  2719.  
  2720.  
  2721. <p>The second is what we referred to earlier, which is that in-context learning, which is you&#8217;re doing something that it doesn&#8217;t know about and you&#8217;re showing an example, and then it does it. And this is great. It&#8217;s wonderful. We do it all the time.&nbsp;</p>
  2722.  
  2723.  
  2724.  
  2725. <p>But then there&#8217;s a third example which is, you&#8217;re providing it examples. But those examples are at odds with some things that it had learned usually during posttraining, during the fine-tuning or RL stage. A really good example is format outputs.&nbsp;</p>
  2726.  
  2727.  
  2728.  
2729. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=814" target="_blank" rel="noreferrer noopener">13.34</a>: Recently a friend of mine was updating his pipeline to try out a new model, Moonshot&#8217;s. A really great model, and a really great one for tool use. And so he just changed his model and hit run to see what happened. And he kept failing—his thing couldn&#8217;t even work. He&#8217;s like, “I don&#8217;t understand. This is supposed to be the best tool use model there is.” And he asked me to look at his code.</p>
  2730.  
  2731.  
  2732.  
  2733. <p>I looked at his code and he was extracting data using Markdown, essentially: “Put the final answer in an ASCII box and I&#8217;ll extract it that way.” And I said, “If you change this to XML, see what happens. Ask it to respond in XML, use XML as your formatting, and see what happens.” He did that. That one change passed every test. Like basically crushed it because it was working with the weights. He wasn&#8217;t fighting the weights. Everyone’s experienced this if you build with AI: the stubborn things it refuses to do, no matter how many times you ask it, including formatting.&nbsp;</p>
  2734.  
  2735.  
  2736.  
  2737. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=875" target="_blank" rel="noreferrer noopener">14.35</a>: [Here’s] my favorite example of this though, Ben: So in ChatGPT’s web interface or their application interface, if you go there and you try to prompt an image, a lot of the images that people prompt—and I&#8217;ve talked to user research about this—are really boring prompts. They have a text box that can be anything, and they&#8217;ll say something like “a black cat” or “a statue of a man thinking.”</p>
  2738.  
  2739.  
  2740.  
  2741. <p>OpenAI realized this was leading to a lot of bad images because the prompt wasn&#8217;t detailed; it wasn&#8217;t a good prompt. So they built a system that recognizes if your prompt is too short, low detail, bad, and it hands it to another model and says, “Improve this prompt,” and it improves the prompt for you. And if you inspect in Chrome or Safari or Firefox, whatever, you inspect the developer settings, you can see the JSON being passed back and forth, and you can see your original prompt going in. Then you can see the improved prompt.&nbsp;</p>
  2742.  
  2743.  
  2744.  
  2745. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=936" target="_blank" rel="noreferrer noopener">15.36</a>: My favorite example of this [is] I asked it to make a statue of a man thinking, and it came back and said something like “A detailed statue of a human figure in a thinking pose similar to Rodin&#8217;s ‘The Thinker.’ The statue is made of weathered stone sitting on a pedestal. . .” Blah blah blah blah blah blah. A paragraph. . . But below that prompt there were instructions to the chatbot or to the LLM that said, “Generate this image and after you generate the image, do not reply. Do not ask follow up questions. Do not ask. Do not make any comments describing what you&#8217;ve done. Just generate the image.” And in this prompt, then nine times, some of them in all caps, they say, “Please do not reply.” And the reason is because a big chunk of OpenAI’s posttraining is teaching these models how to converse back and forth. They want you to always be asking a follow-up question and they train it. And so now they have to fight the prompts. They have to add in all these statements. And that&#8217;s another way that fails. </p>
  2746.  
  2747.  
  2748.  
  2749. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1002" target="_blank" rel="noreferrer noopener">16.42</a>: So why I bring this up—and this is why I need to write about it—is as an applied AI developer, you need to recognize when you&#8217;re fighting the prompt, understand enough about the posttraining of that model, or make some assumptions about it, so that you can stop doing that and try something different, because you&#8217;re just banging your head against a wall and you&#8217;re going to get inconsistent, bad applications and the same statement 20 times over. </p>
  2750.  
  2751.  
  2752.  
2753. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1027" target="_blank" rel="noreferrer noopener">17.07</a>: <strong>By the way, the other thing that’s interesting about this whole topic is, people actually somehow have underappreciated or forgotten all of the progress we&#8217;ve made in information retrieval. There&#8217;s a whole. . . I mean, these people have their own conferences, right? Everything from reranking to the actual indexing, even with vector search—the information retrieval community still has a lot to offer, and it&#8217;s the kind of thing that people underappreciated. And so by simply loading your context window with massive amounts of garbage, you&#8217;re actually leaving on the field so much progress in information retrieval.</strong></p>
  2754.  
  2755.  
  2756.  
  2757. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1084" target="_blank" rel="noreferrer noopener">18.04</a>: I do think it&#8217;s hard. And that&#8217;s one of the risks: We&#8217;re building all this stuff so fast from the ground up, and there&#8217;s a tendency to just throw everything into the biggest model possible and then hope it sorts it out.</p>
  2758.  
  2759.  
  2760.  
  2761. <p>I really do think there&#8217;s two pools of developers. There&#8217;s the “throw everything in the model” pool, and then there&#8217;s the “I&#8217;m going to take incremental steps and find the most optimal model.” And I often find that latter group, which I called a compound AI group after a <a href="https://bair.berkeley.edu/blog/2024/02/18/compound-ai-systems/" target="_blank" rel="noreferrer noopener">paper that was published out of Berkeley</a>, those tend to be people who have run data pipelines, because it&#8217;s not just a simple back and forth interaction. It&#8217;s gigabytes or even more of data you&#8217;re processing with the LLM. The costs are high. Latency is important. So designing efficient systems is actually incredibly key, if not a total requirement. So there&#8217;s a lot of innovation that comes out of that space because of that kind of boundary.</p>
  2762.  
  2763. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1148" target="_blank" rel="noreferrer noopener">19.08</a>: <strong>If you were to talk to one of these applied AI teams and you were to give them one or two things that they can do right away to improve, or fix context in general, what are some of the best practices?</strong></p>
  2764.  
  2765.  
  2766.  
  2767. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1169" target="_blank" rel="noreferrer noopener">19.29</a>: Well you&#8217;re going to laugh, Ben, because the answer is dependent on the context, and I mean the context in the team and what have you. </p>
  2768.  
  2769.  
  2770.  
  2771. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1178" target="_blank" rel="noreferrer noopener">19.38</a>: <strong>But if you were to just go give a keynote to a general audience, if you were to list down one, two, or three things that are the lowest hanging fruit, so to speak. . .</strong></p>
  2772.  
  2773.  
  2774.  
  2775. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1190" target="_blank" rel="noreferrer noopener">19.50</a>: The first thing I&#8217;m gonna do is I&#8217;m going to look in the room and I&#8217;m going to look at the titles of all the people in there, and I&#8217;m going to see if they have any subject-matter experts or if it&#8217;s just a bunch of engineers trying to build something for subject-matter experts. And my first bit of advice is you need to get yourself a subject-matter expert who is looking at the data, helping you with the eval data, and telling you what “good” looks like. </p>
  2776.  
  2777.  
  2778.  
  2779. <p>I see a lot of teams that don&#8217;t have this, and they end up building fairly brittle prompt systems. And then they can&#8217;t iterate well, and so that enterprise AI project fails. I also see them not wanting to open themselves up to subject-matter experts, because they want to hold on to the power themselves. It&#8217;s not how they&#8217;re used to building.&nbsp;</p>
  2780.  
  2781.  
  2782.  
  2783. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1238" target="_blank" rel="noreferrer noopener">20.38</a>: I really do think building in applied AI has changed the power dynamic between builders and subject-matter experts. You know, we were talking earlier about some of like the old Web 2.0 days and I&#8217;m sure you remember. . . Remember back at the beginning of the iOS app craze, we&#8217;d be at a dinner party and someone would find out that you&#8217;re capable of building an app, and you would get cornered by some guy who&#8217;s like “I&#8217;ve got a great idea for an app,” and he would just talk at you—usually a he. </p>
  2784.  
  2785.  
  2786.  
  2787. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1275" target="_blank" rel="noreferrer noopener">21.15</a>: <strong>This is back in the Objective-C days. . .</strong></p>
  2788.  
  2789.  
  2790.  
2791. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1277" target="_blank" rel="noreferrer noopener">21.17</a>: Yes, way back when. And this is someone who loves Objective-C. So you&#8217;d get cornered and you’d try to find a way out of that awkward conversation. Nowadays, that dynamic has shifted. The subject-matter expertise is so important for codifying and designing the spec, which usually gets specced out by the evals, that it lends itself to more. And you can even see this: OpenAI is arguably at the forefront of creating this stuff. And what are they doing? They&#8217;re standing up programs to get lawyers to come in, to get doctors to come in, to get these specialists to come in and help them create benchmarks because they can&#8217;t do it themselves. And so that&#8217;s the first thing. Got to work with the subject-matter expert. </p>
  2792.  
  2793.  
  2794.  
  2795. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1324" target="_blank" rel="noreferrer noopener">22.04</a>: The second thing is if they&#8217;re just starting out—and this is going to sound backwards, given our topic today—I would encourage them to use a system like DSPy or GEPA, which are essentially frameworks for building with AI. And one of the components of those frameworks is that they optimize the prompt for you, with the help of an LLM and your eval data. </p>
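<p>To make that concrete, here&#8217;s a minimal sketch of the pattern being described, assuming a recent DSPy release. The model name, the toy triage task, the two examples, and the metric are all placeholders for illustration, not anything discussed in the episode; GEPA can be swapped in as the optimizer in the same way:</p>

<pre><code>import dspy

# Point DSPy at a hosted model (model name and credentials are assumptions).
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

class TicketTriage(dspy.Signature):
    """Classify a support ticket as billing, bug, or how-to."""
    ticket: str = dspy.InputField()
    label: str = dspy.OutputField()

program = dspy.Predict(TicketTriage)

# Eval data curated with a subject-matter expert: input plus expected label.
trainset = [
    dspy.Example(ticket="I was charged twice this month", label="billing").with_inputs("ticket"),
    dspy.Example(ticket="The export button crashes the app", label="bug").with_inputs("ticket"),
]

def exact_match(example, prediction, trace=None):
    return example.label == prediction.label

# The optimizer rewrites and augments the prompt against the metric,
# so no one hand-tunes prompt wording by trial and error.
optimizer = dspy.BootstrapFewShot(metric=exact_match)
optimized = optimizer.compile(program, trainset=trainset)

print(optimized(ticket="How do I reset my password?").label)
</code></pre>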
  2796.  
  2797.  
  2798.  
  2799. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1357" target="_blank" rel="noreferrer noopener">22.37</a>: <strong>Throw in BAML?</strong></p>
  2800.  
  2801.  
  2802.  
  2803. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1359" target="_blank" rel="noreferrer noopener">22.39</a>: BAML is similar [but it’s] more like the spec for how to describe the entire spec. So it&#8217;s similar.</p>
  2804.  
  2805.  
  2806.  
  2807. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1372" target="_blank" rel="noreferrer noopener">22.52</a>: <strong>BAML and TextGrad? </strong></p>
  2808.  
  2809.  
  2810.  
  2811. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1375" target="_blank" rel="noreferrer noopener">22.55</a>: TextGrad is more like the prompt optimization I&#8217;m talking about. </p>
  2812.  
  2813.  
  2814.  
  2815. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1377" target="_blank" rel="noreferrer noopener">22:57</a>: <strong>TextGrad plus GEPA plus Regolo?</strong></p>
  2816.  
  2817.  
  2818.  
  2819. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1382" target="_blank" rel="noreferrer noopener">23.02</a>: Yeah, those things are really important. And the reason I say they&#8217;re important is. . .</p>
  2820.  
  2821.  
  2822.  
  2823. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1388" target="_blank" rel="noreferrer noopener">23.08</a>: <strong>I mean, Drew, those are kind of advanced topics. </strong></p>
  2824.  
  2825.  
  2826.  
  2827. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1392" target="_blank" rel="noreferrer noopener">23.12</a>: I don&#8217;t think they&#8217;re that advanced. I think they can appear really intimidating because everybody comes in and says, “Well, it&#8217;s so easy. I could just write what I want.” And this is the gift and curse of prompts, in my opinion. There are a lot of things to like about them.</p>
  2828.  
  2829.  
  2830.  
  2831. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1413" target="_blank" rel="noreferrer noopener">23.33</a>: <strong>DSPy is fine, but I think TextGrad, GEPA, and Regolo. . .</strong></p>
  2832.  
  2833.  
  2834.  
  2835. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1421" target="_blank" rel="noreferrer noopener">23.41</a>: Well. . . I wouldn&#8217;t encourage you to use GEPA directly. I would encourage you to use it through the framework of DSPy. </p>
  2836.  
  2837.  
  2838.  
  2839. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1428" target="_blank" rel="noreferrer noopener">23.48</a>: The point here is, if it&#8217;s a team building, you can go down essentially two paths. You can handwrite your prompt, and I think this creates some issues. One is, as you build, you tend to have a lot of hotfix statements like, “Oh, there&#8217;s a bug over here. We&#8217;ll say it over here. Oh, that didn&#8217;t fix it. So let&#8217;s say it again.” It also encourages you to have one person who <em>really</em> understands this prompt, so you end up being reliant on this prompt magician. Even though they&#8217;re written in English, there&#8217;s kind of no syntax highlighting. They get messier and messier as you build the application because they grow into collections of edge cases.</p>
  2840.  
  2841.  
  2842.  
  2843. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1467" target="_blank" rel="noreferrer noopener">24.27</a>: And the other thing too, and this is really important, is when you build and you spend so much time honing a prompt, you&#8217;re doing it against one model, and then at some point there&#8217;s going to be a better, cheaper, more effective model. And you&#8217;re going to have to go through the process of tweaking it and fixing all the bugs again, because this model functions differently.</p>
  2844.  
  2845.  
  2846.  
  2847. <p>And I used to have to try to convince people that this was a problem, but they all kind of found out when OpenAI deprecated all of their models and tried to move everyone over to GPT-5. And now I hear about it all the time.&nbsp;</p>
  2848.  
  2849.  
  2850.  
  2851. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1503" target="_blank" rel="noreferrer noopener">25.03</a>: <strong>Although I think right now “agents” is our hot topic, right? So when we talk to people about agents and start really getting into the weeds, we realize, “Oh, okay. So their agents are really just prompts.” </strong></p>
  2852.  
  2853.  
  2854.  
  2855. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1516" target="_blank" rel="noreferrer noopener">25.16</a>: In the loop. . .</p>
  2856.  
  2857.  
  2858.  
  2859. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1519" target="_blank" rel="noreferrer noopener">25.19</a>: <strong>So agent optimization in many ways means injecting a bit more software engineering rigor in how you maintain and version. . .</strong></p>
  2860.  
  2861.  
  2862.  
  2863. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1530" target="_blank" rel="noreferrer noopener">25.30</a>: Because that context is growing. As that loop goes, you&#8217;re deciding what gets added to it. And so you have to put guardrails in—ways to rescue from failure and figure out all these things. It&#8217;s very difficult. And you have to go at it systematically. </p>
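<p>A minimal sketch of what going at it systematically can look like in code, with every name below invented for illustration: the loop is bounded, the growing context is trimmed, and repeated tool failures trigger a rescue path instead of spinning forever:</p>

<pre><code>MAX_STEPS = 10
MAX_CONTEXT_CHARS = 20_000

def call_model(prompt: str) -> str:
    raise NotImplementedError  # placeholder for the hosted LLM call

def run_tool(action: str) -> tuple[str, bool]:
    raise NotImplementedError  # placeholder for tool execution

def agent_loop(task: str) -> str:
    context = [f"Task: {task}"]
    failures = 0
    for _ in range(MAX_STEPS):  # guardrail: bounded loop
        # Guardrail: keep only the most recent context that fits.
        prompt = "\n".join(context)[-MAX_CONTEXT_CHARS:]
        action = call_model(prompt)
        result, ok = run_tool(action)
        context.append(f"Action: {action}\nResult: {result}")
        if ok and result.startswith("DONE"):
            return result
        failures += 0 if ok else 1
        if failures >= 3:  # guardrail: rescue from repeated failure
            return "escalating to a human"
    return "step budget exhausted"
</code></pre>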
  2864.  
  2865.  
  2866.  
  2867. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1546" target="_blank" rel="noreferrer noopener">25.46</a>: <strong>And then the problem is that, in many situations, the models are not even models that you control. You&#8217;re using them through an API like OpenAI&#8217;s or Claude&#8217;s, so you don&#8217;t actually have access to the weights. So even if you&#8217;re one of the super, super advanced teams that can do gradient descent and backprop, you can&#8217;t do that, right? So then, what are your options for being more rigorous in doing optimization?</strong></p>
  2868.  
  2869.  
  2870.  
  2871. <p><strong>Well, it&#8217;s precisely these tools that Drew alluded to: the TextGrads of the world, the GEPAs. You have these compound systems that are nondifferentiable. So how do you actually do optimization in a world where things are not differentiable? These are precisely the tools that will allow you to turn it from somewhat of a black art into something with a little more discipline.&nbsp;</strong></p>
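<p>Neither TextGrad nor GEPA works exactly like this, but the underlying idea (search guided by eval scores instead of gradients) fits in a few lines. Everything below is a toy: the eval set, the mutation list, and the stubbed model call are illustrative assumptions:</p>

<pre><code>import random

# Toy eval set standing in for SME-curated cases: (question, expected answer).
EVALS = [("2+2", "4"), ("3*3", "9"), ("10-7", "3")]

def call_model(prompt: str, question: str) -> str:
    # Placeholder for an API call to a model whose weights you can't touch.
    return str(eval(question))  # stub so the sketch runs end to end

def score(prompt: str) -> float:
    """Fraction of eval cases the prompted model answers correctly."""
    return sum(call_model(prompt, q) == a for q, a in EVALS) / len(EVALS)

MUTATIONS = [
    "Answer with only the final number.",
    "Think step by step, then give the number.",
    "You are a careful arithmetic assistant.",
]

def optimize(seed: str, steps: int = 20) -> str:
    """Gradient-free hill climb: propose a variant, keep it if evals improve."""
    best, best_score = seed, score(seed)
    for _ in range(steps):
        candidate = best + " " + random.choice(MUTATIONS)
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best

print(optimize("Solve the problem."))
</code></pre>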
  2872.  
  2873.  
  2874.  
  2875. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1613" target="_blank" rel="noreferrer noopener">26.53</a>: And I think a good example is, even if you aren&#8217;t going to use prompt optimization-type tools. . . Prompt optimization is a great solution for what you just described, when you can&#8217;t control the weights of the models you&#8217;re using. But even if you aren&#8217;t going to adopt it, you still need evals, because step one for anything is working with subject-matter experts to create them.</p>
  2876.  
  2877.  
  2878.  
  2879. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1642" target="_blank" rel="noreferrer noopener">27.22</a>: Because what I see. . . And there was just a really dumb argument online of “Are evals worth it or not?” And it was really silly to me because it was positioned as an either-or argument. And there were people arguing against evals, which is just insane to me. And the reason they were arguing against evals is they&#8217;re basically arguing in favor of what they called, to your point about dark arts, vibe shipping—which is they&#8217;d make changes, push those changes, and then the person who was also making the changes would go in and type in 12 different things and say, “Yep, feels right to me.” And that&#8217;s insane to me. </p>
  2880.  
  2881.  
  2882.  
  2883. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1677" target="_blank" rel="noreferrer noopener">27.57</a>: And even if you&#8217;re doing that—which I think is a good thing, even if you don&#8217;t go create coverage and evals, because you have some taste. . . And I do think when you&#8217;re building more qualitative tools. . . A good example is if you’re Character.AI or Portola Labs, who are building essentially personalized emotional chatbots: It&#8217;s going to be harder to create evals, and it&#8217;s going to require taste as you build them. But having evals is going to ensure that your whole thing didn&#8217;t fall apart because you changed one sentence, which sadly is a risk because this is probabilistic software.</p>
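<p>The “didn&#8217;t fall apart because you changed one sentence” check is straightforward to automate once the eval set exists. A minimal regression-eval sketch, with invented cases and an arbitrary threshold:</p>

<pre><code># Regression evals: run a fixed, SME-curated set of cases against the
# current prompt and model, and fail loudly if quality drops.
CASES = [
    {"input": "Cancel my subscription", "expected": "cancellation"},
    {"input": "Why was I billed twice?", "expected": "billing"},
]

def classify(text: str) -> str:
    raise NotImplementedError  # placeholder for the application's model call

def run_evals(threshold: float = 0.9) -> None:
    passed = sum(classify(c["input"]) == c["expected"] for c in CASES)
    accuracy = passed / len(CASES)
    print(f"eval accuracy: {accuracy:.0%}")
    # Gate the release: a one-sentence prompt tweak shouldn't ship if it
    # silently breaks cases that used to pass.
    assert accuracy >= threshold, "regression detected; do not ship"
</code></pre>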
  2884.  
  2885.  
  2886.  
  2887. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1713" target="_blank" rel="noreferrer noopener">28.33</a>: <strong>Honestly, evals are super important. Number one, because leaderboards like LMArena are great for narrowing your options, but at the end of the day, you still need to benchmark all of these against your own application, use case, and domain. And then secondly, obviously, it&#8217;s an ongoing thing, so it ties in with reliability: If your application is reliable, most likely you&#8217;re doing evals properly in an ongoing fashion. And I really believe that evals and reliability are a moat, because basically what else is your moat? A prompt? That&#8217;s not a moat. </strong></p>
  2888.  
  2889.  
  2890.  
  2891. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1761" target="_blank" rel="noreferrer noopener">29.21</a>: So first off, violent agreement there. The only asset teams truly have—unless they&#8217;re a model builder, which is only a handful—is their eval data. And I would say the counterpart to that is their spec, whatever defines their program, but mostly the eval data. But to the other point, why are people vibe shipping? I think you can get pretty far with vibe shipping, and it fools you into thinking that it&#8217;s the right approach.</p>
  2892.  
  2893.  
  2894.  
  2895. <p>We saw this pattern in the Web 2.0 and social era, which was, you would have the product genius—everybody wanted to be the Steve Jobs, who didn&#8217;t hold focus groups, didn&#8217;t ask their customers what they wanted. The Henry Ford quote about “They all say faster horses,” and I&#8217;m the genius who comes in and tweaks these things and ships them. And that often takes you very far.</p>
  2896.  
  2897.  
  2898.  
  2899. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1813" target="_blank" rel="noreferrer noopener">30.13</a>: I also think it&#8217;s a bias of success. We only know about the ones that succeed, but the best ones, when they grow up and start to serve an audience that&#8217;s way bigger than what they could hold in their head, adopt A/B testing and ABX testing throughout their organization. And a good example of that is Facebook.</p>
  2900.  
  2901.  
  2902.  
  2903. <p>Facebook stopped being driven by gut choices and started having to do A/B testing and ABX testing in every aspect of their business. Compare that to Snap, which, again, was kind of the last of the great product geniuses to come out. Evan [Spiegel] was heralded as “the product genius,” but I think they ran that too long, and they kept shipping on vibes rather than shipping on ABX testing and growing and, you know, being more boring.</p>
  2904.  
  2905.  
  2906.  
  2907. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1864" target="_blank" rel="noreferrer noopener">31.04</a>: But again, that&#8217;s how you get the global reach. I think there&#8217;s a lot of people who probably are really great vibe shippers. And they&#8217;re probably having great success doing that. The question is, as their company grows and starts to hit harder times or the growth starts to slow, can that vibe shipping take them over the hump? And I would argue, no, I think you have to grow up and start to have more accountable metrics that, you know, scale to the size of your audience. </p>
  2908.  
  2909.  
  2910.  
  2911. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1894" target="_blank" rel="noreferrer noopener">31.34</a>: <strong>So in closing. . . We talked about prompt engineering. And then we talked about context engineering. So putting you on the spot. What&#8217;s a buzzword out there that either irks you or you think is undertalked about at this point? So what&#8217;s a buzzword out there, Drew? </strong></p>
  2912.  
  2913.  
  2914.  
  2915. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1917" target="_blank" rel="noreferrer noopener">31.57</a>: [laughs] I mean, I wish you had given me some time to think about it. </p>
  2916.  
  2917.  
  2918.  
  2919. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1918" target="_blank" rel="noreferrer noopener">31.58</a>: <strong>We are in a hype cycle here. . .</strong></p>
  2920.  
  2921.  
  2922.  
  2923. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=1922" target="_blank" rel="noreferrer noopener">32.02</a>: We’re always in a hype cycle. I don’t like anthropomorphizing LLMs or AI, for a whole host of reasons. One, I think it leads to bad understanding and bad mental models, which means that we don&#8217;t have substantive conversations about these things and we don&#8217;t learn how to build really well with them, because we think they&#8217;re intelligent. We think they&#8217;re a PhD in your pocket. We think they&#8217;re all of these things, and they&#8217;re not—they&#8217;re fundamentally different. </p>
  2924.  
  2925.  
  2926.  
  2927. <p>I&#8217;m not against using the way we think the brain works for inspiration. That&#8217;s fine with me. But when you start oversimplifying these and not taking the time to explain to your audience how they actually work—you just say it&#8217;s a PhD in your pocket, and here&#8217;s the benchmark to prove it—you&#8217;re misleading and setting unrealistic expectations. And unfortunately, the market rewards them for that. So they keep going.&nbsp;</p>
  2928.  
  2929.  
  2930.  
  2931. <p>But I also think it just doesn&#8217;t help you build sustainable programs, because you aren&#8217;t actually understanding how it works; you&#8217;re just kind of reducing it down. AGI is one of those things. And superintelligence, but AGI especially.</p>
  2932.  
  2933.  
  2934.  
  2935. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=2001" target="_blank" rel="noreferrer noopener">33.21</a>: I went to school at UC Santa Cruz, and one of my favorite classes I ever took was a seminar with Donna Haraway. Donna Haraway wrote “<a href="https://en.wikipedia.org/wiki/A_Cyborg_Manifesto" target="_blank" rel="noreferrer noopener">A Cyborg Manifesto</a>” in the ’80s. She looks at tech and science history through kind of a feminist lens. You would just sit in that class and your mind would explode, and then at the end, you just have to sit there for like five minutes afterwards, just picking up the pieces. </p>
  2936.  
  2937.  
  2938.  
  2939. <p>She had a great term called “power objects.” A power object is something that we as a society recognize to be incredibly important, believe to be incredibly important, but we don&#8217;t know how it works. That lack of understanding allows us to fill this bucket with whatever we want it to be: our hopes, our fears, our dreams. This happened with DNA; this happened with PET scans and brain scans. This happens all throughout science history, down to phrenology and blood types, things that we understood, or believed, to be important but are not. And big data is another one that is very, very relevant.&nbsp;</p>
  2940.  
  2941.  
  2942.  
  2943. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=2074" target="_blank" rel="noreferrer noopener">34.34</a>: <strong>That&#8217;s my handle on Twitter. </strong></p>
  2944.  
  2945.  
  2946.  
  2947. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=2095" target="_blank" rel="noreferrer noopener">34.55</a>: Yeah, there you go. I fill it with Ben Lorica; that&#8217;s how I fill that power object. But AI is definitely that. And my favorite example of this is when the DeepSeek moment happened: We understood it to be really important, but we didn&#8217;t understand why it worked or how well it worked.</p>
  2948.  
  2949.  
  2950.  
  2951. <p>And so what happened is, if you looked at the news and you looked at people&#8217;s reactions to what DeepSeek meant, you could basically find all the hopes and dreams about whatever was important to that person. So to AI boosters, DeepSeek proved that LLM progress is not slowing down. To AI skeptics, DeepSeek proved that AI companies have no moat. To open source advocates, it proved open is superior. To AI doomers, it proved that we aren&#8217;t being careful enough. Security researchers worried about the risk of backdoors in the models because it was in China. Privacy advocates worried about DeepSeek’s web services collecting sensitive data. China hawks said, “We need more sanctions.” Doves said, “Sanctions don&#8217;t work.” NVIDIA bears said, “We&#8217;re not going to need any more data centers if it&#8217;s going to be this efficient.” And bulls said, “No, we&#8217;re going to need tons of them because it&#8217;s going to use everything.”</p>
  2952.  
  2953.  
  2954.  
  2955. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=2144" target="_blank" rel="noreferrer noopener">35.44</a>: And AGI is another term like that, which means everything and nothing. And whenever the point comes that we&#8217;ve supposedly reached it, it turns out we haven&#8217;t. And compounding that is that it&#8217;s in the contract between OpenAI and Microsoft—I forget the exact term, but it&#8217;s the statement that Microsoft gets access to OpenAI’s technologies until AGI is achieved.</p>
  2956.  
  2957.  
  2958.  
  2959. <p>And so it&#8217;s a very loaded definition right now, one that&#8217;s being debated back and forth as they try to figure out how to take [Open]AI into being a for-profit corporation. And Microsoft has a lot of leverage, because how do you define AGI? Are we going to go to court to define what AGI is? I almost look forward to that.</p>
  2960.  
  2961.  
  2962.  
  2963. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=2188" target="_blank" rel="noreferrer noopener">36.28</a>: So because it&#8217;s going to be that thing, you&#8217;ve seen Sam Altman come out and, some days, talk about how LLMs are just software; some days, about how it’s a PhD in your pocket; and some days, about how we&#8217;ve already passed AGI and it&#8217;s already over. </p>
  2964.  
  2965.  
  2966.  
  2967. <p>I think Nathan Lambert has some <a href="https://www.interconnects.ai/p/agi-is-what-you-want-it-to-be" target="_blank" rel="noreferrer noopener">great writing about how AGI is a mistake</a>. We shouldn&#8217;t talk about trying to turn LLMs into humans. We should try to leverage what they do now, which is something fundamentally different, and we should keep building and leaning into that rather than trying to make them like us. So AGI is my word for you. </p>
  2968.  
  2969.  
  2970.  
  2971. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=2223" target="_blank" rel="noreferrer noopener">37.03</a>: <strong>The way I think of it is, AGI is great for fundraising, let&#8217;s put it that way. </strong></p>
  2972.  
  2973.  
  2974.  
  2975. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=2228" target="_blank" rel="noreferrer noopener">37.08</a>: That&#8217;s basically it. Well, until you need it to have already been achieved, or until you need it to not be achieved because you don&#8217;t want any regulation or if you <em>want</em> regulation—it&#8217;s kind of a fuzzy word. And that has some really good properties. </p>
  2976.  
  2977.  
  2978.  
  2979. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=2243" target="_blank" rel="noreferrer noopener">37.23</a>: <strong>So I&#8217;ll close by throwing in my own term. Prompt engineering, context engineering. . . I will close by saying pay attention to this boring term my friend Ion Stoica is now talking more about: “systems engineering.” If you look particularly at agentic applications, you&#8217;re talking about systems.</strong></p>
  2980.  
  2981.  
  2982.  
  2983. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=2275" target="_blank" rel="noreferrer noopener">37.55</a>: Can I add one thing to this? Violent agreement. I think that is an underrated. . . </p>
  2984.  
  2985.  
  2986.  
  2987. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=2280" target="_blank" rel="noreferrer noopener">38.00</a>: <strong>Although I think it&#8217;s too boring a term, Drew, to take off.</strong></p>
  2988.  
  2989.  
  2990.  
  2991. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=2283" target="_blank" rel="noreferrer noopener">38.03</a>: That’s fine! The reason I like it—and you were talking about this when you talked about fine-tuning—is that, looking at the way people build, and the way I see successful teams build, there&#8217;s pretraining, where you&#8217;re basically training on unstructured data and building your base knowledge, your base English capabilities, and all that. And then you have posttraining. And in general, posttraining is where you build. I do think of it as a form of interface design, even though you are adding new skills: You&#8217;re teaching reasoning; you&#8217;re teaching it validated functions like code and math. You&#8217;re teaching it how to chat with you. This is where it learns to converse. You&#8217;re teaching it how to use tools and specific sets of tools. And then you&#8217;re teaching it alignment: what&#8217;s safe, what&#8217;s not safe, all these other things. </p>
  2992.  
  2993.  
  2994.  
  2995. <p>But then after it ships, you can still RL that model, you can still fine-tune that model, you can still prompt engineer that model, and you can still context engineer that model. And back to the systems engineering thing: I think we&#8217;re going to see posttraining extend all the way through to the final applied AI product. That&#8217;s going to be a real shades-of-gray gradient. And this is one of the reasons why I think open models have a pretty big advantage in the future: You&#8217;re going to dip down throughout that, leverage that.&nbsp;.&nbsp;.</p>
  2996.  
  2997.  
  2998.  
  2999. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=2372" target="_blank" rel="noreferrer noopener">39.32</a>: The only thing that&#8217;s keeping us from doing that now is that we don&#8217;t have the tools and the operating system to align everything from posttraining through to shipping. Once we do, that operating system is going to change how we build, because the distance between posttraining and building is going to look really, really blurry. I really like the systems engineering type of approach, and I think you could start to see this yesterday, [when] Thinking Machines released their first product.</p>
  3000.  
  3001.  
  3002.  
  3003. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=2404" target="_blank" rel="noreferrer noopener">40.04</a>: And so Thinking Machines is Mira [Murati]&#8217;s company, her very hyped thing. They launched their first product, and it’s called Tinker. And it&#8217;s essentially, “Hey, you can write very simple Python code, and then we will do the RL or the fine-tuning for you, using our cluster of GPUs, so you don&#8217;t have to manage that.” And that is the type of thing that we want to see in a maturing development framework. And you start to see this operating system emerging. </p>
  3004.  
  3005.  
  3006.  
  3007. <p>And it reminds me of the early days of O&#8217;Reilly, where I had to stand up a web server, maintain a web server, do all of these things, and now I don&#8217;t have to. I can spin up a Docker image, I can ship to Render, I can ship to Vercel. All of these shared complicated things now have frameworks and tooling, and I think we&#8217;re going to see a similar evolution from that. And I&#8217;m really excited. And I think you have picked a great underrated term.&nbsp;</p>
  3008.  
  3009.  
  3010.  
  3011. <p><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=2456" target="_blank" rel="noreferrer noopener">40.56</a>: <strong>Now with that. Thank you, Drew. </strong><br><br><a href="https://cdn.oreillystatic.com/radar/generative-ai-real-world-podcast/GenAI_in_the_Real_World_with_Drew_Breunig.mp3#t=2458" target="_blank" rel="noreferrer noopener">40.58</a>: Awesome. Thank you for having me, Ben.</p>
  3012. ]]></content:encoded>
  3013. <wfw:commentRss>https://www.oreilly.com/radar/podcast/generative-ai-in-the-real-world-context-engineering-with-drew-breunig/feed/</wfw:commentRss>
  3014. <slash:comments>0</slash:comments>
  3015. </item>
  3016. <item>
  3017. <title>From Habits to Tools</title>
  3018. <link>https://www.oreilly.com/radar/from-habits-to-tools/</link>
  3019. <pubDate>Wed, 15 Oct 2025 12:49:38 +0000</pubDate>
  3020. <dc:creator><![CDATA[Andrew Stellman]]></dc:creator>
  3021. <category><![CDATA[AI & ML]]></category>
  3022. <category><![CDATA[Commentary]]></category>
  3023.  
  3024. <guid isPermaLink="false">https://www.oreilly.com/radar/?p=17557</guid>
  3025.  
  3026. <media:content
  3027. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Abstract-colorful-drops_Otherworldly.jpg"
  3028. medium="image"
  3029. type="image/jpeg"
  3030. width="2304"
  3031. height="1792"
  3032. />
  3033.  
  3034. <media:thumbnail
  3035. url="https://www.oreilly.com/radar/wp-content/uploads/sites/3/2025/10/Abstract-colorful-drops_Otherworldly-160x160.jpg"
  3036. width="160"
  3037. height="160"
  3038. />
  3039. <custom:subtitle><![CDATA[The Future of AI-Assisted Development]]></custom:subtitle>
  3040. <description><![CDATA[This article is part of a series on the Sens-AI Framework—practical habits for learning and coding with AI. Read the original framework introduction and explore the complete methodology in Andrew Stellman’s O’Reilly report Critical Thinking Habits for Coding with AI. AI-assisted coding is here to stay. I&#8217;ve seen many companies now require all developers to install Copilot extensions [&#8230;]]]></description>
  3041. <content:encoded><![CDATA[
  3042. <p class="has-cyan-bluish-gray-background-color has-background"><em><em>This article is part of a series on the Sens-AI Framework—practical habits for learning and coding with AI. Read the <a href="https://www.oreilly.com/radar/the-sens-ai-framework/" target="_blank" rel="noreferrer noopener">original framework introduction</a> and explore the complete methodology in Andrew Stellman’s O’Reilly report </em></em><a href="https://learning.oreilly.com/library/view/critical-thinking-habits/0642572243326/" target="_blank" rel="noreferrer noopener">Critical Thinking Habits for Coding with AI</a><em><em>.</em></em></p>
  3043.  
  3044.  
  3045.  
  3046. <p>AI-assisted coding is here to stay. I&#8217;ve seen many companies now require all developers to install Copilot extensions in their IDEs, and teams are increasingly being measured on AI-adoption metrics. Meanwhile, the tools themselves have become genuinely useful for routine tasks: Developers regularly use them to generate boilerplate, convert between formats, write unit tests, and explore unfamiliar APIs—giving us more time to focus on solving our real problems instead of wrestling with syntax or going down research rabbit holes.</p>
  3047.  
  3048.  
  3049.  
  3050. <p>Many team leads, managers, and instructors looking to help developers ramp up on AI tools assume the biggest challenge is learning to write better prompts or picking the right AI tool; that assumption misses the point. The real challenge is figuring out how developers can use these tools in ways that keep them engaged and strengthen their skills instead of becoming disconnected from the code and letting their development skills atrophy.</p>
  3051.  
  3052.  
  3053.  
  3054. <p>This was the challenge I took on when I developed the Sens-AI Framework. When I was updating <a href="https://learning.oreilly.com/library/view/head-first-c/9781098141776/" target="_blank" rel="noreferrer noopener"><em>Head First C#</em></a> (O&#8217;Reilly 2024) to help readers ramp up on AI skills alongside other fundamental development skills, I watched new learners struggle not with the mechanics of prompting but with maintaining their understanding of the code they were producing. The framework emerged from those observations—five habits that keep developers engaged in the design conversation: context, research, framing, refining, and critical thinking. These habits address the real issue: making sure the developer stays in control of the work, understanding not just what the code does but why it&#8217;s structured that way.</p>
  3055.  
  3056.  
  3057.  
  3058. <h2 class="wp-block-heading"><strong>What We&#8217;ve Learned So Far</strong></h2>
  3059.  
  3060.  
  3061.  
  3062. <p>When I updated <em>Head First C# </em>to include AI exercises, I had to design them knowing learners would paste instructions directly into AI tools. That forced me to be deliberate: The instructions had to guide the learner while also shaping how the AI responded. Testing those same exercises against Copilot and ChatGPT showed the same kinds of problems over and over—AI filling in gaps with the wrong assumptions or producing code that looked fine until you actually had to run it, read and understand it, or modify and extend it.</p>
  3063.  
  3064.  
  3065.  
  3066. <p>Those issues don&#8217;t only trip up new learners. More experienced developers can fall for them too. The difference is that experienced developers already have habits for catching themselves, while newer developers usually don&#8217;t—unless we make a point of teaching them. AI skills aren&#8217;t exclusive to senior or experienced developers either; I&#8217;ve seen relatively new developers ramp up their AI skills quickly because they built these habits early.</p>
  3067.  
  3068.  
  3069.  
  3070. <h2 class="wp-block-heading"><strong>Habits Across the Lifecycle</strong></h2>
  3071.  
  3072.  
  3073.  
  3074. <p>In “<a href="https://www.oreilly.com/radar/the-sens-ai-framework/" target="_blank" rel="noreferrer noopener">The Sens-AI Framework</a>,” I introduced the five habits and explained how they work together to keep developers engaged with their code rather than becoming passive consumers of AI output. These habits also address specific failure modes, and understanding how they solve real problems points the way toward broader implementation across teams and tools:</p>
  3075.  
  3076.  
  3077.  
  3078. <p><strong>Context</strong> helps avoid vague prompts that lead to poor output. Ask an AI to “make this code better” without sharing what the code does, and it might suggest adding comments to a performance-critical section where comments would just clutter. But provide the context—“This is a high-frequency trading system where microseconds matter,” along with the actual code structure, dependencies, and constraints—and the AI understands it should focus on optimizations, not documentation. (A short sketch after these five habits shows the difference in practice.)</p>
  3079.  
  3080.  
  3081.  
  3082. <p><strong>Research</strong> makes sure the AI isn&#8217;t your only source of truth. When you rely solely on AI, you risk compounding errors—the AI makes an assumption, you build on it, and soon you&#8217;re deep in a solution that doesn&#8217;t match reality. Cross-checking with documentation or even asking a different AI can reveal when you&#8217;re being led astray.</p>
  3083.  
  3084.  
  3085.  
  3086. <p><strong>Framing</strong> is about asking questions that set up useful answers. &#8220;How do I handle errors?&#8221; gets you a try-catch block. &#8220;How do I handle network timeout errors in a distributed system where partial failures need rollback?&#8221; gets you circuit breakers and compensation patterns. As I showed in “<a href="https://www.oreilly.com/radar/understanding-the-rehash-loop/" target="_blank" rel="noreferrer noopener">Understanding the Rehash Loop</a>,” proper framing can break the AI out of circular suggestions.</p>
  3087.  
  3088.  
  3089.  
  3090. <p><strong>Refining</strong> means not settling for the first thing the AI gives you. The first response is rarely the best—it&#8217;s just the AI&#8217;s initial attempt. When you iterate, you&#8217;re steering toward better patterns. Refining moves you from &#8220;This works&#8221; to “This is actually good.&#8221;</p>
  3091.  
  3092.  
  3093.  
  3094. <p><strong>Critical thinking</strong> ties it all together, asking whether the code actually works for your project. It&#8217;s debugging the AI&#8217;s assumptions, reviewing for maintainability, and asking, &#8220;Will this make sense six months from now?&#8221;</p>
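<p>As a concrete illustration of the context habit, here&#8217;s a small sketch of the difference between the vague prompt and the contextual one. The file name and all of the field contents are invented for illustration:</p>

<pre><code>source_code = open("order_router.py").read()  # hypothetical file under review

vague_prompt = f"Make this code better:\n{source_code}"

context = {
    "purpose": "high-frequency trading hot path; microseconds matter",
    "constraints": "no heap allocation inside the loop; latency budget 5 microseconds",
    "dependencies": "lock-free ring buffer from our internal messaging library",
}

contextual_prompt = (
    "Review the code below for performance.\n"
    f"Purpose: {context['purpose']}\n"
    f"Constraints: {context['constraints']}\n"
    f"Dependencies: {context['dependencies']}\n"
    f"Code:\n{source_code}"
)
</code></pre>

<p>The second prompt steers the model toward optimization rather than cosmetic edits, which is exactly the failure mode the context habit is meant to prevent.</p>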
  3095.  
  3096.  
  3097.  
  3098. <p>The real power of the Sens-AI Framework comes from using all five habits together. They form a reinforcing loop: Context informs research, research improves framing, framing guides refinement, refinement reveals what needs critical thinking, and critical thinking shows you what context you were missing. When developers use these habits in combination, they stay engaged with the design and engineering process rather than becoming passive consumers of AI output. It&#8217;s the difference between using AI as a crutch and using it as a genuine collaborator.</p>
  3099.  
  3100.  
  3101.  
  3102. <h2 class="wp-block-heading"><strong>Where We Go from Here</strong></h2>
  3103.  
  3104.  
  3105.  
  3106. <p>If developers are going to succeed with AI, these habits need to show up beyond individual workflows. They need to become part of:</p>
  3107.  
  3108.  
  3109.  
  3110. <p><strong>Education</strong>: <em>Teaching AI literacy alongside basic coding skills.</em> As I described in “<a href="https://www.oreilly.com/radar/the-ai-teaching-toolkit-practical-guidance-for-teams/" target="_blank" rel="noreferrer noopener">The AI Teaching Toolkit</a>,” techniques like having learners debug intentionally flawed AI output help them spot when the AI is confidently wrong and practice breaking out of rehash loops. These aren&#8217;t advanced skills; they&#8217;re foundational.</p>
  3111.  
  3112.  
  3113.  
  3114. <p><strong>Team practice</strong>: <em>Using code reviews, pairing, and retrospectives to evaluate AI output the same way we evaluate human-written code.</em> In my teaching article, I described techniques like AI archaeology and shared language patterns. What matters here is making those kinds of habits part of standard training—so teams develop vocabulary like &#8220;I&#8217;m stuck in a rehash loop&#8221; or &#8220;The AI keeps defaulting to the old pattern.&#8221; And as I explored in “<a href="https://www.oreilly.com/radar/trust-but-verify/" target="_blank" rel="noreferrer noopener">Trust but Verify</a>,” treating AI-generated code with the same scrutiny as human code is essential for maintaining quality.</p>
  3115.  
  3116.  
  3117.  
  3118. <p><strong>Tooling</strong>: <em>IDEs and linters that don&#8217;t just generate code but highlight assumptions and surface design trade-offs.</em> Imagine your IDE warning: &#8220;Possible rehash loop detected: you&#8217;ve been iterating on this same approach for 15 minutes.&#8221; That&#8217;s one direction IDEs need to evolve—surfacing assumptions and warning when you&#8217;re stuck. The technical debt risks I outlined in “<a href="https://www.oreilly.com/radar/building-ai-resistant-technical-debt/" target="_blank" rel="noreferrer noopener">Building AI-Resistant Technical Debt</a>” could be mitigated with better tooling that catches antipatterns early.</p>
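<p>No IDE does this today; as a purely hypothetical sketch of the idea, a detector could treat a run of near-duplicate suggestions as a loop signal, with the thresholds and sample strings below invented for illustration:</p>

<pre><code>from difflib import SequenceMatcher

# Hypothetical tooling sketch: flag a possible rehash loop when several
# consecutive AI suggestions stay nearly identical.
SIMILARITY = 0.9
STREAK = 3  # consecutive near-duplicate pairs before warning

def detect_rehash(suggestions: list[str]) -> bool:
    run = 0
    for prev, cur in zip(suggestions, suggestions[1:]):
        ratio = SequenceMatcher(None, prev, cur).ratio()
        run = run + 1 if ratio >= SIMILARITY else 0
        if run >= STREAK:
            return True
    return False

if detect_rehash(["add a try/except around the call",
                  "add a try/except around that call",
                  "add a try/except around this call",
                  "add a try/except around the call"]):
    print("Possible rehash loop detected: try reframing the problem.")
</code></pre>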
  3119.  
  3120.  
  3121.  
  3122. <p><strong>Culture</strong>: <em>A shared understanding that AI is a collaboration tool (and not a teammate)</em>. A team&#8217;s measure of success for code shouldn&#8217;t revolve around AI. Teams still need to understand that code, keep it maintainable, and grow their own skills along the way. Getting there will require changes in how they work together—for example, adding AI-specific checks to code reviews or developing shared vocabulary for when AI output starts drifting. This cultural shift connects to the requirements engineering parallels I explored in “<a href="https://www.oreilly.com/radar/prompt-engineering-is-requirements-engineering/" target="_blank" rel="noreferrer noopener">Prompt Engineering Is Requirements Engineering</a>”—we need the same clarity and shared understanding with AI that we&#8217;ve always needed with human teams.</p>
  3123.  
  3124.  
  3125.  
  3126. <p><strong>More convincing output will require more sophisticated evaluation.</strong> Models will keep getting faster and more capable. What won&#8217;t change is the need for developers to think critically about the code in front of them.</p>
  3127.  
  3128.  
  3129.  
  3130. <p>The Sens-AI habits work alongside today&#8217;s tools and are designed to stay relevant to tomorrow&#8217;s tools as well. They&#8217;re practices that keep developers in control, even as models improve and the output gets harder to question. The framework gives teams a way to talk about both the successes and the failures they see when using AI. From there, it&#8217;s up to instructors, tool builders, and team leads to decide how to put those lessons into practice.</p>
  3131.  
  3132.  
  3133.  
  3134. <p>The next generation of developers will never know coding without AI. Our job is to make sure they build lasting engineering habits alongside these tools—so AI strengthens their craft rather than hollowing it out.</p>
  3135. ]]></content:encoded>
  3136. </item>
  3137. </channel>
  3138. </rss>
  3139.  
