This is the first article in a series on agentic engineering and AI-driven development. Look for the next article on March 19 on O’Reilly Radar.

There’s been a lot of hype about AI and software development, and it comes in two flavors. One says, “We’re all doomed; tools like Claude Code will make software engineering obsolete within a year.” The other says, “Don’t worry, everything’s fine, AI is just another tool in the toolbox.” Neither is honest.

I’ve spent over 20 years writing about software development for practitioners, covering everything from coding and architecture to project management and team dynamics. For the last two years I’ve been focused on AI, training developers to use these tools effectively and writing about what works and what doesn’t in books, articles, and reports. And I kept running into the same problem: I had yet to find anyone with a coherent answer for how professional developers should actually work with these tools. There are plenty of tips and plenty of hype, but very little structure, and very little you can apply, teach, critique, or improve.

I’d been observing developers at work using AI with varying levels of success, and I realized we need to start thinking about this as its own discipline. Andrej Karpathy, the former head of AI at Tesla and a founding member of OpenAI, recently proposed the term “agentic engineering” for disciplined development with AI agents, and others like Addy Osmani are getting on board. Osmani’s framing is that AI agents handle implementation, but the human owns the architecture, reviews every diff, and tests relentlessly. I think that’s right.

But I’ve spent much of the last two years teaching developers how to use tools like Claude Code, agent mode in Copilot, Cursor, and others, and what I keep hearing is that they already know they should be reviewing the AI’s output, maintaining the architecture, writing tests, keeping documentation current, and staying in control of the codebase. They know how to do it in theory. But they get stuck trying to apply it in practice: How do you actually review thousands of lines of AI-generated code? How do you keep the architecture coherent when you’re working across multiple AI tools over weeks? How do you know when the AI is confidently wrong? And it’s not just junior developers who are having trouble with agentic engineering. I’ve talked to senior engineers who struggle with the shift to agentic tools, and intermediate developers who take to it naturally. The difference isn’t necessarily the years of experience; it’s whether they’ve found an effective and structured way to work with AI coding tools. That gap between knowing what developers should be doing with agentic engineering and knowing how to integrate it into their day-to-day work is a real source of tension for a lot of engineers right now. That’s the gap this series is trying to fill.

Despite what much of the hype about agentic engineering is telling you, this kind of development doesn’t eliminate the need for developer expertise; just the opposite. Working effectively with AI agents actually raises the bar for what developers need to know. I wrote about that experience gap in an earlier O’Reilly Radar piece called “The Cognitive Shortcut Paradox.” The developers who get the most from working with AI coding tools are the ones who already know what good software looks like, and can usually tell if the AI wrote it.

The idea that AI tools work best when experienced developers are driving them matched everything I’d observed. It rang true, and I wanted to prove it in a way that other developers would understand: by building software. So I started building a specific, practical approach to agentic engineering designed for developers to follow, and then I put it to the test. I used it to build a production system from scratch, with the rule that AI would write all the code. I needed a project that was complex enough to stress-test the approach and interesting enough to keep me engaged through the hard parts. I wanted to apply everything I’d learned and discover what I still didn’t know. That’s when I came back to Monte Carlo simulations.

The experiment

I’ve been obsessed with Monte Carlo simulations ever since I was a kid. My dad’s an epidemiologist—his whole career has been about finding patterns in messy population data, which means statistics was always a part of our lives (and it also means that I learned SPSS at a very early age). When I was maybe 11 he told me about the drunken sailor problem: A sailor leaves a bar on a pier, taking a random step toward the water or toward his ship each time. Does he fall in or make it home? You can’t know from any single run. But run the simulation a thousand times, and the pattern emerges from the noise. The individual outcome is random; the aggregate is predictable.

I remember writing that simulation in BASIC on my TRS-80 Color Computer 2: a little blocky sailor stumbling across the screen, two steps forward, one step back. The drunken sailor is the “Hello, world” of Monte Carlo simulations. Monte Carlo is a technique for problems you can’t solve analytically: You simulate them hundreds or thousands of times and measure the aggregate results. Each individual run is random, but the statistics converge on the true answer as the sample size grows. It’s how we model everything from nuclear physics to financial risk to the spread of disease across populations.
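Here’s a minimal sketch of that random walk in Python (my own illustration for this article, not code from Octobatch):

```python
import random

def sailor_walk(steps: int, start: int = 10, pier_width: int = 20) -> bool:
    """Simulate one drunken sailor. Returns True if he falls in the water."""
    position = start
    for _ in range(steps):
        position += random.choice((-1, 1))  # one random step toward water or ship
        if position <= 0:
            return True      # fell off the pier
        if position >= pier_width:
            return False     # made it back to the ship
    return False             # still stumbling when time ran out

# Each run is random, but the aggregate converges as the sample size grows.
runs = 10_000
fell_in = sum(sailor_walk(steps=200) for _ in range(runs))
print(f"Fell in the water: {fell_in / runs:.1%} of {runs} runs")
```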

What if you could run that kind of simulation today by describing it in plain English? Not a toy demo but thousands of iterations with seeded randomness for reproducibility, where the outputs get validated and the results get aggregated into actual statistics you can use. Or a pipeline where an LLM generates content, a second LLM scores it, and anything that doesn’t pass gets sent back for another try.
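That generate-and-score loop is easy to sketch. The helpers below (generate, score) are hypothetical stand-ins, not Octobatch’s actual API; the point is the shape of the retry loop:

```python
import random

# Hypothetical stand-ins for the two LLM calls: in a real pipeline these would
# call a generation model and a separate judge model.
def generate(prompt: str) -> str:
    return f"draft response to: {prompt}"

def score(draft: str) -> float:
    return random.uniform(0.5, 1.0)  # pretend judge score

def run_pipeline(prompts, passing_score=0.8, max_attempts=3):
    """Generate with one model, score with another, resubmit anything that fails."""
    results = []
    for prompt in prompts:
        accepted = None
        for _ in range(max_attempts):
            draft = generate(prompt)
            if score(draft) >= passing_score:
                accepted = draft
                break
        results.append(accepted)  # None means it never passed review
    return results

print(run_pipeline(["Write dialogue for a grumpy innkeeper NPC"]))
```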

The goal of my experiment was to build that system, which I called Octobatch. Right now, the industry is constantly looking for new real-world, end-to-end case studies in agentic engineering, and I wanted Octobatch to be exactly that case study.

I took everything I’d learned from teaching and observing developers working with AI, put it to the test by building a real system from scratch, and turned the lessons into a structured approach to agentic engineering I’m calling AI-driven development, or AIDD. This is the first article in a series about what agentic engineering looks like in practice, what it demands from the developer, and how you can apply it to your own work.

The result is a fully functioning, well-tested application: about 21,000 lines of Python across several dozen files, backed by full specs, nearly a thousand automated tests, and high-quality integration and regression test suites. I used Claude Cowork to review all the AI chats from the entire project, and it turns out that I built the entire application in roughly 75 hours of active development time over seven weeks. For comparison, I built Octobatch in just over half the time I spent last year playing Blue Prince.

But this series isn’t just about Octobatch. I integrated AI tools at every stage: Claude and Gemini collaborating on architecture, Claude Code writing the implementation, LLMs generating the pipelines that run on the system they helped build. This series is about what I learned from that process: the patterns that worked, the failures that taught me the most, and the orchestration mindset that ties it all together. Each article pulls a different lesson from the experiment, from validation architecture to multi-LLM coordination to the values that kept the project on track.

Agentic engineering and AI-driven development

When most people talk about using AI to write code, they mean one of two things: AI coding assistants like GitHub Copilot, Cursor, or Windsurf, which have evolved well beyond autocomplete into agentic tools that can run multifile editing sessions and define custom agents; or “vibe coding,” where you describe what you want in natural language and accept whatever comes back. These coding assistants are genuinely impressive, and vibe coding can be really productive.

Using these tools effectively on a real project, however, while maintaining architectural coherence across thousands of lines of AI-generated code, is a different problem entirely. AIDD aims to help solve that problem. It’s a structured approach to agentic engineering where AI tools drive substantial portions of the implementation, architecture, and even project management, while you, the human in the loop, decide what gets built and whether it’s any good. By “structure,” I mean a set of practices developers can learn and follow, a way to know whether the AI’s output is actually good, and a way to stay on track across the lifetime of a project. If agentic engineering is the discipline, AIDD is one way to practice it.

In AI-driven development, developers don’t just accept suggestions or hope the output is correct. They assign specific roles to specific tools: one LLM for architecture planning, another for code execution, a coding agent for implementation, and the human for vision, verification, and the decisions that require understanding the whole system.

And the “driven” part is literal. The AI is writing almost all of the code. One of my ground rules for the Octobatch experiment was that I’d let AI write all of it. I have high code quality standards, and part of the experiment was seeing whether AIDD could produce a system that meets them. The human decides what gets built, evaluates whether it’s right, and maintains the constraints that keep the system coherent.

Not everyone agrees on how much the developer needs to stay in the loop, and the fully autonomous end of the spectrum is already producing cautionary tales. Nicholas Carlini at Anthropic recently tasked 16 Claude instances with building a C compiler in parallel with no human in the loop. After 2,000 sessions and $20,000 in API costs, the agents produced a 100,000-line compiler that can build a Linux kernel but isn’t a drop-in replacement for anything, and when all 16 agents got stuck on the same bug, Carlini had to step back in and partition the work himself. Even strong advocates of a fully hands-off, vibe-driven approach to agentic engineering might call that a step too far. The question is how much human judgment you need to make that code trustworthy, and what specific practices help you apply that judgment effectively.

The orchestration mindset

If you want to get developers thinking about agentic engineering in the right way, you have to start with how they think about working with AI, not just what tools they use. That’s where I started when I began building a structured approach, and it’s why I started with habits. I developed a framework for these called the Sens-AI Framework, published as both an O’Reilly report (Critical Thinking Habits for Coding with AI) and a Radar series. It’s built around five practices: providing context, doing research before prompting, framing problems precisely, iterating deliberately on outputs, and applying critical thinking to everything the AI produces. I started there because habits are how you lock in the way you think about the work you’re doing. Without them, AI-driven development produces plausible-looking code that falls apart under scrutiny. With them, it produces systems that a single developer couldn’t build alone in the same time frame.

Habits are the foundation, but they’re not the whole picture. AIDD also has practices (concrete techniques like multi-LLM coordination, context file management, and using one model to validate another’s output) and values (the principles behind those practices). If you’ve worked with Agile methodologies like Scrum or XP, that structure should be pretty familiar: Practices tell you how to work day-to-day, and habits are the reflexes you develop so that the practices become automatic.

Values often seem weirdly theoretical, but they’re an important piece of the puzzle because they guide your decisions when the practices don’t give you a clear answer. There’s an emerging culture around agentic engineering right now, and the values you bring to your project either fit or clash with that culture. Understanding where the values come from is what makes the practices stick. All of that leads to a whole new mindset, what I’m calling the orchestration mindset. This series builds all four layers, using Octobatch as the proving ground.

Octobatch was a deliberate experiment in AIDD. I designed the project as a test case for the entire approach, to see what a disciplined AI-driven workflow could produce and where it would break down, and I used it to apply and refine the practices and values to make them effective and easy to adopt. And whether by instinct or coincidence, I picked the right project for this experiment. Octobatch is a batch orchestrator. It coordinates asynchronous jobs, manages state across failures, tracks dependencies between pipeline steps, and makes sure validated results come out the other end. That kind of system is fun to design, but many of the details, like state machines, retry logic, crash recovery, and cost accounting, can be tedious to implement. It’s exactly the kind of work where AIDD should shine, because the patterns are well understood but the implementation is repetitive and error-prone.

Orchestration—the work of coordinating multiple independent processes toward a coherent result—evolved into a core idea behind AIDD. I found myself orchestrating LLMs the same way Octobatch orchestrates batch jobs: assigning roles, managing handoffs, validating outputs, recovering from failures. The system I was building and the process I was using to build it followed the same pattern. I didn’t expect it when I started, but building a system that orchestrates AI turns out to be a pretty good way to learn how to orchestrate AI. That’s the accidental part of the accidental orchestrator. That parallel runs through every article in this series.


The path to batch

I didn’t begin the Octobatch project by starting with a full end-to-end Monte Carlo simulation. I started where most people start: typing prompts into a chat interface. I was experimenting with different simulation and generation ideas to give the project some structure, and a few of them stuck. A blackjack strategy comparison turned out to be a great test case for a multistep Monte Carlo simulation. NPC dialogue generation for a role-playing game gave me a creative workload with subjective quality to measure. Both had the same shape: a set of structured inputs, each processed the same way. So I had Claude write a simple script to automate what I’d been doing by hand, and I used Gemini to double-check the work, make sure Claude really understood my ask, and fix hallucinations. It worked fine at small scale, but once I started running more than 100 or so units, I kept hitting rate limits, the caps that providers put on how many API requests you can make per minute.

That’s what pushed me to LLM batch APIs. Instead of sending individual prompts one at a time and waiting for each response, the major LLM providers all offer batch APIs that let you submit a file containing all of your requests at once. The provider processes them on their own schedule; you wait for results instead of getting them immediately, but you don’t have to worry about rate caps. I was happy to discover they also cost 50% less, and that’s when I started tracking token usage and costs in earnest. But the real surprise was that batch APIs performed better than real-time APIs at scale. Once pipelines got past the 100- or 200-unit mark, batch started running significantly faster than real time. The provider processes the whole batch in parallel on their infrastructure, so you’re not bottlenecked by round-trip latency or rate caps anymore.
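To make that concrete, here’s roughly what submitting a batch looks like with OpenAI’s Batch API: every request goes into a single JSONL file, you upload it, and you create the batch. The request-file fields and client calls follow OpenAI’s published documentation, but treat this as a sketch and check the current docs before relying on the details:

```python
import json
from openai import OpenAI

client = OpenAI()

# Example input data; in practice this would be your structured units of work.
blackjack_hands = ["player 16 vs. dealer 10", "player A,7 vs. dealer 9"]

# One JSON object per line: every request in the batch goes into a single file.
with open("requests.jsonl", "w") as f:
    for i, hand in enumerate(blackjack_hands):
        request = {
            "custom_id": f"hand-{i}",
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": "gpt-4o-mini",
                "messages": [{"role": "user", "content": f"Play this hand: {hand}"}],
            },
        }
        f.write(json.dumps(request) + "\n")

# Upload the file and create the batch; results arrive asynchronously
# at roughly half the real-time price.
batch_file = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)
print(batch.id, batch.status)
```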

The switch to batch APIs changed how I thought about the whole problem of coordinating LLM API calls at scale, and it led to the idea of configurable pipelines. I could chain stages together: The output of one step could become the input to the next, and I could kick off the whole pipeline and come back to finished results. It turns out I wasn’t the only one making the shift to batch APIs. Between April 2024 and July 2025, OpenAI, Anthropic, and Google all launched batch APIs, converging on the same pricing model: 50% of the real-time price in exchange for asynchronous processing.

You probably didn’t notice that all three major AI providers launched batch APIs. The industry conversation was dominated by agents, tool use, MCP, and real-time reasoning. Batch APIs shipped with relatively little fanfare, but they represent a real shift in how we can use LLMs. Instead of treating them as conversational partners or one-shot SaaS APIs, we can treat them as processing infrastructure, closer to a MapReduce job than a chatbot. You give them structured data and a prompt template, and they process all of it and hand back the results. What matters is that you can now run tens of thousands of these transformations reliably, at scale, without managing rate limits or connection failures.

Why orchestration?

If batch APIs are so useful, why can’t you just write a for-loop that submits requests and collects results? You can, and for simple cases a quick script with a for-loop works fine. But once you start running larger workloads, the problems start to pile up. Solving those problems turned out to be one of the most important lessons for developing a structured approach to agentic engineering.

First, batch jobs are asynchronous. You submit a job, and results come back hours later, so your script needs to track what was submitted and poll for completion. If your script crashes in the middle, you lose that state. Second, batch jobs can partially fail. Maybe 97% of your requests succeeded and 3% didn’t. Your code needs to figure out which 3% failed, extract them, and resubmit just those items. Third, if you’re building a multistage pipeline where the output of one step feeds into the next, you need to track dependencies between stages. And fourth, you need cost accounting. When you’re running tens of thousands of requests, you want to know how much you spent, and ideally, how much you’re going to spend when you first start the batch. Every one of these has a direct parallel to what you’re doing in agentic engineering: keeping track of the work multiple AI agents are doing at once, dealing with code failures and bugs, making sure the entire project stays coherent when AI coding tools are only looking at the one part currently in context, and stepping back to look at the broader project management picture.
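Here’s an illustration of the kind of state you end up persisting between polls. This is a sketch of the problem, not Octobatch’s actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class BatchManifest:
    batch_id: str                     # provider's ID for the submitted batch
    stage: str                        # which pipeline step this batch belongs to
    status: str = "submitted"         # submitted | in_progress | completed | failed
    depends_on: list[str] = field(default_factory=list)   # upstream stages
    succeeded_ids: list[str] = field(default_factory=list)
    failed_ids: list[str] = field(default_factory=list)   # resubmit only these
    estimated_cost_usd: float = 0.0
    actual_cost_usd: float = 0.0

# Persist something like this to disk after every poll, so a crash mid-run loses
# nothing: on restart, reload the manifest, resubmit failed_ids, and keep polling.
```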

All of these problems are solvable, but they’re not problems you want to solve over and over (in either situation—whether you’re orchestrating LLM batch jobs or orchestrating AI coding tools). Solving them in the code taught some interesting lessons about the overall approach to agentic engineering. Batch processing moves the complexity from connection management to state management. Real-time APIs are hard because of rate limits and retries. Batch APIs are hard because you have to track what’s in flight, what succeeded, what failed, and what’s next.

Before I started development, I went looking for existing tools that handled this combination of problems, because I didn’t want to waste my time reinventing the wheel. I didn’t find anything that did the job I needed. Workflow orchestrators like Apache Airflow and Dagster manage DAGs and task dependencies, but they assume tasks are deterministic and don’t provide LLM-specific features like prompt template rendering, schema-based output validation, or retry logic triggered by semantic quality checks. LLM frameworks like LangChain and LlamaIndex are designed around real-time inference chains and agent loops—they don’t manage asynchronous batch job lifecycles, persist state across process crashes, or handle partial failure recovery at the chunk level. And the batch API client libraries from the providers themselves handle submission and retrieval for a single batch, but not multistage pipelines, cross-step validation, or provider-agnostic execution.

Nothing I found covered the full lifecycle of multiphase LLM batch workflows, from submission and polling through validation, retry, cost tracking, and crash recovery, across all three major AI providers. That’s what I built.

Lessons from the experiment

The goal of this article, as the first one in my series on agentic engineering and AI-driven development, is to lay out the theory and structure of the Octobatch experiment. The rest of the series goes deep on the lessons I learned from it: the validation architecture, multi-LLM coordination, the practices and values that emerged from the work, and the orchestration mindset that ties it all together. A few early lessons stand out, because they illustrate what AIDD looks like in practice and why developer expertise matters more than ever.

  • You have to run things and check the data. Remember the drunken sailor, the “Hello, world” of Monte Carlo simulations? At one point I noticed that when I ran the simulation through Octobatch, 77.5% of the sailors fell in the water. The results for a random walk should be 50/50, so clearly something was badly wrong. It turned out the random number generator was being re-seeded at every iteration with sequential seed values, which created correlation bias between runs. I didn’t figure out the problem immediately; I ran a bunch of tests using Claude Code as a test runner to generate each test, run it, and log the results; Gemini looked at the results and found the root cause. Claude had trouble coming up with a fix that worked well, and proposed a workaround with a large list of preseeded random number values in the pipeline. Gemini, reviewing my conversations with Claude, proposed a hash-based fix, but it seemed overly complex. Once I understood the problem and rejected their proposed solutions, I decided the best fix was simpler than either of the AI’s suggestions: a persistent RNG per simulation unit that advanced naturally through its sequence (see the sketch after this list). I needed to understand both the statistics and the code to evaluate all three options. Plausible-looking output and correct output aren’t the same thing, and you need enough expertise to tell the difference. (We’ll talk more about this case in the next article in the series.)
  • LLMs often overestimate complexity. At one point I wanted to add support for custom mathematical expressions in the analysis pipeline. Both Claude and Gemini pushed back, telling me, “This is scope creep for v1.0” and “Save it for v1.1.” Claude estimated three hours to implement. Because I knew the codebase, I knew we were already using asteval, a Python library that provides a safe, minimalistic evaluator for mathematical expressions and simple Python statements, elsewhere to evaluate expressions, so this seemed like a straightforward use of a library we already depended on. Both LLMs thought the solution would be far more complex and time-consuming than it actually was; it took just two prompts to Claude Code (generated by Claude), and about five minutes total to implement. The feature shipped and made the tool significantly more powerful. The AIs were being conservative because they didn’t have my context about the system’s architecture. Experience told me the integration would be trivial. Without that experience, I’d have listened to them and deferred a feature that took five minutes.
  • AI is often biased toward adding code, not deleting it. Generative AI is, unsurprisingly, biased toward generation. So when I asked the LLMs to fix problems, their first response was often to add more code, adding another layer or another special case. I can’t think of a single time in the entire project when one of the AIs stepped back and said, “Tear this out and rethink the approach.” The most effective sessions were the ones where I overrode that instinct and pushed for simplicity. This is something experienced developers learn over a career: The most successful changes often delete more than they add—the PRs we brag about are the ones that delete thousands of lines of code.
  • The architecture emerged from failure. The AI tools and I didn’t design Octobatch’s core architecture up front. Our first attempt was a Python script with in-memory state and a lot of hope. It worked for small batches but fell apart at scale: A network hiccup meant restarting from scratch, a malformed response required manual triage. A lot of things fell into place when I added the constraint that the system must survive being killed at any moment. That single requirement led to the tick model (wake up, check state, do work, persist, exit), the manifest file as source of truth, and the entire crash-recovery architecture. We discovered the design by repeatedly failing to do something simpler.
  • Your development history is a dataset. I just told you several stories from the Octobatch project, and this series will be full of them. Every one of those stories came from going back through the chat logs between me, Claude, and Gemini. With AIDD, you have a complete transcript of every architectural decision, every wrong turn, every moment where you overruled the AI, and every moment where it corrected you. Very few development teams have ever had that level of fidelity in their project history. Mining those logs for lessons learned turns out to be one of the most valuable practices I’ve found.
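For the curious, here’s a minimal sketch of the RNG fix from the first lesson above, using NumPy’s generators. It illustrates the pattern, not Octobatch’s actual code:

```python
import numpy as np

# The bug described above: re-seeding the generator on every iteration with
# sequential seed values, which correlated runs that should be independent.
# The fix: one persistent generator per simulation unit, seeded once for
# reproducibility and advanced naturally across all of that unit's iterations.
def run_unit(unit_seed: int, iterations: int, steps: int) -> list[int]:
    rng = np.random.default_rng(unit_seed)      # persistent RNG for this unit
    results = []
    for _ in range(iterations):
        walk = rng.choice([-1, 1], size=steps)  # one random walk
        results.append(int(walk.sum()))
    return results
```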

Near the end of the project, I switched to Cursor to make sure none of this was specific to Claude Code. I created fresh conversations using the same context files I’d been maintaining throughout development, and I was able to bootstrap productive sessions immediately; the context files worked exactly as designed. The practices I’d developed transferred cleanly to a different tool. The value of this approach comes from the habits, the context management, and the engineering judgment you bring to the conversation, not from any particular vendor.

These tools are moving the world in a direction that favors developers who understand the ways engineering can go wrong and know solid design and architecture patterns…and who are okay letting go of control of every line of code.

What’s next

Agentic engineering needs structure, and structure needs a concrete example to make it real. The next article in this series goes into Octobatch itself, because the way it orchestrates AI is a remarkably close parallel to what AIDD asks developers to do. Octobatch assigns roles to different processing steps, manages handoffs between them, validates their outputs, and recovers when they fail. That’s the same pattern I followed when building it: assigning roles to Claude and Gemini, managing handoffs between them, validating their outputs, and recovering when they went down the wrong path. Understanding how the system works turns out to be a good way to understand how to orchestrate AI-driven development. I’ll walk through the architecture, show what a real pipeline looks like from prompt to results, present the data from a 300-hand blackjack Monte Carlo simulation that puts all of these ideas to the test, and use all of that to demonstrate ideas we can apply directly to agentic engineering and AI-driven development.

Later articles go deeper into the practices and ideas from this experiment that make AI-driven development work: how I coordinated multiple AI models without losing control of the architecture, what happened when I tested the code against what I actually meant to build, and what I learned about the gap between code that runs and code that does what you meant. Along the way, the experiment produced some findings about how different AI models see code that I didn’t expect—and that turned out to matter more than I thought they would.


