
AI agents are reshaping software development, from writing code to carrying out complex instructions. But LLM-based agents are prone to errors and often perform poorly on complicated, multi-step tasks. Reinforcement learning (RL) is an approach where AI systems learn to make optimal decisions by receiving rewards or penalties for their actions, improving through trial and error. RL can help agents improve, but it typically requires developers to extensively rewrite their code. This discourages adoption, even though the data these agents generate could significantly improve performance through RL training.

To address this, a research team from Microsoft Research Asia – Shanghai has introduced Agent Lightning. This open-source framework makes AI agents trainable through RL by separating how agents execute tasks from model training, allowing developers to add RL capabilities with almost no code modification.

Capturing agent behavior for training

Agent Lightning converts an agent’s experience into a format that RL can use by treating the agent’s execution as a sequence of states and actions, where each state captures the agent’s status and each LLM call is an action that moves the agent to a new state.

This approach works for any workflow, no matter how complex. Whether it involves multiple collaborating agents or dynamic tool use, Agent Lightning breaks it down into a sequence of transitions. Each transition captures the LLM’s input, output, and reward (Figure 1). This standardized format means the data can be used for training without any additional steps.

Figure 1: Diagram illustrating Agent Lightning’s unified data interface for a retrieval-augmented generation (RAG) agent. On the left, four states (state₀ to state₃) show the agent’s execution flow, where semantic variables—UserInput, Query, Passages, and Answer—are updated after each component call (LLM or Search). Green blocks represent populated variables; gray blocks indicate empty ones. On the right, the unified data interface converts these transitions into a trajectory format containing prompt, generation, and immediate reward for RL training.
Figure 1. An illustration of Agent Lightning’s standardized format using a retrieval-augmented generation (RAG) agent. Left: The full agent workflow, where the agent’s state updates after each component step. The green blocks show assigned variables, and the gray blocks indicate variables without content. Right: The collected transitions follow the standardized format for the RL training process, with each transition corresponding to one LLM step and containing its prompt, result, and immediate reward.
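To make the format concrete, here is a minimal sketch in Python of how such transitions might be represented. The field names mirror the description above (prompt, response, immediate reward); the schema Agent Lightning actually uses may differ, so treat this as an illustration rather than the framework’s API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Transition:
    """One LLM call in an agent run: the input it saw, what it generated,
    and the reward credited to that call."""
    prompt: str     # full input to the LLM at this step (instructions, context, retrieved passages)
    response: str   # text the LLM generated
    reward: float   # immediate reward assigned to this call

@dataclass
class Trajectory:
    """A complete agent run, flattened into an ordered list of transitions."""
    task_id: str
    transitions: List[Transition]

# Example: a two-step RAG run (query generation, then answer generation).
rag_run = Trajectory(
    task_id="musique-example",
    transitions=[
        Transition(prompt="Rewrite the user question as a search query: ...",
                   response="founders of Microsoft", reward=0.0),
        Transition(prompt="Answer using these retrieved passages: ...",
                   response="Bill Gates and Paul Allen founded Microsoft.", reward=1.0),
    ],
)
```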

Hierarchical reinforcement learning

Conventional RL training for agents that make multiple LLM requests involves stitching all the content together into one long sequence and then determining which parts should be learned and which ignored during training. This approach is difficult to implement and can create excessively long sequences that degrade model performance.

Instead, Agent Lightning’s LightningRL algorithm takes a hierarchical approach. After a task completes, a credit assignment module determines how much each LLM request contributed to the outcome and assigns it a corresponding reward. These independent steps, now paired with their own reward scores, can be used with any existing single-step RL algorithm, such as Proximal Policy Optimization (PPO) or Group Relative Policy Optimization (GRPO) (Figure 2).

Figure 2: Comparison of three reinforcement learning approaches for LLM tasks. (a) Single-step GRPO: The model completes the task in one call, and multiple outputs for the same task are compared with associated rewards. (b) Previous multi-step GRPO: The task spans multiple LLM calls, forming trajectories; non-LLM tokens (gray boxes) are ignored during training, and entire multi-step runs are compared. (c) LightningRL: Breaks multi-step runs into individual LLM calls, each including input, context, output, and reward assigned by a credit assignment module. Calls from the same task are grouped for reinforcement.
Figure 2. (a) Single-step GRPO: The LLM completes the task in a single call. Multiple responses to the same task are compared to determine how strongly each should be reinforced. (b) Previous multi-step GRPO: The task involves multiple LLM calls. Multiple multi-step runs of the same task are compared, with non-LLM-generated tokens (gray boxes) ignored during training. (c) LightningRL: The multi-step run is divided into individual LLM calls. Calls from the same task are compared to determine how strongly each should be reinforced. Each call includes its input, context, output, and reward, assigned by the credit assignment module.

This design offers several benefits. It remains fully compatible with widely used single-step RL algorithms, allowing existing training methods to be applied without modification. Organizing data as a sequence of independent transitions lets developers flexibly assemble the LLM input as needed, supporting complex behaviors like agents that use multiple tools or work with other agents. Moreover, by keeping sequences short, the approach scales cleanly and keeps training efficient.
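The hierarchical idea can be sketched in a few lines of Python. In this illustration, the credit assignment rule is deliberately the simplest one possible, copying the final task reward to every LLM call in the run, and the per-call samples are then given GRPO-style advantages relative to other runs of the same task. Both the rule and the function names are assumptions for illustration, not LightningRL’s actual implementation.

```python
from statistics import mean, pstdev

def assign_credit(llm_calls, final_reward):
    """Simplest possible credit assignment: every LLM call in a run receives
    the task's final reward. LightningRL allows more sophisticated rules."""
    return [
        {"prompt": prompt, "response": response, "reward": final_reward}
        for prompt, response in llm_calls
    ]

def grpo_advantages(call_samples):
    """GRPO-style advantage for per-call samples gathered from several runs of
    the same task: reward minus the group mean, scaled by the group spread."""
    rewards = [s["reward"] for s in call_samples]
    baseline = mean(rewards)
    spread = pstdev(rewards) or 1.0           # guard against a zero spread
    return [(s, (s["reward"] - baseline) / spread) for s in call_samples]

# Two runs of the same task: one succeeded (reward 1.0), one failed (0.0).
run_a = assign_credit([("generate SQL for ...", "SELECT ...")], final_reward=1.0)
run_b = assign_credit([("generate SQL for ...", "SELEC ...")], final_reward=0.0)
for sample, advantage in grpo_advantages(run_a + run_b):
    pass  # each (sample, advantage) pair feeds a standard single-step update
```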

Agent Lightning as middleware

Agent Lightning serves as middleware between RL algorithms and agent environments, providing modular components that enable scalable RL through standardized protocols and well-defined interfaces.

An agent runner manages the agents as they complete tasks. It distributes work and collects and stores the results and progress data. It operates separately from the LLMs, enabling them to run on different resources and scale to support multiple agents working concurrently.

An algorithm trains the models and hosts the LLMs used for inference and training. It orchestrates the overall RL cycle, managing which tasks are assigned, how agents complete them, and how models are updated based on what the agents learn. It typically runs on GPU resources and communicates with the agent runner through shared protocols.

The LightningStore serves as the central repository for all data exchanges within the system. It provides standardized interfaces and a shared format, ensuring that the different components can work together and enabling the algorithm and agent runner to communicate effectively.

Figure 3: Diagram showing the architecture of Agent Lightning (AGL). On the left, the AGL Algorithm block includes an inference engine (e.g., vLLM), an algorithm iteration loop, and an adapter for trainable data and weights update. In the center, the AGL Core contains LightningStore, which manages tasks, resources, spans, and LLM calls. On the right, the AGL Agent Runner & Tracer includes a user-defined agent using OpenAI chat completion and agl.emit(). Arrows indicate flows of prompts, responses, tasks, resources, spans, and datasets between components, with roles for algorithm researchers and agent developers highlighted.
Figure 3. The Agent Lightning framework

All RL cycles follow two steps: (1) Agent Lightning collects agent execution data (referred to as “spans”) and stores it in the data store; (2) it then retrieves the required data and sends it to the algorithm for training. Through this design, the algorithm can delegate tasks asynchronously to the agent runner, which completes them and reports the results back (Figure 4).

Figure 4: Diagram of the training loop in Agent Lightning. The central element is ‘Trainer,’ with arrows forming a cycle between three components: Agent on the left, Algorithm on the right, and Trainer in the middle. The top arrow labeled ‘Tasks’ flows from Algorithm to Agent, while the bottom arrow labeled ‘Spans’ flows from Agent to Algorithm. ‘Prompt Templates’ is noted above the cycle, indicating its role in task generation.
Figure 4. Agent Lightning’s RL cycle
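A rough sketch of this cycle is shown below. The objects `store`, `runner`, and `algorithm` stand in for the LightningStore, agent runner, and algorithm components, and every method name is hypothetical, chosen to mirror the flow in Figure 4 rather than the framework’s real interfaces.

```python
def rl_cycle(store, runner, algorithm, tasks, iterations):
    """Illustrative two-step loop: collect spans, then train on them."""
    for _ in range(iterations):
        # Step 1: delegate tasks asynchronously; the agent runner executes the
        # agents and writes their execution data ("spans") into the store.
        store.enqueue_tasks(tasks)
        runner.run_until_done(store)

        # Step 2: retrieve the collected spans and hand them to the algorithm.
        spans = store.fetch_new_spans()
        new_weights = algorithm.train_on(spans)

        # Publish the updated weights so the next round of agent runs
        # uses the improved model for inference.
        store.update_resources(model_weights=new_weights)
```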

One key advantage of this approach is its algorithmic flexibility. The system makes it easy for developers to customize how agents learn, whether they are defining different rewards, capturing intermediate data, or experimenting with different training approaches.

Another advantage is resource efficiency. Agentic RL systems are complex, integrating agentic systems, LLM inference engines, and training frameworks. By separating these components, Agent Lightning makes this complexity manageable and allows each part to be optimized independently.

A decoupled design allows each component to use the hardware that suits it best. The agent runner can use CPUs while model training uses GPUs. Each component can also scale independently, improving efficiency and making the system easier to maintain. In practice, developers can keep their existing agent frameworks and swap model calls to the Agent Lightning API without altering their agent code (Figure 5).

Figure 5: Side-by-side code comparison showing agent implementation before and after integrating Agent Lightning. The left panel (dark background) displays the original agent code written by the developer, including logic for LLM calls, tool usage, and reward assignment. The right panel (light background) shows the modified version using Agent Lightning, where most of the agent logic remains unchanged but includes additional imports and calls to Agent Lightning components such as agl.PromptTemplate, agl.emit(), and agl.Trainer for training and credit assignment. A stylized lightning icon is centered between the two panels.
Figure 5. On the left, the developer implements the agent code. On the bottom right is the code required for Agent Lightning. The main body of the agent code is unchanged.
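The pattern in Figure 5 can be approximated as follows. The agent keeps its ordinary OpenAI-style chat call, and Agent Lightning is layered on through calls such as `agl.emit()` and `agl.Trainer`, which appear in the figure; the exact signatures, the import name, and the trainer arguments are assumptions here, so consult the Agent Lightning repository for the actual API.

```python
import agentlightning as agl            # assumed package/import name
from openai import OpenAI

client = OpenAI()                        # assumed to point at the training-time endpoint

def math_agent(question: str) -> float:
    """Existing agent logic, essentially unchanged: query the model, score the answer."""
    reply = client.chat.completions.create(
        model="trainable-model",         # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    answer = reply.choices[0].message.content
    reward = 1.0 if "42" in answer else 0.0   # task-specific reward check
    agl.emit(reward=reward)              # report the reward (signature assumed)
    return reward

# Training is driven by a trainer object instead of a rewritten agent.
trainer = agl.Trainer()                  # constructor arguments assumed
trainer.fit(math_agent, dataset=["What is 6 * 7?"])   # method name assumed
```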

Evaluation across three real-world scenarios

Agent Lightning was tested on three distinct tasks, achieving consistent performance improvements across all scenarios (Figure 6):

Text-to-SQL (LangChain): In a system with three agents handling SQL generation, checking, and rewriting, Agent Lightning simultaneously optimized two of them, significantly improving the accuracy of generating executable SQL from natural language queries.

Retrieval-augmented generation (OpenAI Agents SDK implementation): On the multi-hop question-answering dataset MuSiQue, which requires querying a large Wikipedia database, Agent Lightning helped the agent generate more effective search queries and reason better over retrieved content.

Mathematical QA and tool use (AutoGen implementation): For complex math problems, Agent Lightning trained LLMs to more accurately determine when and how to call the tool and integrate the results into their reasoning, increasing accuracy.

Figure 6: Figure with six line charts showing reward curves across three evaluation scenarios (Spider, MuSiQue, Calculator) for train and test splits. Top row: Train Rewards on Spider, MuSiQue, and Calculator—each plot shows a blue line with noisy upward trend over steps, indicating increasing rewards; Spider and Calculator rise faster with more variance, MuSiQue climbs more gradually. Bottom row: Test Rewards on Spider, MuSiQue, and Calculator—each plot shows a blue line that increases and then stabilizes at higher rewards; Calculator reaches near-plateau earliest, Spider shows steady gains with minor fluctuations, MuSiQue improves more slowly. All plots use ‘Steps’ on the x‑axis and ‘Rewards’ on the y‑axis, with a legend labeled ‘ours’ and light gridlines.
Figure 6. Reward curves across the three evaluation scenarios

Enabling continuous agent improvement

By simplifying RL integration, Agent Lightning can make it easier for developers to build, iterate, and deploy high-performance agents. We plan to extend Agent Lightning’s capabilities to include automatic prompt optimization and additional RL algorithms.

The framework is designed to serve as an open platform where any AI agent can improve through real-world practice. By bridging existing agentic systems with reinforcement learning, Agent Lightning aims to help create AI systems that learn from experience and improve over time.




