Multi-Agent

In this lesson, we will learn about an advanced way of using Agents: Multi-Agent systems. The Agent applications we've studied previously follow the single-agent model, where one Agent application consists of a fixed prompt template, an LLM, and a list of tools. In contrast, a multi-agent system is composed of multiple Agents with different functions that collaborate to solve more complex problems.

Compared to a single large Agent, a multi-agent system constructed from multiple sub-Agents divided by responsibilities/functions has the following advantages:

  1. Each Agent has fewer tools to choose from, so it is less likely to pick the wrong one.
  2. Each Agent has its own prompt and LLM, so we can optimize the prompt, choose a different LLM, or even fine-tune an LLM separately for each sub-Agent, allowing it to better complete its own task.
  3. The modules do not affect one another, so we can evaluate and improve each sub-Agent individually without disrupting the overall application.

This can be likened to how work is organized in human society: rather than making one person responsible for every stage of production, we organize work as a production line, with each person responsible for a single stage. This improves efficiency, makes individual workers easier to replace, and lets us continuously optimize each stage to improve the whole.

Multiple studies (Study 1, Study 2) have shown that Multi-Agent systems can effectively reduce the LLM hallucination problem and improve the accuracy and efficiency of task processing.

A Multi-Agent system requires coordination among its Agents, which can lead to situations where Agents call one another repeatedly. For example, one Agent processes some data and, based on a certain strategy, passes the result as input to another Agent; that second Agent's result may in turn be passed back to the first Agent for further processing.

This complex relationship among sub-modules is well-suited to be described using a graph data structure, where each Agent corresponds to a node, and the edges between nodes represent interactions between Agents.

LangChain introduced LangGraph in version 0.1; it can be viewed as an extension of LCEL. A pure LCEL calling chain is effectively a Directed Acyclic Graph (DAG): the execution order of the modules is designed in advance, and modules cannot be called in a loop. LangGraph removes this limitation by allowing previously executed modules to be called again during execution, enabling us to construct more complex LLM systems.

Today, we will first learn the basic usage of LangGraph, followed by an introduction to two different Multi-Agent architectures and a demonstration of how to implement them using LangGraph.

LangGraph Quick Start

First, we need to install the LangGraph library.

bash
pip install langgraph

LangGraph has three important concepts: nodes, edges, and the data passed between nodes (the graph state, which this lesson refers to as the state machine).

To construct an application using LangGraph, we first need to define the state machine, which is the data structure for transmitting messages between nodes. Then, we define the various nodes of the application and the edges between them, and finally, specify a starting node as the entry point for execution.

Defining the State Machine

The state machine refers to the data structure for transmitting data between nodes, which will serve as the input and output for the nodes. A state machine is generally defined as a type that inherits from TypedDict, which allows us to define classes with dictionary-like properties. For example:

python
from typing import TypedDict

# Define a TypedDict, specifying keys and their types
class Point2D(TypedDict):
    x: int
    y: int

# Using the TypedDict
point: Point2D = {'x': 1, 'y': 2}

We can define multiple fields in the state machine, which will be passed between nodes. Before the execution results are propagated downstream, each field in the state machine will be updated. For example, we define the following state machine:

python
class AgentState(TypedDict):
    current_result: str

The transmission process is as follows:

  1. The node receives content: AgentState(current_result="")
  2. After the node completes execution, it outputs content: AgentState(current_result="hello").

When the data is actually passed down, each field in AgentState will be updated. Since AgentState has only one field, current_result, it will execute update(old_current_result, new_current_result). By default, the value of the field will be updated to the latest value, meaning current_result will be updated to "hello," and the content received by the next node will be AgentState(current_result="hello").

We can customize the update logic. A common approach is to merge the results from each node before passing them downstream. To achieve this effect, we can use Annotated to define the update behavior for the fields. As shown below:

python
import operator
from typing import Annotated, Sequence, TypedDict
from langchain_core.messages import BaseMessage

class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], operator.add]

Additionally, LangGraph provides add_messages for merging messages. Compared to operator.add, add_messages deduplicates messages by ID (a message with an existing ID replaces the old one instead of being appended again), which can be useful in certain recursive call scenarios; a small comparison sketch follows the code below.

python
from langgraph.graph.message import add_messages
from typing import Annotated, Sequence, TypedDict
from langchain_core.messages import BaseMessage

class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], add_messages]
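
To see the difference between the two reducers, here is a minimal sketch (not part of the original example; it assumes a recent langchain_core where messages carry an optional id field):

python
import operator

from langchain_core.messages import HumanMessage
from langgraph.graph.message import add_messages

# Two lists of messages, where both contain a message with id "1"
old = [HumanMessage(content="hi", id="1")]
new = [HumanMessage(content="hi (edited)", id="1"), HumanMessage(content="bye", id="2")]

# operator.add simply concatenates the lists, so the id "1" message appears twice
print(len(operator.add(old, new)))   # 3

# add_messages merges by message id: the existing id "1" entry is replaced, not duplicated
print(len(add_messages(old, new)))   # 2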

After defining the state machine, we can initialize the workflow instance and begin constructing the graph:

python
from langgraph.graph import StateGraph

workflow = StateGraph(AgentState)

Defining Nodes and Edges

In LangGraph, nodes can be an AI Agent, an LCEL calling chain, or any Python function that can accept and return data in the state machine format. For example, we define the following state machine:

python
class AgentState(TypedDict):
    current_result: str

We can use the workflow's add_node method to register the following Python functions as nodes:

python
def AddHello(state):
    return {"current_result": "hello, " + state["current_result"]}

def AddGoodBye(state):
    return {"current_result": state["current_result"] + ", now good bye!"}

workflow.add_node("hello_node", AddHello)
workflow.add_node("goodbye_node", AddGoodBye)

This adds two nodes named hello_node and goodbye_node to the graph system.

After defining the nodes, we need to design edges to connect them. Edges can be unconditional or conditional:

Unconditional Edges

The execution path of nodes is fixed, meaning that after one node executes, the next node to execute is determined. Unconditional edges can be created using add_edge:

python
workflow.add_edge("hello_node", "goodbye_node")

Here, we create an edge from hello_node to goodbye_node.

Conditional Edges

Unlike unconditional edges, a node can create conditional edges to multiple nodes, allowing us to determine the next node to execute based on custom logic. Conditional edges are created using add_conditional_edges:

python
from langgraph.graph import END

def router(state):
    if "no run goodbye_node" in state["current_result"]:
        return END
    else:
        return "goodbye_node"
        
workflow.add_conditional_edges("hello_node", router)

Now, after hello_node executes, the router decides whether to proceed to goodbye_node. Note that END is a predefined endpoint in LangGraph, and each execution path in the graph must end at the END node.

We also need to specify a starting node as the entry point for execution:

python
workflow.set_entry_point("hello_node")

Here, we set hello_node as the entry point for the system.

Unlike other LangChain components, a LangGraph workflow must be compiled before it can be run:

python
graph = workflow.compile()

Here is the complete example code:

python
from typing import TypedDict

class AgentState(TypedDict):
    current_result: str

def AddHello(state):
    return {"current_result": "hello, " + state["current_result"]}

def AddGoodBye(state):
    return {"current_result": state["current_result"] + ", now good bye!"}

from langgraph.graph import StateGraph, END
workflow = StateGraph(AgentState)

workflow.add_node("hello_node", AddHello)
workflow.add_node("goodbye_node", AddGoodBye)

def router(state):
    if "no run goodbye_node" in state["current_result"]:
        return END
    else:
        return "goodbye_node"

workflow.add_conditional_edges("hello_node", router)
workflow.add_edge("goodbye_node", END)
workflow.set_entry_point("hello_node")
graph = workflow.compile()

After compilation, we can invoke the system:

python
print(graph.invoke({"current_result": "test"}))
# output
# {'current_result': 'hello, test, now good bye!'}
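
Besides invoke, a compiled graph can also be streamed, yielding each node's state update as it executes; we will rely on this later to observe the Multi-Agent flow. A minimal sketch (the exact output shape may differ slightly across LangGraph versions):

python
# Stream intermediate results instead of only the final state;
# each yielded item maps a node name to the state update it produced.
for step in graph.stream({"current_result": "test"}):
    print(step)
# Expected output (roughly):
# {'hello_node': {'current_result': 'hello, test'}}
# {'goodbye_node': {'current_result': 'hello, test, now good bye!'}}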

Multi-Agent Architecture Design

Now that we have mastered the basic usage of LangGraph, let's start learning how to use LangGraph to construct different Multi-Agent systems.

Simple Dual-Agent Design

First, let’s look at the simplest dual-Agent system, which consists of two working nodes that call each other back and forth, continuously optimizing the output until the desired effect is achieved.

In the context of program development, we design two Agents: a Coder Agent responsible for writing code, and a Reviewer Agent that reviews the generated code and provides modification suggestions. The Coder Agent then optimizes the code based on these suggestions, and the process repeats until the Reviewer Agent deems the code satisfactory. This approach can effectively improve the accuracy of LLMs on programming problems. The overall flow of the system is: code_node generates code, review_node reviews it, and execution either loops back to code_node for another round of optimization or ends.

Defining State Machine Structure

The state machine needs only a messages field to retain the outputs of previous nodes, forming a short-term memory that gives each node more context during execution. We use LangGraph's add_messages to update the messages field.

python
from typing import TypedDict, Annotated, Sequence
from langchain_core.messages import BaseMessage
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages

# Define state machine structure
class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], add_messages]

# Create StateGraph instance
workflow = StateGraph(AgentState)

Defining Nodes

Define code_node

python
from langchain_core.messages import HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI

def code_node(state: AgentState) -> AgentState:
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", 
             "You are a helpful AI assistant for coding. Given the following user request, make a code. Or optimize the code according to the review suggestions. just respond with the final code."),
            MessagesPlaceholder(variable_name="messages")
        ]
    )
    
    llm = ChatOpenAI(model="gpt-3.5-turbo-1106")
    code_chain = prompt | llm
    code_ret = code_chain.invoke({"messages": state["messages"]})
    return {"messages": [HumanMessage(code_ret.content)]}

Add code_node to the workflow:

python
workflow.add_node("code_node", code_node)

Define review_node

python
def review_node(state: AgentState) -> AgentState:
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", 
             "You are a helpful AI assistant for coding review. Given the following code. Make some review suggestions but DON'T optimize the code. if the code is good enough, just respond the code and start with `FINAL CODE:`."),
            MessagesPlaceholder(variable_name="messages")
        ]
    )
    llm = ChatOpenAI(model="gpt-3.5-turbo-1106")
    code_chain = prompt | llm
    code_ret = code_chain.invoke({"messages": state["messages"]})
    return {"messages": [HumanMessage(code_ret.content)]}

Add review_node to the workflow:

python
workflow.add_node("review_node", review_node)

We have defined two nodes, code_node and review_node, both of which accept and return the AgentState format.

Since the functions of each node are relatively simple, we have constructed the nodes as two LCEL chains. For more complex functionalities, we could construct Agent nodes.

Based on the prompt templates for each node, we can see that code_node is mainly for code development and optimization, while review_node is for reviewing the input code.

Defining Node Edges

For code_node, after generating code, we need to send it to review_node for review. Thus, we create an unconditional edge from code_node to review_node:

python
workflow.add_edge("code_node", "review_node")

After review_node executes, we need to determine whether there are further optimization suggestions. If there are, we loop back to code_node; if not, we proceed to the END node to conclude the call.

Looking carefully at the prompt template of review_node: when the code does not need further optimization, the LLM's response will start with the string FINAL CODE. We can therefore route based on whether this string appears in review_node's output.

python
def review_to_code_router(state: AgentState):
    if "FINAL CODE" in state["messages"][-1].content:
        return END
    else:
        return "code_node"

workflow.add_conditional_edges("review_node", review_to_code_router)

Finally, we set code_node as the entry point for execution and compile the workflow:

python
# Define entry node
workflow.set_entry_point("code_node")
# Compile
graph = workflow.compile()
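
Since the flowchart image is not reproduced here, it can be helpful to print the compiled graph's structure and check that it matches the intended flow. A hedged sketch: get_graph() and its drawing helpers come from langchain_core, the ASCII rendering may require the optional grandalf package, and draw_mermaid() is only available in newer langchain_core versions.

python
# Print an ASCII rendering of the compiled graph to verify the topology:
# code_node -> review_node -> (code_node or END)
graph.get_graph().print_ascii()

# Alternatively, emit Mermaid syntax that can be pasted into any Mermaid renderer
print(graph.get_graph().draw_mermaid())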

Complete Code

python
from typing import TypedDict, Annotated, Sequence
from langchain_core.messages import BaseMessage, HumanMessage
from langgraph.graph.message import add_messages
from langgraph.graph import StateGraph, END
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI

# Define state machine structure
class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], add_messages]

workflow = StateGraph(AgentState)

# Define code_node
def code_node(state: AgentState) -> AgentState:
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", 
             "You are a helpful AI assistant for coding. Given the following user request, make a code. Or optimize the code according to the review suggestions. just respond with the final code."),
            MessagesPlaceholder(variable_name="messages")
        ]
    )
    
    llm = ChatOpenAI(model="gpt-3.5-turbo-1106")
    code_chain = prompt | llm
    code_ret = code_chain.invoke({"messages": state["messages"]})
    return {"messages": [HumanMessage(code_ret.content)]}

# Add code_node to the workflow
workflow.add_node("code_node", code_node)

# Define review_node
def review_node(state: AgentState) -> AgentState:
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", 
             "You are a helpful AI assistant for coding review. Given the following code. Make some review suggestions but DON'T optimize the code. if the code is good enough, just respond the code and start with `FINAL CODE:`."),
            MessagesPlaceholder(variable_name="messages")
        ]
    )
    llm = ChatOpenAI(model="gpt-3.5-turbo-1106")
    code_chain = prompt | llm
    code_ret = code_chain.invoke({"messages": state["messages"]})
    return {"messages": [HumanMessage(code_ret.content)]}

# Add review_node to the workflow
workflow.add_node("review_node", review_node)

# Add unconditional edge from code_node to review_node
workflow.add_edge("code_node", "review_node")

# review_node checks for FINAL CODE keyword to decide flow
def review_to_code_router(state: AgentState):
    if "FINAL CODE" in state["messages"][-1].content:
        return END
    else:
        return "code_node"

workflow.add_conditional_edges("review_node", review_to_code_router)

# Set entry node
workflow.set_entry_point("code_node")
# Compile the workflow
graph = workflow.compile()

Example Test with a LeetCode Problem

Let's test it with a Hard-level LeetCode algorithm problem:

python
result = graph.invoke({"messages": [HumanMessage("Given two sorted arrays nums1 and nums2 of size m and n respectively, find and return the median of the two sorted arrays.")]})
for message in result["messages"]:
    print(message.content)
    print("--------")

Output

code_node: Sure, here's a Python code to find the median of two sorted arrays:

```python
def findMedianSortedArrays(nums1, nums2):
    nums = sorted(nums1 + nums2)
    n = len(nums)
    if n % 2 == 0:
        return (nums[n // 2 - 1] + nums[n // 2]) / 2
    else:
        return nums[n // 2]
```

This code first merges the two arrays `nums1` and `nums2` into a single sorted array `nums`, and then calculates the median based on the length of `nums`.

review_node: - The given code works correctly in finding the median of two sorted arrays.
- However, it's not efficient, as it sorts the merged array unnecessarily.
- Instead of sorting the entire merged array, you can find the median directly without merging the arrays. This can be done in O(log(min(m,n))) time complexity using the binary search approach.
  ...

code_node: Here's the optimized Python code to find the median of two sorted arrays using the binary search approach and handling edge cases:

```python
def findMedianSortedArrays(nums1, nums2):
    if len(nums1) > len(nums2):
        nums1, nums2 = nums2, nums1

    m, n = len(nums1), len(nums2)
    imin, imax = 0, m
    half_len = (m + n + 1) // 2

    while imin <= imax:
        i = (imin + imax) // 2
        j = half_len - i

        if i < m and nums2[j-1] > nums1[i]:
            imin = i + 1
        elif i > 0 and nums1[i-1] > nums2[j]:
            imax = i - 1
        else:
            if i == 0: max_of_left = nums2[j-1]
            elif j == 0: max_of_left = nums1[i-1]
            else: max_of_left = max(nums1[i-1], nums2[j-1])

            if (m + n) % 2 == 1:
                return max_of_left

            if i == m: min_of_right = nums2[j]
            elif j == n: min_of_right = nums1[i]
            else: min_of_right = min(nums1[i], nums2[j])

            return (max_of_left + min_of_right) / 2.0
```

This code efficiently finds the median of two sorted arrays using the binary search approach and handles edge cases for empty arrays or arrays with only one element.

review_node: ```python
FINAL CODE:
...

As we can see, the review_node provided some optimization suggestions for the code produced by code_node in its first execution, and code_node effectively implemented the suggested modifications.
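
One practical concern with this back-and-forth design: if the reviewer never responds with FINAL CODE, the two nodes could keep cycling. LangGraph guards against this with a per-run recursion limit (25 steps by default in recent versions) and raises an error once it is exceeded; the limit can also be tightened per invocation. A minimal sketch, reusing the problem above:

python
# Cap the number of node executions for a single run; if no execution path
# reaches END within the limit, LangGraph raises a recursion error.
result = graph.invoke(
    {"messages": [HumanMessage("Given two sorted arrays nums1 and nums2 of size m and n respectively, find and return the median of the two sorted arrays.")]},
    config={"recursion_limit": 10},
)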

Agent Supervisor Architecture

The previous architecture is simple and direct: after each node execution, it jumps to another node or exits directly. However, when the number of nodes in the system exceeds two, this model does not handle node orchestration effectively.

At this point, we can adopt a centralized routing architecture, also known as the Agent Supervisor architecture.

In this mode, in addition to the business nodes, we define a supervisor node that is specifically used for routing: after a business node executes, it will return to the supervisor, which will then determine the next node to execute.

Now, let's see how to refactor the previous example using the Agent Supervisor architecture.

Define State Machine Structure

In addition to the messages field, we also define a next field to store the next node to be executed.

python
# Define state machine structure
class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], add_messages]
    next: str

# Create StateGraph instance
workflow = StateGraph(AgentState)

This structure will allow for more flexible management of node execution order and enhance the overall system orchestration.

Define Nodes

Define code_node and review_node Nodes

python
# Define code_node
def code_node(state: AgentState) -> AgentState:
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", 
             "You are a helpful AI assistant for coding. CODE for the following user request. Or optimize the code according to the review suggestions. Just respond with the code."),
            MessagesPlaceholder(variable_name="messages")
        ]
    )
    
    llm = ChatOpenAI(model="gpt-3.5-turbo-1106")
    code_chain = prompt | llm
    code_ret = code_chain.invoke({"messages": state["messages"]})
    return {"messages": [HumanMessage(code_ret.content)]}

# Add code_node
workflow.add_node("code_node", code_node)

# Define review_node
def review_node(state: AgentState) -> AgentState:
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", 
             "You are a helpful AI assistant for giving suggestions for the given code. The suggestions should not include any code."),
            MessagesPlaceholder(variable_name="messages")
        ]
    )
    llm = ChatOpenAI(model="gpt-3.5-turbo-1106")
    code_chain = prompt | llm
    code_ret = code_chain.invoke({"messages": state["messages"]})
    return {"messages": [HumanMessage(code_ret.content)]}

# Add review_node
workflow.add_node("review_node", review_node)

These two nodes are consistent with the previous definitions, but the prompt templates have been slightly modified. In the review_node, when there are no optimization suggestions, we no longer require returning the "FINAL CODE" string because we are now directly using the supervisor node for routing.

Additionally, we emphasize that the review_node should only provide optimization suggestions and not optimize the code itself. This is because, in practice, the review_node sometimes provides optimized code directly, leading to unclear responsibilities.

Define the Supervisor Node

python
import json

from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser

def supervisor_node(state: AgentState) -> AgentState:
    function_def = {
        "name": "route",
        "description": "Select the next role.",
        "parameters": {
            "title": "routeSchema",
            "type": "object",
            "properties": {
                "next": {
                    "title": "Next",
                    "anyOf": [
                        {"enum": ["code_node", "review_node", END]},
                    ],
                }
            },
            "required": ["next"],
        },
    }
    worker_nodes = ["code_node", "review_node"]
    worker_descs = {
        "code_node": "a helpful AI assistant for coding",
        "review_node": "a helpful AI assistant for giving suggestions for the given code. The suggestions will not include any code.",
    }
    prompt = ChatPromptTemplate.from_messages(
        [
            (
                "system", 
                "You are a supervisor tasked with managing a conversation between the"
                " following workers:  {worker_descs}. "
                "Given the following user request,"
                " respond with the worker to act next. Each worker will perform a"
                " task and respond with their results and status. When finished,"
                " respond with '{end_node}'."
            ),
            MessagesPlaceholder(variable_name="messages"),
            (
                "system",
                "Given the conversation above, who should act next?"
                " Or should we FINISH? Select one of: {worker_nodes}, {end_node}",
            ),
        ]
    ).partial(end_node=END, worker_nodes=", ".join(worker_nodes), worker_descs=json.dumps(worker_descs))
    
    llm = ChatOpenAI(model="gpt-3.5-turbo-1106")
    supervisor_chain = (
        prompt
        | llm.bind_functions(functions=[function_def], function_call="route")
        | JsonOutputFunctionsParser()
    )
    # Invoke the chain; the parsed function-call result (e.g. {"next": "code_node"})
    # becomes the update for the state's `next` field
    return supervisor_chain.invoke({"messages": state["messages"]})

workflow.add_node("supervisor_node", supervisor_node)

The supervisor_node chooses the next node to execute based on the message history and the descriptions of each node in the prompt template (worker_descs). It can choose among code_node, review_node, and END. A small trick here is to use OpenAI function calling to simplify parsing of the model output.
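
For illustration, here is roughly what supervisor_node returns on a hypothetical first call (the request text is made up, and the actual choice depends on the conversation so far):

python
# The LLM is forced to call the `route` function, so its reply carries arguments
# such as {"next": "code_node"}; JsonOutputFunctionsParser extracts them as a dict,
# which becomes the update for the state's `next` field.
update = supervisor_node({"messages": [HumanMessage("Write a function that returns the n-th Fibonacci number.")], "next": ""})
print(update)  # e.g. {'next': 'code_node'}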

Define Node Edges

For the business nodes code_node and review_node, each time they finish executing they need to return control to the supervisor. Therefore, we create unconditional edges from code_node and review_node to supervisor_node.

python
workflow.add_edge("code_node", "supervisor_node")
workflow.add_edge("review_node", "supervisor_node")

The supervisor will then conditionally point to code_node, review_node, and END nodes.

Define the Supervisor Router

python
def supervisor_router(state: AgentState) -> str:
    if state["next"] == "code_node":
        return "code_node"
    elif state["next"] == "review_node":
        return "review_node"
    else:
        return END

workflow.add_conditional_edges("supervisor_node", supervisor_router)

The supervisor node parses the LLM output into something like {"next": "code_node"}, which updates the next field of the state. supervisor_router then reads state["next"] and returns the name of the next node to execute, and we register it as a conditional edge from supervisor_node.

Finally, we set supervisor_node as the entry point and compile the workflow.

python
workflow.set_entry_point("supervisor_node")
graph = workflow.compile()

This completes the setup of the Agent Supervisor architecture.

Complete Code

python
import json
from typing import TypedDict, Annotated, Sequence
from langgraph.graph.message import add_messages
from langchain_core.messages import BaseMessage, HumanMessage
from langgraph.graph import StateGraph, END
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI
from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser

# Define the state machine structure
class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], add_messages]
    next: str

workflow = StateGraph(AgentState)

# Define the code_node
def code_node(state: AgentState) -> AgentState:
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", 
             "You are a helpful AI assistant for coding. CODE for the following user request. Or optimize the code according to the review suggestions. just respond with the code."),
            MessagesPlaceholder(variable_name="messages")
        ]
    )
    
    llm = ChatOpenAI(model="gpt-3.5-turbo-1106")
    code_chain = prompt | llm
    code_ret = code_chain.invoke({"messages": state["messages"]})
    return {"messages": [HumanMessage(code_ret.content)]}

# Add code_node to the workflow
workflow.add_node("code_node", code_node)

# Define the review_node
def review_node(state: AgentState) -> AgentState:
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", 
             "You are a helpful AI assistant for giving suggestions for the given code. The suggestions should not include any code."),
            MessagesPlaceholder(variable_name="messages")
        ]
    )
    llm = ChatOpenAI(model="gpt-3.5-turbo-1106")
    code_chain = prompt | llm
    code_ret = code_chain.invoke({"messages": state["messages"]})
    return {"messages": [HumanMessage(code_ret.content)]}

# Add review_node to the workflow
workflow.add_node("review_node", review_node)

# Define the supervisor node
def supervisor_node(state: AgentState) -> AgentState:
    worker_nodes = ["code_node", "review_node"]
    function_def = {
        "name": "route",
        "description": "Select the next role.",
        "parameters": {
            "title": "routeSchema",
            "type": "object",
            "properties": {
                "next": {
                    "title": "Next",
                    "anyOf": [
                        {"enum": ["code_node", "review_node", END]},
                    ],
                }
            },
            "required": ["next"],
        },
    }
    worker_descs = {
        "code_node": "a helpful AI assistant for coding",
        "review_node": "a helpful AI assistant for giving suggestions for the given code. The suggestions will not include any code.",
    }
    prompt = ChatPromptTemplate.from_messages(
        [
            (
                "system", 
                "You are a supervisor tasked with managing a conversation between the"
                " following workers:  {worker_descs}. "
                "Given the following user request,"
                " respond with the worker to act next. Each worker will perform a"
                " task and respond with their results and status. When finished,"
                " respond with '{end_node}'."
            ),
            MessagesPlaceholder(variable_name="messages"),
            (
                "system",
                "Given the conversation above, who should act next?"
                " Or should we FINISH? Select one of: {worker_nodes},{end_node}",
            ),
        ]
    ).partial(end_node=END, worker_nodes=", ".join(worker_nodes), worker_descs=json.dumps(worker_descs))
    
    llm = ChatOpenAI(model="gpt-3.5-turbo-1106")
    supervisor_chain = (
        prompt
        | llm.bind_functions(functions=[function_def], function_call="route")
        | JsonOutputFunctionsParser()
    )
    # Invoke the chain; the parsed function-call result updates the state's `next` field
    return supervisor_chain.invoke({"messages": state["messages"]})

workflow.add_node("supervisor_node", supervisor_node)

# Organize the edges of the nodes
# Business nodes return to the supervisor node after execution
workflow.add_edge("code_node", "supervisor_node")
workflow.add_edge("review_node", "supervisor_node")

# The supervisor will select the next node to execute
def supervisor_router(state: AgentState) -> str:
    if state["next"] == "code_node":
        return "code_node"
    elif state["next"] == "review_node":
        return "review_node"
    else:
        return END

workflow.add_conditional_edges("supervisor_node", supervisor_router)

# Set the entry point to start from the supervisor node
workflow.set_entry_point("supervisor_node")
graph = workflow.compile()

This code sets up a Multi-Agent architecture using a supervisor node to manage the flow between the coding and reviewing tasks. Each node interacts with a language model to perform its designated role, and the supervisor routes the process based on the defined logic.

Now let's run the same LeetCode problem through the Agent Supervisor graph, streaming the intermediate steps:

python
for s in graph.stream(
    {
        "messages": [
            HumanMessage(content="Given two sorted arrays nums1 and nums2 of sizes m and n, respectively. Please find and return the median of the two sorted arrays.")
        ]
    }
):
    if "__end__" not in s:
        print(s)

Output:

{'supervisor_node': {'next': 'code_node'}}
{'code_node': {'messages': [HumanMessage(content='Sure, you can use th...')]}}
{'supervisor_node': {'next': 'review_node'}}
{'review_node': {'messages': [HumanMessage(content="It seems like the ap...")]}}
{'supervisor_node': {'next': 'code_node'}}
{'code_node': {'messages': [HumanMessage(content='```python\ndef findMedianSortedArrays(nums1, nums2)...')]}}
{'supervisor_node': {'next': '__end__'}}

As we can see, the entire process is coordinated by the supervisor node, while the code_node and review_node are responsible for executing specific tasks and returning the results to the supervisor node.

Summary

Today, we mainly introduced the concept and application of Multi-Agent systems, as well as how to use the LangGraph library to construct and design complex multi-agent systems.

Multi-Agent systems consist of multiple Agents with different functions. Compared to single-agent systems, multiple sub-Agents can collaborate to solve more complex problems. LangGraph was introduced in LangChain version 0.1 to describe the complex calling relationships of sub-modules, allowing for the cyclic invocation of previously executed modules during the execution process, thereby constructing more complex LLM systems. There are various design patterns for Multi-Agent systems; today, we introduced a simple dual-Agent system and an Agent Supervisor architecture that uses centralized routing.
