Single vs Multi-Agent Systems

Endless Creative Possibilities with AI Agents

AI agents can operate alone or with other agents.

In a previous post, I explained how AI agents work and walked through a unified architecture that maximizes their capabilities:

  • Profile

  • Memory

  • Planning

  • Action

Graphic By: Sabrina Ramonov @

This unified architecture powers AI agents that can truly remember, learn, and therefore adapt in dynamic environments.

Single-Agent Systems

A single-agent system consists of one AI agent.

Here are examples:

  1. Customer Service

Single AI agents can provide real-time customer support on your website.

Each agent handles questions and complex issues that traditionally would’ve involved human assistance.

  2. ChemCrow

ChemCrow is a single AI agent designed to perform “organic synthesis, drug discovery, and materials design”.

It facilitates both experimental and computational chemistry.

  3. Question Answering

A valuable enterprise use case is a question-answering AI agent.

This enables you to “talk to your data”, for example:

  • What were the main takeaways from Q1 FY2023?

  • What were the top customer challenges during Q3 FY2022?
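As a sketch of how such a question-answering agent routes a question to the right data, here is a minimal toy in Python. A keyword-overlap retriever stands in for the LLM, and the report snippets are invented placeholders; a real agent would retrieve from your actual documents and have an LLM compose the answer.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase a string and split it into alphanumeric word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def answer(question: str, documents: dict[str, str]) -> str:
    """Return the document whose tokens overlap most with the question."""
    q = tokens(question)
    return max(documents.values(), key=lambda text: len(q & tokens(text)))

# Hypothetical internal reports standing in for "your data".
reports = {
    "q1_fy2023": "Q1 FY2023 takeaways: revenue grew on strong enterprise demand.",
    "q3_fy2022": "Q3 FY2022 top customer challenges: onboarding friction and slow support.",
}

print(answer("What were the main takeaways from Q1 FY2023?", reports))
```

Each question lands on the report it mentions; an LLM would then summarize that retrieved text instead of returning it verbatim.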

Multi-Agent Systems

Multi-agent systems (MAS) have multiple AI agents in a shared environment, operating in cooperation or competition.

Applications are diverse, including self-driving, coding, cybersecurity, manufacturing, trading, gaming, simulation, and education.

Multi-Agent Cooperation

Cooperation-based systems use communication protocols like free-form dialogue and structured documents to improve collective intelligence.
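To make "structured documents" concrete, here is a minimal sketch of two cooperating agents exchanging messages with a fixed schema instead of free-form text. The planner/worker roles and the dict schema are illustrative assumptions, not a real framework's API.

```python
def planner(task: str) -> dict:
    """The 'planner' agent emits a structured plan (an LLM would fill this in)."""
    return {"role": "planner", "task": task,
            "steps": ["draft outline", "write sections", "review"]}

def worker(message: dict) -> list[dict]:
    """The 'worker' agent parses the schema reliably and executes each step."""
    return [{"role": "worker", "step": step, "status": "done"}
            for step in message["steps"]]

plan = planner("write a report")
results = worker(plan)
print(len(results), "steps completed")
```

Because the message shape is fixed, each agent can parse the other's output without fragile natural-language parsing, which is exactly why structured protocols improve collective reliability.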

Open-source frameworks like ChatDev, which simulates an entire software company staffed by agents, show how far this can go: you can even build multi-agent simulations of whole digital villages and economies!

This is useful for simulating behavioral studies, marketing campaigns, or better understanding user experiences.

Without LLMs, these simulations would require recruiting real human participants, making them slow and costly.

Generative Agents simulated 25 virtual people, each powered by an LLM, living together in a town.

The Sims — LLM edition!

These agents performed daily activities, interacted socially, and evolved based on their experiences.

The biggest surprise:

They engaged in complex social behaviors, like organizing and attending a Valentine’s Day party!

This demonstrates the effectiveness of integrating profile, memory, planning, and action modules in a unified architecture for AI agents.

Another example of multi-agent collaboration:

To discover new math theorems, you could split an LLM into three cooperating agents.

Given a set of math axioms and definitions:

  • first agent suggests a new provable property

  • second agent tries to prove it

  • third agent tries to verify it
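The three-agent split above can be sketched with plain functions standing in for the LLM calls. The candidate property (commutativity of integer addition) and the case-checking "proof" are toy stand-ins; a real system would generate conjectures and formal proofs with LLMs.

```python
def proposer():
    """Suggest a candidate property over integers (an LLM would generate this)."""
    return (lambda a, b: a + b == b + a), "a + b == b + a"

def prover(prop) -> bool:
    """'Prove' the property by checking a batch of cases (a stand-in for a real proof)."""
    return all(prop(a, b) for a in range(-5, 6) for b in range(-5, 6))

def verifier(prop, claimed: bool) -> bool:
    """Independently re-check the claim on a different sample of cases."""
    return claimed and all(prop(a, b) for a in (7, -13, 100) for b in (0, 42, -9))

prop, name = proposer()
proved = prover(prop)
print(name, "verified:", verifier(prop, proved))
```

The key idea survives the simplification: each role only trusts what it can re-derive, so a bad conjecture or a bogus proof gets caught downstream.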

Multi-Agent Competition

Competition-based systems use debate and rivalry to foster:

  • critical thinking

  • peak performance

  • precise decision-making

This approach is common for logic, math, and law.

Here are examples:

  1. Automated Stock Trading

Multiple AI agents analyze vast amounts of market data, predict stock movements, and execute trades.

They compete in real-time trading environments, each trying to maximize returns for their respective portfolios.

  2. Competitive Multiplayer Gaming

AI agents can compete against humans or other agents.

For example, teams of AI agents have dominated StarCraft II and Dota 2.

These agents make lightning-fast decisions and adapt to their opponents, showcasing the ability to thrive in highly competitive and dynamic conditions.

  3. Cybersecurity Red Teaming

In cybersecurity, AI agents can be used to simulate attacks on network systems — AKA red teaming.

These agents compete against "blue team" agents, who defend the network.

This enables continuous testing and strengthening of cybersecurity defenses.

  4. Math Problem Solver

To solve a complex math problem, imagine splitting an LLM into 2 agents:

  • first agent acts as a solver to find a solution to the math problem

  • second agent acts as a verifier to find errors in the solution

The two then compete: the solver refines its answer until the verifier can no longer find errors.
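The solver-versus-verifier loop can be sketched on a concrete toy problem: find an integer root of x² − 5x + 6. The generator-as-solver and substitution-as-verifier are illustrative stand-ins for LLM agents.

```python
def solver(candidates):
    """The 'solver' agent proposes candidate answers (an LLM would generate these)."""
    yield from candidates

def verifier(x: int) -> bool:
    """The 'verifier' agent tries to reject a candidate by substitution."""
    return x * x - 5 * x + 6 == 0

# The solver keeps proposing until the verifier can no longer object.
for x in solver(range(-10, 11)):
    if verifier(x):
        print("accepted solution:", x)
        break
```

The division of labor matters: checking a solution (substitution) is much cheaper than finding one, so the verifier can be strict without slowing the loop down.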

  5. Software Development and QA

In this scenario, one agent codes a function, while another agent tries to find bugs and security vulnerabilities by producing malicious or malformed input.
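A minimal sketch of that coder-versus-adversary pattern: one agent's function is attacked with malformed inputs, and anything other than the documented failure mode counts as a finding. The `coder_parse_age` function and the hand-picked attack list are invented for illustration; an LLM adversary would generate the inputs itself.

```python
def coder_parse_age(text):
    """The 'coder' agent's submission: parse a human age from text."""
    value = int(text.strip())
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

# The 'adversary' agent probes with malformed input.
attacks = ["", "  42  ", "-1", "1e3", "151", None]

findings = []
for attack in attacks:
    try:
        coder_parse_age(attack)
    except ValueError:
        pass  # documented failure mode: rejected cleanly
    except Exception as exc:
        findings.append((attack, type(exc).__name__))  # unexpected crash = bug

print(findings)  # None slips past input validation entirely
```

Here the adversary catches a real defect: passing `None` crashes with an `AttributeError` instead of a clean `ValueError`, exactly the kind of edge case a competing agent is there to surface.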


To learn more about LLM-powered AI agents, check out resources here.

To experiment with building multi-agent systems, here are interesting open-source projects: