Multi AI Agent Systems 101



Automating Routine Tasks in Data Source Management with CrewAI

Image by DALL-E 3

Initially, when ChatGPT first appeared, we used simple prompts to get answers to our questions. Then we ran into issues with hallucinations and began using RAG (Retrieval Augmented Generation) to provide more context to LLMs. After that, we started experimenting with AI agents, where LLMs act as a reasoning engine and can decide what to do next, which tools to use, and when to return the final answer.

The next evolutionary step is to create teams of such agents that can collaborate with each other. This approach is logical, as it mirrors human interactions. We work in teams where each member has a specific role:

  • The product manager proposes the next project to work on.
  • The designer creates its look and feel.
  • The software engineer develops the solution.
  • The analyst examines the data to make sure it performs as expected and identifies ways to improve the product for customers.

Similarly, we can create a team of AI agents, each specialising in one domain. They can collaborate and reach a final conclusion together. Just as specialisation enhances performance in real life, it can also benefit the performance of AI agents.

Another advantage of this approach is increased flexibility. Each agent can operate with its own prompt, set of tools and even LLM. For instance, we can use different models for different parts of our system: GPT-4 for the agent that needs more reasoning and GPT-3.5 for the one that does only simple extraction. We can even fine-tune a model for a small, specific task and use it in our crew of agents.

The potential drawbacks of this approach are time and cost. Multiple interactions and knowledge sharing between agents require more calls to the LLM and consume additional tokens. This could result in longer wait times and increased expenses.

There are several frameworks available for multi-agent systems today.
Here are some of the most popular ones:

  • AutoGen: Developed by Microsoft, AutoGen uses a conversational approach and was one of the earliest frameworks for multi-agent systems.
  • LangGraph: While not strictly a multi-agent framework, LangGraph allows for defining complex interactions between actors using a graph structure. So, it can also be adapted to create multi-agent systems.
  • CrewAI: Positioned as a high-level framework, CrewAI facilitates the creation of “crews” consisting of role-playing agents capable of collaborating in various ways.

I’ve decided to start experimenting with multi-agent frameworks with CrewAI, since it’s quite popular and user-friendly. So, it looks like a good option to begin with.

In this article, I will walk you through how to use CrewAI. As analysts, we’re the domain experts responsible for documenting various data sources and addressing related questions. We’ll explore how to automate these tasks using multi-agent frameworks.

Setting up the environment

Let’s start with setting up the environment. First, we need to install CrewAI’s main package and an extension to work with tools.

pip install crewai
pip install 'crewai[tools]'

CrewAI was developed to work primarily with the OpenAI API, but I would also like to try it with a local model. According to the ChatBot Arena Leaderboard, the best model you can run on your laptop is Llama 3 (8B parameters). It will be the most feasible option for our use case.

We can access Llama models using Ollama. Installation is pretty straightforward: you download Ollama from the website and then go through the installation process. That’s it.

Now, you can test the model in the CLI by running the following command.

ollama run llama3

For example, you can ask something like this.


Let’s create a custom Ollama model to use later in CrewAI.

We will start with a Modelfile (documentation). I only specified the base model (llama3), the temperature and the stop sequence. However, you might add more options. For example, you can set the system message using the SYSTEM keyword.

FROM llama3

# set parameters
PARAMETER temperature 0.5
PARAMETER stop Result

I’ve saved it into a Llama3ModelFile file.

Let’s create a bash script to load the base model for Ollama and create the custom model we defined in the Modelfile.

#!/bin/zsh

# define variables
model_name="llama3"
custom_model_name="crewai-llama3"

# load the base model
ollama pull $model_name

# create the custom model from the Modelfile
ollama create $custom_model_name -f ./Llama3ModelFile

Let’s execute this file.

chmod +x ./llama3_setup.sh
./llama3_setup.sh

You can find both files on GitHub: Llama3ModelFile and llama3_setup.sh

We need to initialise the following environment variables to use the local Llama model with CrewAI.

os.environ["OPENAI_API_BASE"]='http://localhost:11434/v1'

os.environ["OPENAI_MODEL_NAME"]='crewai-llama3'
# custom_model_name from the bash script

os.environ["OPENAI_API_KEY"] = "NA"

We’ve finished the setup and are ready to continue our journey.

Use cases: working with documentation

As analysts, we often play the role of subject matter experts for data and some data-related tools. In my previous team, we used to have a channel with almost 1K participants, where we answered lots of questions about our data and the ClickHouse database we used as storage. It took us a lot of time to manage this channel. It would be interesting to see whether such tasks can be automated with LLMs.

For this example, I will use the ClickHouse database. If you’re interested, you can learn more about ClickHouse and how to set it up locally in my previous article. However, we won’t utilise any ClickHouse-specific features, so feel free to stick to the database you know.

I’ve created a pretty simple data model to work with. There are just two tables in our DWH (Data Warehouse): ecommerce_db.users and ecommerce_db.sessions. As you might guess, the first table contains information about the users of our service.


The ecommerce_db.sessions table stores information about user sessions.


Regarding data source management, analysts typically handle tasks like writing and updating documentation and answering questions about this data. So, we will use an LLM to write documentation for a table in the database and teach it to answer questions about the data or ClickHouse.

But before moving on to the implementation, let’s learn more about the CrewAI framework and its core concepts.

CrewAI basic concepts

The cornerstone of a multi-agent framework is the agent concept. In CrewAI, agents are powered by role-playing. Role-playing is a tactic when you ask an agent to adopt a persona and behave like a top-notch backend engineer or a helpful customer support agent. So, when creating a CrewAI agent, you need to specify each agent’s role, goal, and backstory so that the LLM knows enough to play this role.

The agents’ capabilities are limited without tools (functions that agents can execute and get results from). With CrewAI, you can use one of the predefined tools (for example, to search the Internet, parse a website, or do RAG on a document), create a custom tool yourself, or use LangChain tools. So, it’s pretty easy to create a powerful agent.

Let’s move on from agents to the work they are doing. Agents work on tasks (specific assignments). For each task, we need to define a description, expected output (a definition of done), a set of available tools and an assigned agent. I really like that these frameworks follow managerial best practices, such as a clear definition of done for the tasks.

The next question is how to define the execution order for tasks: which one to work on first, which ones can run in parallel, etc. CrewAI implemented processes to orchestrate the tasks. It provides a couple of options:

  • Sequential — the most straightforward approach, when tasks are called one after another.
  • Hierarchical — when there’s a manager (specified as an LLM model) that creates and delegates tasks to the agents.

Also, CrewAI is working on a consensual process. In such a process, agents will be able to make decisions collaboratively with a democratic approach.

There are other levers you can use to tweak the process of tasks’ execution (there’s a sketch of both options right after this list):

  • You can mark tasks as “asynchronous”; then they will be executed in parallel, so you will be able to get an answer faster.
  • You can use the “human input” flag on a task, and then the agent will ask for human approval before finalising the output of this task. It allows you to add oversight to the process.
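To make this concrete, here’s a minimal sketch of how these options can be wired up. It’s illustrative only: the helper agent is invented for the example, and flag names such as async_execution and human_input match recent CrewAI versions but may differ in yours.

from crewai import Agent, Task, Crew, Process

# a minimal agent, defined here only to keep the sketch self-contained
helper_agent = Agent(
    role = "Helper",
    goal = "Complete small assignments",
    backstory = "You help the team with routine work.",
    allow_delegation = False
)

# a task that can run in parallel with other asynchronous tasks
stats_task = Task(
    description = "Collect raw statistics about the table {table}",
    expected_output = "A bullet-point list of table statistics",
    agent = helper_agent,
    async_execution = True # executed in parallel (assumed flag name)
)

# a task that pauses for human approval before finalising its output
draft_task = Task(
    description = "Draft the documentation for the table {table}",
    expected_output = "A documentation draft in markdown",
    agent = helper_agent,
    human_input = True # ask for human approval (assumed flag name)
)

crew = Crew(
    agents = [helper_agent],
    tasks = [stats_task, draft_task],
    process = Process.sequential # explicit sequential orchestration
)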

We’ve defined all the primary building blocks and can now discuss the holy grail of CrewAI — the crew concept. The crew represents the team of agents and the set of tasks they will be working on. The approach for collaboration (the processes we discussed above) can also be defined at the crew level.

Also, we can set up the memory for a crew. Memory is crucial for efficient collaboration between the agents. CrewAI supports three levels of memory:

  • Short-term memory stores information related to the current execution. It helps agents work together on the current task.
  • Long-term memory is data about previous executions stored in a local database. This type of memory allows agents to learn from previous iterations and improve over time.
  • Entity memory captures and structures information about entities (like personas, cities, etc.)

Right now, you can only switch on all types of memory for a crew, without any further customisation. However, it doesn’t work with the Llama models.

We’ve learned enough about the CrewAI framework, so it’s time to start using this knowledge in practice.

Use case: writing documentation

Let’s start with a simple task: putting together the documentation for our DWH. As we discussed before, there are two tables in our DWH, and I would like to create detailed descriptions for them using LLMs.

First approach

First, we need to think about the team structure. Think of this as a typical managerial task. Who would you hire for such a job?

I would break this task into two parts: retrieving data from a database and writing documentation. So, we need a database specialist and a technical writer. The database specialist needs access to the database, while the writer won’t need any special tools.


Now, we have a high-level plan. Let’s create the agents.

For each agent, I’ve specified the role, goal and backstory. I’ve tried my best to provide the agents with all the needed context.

database_specialist_agent = Agent(
    role = "Database specialist",
    goal = "Provide data to answer business questions using SQL",
    backstory = '''You are an expert in SQL, so you can help the team
    to gather needed data to power their decisions.
    You are very accurate and take into account all the nuances in data.''',
    allow_delegation = False,
    verbose = True
)

tech_writer_agent = Agent(
    role = "Technical writer",
    goal = '''Write engaging and factually accurate technical documentation
    for data sources or tools''',
    backstory = '''
    You are an expert in both technology and communications, so you can easily explain even sophisticated concepts.
    You base your work on the factual information provided by your colleagues.
    Your texts are concise and can be easily understood by a wide audience.
    You use a professional but rather informal style in your communication.
    ''',
    allow_delegation = False,
    verbose = True
)

We will use a simple sequential process, so there’s no need for agents to delegate tasks to each other. That’s why I specified allow_delegation = False.

The next step is setting the tasks for the agents. But before moving to them, we need to create a custom tool to connect to the database.

First, I put together a function to execute ClickHouse queries using the HTTP API.

import requests

CH_HOST = 'http://localhost:8123' # default address

def get_clickhouse_data(query, host = CH_HOST, connection_timeout = 1500):
    r = requests.post(host, params = {'query': query},
        timeout = connection_timeout)
    if r.status_code == 200:
        return r.text
    else:
        return 'Database returned the following error:\n' + r.text

When working with LLM agents, it’s important to make tools fault-tolerant. For example, if the database returns an error (status_code != 200), my code won’t throw an exception. Instead, it will return the error description to the LLM so that it can attempt to resolve the issue.

To create a CrewAI custom tool, we need to derive our class from crewai_tools.BaseTool, implement the _run method and then create an instance of this class.

from crewai_tools import BaseTool

class DatabaseQuery(BaseTool):
    name: str = "Database Query"
    description: str = "Returns the result of SQL query execution"

    def _run(self, sql_query: str) -> str:
        # execute the query via the ClickHouse HTTP API helper
        return get_clickhouse_data(sql_query)

database_query_tool = DatabaseQuery()

Now, we can set the tasks for the agents. Again, providing clear instructions and all the context to the LLM is crucial.

table_description_task = Task(
    description = '''Provide the comprehensive overview for the data
    in table {table}, so that it's easy to understand the structure
    of the data. This task is crucial to put together the documentation
    for our database''',
    expected_output = '''The comprehensive overview of {table} in the md format.
    Include 2 sections: columns (list of columns with their types)
    and examples (the first 30 rows from table).''',
    tools = [database_query_tool],
    agent = database_specialist_agent
)

table_documentation_task = Task(
    description = '''Using provided information about the table,
    put together the detailed documentation for this table so that
    people can use it in practice''',
    expected_output = '''Well-written detailed documentation describing
    the data scheme for the table {table} in markdown format,
    that gives the table overview in 1-2 sentences and then
    describes each column. Structure the columns description
    as a markdown table with column name, type and description.''',
    tools = [],
    output_file = "table_documentation.md",
    agent = tech_writer_agent
)

You might have noticed that I’ve used the {table} placeholder in the tasks’ descriptions. We will use table as an input variable when executing the crew, and this value will be inserted into all placeholders.

Also, I’ve specified the output file for the table documentation task to save the final result locally.

We have all we need. Now, it’s time to create a crew and execute the process, specifying the table we are interested in. Let’s try it with the users table.

crew = Crew(
    agents = [database_specialist_agent, tech_writer_agent],
    tasks = [table_description_task, table_documentation_task],
    verbose = 2
)

result = crew.kickoff({'table': 'ecommerce_db.users'})

It’s an exciting moment, and I’m really looking forward to seeing the result. Don’t worry if execution takes some time. Agents make multiple LLM calls, so it’s perfectly normal for it to take a few minutes. It took 2.5 minutes on my laptop.

We asked the LLM to return the documentation in markdown format. We can use the following code to see the formatted result in a Jupyter Notebook.

from IPython.display import Markdown
Markdown(result)

At first glance, it looks great. We’ve got a valid markdown file describing the users table.


But wait, it’s incorrect. Let’s see what data we actually have in our table.


The columns listed in the documentation are completely different from what we have in the database. It’s a case of LLM hallucinations.

We set verbose = 2 to get detailed logs from CrewAI. Let’s read through the execution logs to identify the root cause of the problem.

First, the database specialist couldn’t query the database due to problems with quotes.


The specialist didn’t manage to resolve this problem. Finally, this chain was terminated by CrewAI with the following output: Agent stopped due to iteration limit or time limit.

This means the technical writer didn’t receive any factual information about the data. However, the agent carried on and produced completely fake results. That’s how we ended up with incorrect documentation.

Fixing the issues

Even though our first iteration wasn’t successful, we’ve learned a lot. We have (at least) two areas for improvement:

  • Our database tool is too difficult for the model, and the agent struggles to use it. We can make the tool more tolerant by removing quotes from the beginning and end of the queries. This solution isn’t ideal, since valid SQL can end with a quote, but let’s try it.
  • Our technical writer isn’t basing its output on the input from the database specialist. We need to tweak the prompt to highlight the importance of providing only factual information.

So, let’s try to fix these problems. First, we’ll fix the tool — we can leverage strip to eliminate quotes.

CH_HOST = 'http://localhost:8123' # default address

def get_clickhouse_data(query, host = CH_HOST, connection_timeout = 1500):
    # strip the quotes the LLM sometimes adds around the query
    r = requests.post(host, params = {'query': query.strip('"').strip("'")},
        timeout = connection_timeout)
    if r.status_code == 200:
        return r.text
    else:
        return 'Database returned the following error:\n' + r.text

Then, it’s time to update the prompts. I’ve included statements emphasising the importance of sticking to the facts in both the agent and task definitions.


tech_writer_agent = Agent(
    role = "Technical writer",
    goal = '''Write engaging and factually accurate technical documentation
    for data sources or tools''',
    backstory = '''
    You are an expert in both technology and communications, so you
    can easily explain even sophisticated concepts.
    Your texts are concise and can be easily understood by a wide audience.
    You use a professional but rather informal style in your communication.
    You base your work on the factual information provided by your colleagues.
    You stick to the facts in the documentation and use ONLY
    information provided by the colleagues, not adding anything.''',
    allow_delegation = False,
    verbose = True
)

table_documentation_task = Task(
    description = '''Using provided information about the table,
    put together the detailed documentation for this table so that
    people can use it in practice''',
    expected_output = '''Well-written detailed documentation describing
    the data scheme for the table {table} in markdown format,
    that gives the table overview in 1-2 sentences and then
    describes each column. Structure the columns description
    as a markdown table with column name, type and description.
    The documentation is based ONLY on the information provided
    by the database specialist, without any additions.''',
    tools = [],
    output_file = "table_documentation.md",
    agent = tech_writer_agent
)

Let’s execute our crew once again and see the results.

We’ve achieved a somewhat better result. Our database specialist was able to execute queries and view the data, which is a significant win for us. Additionally, we can see all the relevant fields in the result table, though there are lots of other fields as well. So, it’s still not entirely correct.

I once again looked through the CrewAI execution log to figure out what went wrong. The issue lies in getting the list of columns. There’s no filter by database, so the query returns some unrelated columns that end up in the result.

SELECT column_name
FROM information_schema.columns
WHERE table_name = 'users'
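A filter on the database would have avoided pulling in columns from unrelated tables. For reference, here’s a minimal corrected query run through our helper function, assuming ClickHouse’s information_schema compatibility views (which expose a table_schema column):

# hypothetical corrected lookup, restricted to our database
columns = get_clickhouse_data('''
    SELECT column_name
    FROM information_schema.columns
    WHERE table_schema = 'ecommerce_db'
      AND table_name = 'users'
    FORMAT TabSeparatedWithNames
''')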

Also, after multiple attempts, I noticed that the database specialist, from time to time, executes a select * from <table> query. This might cause issues in production, as it can generate lots of data and send it to the LLM.

More specialised tools

We can provide our agents with more specialised tools to improve our solution. Currently, the agent has a tool to execute any SQL query, which is flexible and powerful but prone to errors. We can create more focused tools, such as getting the table structure and the top-N rows from the table. Hopefully, this will reduce the number of mistakes.

class TableStructure(BaseTool):
    name: str = "Table structure"
    description: str = "Returns the list of columns and their types"

    def _run(self, table: str) -> str:
        table = table.strip('"').strip("'")
        return get_clickhouse_data(
            'describe {table} format TabSeparatedWithNames'
            .format(table = table)
        )

class TableExamples(BaseTool):
    name: str = "Table examples"
    description: str = "Returns the first N rows from the table"

    def _run(self, table: str, n: int = 30) -> str:
        table = table.strip('"').strip("'")
        return get_clickhouse_data(
            'select * from {table} limit {n} format TabSeparatedWithNames'
            .format(table = table, n = n)
        )

table_structure_tool = TableStructure()
table_examples_tool = TableExamples()

Now, we need to specify these tools in the task and re-run our script.
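The article doesn’t show the updated task definition, but presumably it’s the same table_description_task with the generic SQL tool swapped for the two focused ones — a sketch under that assumption:

table_description_task = Task(
    description = '''Provide the comprehensive overview for the data
    in table {table}, so that it's easy to understand the structure
    of the data. This task is crucial to put together the documentation
    for our database''',
    expected_output = '''The comprehensive overview of {table} in the md format.
    Include 2 sections: columns (list of columns with their types)
    and examples (the first 30 rows from table).''',
    tools = [table_structure_tool, table_examples_tool], # focused tools instead of raw SQL
    agent = database_specialist_agent
)

After the first attempt, I got the following output from the Technical Writer.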

Task output: This final answer provides a detailed and factual description
of the ecommerce_db.users table structure, including column names, types,
and descriptions. The documentation adheres to the provided information
from the database specialist without any additions or modifications.

The more focused tools helped the database specialist retrieve the correct table information. However, even though the writer had all the necessary information, we didn’t get the expected result.

As we know, LLMs are probabilistic, so I gave it another try. And hooray, this time the result was pretty good.


It’s not perfect, since it still includes some irrelevant comments and lacks an overall description of the table. However, providing more specialised tools has definitely paid off. It also helped to prevent issues when the agent tried to load all the data from the table.

Quality assurance specialist

We’ve achieved pretty good results, but let’s see whether we can improve them further. A common practice in multi-agent setups is quality assurance, which adds a final review stage before finalising the results.


Let’s create a new agent — a Quality Assurance specialist, who will be in charge of review.

qa_specialist_agent = Agent(
    role = "Quality Assurance specialist",
    goal = """Ensure the highest quality of the documentation we provide
    (that it's correct and easy to understand)""",
    backstory = '''
    You work as a Quality Assurance specialist, checking the work
    from the technical writer and ensuring that it's inline
    with our highest standards.
    You need to check that the technical writer provides the full complete
    answers and makes no assumptions.
    Also, you need to make sure that the documentation addresses
    all the questions and is easy to understand.
    ''',
    allow_delegation = False,
    verbose = True
)

Now, it’s time to describe the review task. I’ve used the context parameter to specify that this task requires outputs from both table_description_task and table_documentation_task.

qa_review_task = Task(
    description = '''
    Review the draft documentation provided by the technical writer.
    Ensure that the documentation fully answers all the questions:
    the purpose of the table and its structure in the form of a table.
    Make sure that the documentation is consistent with the information
    provided by the database specialist.
    Double check that there are no irrelevant comments in the final version
    of the documentation.
    ''',
    expected_output = '''
    The final version of the documentation in markdown format
    that can be published.
    The documentation should fully address all the questions, be consistent
    and follow our professional but informal tone of voice.
    ''',
    tools = [],
    context = [table_description_task, table_documentation_task],
    output_file = "checked_table_documentation.md",
    agent = qa_specialist_agent
)

Let’s update our crew and run it.

full_crew = Crew(
    agents = [database_specialist_agent, tech_writer_agent, qa_specialist_agent],
    tasks = [table_description_task, table_documentation_task, qa_review_task],
    verbose = 2,
    memory = False # doesn't work with Llama
)

full_result = full_crew.kickoff({'table': 'ecommerce_db.users'})

We now have more structured and detailed documentation thanks to the addition of the QA stage.


Delegation

With the addition of the QA specialist, it would be interesting to test the delegation mechanism. The QA specialist agent might have questions or requests that it could delegate to other agents.

I tried using delegation with Llama 3, but it didn’t go well. Llama 3 struggled to call the co-worker tool correctly. It couldn’t specify the correct co-worker’s name.

We achieved pretty good results with a local model that can run on any laptop, but now it’s time to switch gears and use a much more powerful model — GPT-4o.

To do it, we just need to update the following environment variables.

os.environ["OPENAI_MODEL_NAME"] = 'gpt-4o'  
os.environ["OPENAI_API_KEY"] = config['OPENAI_API_KEY'] # your OpenAI key

To switch on delegation, we should specify allow_delegation = True for the QA specialist agent.
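The updated agent definition isn’t shown in the article; the only change should be the flag itself, something like this:

qa_specialist_agent = Agent(
    role = "Quality Assurance specialist",
    goal = """Ensure the highest quality of the documentation we provide
    (that it's correct and easy to understand)""",
    backstory = '''...''', # same backstory as before
    allow_delegation = True, # let the QA specialist ask co-workers questions
    verbose = True
)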

Also, we can use the handy memory functionality for OpenAI models, as mentioned above. Memory will allow our agents to share their knowledge with each other during execution and leverage long-term memory to get information from previous executions.

full_crew = Crew(
    agents = [database_specialist_agent, tech_writer_agent, qa_specialist_agent],
    tasks = [table_description_task, table_documentation_task, qa_review_task],
    verbose = 2,
    memory = True
)

full_result = full_crew.kickoff({'table': 'ecommerce_db.users'})

Let’s look at the CrewAI execution logs to understand how delegation works. Here are all the logs for the QA specialist. We can see that it reached out to the database specialist to double-check the information.

Thought: I need to review the provided draft documentation to ensure it
fully answers all the questions and is complete, correct, and easy
to understand. Specifically, I need to ensure that it explains the purpose
of the table, its structure, and includes relevant examples without
any irrelevant comments.

First, I will compare the two drafts to identify any discrepancies and
missing information. Then, I will verify the accuracy and completeness of
the information provided.

**Comparison:**
1. **Introduction:**
- First draft: Includes an introduction and examples.
- Second draft: Adds a more detailed introduction about the purpose and importance of the table.

2. **Columns:**
- Both drafts have the same column structure and descriptions.

3. **Additional Information:**
- Both drafts include similar additional information, but the second draft is more concise.

4. **Examples:**
- The first draft includes examples, which are missing in the second draft.

5. **Final Checks:**
- Ensure there are no irrelevant comments.
- Verify consistency with information provided by the database specialist.

Action: Ask question to co-worker
Action Input: 1000009

<-- omitted logs from the database specialist -->

Based on the provided context and examples for the `ecommerce_db.users` table,
the column descriptions and examples appear accurate. Here are the details:
- `user_id`: This column contains a unique identifier for each user.
The provided examples (e.g., 1000001, 1000002) align with this description
as they are unique numeric identifiers.

- `country`: This column specifies the country of the user. The provided
examples (e.g., United Kingdom, France, Germany, Netherlands) are
all valid country names.

- `is_active`: This column indicates whether the user is active (1) or not (0). The provided examples (e.g., 0, 1) correctly represent this binary status.

- `age`: This column shows the age of the user. The provided examples (e.g., 70,
87, 88, 25, 48, 78, 65, 31, 66, 73) are all valid ages.

Therefore, the column descriptions and examples provided for the
`ecommerce_db.users` table are accurate and meet the expected criteria.

Thought: Based on the observation from the database specialist,
the column descriptions and examples provided for the `ecommerce_db.users`
table are accurate. Now, I will consolidate the best parts from both drafts,
ensuring the documentation is complete, clear, and free of irrelevant comments.

Action: Finalize the documentation in markdown format, incorporating
the detailed introduction, examples, and ensuring the tone is professional
but informal.

When I tried delegation for the first time, I didn’t enable memory, which led to incorrect results. The data specialist and the technical writer initially returned the correct information. However, when the QA specialist came back with follow-up questions, they started to hallucinate. So, it looks like delegation works better when memory is enabled.

Here’s the final output from GPT-4o. The result looks pretty good now. We can definitely use LLMs to automate documentation.


So, the first task has been solved!

I used the same script to generate documentation for the ecommerce_db.sessions table as well. It will be handy for our next task. So, let’s not waste any time and move on.
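That run isn’t shown in the article, but presumably it’s just the same crew kicked off with the other table name:

sessions_result = full_crew.kickoff({'table': 'ecommerce_db.sessions'})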

Use case: answering questions

Our next task is answering questions based on the documentation, since that’s common for many data analysts (and other specialists).

We will start simple and create just two agents:

  • The documentation support specialist will answer questions based on the docs,
  • The support QA agent will review the answer before sharing it with the customer.

We will need to empower the documentation specialist with a couple of tools that allow them to see all the files stored in the directory and read those files. It’s pretty straightforward since CrewAI has implemented such tools.

from crewai_tools import DirectoryReadTool, FileReadTool

documentation_directory_tool = DirectoryReadTool(
    directory = '~/crewai_project/ecommerce_documentation')

base_file_read_tool = FileReadTool()

However, since Llama 3 keeps struggling with quotes when calling tools, I had to create a custom tool on top of the FileReadTool to overcome this issue.

from crewai_tools import BaseTool

class FileReadToolUPD(BaseTool):
    name: str = "Read a file's content"
    description: str = "A tool that can be used to read a file's content."

    def _run(self, file_path: str) -> str:
        # strip the quotes the model sometimes adds around the path
        return base_file_read_tool._run(file_path = file_path.strip('"').strip("'"))

file_read_tool = FileReadToolUPD()

Next, as we did before, we need to create the agents, tasks and crew.

data_support_agent = Agent(
    role = "Senior Data Support Agent",
    goal = "Be the most helpful support for your colleagues",
    backstory = '''You work as a support for data-related questions
    in the company.
    Even though you're a big expert in our data warehouse, you double check
    all the facts in the documentation.
    Our documentation is absolutely up-to-date, so you can fully rely on it
    when answering questions (you don't need to check the actual data
    in the database).
    Your work is very important for the team's success. However, remember
    that examples of table rows don't show all the possible values.
    You need to ensure that you provide the best possible support: answering
    all the questions, making no assumptions and sharing only the factual data.
    Be creative and try your best to solve the customer's problem.
    ''',
    allow_delegation = False,
    verbose = True
)

qa_support_agent = Agent(
    role = "Support Quality Assurance Agent",
    goal = """Ensure the highest quality of the answers we provide
    to the customers""",
    backstory = '''You work as a Quality Assurance specialist, checking the work
    from support agents and ensuring that it's inline with our highest standards.
    You need to check that the agent provides the full complete answers
    and makes no assumptions.
    Also, you need to make sure that the documentation addresses all
    the questions and is easy to understand.
    ''',
    allow_delegation = False,
    verbose = True
)

draft_data_answer = Task(
    description = '''Important customer {customer} reached out to you
    with the following question:
    ```
    {question}
    ```

    Your task is to provide the best answer to all the points in the question
    using all available information and not making any assumptions.
    If you don't have enough information to answer the question, just say
    that you don't know.''',
    expected_output = '''The detailed informative answer to the customer's
    question that addresses all the points mentioned.
    Make sure that the answer is complete and sticks to the facts
    (without any additional information not based on the factual data)''',
    tools = [documentation_directory_tool, file_read_tool],
    agent = data_support_agent
)

answer_review = Task(
    description = '''
    Review the draft answer provided by the support agent.
    Ensure that it fully answers all the questions mentioned
    in the initial inquiry.
    Make sure that the answer is consistent and doesn't include any assumptions.
    ''',
    expected_output = '''
    The final version of the answer in markdown format that can be shared
    with the customer.
    The answer should fully address all the questions, be consistent
    and follow our professional but informal tone of voice.
    We are a very chill and friendly company, so don't forget to include
    all the polite phrases.
    ''',
    tools = [],
    agent = qa_support_agent
)

qna_crew = Crew(
    agents = [data_support_agent, qa_support_agent],
    tasks = [draft_data_answer, answer_review],
    verbose = 2,
    memory = False # doesn't work with Llama
)

Let’s see how it works in practice.

result = qna_crew.kickoff(
    {'customer': "Max",
     'question': """Hey team, I hope you're doing well. I need to find
     the numbers before our CEO presentation tomorrow, so I will really
     appreciate your help.
     I need to calculate the number of sessions from our Windows users in 2023.
     I've tried to find the table with such data in our data warehouse, but wasn't able to.
     Do you have any ideas whether we store the needed data somewhere,
     so that I can query it? """
    }
)

We’ve got a polite, practical and helpful answer in return. That’s really great.

**Hello Max,**

Thank you for reaching out with your question! I'm happy to help you
find the number of sessions from Windows users in 2023.
After reviewing our documentation, I found that we do store data
related to sessions and users in our ecommerce database, specifically in
the `ecommerce_db.sessions` table.

To answer your question, I can provide you with a step-by-step guide
on how to query this table using SQL. First, you can use the `session_id`
column along with the `os` column filtering for "Windows" and
the `action_date` column filtering for dates in 2023.
Then, you can group the results by `os` using the `GROUP BY` clause
to count the number of sessions that meet these conditions.

Here's a sample SQL query that should give you the desired output:

```sql
SELECT COUNT(*)
FROM ecommerce_db.sessions
WHERE os = 'Windows'
AND action_date BETWEEN '2023-01-01' AND '2023-12-31'
GROUP BY os;
```

This query will return the total number of sessions from Windows
users in 2023. I hope this helps! If you have any further questions or
need more assistance, please don't hesitate to ask.

Let’s complicate the task a bit. Suppose we can get not only questions about our data but also questions about our tool (ClickHouse). So, we will have another agent in the crew — a ClickHouse Guru. To give our CH agent some knowledge, I will share a documentation website with it.

from crewai_tools import ScrapeWebsiteTool, WebsiteSearchTool

ch_documentation_tool = ScrapeWebsiteTool(
    'https://clickhouse.com/docs/en/guides/creating-tables')

If you need to work with a lengthy document, you might try using RAG (Retrieval Augmented Generation) via the WebsiteSearchTool. It will calculate embeddings and store them locally in ChromaDB. In our case, we will stick to a simple website scraper tool.
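The definitions of the ClickHouse agent and its task aren’t listed in the article (they are in the full code on GitHub), but they presumably mirror the data support agent above. Here’s a hedged sketch that reuses the names ch_support_agent and draft_ch_answer referenced by the crew below; the exact wording is assumed.

ch_support_agent = Agent(
    role = "ClickHouse Guru",
    goal = "Answer ClickHouse-related questions using the official documentation",
    backstory = '''You are an expert in ClickHouse and help your colleagues
    with questions about its features and syntax.
    You base your answers only on the documentation available to you
    and don't make any assumptions.''',
    allow_delegation = False,
    verbose = True
)

draft_ch_answer = Task(
    description = '''Important customer {customer} reached out to you
    with the following ClickHouse-related question:
    {question}
    Provide the best possible answer using the ClickHouse documentation,
    not making any assumptions.''',
    expected_output = '''A detailed, factual answer to the customer's
    ClickHouse question, with examples where appropriate.''',
    tools = [ch_documentation_tool],
    agent = ch_support_agent
)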

Now that we have two subject matter experts, we need to decide who will work on each question. So, it’s time to use a hierarchical process and add a manager to orchestrate all the tasks.


CrewAI provides the manager implementation, so we only need to specify the LLM model. I’ve picked GPT-4o.

from langchain_openai import ChatOpenAI
from crewai import Process

complex_qna_crew = Crew(
    agents = [ch_support_agent, data_support_agent, qa_support_agent],
    tasks = [draft_ch_answer, draft_data_answer, answer_review],
    verbose = 2,
    manager_llm = ChatOpenAI(model = 'gpt-4o', temperature = 0),
    process = Process.hierarchical,
    memory = False
)

At this point, I had to switch from Llama 3 to OpenAI models again to run the hierarchical process, since it hasn’t worked for me with Llama (similar to this issue).

Now, we can try our new crew with different types of questions (either related to our data or to the ClickHouse database).

ch_result = complex_qna_crew.kickoff(
    {'customer': "Maria",
     'question': """Good morning, team. I'm using ClickHouse to calculate
     the number of customers.
     Could you please remind me whether there's an option to add totals
     in ClickHouse?"""
    }
)

doc_result = complex_qna_crew.kickoff(
    {'customer': "Max",
     'question': """Hey team, I hope you're doing well. I need to find
     the numbers before our CEO presentation tomorrow, so I will really
     appreciate your help.
     I need to calculate the number of sessions from our Windows users
     in 2023. I've tried to find the table with such data
     in our data warehouse, but wasn't able to.
     Do you have any ideas whether we store the needed data somewhere,
     so that I can query it? """
    }
)

If we look at the final answers and logs (I’ve omitted them here since they are quite lengthy, but you can find them and the full logs on GitHub), we’ll see that the manager was able to orchestrate correctly and delegate tasks to the co-workers with the relevant knowledge to address the customer’s question. For the first (ClickHouse-related) question, we got a detailed answer with examples and possible implications of using the WITH TOTALS functionality. For the data-related question, the models returned roughly the same information as we’ve seen above.

So, we’ve built a crew that can answer various kinds of questions based on documentation, whether from a local file or a website. I think it’s an excellent result.

You can find all the code on GitHub.

Summary

In this article, we’ve explored using the CrewAI multi-agent framework to create a solution for writing documentation based on tables and answering related questions.

Given the extensive functionality we’ve utilised, it’s time to summarise the strengths and weaknesses of this framework.

Overall, I find CrewAI to be a highly useful framework for multi-agent systems:

  • It’s straightforward, and you can build your first prototype quickly.
  • Its flexibility allows you to solve fairly sophisticated business problems.
  • It encourages good practices like role-playing.
  • It provides many handy tools out of the box, such as RAG and a website parser.
  • The support for different types of memory enhances the agents’ collaboration.
  • Built-in guardrails help prevent agents from getting stuck in repetitive loops.

However, there are areas that could be improved:

  • While the framework is simple and easy to use, it’s not very customisable. For instance, you currently can’t create your own LLM manager to orchestrate the processes.
  • Sometimes, it’s quite challenging to get full, detailed information from the documentation. For example, it’s clear that CrewAI implemented some guardrails to prevent repetitive function calls, but the documentation doesn’t fully explain how they work.
  • Another improvement area is transparency. I like to understand how frameworks work under the hood. For example, in LangChain, you can use langchain.debug = True to see all the LLM calls. However, I haven’t figured out how to get the same level of detail with CrewAI.
  • Full support for local models would be a great addition, as the current implementation either lacks some features or is difficult to get working properly.

The domain and tools for LLMs are evolving rapidly, so I’m hopeful that we’ll see a lot of progress in the near future.

Thank you a lot for reading this article. I hope it was insightful for you. If you have any follow-up questions or comments, please leave them in the comments section.

Reference

This article is inspired by the “Multi AI Agent Systems with CrewAI” short course from DeepLearning.AI.
