<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Eva Jagodic</title>
    <description>The latest articles on DEV Community by Eva Jagodic (@eva-jagodic).</description>
    <link>https://dev.to/eva-jagodic</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2701007%2F2af75164-218d-44f6-97ad-3e66138e7e63.jpg</url>
      <title>DEV Community: Eva Jagodic</title>
      <link>https://dev.to/eva-jagodic</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/eva-jagodic"/>
    <language>en</language>
    <item>
      <title>Building Intelligent Multi-Agent Systems with CrewAI</title>
      <dc:creator>Eva Jagodic</dc:creator>
      <pubDate>Tue, 04 Feb 2025 13:46:47 +0000</pubDate>
      <link>https://dev.to/cortecs/building-intelligent-multi-agent-systems-with-crewai-1bc2</link>
      <guid>https://dev.to/cortecs/building-intelligent-multi-agent-systems-with-crewai-1bc2</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Multi-agent systems (MAS)&lt;/strong&gt; for large language models (LLMs) represent a significant advancement in AI-driven problem-solving. Rather than operating in isolation, LLM agents collaborate, exchange information, and make dynamic decisions to achieve complex objectives efficiently.&lt;/p&gt;

&lt;p&gt;From &lt;strong&gt;document analysis&lt;/strong&gt; and &lt;strong&gt;automated research&lt;/strong&gt; to &lt;strong&gt;content generation&lt;/strong&gt; and &lt;strong&gt;customer support&lt;/strong&gt;, LLM-based MAS streamline workflows by offering scalability, adaptability, and efficiency. Because the agents interact and coordinate dynamically, they collaborate effectively across multiple AI-driven tasks, optimizing performance in real-world applications.&lt;/p&gt;

&lt;p&gt;In this tutorial, we'll explore LLM multi-agent fundamentals and real-world applications, and guide you step by step in building your own intelligent agent system. We will use &lt;a href="https://docs.crewai.com/introduction" rel="noopener noreferrer"&gt;&lt;strong&gt;CrewAI&lt;/strong&gt;&lt;/a&gt;, an open-source framework for orchestrating autonomous AI agents, and power it with &lt;a href="https://cortecs.ai/" rel="noopener noreferrer"&gt;&lt;strong&gt;Cortecs LLM workers&lt;/strong&gt;&lt;/a&gt;. Get ready to bring AI collaboration to life!&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://dev.to/cortecs/building-intelligent-multi-agent-systems-with-crewai-1bc2#understanding-multi-agent-systems"&gt;Understanding Multi-Agent Systems&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/cortecs/building-intelligent-multi-agent-systems-with-crewai-1bc2#setting-up-the-development-environment"&gt;Setting Up the Development Environment&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/cortecs/building-intelligent-multi-agent-systems-with-crewai-1bc2#adding-dynamic-provisioning-to-your-example-crew"&gt;Adding Dynamic Provisioning to Your Example Crew&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/cortecs/building-intelligent-multi-agent-systems-with-crewai-1bc2#running-your-crew"&gt;Running Your Crew&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/cortecs/building-intelligent-multi-agent-systems-with-crewai-1bc2#conclusion"&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Understanding Multi-Agent Systems
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What Are Multi-Agent Systems?
&lt;/h3&gt;

&lt;p&gt;An LLM-based MAS consists of multiple AI agents that interact in a shared environment to process language tasks efficiently. These agents, powered by large language models, collaborate by exchanging information, analysing data, and generating responses.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Components of LLM Multi-Agent Systems
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;LLM Agents&lt;/strong&gt; – AI-driven entities that process and generate text based on specific roles and objectives.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment&lt;/strong&gt; – The digital space where agents operate, such as document repositories, chat interfaces, or APIs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Communication&lt;/strong&gt; – How agents share insights, using structured prompts, shared memory, or message-passing frameworks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Decision-Making&lt;/strong&gt; – The strategies agents use to determine responses, often involving chain-of-thought reasoning or reinforcement learning.&lt;/li&gt;
&lt;/ol&gt;
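&lt;p&gt;To make these components concrete, here is a deliberately tiny, framework-free sketch of two agents handing work to each other through a shared environment. All names here are illustrative; CrewAI, used below, provides real abstractions for each component.&lt;/p&gt;

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the four components above; real frameworks
# like CrewAI provide far richer abstractions.

@dataclass
class Agent:                 # 1. LLM agent: a role plus a (stubbed) text function
    role: str
    respond: callable = None

@dataclass
class Environment:           # 2. Environment: shared state the agents read and write
    messages: list = field(default_factory=list)  # 3. Communication: message passing

def run_pipeline(env, agents, task):
    # 4. Decision-making: a fixed hand-off order here; real systems reason dynamically
    text = task
    for agent in agents:
        text = agent.respond(text)
        env.messages.append((agent.role, text))  # share each result via the environment
    return text

# Stub "LLMs" so the sketch runs without any model behind it
researcher = Agent("researcher", lambda t: f"notes on: {t}")
writer = Agent("writer", lambda t: f"draft based on {t}")

env = Environment()
result = run_pipeline(env, [researcher, writer], "multi-agent systems")
print(result)  # draft based on notes on: multi-agent systems
```

&lt;p&gt;The stubbed &lt;code&gt;respond&lt;/code&gt; functions stand in for LLM calls; swapping them for real model calls and real coordination logic is exactly what frameworks like CrewAI automate.&lt;/p&gt;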

&lt;h3&gt;
  
  
  Benefits of LLM Multi-Agent Systems
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt; – Handles large-scale text processing tasks efficiently.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Collaboration&lt;/strong&gt; – Multiple agents can divide and refine tasks for better accuracy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adaptability&lt;/strong&gt; – Easily integrates into various workflows and industries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Efficiency&lt;/strong&gt; – Automates complex workflows with minimal human intervention.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Applications of LLM Multi-Agent Systems
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automated Research&lt;/strong&gt; – Agents collaborate to summarize, fact-check, and analyse documents.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content Generation&lt;/strong&gt; – Teams of AI writers draft, edit, and refine articles.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customer Support&lt;/strong&gt; – AI agents handle inquiries, escalate issues, and personalize responses.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Extraction &amp;amp; Analysis&lt;/strong&gt; – AI parses structured and unstructured text for insights.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Understanding these fundamentals prepares us to implement an LLM-based MAS!&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up the Development Environment
&lt;/h2&gt;

&lt;p&gt;Let's install the required libraries for this example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;crewai crewai-tools uv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We'll use &lt;code&gt;crewai&lt;/code&gt; and its extension &lt;code&gt;crewai-tools&lt;/code&gt; to orchestrate our agents, while the &lt;code&gt;uv&lt;/code&gt; package manager helps run our crews.&lt;/p&gt;

&lt;p&gt;Once the libraries are installed, we will create an example crew with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;crewai create crew example_crew
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When prompted to choose a provider, select OpenAI. Since Cortecs LLM workers are OpenAI-compatible, we'll use our Cortecs credentials. First, create an account on &lt;a href="http://cortecs.ai" rel="noopener noreferrer"&gt;Cortecs.ai&lt;/a&gt;, then visit your &lt;a href="https://cortecs.ai/userArea/userProfile" rel="noopener noreferrer"&gt;profile page&lt;/a&gt; to generate access credentials.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CORTECS_CLIENT_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;YOUR_CORTECS_CLIENT_ID&amp;gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CORTECS_CLIENT_SECRET&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;YOUR_CLIENT_SECRET&amp;gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;YOUR_CORTECS_API_KEY&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
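&lt;p&gt;A missing variable tends to surface later as an opaque authentication error, so a small optional check can confirm the exports took effect. The helper below is illustrative and not part of the CrewAI scaffold:&lt;/p&gt;

```python
import os

# Credentials the crew expects, as exported above
REQUIRED = ["CORTECS_CLIENT_ID", "CORTECS_CLIENT_SECRET", "OPENAI_API_KEY"]

def missing_credentials(env=os.environ):
    """Return the names of any required credentials not set in `env`."""
    return [name for name in REQUIRED if not env.get(name)]

# Report anything that still needs to be exported
for name in missing_credentials():
    print(f"Please export {name} before running your crew.")
```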



&lt;p&gt;Next, select a model for your crew. We recommend using an 🔵 &lt;strong&gt;Instantly Provisioned&lt;/strong&gt; model like &lt;code&gt;cortecs/phi-4-FP8-Dynamic&lt;/code&gt;. The &lt;code&gt;openai/&lt;/code&gt; prefix indicates we're using an OpenAI-compatible endpoint.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;MODEL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;openai/cortecs/phi-4-FP8-Dynamic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Adding Dynamic Provisioning to Your Example Crew
&lt;/h2&gt;

&lt;p&gt;Let's dynamically provision an LLM worker to power our crew.&lt;/p&gt;

&lt;p&gt;We will navigate to &lt;code&gt;example_crew/src/example_crew/crew.py&lt;/code&gt; and modify the &lt;code&gt;ExampleCrew&lt;/code&gt; class with these two key functions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;start_llm()&lt;/code&gt;

&lt;ul&gt;
&lt;li&gt;This function initializes the Cortecs client and starts an LLM worker for the desired model. We'll call it from the &lt;code&gt;ExampleCrew&lt;/code&gt; class's &lt;code&gt;__init__&lt;/code&gt; method so it runs when the crew starts.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;stop_and_delete_llm()&lt;/code&gt;

&lt;ul&gt;
&lt;li&gt;To maximize cost efficiency, this function shuts down our resources when the crew completes its execution. We'll decorate it with the &lt;code&gt;@after_kickoff&lt;/code&gt; hook to ensure proper cleanup.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here's the modified &lt;code&gt;ExampleCrew&lt;/code&gt; class implementation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;crewai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Crew&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Process&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;crewai.project&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;CrewBase&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;crew&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;after_kickoff&lt;/span&gt; &lt;span class="c1"&gt;#Add after_kickoff import
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;cortecs_py&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Cortecs&lt;/span&gt;

&lt;span class="nd"&gt;@CrewBase&lt;/span&gt;
&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ExampleCrew&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start_llm&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;start_llm&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cortecs_client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Cortecs&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;MODEL&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;removeprefix&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;openai/&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Starting model &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;instance&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cortecs_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ensure_instance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;OPENAI_API_BASE&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;base_url&lt;/span&gt;

    &lt;span class="nd"&gt;@after_kickoff&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;stop_and_delete_llm&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cortecs_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;instance_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cortecs_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;delete&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;instance_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Model &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; stopped and deleted.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;#The rest of the ExampleCrew stays the same...
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can further customize your crew by modifying &lt;code&gt;agents.yaml&lt;/code&gt;, &lt;code&gt;tasks.yaml&lt;/code&gt;, and &lt;code&gt;crew.py&lt;/code&gt;, or by following additional examples in the &lt;a href="https://docs.crewai.com/introduction" rel="noopener noreferrer"&gt;CrewAI docs&lt;/a&gt;.&lt;/p&gt;
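&lt;p&gt;For orientation, an entry in &lt;code&gt;agents.yaml&lt;/code&gt; typically defines a role, goal, and backstory. The snippet below is a hypothetical sketch of that shape; the scaffold generated by &lt;code&gt;crewai create&lt;/code&gt; already contains working examples:&lt;/p&gt;

```yaml
researcher:
  role: Senior Researcher on {topic}
  goal: Uncover the latest developments in {topic}
  backstory: A careful analyst who verifies sources before drawing conclusions.
```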

&lt;p&gt;Before running our crew, we will add the &lt;code&gt;cortecs-py&lt;/code&gt; dependency to our pyproject file in &lt;code&gt;example_crew/pyproject.toml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="py"&gt;dependencies&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="py"&gt;"crewai[tools]&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.100&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="err"&gt;&amp;lt;&lt;/span&gt;&lt;span class="mf"&gt;1.0&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="s"&gt;",&lt;/span&gt;&lt;span class="err"&gt;
&lt;/span&gt;    &lt;span class="py"&gt;"cortecs-py&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="s"&gt;" #Add this line&lt;/span&gt;&lt;span class="err"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Running Your Crew
&lt;/h2&gt;

&lt;p&gt;To run our crew, we will first navigate to the project directory (&lt;code&gt;example_crew/&lt;/code&gt;) and install the dependencies by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;crewai &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we can execute the crew with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;crewai run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll see that an LLM worker instance starts up. Once it's ready, the crew executes its task. Afterward, the instance automatically stops and gets deleted.&lt;/p&gt;

&lt;p&gt;The generated report will look similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Comprehensive Report on Advances in Large Language Model (LLM) Technologies

## 1. Advanced Fine-Tuning Techniques

By 2025, significant advancements in fine-tuning techniques have marked a turning point for Large Language Models (LLMs). These improvements include few-shot and zero-shot learning, enabling models to perform new tasks with minimal task-specific data. Few-shot learning takes advantage of a minimal number of examples, allowing the model to generalize well across similar tasks. Zero-shot learning, on the other hand, lets the model tackle tasks without any task-specific training data. These techniques reduce dependency on extensive labeled datasets and expedite adaptation to diverse applications, offering flexibility and efficiency.

## 2. Multi-Modal Capabilities

LLMs have evolved to incorporate multi-modal data, effectively integrating information from text, images, video, and audio. This enhancement broadens their application across various sectors. In healthcare, multi-modal LLMs facilitate complex case studies by correlating clinical text with imagery and patient history. In autonomous systems, they enhance decision-making by combining sensory data with textual inputs. This synergy results in richer, more contextual insights, enabling more comprehensive understanding and interaction within environments.

...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we've explored how to build a multi-agent system using CrewAI and Cortecs LLM workers. We covered the fundamentals of LLM-based multi-agent systems, from their key components to practical implementation: setting up a development environment, dynamically provisioning LLM workers, and creating a functional crew that can efficiently handle complex tasks.&lt;/p&gt;

&lt;p&gt;To dive deeper into multi-agent systems, check out the &lt;a href="https://docs.crewai.com/introduction" rel="noopener noreferrer"&gt;CrewAI documentation&lt;/a&gt; and explore the &lt;a href="https://cortecs.ai" rel="noopener noreferrer"&gt;Cortecs platform&lt;/a&gt;. Happy building! 🚀✨&lt;/p&gt;

</description>
      <category>llm</category>
      <category>ai</category>
      <category>nlp</category>
      <category>cortecs</category>
    </item>
  </channel>
</rss>
