<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Benjamin Consolvo</title>
    <description>The latest articles on DEV Community by Benjamin Consolvo (@bconsolvo).</description>
    <link>https://dev.to/bconsolvo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3219281%2F4881f1d5-3db0-46b6-91af-631cfc6112f3.jpg</url>
      <title>DEV Community: Benjamin Consolvo</title>
      <link>https://dev.to/bconsolvo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bconsolvo"/>
    <language>en</language>
    <item>
      <title>Deploying AI Agents Locally with Qwen3, Qwen-Agent, and Ollama</title>
      <dc:creator>Benjamin Consolvo</dc:creator>
      <pubDate>Wed, 28 May 2025 20:05:36 +0000</pubDate>
      <link>https://dev.to/bconsolvo/deploying-ai-agents-locally-with-qwen3-qwen-agent-and-ollama-1ddm</link>
      <guid>https://dev.to/bconsolvo/deploying-ai-agents-locally-with-qwen3-qwen-agent-and-ollama-1ddm</guid>
      <description>&lt;p&gt;&lt;em&gt;[Article originally posted on &lt;a href="https://medium.com/intel-tech/deploying-ai-agents-locally-with-qwen3-qwen-agent-and-ollama-cad452f20be5" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;]&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn1ytogu4y85mghrdwbh9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn1ytogu4y85mghrdwbh9.png" alt="Bear with a hat" width="800" height="800"&gt;&lt;/a&gt;Image generated by author. Prompt: "use your tools like my_image_gen to generate a bear with a hat".&lt;/p&gt;

&lt;p&gt;Ever wanted to run your own AI agent locally - without sending data to the cloud? As an AI software engineer at Intel, I've been exploring how to run open-source LLMs locally on AI PCs. With the smaller &lt;a href="https://huggingface.co/Qwen/Qwen3-8B" rel="noopener noreferrer"&gt;Qwen3&lt;/a&gt; models, it's entirely possible: they are compact enough to run on an AI PC yet powerful enough to call tools and handle real tasks. Even the smaller Qwen3 variants support tool calling, so you can build agentic workflows that look up live websites, call functions, and execute code. This guide walks through building your own agentic workflows with Qwen3, Qwen-Agent, and Ollama - without relying on the cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ollama Setup and Qwen3 Model Hosting
&lt;/h2&gt;

&lt;p&gt;To keep everything local and private, I used Ollama - a lightweight way to run open-source models right on your machine. Here's how I got Qwen3 running on my AI PC using WSL2 (Windows Subsystem for Linux).&lt;br&gt;
Install Ollama with the Linux command, taken from the &lt;a href="https://ollama.com/download/linux" rel="noopener noreferrer"&gt;Ollama website&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -fsSL https://ollama.com/install.sh | sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ollama makes it easy to host your model. After installing it, simply run&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ollama run qwen3:8b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;and the ~5.2 GB qwen3:8b model should download and start serving locally, by default at &lt;a href="http://localhost:11434/" rel="noopener noreferrer"&gt;http://localhost:11434/&lt;/a&gt;. We will use this address later when building the agents with Qwen-Agent.&lt;/p&gt;
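&lt;p&gt;To see what talking to that endpoint looks like, here is a minimal sketch of my own (not from the Qwen-Agent repo) of the request body an OpenAI-compatible client sends to the local server. The actual network call is left commented out so the snippet runs even when Ollama isn't up:&lt;/p&gt;

```python
import json

# Assumption: Ollama is serving on its default port; adjust if yours differs.
OLLAMA_BASE = "http://localhost:11434/v1"

def build_chat_request(model, prompt):
    """Build the JSON body for an OpenAI-compatible chat completion call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

payload = build_chat_request("qwen3:8b", "Reply with one word: hello")
print(json.dumps(payload, indent=2))

# To actually call the server (requires `ollama run qwen3:8b` in another shell):
# import urllib.request
# req = urllib.request.Request(
#     OLLAMA_BASE + "/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```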
&lt;h2&gt;
  
  
  Qwen-Agent Python Library
&lt;/h2&gt;

&lt;p&gt;After installing Ollama, I populated a requirements.txt file with the Qwen-Agent library,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;qwen-agent[gui,rag,code_interpreter,mcp]
qwen-agent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and installed from the command line with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Sample AI agent with Qwen3 using Qwen-Agent
&lt;/h2&gt;

&lt;p&gt;To build a sample AI agent with Qwen3, you can start from the code snippet in the &lt;a href="https://github.com/QwenLM/Qwen-Agent" rel="noopener noreferrer"&gt;Qwen-Agent GitHub repository&lt;/a&gt;. The only modifications I made were in the &lt;code&gt;llm_cfg&lt;/code&gt; - changing the model to &lt;code&gt;qwen3:8b&lt;/code&gt; and the model server to &lt;a href="http://localhost:11434/v1" rel="noopener noreferrer"&gt;http://localhost:11434/v1&lt;/a&gt; - plus supplying a PDF of a research paper, &lt;code&gt;Zheng2024_LargeLanguageModelsinDrugDiscovery.pdf&lt;/code&gt;, for the agent to read. I only made use of the custom &lt;code&gt;my_image_gen&lt;/code&gt; tool defined in the snippet to ask the LLM agent to generate an image, but feel free to experiment with your own Qwen-Agent workflow. This walkthrough shows how to create a simple AI agent that can generate an image from your request - entirely locally using Qwen3.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;#from https://github.com/QwenLM/Qwen-Agent
&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pprint&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;urllib.parse&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json5&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;qwen_agent.agents&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Assistant&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;qwen_agent.tools.base&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;BaseTool&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;register_tool&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;qwen_agent.utils.output_beautify&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;typewriter_print&lt;/span&gt;

&lt;span class="c1"&gt;# Step 1 (Optional): Add a custom tool named `my_image_gen`.
&lt;/span&gt;&lt;span class="nd"&gt;@register_tool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;my_image_gen&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MyImageGen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;BaseTool&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# The `description` tells the agent the functionality of this tool.
&lt;/span&gt;    &lt;span class="n"&gt;description&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;AI painting (image generation) service, input text description, and return the image URL drawn based on text information.&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
    &lt;span class="c1"&gt;# The `parameters` tell the agent what input parameters the tool has.
&lt;/span&gt;    &lt;span class="n"&gt;parameters&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;prompt&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;type&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;string&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;description&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Detailed description of the desired image content, in English&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;required&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;
    &lt;span class="p"&gt;}]&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# `params` are the arguments generated by the LLM agent.
&lt;/span&gt;        &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json5&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;loads&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;prompt&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;urllib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;quote&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;json5&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;image_url&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;https://image.pollinations.ai/prompt/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="n"&gt;ensure_ascii&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Step 2: Configure the LLM you are using.
&lt;/span&gt;&lt;span class="n"&gt;llm_cfg&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;# Use the model service provided by DashScope:
&lt;/span&gt;    &lt;span class="c1"&gt;# 'model': 'qwen-max-latest',
&lt;/span&gt;    &lt;span class="c1"&gt;# 'model_type': 'qwen_dashscope',
&lt;/span&gt;    &lt;span class="c1"&gt;# 'api_key': 'YOUR_DASHSCOPE_API_KEY',
&lt;/span&gt;    &lt;span class="c1"&gt;# It will use the `DASHSCOPE_API_KEY' environment variable if 'api_key' is not set here.
&lt;/span&gt;
    &lt;span class="c1"&gt;# Use a model service compatible with the OpenAI API, such as vLLM or Ollama:
&lt;/span&gt;    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;model&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;qwen3:8b&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="c1"&gt;# 'model_server': 'http://localhost:8000/v1',  # base_url, also known as api_base
&lt;/span&gt;    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;model_server&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;http://localhost:11434/v1&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# Ollama
&lt;/span&gt;    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;api_key&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;EMPTY&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

    &lt;span class="c1"&gt;# (Optional) LLM hyperparameters for generation:
&lt;/span&gt;    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;generate_cfg&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;top_p&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.8&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Step 3: Create an agent. Here we use the `Assistant` agent as an example, which is capable of using tools and reading files.
&lt;/span&gt;&lt;span class="n"&gt;system_instruction&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'''&lt;/span&gt;&lt;span class="s"&gt;After receiving the user&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s request, you should:
- first draw an image and obtain the image url,
- then run code `request.get(image_url)` to download the image,
- and finally select an image operation from the given document to process the image.
Please show the image using `plt.show()`.&lt;/span&gt;&lt;span class="sh"&gt;'''&lt;/span&gt;
&lt;span class="n"&gt;tools&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;my_image_gen&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;code_interpreter&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;  &lt;span class="c1"&gt;# `code_interpreter` is a built-in tool for executing code.
&lt;/span&gt;&lt;span class="n"&gt;files&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Zheng2024_LargeLanguageModelsinDrugDiscovery.pdf&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;  &lt;span class="c1"&gt;# Give the bot a PDF file to read.
&lt;/span&gt;&lt;span class="n"&gt;bot&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Assistant&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;llm_cfg&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;system_message&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;system_instruction&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;function_list&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;files&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;files&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Step 4: Run the agent as a chatbot.
&lt;/span&gt;&lt;span class="n"&gt;messages&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;  &lt;span class="c1"&gt;# This stores the chat history.
&lt;/span&gt;&lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# For example, enter the query "draw a dog and rotate it 90 degrees".
&lt;/span&gt;    &lt;span class="n"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;input&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;user query: &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="c1"&gt;# Append the user query to the chat history.
&lt;/span&gt;    &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="n"&gt;response_plain_text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;''&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;bot response:&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;bot&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="c1"&gt;# Streaming output.
&lt;/span&gt;        &lt;span class="n"&gt;response_plain_text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;typewriter_print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response_plain_text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="c1"&gt;# Append the bot responses to the chat history.
&lt;/span&gt;    &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;extend&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
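&lt;p&gt;The history handling in Step 4 is worth calling out: &lt;code&gt;bot.run&lt;/code&gt; yields successive snapshots of the full response, so only the final snapshot should be appended to &lt;code&gt;messages&lt;/code&gt;. The pattern can be sketched in isolation with a hypothetical &lt;code&gt;fake_bot_run&lt;/code&gt; standing in for &lt;code&gt;Assistant.run&lt;/code&gt;:&lt;/p&gt;

```python
def fake_bot_run(messages):
    # Hypothetical stand-in for bot.run(): Qwen-Agent streams by yielding
    # progressively larger snapshots of the assistant's reply.
    yield [{"role": "assistant", "content": "partial"}]
    yield [{"role": "assistant", "content": "full answer"}]

messages = [{"role": "user", "content": "draw a dog"}]
response = []
for response in fake_bot_run(messages):
    pass  # the real loop calls typewriter_print() here to stream output

# After the loop, `response` holds only the final snapshot, so extending
# the history once keeps it free of duplicated partial messages.
messages.extend(response)
print(messages[-1]["content"])  # full answer
```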



&lt;h2&gt;
  
  
  Prompt and Response
&lt;/h2&gt;

&lt;p&gt;I provided a prompt to ask &lt;code&gt;qwen3:8b&lt;/code&gt; to generate an image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;user query: use your tools like `my_image_gen` to generate a bear with a hat
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is the response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;bot response:
2025-05-19 17:34:22,554 - doc_parser.py - 93 - INFO - Read chunked Zheng2024_LargeLanguageModelsinDrugDiscovery.pdf from cache.
2025-05-19 17:34:22,555 - base_search.py - 56 - INFO - all tokens: 50347
&amp;lt;think&amp;gt;
Okay, the user wants me to generate an image of a bear with a hat using the my_image_gen tool. Let me check the function's requirements. The function requires a prompt in English that describes the image details. I need to make sure the prompt is detailed enough. Let me think about the elements: a bear, a hat. Maybe specify the type of hat, like a top hat or a funny hat. Also, the setting - maybe a forest or a snowy mountain. Adding adjectives like "cute" or "cartoon-style" could help. Let me put it all together. The prompt should be something like "A cute cartoon bear wearing a top hat in a forest setting." That should work. Now, I'll format the tool call with this prompt.
&amp;lt;/think&amp;gt;

&lt;span class="o"&gt;[&lt;/span&gt;TOOL_CALL] my_image_gen
&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"prompt"&lt;/span&gt;: &lt;span class="s2"&gt;"A cute cartoon bear wearing a top hat in a forest setting"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;TOOL_RESPONSE] my_image_gen
&lt;span class="o"&gt;{&lt;/span&gt;image_url: &lt;span class="s2"&gt;"https://image.pollinations.ai/prompt/A%20cute%20cartoon%20bear%20wearing%20a%20top%20hat%20in%20a%20forest%20setting"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
&amp;lt;think&amp;gt;
Okay, &lt;span class="nb"&gt;let&lt;/span&gt;&lt;span class="s1"&gt;'s see what the user is asking for here. They provided an image URL of a cartoon bear wearing a top hat in a forest setting. The image was generated using the my_image_gen function, which takes a prompt and returns the URL.

First, I need to check if there'&lt;/span&gt;s any specific request related to this image. The user might want to know more about the image, like details about the bear, the forest, or maybe how to generate similar images. Alternatively, they might be testing &lt;span class="k"&gt;if &lt;/span&gt;I can process the image URL or describe tSince the user hasn&lt;span class="s1"&gt;'tSince the user hasn'&lt;/span&gt;t asked a direct question but provided the image, my role is to acknowledge the image and offer further assistance. I should confirm that the image was generated as expected and ask &lt;span class="k"&gt;if &lt;/span&gt;they need anything &lt;span class="k"&gt;else&lt;/span&gt;, like modifying the image, getting more images, or information about the generation process.

I should also make sure to keep the response friendly and open-ended, encouraging them to ask &lt;span class="k"&gt;for &lt;/span&gt;more &lt;span class="nb"&gt;help &lt;/span&gt;&lt;span class="k"&gt;if &lt;/span&gt;needed. No need to call any functions here since the user hasn&lt;span class="s1"&gt;'t requested an action, just shared theSince the user hasn'&lt;/span&gt;t asked a direct question but provided the image, my role is to acknowledge the image and offer further assistance. I should confirm that the image was generated as expected and ask &lt;span class="k"&gt;if &lt;/span&gt;they need anything &lt;span class="k"&gt;else&lt;/span&gt;, like modifying the image, getting more images, or information about the generation process.

I should also make sure to keep the response friendly and open-ended, encouraging them to ask &lt;span class="k"&gt;for &lt;/span&gt;more &lt;span class="nb"&gt;help &lt;/span&gt;&lt;span class="k"&gt;if &lt;/span&gt;needed. No need to call any functions here since the user hasn&lt;span class="s1"&gt;'t requested an action, just shared the image.
&amp;lt;/think&amp;gt;

![](https://image.pollinations.ai/prompt/A%20cute%20cartoon%20bear%20wearing%20a%20top%20hat%20in%20a%20forest%20setting)

This is an AI-generated cartoon image of a cute bear wearing a top hat in a forest setting. The image appears to be created using text-to-image generation technology. Would you like me to help you with anything related to this image or AI-generated content?
user query:
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see that the agent properly called the &lt;code&gt;my_image_gen&lt;/code&gt; tool; the generated image appears at the top of this article and is also hosted at the &lt;a href="https://image.pollinations.ai/prompt/A%20cute%20cartoon%20bear%20wearing%20a%20top%20hat%20in%20a%20forest%20setting" rel="noopener noreferrer"&gt;URL it cites near the end of its response&lt;/a&gt;.&lt;/p&gt;
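&lt;p&gt;That URL is simply the prompt percent-encoded and appended to the pollinations.ai base, exactly as the tool's &lt;code&gt;call&lt;/code&gt; method builds it with &lt;code&gt;urllib.parse.quote&lt;/code&gt;:&lt;/p&gt;

```python
import urllib.parse

def image_url_for(prompt):
    # Mirror the my_image_gen tool: percent-encode the prompt and append
    # it to the image-generation base URL (spaces become %20, etc.).
    return "https://image.pollinations.ai/prompt/" + urllib.parse.quote(prompt)

print(image_url_for("A cute cartoon bear wearing a top hat in a forest setting"))
# https://image.pollinations.ai/prompt/A%20cute%20cartoon%20bear%20wearing%20a%20top%20hat%20in%20a%20forest%20setting
```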

&lt;h2&gt;
  
  
  My Device
&lt;/h2&gt;

&lt;p&gt;The AI PC laptop used in my testing has an Intel Core Ultra 7 155H 3.80 GHz processor with 32 GB of RAM. &lt;/p&gt;

&lt;h2&gt;
  
  
  What We Built
&lt;/h2&gt;

&lt;p&gt;✅ Installed and ran Qwen3:8b locally using Ollama&lt;br&gt;
✅ Set up Qwen-Agent to build an AI assistant&lt;br&gt;
✅ Connected a tool to generate images using text prompts&lt;br&gt;
✅ Prompted the agent and got a real response - locally, on an Intel-powered AI PC&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;p&gt;Use the resources below to keep building your own agents locally and to connect with other developers.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check out all of the possible &lt;a href="https://ollama.com/library/qwen3" rel="noopener noreferrer"&gt;Qwen3 models hosted by Ollama&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;To build your own Qwen-based agents, visit the &lt;a href="https://github.com/QwenLM/Qwen-Agent" rel="noopener noreferrer"&gt;Qwen-Agent GitHub repository&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Learn more about the &lt;a href="https://www.intel.com/content/www/us/en/products/docs/processors/core-ultra/ai-pc.html" rel="noopener noreferrer"&gt;AI PC Powered by Intel&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;To chat with other developers, you can visit the &lt;a href="https://discord.com/invite/kfJ3NKEw5t" rel="noopener noreferrer"&gt;Intel DevHub Discord&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;For a more in-depth review and performance testing of Qwen3, you can visit the article &lt;a href="https://www.intel.com/content/www/us/en/developer/articles/technical/accelerate-qwen3-large-language-models.html" rel="noopener noreferrer"&gt;Intel® AI Solutions Accelerate Qwen3 Large Language Models&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Check out &lt;a href="https://www.intel.com/content/www/us/en/developer/topic-technology/artificial-intelligence/overview.html" rel="noopener noreferrer"&gt;Intel's AI developer resources here&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>qwen</category>
      <category>genai</category>
      <category>llm</category>
    </item>
  </channel>
</rss>
