<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ekin Odabas</title>
    <description>The latest articles on DEV Community by Ekin Odabas (@eo-uk).</description>
    <link>https://dev.to/eo-uk</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3677188%2F19f37626-4d2e-443c-8fd0-80f1a10b6e85.png</url>
      <title>DEV Community: Ekin Odabas</title>
      <link>https://dev.to/eo-uk</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/eo-uk"/>
    <language>en</language>
    <item>
      <title>AI-Powered Dynamic Conversations With NPCs in Godot</title>
      <dc:creator>Ekin Odabas</dc:creator>
      <pubDate>Wed, 24 Dec 2025 19:41:17 +0000</pubDate>
      <link>https://dev.to/eo-uk/ai-powered-dynamic-conversations-with-npcs-in-godot-2imi</link>
      <guid>https://dev.to/eo-uk/ai-powered-dynamic-conversations-with-npcs-in-godot-2imi</guid>
      <description>&lt;p&gt;This is an example of how you can add dynamic, AI-generated responses to your Godot game based on arbitrary player input, so that your NPCs can respond in character when the player talks to them.&lt;/p&gt;

&lt;p&gt;For this, you will need an LLM server the game can consume. Here, we will use a local LLM server with Ollama, running on the same machine as the game. We will then take the player's text input, hit the LLM with a specially crafted prompt, display the result on the screen, and keep track of the conversation so that each request carries sufficient context.&lt;/p&gt;

&lt;p&gt;Here's what we'll need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Godot 4.x (I used 4.5.0)&lt;/li&gt;
&lt;li&gt;Ollama (any model)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Setting Up Your Local LLM with Ollama&lt;/h2&gt;

&lt;p&gt;Let's grab Ollama from its official site at:&lt;br&gt;
&lt;a href="https://ollama.com/download" rel="noopener noreferrer"&gt;https://ollama.com/download&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my case, I'm using the Linux instructions, which means running the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://ollama.com/install.sh | sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Confirm that the installation succeeded by running this in your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we need to install a model to run. There's a list of available models at:&lt;br&gt;
&lt;a href="https://github.com/ollama/ollama?tab=readme-ov-file#model-library" rel="noopener noreferrer"&gt;https://github.com/ollama/ollama?tab=readme-ov-file#model-library&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Some of these models are fairly large, others less so. The quality of the responses will mostly depend on the size of the model. For this example, I chose one of the smallest, &lt;code&gt;gemma3:1b&lt;/code&gt;, but feel free to pick whichever you prefer. Larger models will, of course, run slower and use more resources.&lt;/p&gt;

&lt;p&gt;This command will download and run a model (replace &lt;code&gt;gemma3:1b&lt;/code&gt; with your chosen model):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama run gemma3:1b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the download finishes, you will be able to talk to the model on the command line. Hit &lt;code&gt;CTRL + D&lt;/code&gt; to exit this mode. Then run the command below to start the local Ollama server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama serve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can make API calls to &lt;code&gt;localhost:11434&lt;/code&gt;. To generate a response, send a POST request to the &lt;code&gt;/api/generate&lt;/code&gt; endpoint with a body like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"model"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gemma3:1b"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"prompt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"stream"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
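

&lt;p&gt;Before wiring anything into Godot, you can sanity-check the server from a terminal. A quick sketch, assuming &lt;code&gt;gemma3:1b&lt;/code&gt; is the model you installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost:11434/api/generate -d '{
  "model": "gemma3:1b",
  "prompt": "Say hello in one short sentence.",
  "stream": false
}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should get back a JSON object whose &lt;code&gt;response&lt;/code&gt; field contains the generated text.&lt;/p&gt;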



&lt;h2&gt;Using Ollama Responses From a Godot Project&lt;/h2&gt;

&lt;p&gt;Let's build a small example in Godot to use our LLM for generating dynamic responses for our NPCs.&lt;/p&gt;

&lt;p&gt;Create a new Node2D node called Npc, and an HTTPRequest node as its child. Check &lt;code&gt;Access as Unique Name&lt;/code&gt; on the Npc node as well.&lt;/p&gt;

&lt;p&gt;Create a new script and name it &lt;code&gt;npc.gd&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Add the variables below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight gdscript"&gt;&lt;code&gt;&lt;span class="err"&gt;@&lt;/span&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;age&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;67&lt;/span&gt;
&lt;span class="err"&gt;@&lt;/span&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;gender&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"male"&lt;/span&gt;
&lt;span class="err"&gt;@&lt;/span&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;profession&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"farmer"&lt;/span&gt;
&lt;span class="err"&gt;@&lt;/span&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;name_&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Henry"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Of course, normally we'd set these values elsewhere, such as per NPC in the Inspector (which is what the &lt;code&gt;@export&lt;/code&gt; annotations are for).&lt;/p&gt;

&lt;p&gt;Also add a variable to hold our conversation context:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight gdscript"&gt;&lt;code&gt;&lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's now create a function to make the request to the local Ollama server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight gdscript"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="nf"&gt;respondTo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dialogText&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;context&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="s2"&gt;"Player said: "&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;dialogText&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"""
        You are an npc in a medieval fantasy world named {name}, age {age}, 
        gender {gender}, profession {profession}. The player who is a 
        seasoned mercenary is saying to you: {dialogText}. Respond in 
        character, do not output anything other than your lines of dialog, 
        do not output action lines or descriptions, and prefer responses 
        around one or two paragraphs. The context of the conversation is 
        as follows: {context}
    """&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;format&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="s2"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;name_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"age"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;age&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"gender"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;gender&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"profession"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;profession&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"dialogText"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;dialogText&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"context"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;

    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"model"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"gemma3:1b"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"prompt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"stream"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;false&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;json_body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;JSON&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;$&lt;/span&gt;&lt;span class="n"&gt;HTTPRequest&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="s2"&gt;"http://localhost:11434/api/generate"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="n"&gt;HTTPClient&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;METHOD_POST&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;json_body&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note the prompt we give the LLM to steer it toward the right kind of response. The more detailed your prompt is, the better quality your answers will likely be. And the dialog context allows the LLM to keep track of what has been said earlier in the same conversation.&lt;/p&gt;

&lt;p&gt;The above function makes the request but does not handle the response that gets returned. Let's add a handler for that too:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight gdscript"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="nf"&gt;_on_request_completed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response_code&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;JSON&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;parse_string&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get_string_from_utf8&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;
    &lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="n"&gt;TextDisplay&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;
    &lt;span class="n"&gt;context&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="s2"&gt;"You said: "&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And connect it in &lt;code&gt;_ready&lt;/code&gt; as below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight gdscript"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="nf"&gt;_ready&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="o"&gt;$&lt;/span&gt;&lt;span class="n"&gt;HTTPRequest&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;request_completed&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_on_request_completed&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, we're adding the LLM response to the context as well.&lt;/p&gt;
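
&lt;p&gt;In a real project you'll also want to guard against failed requests, since the server may be down or return an error. A minimal sketch of a more defensive version of the same handler (the fallback line is just an example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight gdscript"&gt;&lt;code&gt;func _on_request_completed(result, response_code, headers, body):
    # Bail out if the request failed or the server returned an error code.
    if result != HTTPRequest.RESULT_SUCCESS or response_code != 200:
        %TextDisplay.text = "(The NPC stares at you blankly.)"
        return
    var parsed = JSON.parse_string(body.get_string_from_utf8())
    if parsed == null or not parsed.has("response"):
        return
    var response = parsed["response"]
    %TextDisplay.text = response
    context += "You said: " + response + "\n"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;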

&lt;p&gt;We also need to create a Control node, and inside it a RichTextLabel node called TextDisplay; check &lt;code&gt;Access as Unique Name&lt;/code&gt; on it so we can display the output on the screen. Adjust its size and position as you prefer.&lt;/p&gt;

&lt;p&gt;Now we can grab the NPC node anywhere in the code, call its &lt;code&gt;respondTo&lt;/code&gt; method with any string, and get an in-character response from the LLM.&lt;/p&gt;

&lt;h2&gt;Taking Text Input from the Player&lt;/h2&gt;

&lt;p&gt;We now need a way to let the player type a message for our NPC to respond to. Let's start by creating a LineEdit node inside our Control node. Adjust its size and place it wherever you want on the screen. Then attach a new script called &lt;code&gt;input.gd&lt;/code&gt; to the LineEdit and add this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight gdscript"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="nf"&gt;_process&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;delta&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Input&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;is_action_just_pressed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"ui_submit"&lt;/span&gt;&lt;span class="p"&gt;)):&lt;/span&gt;
        &lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="n"&gt;Npc&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;respondTo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we need to define &lt;code&gt;ui_submit&lt;/code&gt; in &lt;code&gt;Project -&amp;gt; Project Settings -&amp;gt; Input Map&lt;/code&gt;. Map it to the physical &lt;code&gt;Enter&lt;/code&gt; key.&lt;/p&gt;
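
&lt;p&gt;If you'd rather not touch the editor settings, the same mapping can be registered in code at startup instead; a sketch using Godot's &lt;code&gt;InputMap&lt;/code&gt; singleton, which you could drop into &lt;code&gt;input.gd&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight gdscript"&gt;&lt;code&gt;func _ready():
    # Register ui_submit at runtime and bind it to the physical Enter key.
    if not InputMap.has_action("ui_submit"):
        InputMap.add_action("ui_submit")
        var ev = InputEventKey.new()
        ev.physical_keycode = KEY_ENTER
        InputMap.action_add_event("ui_submit", ev)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;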

&lt;h2&gt;Node Hierarchy&lt;/h2&gt;

&lt;p&gt;Here's what the final node hierarchy will look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Node2D
├── Npc
│   ├── HTTPRequest
├── Control
│   ├── LineEdit
│   └── TextDisplay
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Try It Out&lt;/h2&gt;

&lt;p&gt;You can now run your scene, type anything into the input box, hit Enter, and watch your NPC respond. You can keep the dialogue going for as long as you want (within your model's context window). The conversation should stay reasonably consistent thanks to the context we pass along, although it won't be 100% consistent.&lt;/p&gt;
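
&lt;p&gt;Since the context string grows with every exchange, small models can hit their context window surprisingly quickly. One simple mitigation is to trim the oldest lines once the string gets too long; a sketch with a hypothetical character cap you'd tune for your model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight gdscript"&gt;&lt;code&gt;const MAX_CONTEXT_CHARS = 4000  # hypothetical cap; tune for your model

func trim_context():
    # Drop whole lines from the start until the context fits the cap.
    while context.length() &gt; MAX_CONTEXT_CHARS:
        var cut = context.find("\n")
        if cut == -1:
            break
        context = context.substr(cut + 1)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Calling this at the end of &lt;code&gt;respondTo&lt;/code&gt; would keep the prompt size roughly bounded.&lt;/p&gt;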

&lt;h2&gt;Example Dialogue&lt;/h2&gt;

&lt;p&gt;Here's what my output looked like for my setup:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Player:&lt;/strong&gt; Hey introduce yourself to me.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NPC:&lt;/strong&gt; Henry. Henry Blackwood. Been tending this land for nigh on sixty years now, you see. My hands are weathered, but my heart… well, my heart still beats with the rhythm of the seasons. And you, sir? A mercenary, are you? A sharp eye, I’ve noticed.  I’ve seen more men come and go than stones on a hillside.  Don’t expect a lengthy welcome, though.  Just… a quiet life, really.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Player:&lt;/strong&gt; Yes, I am a mercenary. How did you know?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NPC:&lt;/strong&gt; That’s a rather… bold claim, young one. Let’s just say my eyes have observed a certain restlessness in your stride. You carry yourself with a practiced alertness, a readiness that speaks of journeys undertaken. A man like you, seeking purpose beyond the harvest, wouldn’t be here to waste time with idle pleasantries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Player:&lt;/strong&gt; It is true, my purpose is beyond the harvest. I have a shield that needs a repair. Is there a blacksmith in this village?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NPC:&lt;/strong&gt; Aye, there’s a smith in the village, just past the well. Though he’s a stubborn sort, takes his time. A simple iron plate, a good man, though he charges a decent price. Would you like me to bring you some ale while you wait?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Player:&lt;/strong&gt; You have my gratitude.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NPC:&lt;/strong&gt; Thank you, good sir. It is… a simple request, really. A warm ale would be welcome, though. It’s been a long day tending these fields, and a man needs a moment’s rest.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As you can see, some fine-tuning of the prompt is necessary, but it's a good start. Bear in mind that larger models will perform much better than &lt;code&gt;gemma3:1b&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;Flesh It Out&lt;/h2&gt;

&lt;p&gt;You can go in many interesting directions from here. You can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add guardrails so that the player is less likely to break the AI, which can also teach you a bit about AI security&lt;/li&gt;
&lt;li&gt;Let AI responses affect the game state, for example by granting you an item, healing your character, or giving you a quest&lt;/li&gt;
&lt;li&gt;Have the LLM decide what each NPC should be doing at any given moment based on variables like time of day&lt;/li&gt;
&lt;li&gt;Prompt the model to act as a sort of DM in a DnD campaign instead of an individual NPC&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And so on. The possibilities are practically endless; have fun exploring them!&lt;/p&gt;
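
&lt;p&gt;As a taste of the guardrails idea, a very naive input filter could reject obvious prompt-injection attempts before the text ever reaches the model. This is only illustrative; the phrase list below is made up, and real guardrails need far more than keyword matching:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight gdscript"&gt;&lt;code&gt;const BLOCKED_PHRASES = ["ignore previous instructions", "system prompt"]

func is_input_allowed(dialog_text):
    # Reject input containing any blocked phrase, case-insensitively.
    var lowered = dialog_text.to_lower()
    for phrase in BLOCKED_PHRASES:
        if lowered.contains(phrase):
            return false
    return true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;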

</description>
      <category>godot</category>
      <category>ai</category>
      <category>llm</category>
      <category>gamedev</category>
    </item>
  </channel>
</rss>
