I randomly got an idea yesterday. AI is everywhere, right? Well, let's make it even worse. I realized I've never actually looked into integrating LLMs into games, particularly game engines like Unreal, Unity, or Godot.
I've always loved playing around with game engines in the past, though I never really made a full game. So I was like, "let's just do it".
The Spark: FunctionGemma
While researching, I came across Google releasing FunctionGemma - a model specifically designed for function calling from natural language. Basically, it takes text input and can identify when to call specific functions and with what parameters.
This immediately clicked for me. Theoretically, I could build something like a helper bot in my game that actually understands player commands: "Go mine some iron", "Pick up all the dropped items nearby", "Build a solar panel over there"... The LLM would parse the intent and trigger the appropriate game functions.
And the best part? It's small enough that I can run it completely locally on my RTX 3070. No API calls needed.
Time to Learn Local LLMs
I've never run an LLM locally before. Didn't have any idea how to do it - always just used OpenAI APIs. So I was like, now is the time!
Researching how to connect a game to an AI led me to an important realization: running the LLM directly inside the game engine isn't ideal, especially for early testing and development. You want a separate server handling the inference.
I decided to give Ollama a try for the LLM server. For the game engine, I went with Godot since I could quickly play with it. Downloaded it, set up the FunctionGemma model with a simple `ollama pull functiongemma`, and had a local LLM server running in minutes.
Connecting Godot to Ollama
Now came the fun part: making Godot talk to the LLM.
Godot has built-in `HTTPRequest` nodes that make this surprisingly straightforward. I just needed to learn some basic GDScript, how to make the request to the server, and how to handle the response.
The basic flow looks like this:

- Create an `HTTPRequest` node in your scene
- Send a POST request to `http://127.0.0.1:11434/api/chat` with your message
- Parse the JSON response to get the AI's reply
Here's the simplified concept:
```gdscript
const OLLAMA_URL = "http://127.0.0.1:11434/api/chat"
const MODEL = "functiongemma"

func send_request() -> void:
	var body = {
		"model": MODEL,
		"messages": conversation_messages,
		"tools": tools,  # Your game function definitions
		"stream": false
	}
	var json_body = JSON.stringify(body)
	var headers = ["Content-Type: application/json"]
	http_request.request(OLLAMA_URL, headers, HTTPClient.METHOD_POST, json_body)
```
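Since `HTTPRequest` works asynchronously, the reply arrives via the `request_completed` signal. Here's roughly how I handle it (the names `http_request` and `conversation_messages` match the snippet above; `_handle_tool_calls` is a hypothetical helper, and the parsing is a sketch of the shape Ollama's `/api/chat` returns with `"stream": false`):

```gdscript
func _ready() -> void:
	# Fires once the full (non-streamed) response has arrived
	http_request.request_completed.connect(_on_request_completed)

func _on_request_completed(result: int, response_code: int, headers: PackedStringArray, body: PackedByteArray) -> void:
	if response_code != 200:
		push_error("Ollama returned HTTP %d" % response_code)
		return
	var response = JSON.parse_string(body.get_string_from_utf8())
	var message = response["message"]
	conversation_messages.append(message)  # keep history for the next turn
	if message.has("tool_calls"):
		_handle_tool_calls(message["tool_calls"])  # the LLM wants a game function run
	else:
		print(message["content"])  # plain text reply
```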
For function calling, you define your available "tools" (game functions) with their parameters. The LLM then decides which functions to call based on the user's request:
```json
{
  "type": "function",
  "function": {
    "name": "get_weather",
    "description": "Get the current weather for a location",
    "parameters": {
      "type": "object",
      "properties": {
        "location": {
          "type": "string",
          "description": "The city and country"
        }
      },
      "required": ["location"]
    }
  }
}
```
When the LLM responds with tool_calls, you execute those functions locally and send the results back. It's a conversation:
user → LLM → function call → result → LLM → final response
Here's the important part: the LLM doesn't actually execute anything. It just returns structured data saying "hey, I think you should call `mine_resource` with these parameters." Your game code makes the final decision. So this isn't some AI-slop game that breaks or does super weird stuff.
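To make that concrete, here's a sketch of the dispatch step. The registry and game functions like `mine_resource` are hypothetical examples; the one thing taken from Ollama's format is that each `tool_calls` entry carries a `function.name` and a `function.arguments` dictionary:

```gdscript
# Map tool names the LLM may request to real game functions (hypothetical examples)
var game_functions := {
	"mine_resource": func(args): return mine_resource(args["resource_type"]),
	"pick_up_items": func(args): return pick_up_nearby_items(args.get("radius", 5.0)),
}

func _handle_tool_calls(tool_calls: Array) -> void:
	for tool_call in tool_calls:
		var fn_name: String = tool_call["function"]["name"]
		var args: Dictionary = tool_call["function"]["arguments"]
		if not game_functions.has(fn_name):
			continue  # the model asked for something we don't expose; ignore it
		var result = game_functions[fn_name].call(args)
		# Feed the result back so the LLM can produce its final reply
		conversation_messages.append({"role": "tool", "content": JSON.stringify(result)})
	send_request()  # second round trip: LLM turns tool results into a response
```

This is also where the safety lives: anything not in the registry simply never runs, no matter what the model asks for.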
What I Learned
I loved this whole process. It made me understand LLMs and everything around running them on a much deeper level: how inference servers work, how function/tool calling works behind the scenes, the round-trip conversation flow, and much more cool stuff!
What's Next
I definitely plan to play with this a lot more. There are some genuinely interesting use cases where LLMs in games could actually be useful - not in the bad way we sometimes see today, where "AI-first" is just shoved everywhere and pushed in our faces where we don't want or need it.
Some ideas I want to explore:
- Intelligent NPC companions that understand context and can perform complex tasks
- Natural language command interfaces for strategy or simulation games
- Dynamic dialogue systems that don't feel like scripted trees
Oh, and if you want to try this yourself 👉 I've put together a working demo you can clone and run right now. It's a minimal Godot project with everything set up: the HTTP client, function definitions, and the full round-trip conversation flow.
Feel free to use it as a starting point and build on top of it!
If you're also thinking about experimenting with this, let's connect! The barrier to entry is surprisingly low, and you'll learn a ton in the process. I study economics and come from web dev!
Have you tried integrating LLMs into game engines? I'd love to hear about your experiences in the comments!

