sumeshi_kun

Driving Ghidra Static Analysis with Local LLMs (LM Studio + GhidraMCP Setup)

Overview

I wanted to connect the Ghidra static analysis tool to a local LLM.
Since I prefer not to run the analysis environment directly on the host OS, I set up a dedicated VM for Ghidra and kept only the MCP host (LM Studio) on the host machine.

Here’s the rough architecture:

(diagram: Ghidra and the GhidraMCP bridge inside the VM, with LM Studio as the MCP host on the host OS)

Environment

A machine with mid-range specs, good enough to run fairly recent games. You can build it for around $800–900 USD.

CPU: 12th Gen Intel Core i5-12400F
GPU: NVIDIA GeForce RTX 3060
RAM: 32GB

Preparation Steps

  1. Install VMware Workstation on the host OS.
  2. Set up a Windows VM (the analysis environment).
  3. Install JDK in the VM (required for Ghidra).
  4. Install Python 3.x in the VM (for GhidraMCP).
  5. Inside the VM, run pip install mcp requests (a quick import check after this list confirms the install).
  6. Open TCP/8081 in the VM’s Windows Firewall.
  7. Configure the VM’s network to Host-only mode.

Setting Up the Guest OS

Download GhidraMCP

Download the latest release. I used GhidraMCP 1.4:
https://github.com/LaurieWired/GhidraMCP/releases

The release notes state:

Add support for Ghidra 11.3.2

So make sure the Ghidra version matches. Unzip after downloading.

Download Ghidra

Get version 11.3.2 here:
https://github.com/NationalSecurityAgency/ghidra/releases

Unzip it as well.

Install the GhidraMCP Extension

Follow the instructions in the README:
https://github.com/LaurieWired/GhidraMCP

  1. Run Ghidra.
  2. Go to File → Install Extensions.
  3. Click + and select the GhidraMCP zip you downloaded (GhidraMCP-1-4.zip in my case).
  4. Restart Ghidra.
  5. Enable GhidraMCPPlugin at File → Configure → Developer (see the port check below).
  6. (Optional) Configure port settings at Edit → Tool Options → GhidraMCP HTTP Server.
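
Before starting the bridge, it's worth confirming the plugin's embedded HTTP server is actually listening. A minimal sketch, assuming the default port 8080; any HTTP response at all (even an error status) means the server is up:

# Run inside the VM: check that GhidraMCP's HTTP server is listening.
import requests

try:
    resp = requests.get("http://127.0.0.1:8080/", timeout=3)
    print("Ghidra HTTP server reachable, status:", resp.status_code)
except requests.exceptions.ConnectionError:
    print("Nothing on port 8080 - is GhidraMCPPlugin enabled?")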

Run bridge_mcp_ghidra.py

Open Command Prompt in the folder containing the script and start the bridge (--mcp-host 0.0.0.0 exposes the MCP endpoint to the host OS; --ghidra-server points at the GhidraMCP plugin's local HTTP server):

python bridge_mcp_ghidra.py --transport sse --mcp-host 0.0.0.0 --mcp-port 8081 --ghidra-server http://127.0.0.1:8080/

Sample output:

Connecting to Ghidra server at http://127.0.0.1:8080/
Starting MCP server on http://0.0.0.0:8081/sse
Using transport: sse
INFO: Started server process [7908]
INFO: Waiting for application startup.
INFO: Application startup complete.
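
Before moving to the host OS, you can confirm the SSE endpoint responds from inside the VM. A minimal sketch; the SSE stream stays open, so we only look at the response headers:

# Run inside the VM: confirm the bridge's SSE endpoint answers.
import requests

resp = requests.get("http://127.0.0.1:8081/sse", stream=True, timeout=5)
print("SSE endpoint status:", resp.status_code)  # expect 200
resp.close()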

Preparing the Host OS

Install LM Studio: https://lmstudio.ai/
Setup is straightforward—just follow the installer.

LM Studio is now free for commercial use, which is a nice bonus.
It suggested gpt-oss-20b, so I installed that model.

Edit mcp.json

(screenshot: the Edit mcp.json entry in the LM Studio sidebar)

Open Edit mcp.json from the sidebar and configure it like this (replace 192.168.xxx.xxx with your VM's IP address, which you can find by running ipconfig inside the VM):

{
  "mcpServers": {
    "ghidra": {
      "url": "http://192.168.xxx.xxx:8081/sse"
    }
  }
}

After saving, enable GhidraMCP in LM Studio.
If successful, the tool list will appear like this:

(screenshot: the GhidraMCP tools listed in LM Studio)

If it fails, check host ↔ guest connectivity and firewall settings.
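
You can also sanity-check the bridge without LM Studio by talking to it directly with the mcp Python package (pip install mcp on the host as well). A minimal sketch using the SDK's SSE client, assuming your VM's host-only IP in place of 192.168.xxx.xxx:

# Run on the host OS: connect to the bridge over SSE and list the Ghidra tools.
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

async def main() -> None:
    # Substitute your VM's host-only IP address here.
    async with sse_client("http://192.168.xxx.xxx:8081/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name)

asyncio.run(main())

If the Ghidra tool names print, the MCP side is healthy and any remaining problem is on the LM Studio side.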


Analysis

Inside the Analysis VM

I compiled a simple Hello World program with tcc (e.g., tcc hello.c) to generate an EXE:

#include <stdio.h>

int main(int argc, char** argv){
    printf("Hello World.\n");
    return 0;
}

Import it into Ghidra as usual. A Non-Shared Project works fine.


On the LM Studio Side

Since MCP is now configured, I tried sending some basic queries from the chat.

Approve tool usage when prompted. Example interaction:

How does this binary look?
openai/gpt-oss-20b

list_strings
mcp/ghidra

list_functions
mcp/ghidra

Binary summary

Entry point: 00401038
Output string: "Hello World.\n"
DLLs used: msvcrt.dll → printf, exit, etc.
Main function: FUN_00401000 (calls printf)

Behavior
Starts at entry point (00401038), calls FUN_00401000,
prints "Hello World.\n", then exits.

Conclusion
A simple sample program. No malicious behavior detected.
Useful for learning/demo purposes but not an actual attack sample.

The results were surprisingly solid.


Closing Thoughts

The setup was easy, and I was impressed by how capable a local LLM can be when integrated with Ghidra.
I've heard this setup runs blazing fast on a MacBook Pro, so I'm tempted to pick one up.

That’s it!


Original post (Japanese): https://zenn.dev/sum3sh1/articles/96be8503049432
