DEV Community

Ada-Ihueze

How to Set Up Openclaw With LMStudio

We have all seen the buzz surrounding Openclaw, starting with its naming controversy and evolution: Clawdbot -> Moltbot -> Openclaw. Every setup video on YouTube hinges on hosted APIs, be it OpenAI, Anthropic, Google, or the rest. Calling those APIs from Openclaw is, and will remain, pretty expensive, so I decided to experiment with a local setup using LMStudio.

My hardware of choice is my Lenovo ThinkPad, which I reconfigured to run Linux instead of the Windows it came with. The first thing I did was install LMStudio. That was a little hard for me, since I am still finding my way around a full Linux OS rather than WSL, but I found a video that walked me through installing LMStudio.

Next was selecting a model. Given my hardware, I had to go with a quantized version of the GLM-4.7 Flash model. After downloading it, I tested it in the LMStudio chat; its response to my "hello" took 50.57 seconds, which was poor. But since I am only experimenting with Openclaw, why not.
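Before wiring anything into Openclaw, it is worth confirming that LMStudio's local server is actually running and serving the model. LMStudio exposes an OpenAI-compatible API (port 1234 is its default; the model id `glm-4.7-flash` matches my setup and may differ in yours), so a quick check against `/v1/models` covers both at once:

```shell
# LMStudio's local server speaks the OpenAI API on port 1234 by default.
# /v1/models lists whatever models are currently loaded.
if curl -s http://127.0.0.1:1234/v1/models | grep -q "glm-4.7-flash"; then
  echo "server up, model loaded"
else
  echo "server not reachable or model not loaded"
fi
```

If this fails, start the server from LMStudio's Developer tab (or adjust the port) before touching the Openclaw config.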

The next step was following the Openclaw docs to install and set it up. I installed Openclaw with:

curl -fsSL https://openclaw.bot/install.sh | bash

Following this, I selected manual configuration. Halfway through, I realized I kept skipping options because I wasn't sure what to make of them. By the end, a whole lot was missing: skills, model, model provider, token. I couldn't find where to add my local model, so I decided to edit the openclaw.json file directly. I made the following modifications:

{
  "meta": {
    "lastTouchedVersion": "2026.1.29",
    "lastTouchedAt": "2026-01-31T02:01:52.403Z"
  },
  "wizard": {
    "lastRunAt": "2026-01-31T02:01:52.399Z",
    "lastRunVersion": "2026.1.29",
    "lastRunCommand": "onboard",
    "lastRunMode": "local"
  },
  "models": {
    "providers": {
      "lmstudio": {
        "baseUrl": "http://127.0.0.1:1234/v1",
        "apiKey": "lm-studio",
        "api": "openai-responses",
        "models": [
          {
            "id": "glm-4.7-flash",
            "name": "GLM-4.7 Flash",
            "reasoning": true,
            "input": ["text"],
            "cost": {
              "input": 0,
              "output": 0
            },
            "contextWindow": 20000,
            "maxTokens": 8192
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "lmstudio/glm-4.7-flash"
      },
      "workspace": "/home/Ubuntu/.openclaw/workspace",
      "compaction": {
        "mode": "safeguard"
      },
      "maxConcurrent": 4,
      "subagents": {
        "maxConcurrent": 8
      }
    }
  },
  "messages": {
    "ackReactionScope": "group-mentions"
  },
  "commands": {
    "native": "auto",
    "nativeSkills": "auto"
  },
  "hooks": {
    "internal": {
      "enabled": true,
      "entries": {
        "session-memory": {
          "enabled": true
        }
      }
    }
  },
  "gateway": {
    "port": 18789,
    "bind": "loopback",
    "mode": "local",
    "auth": {
      "mode": "token",
      "token": "generate-your-token"
    },
    "tailscale": {
      "mode": "off",
      "resetOnExit": false
    }
  },
  "skills": {
    "install": {
      "nodeManager": "npm"
    }
  }
}
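Hand-editing openclaw.json makes it easy to leave a trailing comma or an unbalanced brace behind, and strict JSON parsers reject both. A quick sanity check with Python's built-in json.tool (the path assumes the default config location) catches that before Openclaw does:

```shell
# python3 -m json.tool is a strict JSON parser: it fails loudly on
# trailing commas and unbalanced braces, both easy to introduce by hand.
if python3 -m json.tool "$HOME/.openclaw/openclaw.json" > /dev/null 2>&1; then
  echo "openclaw.json parses cleanly"
else
  echo "openclaw.json has a JSON syntax error (or is missing)"
fi
```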

I generated the token by running:

openssl rand -hex 20
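That command prints 20 random bytes as 40 hex characters; the printed string is what replaces the "generate-your-token" placeholder under gateway.auth.token. As a one-liner that also confirms the length:

```shell
# 20 random bytes -> 40 hex characters, suitable for gateway.auth.token.
TOKEN=$(openssl rand -hex 20)
echo "token: $TOKEN (length: ${#TOKEN})"
```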

To be sure everything was wired up correctly, I ran

openclaw setup

and it returned

Config OK: ~/.openclaw/openclaw.json
Workspace OK: ~/.openclaw/workspace
Sessions: OK: ~/.openclaw/agents/main/sessions

Finally, I checked the gateway status:

openclaw gateway status

and it returned a bunch of information, among which was:

Listening: 127.0.0.1:18789
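As an extra check, independent of the Openclaw CLI, you can probe the gateway port directly. This sketch uses bash's built-in /dev/tcp pseudo-path, so it needs no extra tools; 18789 matches the gateway.port value in the config above:

```shell
# bash can open a raw TCP connection via the /dev/tcp pseudo-path;
# success means something is listening on the gateway port.
if (exec 3<>/dev/tcp/127.0.0.1/18789) 2>/dev/null; then
  echo "gateway port 18789 is open"
else
  echo "gateway port 18789 is closed"
fi
```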

Conclusion:

I am yet to do anything with Openclaw beyond setting it up; I will write more as I use it and stretch its capabilities. I am also using relatively new hardware with little to no personal information on it, so the security risk of giving it access to the system is minimal for me.

Are you building an AI system or agent? Is your company building AI agents and AI systems for internal use? Are you running a company that sells AI systems and agents to other companies? You can now stress test your AI before pushing it live; all you have to do is run:

zeroshot scan --target-url https://your-target-url --max-attacks 20

And your system will be stress-tested with 20 attack vectors to discover its weak points. It's not limited to 20 attacks; you can run up to 50, as there are 1000+ attack vectors across different AI system categories. Start testing for free at Zeroshot.

