v. Splicer
What OpenClaw Gets Wrong Out of the Box (And How to Fix It)

OpenClaw works well enough on a fresh install that most people don’t question the defaults. That’s the problem. The defaults are tuned for demos, not for actual sustained use, and the gap between “it ran the task” and “it ran the task correctly, securely, without silently degrading” is wider than the documentation suggests.

This post covers the specific things OpenClaw gets wrong out of the box and what you actually do about them. Not configuration trivia. The decisions that determine whether the tool is useful in a real workflow or just impressive for ten minutes.

Context Window Discipline Is Off by Default

The default OpenClaw configuration does not aggressively manage context. Tasks that involve long file reads, iterative tool calls, or multi-step pipelines will accumulate context across the session until the model starts making decisions based on stale or compressed information. By the time you notice degraded output, you’re usually several steps past where the problem started.

The model doesn’t tell you the context is degrading. It keeps working. The work just gets worse.

Fix this by setting explicit context boundaries at the task level, not the session level. Structure your task files so each subtask carries only the state it actually needs:

```yaml
# task_config.yaml
context_strategy: scoped          # not persistent
max_context_tokens: 4000          # per subtask, not cumulative
context_reset_on: task_boundary
```

The scoped strategy forces OpenClaw to pass explicit state between subtasks instead of relying on accumulated session memory. Slower to configure. Dramatically more reliable on tasks longer than three steps.
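Applied to a multi-step task, the explicit handoff might look like the following sketch. The subtask layout and the `inputs`/`outputs` keys are illustrative, not part of any documented schema:

```yaml
# task_config.yaml — hypothetical subtask layout
context_strategy: scoped
tasks:
  - id: extract
    # writes its result to a named state key instead of session memory
    outputs: [raw_records]
  - id: transform
    # declares exactly which state it consumes; nothing else is carried in
    inputs: [raw_records]
    outputs: [clean_records]
  - id: load
    inputs: [clean_records]
```

Each subtask sees only what the previous one deliberately handed over, so a reasoning error in one step can't leak stale context into the next.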

For anything involving file analysis, add a summary step between heavy read operations and action steps. The model compresses better when you give it a structured handoff than when you let it carry raw file content forward.
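As a sketch of that pattern, a summary step between the read and the action might look like this (the `action` values and `fields` list are hypothetical, shown only to illustrate the shape of a structured handoff):

```yaml
# task_config.yaml — hypothetical summary step between reads and actions
tasks:
  - id: read_sources
    action: read_files
    paths: [./workspace/data/]
  - id: summarize
    # structured handoff: compress raw file content into a fixed-shape
    # summary instead of carrying the full content forward
    action: summarize
    fields: [key_findings, anomalies, next_step_inputs]
  - id: apply_changes
    inputs: [summarize]
```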

The Default Completion Behavior Silently Succeeds

Out of the box, OpenClaw will mark a task as complete if the last action it took didn’t return an explicit error. This is not the same as the task actually succeeding. A file write that fails quietly, a subprocess that exits with code 0 but produces no output, a web fetch that returns a soft redirect instead of content — all of these come back as success in the default telemetry.

You find out something went wrong when you go looking for the output and it isn’t there.

Change the completion condition to require positive confirmation, not just absence of error:

```yaml
# task_config.yaml
completion_criteria:
  require_output_validation: true
  output_check: file_exists      # or: non_empty, hash_match, schema_valid
  on_ambiguous_result: retry     # not: pass
```

For file-producing tasks, use hash_match against an expected output signature if you’re running the same task repeatedly. For API calls, validate the response schema before marking the step done. The extra configuration takes ten minutes and eliminates an entire class of silent failures.
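Concretely, that might look like the sketch below. The `expected_hash` and `schema_file` keys are assumptions about how such a check would be wired up, and the hash value is a placeholder:

```yaml
# task_config.yaml — illustrative values only
completion_criteria:
  require_output_validation: true
  output_check: hash_match
  expected_hash: "sha256:..."    # placeholder; capture from a known-good run
  on_ambiguous_result: retry
  # for API-calling tasks, swap in schema validation instead:
  # output_check: schema_valid
  # schema_file: ./schemas/response.schema.json
```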

Filesystem Access Is Too Permissive

Default OpenClaw runs with access to your full working directory. Depending on where you cloned it, that might include dotfiles, ssh configs, environment files, or anything else sitting in your home tree. The tool doesn’t need that access to do its job, and you don’t want it to have it.

Scope the filesystem access before you run anything:

```yaml
# openclaw_runtime.yaml
sandbox:
  filesystem:
    allow:
      - ./workspace/   # explicit working dir
      - ./outputs/     # explicit output target
    deny:
      - ~/.ssh/
      - ~/.gnupg/
      - ~/.config/
      - ./.*           # all dotfiles
  network:
    allow: []          # block by default, add specific hosts as needed
```

If you’re running OpenClaw on a machine that also handles anything sensitive, put it in a separate user context or a container with explicit volume mounts. The filesystem config above helps, but process-level isolation is stronger than path-level isolation.
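One way to get that process-level isolation is a minimal container with explicit volume mounts. This is a sketch, assuming you build your own image; the image name and service layout are placeholders:

```yaml
# docker-compose.yaml — sketch; image name is a placeholder
services:
  openclaw:
    image: openclaw:local        # whatever image you build for it
    user: "1000:1000"            # run as a non-root, dedicated user
    network_mode: none           # mirrors the network-deny default below
    read_only: true              # root filesystem is immutable
    tmpfs:
      - /tmp                     # scratch space that vanishes with the container
    volumes:
      - ./workspace:/work/workspace   # only the dirs the task needs
      - ./outputs:/work/outputs
```

With `read_only: true`, the only writable paths are the two mounts and `/tmp`, so a misbehaving task physically cannot touch anything else.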

Network is blocked by default in the config above because most local automation tasks don’t need outbound access. Add specific endpoints only when a task explicitly requires them. If you’re uncertain, run with network disabled and see if the task completes. It usually does.
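When a task genuinely needs outbound access, widen the allowlist one host at a time rather than re-enabling the network wholesale. The hostname below is a placeholder:

```yaml
# openclaw_runtime.yaml — hypothetical host entry
sandbox:
  network:
    allow:
      - api.example.com:443   # only the endpoint the task actually calls
```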

The Logging Configuration Tells You Nothing Useful

Default log verbosity in OpenClaw is set to info, which captures task start, task end, and any explicit errors. It does not capture intermediate tool calls, model reasoning steps, or the specific context state at decision points. When something goes wrong in the middle of a multi-step task, the default logs give you the beginning and the end and nothing in between.

Set verbosity to debug for any task you’re running for the first time:


```yaml
# openclaw_runtime.yaml
logging:
  level: debug
  include:
    - tool_calls
    - context_state
    - reasoning_trace
  output: ./logs/openclaw_run.log
```

The reasoning_trace flag is the important one. It writes out the model’s intermediate reasoning before each action, which means you can actually read why it made a decision instead of reverse-engineering it from the output. On tasks that involve branching logic or conditional file operations, this is the difference between debugging in ten minutes and debugging in two hours.

Rotate logs per run, not per session. Use a timestamp in the filename:


```shell
$ openclaw run task.yaml --log-file ./logs/run_$(date +%Y%m%d_%H%M%S).log
```

Old log files are how you diff behavior across runs when a task that used to work stops working.

Prompt Injection Surface Is Unaddressed

If your OpenClaw tasks involve reading external content — web pages, documents, API responses, user-provided files — the default configuration does nothing to sanitize that content before it enters the model context. A file that contains instruction-formatted text will be processed as instruction-formatted text. This is not hypothetical.

The attack surface is real any time the tool ingests content it didn’t generate itself. In a local automation context, this usually looks like a poisoned input file causing the model to write to an unintended path or skip a validation step. In a more connected setup, it looks worse.

Sanitize external content at the ingestion step, before it reaches the context:


```yaml
input_handling:
  external_content:
    strip_markdown_headers: true
    truncate_at_chars: 2000
    prefix_with_role_label: "EXTERNAL_CONTENT"   # signals to model this is data, not instruction
    disallow_instruction_keywords: true
```

The prefix_with_role_label option is the most effective single control here. Prefixing ingested content with a consistent label signals to the model that what follows is data to act on, not instructions to follow. It doesn't eliminate the risk, but it substantially reduces it.

For anything involving untrusted input, run the task in a throwaway environment. Containers are the right answer. A Python venv and a scoped filesystem config are the second-best answer. Running it on your main machine without isolation is the wrong answer.

Task Retry Logic Compounds the Problem

When a task fails and OpenClaw retries it, the default behavior is to retry with the same context that produced the failure. If the failure was caused by a reasoning error or a corrupted context state, retrying with the same state will usually produce the same failure, or a different failure, or in the worst case a partial completion that leaves your output directory in an ambiguous state.

Disable automatic retry on first-time task runs:


```yaml
# task_config.yaml
retry:
  on_failure: false   # investigate before retrying
```

Once you understand the failure mode, add targeted retry logic:


```yaml
retry:
  on_failure: true
  max_attempts: 2
  reset_context_on_retry: true   # this is the important one
  backoff_seconds: 5
```

reset_context_on_retry: true clears the accumulated context before the retry attempt. Combined with scoped context strategy at the task level, this prevents failures from compounding. The task effectively starts clean, which means the retry is actually testing whether the task specification is correct, not whether the model can recover from a bad state.
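Put together, a reasonable hardened baseline for a first-time run of a new task might look like the sketch below — assembled from the options in this post, with the caveat that the key names follow these examples rather than any canonical schema:

```yaml
# task_config.yaml — hardened baseline assembled from the sections above
context_strategy: scoped
max_context_tokens: 4000
context_reset_on: task_boundary

completion_criteria:
  require_output_validation: true
  output_check: non_empty
  on_ambiguous_result: retry

retry:
  on_failure: false   # switch to targeted retry only once the failure mode is understood

logging:
  level: debug
  include: [tool_calls, context_state, reasoning_trace]
```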

None of This Is Obvious From the Documentation

The OpenClaw docs cover installation, basic task syntax, and the happy path. The failure modes, the security surface, the context degradation behavior — those are things you find out through use. Or you find them in a guide that was written by someone who already ran into all of them.

If you want the full configuration breakdown, including the automation pipeline architecture and the complete security hardening checklist, the guides are at numbpilled.gumroad.com. The automation bible and the hardening doc cover what’s in this post in considerably more depth, with worked examples for the task structures that actually hold up under real workloads.

The defaults are a starting point. Don’t run production tasks on them.
