Mak Sò
🧠 Minsky’s six memory types as OrKa preset memory.

I hate cargo-cult “memory”: shovel everything into a vector DB and call it cognition. No. If you want modular cognition and predictable behavior, memory needs intent. OrKa’s v0.9.2 presets do exactly that by mapping Marvin Minsky’s six memory types to operation-aware configurations. Same preset, different defaults for read vs write. That single idea kills 30-line YAML blobs and the footguns that come with them.

What “operation-aware” really means

Every memory agent declares an operation: read or write. The preset detects the operation and applies tuned defaults for that path, so a single episodic preset behaves one way when you retrieve context and another when you persist a conversation. That keeps the config human-readable and your graphs clean.

# Read with episodic preset
- id: memory_search
  type: memory
  memory_preset: episodic
  config:
    operation: read
  namespace: conversations

# Write with the same preset
- id: memory_store
  type: memory
  memory_preset: episodic
  config:
    operation: write
  namespace: conversations

Under the hood this flips similarity thresholds, temporal weighting, vector settings, and indexing parameters without you hand-tuning every agent.
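
To make that concrete, here is the shape of the idea in plain Python. The numbers are illustrative, not OrKa’s actual defaults; pull the real values with get_memory_preset if you need them.

# Illustrative only: the shape of an operation-aware preset.
# These numbers are made up; OrKa's real defaults live in the preset docs.
EPISODIC = {
    "read": {
        "limit": 10,
        "similarity_threshold": 0.6,       # looser match for recall
        "enable_temporal_ranking": True,   # recent turns rank higher
    },
    "write": {
        "vector": True,                    # index for later retrieval
        "default_long_term_hours": 168,    # about one week, matching the episodic range
    },
}

def resolve(preset: dict, operation: str) -> dict:
    """Pick the tuned defaults for the declared operation."""
    return preset[operation]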

Minsky’s six, translated to knobs that matter

You do not need a lecture. You need a mapping you can ship with. Here is the short version of each preset and what changes between read and write. The durations and defaults below come from the preset docs and agent guide.

1. sensory
Use for real-time signals. Think IoT, telemetry, short-lived buffers.
Read: tiny result set, very high precision.
Write: skip heavy indexing to keep ingestion hot.
Duration: about 15 minutes.

2. working
Use for active sessions and temporary calculations.
Read: context-aware search with session bias.
Write: enable vector indexing but keep it volatile.
Duration: 2 to 8 hours.

3. episodic
Use for conversations and interaction history.
Read: conversational retrieval with temporal ranking.
Write: rich metadata and conversation-oriented indexing.
Duration: 1 day to 1 week.

4. semantic
Use for facts and documentation.
Read: knowledge matching with no time bias.
Write: long-term indexing tuned for recall.
Duration: 3 days to 90 days.

5. procedural
Use for workflows and skills.
Read: pattern-recognition focus.
Write: process-oriented storage.
Duration: 1 week to 6 months.

6. meta
Use for system introspection and performance.
Read: high-precision analysis.
Write: quality-tuned indexing.
Duration: 2 days to 1 year.

If you remember nothing else: preset names are cognitive, the applied defaults are operational. That is the point.

Why presets beat hand-rolled configs

Before presets, you wrote 30 to 50 lines per agent: decay rules, importance multipliers, vector flags, field names, temporal weights, context weights. Easy to drift, easy to break. Presets collapse this to one line while preserving the ability to override single values when you genuinely need to. The docs show the before-and-after with a ridiculous manual block next to the clean preset version. Use the clean one.

from orka.memory.presets import get_memory_preset, merge_preset_with_config

# Inspect the raw episodic defaults
base = get_memory_preset("episodic")

# Apply the preset but override one value: 240 hours is 10 days of long-term retention
final = merge_preset_with_config("episodic", {"default_long_term_hours": 240})

That is how you keep intent first and keep your YAML sane.

Backend choice and the honest performance path

Run RedisStack for production. You want HNSW vector search, sub-millisecond lookups, and proper monitoring commands. You can limp along on basic Redis for dev, but you lose vector indexing and speed. Pick the right tool. The backend guide is blunt about this, with exact env vars, Docker snippets, and FT.INFO checks.
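
If you want a starting point, here is a minimal RedisStack service as a compose file. The image is the official one; how OrKa finds it (host, port, env vars) is deployment-specific, so treat the wiring comment as an assumption and check the backend guide.

# docker-compose.yml: minimal RedisStack for local or prod-like runs
services:
  redis-stack:
    image: redis/redis-stack:latest   # official image, ships HNSW vector search
    ports:
      - "6379:6379"                   # point OrKa at redis://localhost:6379 (exact env vars are in the backend guide)

Once it is up, run FT.INFO against your index to confirm vector search is live before you trust any benchmark.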

Patterns that actually work

Conversational memory
Set the orchestrator preset to episodic, use an episodic writer, and pair it with an episodic reader that applies temporal ranking over the last N turns. It feels obvious because it is. The agent guide gives a clean pattern with limit, similarity threshold, and context toggles.

Knowledge capture
Run a semantic writer behind your fact extractor. Keep your conversation writer separate. Different presets. Different lifecycles. Cleaner behavior. The preset doc even shows mixing episodic, semantic, and meta cleanly in one graph.
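
A sketch of that split, using the same agent shape as above; ids and namespaces are illustrative:

# Facts go to semantic memory, conversation turns to episodic.
# Separate presets, separate namespaces, separate lifecycles.
- id: fact_writer
  type: memory
  memory_preset: semantic
  config:
    operation: write
  namespace: knowledge

- id: conversation_writer
  type: memory
  memory_preset: episodic
  config:
    operation: write
  namespace: conversations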

System self-awareness
Meta preset for performance logs and health. Read with higher precision, write with quality indexing. Useful for trace explainability and post-mortems.

Decay, importance, and lifecycle

Memory that never forgets is a liability. Set short-term and long-term windows. Boost critical or frequently accessed items. Decay debug spam quickly. The system guide shows a sane starting point for decay and importance rules, plus CLI commands for stats, cleanup, and watch. Ship those defaults, then tune.
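
As a sketch, a decay block could look like the following. The only key the post itself shows is default_long_term_hours; the other names follow the same convention but are assumptions, so verify them against the system guide before shipping.

orchestrator:
  memory_config:
    decay:
      enabled: true
      default_short_term_hours: 4      # assumed key: working context fades within a session
      default_long_term_hours: 168     # shown in the preset API example; about one week
      importance_rules:                # assumed key: retention multipliers
        critical: 3.0                  # keep critical items around longer
        frequently_accessed: 2.0       # boost hot items
        debug: 0.2                     # let debug spam die fast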

Guardrails I expect you to add

Presets are only as good as validation. The docs expose listing and inspection functions. Use them in CI. Assert that preset names resolve and the effective config matches what your team expects. Run a smoke query against RedisStack with FT.INFO to catch missing indexes fast.
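
A minimal version of that in pytest style, assuming redis-py; the index name orka:mem is a placeholder for whatever your deployment actually creates:

import redis

from orka.memory.presets import get_memory_preset

def test_presets_resolve():
    # Every preset name your graphs reference should resolve without raising.
    for name in ("sensory", "working", "episodic", "semantic", "procedural", "meta"):
        assert get_memory_preset(name)

def test_vector_index_exists():
    # FT.INFO raises if the index is missing, which catches a silent fallback to basic Redis.
    r = redis.Redis(host="localhost", port=6379)
    r.execute_command("FT.INFO", "orka:mem")  # placeholder index name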

Minimal starter graph you can copy

orchestrator:
  id: assistant
  strategy: sequential
  memory_preset: episodic

agents:
  - id: conversation_reader
    type: memory
    memory_preset: episodic
    config:
      operation: read
    namespace: conversations
    params:
      limit: 5
      similarity_threshold: 0.6
      enable_temporal_ranking: true

  - id: respond
    type: builder
    prompt: |
      Using context from {{ previous_outputs.conversation_reader }}, answer the user.

  - id: conversation_writer
    type: memory
    memory_preset: episodic
    config:
      operation: write
    namespace: conversations

Swap in semantic for a knowledge agent, meta for system metrics, procedural for workflow learning. That is all you need to get real lift without wrecking your config.

Final take

Minsky’s categories give you a mental model that matches how systems behave in the wild. OrKa’s presets map that model to real parameters, so you stop fighting YAML and start shaping behavior. Operation-aware defaults are the killer feature here. You will cut noise, reduce drift, and make your orchestration explainable. If you care about modular cognition and memory-guided execution, this is the sane path.

If you ship this, measure retrieval lift, not vibes. Track hit rate, average similarity, latency, and answer quality before and after memory. The preset API gives you the stable surface to do that work.
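
Here is a tiny harness for that, in generic Python rather than any OrKa API; it assumes your reader returns results carrying a similarity score:

import time

def measure(query_fn, queries, threshold=0.6):
    """Track hit rate, average similarity, and latency for a retrieval function."""
    hits, sims, latencies = 0, [], []
    for q in queries:
        t0 = time.perf_counter()
        results = query_fn(q)                      # e.g. whatever wraps your memory reader
        latencies.append(time.perf_counter() - t0)
        if results:
            best = max(r["similarity"] for r in results)
            sims.append(best)
            hits += best >= threshold
    n = len(queries)
    return {
        "hit_rate": hits / n,
        "avg_similarity": sum(sims) / len(sims) if sims else 0.0,
        "avg_latency_s": sum(latencies) / n,
    }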
