<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: gibs-dev</title>
    <description>The latest articles on DEV Community by gibs-dev (@gibbrdev).</description>
    <link>https://dev.to/gibbrdev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3779727%2F13c88d20-b3c8-4bef-b07f-ebef3222a601.png</url>
      <title>DEV Community: gibs-dev</title>
      <link>https://dev.to/gibbrdev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gibbrdev"/>
    <language>en</language>
    <item>
      <title>We built an open Neo4j expert dataset — here's what we learned</title>
      <dc:creator>gibs-dev</dc:creator>
      <pubDate>Mon, 02 Mar 2026 23:34:04 +0000</pubDate>
      <link>https://dev.to/gibbrdev/we-built-an-open-neo4j-expert-dataset-heres-what-we-learned-2d50</link>
      <guid>https://dev.to/gibbrdev/we-built-an-open-neo4j-expert-dataset-heres-what-we-learned-2d50</guid>
      <description>&lt;p&gt;We're building &lt;a href="https://github.com/gibbrdev/gibsgraph" rel="noopener noreferrer"&gt;GibsGraph&lt;/a&gt;, an open-source tool that lets you query any Neo4j graph in plain English — or build new graphs from unstructured text. To generate good Cypher, the agent needs real Neo4j expertise. Not LLM training data. Actual documentation, patterns, and best practices.&lt;/p&gt;

&lt;p&gt;So we built a curated expert dataset from scratch. 956 records. 5 categories. Fully bundled as JSONL — no setup needed.&lt;/p&gt;

&lt;p&gt;Here's what we learned along the way.&lt;/p&gt;

&lt;h2&gt;What's in the dataset&lt;/h2&gt;

&lt;p&gt;We parsed the official Neo4j documentation — the Cypher manual, modeling guides, knowledge base articles — into structured records:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Count&lt;/th&gt;
&lt;th&gt;Source&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Cypher examples&lt;/td&gt;
&lt;td&gt;446&lt;/td&gt;
&lt;td&gt;Official docs, parsed from AsciiDoc&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Best practices&lt;/td&gt;
&lt;td&gt;318&lt;/td&gt;
&lt;td&gt;Knowledge base articles, modeling guides&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cypher functions&lt;/td&gt;
&lt;td&gt;133&lt;/td&gt;
&lt;td&gt;Cypher manual function reference&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cypher clauses&lt;/td&gt;
&lt;td&gt;36&lt;/td&gt;
&lt;td&gt;Cypher manual clause reference&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Modeling patterns&lt;/td&gt;
&lt;td&gt;23&lt;/td&gt;
&lt;td&gt;Data modeling docs + curated additions&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Each record has a &lt;code&gt;source_file&lt;/code&gt; tracing back to the original documentation, an &lt;code&gt;authority_level&lt;/code&gt; (1 = official docs, 2 = curated), and for some records a &lt;code&gt;quality_tier&lt;/code&gt; that controls whether they get loaded at runtime.&lt;/p&gt;

&lt;p&gt;After quality filtering, 849 records make it into the live system.&lt;/p&gt;
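For context, loading the bundled JSONL with a tier filter might look like the sketch below. The `quality_tier` field and its runtime role come from the description above, but the tier values and the keep/drop rule here are assumptions, not the project's actual code.

```python
import json
from pathlib import Path

def load_expert_records(path, allowed_tiers=("gold", "silver")):
    """Load JSONL expert records, keeping only tiers allowed at runtime.

    The tier names are illustrative: the post only says quality_tier
    controls runtime loading, not what the tier values are.
    """
    records = []
    with Path(path).open(encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            rec = json.loads(line)
            # Records without a quality_tier (e.g. function/clause
            # references) are kept; tiered records must be in an allowed tier.
            tier = rec.get("quality_tier")
            if tier is None or tier in allowed_tiers:
                records.append(rec)
    return records
```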

&lt;h2&gt;The automated audit&lt;/h2&gt;

&lt;p&gt;Trusting your own data is dangerous. So we built a &lt;a href="https://github.com/gibbrdev/gibsgraph/blob/main/data/scripts/audit_expert_data.py" rel="noopener noreferrer"&gt;4-tier audit script&lt;/a&gt; that verifies everything it can mechanically:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tier 1 — Completeness:&lt;/strong&gt; Every record has its required fields, valid enums, real source paths.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tier 2 — Cypher validation:&lt;/strong&gt; All 1,131 Cypher snippets checked for balanced syntax, string interpolation, and pseudocode detection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tier 3 — Cross-reference:&lt;/strong&gt; Every function and clause name checked against the official Neo4j 5.x reference. We hardcoded 126 built-in functions, plus known APOC and GDS entries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tier 4 — Duplicates:&lt;/strong&gt; Within-file duplicate detection, naming convention checks (PascalCase node labels, UPPER_SNAKE relationship types).&lt;/p&gt;
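The Tier 2 balance check can be approximated mechanically. This is a simplified sketch, not the project's audit script: it walks each snippet once, tracking open brackets and open quotes.

```python
def check_balanced(cypher: str) -> bool:
    """Tier-2-style mechanical check: brackets balanced, quotes closed.

    A deliberately simple sketch of what an audit pass can verify
    without running the query against a real database.
    """
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    quote = None  # the currently open quote character, if any
    for ch in cypher:
        if quote:
            if ch == quote:
                quote = None
        elif ch in ("'", '"', "`"):
            quote = ch
        elif ch in "([{":
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack and quote is None
```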

&lt;h2&gt;What the audit caught&lt;/h2&gt;

&lt;p&gt;First run results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Records: 956 | Cypher snippets: 1,131

Verified:     3,604 checks passed
Flagged:      33 issues
Human review: 341 records
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;7 real failures, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;"Introduction"&lt;/strong&gt; — a parser artifact that got classified as a Cypher function. Not a function. It's the intro paragraph of the functions docs page.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;allReduce&lt;/code&gt;&lt;/strong&gt; — flagged as unrecognized. Turns out this is a valid Neo4j 5.x predicate function, but our reference list was missing it. The audit caught a gap in our own reference data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unbalanced Cypher in &lt;code&gt;LET&lt;/code&gt; clause&lt;/strong&gt; — the LET clause examples had syntax issues. LET is a newer GQL-conformance addition and the docs examples are still maturing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A best practice with an unclosed bracket&lt;/strong&gt; — "Query to kill transactions" had unbalanced Cypher in its example.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;An APOC example with unbalanced strings&lt;/strong&gt; — &lt;code&gt;apoc.load.jsonParams&lt;/code&gt; example had mismatched quotes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every one of these is a real data quality issue that would have degraded the agent's Cypher generation.&lt;/p&gt;

&lt;h2&gt;What we can't auto-verify&lt;/h2&gt;

&lt;p&gt;The 23 modeling patterns and 318 best practices contain &lt;strong&gt;domain advice&lt;/strong&gt;: "When to use this pattern", "What the anti-pattern is", "Why this practice matters." No script can tell you if a modeling recommendation is sound. That takes a human who writes Cypher professionally.&lt;/p&gt;

&lt;p&gt;This is the honest gap. The mechanical data is 98.8% clean. The advice data needs expert eyes.&lt;/p&gt;

&lt;h2&gt;The dataset is open&lt;/h2&gt;

&lt;p&gt;Everything is MIT-licensed and &lt;a href="https://github.com/gibbrdev/gibsgraph" rel="noopener noreferrer"&gt;on GitHub&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Raw data: &lt;code&gt;src/gibsgraph/data/*.jsonl&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Audit script: &lt;code&gt;data/scripts/audit_expert_data.py&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Review CSVs: &lt;code&gt;data/review_modeling_patterns.csv&lt;/code&gt; and &lt;code&gt;data/review_best_practices.csv&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you work with Neo4j and want to help verify the modeling patterns and best practices, we've set up review CSVs with empty &lt;code&gt;review_status&lt;/code&gt; and &lt;code&gt;reviewer_notes&lt;/code&gt; columns. Even reviewing 10 records helps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/gibbrdev/gibsgraph/discussions/15" rel="noopener noreferrer"&gt;Join the discussion on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We'd rather ship 200 verified records than 900 unverified ones.&lt;/p&gt;




&lt;p&gt;Written with AI assistance, reviewed and published by &lt;a href="https://gibs.dev" rel="noopener noreferrer"&gt;https://gibs.dev&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Thank you Claude (L)&lt;/p&gt;

</description>
      <category>neo4j</category>
      <category>python</category>
      <category>opensource</category>
      <category>beginners</category>
    </item>
    <item>
      <title>I built a RAG system where hallucinations aren't acceptable. Here's what actually worked.</title>
      <dc:creator>gibs-dev</dc:creator>
      <pubDate>Wed, 18 Feb 2026 16:46:04 +0000</pubDate>
      <link>https://dev.to/gibbrdev/i-built-a-rag-system-where-hallucinations-arent-acceptable-heres-what-actually-worked-b2e</link>
      <guid>https://dev.to/gibbrdev/i-built-a-rag-system-where-hallucinations-arent-acceptable-heres-what-actually-worked-b2e</guid>
<description>&lt;h2&gt;I built a regulatory compliance API for EU law in 5 weeks of evenings. Here's what I learned about RAG, citations, and why nobody panics until it's too late.&lt;/h2&gt;

&lt;p&gt;I work in construction during the day. At night, I build software.&lt;/p&gt;

&lt;p&gt;Five weeks ago, I started building &lt;a href="https://gibs.dev" rel="noopener noreferrer"&gt;Gibs&lt;/a&gt; — an API that answers questions about EU regulations (DORA, AI Act, GDPR) and returns the answer with a direct citation to the specific EUR-Lex article. Grounded answers with source citations. No "this is not legal advice" disclaimer walls. Just: question in, cited answer out.&lt;/p&gt;

&lt;p&gt;This post is about the technical decisions, the things that broke, and what I learned about building a RAG system where correctness actually matters.&lt;/p&gt;

&lt;h2&gt;The problem&lt;/h2&gt;

&lt;p&gt;The EU AI Act entered into force in August 2024, but the high-risk AI system rules (Annex III) don't apply until August 2026. DORA (the Digital Operational Resilience Act) has applied since January 2025. GDPR has been around since 2018, but people still can't get straight answers about Article 22.&lt;/p&gt;

&lt;p&gt;If you're a fintech building an AI feature, or a municipality deploying a chatbot, you need to know: What are my obligations? Which articles apply? What's the deadline?&lt;/p&gt;

&lt;p&gt;Today, the answer is: hire a lawyer, or spend three days reading EUR-Lex. I wanted to build a third option.&lt;/p&gt;

&lt;h2&gt;Architecture: the boring version&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User question
  → Classifier (which regulation?)
  → Qdrant vector search (relevant chunks)
  → Reranker (sort by relevance)
  → LLM synthesis (answer + citations)
  → Response with EUR-Lex article references
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nothing revolutionary. Standard RAG pipeline. The interesting parts are in the details.&lt;/p&gt;

&lt;h2&gt;Lesson 1: Chunking legal text is harder than you think&lt;/h2&gt;

&lt;p&gt;EU regulations aren't blog posts. A single DORA article can reference three other articles, two delegated acts, and an implementing technical standard. If you chunk naively by paragraph, you lose the cross-references that make the answer correct.&lt;/p&gt;

&lt;p&gt;I ended up building a custom parser that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Preserves article boundaries (never splits mid-article)&lt;/li&gt;
&lt;li&gt;Attaches metadata: regulation name, article number, section, EUR-Lex URL&lt;/li&gt;
&lt;li&gt;Handles delegated acts as separate documents with parent references&lt;/li&gt;
&lt;li&gt;Builds a cross-reference graph — over 3,600 edges mapping which articles reference which other articles, across regulations&lt;/li&gt;
&lt;/ul&gt;
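Shaped as a record, one chunk from that parser might look like the sketch below. The field names are illustrative: the post lists the metadata but not the actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class LegalChunk:
    """One retrieval unit, shaped like the parser output described above.

    Field names are assumptions; only the metadata categories come
    from the post.
    """
    regulation: str   # e.g. "DORA"
    article: str      # e.g. "Article 19"
    section: str
    eurlex_url: str
    text: str
    parent: str = ""  # set for delegated-act chunks
    references: list = field(default_factory=list)  # outgoing graph edges
```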

&lt;p&gt;The graph matters because legal text is relational. DORA Article 19 references Article 20, which references a delegated act, which supplements the base regulation. If your retrieval only returns Article 19, the answer is incomplete. The graph lets the pipeline follow those edges and pull in related chunks automatically.&lt;/p&gt;
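One hop of that edge-following can be sketched as below. The edge-list shape and the expansion budget are assumptions; the post doesn't show the pipeline's actual data model.

```python
from collections import defaultdict

def expand_with_references(retrieved_ids, edges, max_extra=5):
    """Follow one hop of the cross-reference graph from retrieved chunks.

    `edges` is a list of (from_article, to_article) pairs, a stand-in
    for the 3,600-plus-edge graph described above.
    """
    graph = defaultdict(set)
    for src, dst in edges:
        graph[src].add(dst)
    expanded = list(retrieved_ids)
    seen = set(retrieved_ids)
    budget = max_extra  # cap how many related chunks get pulled in
    for art in retrieved_ids:
        for ref in sorted(graph[art]):
            if budget > 0 and ref not in seen:
                seen.add(ref)
                expanded.append(ref)
                budget -= 1
    return expanded
```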

&lt;p&gt;The DORA corpus alone is 641 chunks across 12 delegated acts. Getting the metadata right took longer than building the retrieval pipeline.&lt;/p&gt;

&lt;h2&gt;Lesson 2: Classification routing matters more than embedding quality&lt;/h2&gt;

&lt;p&gt;My first eval scored 79.9% on DORA questions. Terrible.&lt;/p&gt;

&lt;p&gt;The problem wasn't the embeddings or the LLM. It was routing. A question like "What are the RTS requirements for threat-led penetration testing?" doesn't contain the word "DORA" — so my classifier sent it to the AI Act collection and retrieved completely wrong chunks.&lt;/p&gt;

&lt;p&gt;The fix was expanding the classifier's keyword patterns:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Before: only matched "DORA" literally
# After:
&lt;/span&gt;&lt;span class="n"&gt;DORA_PATTERNS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="sa"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;DORA&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sa"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;digital\s+operational\s+resilience&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sa"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ICT\s+(risk|incident)&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sa"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;TLPT&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sa"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;financial\s+entit&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sa"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;RTS\s+on\s+(ict|incident|tlpt)&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sa"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;delegated\s+act&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That single change took accuracy from 79.9% to 90.8%. The lesson: in domain-specific RAG, routing is your biggest lever. Spend your time there, not on prompt engineering.&lt;/p&gt;

&lt;h2&gt;Lesson 3: Eval is everything&lt;/h2&gt;

&lt;p&gt;I built a golden dataset of 140 questions across DORA, AI Act, and GDPR. Each question has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Expected answer keywords (&lt;code&gt;answer_must_contain&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Expected source articles (&lt;code&gt;sources_must_contain&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Whether the system should abstain (question outside scope)&lt;/li&gt;
&lt;/ul&gt;
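A minimal per-question scorer along those lines might look like this. The field names match the list above; the exact pass/fail rule and the abstention phrase are assumptions, not the real eval runner.

```python
def score_response(response_text, cited_sources, question):
    """Score one golden-dataset question: keyword presence, source
    presence, and abstention handling.

    Field names (`answer_must_contain`, `sources_must_contain`,
    `should_abstain`) follow the post; the pass/fail rule itself
    is a guess at the shape of such a scorer.
    """
    text = response_text.lower()  # case-insensitive: "Article 6" == "article 6"
    if question.get("should_abstain"):
        return "outside the scope" in text
    keywords_ok = all(kw.lower() in text
                      for kw in question.get("answer_must_contain", []))
    sources = {s.lower() for s in cited_sources}
    sources_ok = all(s.lower() in sources
                     for s in question.get("sources_must_contain", []))
    return keywords_ok and sources_ok
```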

&lt;p&gt;The eval runner scores every response automatically and breaks down accuracy by category: objective, cross_reference, delegated_act, adversarial, real_user, negative, and should_abstain.&lt;/p&gt;

&lt;p&gt;This caught bugs I never would have found manually. For example: my scorer was case-sensitive, so "Article 6" matched but "article 6" didn't. Tiny bug, 3% accuracy hit.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python eval_runner.py &lt;span class="nt"&gt;--regulation&lt;/span&gt; dora &lt;span class="nt"&gt;--delay&lt;/span&gt; 7000 &lt;span class="nt"&gt;--retry&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're building any RAG system, build eval first. I'm serious. Before you optimize anything, know how to measure it.&lt;/p&gt;

&lt;h2&gt;Lesson 4: Citations are the product&lt;/h2&gt;

&lt;p&gt;I could have built a chatbot that answers compliance questions. There are dozens of those. What makes Gibs different is that every claim in the response maps to a specific article, and you can click through to EUR-Lex and verify it.&lt;/p&gt;

&lt;p&gt;This is non-negotiable for compliance use cases. A lawyer will never trust "According to EU regulations, you need to do X." They will trust "According to Article 6(1) of Regulation (EU) 2024/1689, deployers of high-risk AI systems shall..." with a link.&lt;/p&gt;

&lt;p&gt;The synthesis prompt forces the model to cite specific articles and cross-reference the source chunks. If a claim can't be grounded in the retrieved text, the system says so instead of hallucinating.&lt;/p&gt;

&lt;h2&gt;Lesson 5: Abstention is a feature&lt;/h2&gt;

&lt;p&gt;90% of compliance questions are in-scope. The other 10% are questions like "What's the best CRM for startups?" — and if your system confidently answers those with made-up regulatory citations, you've lost all trust.&lt;/p&gt;

&lt;p&gt;Gibs has an abstention threshold. If the retrieved chunks aren't relevant enough, the system says "This question is outside the scope of the indexed regulations." My abstention accuracy is 96.7% — meaning it almost never hallucinates an answer for out-of-scope questions.&lt;/p&gt;
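A threshold check along those lines can be as simple as the sketch below. The numbers are illustrative, since the post doesn't state the actual threshold or scoring scale.

```python
OUT_OF_SCOPE = "This question is outside the scope of the indexed regulations."

def should_abstain(reranked_scores, threshold=0.35, min_hits=2):
    """Decide whether to abstain, based on reranker relevance scores.

    The threshold and minimum-hit values are made up for illustration;
    the post only says an abstention threshold exists.
    """
    strong = [s for s in reranked_scores if s >= threshold]
    # Answer only when enough chunks clear the relevance bar.
    return not (len(strong) >= min_hits)
```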

&lt;h2&gt;The stack&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Vector DB&lt;/strong&gt;: Qdrant (one collection per regulation — strict isolation)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Embeddings + Reranker&lt;/strong&gt;: Cohere embed-v3 + rerank-v3.5&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Synthesis&lt;/strong&gt;: LLM via API (multi-pass: decompose → retrieve → synthesize → verify)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Framework&lt;/strong&gt;: Python + FastAPI + LangGraph&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hosting&lt;/strong&gt;: Self-hosted Docker on a mini PC with 8GB RAM&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auth&lt;/strong&gt;: API key + Stripe for billing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SDKs&lt;/strong&gt;: Python and TypeScript published on PyPI/npm&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MCP Server&lt;/strong&gt;: For AI assistant integration (Cursor, Windsurf, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Total monthly infrastructure cost: roughly what you'd spend on a nice dinner.&lt;/p&gt;

&lt;h2&gt;What I'd do differently&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Start with eval, not with the product.&lt;/strong&gt; I built the pipeline first and eval second. Should have been the other way around.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chunk smaller, retrieve more.&lt;/strong&gt; My initial chunks were too large. Smaller chunks with more retrieval and better reranking would have been better from the start.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Don't underestimate metadata.&lt;/strong&gt; The EUR-Lex URL, the article number, the regulation name — that metadata is what makes the citations work. It took 40% of my time and it was worth every hour.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Current status&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;DORA: 90.8% accuracy, all 12 delegated acts indexed&lt;/li&gt;
&lt;li&gt;AI Act: 88% accuracy, all articles + annexes + recitals&lt;/li&gt;
&lt;li&gt;SDKs: Python (PyPI) and TypeScript (npm) packages published&lt;/li&gt;
&lt;li&gt;MCP server: built and listed&lt;/li&gt;
&lt;li&gt;GDPR: Live, eval in progress&lt;/li&gt;
&lt;li&gt;API: Live at &lt;code&gt;api.gibs.dev&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Free tier: 50 requests/month, no credit card&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'm looking for beta users — especially compliance consultants, fintechs dealing with DORA, or anyone building AI systems that need EU AI Act classification.&lt;/p&gt;

&lt;p&gt;If you want to try it: &lt;a href="https://gibs.dev" rel="noopener noreferrer"&gt;gibs.dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to see the API docs: &lt;a href="https://docs.gibs.dev" rel="noopener noreferrer"&gt;docs.gibs.dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to talk about RAG for legal text, citations, or building dev tools while working a day job in construction — I'm in the comments.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Gibs is a developer tool for regulatory research — not a substitute for qualified legal advice.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Built by one person, evenings and weekends, with a lot of coffee and a somewhat irresponsible sleep schedule.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>beginners</category>
      <category>python</category>
      <category>rag</category>
    </item>
  </channel>
</rss>
