<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Systima</title>
    <description>The latest articles on DEV Community by Systima (@systima).</description>
    <link>https://dev.to/systima</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3810613%2Fec0dfcc2-4457-4ec6-883a-fd43fd5c0de3.png</url>
      <title>DEV Community: Systima</title>
      <link>https://dev.to/systima</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/systima"/>
    <language>en</language>
    <item>
      <title>The Builder Test: 6 Questions to Evaluate a Fractional Head of AI</title>
      <dc:creator>Systima</dc:creator>
      <pubDate>Mon, 23 Mar 2026 20:57:34 +0000</pubDate>
      <link>https://dev.to/systima/the-builder-test-6-questions-to-evaluate-a-fractional-head-of-ai-4pn3</link>
      <guid>https://dev.to/systima/the-builder-test-6-questions-to-evaluate-a-fractional-head-of-ai-4pn3</guid>
      <description>&lt;p&gt;The fractional Head of AI market has split into two models:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;brokers&lt;/strong&gt; (firms that deploy generalists from a roster)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;practitioners&lt;/strong&gt; (specific people who have built and shipped AI systems in production)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both have their place: brokers suit companies starting their AI journey; practitioners are what you need when the architectural decisions have real consequences.&lt;/p&gt;

&lt;h2&gt;How to tell which you are getting&lt;/h2&gt;

&lt;p&gt;Six questions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Have they personally built and shipped AI features in production?&lt;/strong&gt; Not managed a team that did. Not advised. Personally built. If no, everything else is secondary.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Can they make architectural decisions, not just recommend them?&lt;/strong&gt; "Implement RAG for customer support" is a roadmap. Choosing the embedding model, the chunking strategy, the retrieval evaluation framework; that is an architectural decision. A strategist writes the roadmap. A practitioner makes the decisions that determine whether the feature works.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Can they tell you what is wrong with your current architecture?&lt;/strong&gt; Not "you should consider a vector database." Specifically: "your chunking strategy is producing poor retrieval because your documents have inconsistent heading structures, and your embedding model is not optimised for your domain vocabulary."&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Can they evaluate vendors at the architectural level?&lt;/strong&gt; Opening the technical documentation, assessing the API design and data residency implications, determining compatibility with your infrastructure. Not comparing feature matrices.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Do they know the difference between a demo and a production system?&lt;/strong&gt; The gap between a POC and a system that handles edge cases, scales under load, and degrades gracefully is where most AI initiatives die.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;If you are in a regulated industry, do they understand the regulatory landscape?&lt;/strong&gt; For companies with &lt;a href="https://systima.ai/blog/eu-ai-act-changes-head-of-ai-role" rel="noopener noreferrer"&gt;EU AI Act&lt;/a&gt; exposure, architectural decisions now carry regulatory weight. For &lt;a href="https://systima.ai/blog/regulated-industries-cant-afford-generic-ai-leadership" rel="noopener noreferrer"&gt;legal, fintech, or medtech&lt;/a&gt;, sector-specific regulators add further obligations.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;The diagnostic questions to ask&lt;/h2&gt;

&lt;p&gt;Before you engage anyone:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Will I work directly with the decision-maker, or is there an intermediary layer?&lt;/li&gt;
&lt;li&gt;Has this person shipped AI systems, or is their experience advisory?&lt;/li&gt;
&lt;li&gt;Is the first deliverable an assessment of where I am, or a forward-looking strategy deck?&lt;/li&gt;
&lt;li&gt;Can they evaluate my existing architecture, or only recommend new initiatives?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the answers point to a broker, that is fine for early-stage AI guidance. If the decisions matter, you need the practitioner.&lt;/p&gt;

&lt;p&gt;Full post here: &lt;a href="https://systima.ai/blog/what-to-look-for-fractional-head-of-ai" rel="noopener noreferrer"&gt;https://systima.ai/blog/what-to-look-for-fractional-head-of-ai&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>leadership</category>
      <category>startup</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Open-Source EU AI Act Compliance Scanning for CI/CD</title>
      <dc:creator>Systima</dc:creator>
      <pubDate>Sat, 14 Mar 2026 14:14:44 +0000</pubDate>
      <link>https://dev.to/systima/open-source-eu-ai-act-compliance-scanning-for-cicd-4ogj</link>
      <guid>https://dev.to/systima/open-source-eu-ai-act-compliance-scanning-for-cicd-4ogj</guid>
      <description>&lt;p&gt;We built a CLI tool that scans your codebase for EU AI Act compliance risks.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npx @systima/comply scan&lt;/code&gt; analyses your repository to detect AI framework usage, traces how AI outputs flow through the program, and flags patterns that may trigger regulatory obligations.&lt;/p&gt;

&lt;p&gt;It runs in CI and posts findings on pull requests (no API keys required).&lt;/p&gt;

&lt;p&gt;Under the hood it performs AST-based import detection using the TypeScript Compiler API and web-tree-sitter WASM across 37+ AI frameworks. It then traces AI return values through assignments and destructuring to identify four patterns:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;conditional branching on AI output&lt;/li&gt;
&lt;li&gt;persistence of AI output to a database&lt;/li&gt;
&lt;li&gt;rendering AI output in a UI without disclosure&lt;/li&gt;
&lt;li&gt;sending AI output to downstream APIs&lt;/li&gt;
&lt;/ol&gt;
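&lt;p&gt;As a rough illustration of the import-detection step, here is a minimal sketch using the TypeScript Compiler API. The module list and function name are illustrative assumptions; the real scanner matches 37+ frameworks and also uses web-tree-sitter for other languages.&lt;/p&gt;

```typescript
import * as ts from "typescript";

// Illustrative subset; the real scanner covers 37+ frameworks.
const AI_MODULES = new Set(["openai", "@anthropic-ai/sdk", "ai", "langchain"]);

// Parse a source file and collect top-level imports of known AI frameworks.
function detectAiImports(fileName: string, source: string): string[] {
  const sf = ts.createSourceFile(fileName, source, ts.ScriptTarget.Latest, true);
  const hits: string[] = [];
  ts.forEachChild(sf, (node) => {
    if (
      ts.isImportDeclaration(node) &&
      ts.isStringLiteral(node.moduleSpecifier) &&
      AI_MODULES.has(node.moduleSpecifier.text)
    ) {
      hits.push(node.moduleSpecifier.text);
    }
  });
  return hits;
}
```

&lt;p&gt;The call-chain tracing then starts from the identifiers these imports bind and follows them through assignments and destructuring.&lt;/p&gt;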

&lt;p&gt;Findings are severity-adjusted by system domain: you declare what your system does (customer support, credit scoring, legal research, etc.) and the scanner weights its findings accordingly.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a chatbot routing tool using AI output in an &lt;code&gt;if&lt;/code&gt; statement produces an informational note&lt;/li&gt;
&lt;li&gt;a credit scoring system doing the same produces a critical finding&lt;/li&gt;
&lt;/ul&gt;
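&lt;p&gt;The chatbot-versus-credit-scoring contrast above could be sketched as a small severity table. The pattern names and domain model here are illustrative assumptions, not the tool's actual API:&lt;/p&gt;

```typescript
type Domain = "chatbot" | "credit-scoring" | "legal-research";
type Severity = "info" | "warning" | "critical";

// Hypothetical rule: domains where AI output drives consequential decisions.
const HIGH_RISK_DOMAINS = new Set(["credit-scoring"]);

function adjustSeverity(pattern: string, domain: Domain): Severity {
  // Branching on AI output is routine in a chatbot router but a
  // consequential decision path in a high-risk domain.
  if (pattern === "conditional-branching") {
    return HIGH_RISK_DOMAINS.has(domain) ? "critical" : "info";
  }
  // Persisting or forwarding AI output: flag it only in high-risk domains.
  return HIGH_RISK_DOMAINS.has(domain) ? "warning" : "info";
}
```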

&lt;p&gt;We tested it against Vercel’s 20k-star AI chatbot repository; the scan took about 8 seconds. Example PR comment with full results: &lt;a href="https://github.com/systima-ai/chatbot-comply-test/pull/1" rel="noopener noreferrer"&gt;https://github.com/systima-ai/chatbot-comply-test/pull/1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Comply ships as an npm package, a GitHub Action (systima-ai/comply@v1), and a TypeScript API. It can also generate PDF reports and template compliance documentation.&lt;/p&gt;

&lt;p&gt;Repo and explanation:&lt;br&gt;
&lt;a href="https://systima.ai/blog/systima-comply-eu-ai-act-compliance-scanning" rel="noopener noreferrer"&gt;https://systima.ai/blog/systima-comply-eu-ai-act-compliance-scanning&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Feedback welcome on the call-chain tracing approach and whether the domain-based severity model makes sense.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>governance</category>
      <category>eu</category>
    </item>
    <item>
      <title>What Engineering Teams Actually Need to Do for EU AI Act Compliance</title>
      <dc:creator>Systima</dc:creator>
      <pubDate>Fri, 06 Mar 2026 21:41:16 +0000</pubDate>
      <link>https://dev.to/systima/what-engineering-teams-actually-need-to-do-for-eu-ai-act-compliance-586k</link>
      <guid>https://dev.to/systima/what-engineering-teams-actually-need-to-do-for-eu-ai-act-compliance-586k</guid>
      <description>&lt;p&gt;The EU AI Act entered into force on 1 August 2024. High-risk AI systems must comply by August 2026. Most guides to the Act are written by lawyers. This one is for the engineers who have to build the systems that generate compliance evidence.&lt;/p&gt;

&lt;h2&gt;The engineering problem&lt;/h2&gt;

&lt;p&gt;Articles 9 through 15 define the technical obligations for high-risk AI systems: risk management (Article 9), data governance (Article 10), technical documentation (Article 11), record-keeping and logging (Article 12), transparency (Article 13), human oversight (Article 14), and accuracy, robustness, and cybersecurity (Article 15).&lt;/p&gt;

&lt;p&gt;These aren't abstract policy goals. They're engineering requirements that need to be designed, built, tested, and maintained.&lt;/p&gt;

&lt;h2&gt;What the guide covers&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Risk classification:&lt;/em&gt; Mapping your system's function to Annex III categories with a documented, repeatable process&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Technical documentation (Annex IV):&lt;/em&gt; What the documentation must contain and how to make it a first-class development artefact rather than an afterthought&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Audit logging (Article 12):&lt;/em&gt; Structured, immutable logging of inputs, inference decisions, confidence scores, human overrides, and configuration changes&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Conformity assessment:&lt;/em&gt; The self-assessment process for Annex VI and how documentation, logging, and risk management converge into auditable evidence&lt;/li&gt;
&lt;/ul&gt;
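&lt;p&gt;The record-keeping obligation is the most code-shaped of these. A minimal sketch of what an Article 12-style structured, tamper-evident log entry could capture, where the field names and the hash-chaining scheme are illustrative assumptions rather than a prescribed format:&lt;/p&gt;

```typescript
import { createHash } from "node:crypto";

// Illustrative Article 12-style record; field names are assumptions,
// not a published schema.
interface AuditRecord {
  timestamp: string;
  inputHash: string;      // hash of the input rather than the raw data
  decision: string;
  confidence: number;
  humanOverride: boolean;
  configVersion: string;
}

interface AuditEntry extends AuditRecord {
  prevHash: string;       // commits to the previous entry
}

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// Append-only log: each entry hashes its predecessor, so any later
// edit to an earlier entry breaks the chain and is detectable.
function append(log: AuditEntry[], record: AuditRecord): AuditEntry[] {
  const prevHash =
    log.length > 0 ? sha256(JSON.stringify(log[log.length - 1])) : "genesis";
  return log.concat({ ...record, prevHash });
}
```

&lt;p&gt;Hash-chaining is one common way to make an append-only log tamper-evident without external infrastructure; writing to write-once storage is another.&lt;/p&gt;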

&lt;p&gt;We've also open-sourced a reference implementation of Article 12 audit logging for the Vercel AI SDK: aiact-audit-log (&lt;a href="https://github.com/systima-ai/aiact-audit-log" rel="noopener noreferrer"&gt;https://github.com/systima-ai/aiact-audit-log&lt;/a&gt;).&lt;/p&gt;

&lt;h2&gt;Read the full guide&lt;/h2&gt;

&lt;p&gt;The full guide goes deeper into each area with architectural patterns, implementation considerations, and references to the relevant Articles and Annexes:&lt;/p&gt;

&lt;p&gt;EU AI Act Engineering Compliance Guide (&lt;a href="https://systima.ai/blog/eu-ai-act-engineering-compliance-guide" rel="noopener noreferrer"&gt;https://systima.ai/blog/eu-ai-act-engineering-compliance-guide&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;The August 2026 deadline is approaching. If your team is deploying AI in regulated domains, the time to build compliance into your architecture is now.&lt;/p&gt;

</description>
      <category>aiethics</category>
      <category>governance</category>
      <category>machinelearning</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
