<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sualeh Fatehi</title>
    <description>The latest articles on DEV Community by Sualeh Fatehi (@sualeh).</description>
    <link>https://dev.to/sualeh</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F369393%2Ff51ae666-51c4-4192-98fc-aafda8a84b53.jpg</url>
      <title>DEV Community: Sualeh Fatehi</title>
      <link>https://dev.to/sualeh</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sualeh"/>
    <language>en</language>
    <item>
      <title>3-way Boolean Anti-pattern</title>
      <dc:creator>Sualeh Fatehi</dc:creator>
      <pubDate>Sat, 14 Feb 2026 18:22:36 +0000</pubDate>
      <link>https://dev.to/sualeh/3-way-boolean-anti-pattern-2fdf</link>
      <guid>https://dev.to/sualeh/3-way-boolean-anti-pattern-2fdf</guid>
      <description>&lt;p&gt;In Java, the "3-way Boolean" anti-pattern is what you get when you use the boxed type Boolean (instead of primitive boolean) and you implicitly allow it to represent three states:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; true&lt;/li&gt;
&lt;li&gt; false&lt;/li&gt;
&lt;li&gt; null ← the third, often accidental, state&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That "third state" becomes a trap because most code reads like it's dealing with a simple yes/ no flag, but at runtime it can behave differently (or crash) when the value is null.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why it's an anti-pattern
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It creates implicit tri-state logic without making it explicit&lt;br&gt;
A variable named &lt;code&gt;enabled&lt;/code&gt;, &lt;code&gt;isReady&lt;/code&gt;, &lt;code&gt;shouldRetry&lt;/code&gt;, etc. strongly implies binary logic, but &lt;code&gt;Boolean&lt;/code&gt; quietly allows null, which can mean unknown, not set, not loaded, not applicable, or sometimes just "bug".&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It can throw &lt;code&gt;NullPointerException&lt;/code&gt; (NPE) during unboxing&lt;br&gt;
Common expressions like these will throw an NPE if the &lt;code&gt;Boolean&lt;/code&gt; is null:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;   &lt;span class="nc"&gt;Boolean&lt;/span&gt; &lt;span class="n"&gt;flag&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;getFlag&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt;&lt;span class="c1"&gt;// could be null&lt;/span&gt;
   &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;flag&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt;&lt;span class="c1"&gt;// auto-unboxing -&amp;gt; NPE if null&lt;/span&gt;
   &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="err"&gt; &lt;/span&gt; &lt;span class="c1"&gt;// ...&lt;/span&gt;
   &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This exact failure mode is commonly seen during upgrades or refactors because the code compiles fine but fails at runtime.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It leads to confusing "null means false... except when it doesn't" logic&lt;br&gt;
People often "fix" the &lt;code&gt;NullPointerException&lt;/code&gt; by doing inconsistent checks:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;   &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;flag&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;flag&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;...&lt;/span&gt;
   &lt;span class="c1"&gt;// elsewhere&lt;/span&gt;
   &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Boolean&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;TRUE&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;equals&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;flag&lt;/span&gt;&lt;span class="o"&gt;))&lt;/span&gt; &lt;span class="o"&gt;...&lt;/span&gt;
   &lt;span class="c1"&gt;// elsewhere&lt;/span&gt;
   &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Boolean&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;FALSE&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;equals&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;flag&lt;/span&gt;&lt;span class="o"&gt;))&lt;/span&gt; &lt;span class="o"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now your codebase has multiple semantic interpretations of null.&lt;/p&gt;
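&lt;p&gt;One way out is to make the third state explicit instead of letting it hide in a nullable &lt;code&gt;Boolean&lt;/code&gt;. A minimal sketch (the class and enum names here are illustrative):&lt;/p&gt;

```java
// Make the "unknown" state explicit instead of smuggling it through null.
public class FeatureFlag {

  enum State { ENABLED, DISABLED, UNKNOWN }

  static State fromBoolean(Boolean flag) {
    if (flag == null) {
      return State.UNKNOWN; // the formerly implicit third state
    }
    return flag ? State.ENABLED : State.DISABLED;
  }

  public static void main(String[] args) {
    // Every caller is now forced to decide what "unknown" means.
    System.out.println(fromBoolean(Boolean.TRUE)); // ENABLED
    System.out.println(fromBoolean(null)); // UNKNOWN
  }
}
```

&lt;p&gt;And if the value really is binary, prefer the primitive &lt;code&gt;boolean&lt;/code&gt; with an explicit default.&lt;/p&gt;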

</description>
      <category>codequality</category>
      <category>java</category>
      <category>programming</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Generate MCP Tool Schemas Directly From Java Code</title>
      <dc:creator>Sualeh Fatehi</dc:creator>
      <pubDate>Thu, 20 Nov 2025 02:59:44 +0000</pubDate>
      <link>https://dev.to/sualeh/generate-mcp-tool-schemas-directly-from-java-code-3bif</link>
      <guid>https://dev.to/sualeh/generate-mcp-tool-schemas-directly-from-java-code-3bif</guid>
      <description>&lt;p&gt;If you are building an MCP server, every tool you expose needs an &lt;code&gt;inputSchema&lt;/code&gt;. MCP servers written with &lt;a href="https://spring.io/projects/spring-ai" rel="noopener noreferrer"&gt;Spring AI&lt;/a&gt; support often start with a simple data class for tool inputs. Then come changes: a new field, a renamed property, or updated constraints. The JSON schema in the tool registration rarely keeps up - that means clients may send invalid payloads. By generating the schema from the source of truth — the Java type — you remove that drift.&lt;/p&gt;

&lt;p&gt;Writing that JSON by hand is repetitive and easy to get wrong. &lt;a href="https://modelcontextprotocol.io/specification/2025-06-18/schema#primitiveschemadefinition" rel="noopener noreferrer"&gt;MCP supports only a specific subset&lt;/a&gt; of the &lt;a href="https://json-schema.org/specification" rel="noopener noreferrer"&gt;JSON Schema specification&lt;/a&gt;. The &lt;a href="https://github.com/sualeh/mcp-json-schema" rel="noopener noreferrer"&gt;MCP JSON Schema&lt;/a&gt; library keeps the parameter schema and the code in lockstep by generating the MCP-compatible JSON Schema from a Jackson 3 annotated Java class or record.&lt;/p&gt;

&lt;p&gt;What you get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use of Jackson 3 annotations for naming, required fields, and descriptions that carry over into the schema&lt;/li&gt;
&lt;li&gt;Use of Jakarta Bean Validation to add meaningful constraints to the schema (for example, &lt;code&gt;@Max&lt;/code&gt;, &lt;code&gt;@Min&lt;/code&gt;, &lt;code&gt;@Positive&lt;/code&gt;, &lt;code&gt;@PositiveOrZero&lt;/code&gt;, &lt;code&gt;@Negative&lt;/code&gt;, &lt;code&gt;@NegativeOrZero&lt;/code&gt; on numbers, or &lt;code&gt;@Size&lt;/code&gt;, &lt;code&gt;@NotBlank&lt;/code&gt; on strings)&lt;/li&gt;
&lt;li&gt;Automatic handling of required fields, defaults, enums, and descriptions&lt;/li&gt;
&lt;li&gt;Output that targets the MCP JSON Schema subset, not the entire JSON Schema specification&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  How it works
&lt;/h3&gt;

&lt;p&gt;Add a dependency on &lt;code&gt;us.fatehi:mcp-json-schema&lt;/code&gt; in Maven or Gradle.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;us.fatehi&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;mcp-json-schema&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.0.1&lt;span class="nt"&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Define a parameters type as a Jackson‑annotated record or class and let the library produce the &lt;code&gt;inputSchema&lt;/code&gt; JSON. Use annotations to describe intent, and let the library translate that into the MCP schema format.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;com.fasterxml.jackson.annotation.JsonProperty&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;com.fasterxml.jackson.annotation.JsonPropertyDescription&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;tools.jackson.databind.PropertyNamingStrategies&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;tools.jackson.databind.annotation.JsonNaming&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

&lt;span class="nd"&gt;@JsonNaming&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;PropertyNamingStrategies&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;KebabCaseStrategy&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt; &lt;span class="nf"&gt;SampleParameters&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
    &lt;span class="nd"&gt;@JsonPropertyDescription&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Type of database table dependant objects."&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="nd"&gt;@JsonProperty&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;defaultValue&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"NONE"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;required&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="nc"&gt;DependantObjectType&lt;/span&gt; &lt;span class="n"&gt;dependantObjectType&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;

    &lt;span class="nd"&gt;@JsonPropertyDescription&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Table name."&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;tableName&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

  &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;enum&lt;/span&gt; &lt;span class="nc"&gt;DependantObjectType&lt;/span&gt; 
    &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="no"&gt;NONE&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="no"&gt;COLUMNS&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="no"&gt;INDEXES&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="no"&gt;FOREIGN_KEYS&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="no"&gt;TRIGGERS&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, generate the MCP &lt;code&gt;inputSchema&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;us.fatehi.mcp_json_schema.McpJsonSchemaUtility&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Provide this value as the tool's input_schema &lt;/span&gt;
&lt;span class="c1"&gt;// in your Spring AI MCP server implementation&lt;/span&gt;
&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;inputSchemaJson&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; 
  &lt;span class="nc"&gt;McpJsonSchemaUtility&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;inputSchema&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;SampleParameters&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
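&lt;p&gt;For the record above, the generated &lt;code&gt;inputSchema&lt;/code&gt; has roughly this shape (illustrative only; the exact output is determined by the library, but the object/properties/required structure is what the MCP specification expects):&lt;/p&gt;

```json
{
  "type": "object",
  "properties": {
    "dependant-object-type": {
      "type": "string",
      "enum": ["NONE", "COLUMNS", "INDEXES", "FOREIGN_KEYS", "TRIGGERS"],
      "description": "Type of database table dependant objects.",
      "default": "NONE"
    },
    "table-name": {
      "type": "string",
      "description": "Table name."
    }
  },
  "required": ["dependant-object-type"]
}
```

&lt;p&gt;Note how the kebab-case naming strategy, descriptions, default value, and required flag all flow from the annotations.&lt;/p&gt;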



&lt;p&gt;Prefer a &lt;code&gt;JsonNode&lt;/code&gt; for programmatic changes? Use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;schemaNode&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; 
  &lt;span class="nc"&gt;McpJsonSchemaUtility&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;generateJsonSchema&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;SampleParameters&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;The source code is available at &lt;a href="https://github.com/sualeh/mcp-json-schema" rel="noopener noreferrer"&gt;sualeh/mcp-json-schema&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>java</category>
      <category>mcp</category>
      <category>json</category>
      <category>jsonschema</category>
    </item>
    <item>
      <title>Why Your Claude Skills Deserve Better: Escape the Sandbox with MCP Skill Hub</title>
      <dc:creator>Sualeh Fatehi</dc:creator>
      <pubDate>Mon, 27 Oct 2025 12:21:07 +0000</pubDate>
      <link>https://dev.to/sualeh/why-your-claude-skills-deserve-better-escape-the-sandbox-with-mcp-skill-hub-ffh</link>
      <guid>https://dev.to/sualeh/why-your-claude-skills-deserve-better-escape-the-sandbox-with-mcp-skill-hub-ffh</guid>
      <description>&lt;p&gt;If you've been working with Claude skills, you've probably felt the frustration of hitting sandbox limitations. Your Python code can't access files, make network requests, or interact with your local system. That's where the &lt;strong&gt;MCP Skill Hub&lt;/strong&gt; comes in – it takes your existing Claude skills and unleashes their full potential locally.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Claude Skills Sandbox Problem
&lt;/h2&gt;

&lt;p&gt;Claude skills are great for quick demonstrations, but they're severely limited:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;❌ No file-system access&lt;/li&gt;
&lt;li&gt;❌ No network requests&lt;/li&gt;
&lt;li&gt;❌ No system commands&lt;/li&gt;
&lt;li&gt;❌ No persistent storage&lt;/li&gt;
&lt;li&gt;❌ No real-world integrations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your skills can show examples and explain concepts, but they can't actually &lt;em&gt;do&lt;/em&gt; the work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Skills That Actually Work
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://github.com/srprasanna/mcp-skill-hub" rel="noopener noreferrer"&gt;MCP Skill Hub&lt;/a&gt; changes everything. It takes your existing Claude Skills (same YAML frontmatter format, same Markdown content) and runs them locally through the &lt;a href="https://modelcontextprotocol.io" rel="noopener noreferrer"&gt;Model Context Protocol&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Check out the examples in &lt;a href="https://github.com/srprasanna/mcp-skill-hub" rel="noopener noreferrer"&gt;srprasanna/mcp-skill-hub&lt;/a&gt; for working code.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Changes When You Go Local
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Before (Claude Sandbox):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Here's how you &lt;em&gt;would&lt;/em&gt; read an Excel file..."&lt;/li&gt;
&lt;li&gt;"This code &lt;em&gt;demonstrates&lt;/em&gt; the concept..."&lt;/li&gt;
&lt;li&gt;"In a real environment, you &lt;em&gt;could&lt;/em&gt; do this..."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;After (MCP Local):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your skill actually reads files from your computer&lt;/li&gt;
&lt;li&gt;Real database connections, API calls, file operations&lt;/li&gt;
&lt;li&gt;Integration with your actual development workflow&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Beyond Claude: Any Agent, Any Model
&lt;/h2&gt;

&lt;p&gt;Here's the kicker – you're not locked into Claude anymore. The MCP Skill Hub works with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Claude Desktop&lt;/strong&gt; (obvious choice)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cline/Cursor&lt;/strong&gt; (VS Code integration)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Open WebUI&lt;/strong&gt; (local models)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Any MCP-compatible agent&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your skills become portable across the entire AI ecosystem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started is Dead Simple
&lt;/h2&gt;

&lt;p&gt;The server enforces a clean folder structure that makes sense:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/my-skills/
├── excel-automation/
│   ├── SKILL.md          ← Your existing skill content
│   └── examples/         ← Working Python scripts
├── database-queries/
│   ├── SKILL.md
│   └── examples/
└── file-processing/
    ├── SKILL.md
    └── templates/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
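&lt;p&gt;Each skill folder's &lt;code&gt;SKILL.md&lt;/code&gt; keeps the same format Claude already uses: YAML frontmatter plus Markdown instructions. A minimal sketch (the field values are illustrative):&lt;/p&gt;

```markdown
---
name: excel-automation
description: Reads and transforms Excel workbooks on the local filesystem
---

# Excel Automation

Instructions for the agent go here, in plain Markdown.
```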



&lt;p&gt;Run it with Docker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; ~/my-skills:/skills:ro &lt;span class="se"&gt;\&lt;/span&gt;
  srprasanna/mcp-skill-hub
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Hot-reload is built-in – edit your skills and see changes instantly without restarting.&lt;/p&gt;

&lt;h2&gt;
  
  
  Production Ready Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;🔄 &lt;strong&gt;Hot-reload&lt;/strong&gt; – edit skills without restarting&lt;/li&gt;
&lt;li&gt;🐳 &lt;strong&gt;Docker support&lt;/strong&gt; – run anywhere containers run&lt;/li&gt;
&lt;li&gt;📊 &lt;strong&gt;Rich metadata&lt;/strong&gt; – categories, tags, complexity levels&lt;/li&gt;
&lt;li&gt;🔍 &lt;strong&gt;Search tools&lt;/strong&gt; – find skills by query or category&lt;/li&gt;
&lt;li&gt;📝 &lt;strong&gt;Full documentation&lt;/strong&gt; – comprehensive guides and examples&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Type-safe&lt;/strong&gt; – modern Python 3.13+ with full type hints&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;Your Claude skills are just the beginning. The MCP Skill Hub takes that same familiar format and removes all the limitations. Your skills can finally do real work – read files, make API calls, automate your actual workflow.&lt;/p&gt;

&lt;p&gt;And since it's MCP-compliant, you can use those skills with any compatible AI agent, not just Claude.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ready to escape the sandbox?&lt;/strong&gt; Check out the &lt;a href="https://github.com/srprasanna/mcp-skill-hub" rel="noopener noreferrer"&gt;MCP Skill Hub on GitHub&lt;/a&gt; or install directly from the &lt;a href="https://registry.modelcontextprotocol.io/v0.1/servers/io.github.srprasanna%2Fmcp-skill-hub/versions" rel="noopener noreferrer"&gt;MCP Registry&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Your skills deserve to run free. 🚀&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The MCP Skill Hub is open source (MIT license) and available on Docker Hub. It's production-ready with comprehensive testing and documentation.&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Revolutionize Your Database Development with SchemaCrawler MCP Server</title>
      <dc:creator>Sualeh Fatehi</dc:creator>
      <pubDate>Sat, 24 May 2025 22:21:28 +0000</pubDate>
      <link>https://dev.to/sualeh/revolutionize-your-database-development-with-schemacrawler-mcp-server-310i</link>
      <guid>https://dev.to/sualeh/revolutionize-your-database-development-with-schemacrawler-mcp-server-310i</guid>
      <description>&lt;p&gt;Imagine having an AI assistant that &lt;strong&gt;actually understands&lt;/strong&gt; your database schema and helps you make sense of your tables and columns, helps you craft perfect SQL queries that actually work, and saves you from hours of documentation diving. The &lt;strong&gt;SchemaCrawler MCP Server&lt;/strong&gt; is here, and it it free and open source.&lt;/p&gt;

&lt;p&gt;Forget complex installations and configuration headaches. The SchemaCrawler MCP Server runs in a Docker container, meaning you can get up and running with just a few commands, using your favorite MCP Client in "Agent" mode.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Can It Do For You?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🔍 Explore Your Database Structure
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;View all tables and views&lt;/strong&gt; at a glance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Examine column details&lt;/strong&gt; including data types and constraints&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Understand relationships&lt;/strong&gt; between tables with foreign key information&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🛠️ Improve Your Database Design
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Find design issues&lt;/strong&gt; with the built-in schema linting&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Discover missing indexes&lt;/strong&gt; that could improve performance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Identify nullable columns&lt;/strong&gt; in unique constraints&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  📝 Simplify SQL Development
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Understand table schemas&lt;/strong&gt; before writing queries&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;See sample data&lt;/strong&gt; to better understand the information you're working with&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generate proper SQL&lt;/strong&gt; based on your database structure&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting Started in 4 Easy Steps
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Clone &lt;a href="https://github.com/schemacrawler/SchemaCrawler-MCP-Client-Usage" rel="noopener noreferrer"&gt;https://github.com/schemacrawler/SchemaCrawler-MCP-Client-Usage&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Start the Server&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker-compose &lt;span class="nt"&gt;-f&lt;/span&gt; schemacrawler-mcpserver.yaml up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Verify It's Running&lt;/strong&gt;&lt;br&gt;
Check server health at &lt;a href="http://localhost:8080/health" rel="noopener noreferrer"&gt;http://localhost:8080/health&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Connect in VS Code&lt;/strong&gt;&lt;br&gt;
The server is already configured in &lt;code&gt;.vscode/mcp.json&lt;/code&gt; - just open VS Code and start asking questions!&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Connect to Your Own Database
&lt;/h2&gt;

&lt;p&gt;Want to use this with your own database? No problem!&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Stop the current server&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker-compose &lt;span class="nt"&gt;-f&lt;/span&gt; schemacrawler-mcpserver.yaml down &lt;span class="nt"&gt;-t0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Edit the connection details&lt;/strong&gt;&lt;br&gt;
Update &lt;code&gt;schemacrawler-mcpserver.yaml&lt;/code&gt; with your database connection information&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Restart the server&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker-compose &lt;span class="nt"&gt;-f&lt;/span&gt; schemacrawler-mcpserver.yaml up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Start Exploring Today!
&lt;/h2&gt;

&lt;p&gt;Simply ask questions about your database in VS Code's chat panel:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"What tables are available in my database?"&lt;/li&gt;
&lt;li&gt;"Show me the columns in the Books table"&lt;/li&gt;
&lt;li&gt;"What foreign keys reference the Authors table?"&lt;/li&gt;
&lt;li&gt;"Are there any design issues with my database schema?"&lt;/li&gt;
&lt;li&gt;"Write SQL to find books and their authors"&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Powered by &lt;a href="https://www.schemacrawler.com/" rel="noopener noreferrer"&gt;SchemaCrawler&lt;/a&gt;, a free database schema discovery and comprehension tool.&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>ai</category>
      <category>database</category>
    </item>
    <item>
      <title>Calculate the DORA Lead Time Metric in Python</title>
      <dc:creator>Sualeh Fatehi</dc:creator>
      <pubDate>Thu, 10 Apr 2025 23:03:47 +0000</pubDate>
      <link>https://dev.to/sualeh/calculate-the-dora-lead-time-metric-in-python-2bhn</link>
      <guid>https://dev.to/sualeh/calculate-the-dora-lead-time-metric-in-python-2bhn</guid>
      <description>&lt;p&gt;DORA (DevOps Research and Assessment) metrics have become the gold standard for measuring software delivery performance. Among these metrics, Lead Time for Changes is a good indicator of your team's efficiency to deliver changes in production. Let us understand what this metric is, why it matters, and how you can calculate it using Jira and GitHub data with Python code.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the DORA Lead Time Metric?
&lt;/h2&gt;

&lt;p&gt;Lead Time for Changes measures the duration from when code is first committed until it is successfully deployed to production. In simpler terms, it answers the question: "How long does it take for a code change to go from a developer's machine to serving users in production?"&lt;/p&gt;

&lt;p&gt;According to DORA research, organizations typically fall into these performance categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Elite performers&lt;/strong&gt;: Less than one hour&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High performers&lt;/strong&gt;: Between one day and one week&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Medium performers&lt;/strong&gt;: Between one week and one month&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low performers&lt;/strong&gt;: Between one month and six months&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A shorter lead time indicates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Faster delivery of features to users&lt;/li&gt;
&lt;li&gt;Quicker bug fixes and security patches&lt;/li&gt;
&lt;li&gt;More agile response to changing requirements&lt;/li&gt;
&lt;li&gt;Less work-in-progress building up&lt;/li&gt;
&lt;li&gt;Reduced context switching for developers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, lead time is a powerful indicator of your development process efficiency. Long lead times often signal bottlenecks in your development pipeline that need addressing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Calculating Lead Time
&lt;/h2&gt;

&lt;p&gt;If you use Jira and GitHub, you can calculate lead time by connecting data from both platforms. The calculation involves several steps:&lt;/p&gt;

&lt;p&gt;Projects → Releases → Stories → Pull Requests → Commits&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Projects&lt;/strong&gt;: First, gather all software projects from Jira&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Releases&lt;/strong&gt;: For each project, collect released versions within a specified date range&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stories&lt;/strong&gt;: Identify all Jira stories associated with each release&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pull Requests&lt;/strong&gt;: For each story, find the linked GitHub pull requests&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Commits&lt;/strong&gt;: Within each pull request, analyze all commits&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For each pull request, lead time is calculated as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;lead_time&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;release_date&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;earliest_commit_date&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key here is using the &lt;strong&gt;earliest commit date&lt;/strong&gt; rather than the pull request creation date. This captures the true beginning of work, even if the pull request was created later. The final DORA lead time metric is calculated by averaging all individual lead times over a specified time period, for a given set of projects.&lt;/p&gt;
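&lt;p&gt;The per-pull-request calculation and the final average can be sketched in plain Python. Note that subtracting two &lt;code&gt;date&lt;/code&gt; objects yields a &lt;code&gt;timedelta&lt;/code&gt;, so we take &lt;code&gt;.days&lt;/code&gt;; the function names here are illustrative, not from the package:&lt;/p&gt;

```python
from datetime import date

def lead_time_days(release_date, earliest_commit_date):
    # Inclusive day count from the earliest commit to the release.
    return (release_date - earliest_commit_date).days + 1

def average_lead_time(pull_requests):
    # pull_requests: list of (release_date, earliest_commit_date) pairs
    times = [lead_time_days(r, c) for r, c in pull_requests]
    return sum(times) / len(times)

prs = [
    (date(2023, 3, 10), date(2023, 3, 1)),   # 10 days
    (date(2023, 3, 10), date(2023, 2, 24)),  # 15 days
]
print(average_lead_time(prs))  # 12.5
```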

&lt;h2&gt;
  
  
  Manage This in Jira and GitHub
&lt;/h2&gt;

&lt;p&gt;To use this approach effectively, you need to manage the key components in Jira and GitHub:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create Jira releases (or "versions") for each planned release, set release dates when versions are published, and mark versions as "Released" once deployed.&lt;/li&gt;
&lt;li&gt;Assign each release to a project. In this context, projects typically represent teams, products, or components.&lt;/li&gt;
&lt;li&gt;Stories are work items (features, bugs, etc.) included in releases.&lt;/li&gt;
&lt;li&gt;Use the Jira GitHub integration to connect your repositories.&lt;/li&gt;
&lt;li&gt;Reference Jira issues in pull request titles or descriptions (e.g., "PROJ-123: Add new feature"), or use smart commits in your commit messages.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Using Python to Generate Lead Time Reports
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://pypi.org/project/dora-lead-time-metric/" rel="noopener noreferrer"&gt;&lt;code&gt;dora-lead-time&lt;/code&gt;&lt;/a&gt; package provides a simple way to calculate and visualize lead time metrics. It connects to your Jira and GitHub data, calculates lead times, and generates reports. Here is how you might use the package to generate a monthly lead time report:&lt;/p&gt;

&lt;p&gt;First, set up your tokens to access Jira and GitHub by setting the following environment variables.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create &lt;code&gt;ATLASSIAN_TOKEN&lt;/code&gt; containing the API token for Atlassian Jira access.&lt;/li&gt;
&lt;li&gt;Create &lt;code&gt;JIRA_INSTANCE&lt;/code&gt; for your Jira instance URL (e.g., &lt;code&gt;company.atlassian.net&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Create &lt;code&gt;EMAIL&lt;/code&gt; for your Atlassian account email address.&lt;/li&gt;
&lt;li&gt;Create personal access tokens for each GitHub organization, for example, &lt;code&gt;GITHUB_TOKEN_ORG1&lt;/code&gt;, &lt;code&gt;GITHUB_TOKEN_ORG2&lt;/code&gt;, etc. to authenticate API requests to specific GitHub organizations. Each organization you need to access requires its own token.&lt;/li&gt;
&lt;li&gt;Create an environment variable &lt;code&gt;GITHUB_ORG_TOKENS_MAP&lt;/code&gt;, which is a JSON string mapping organization names to environment variable names. For example:
&lt;code&gt;GITHUB_ORG_TOKENS_MAP={"Org1": "GITHUB_TOKEN_ORG1", "Org2": "GITHUB_TOKEN_ORG2"}&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;
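
&lt;p&gt;To make the token-map indirection concrete, here is a short Python sketch of how such a JSON map can be resolved into actual tokens. The helper name is hypothetical and is not part of the package's API; it only illustrates the mechanism:&lt;/p&gt;

```python
import json
import os


def resolve_org_tokens(env=os.environ):
    """Illustrative only: resolve GITHUB_ORG_TOKENS_MAP into actual tokens.

    The map's values name other environment variables; look each one up
    to build a dictionary of organization name to token.
    """
    mapping = json.loads(env["GITHUB_ORG_TOKENS_MAP"])
    return {org: env[var_name] for org, var_name in mapping.items()}
```

With the example variables above set, this would return a dictionary such as mapping "Org1" to the value of GITHUB_TOKEN_ORG1.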

&lt;p&gt;You can optionally set:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;SQLITE_PATH&lt;/code&gt;, the path where the SQLite database will be created.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;START_DATE&lt;/code&gt; and &lt;code&gt;END_DATE&lt;/code&gt;, which define the date range for which to calculate lead time metrics, using ISO date strings (YYYY-MM-DD).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here is a complete example, which you can put in a ".env" file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GITHUB_TOKEN_ORG1=your_personal_access_token_for_org1
GITHUB_TOKEN_ORG2=your_personal_access_token_for_org2
GITHUB_ORG_TOKENS_MAP={"Org1": "GITHUB_TOKEN_ORG1", "Org2": "GITHUB_TOKEN_ORG2"}
ATLASSIAN_TOKEN=your_atlassian_api_token
JIRA_INSTANCE=your_company.atlassian.net
EMAIL=your_email@your_company.com
SQLITE_PATH=./releases.db
START_DATE=2023-01-01
END_DATE=2023-12-31
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
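
&lt;p&gt;If you want to load a ".env" file like the one above into the process environment yourself, a minimal stdlib-only loader looks like this. (Many projects use the python-dotenv package instead; this sketch only shows the idea.)&lt;/p&gt;

```python
import os


def load_env_file(path=".env"):
    """Minimal illustrative .env loader: KEY=VALUE lines, '#' comments.

    Existing environment variables are not overwritten.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
```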



&lt;p&gt;Then run code similar to the following to generate a lead time report:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;date&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dora_lead_time.lead_time_report&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;LeadTimeReport&lt;/span&gt;

&lt;span class="c1"&gt;# Initialize the report generator
&lt;/span&gt;&lt;span class="n"&gt;report&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LeadTimeReport&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;releases.db&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Define the scope
&lt;/span&gt;&lt;span class="n"&gt;project_keys&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;FRONTEND&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;BACKEND&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;MOBILE&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;start_date&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;date&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2025&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;end_date&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;date&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2025&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;31&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Generate a monthly report
&lt;/span&gt;&lt;span class="n"&gt;monthly_data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;report&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;monthly_lead_time_report&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;project_keys&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;start_date&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;end_date&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Display and visualize the report
&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;monthly_data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Create a visualization
&lt;/span&gt;&lt;span class="n"&gt;plt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;report&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;show_plot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;monthly_data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2025 Monthly Lead Time&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;show_trend&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;savefig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;lead_time_trend.png&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;The full code is available on &lt;a href="https://github.com/username/dora-lead-time-metric" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The report allows you to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Track lead time trends over months&lt;/li&gt;
&lt;li&gt;Compare performance across different projects&lt;/li&gt;
&lt;li&gt;Identify when process changes impact lead time&lt;/li&gt;
&lt;li&gt;Set targets based on DORA performance levels&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additionally, the project includes SQL-based outlier reports to identify issues like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Projects without releases&lt;/li&gt;
&lt;li&gt;Releases with open stories&lt;/li&gt;
&lt;li&gt;Stories in multiple releases&lt;/li&gt;
&lt;li&gt;Stories without pull requests&lt;/li&gt;
&lt;/ul&gt;
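
&lt;p&gt;One of those outlier checks, stories without pull requests, could be expressed as a SQL anti-join. The table and column names below are hypothetical, since the package defines its own SQLite schema; this only illustrates the shape of such a report:&lt;/p&gt;

```python
import sqlite3

# Hypothetical schema for illustration; the actual dora-lead-time-metric
# database schema may differ.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stories (story_key TEXT PRIMARY KEY);
CREATE TABLE pull_requests (pr_id INTEGER PRIMARY KEY, story_key TEXT);
INSERT INTO stories VALUES ('PROJ-1'), ('PROJ-2');
INSERT INTO pull_requests VALUES (1, 'PROJ-1');
""")

# Anti-join: stories that have no linked pull request
rows = conn.execute("""
SELECT s.story_key
FROM stories s
LEFT JOIN pull_requests p ON p.story_key = s.story_key
WHERE p.pr_id IS NULL
""").fetchall()
```

Here only "PROJ-2" would be reported, because "PROJ-1" has a linked pull request.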

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Calculating the DORA Lead Time metric provides valuable insights into your software delivery performance. By connecting data from Jira and GitHub, this approach gives you an accurate measurement that truly reflects your development process. The real power comes from using this data to identify bottlenecks and continuously improve your delivery pipeline. Whether you're aiming to move from "medium" to "high" performer status or already pursuing "elite" performance, measuring lead time is an essential step in the journey.&lt;/p&gt;

</description>
      <category>dora</category>
      <category>metrics</category>
    </item>
    <item>
      <title>Use ChatGPT to Explore Your Database Schema</title>
      <dc:creator>Sualeh Fatehi</dc:creator>
      <pubDate>Mon, 30 Dec 2024 20:27:31 +0000</pubDate>
      <link>https://dev.to/sualeh/use-chatgpt-to-explore-your-database-schema-4mfm</link>
      <guid>https://dev.to/sualeh/use-chatgpt-to-explore-your-database-schema-4mfm</guid>
      <description>&lt;p&gt;SchemaCrawler is a relational database exploration tool. It obtains database schema metadata such as tables, stored procedures, foreign keys, triggers and so on, and makes them available for search. The traditional way to use SchemaCrawler has been the command-line or an interactive shell. &lt;/p&gt;

&lt;p&gt;ChatGPT offers almost magical help with coding questions. You could use it to get help with SQL questions, but then you have to adapt that SQL to match your database tables and schemas.&lt;/p&gt;

&lt;p&gt;What if ChatGPT could act as an expert on your database, and give you valid SQL that would work for you? Guess what - with a little help from SchemaCrawler, it can. You can use SchemaCrawler to "teach" ChatGPT what your database schema looks like.&lt;/p&gt;

&lt;p&gt;SchemaCrawler is now integrated with ChatGPT models to provide an interactive way to interrogate your database schema metadata. When you start SchemaCrawler with the "aichat" command, you will have an interactive chat shell with ChatGPT, enhanced with information about your database metadata. &lt;/p&gt;

&lt;p&gt;You can try prompts such as the following ones:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"List all tables"&lt;/li&gt;
&lt;li&gt;"Describe the Track table"&lt;/li&gt;
&lt;li&gt;"What are the indexes on the Track table?"&lt;/li&gt;
&lt;li&gt;"What are the Track columns?"&lt;/li&gt;
&lt;li&gt;"What is the Track primary key?"&lt;/li&gt;
&lt;li&gt;"Show me the triggers on Track"&lt;/li&gt;
&lt;li&gt;"Find the parents of Track"&lt;/li&gt;
&lt;li&gt;"What are the dependents of Album?"&lt;/li&gt;
&lt;li&gt;"What are the design problems with this database?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To quit the console, you can type something like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"I think I have everything I need"
or simply, "done", "exit" or "quit".&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To start using this integration, you will need to create your own &lt;a href="https://www.howtogeek.com/885918/how-to-get-an-openai-api-key/" rel="noopener noreferrer"&gt;OpenAI API key&lt;/a&gt;. Then download a SQLite database called &lt;a href="https://github.com/schemacrawler/chinook-database/releases/download/v16.11.7/chinook-database-2.0.1.sqlite" rel="noopener noreferrer"&gt;"chinook-database-2.0.1.sqlite"&lt;/a&gt; into your current directory.&lt;/p&gt;

&lt;p&gt;Run this command in a bash shell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PWD&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;:/home/schcrwlr &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;your&lt;/span&gt;&lt;span class="sh"&gt;-openai-api-key&amp;gt;&amp;gt; &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="sh"&gt;
schemacrawler/schemacrawler:extra-latest &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="sh"&gt;
  /opt/schemacrawler/bin/schemacrawler.sh &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="sh"&gt;
  --server=sqlite &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="sh"&gt;
  --database=chinook-database-2.0.1.sqlite &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="sh"&gt;
  --info-level=standard &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="sh"&gt;
  --command=aichat
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(If you are using PowerShell on Windows, replace the trailing backslash on each line with a back-tick, and map the current directory differently.)&lt;/p&gt;

&lt;p&gt;Once the Docker container starts up, enter some of the prompts above.&lt;/p&gt;

&lt;p&gt;If you are willing to share your database metadata with OpenAI, the creators of ChatGPT, you can provide an additional &lt;code&gt;--use-metadata true&lt;/code&gt; on the command line, and then you can get SQL statements customized to your database.&lt;/p&gt;

&lt;p&gt;You can then try prompts such as the following ones:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Get me the SQL statement to find all the tracks and their artists' names"&lt;/li&gt;
&lt;li&gt;"Get me the SQL statement to find the number of tracks for each artist, for artists that have more than 25 tracks, sorted by those who have the most"&lt;/li&gt;
&lt;li&gt;"What is the purpose or function of this database?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After you have got this working, use the SchemaCrawler command-line to connect to your own database, and explore it using a natural language interface courtesy of ChatGPT.&lt;/p&gt;

</description>
      <category>database</category>
      <category>chatgpt</category>
    </item>
    <item>
      <title>Sufficient Software Tests Using Metrics</title>
      <dc:creator>Sualeh Fatehi</dc:creator>
      <pubDate>Sun, 01 Dec 2024 18:47:15 +0000</pubDate>
      <link>https://dev.to/sualeh/sufficient-software-tests-using-metrics-36i2</link>
      <guid>https://dev.to/sualeh/sufficient-software-tests-using-metrics-36i2</guid>
      <description>&lt;p&gt;The primary goal of software testing is to prevent bugs and defects from reaching end users. Effective testing ensures that the software is reliable, functional, and meets the specified requirements, enhancing user satisfaction. However, simply knowing how much of the code is executed (or covered) by tests is not enough. Testing metrics provide insight into how effective tests are, and whether the software system is tested sufficiently. The code coverage metric measures the percentage of code executed during testing but does not account for the quality of those tests. We need to have a combination of different metrics to evaluate various aspects of test sufficiency, such as defect detection, test case effectiveness, and automation coverage. By using a range of metrics, teams can gain a holistic view of their testing efforts, ensuring that the software is thoroughly tested and capable of handling real-world scenarios without failures.&lt;/p&gt;

&lt;p&gt;Unit testing, a method where individual components of software are tested in isolation, forms the foundation of software quality assurance. Code must be instrumented to measure the extent of code execution during testing. This involves inserting additional code and using tools to collect execution data, helping developers detect untested areas, edge cases, and potential bugs. The code coverage metric for unit tests quantifies the proportion of tested source code. High code coverage suggests fewer untested paths and gives greater confidence in the software's reliability.&lt;/p&gt;
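
&lt;p&gt;As a minimal sketch, this is the kind of isolated test that sits at the base of the pyramid, and that coverage tools instrument. The function and tests are invented for illustration:&lt;/p&gt;

```python
import unittest


def clamp(value, low, high):
    """Limit value to the range [low, high]."""
    return max(low, min(value, high))


class ClampTest(unittest.TestCase):
    # Exercising all branches of clamp() gives this unit full coverage.
    def test_within_range(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_below_and_above_range(self):
        self.assertEqual(clamp(-3, 0, 10), 0)
        self.assertEqual(clamp(42, 0, 10), 10)
```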

&lt;p&gt;The testing pyramid is a conceptual framework that illustrates the different levels of testing in software development. At the base of the pyramid are unit tests, which are numerous and run frequently. As we move up the pyramid, the tests become broader and fewer in number. Integration tests focus on the interactions between different units or modules of the software, ensuring that combined parts function together as expected. At the system test level, the entire system is tested as a whole to verify that it meets the specified requirements. Acceptance tests, which are the highest level of tests, are often conducted by end-users or clients to validate the software against their expectations and business requirements. This hierarchical approach helps ensure that testing is comprehensive and efficient, covering both individual components and the integrated system.&lt;/p&gt;

&lt;p&gt;Higher levels of testing, such as system and acceptance testing, are typically performed on built or containerized software. This means the entire application or significant parts of it are deployed in a test environment that closely mirrors the production environment. These tests validate the behavior of the application in real-world scenarios, ensuring that all components work together seamlessly. Unlike unit tests, achieving code coverage at the system or acceptance test levels is not feasible. This is because these tests operate on the application as a whole, rather than its individual components. They focus on the end-to-end functionality and user experience, rather than the internal workings of the code. It is not possible, nor is it a good idea, to instrument code in software built (or containerized) for production deployments. Thus, other metrics are needed to ensure comprehensive testing at these higher levels.&lt;/p&gt;

&lt;p&gt;Here are some metrics to consider for gaining insight into test sufficiency:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Defect Density:&lt;/strong&gt; This measures the number of defects found per unit of code or functionality. It helps identify areas that may need more thorough testing, guiding focus towards problematic areas.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test Case Effectiveness:&lt;/strong&gt; This evaluates how well test cases detect defects. It's measured by the number of defects found versus the number of test cases executed. High test case effectiveness indicates robust and thorough testing scenarios.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test Automation Coverage:&lt;/strong&gt; This metric assesses the percentage of test cases that are automated versus manual. Higher automation coverage leads to more efficient and consistent testing, reducing the risk of human error and increasing test repeatability.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By leveraging these metrics, teams can ensure that their testing efforts are comprehensive, efficient, and effective at all levels of the testing pyramid. Metrics not only help in assessing current testing effectiveness but aid in the iterative and continuous improvement of test coverage.&lt;/p&gt;

</description>
      <category>tests</category>
    </item>
    <item>
      <title>"Computer Use" for UAT</title>
      <dc:creator>Sualeh Fatehi</dc:creator>
      <pubDate>Mon, 28 Oct 2024 01:14:53 +0000</pubDate>
      <link>https://dev.to/sualeh/computer-use-for-uat-4dkd</link>
      <guid>https://dev.to/sualeh/computer-use-for-uat-4dkd</guid>
      <description>&lt;p&gt;Anthropic, a company working on advanced artificial intelligence (AI), has recently introduced a new feature for their AI model called Claude 3.5 Sonnet. This feature, called "computer use," allows the AI to interact with computer interfaces just like a human would. Using a programming interface (API), Claude can control your computer and&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Move the cursor around the screen&lt;/li&gt;
&lt;li&gt;Click buttons and icons&lt;/li&gt;
&lt;li&gt;Type text using a virtual keyboard&lt;/li&gt;
&lt;li&gt;Open and use different software programs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is still an experimental feature and not perfect yet, but it's designed to help automate tasks that usually require human interaction, like filling out forms or navigating through software.&lt;/p&gt;

&lt;p&gt;This opens an interesting possibility in user acceptance testing. User Acceptance Testing (UAT) is the final phase in the software development process, in which the software is tested in real-world scenarios to ensure it meets users' needs and works as expected. User Acceptance Tests (UATs) are typically written collaboratively by Business Analysts, QA Engineers, and business stakeholders. A standard human-readable format for UAT, popularized by Dan North and written in the Gherkin format, includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scenario: A specific situation or use case to be tested&lt;/li&gt;
&lt;li&gt;Given: The initial context or state&lt;/li&gt;
&lt;li&gt;When: The action or event&lt;/li&gt;
&lt;li&gt;Then: The expected outcome or result&lt;/li&gt;
&lt;/ul&gt;
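
&lt;p&gt;A short scenario in this style might read as follows (the feature and steps are invented for illustration):&lt;/p&gt;

```gherkin
Feature: Checkout
  Scenario: Customer pays with a saved card
    Given a signed-in customer with an item in the cart
    When the customer checks out using a saved credit card
    Then an order confirmation is shown
```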

&lt;p&gt;We might potentially see some tools that can take human readable test definitions written in Gherkin-like language and test them out on a user interface developed by a software team before software is released.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>anthropic</category>
    </item>
    <item>
      <title>"Computer Use" to Speed Up UI Development</title>
      <dc:creator>Sualeh Fatehi</dc:creator>
      <pubDate>Mon, 28 Oct 2024 01:06:13 +0000</pubDate>
      <link>https://dev.to/sualeh/computer-use-to-speed-up-ui-development-29hh</link>
      <guid>https://dev.to/sualeh/computer-use-to-speed-up-ui-development-29hh</guid>
      <description>&lt;p&gt;Anthropic, a company working on advanced artificial intelligence (AI), has recently introduced a new feature for their AI model called Claude 3.5 Sonnet. This feature, called "computer use," allows the AI to interact with computer interfaces just like a human would. Using a programming interface (API), Claude can control your computer and&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Move the cursor around the screen&lt;/li&gt;
&lt;li&gt;Click buttons and icons&lt;/li&gt;
&lt;li&gt;Type text using a virtual keyboard&lt;/li&gt;
&lt;li&gt;Open and use different software programs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is still an experimental feature and not perfect yet, but it's designed to help automate tasks that usually require human interaction, like filling out forms or navigating through software.&lt;/p&gt;

&lt;p&gt;This opens an interesting possibility in user interface (UI) design. UI design with focus groups involves gathering a small group of users (usually 5-10 people) to provide feedback on a product or design. Typically, a moderator prepares a discussion guide with questions and activities to guide the session. The session is recorded, and the feedback is analyzed to identify common themes and insights. Focus groups typically get hands-on access to the UI. This way, they can interact with the interface, explore its features, and provide direct feedback on their experience. Their reactions and suggestions help designers see what works, what doesn't, and what needs tweaking. It's all about making sure the final product is user-friendly and meets the needs of its audience.&lt;/p&gt;

&lt;p&gt;Typically, companies run multiple sessions to ensure a diverse range of feedback. It's common to see anywhere from 3 to 10 focus groups, each with different participants, to gather comprehensive insights. The more varied the feedback, the better the final design can be tailored to meet user needs.&lt;/p&gt;

&lt;p&gt;The recent developments with "computer use" have the potential to make UI development much faster, and to reduce cost by reducing the number of focus groups that need to be brought in. UI designers can give the AI simple instructions and find out quickly if the user interface is intuitive enough for the average user (represented by an AI system) to use. If it is not, they can improve the UI iteratively before bringing it to a human focus group.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>anthropic</category>
    </item>
    <item>
      <title>Intercepted System.exit(...) on Java 21</title>
      <dc:creator>Sualeh Fatehi</dc:creator>
      <pubDate>Sun, 25 Aug 2024 19:47:39 +0000</pubDate>
      <link>https://dev.to/sualeh/java-systemexit-on-java-21-445a</link>
      <guid>https://dev.to/sualeh/java-systemexit-on-java-21-445a</guid>
      <description>&lt;p&gt;In Java 8, it is possible (easy enough) to substitute a custom security manager which can capture system exit calls and allow code to continue executing. Java 21 makes this more difficult. However, this can cause a problem for systems that are being upgraded from Java 8 to Java 21, since processes can sometimes fail without any exceptions or logs. This is usually because libraries that used to substitute a custom security manager no longer do that with Java 21. Tracing the source of the error can be difficult.&lt;/p&gt;

&lt;p&gt;Consider this use case:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu094p1v22kvlthty1d6f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu094p1v22kvlthty1d6f.png" alt="Calling code with System.exit"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If "Library 1" starts supporting Java 21, and so does not substitute a custom security manager, it can cause the "Web Application" to terminate without warning.&lt;/p&gt;

&lt;p&gt;For example, look at &lt;a href="https://github.com/apache/ant/commit/689b6ea90ee1fbad580a437137d80609c9336f12" rel="noopener noreferrer"&gt;this note from Apache Ant&lt;/a&gt; which says a custom security manager will be substituted if running on versions of Java lower than Java 18, but not on Java 18 and above. Other libraries are taking similar approaches.&lt;/p&gt;

&lt;p&gt;I have boiled this down to the essentials. Take a look at how this could happen: &lt;a href="https://github.com/sualeh/system-exit-21" rel="noopener noreferrer"&gt;sualeh/system-exit-21&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>java21</category>
      <category>java8</category>
    </item>
    <item>
      <title>What a Character</title>
      <dc:creator>Sualeh Fatehi</dc:creator>
      <pubDate>Fri, 26 Jan 2024 23:46:59 +0000</pubDate>
      <link>https://dev.to/sualeh/what-a-character-4a1p</link>
      <guid>https://dev.to/sualeh/what-a-character-4a1p</guid>
      <description>&lt;p&gt;I've developed a comprehensive programmer's guide for managing international text in code. It features detailed examples in Java, JavaScript, and Python, with each slide offering easily digestible information. The &lt;a href="https://github.com/sualeh/What-a-Character"&gt;"What a Character"&lt;/a&gt; guide begins with a thorough introduction to character representation concepts and Unicode. Following that are practical code demonstrations covering international text handling using regular expressions, case conversion, and numeric data extraction. Additionally, there's a section dedicated to encoding with UTF-8 and UTF-16, including detailed bit-level insights for those interested. Whether you prefer reading or watching, there's even a video presentation available for your convenience.&lt;/p&gt;

&lt;p&gt;On GitHub, &lt;a href="https://github.com/sualeh/What-a-Character"&gt;sualeh/What-a-Character&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I would love to have your comments!&lt;/p&gt;

</description>
      <category>unicode</category>
      <category>international</category>
    </item>
    <item>
      <title>Use ChatGPT to Get SQL Help For **Your** Database Schema</title>
      <dc:creator>Sualeh Fatehi</dc:creator>
      <pubDate>Fri, 05 Jan 2024 01:01:58 +0000</pubDate>
      <link>https://dev.to/sualeh/use-chatgpt-to-get-sql-help-for-your-database-schema-1jbi</link>
      <guid>https://dev.to/sualeh/use-chatgpt-to-get-sql-help-for-your-database-schema-1jbi</guid>
      <description>&lt;p&gt;ChatGPT offers almost magical help with coding questions. You can use it to get help with SQL questions, but then you have to adapt that SQL to match your database tables and schemas.&lt;/p&gt;

&lt;p&gt;What if ChatGPT could act as an expert on your database, and give you valid SQL that would work for you? Guess what - with a little help from SchemaCrawler, it can. You can use SchemaCrawler to "teach" ChatGPT what your database schema looks like.&lt;/p&gt;

&lt;p&gt;SchemaCrawler is a relational database exploration tool. It obtains database schema metadata such as tables, stored procedures, foreign keys, triggers, and so on. SchemaCrawler can output this information in a compact format that ChatGPT can consume. Once you have provided this information to ChatGPT, you can ask ChatGPT to generate SQL for you, and more. ChatGPT can even summarize the purpose of your database, and tell you the function of each table.&lt;/p&gt;

&lt;p&gt;You can try prompts such as the following ones:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Get me the SQL statement to find all the tracks and their artists' names"&lt;/li&gt;
&lt;li&gt;"Get me the SQL statement to find the number of tracks for each artist, for artists that have more than 25 tracks, sorted by those who have the most"&lt;/li&gt;
&lt;li&gt;"List all tables"&lt;/li&gt;
&lt;li&gt;"Describe the Tracks table, with its columns and foreign keys"&lt;/li&gt;
&lt;li&gt;"What is the purpose or function of this database?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To generate the database schema information, you will need Docker installed. Download a SQLite database called &lt;a href="https://github.com/schemacrawler/chinook-database/releases/download/v16.11.7/chinook-database-2.0.1.sqlite"&gt;"chinook-database-2.0.1.sqlite"&lt;/a&gt; into your current directory.&lt;/p&gt;

&lt;p&gt;Run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--mount&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;bind&lt;/span&gt;,source&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PWD&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;,target&lt;span class="o"&gt;=&lt;/span&gt;/home/schcrwlr &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
schemacrawler/schemacrawler &lt;span class="se"&gt;\&lt;/span&gt;
/opt/schemacrawler/bin/schemacrawler.sh &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--server&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;sqlite &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--database&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;chinook-database-2.0.1.sqlite &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--info-level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;standard &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--command&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;serialize &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--output-format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;compact_json &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--output-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;schema.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(If you are using PowerShell on Windows, replace the trailing backslash on each line with a back-tick, and map the current directory differently.)&lt;/p&gt;

&lt;p&gt;Fire up ChatGPT in your browser (even the free one will do if you do not have a subscription). At the prompt, paste in the contents of your "schema.json" file. Then you can try out the prompts (questions) above. Enjoy exploring your database!&lt;/p&gt;

&lt;p&gt;After you have got this working, use the SchemaCrawler command-line to connect to your own database, and explore it using a natural language interface courtesy of ChatGPT.&lt;/p&gt;

</description>
      <category>database</category>
      <category>chatgpt</category>
    </item>
  </channel>
</rss>
