<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mads Hansen</title>
    <description>The latest articles on DEV Community by Mads Hansen (@mads_hansen_27b33ebfee4c9).</description>
    <link>https://dev.to/mads_hansen_27b33ebfee4c9</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3846701%2F6570ac8b-d5e5-413f-9198-dbbfaa431fc1.png</url>
      <title>DEV Community: Mads Hansen</title>
      <link>https://dev.to/mads_hansen_27b33ebfee4c9</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mads_hansen_27b33ebfee4c9"/>
    <language>en</language>
    <item>
      <title>If your AI needs a human SQL translator, it's not really integrated</title>
      <dc:creator>Mads Hansen</dc:creator>
      <pubDate>Tue, 07 Apr 2026 07:34:37 +0000</pubDate>
      <link>https://dev.to/mads_hansen_27b33ebfee4c9/if-your-ai-needs-a-human-sql-translator-its-not-really-integrated-4mpl</link>
      <guid>https://dev.to/mads_hansen_27b33ebfee4c9/if-your-ai-needs-a-human-sql-translator-its-not-really-integrated-4mpl</guid>
      <description>&lt;p&gt;Here's a simple test for whether your AI workflow is real or just cosmetic:&lt;/p&gt;

&lt;p&gt;When someone asks a business question, does the system return an answer from live data?&lt;/p&gt;

&lt;p&gt;Or does a human still have to step in and translate the question into SQL?&lt;/p&gt;

&lt;p&gt;If it's the second one, your AI is not integrated. It's just sitting on top of the same old process.&lt;/p&gt;




&lt;h2&gt;Human middleware is the giveaway&lt;/h2&gt;

&lt;p&gt;This is what fake integration looks like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;user asks AI a question&lt;/li&gt;
&lt;li&gt;AI sounds smart but lacks the real data&lt;/li&gt;
&lt;li&gt;someone technical gets pulled in&lt;/li&gt;
&lt;li&gt;that person checks schema and writes SQL&lt;/li&gt;
&lt;li&gt;result gets posted back manually&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That may be acceptable for a demo.&lt;br&gt;
It does not scale as an operating model.&lt;/p&gt;




&lt;h2&gt;Real integration starts lower in the stack&lt;/h2&gt;

&lt;p&gt;The shift happens when AI tools can query databases and APIs through structured interfaces with scoped permissions and schema awareness.&lt;/p&gt;

&lt;p&gt;That is what turns a chat interface into an actual workflow layer.&lt;/p&gt;

&lt;p&gt;Not because the model became smarter.&lt;br&gt;
Because the data stopped being isolated.&lt;/p&gt;
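
&lt;p&gt;A "structured interface with scoped permissions" can be as small as a read-only lookup function with a table allowlist and bound parameters. A minimal sketch, with sqlite3 standing in for the production database (the table names and schema are invented for illustration; a real layer would also validate column names against the schema):&lt;/p&gt;

```python
import sqlite3

# Hypothetical scope: the only tables this interface will ever touch.
ALLOWED_TABLES = {"customers", "subscriptions"}

def scoped_select(conn, table, column, value):
    """Read-only lookup: allowlisted table, parameterized filter."""
    if table not in ALLOWED_TABLES:
        raise PermissionError(f"table {table!r} is out of scope")
    # The value travels as a bound parameter, never string concatenation.
    return conn.execute(
        f"SELECT * FROM {table} WHERE {column} = ?", (value,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, plan TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "pro"), (2, "free"), (3, "pro")])

print(len(scoped_select(conn, "customers", "plan", "pro")))  # 2
```

&lt;p&gt;The point of the sketch: the model never sees raw SQL access, only a narrow contract it can call.&lt;/p&gt;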




&lt;h2&gt;Where we're seeing the gap&lt;/h2&gt;

&lt;p&gt;A lot of teams are already past the "should we use AI?" phase.&lt;br&gt;
The real question now is whether the infrastructure underneath can support production use.&lt;/p&gt;

&lt;p&gt;That's the idea behind this piece: &lt;a href="https://conexor.io/blog/why-ai-projects-stall-at-the-database-layer?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=content" rel="noopener noreferrer"&gt;Why AI projects stall at the database layer&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And if you want to try that connection layer yourself, &lt;a href="https://conexor.io?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=content" rel="noopener noreferrer"&gt;conexor.io&lt;/a&gt; is built for connecting live databases and APIs to MCP-compatible clients.&lt;/p&gt;

&lt;p&gt;If a human still has to translate every useful question into SQL, the bottleneck hasn't moved.&lt;br&gt;
It just got rebranded.&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>database</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
    <item>
      <title>The hidden cost of every 'quick SQL question'</title>
      <dc:creator>Mads Hansen</dc:creator>
      <pubDate>Tue, 07 Apr 2026 07:33:09 +0000</pubDate>
      <link>https://dev.to/mads_hansen_27b33ebfee4c9/the-hidden-cost-of-every-quick-sql-question-1p7f</link>
      <guid>https://dev.to/mads_hansen_27b33ebfee4c9/the-hidden-cost-of-every-quick-sql-question-1p7f</guid>
      <description>&lt;p&gt;Most teams don't notice the cost because each request feels small.&lt;/p&gt;

&lt;p&gt;A quick SQL question here.&lt;br&gt;
A quick churn check there.&lt;br&gt;
A quick pull for the board deck.&lt;br&gt;
A quick customer list for sales.&lt;/p&gt;

&lt;p&gt;Individually, none of these feel dramatic.&lt;br&gt;
Together, they become a permanent tax on the people who know the data model best.&lt;/p&gt;




&lt;h2&gt;The real issue&lt;/h2&gt;

&lt;p&gt;The problem isn't that the questions are hard.&lt;/p&gt;

&lt;p&gt;It's that the route to the answers is manual by default.&lt;/p&gt;

&lt;p&gt;Every time someone needs a number from a live system, the workflow often looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ask in Slack&lt;/li&gt;
&lt;li&gt;Wait for the right person&lt;/li&gt;
&lt;li&gt;Write or adjust SQL&lt;/li&gt;
&lt;li&gt;Double-check joins and filters&lt;/li&gt;
&lt;li&gt;Paste result back into chat&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's not analytics.&lt;br&gt;
That's organizational friction.&lt;/p&gt;




&lt;h2&gt;Why AI doesn't magically fix it&lt;/h2&gt;

&lt;p&gt;A lot of teams assume adding AI will solve this. But if the model can't access the live database, nothing changes.&lt;/p&gt;

&lt;p&gt;You still have the same bottleneck — just with better wording around it.&lt;/p&gt;

&lt;p&gt;The only durable fix is connecting AI tools to real data in a structured and safe way.&lt;/p&gt;




&lt;h2&gt;What changes when you do&lt;/h2&gt;

&lt;p&gt;Once the connection layer exists, recurring questions stop becoming tickets.&lt;/p&gt;

&lt;p&gt;The value is not just speed. It's reducing interruption cost for engineering and data teams, while making answers available where the questions already happen.&lt;/p&gt;

&lt;p&gt;I wrote a deeper breakdown here: &lt;a href="https://conexor.io/blog/why-ai-projects-stall-at-the-database-layer?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=content" rel="noopener noreferrer"&gt;Why AI projects stall at the database layer&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're experimenting with MCP-based workflows, &lt;a href="https://conexor.io?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=content" rel="noopener noreferrer"&gt;conexor.io&lt;/a&gt; is built exactly for that layer.&lt;/p&gt;

&lt;p&gt;Not another dashboard.&lt;br&gt;
Not another exported CSV.&lt;br&gt;
Just live questions against live data.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>postgres</category>
      <category>ai</category>
      <category>devops</category>
    </item>
    <item>
      <title>AI isn't the bottleneck. Your database access is.</title>
      <dc:creator>Mads Hansen</dc:creator>
      <pubDate>Tue, 07 Apr 2026 07:33:09 +0000</pubDate>
      <link>https://dev.to/mads_hansen_27b33ebfee4c9/ai-isnt-the-bottleneck-your-database-access-is-2io9</link>
      <guid>https://dev.to/mads_hansen_27b33ebfee4c9/ai-isnt-the-bottleneck-your-database-access-is-2io9</guid>
      <description>&lt;p&gt;Most teams think they have an AI problem.&lt;/p&gt;

&lt;p&gt;Usually, they have a data access problem.&lt;/p&gt;

&lt;p&gt;You can buy the best model on the market. Claude, GPT, Gemini — doesn't matter. If every useful answer still depends on someone checking a schema, writing SQL, and pasting results back into Slack, your AI stack is just a prettier front-end for the same internal queue.&lt;/p&gt;




&lt;h2&gt;Where projects actually stall&lt;/h2&gt;

&lt;p&gt;A PM asks:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"Which customers downgraded after the March rollout?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That answer should take seconds.&lt;/p&gt;

&lt;p&gt;Instead, it often becomes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;find the right database&lt;/li&gt;
&lt;li&gt;inspect the tables&lt;/li&gt;
&lt;li&gt;write the query&lt;/li&gt;
&lt;li&gt;validate the result&lt;/li&gt;
&lt;li&gt;send it back manually&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So the team says they're "using AI," but the path to the answer still runs through human middleware.&lt;/p&gt;

&lt;p&gt;That's the bottleneck.&lt;/p&gt;




&lt;h2&gt;Why this matters in production&lt;/h2&gt;

&lt;p&gt;Demos are easy.&lt;/p&gt;

&lt;p&gt;Production is where the questions get messy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What changed week over week?&lt;/li&gt;
&lt;li&gt;Which accounts are at risk?&lt;/li&gt;
&lt;li&gt;What is driving support load in one region?&lt;/li&gt;
&lt;li&gt;Where is revenue slipping relative to plan?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now you need live data access, schema awareness, scoped permissions, and a reliable way for AI tools to query real systems without custom glue code every time.&lt;/p&gt;

&lt;p&gt;That is not a prompt problem. It is infrastructure.&lt;/p&gt;




&lt;h2&gt;The shift&lt;/h2&gt;

&lt;p&gt;If you want AI to be part of actual workflows, your database cannot stay trapped behind tickets and ad hoc SQL.&lt;/p&gt;

&lt;p&gt;That's the idea behind MCP infrastructure.&lt;/p&gt;

&lt;p&gt;We've been writing about this more here: &lt;a href="https://conexor.io/blog/why-ai-projects-stall-at-the-database-layer?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=content" rel="noopener noreferrer"&gt;Why AI projects stall at the database layer&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And if you want to test it with your own stack, &lt;a href="https://conexor.io?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=content" rel="noopener noreferrer"&gt;conexor.io&lt;/a&gt; connects PostgreSQL, MySQL, SQL Server, and APIs to Claude, ChatGPT, Cursor, n8n, and other MCP clients.&lt;/p&gt;

&lt;p&gt;The model is not the hard part anymore.&lt;br&gt;
Getting it to your live data is.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>database</category>
      <category>mcp</category>
      <category>engineering</category>
    </item>
    <item>
      <title>The Monday morning report that should write itself</title>
      <dc:creator>Mads Hansen</dc:creator>
      <pubDate>Sun, 05 Apr 2026 09:16:45 +0000</pubDate>
      <link>https://dev.to/mads_hansen_27b33ebfee4c9/the-monday-morning-report-that-should-write-itself-1ab8</link>
      <guid>https://dev.to/mads_hansen_27b33ebfee4c9/the-monday-morning-report-that-should-write-itself-1ab8</guid>
      <description>&lt;p&gt;Every Monday morning, somewhere in an IT team, someone is doing the same thing.&lt;/p&gt;

&lt;p&gt;They are opening five tabs, pulling numbers from a dashboard, copy-pasting into a spreadsheet, writing a summary in Slack or email, and sending it to a manager who will skim it for 30 seconds.&lt;/p&gt;

&lt;p&gt;This takes 45 minutes. It happens every week. It has happened every week for years.&lt;/p&gt;

&lt;p&gt;It should not exist.&lt;/p&gt;

&lt;h2&gt;The report is not the problem&lt;/h2&gt;

&lt;p&gt;The information in that report matters. Patch compliance rates. Open tickets by priority. Devices offline. SLA performance. Client health scores.&lt;/p&gt;

&lt;p&gt;Leadership needs this. Account managers need this. The team lead needs this to plan the week.&lt;/p&gt;

&lt;p&gt;The problem is not the report. The problem is that a human is assembling it manually from data that already exists in structured systems.&lt;/p&gt;

&lt;p&gt;That is not a reporting problem. That is an automation problem disguised as a workflow.&lt;/p&gt;

&lt;h2&gt;Why it has not been automated yet&lt;/h2&gt;

&lt;p&gt;Most teams know this report could be automated. The reason it has not been is usually one of three things:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. The data is in too many places.&lt;/strong&gt; PSA here, RMM there, maybe a spreadsheet someone maintains on the side. Building an integration felt like a project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. The format keeps changing.&lt;/strong&gt; Every quarter someone asks for a new column or a different breakdown. Hardcoded scripts break.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Nobody owns it.&lt;/strong&gt; It gets done because someone takes responsibility, not because there is a system.&lt;/p&gt;

&lt;p&gt;These are real constraints. But they are not permanent ones.&lt;/p&gt;

&lt;h2&gt;What the automated version looks like&lt;/h2&gt;

&lt;p&gt;The shift that makes this tractable is AI with direct data access.&lt;/p&gt;

&lt;p&gt;Instead of scripting a rigid report that pulls fixed columns in a fixed format, you connect your data sources to a model and let it generate the report dynamically — from a prompt.&lt;/p&gt;

&lt;p&gt;The prompt might look like:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Generate a Monday morning status report covering: patch compliance by client (flag anyone below 80%), open tickets older than 5 days, devices that have not checked in since Friday, and any SLA breaches in the past 7 days. Format it for a non-technical operations lead.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That prompt runs against live data. The model handles the aggregation, the formatting, the flagging. If leadership wants a different view next week, you update the prompt, not a script.&lt;/p&gt;
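
&lt;p&gt;The scheduled version is mostly plumbing: pull the live rows, hand them to the model together with the prompt, post the result. A sketch of the data-gathering half, with sqlite3 as a stand-in and an invented &lt;code&gt;patch_status&lt;/code&gt; table; the actual model call is left out because the client API depends on your stack:&lt;/p&gt;

```python
import sqlite3

PROMPT = ("Generate a Monday morning status report. "
          "Flag any client below 80 percent patch compliance.")
THRESHOLD = 80

def gather_compliance(conn):
    """Pull live numbers and pre-flag anything under the threshold."""
    rows = conn.execute(
        "SELECT client, compliance FROM patch_status ORDER BY compliance"
    ).fetchall()
    return [{"client": c, "compliance": pct, "flagged": THRESHOLD > pct}
            for c, pct in rows]

def build_model_context(results):
    """Flatten the rows into the context the model sees next to PROMPT."""
    lines = ["client,compliance,flagged"]
    for r in results:
        lines.append(f"{r['client']},{r['compliance']},{r['flagged']}")
    return "\n".join(lines)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patch_status (client TEXT, compliance INTEGER)")
conn.executemany("INSERT INTO patch_status VALUES (?, ?)",
                 [("Acme", 97), ("Globex", 72), ("Initech", 88)])

results = gather_compliance(conn)
context = build_model_context(results)
flagged = [r["client"] for r in results if r["flagged"]]
print(flagged)  # ['Globex']
# context and PROMPT then go to whichever model client your stack uses.
```

&lt;p&gt;When leadership asks for a new breakdown, you change PROMPT and the query, not a report template.&lt;/p&gt;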

&lt;p&gt;Platforms like &lt;a href="https://conexor.io?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=content" rel="noopener noreferrer"&gt;Conexor.io&lt;/a&gt; are built for exactly this kind of use case — giving AI models structured access to IT data so you can query it in natural language instead of building bespoke integrations for every report.&lt;/p&gt;

&lt;h2&gt;The 45 minutes is the least of it&lt;/h2&gt;

&lt;p&gt;Saving 45 minutes a week is not nothing — that is 39 hours a year per person doing this manually.&lt;/p&gt;

&lt;p&gt;But the bigger cost is what does not happen during those 45 minutes.&lt;/p&gt;

&lt;p&gt;The engineer doing the report is not fixing anything. They are not reviewing alerts. They are not thinking. They are transcribing data between systems that should already talk to each other.&lt;/p&gt;

&lt;p&gt;And because reports are assembled manually, they are a snapshot in time. The data is already stale by the time the Slack message is sent.&lt;/p&gt;

&lt;p&gt;An automated report that runs at 07:00 every Monday, pulls live data, and drops into a channel is not just faster. It is more accurate. It runs even when the person who usually does it is sick or on holiday.&lt;/p&gt;

&lt;h2&gt;Where to start&lt;/h2&gt;

&lt;p&gt;Pick one report that runs on a predictable schedule and currently involves manual data assembly. Just one.&lt;/p&gt;

&lt;p&gt;Map where the data lives. Figure out if there is an API or a database you can query. Connect it to an AI layer — via MCP, a direct database connector, or an API integration.&lt;/p&gt;

&lt;p&gt;Run the automated version alongside the manual one for two weeks. Compare them. When the team trusts the automated version, stop doing the manual one.&lt;/p&gt;

&lt;p&gt;The goal is not to automate everything at once. It is to remove the first one. After that, the second one becomes obvious.&lt;/p&gt;

&lt;p&gt;Monday mornings should be for decisions, not for copy-paste.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>automation</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Why your LLM knows more about ancient Rome than your own database</title>
      <dc:creator>Mads Hansen</dc:creator>
      <pubDate>Sun, 05 Apr 2026 09:16:16 +0000</pubDate>
      <link>https://dev.to/mads_hansen_27b33ebfee4c9/why-your-llm-knows-more-about-ancient-rome-than-your-own-database-386f</link>
      <guid>https://dev.to/mads_hansen_27b33ebfee4c9/why-your-llm-knows-more-about-ancient-rome-than-your-own-database-386f</guid>
      <description>&lt;p&gt;Your AI assistant can tell you the exact year Julius Caesar crossed the Rubicon.&lt;/p&gt;

&lt;p&gt;Ask it which devices are currently offline on your network? Blank stare.&lt;/p&gt;

&lt;p&gt;This is not a bug. It is architecture.&lt;/p&gt;

&lt;h2&gt;The training data problem&lt;/h2&gt;

&lt;p&gt;LLMs are trained on the public internet — Wikipedia, Stack Overflow, GitHub, books, papers. Ancient Rome is well-documented. Your internal database is not.&lt;/p&gt;

&lt;p&gt;This means your AI has deep knowledge about everything &lt;em&gt;except&lt;/em&gt; the thing you actually care about: your own systems, your clients, your infrastructure state.&lt;/p&gt;

&lt;p&gt;And most teams accept this as a limitation. They use AI for writing and coding, and keep their databases separate — queried only by humans who know SQL, or by dashboards someone built two years ago and nobody touches.&lt;/p&gt;

&lt;h2&gt;The gap is not about AI capability&lt;/h2&gt;

&lt;p&gt;Modern LLMs are genuinely good at reasoning over structured data. Give Claude or GPT-4 a table of device status, patch levels, and last-seen timestamps, and it will immediately surface patterns a human would take an hour to find.&lt;/p&gt;

&lt;p&gt;The gap is about &lt;em&gt;access&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Your LLM does not know what is in your database because nobody connected them. It is an integration problem, not an intelligence problem.&lt;/p&gt;

&lt;h2&gt;MCP changes the equation&lt;/h2&gt;

&lt;p&gt;The Model Context Protocol (MCP) is an open standard that lets you connect data sources directly to AI models. Instead of copy-pasting database results into a chat window, your AI can query your database in real time, as part of the conversation.&lt;/p&gt;

&lt;p&gt;The workflow shift is significant:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before MCP:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Think of a question&lt;/li&gt;
&lt;li&gt;Write SQL (or ask someone who can)&lt;/li&gt;
&lt;li&gt;Export results&lt;/li&gt;
&lt;li&gt;Paste into AI&lt;/li&gt;
&lt;li&gt;Get answer&lt;/li&gt;
&lt;li&gt;Think of follow-up&lt;/li&gt;
&lt;li&gt;Repeat from step 2&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;After MCP:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ask the question&lt;/li&gt;
&lt;li&gt;Get the answer&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That is not an exaggeration. The model handles the query, the follow-up, the filtering, the aggregation.&lt;/p&gt;
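
&lt;p&gt;Under the hood, an MCP server exposes the database as a small set of named tools the model can call mid-conversation. A real server uses the MCP SDK; the sketch below only shows the shape of that contract, with a plain dict as the tool registry and an invented &lt;code&gt;devices&lt;/code&gt; table in sqlite3:&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE devices (name TEXT, hours_offline INTEGER)")
conn.executemany("INSERT INTO devices VALUES (?, ?)",
                 [("edge-01", 2), ("edge-02", 80), ("edge-03", 96)])

def devices_offline_longer_than(hours: int):
    """Tool: devices whose last check-in is older than N hours."""
    rows = conn.execute(
        "SELECT name FROM devices WHERE hours_offline > ? ORDER BY name",
        (hours,)
    ).fetchall()
    return [name for (name,) in rows]

# The registry an MCP server would advertise to the model:
TOOLS = {"devices_offline_longer_than": devices_offline_longer_than}

# When the user asks "which devices have been offline more than 72 hours?"
# the model picks the tool and fills in the argument itself:
print(TOOLS["devices_offline_longer_than"](72))  # ['edge-02', 'edge-03']
```

&lt;p&gt;The two-step workflow above is exactly this: the question arrives, the model selects a tool, the answer comes back in the same turn.&lt;/p&gt;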

&lt;h2&gt;What this looks like in practice&lt;/h2&gt;

&lt;p&gt;Imagine asking: &lt;em&gt;"Which clients have devices that have not checked in for more than 72 hours and are running an outdated agent version?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Without MCP, that is a SQL join, an export, maybe a Slack message to someone on the team. Could take 20 minutes. Could get deprioritized.&lt;/p&gt;

&lt;p&gt;With MCP connected to your CMDB or IT management platform, it is a single question. You get the answer in seconds, with the option to drill down, cross-reference, or trigger an action.&lt;/p&gt;

&lt;p&gt;Tools like &lt;a href="https://conexor.io?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=content" rel="noopener noreferrer"&gt;Conexor.io&lt;/a&gt; are built around exactly this idea — making your IT data queryable by AI, not just by dashboards.&lt;/p&gt;

&lt;h2&gt;The ancient Rome problem is actually worse than it sounds&lt;/h2&gt;

&lt;p&gt;Here is the irony: your LLM is confidently wrong about your infrastructure.&lt;/p&gt;

&lt;p&gt;Ask it about your network topology and it will hallucinate something plausible. Ask it about ancient Rome and it is accurate. The more domain-specific your question, the less reliable the model — unless you ground it in real data.&lt;/p&gt;

&lt;p&gt;Connecting your data sources via MCP does not just make AI more useful. It makes it &lt;em&gt;trustworthy&lt;/em&gt;. The model stops guessing and starts reporting.&lt;/p&gt;

&lt;h2&gt;The practical starting point&lt;/h2&gt;

&lt;p&gt;You do not need to connect everything at once. Start with one data source that gets queried manually every week. Your device inventory. Your ticket backlog. Your patch compliance report.&lt;/p&gt;

&lt;p&gt;Connect it. Ask questions. See what the model surfaces that you were not looking for.&lt;/p&gt;

&lt;p&gt;The goal is not to replace your dashboards. It is to make your data answerable to anyone on the team — not just the people who know SQL.&lt;/p&gt;

&lt;p&gt;Your database knows more about your infrastructure than any LLM ever will. The only question is whether you let the AI access it.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>database</category>
      <category>mcp</category>
      <category>postgres</category>
    </item>
    <item>
      <title>The 3 questions every new IT manager asks on day one (and why they're so hard to answer)</title>
      <dc:creator>Mads Hansen</dc:creator>
      <pubDate>Fri, 03 Apr 2026 18:29:09 +0000</pubDate>
      <link>https://dev.to/mads_hansen_27b33ebfee4c9/the-3-questions-every-new-it-manager-asks-on-day-one-and-why-theyre-so-hard-to-answer-13nn</link>
      <guid>https://dev.to/mads_hansen_27b33ebfee4c9/the-3-questions-every-new-it-manager-asks-on-day-one-and-why-theyre-so-hard-to-answer-13nn</guid>
      <description>&lt;p&gt;It doesn't matter what company you join. The first week as a new IT manager has a predictable shape.&lt;/p&gt;

&lt;p&gt;You sit down with your team, open your laptop, and start asking the questions that seem like they should have obvious answers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;"What do we have?"&lt;/strong&gt; — What devices, what software, what licenses, what infrastructure?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;"What does it cost?"&lt;/strong&gt; — Total spend, renewals coming up, anything overprovisioned?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;"What's out of date?"&lt;/strong&gt; — Unpatched systems, expired licenses, EOL software still running?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Simple questions. Completely reasonable. And almost always — surprisingly hard to answer.&lt;/p&gt;




&lt;h2&gt;Why can't anyone answer them?&lt;/h2&gt;

&lt;p&gt;It's not that the data doesn't exist. It's that it lives in five different places, maintained by different people, updated on different schedules, and nobody has a unified view.&lt;/p&gt;

&lt;p&gt;The asset list is in a spreadsheet. License info is in email threads and a shared drive. Patch status is in the patching tool. Cost data is split between procurement, finance, and a SaaS management platform that only covers some of the stack.&lt;/p&gt;

&lt;p&gt;You end up spending your first two weeks not managing IT, but &lt;em&gt;auditing&lt;/em&gt; IT. Piecing together a picture that someone should have already had.&lt;/p&gt;




&lt;h2&gt;The cost of the answer gap&lt;/h2&gt;

&lt;p&gt;This isn't just onboarding friction. The same problem bites every time there's an audit, a renewal, a security incident, or a board question about IT spend.&lt;/p&gt;

&lt;p&gt;Every time someone needs a clear answer to "what do we have and what's its status," there's an investigation instead of a lookup.&lt;/p&gt;




&lt;h2&gt;What would actually fix it&lt;/h2&gt;

&lt;p&gt;A single queryable source of truth. Not another tool to maintain — but something connected to your existing data that can answer these questions in real time.&lt;/p&gt;

&lt;p&gt;That's the promise of MCP-connected databases: &lt;a href="https://conexor.io/blog/connect-claude-to-postgresql?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=content" rel="noopener noreferrer"&gt;natural language queries against your live IT data&lt;/a&gt;, without the two-week dashboard build.&lt;/p&gt;

&lt;p&gt;The three questions should take three seconds. Not three weeks.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>productivity</category>
      <category>database</category>
      <category>career</category>
    </item>
    <item>
      <title>Stop building dashboards. Start asking questions.</title>
      <dc:creator>Mads Hansen</dc:creator>
      <pubDate>Fri, 03 Apr 2026 18:28:34 +0000</pubDate>
      <link>https://dev.to/mads_hansen_27b33ebfee4c9/stop-building-dashboards-start-asking-questions-4pkd</link>
      <guid>https://dev.to/mads_hansen_27b33ebfee4c9/stop-building-dashboards-start-asking-questions-4pkd</guid>
      <description>&lt;p&gt;Every data team I've talked to has the same problem: they spend more time building dashboards than answering questions.&lt;/p&gt;

&lt;p&gt;A stakeholder asks: &lt;em&gt;"Which accounts are most at risk of churning?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The data team says: &lt;em&gt;"We can build a dashboard for that. It'll be ready in two weeks."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Two weeks later, the dashboard is live. It answers the original question. And then the stakeholder asks a slightly different question, and the cycle starts again.&lt;/p&gt;




&lt;h2&gt;Why dashboards are the wrong abstraction&lt;/h2&gt;

&lt;p&gt;Dashboards are answers to questions you predicted in advance. They're good for monitoring known metrics. They're terrible for exploration, ad-hoc analysis, or any question that wasn't anticipated when the dashboard was built.&lt;/p&gt;

&lt;p&gt;The world doesn't ask predictable questions. Business moves faster than dashboard backlogs.&lt;/p&gt;




&lt;h2&gt;The alternative: queryable data&lt;/h2&gt;

&lt;p&gt;What if instead of building a dashboard, you made your database directly queryable in natural language?&lt;/p&gt;

&lt;p&gt;Not "give everyone raw SQL access" — that's a different kind of problem. But a controlled layer where someone can ask:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"Show me all customers who upgraded in the last 30 days but haven't used the new feature yet."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;And get an answer from live data, in seconds, without a ticket.&lt;/p&gt;

&lt;p&gt;This is what MCP (Model Context Protocol) enables when it's connected to your database. The AI constructs the query, validates it, and returns the result — without the stakeholder needing to know SQL, and without the data team building yet another dashboard.&lt;/p&gt;
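
&lt;p&gt;The "controlled layer" is the part worth being precise about. A minimal sketch of the validate-then-execute step, assuming the model hands back a SQL string; the checks shown are illustrative, not a complete safety model, and the &lt;code&gt;customers&lt;/code&gt; schema is invented:&lt;/p&gt;

```python
import sqlite3

FORBIDDEN = ("insert", "update", "delete", "drop", "alter", ";")

def validate(sql: str) -> str:
    """Reject anything that is not a single plain SELECT."""
    lowered = sql.strip().lower()
    if not lowered.startswith("select"):
        raise ValueError("only SELECT statements are allowed")
    if any(word in lowered for word in FORBIDDEN):
        raise ValueError("statement contains a forbidden keyword")
    return sql

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers "
             "(id INTEGER, upgraded_days_ago INTEGER, used_feature INTEGER)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                 [(1, 10, 0), (2, 45, 0), (3, 5, 1)])

# What the model might generate for the question in the post:
generated = ("SELECT id FROM customers "
             "WHERE 30 > upgraded_days_ago AND used_feature = 0")
rows = conn.execute(validate(generated)).fetchall()
print(rows)  # [(1,)]
```

&lt;p&gt;The stakeholder never sees this layer. They see the question and the answer; the validation sits in between.&lt;/p&gt;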




&lt;h2&gt;It's not about replacing analysts&lt;/h2&gt;

&lt;p&gt;Data analysts are still valuable — for modeling, for data quality, for building the infrastructure that makes this possible. But routing every ad-hoc business question through a two-week dashboard build is a waste of everyone's time.&lt;/p&gt;

&lt;p&gt;If you're curious what this looks like in practice, I wrote about it here: &lt;a href="https://conexor.io/blog/kill-the-data-request-ticket?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=content" rel="noopener noreferrer"&gt;Kill the data request ticket&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The dashboard era isn't over. But for ad-hoc questions? There's a better way.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>ai</category>
      <category>database</category>
      <category>productivity</category>
    </item>
    <item>
      <title>The patch that broke everything (and the spreadsheet that caused it)</title>
      <dc:creator>Mads Hansen</dc:creator>
      <pubDate>Fri, 03 Apr 2026 18:27:58 +0000</pubDate>
      <link>https://dev.to/mads_hansen_27b33ebfee4c9/the-patch-that-broke-everything-and-the-spreadsheet-that-caused-it-4156</link>
      <guid>https://dev.to/mads_hansen_27b33ebfee4c9/the-patch-that-broke-everything-and-the-spreadsheet-that-caused-it-4156</guid>
      <description>&lt;p&gt;It was a routine patch cycle. The team pushed updates to what they thought was the full device list. Three servers weren't on it.&lt;/p&gt;

&lt;p&gt;Two of them were running a legacy payment integration. One was exposed to the internet.&lt;/p&gt;

&lt;p&gt;The post-mortem finding: the asset spreadsheet hadn't been updated in six weeks. Someone had spun up the servers during a crunch period and never added them. The patch ran, the known devices were updated, and the unknown ones stayed vulnerable.&lt;/p&gt;




&lt;h2&gt;The real problem with patch management&lt;/h2&gt;

&lt;p&gt;Most teams treat patch management as a tooling problem. Get the right patch tool, run it on a schedule, check the boxes.&lt;/p&gt;

&lt;p&gt;But patch tools only patch devices they know about. If your inventory is incomplete, your patch coverage is incomplete — by definition.&lt;/p&gt;

&lt;p&gt;This is why &lt;strong&gt;patch management is fundamentally a data problem&lt;/strong&gt;. You can have the best patching tool in the world and still miss 15% of your environment because your inventory is stale.&lt;/p&gt;




&lt;h2&gt;What "complete" actually means&lt;/h2&gt;

&lt;p&gt;A complete asset inventory isn't a spreadsheet updated every quarter. It's a live, queryable record of what's connected to your environment right now — with enough metadata to answer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is this device in scope for this patch?&lt;/li&gt;
&lt;li&gt;When was it last seen on the network?&lt;/li&gt;
&lt;li&gt;Who owns it?&lt;/li&gt;
&lt;li&gt;What's running on it?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without that, you're patching from a map that's already out of date.&lt;/p&gt;
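
&lt;p&gt;Concretely, "queryable" means the four questions above collapse into one lookup against a live table. A sketch with sqlite3 and an invented &lt;code&gt;assets&lt;/code&gt; schema, finding exactly the kind of blind spot that bit the team in the opening story:&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE assets (name TEXT, owner TEXT, role TEXT, "
             "days_since_seen INTEGER, in_patch_scope INTEGER)")
conn.executemany("INSERT INTO assets VALUES (?, ?, ?, ?, ?)", [
    ("web-01", "ops",  "frontend", 0, 1),
    ("pay-07", "none", "payments", 42, 0),  # spun up in a crunch, never added
])

# Anything unowned, out of patch scope, or silent for a week is a blind spot.
blind_spots = conn.execute(
    "SELECT name FROM assets "
    "WHERE owner = 'none' OR in_patch_scope = 0 OR days_since_seen > 7"
).fetchall()
names = [n for (n,) in blind_spots]
print(names)  # ['pay-07']
```

&lt;p&gt;A quarterly spreadsheet cannot run that query. A live inventory runs it every patch cycle.&lt;/p&gt;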




&lt;h2&gt;The fix isn't more process&lt;/h2&gt;

&lt;p&gt;Adding a "please update the spreadsheet" step to your onboarding checklist doesn't solve this. People forget. Contractors don't know to do it. Emergency deployments skip it.&lt;/p&gt;

&lt;p&gt;The fix is making the inventory self-updating — or at least queryable from your actual infrastructure rather than maintained manually.&lt;/p&gt;

&lt;p&gt;If you're thinking about how AI fits into this, &lt;a href="https://conexor.io/blog/why-your-ai-cant-answer-business-questions?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=content" rel="noopener noreferrer"&gt;conexor.io&lt;/a&gt; is worth a look — it's MCP infrastructure that connects your databases to AI tools so you can query live asset data in natural language instead of hunting through stale spreadsheets.&lt;/p&gt;




&lt;p&gt;The spreadsheet didn't cause the breach. The gap between the spreadsheet and reality did.&lt;/p&gt;

</description>
      <category>security</category>
      <category>devops</category>
      <category>productivity</category>
      <category>database</category>
    </item>
    <item>
      <title>What happens when every developer on your team can query production (safely)</title>
      <dc:creator>Mads Hansen</dc:creator>
      <pubDate>Thu, 02 Apr 2026 13:16:18 +0000</pubDate>
      <link>https://dev.to/mads_hansen_27b33ebfee4c9/what-happens-when-every-developer-on-your-team-can-query-production-safely-3jb1</link>
      <guid>https://dev.to/mads_hansen_27b33ebfee4c9/what-happens-when-every-developer-on-your-team-can-query-production-safely-3jb1</guid>
      <description>&lt;p&gt;The data team is the bottleneck. But giving everyone raw database access is terrifying. There's a middle path.&lt;/p&gt;

&lt;p&gt;Most engineering teams have an unwritten rule: only certain people touch the production database. It makes sense. One bad &lt;code&gt;SELECT *&lt;/code&gt; on a 50M row table can bring down your app. One accidental &lt;code&gt;DELETE&lt;/code&gt; without a &lt;code&gt;WHERE&lt;/code&gt; clause is a very bad afternoon.&lt;/p&gt;

&lt;p&gt;So instead, teams build dashboards. Write internal tools. Route every ad-hoc question through the data analyst. The bottleneck is real. The fear is real. But the trade-off is painful — especially when the data team is always two weeks behind.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's changed
&lt;/h2&gt;

&lt;p&gt;AI-native query layers can now act as a safe intermediary between your developers and your production database.&lt;/p&gt;

&lt;p&gt;You're not giving your frontend dev raw SQL access. You're giving them natural language access, where the query is generated, validated, and executed through a controlled interface with scoped permissions.&lt;/p&gt;

&lt;p&gt;They ask: &lt;em&gt;"How many trials converted last week?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;They don't touch the schema. They don't accidentally run a full table scan. They just get the answer.&lt;/p&gt;




&lt;h2&gt;
  
  
  The safety properties that matter
&lt;/h2&gt;

&lt;p&gt;For this to work in production, you need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Read-only by default&lt;/strong&gt; — no INSERT, UPDATE, DELETE unless explicitly configured&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parameterized queries&lt;/strong&gt; — not string substitution (protects against prompt injection attacks)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scoped tools&lt;/strong&gt; — the AI only sees the tables and columns you expose&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Audit log&lt;/strong&gt; — who asked what, when, against which tables&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These aren't nice-to-haves. They're the difference between "we let AI query our database" and "we let AI query our database &lt;em&gt;safely&lt;/em&gt;."&lt;/p&gt;
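&lt;p&gt;To make the parameterization point concrete, here's a minimal sketch in Python's &lt;code&gt;sqlite3&lt;/code&gt; (standing in for whatever driver your query layer uses; the table and values are invented):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trials (id INTEGER, week TEXT)")
conn.executemany("INSERT INTO trials VALUES (?, ?)", [(1, "2026-W14"), (2, "2026-W13")])

# Unsafe: splicing a model-generated value straight into SQL text.
# query = f"SELECT COUNT(*) FROM trials WHERE week = '{user_week}'"  # injection risk

# Safe: the value travels as a bound parameter, never as SQL text.
user_week = "2026-W14"
row = conn.execute("SELECT COUNT(*) FROM trials WHERE week = ?", (user_week,)).fetchone()
print(row[0])  # 1
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;The bound parameter is never parsed as SQL, so a hostile value smuggled into the question can't rewrite the query. That's the property you want from the query layer, whoever builds it.&lt;/p&gt;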




&lt;h2&gt;
  
  
  How we built this into conexor.io
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://conexor.io?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=content" rel="noopener noreferrer"&gt;conexor.io&lt;/a&gt; is MCP infrastructure for database access. Every query runs parameterized. Tools are scoped to the data sources you configure. Read-only by default.&lt;/p&gt;

&lt;p&gt;If you're curious about the security model in detail: &lt;a href="https://conexor.io/blog/mcp-security-model?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=content" rel="noopener noreferrer"&gt;How conexor.io enforces zero data exposure&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Democratizing data access doesn't have to mean democratizing risk. The two are separable — if your infrastructure is built for it.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>database</category>
      <category>ai</category>
      <category>engineering</category>
    </item>
    <item>
      <title>The Monday morning report that should write itself (but doesn't)</title>
      <dc:creator>Mads Hansen</dc:creator>
      <pubDate>Thu, 02 Apr 2026 13:15:28 +0000</pubDate>
      <link>https://dev.to/mads_hansen_27b33ebfee4c9/the-monday-morning-report-that-should-write-itself-but-doesnt-pic</link>
      <guid>https://dev.to/mads_hansen_27b33ebfee4c9/the-monday-morning-report-that-should-write-itself-but-doesnt-pic</guid>
      <description>&lt;p&gt;Every Monday. Same 6 queries. Copy-paste into a spreadsheet. Send to Slack. Repeat forever.&lt;/p&gt;

&lt;p&gt;I've talked to dozens of engineering leads and ops managers. They all have a version of the same Monday ritual:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open database client&lt;/li&gt;
&lt;li&gt;Run the weekly queries&lt;/li&gt;
&lt;li&gt;Open Excel&lt;/li&gt;
&lt;li&gt;Paste and format&lt;/li&gt;
&lt;li&gt;Send to #ops-updates&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;45 minutes. Every week. Nobody questions it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why hasn't this been automated?
&lt;/h2&gt;

&lt;p&gt;The data is right there. The queries are deterministic. The output format never changes. And yet — a human is in the loop for 45 minutes of mechanical work every single Monday.&lt;/p&gt;

&lt;p&gt;The usual answer: &lt;em&gt;"We tried to build a dashboard but it only covers some of the queries"&lt;/em&gt; or &lt;em&gt;"We keep meaning to automate it but nobody owns it."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Here's the thing: the technology to eliminate this has existed for a while. Cron jobs, scripts, even basic automation. But they're brittle — they break when schemas change, they need someone to maintain them, and they still don't handle the "I need a slightly different cut of this" request that comes every other week.&lt;/p&gt;
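&lt;p&gt;For concreteness, the script version of the ritual is only a handful of lines (the schema and numbers below are invented, and &lt;code&gt;sqlite3&lt;/code&gt; stands in for your real database):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE signups (id INTEGER, week TEXT)")
conn.executemany("INSERT INTO signups VALUES (?, ?)", [(1, "W14"), (2, "W14"), (3, "W13")])

# The Monday ritual, mechanized: run the fixed queries, then format
# the lines a human would otherwise paste into Slack by hand.
WEEKLY_QUERIES = {
    "signups this week": ("SELECT COUNT(*) FROM signups WHERE week = ?", ("W14",)),
}

lines = []
for label, (sql, params) in WEEKLY_QUERIES.items():
    (value,) = conn.execute(sql, params).fetchone()
    lines.append(f"{label}: {value}")

print("\n".join(lines))  # signups this week: 2
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;And that script is exactly the artifact that breaks silently the day someone renames a column, and it still can't answer the "slightly different cut" question.&lt;/p&gt;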




&lt;h2&gt;
  
  
  What AI-native query layers actually change
&lt;/h2&gt;

&lt;p&gt;The right solution isn't a dashboard. It's a layer that can answer any question against your live data — including the ones you didn't predict.&lt;/p&gt;

&lt;p&gt;With MCP-connected databases, you don't need to pre-build every query. You describe what you want in plain English, and the AI constructs and runs the query against your actual database.&lt;/p&gt;

&lt;p&gt;The Monday morning report becomes: &lt;em&gt;"Summarize this week's key metrics vs last week and flag anything anomalous."&lt;/em&gt; Done. Including the anomalies you didn't think to check for.&lt;/p&gt;




&lt;h2&gt;
  
  
  The missing piece
&lt;/h2&gt;

&lt;p&gt;For most teams, the blocker is the connection layer — getting your database to actually talk to an AI tool in a secure, structured way.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://conexor.io?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=content" rel="noopener noreferrer"&gt;conexor.io&lt;/a&gt; is MCP infrastructure that handles this without custom code. Connect your PostgreSQL, MySQL, or SQL Server database, and your AI tools can query it directly.&lt;/p&gt;

&lt;p&gt;More on eliminating the data request bottleneck: &lt;a href="https://conexor.io/blog/kill-the-data-request-ticket?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=content" rel="noopener noreferrer"&gt;Kill the data request ticket&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're still doing the Monday copy-paste ritual — 2026 is the year to stop.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>automation</category>
      <category>postgres</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Why your LLM knows more about ancient Rome than your own database</title>
      <dc:creator>Mads Hansen</dc:creator>
      <pubDate>Thu, 02 Apr 2026 13:15:27 +0000</pubDate>
      <link>https://dev.to/mads_hansen_27b33ebfee4c9/why-your-llm-knows-more-about-ancient-rome-than-your-own-database-609</link>
      <guid>https://dev.to/mads_hansen_27b33ebfee4c9/why-your-llm-knows-more-about-ancient-rome-than-your-own-database-609</guid>
      <description>&lt;p&gt;Your AI can write a sonnet about Marcus Aurelius. Ask it your Q4 churn rate — it guesses.&lt;/p&gt;

&lt;p&gt;There's a weird paradox in AI tooling right now. The models are extraordinary. GPT-4, Claude, Gemini — they've ingested the internet. They can reason, code, explain, synthesize.&lt;/p&gt;

&lt;p&gt;But ask them something actually useful — &lt;em&gt;"what's our average deal size this quarter?"&lt;/em&gt; or &lt;em&gt;"which clients haven't logged in for 30 days?"&lt;/em&gt; — and they stall.&lt;/p&gt;

&lt;p&gt;Why? Because the answer isn't on the internet. It's in your database. And your LLM has never seen your database.&lt;/p&gt;




&lt;h2&gt;
  
  
  The gap is infrastructure, not intelligence
&lt;/h2&gt;

&lt;p&gt;This isn't a model problem. The models are ready. The problem is that your business data — every transaction, every user event, every metric that actually matters — lives in a PostgreSQL or MySQL database that no AI tool has ever touched.&lt;/p&gt;

&lt;p&gt;So every AI conversation about your own business is either:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Based on data you manually copy-pasted into the prompt&lt;/li&gt;
&lt;li&gt;A hallucination dressed up as an answer&lt;/li&gt;
&lt;li&gt;Met with a flat "I don't have access to that information"&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  MCP closes the gap
&lt;/h2&gt;

&lt;p&gt;Model Context Protocol is the standard that lets AI models connect directly to external data sources — databases, APIs, tools — in real time. Instead of copy-pasting, the model queries. Instead of guessing, it returns actual numbers.&lt;/p&gt;
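&lt;p&gt;Under the hood it's JSON-RPC: the client sends a &lt;code&gt;tools/call&lt;/code&gt; request, the server runs the query against your data source, and the result comes back in the response. A request looks roughly like this (the &lt;code&gt;query_database&lt;/code&gt; tool name and its argument shape are illustrative, not from any particular server):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "query_database",
    "arguments": { "question": "Who are our top 10 customers by LTV?" }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;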

&lt;p&gt;Ask Claude: &lt;em&gt;"Who are our top 10 customers by LTV?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It hits your database. It returns the actual list. In about 4 seconds.&lt;/p&gt;

&lt;p&gt;Not 4 days. Not a ticket to the data team. Not a dashboard that only answers the questions you thought to ask last quarter.&lt;/p&gt;




&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;

&lt;p&gt;If you want to try this with your own database, &lt;a href="https://conexor.io?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=content" rel="noopener noreferrer"&gt;conexor.io&lt;/a&gt; lets you connect PostgreSQL, MySQL, or SQL Server to Claude, Cursor, or any MCP client in under 5 minutes. Free tier, no credit card.&lt;/p&gt;

&lt;p&gt;We wrote a more detailed breakdown here: &lt;a href="https://conexor.io/blog/why-your-ai-cant-answer-business-questions?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=content" rel="noopener noreferrer"&gt;Why your AI assistant can't answer business questions&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The models are ready. The question is whether your data is connected.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>database</category>
      <category>mcp</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Your monitoring stack is lying to you (and you can prove it with SQL)</title>
      <dc:creator>Mads Hansen</dc:creator>
      <pubDate>Tue, 31 Mar 2026 05:10:03 +0000</pubDate>
      <link>https://dev.to/mads_hansen_27b33ebfee4c9/your-monitoring-stack-is-lying-to-you-and-you-can-prove-it-with-sql-4c9e</link>
      <guid>https://dev.to/mads_hansen_27b33ebfee4c9/your-monitoring-stack-is-lying-to-you-and-you-can-prove-it-with-sql-4c9e</guid>
      <description>&lt;p&gt;Pull up your monitoring dashboard. Now open your ticketing system. Now check your deployment logs.&lt;/p&gt;

&lt;p&gt;Are they telling the same story?&lt;/p&gt;

&lt;p&gt;They're not. They never are.&lt;/p&gt;

&lt;p&gt;Your dashboard shows green. Your tickets show a slow degradation that started 3 weeks ago. Your deployment logs show a config change that went out the same day the tickets started. Nobody connected the dots because nobody was looking at all three at once.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dashboards visualize data. They don't correlate it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The fix is boring: get your operational data into a SQL-queryable form, then ask cross-system questions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; 
  &lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;deployed_at&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="k"&gt;COUNT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;tickets_opened&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="k"&gt;AVG&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;time_to_resolve&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;avg_resolve_minutes&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;deployments&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt;
&lt;span class="k"&gt;LEFT&lt;/span&gt; &lt;span class="k"&gt;JOIN&lt;/span&gt; &lt;span class="n"&gt;tickets&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;created_at&lt;/span&gt; &lt;span class="k"&gt;BETWEEN&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;deployed_at&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;deployed_at&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;INTERVAL&lt;/span&gt; &lt;span class="s1"&gt;'48 hours'&lt;/span&gt;
&lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;deployed_at&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;deployed_at&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That query will tell you more about deployment quality than any dashboard you've built.&lt;/p&gt;

&lt;p&gt;The challenge is getting your data into one place. Most teams have their monitoring, ticketing, and deployment data in separate systems with separate APIs and separate export formats.&lt;/p&gt;

&lt;p&gt;That's what structured IT data platforms solve — and why we built &lt;a href="https://conexor.io?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=content" rel="noopener noreferrer"&gt;Conexor.io&lt;/a&gt; to connect those sources and expose them via MCP so you can query across them with natural language or SQL.&lt;/p&gt;

&lt;p&gt;Your data is telling the truth. Your dashboards are just not asking the right questions.&lt;/p&gt;

</description>
      <category>database</category>
      <category>devops</category>
      <category>postgres</category>
      <category>architecture</category>
    </item>
  </channel>
</rss>
