This is a submission for the GitHub Copilot CLI Challenge
From SQLite CLI to Cloud Platform: How I Became an AI Architect in 25 Hours
What I Built
TL;DR: I transformed a local SQLite-based CLI tool into a production-grade analytics platform in 25 hours of actual work (10 evenings, 2-3 hours each), without writing a single line of code by hand.
The Starting Point
Every developer has one: a useful local script that becomes your "technical memory." Mine was a DEV.to analytics tracker—a Python CLI tool backed by SQLite—that helped me understand my content performance beyond basic stats. It tracked follower attribution using 7-day windows with 6-hour tolerance, calculated quality scores with weighted formulas, and performed sentiment analysis on comments using VADER. You can read about it in my When DEV.to Stats Aren't Enough: Building My Own Memory article.
This work is not a port of a third-party project. It is an evolution of a codebase I originally created and maintain, and the repository is publicly available on GitHub: https://github.com/pcescato/devto_stats
But it lived in isolation on my machine.
The Vision
I wanted to transform this personal tool into a secure, scalable web platform accessible from anywhere. My non-negotiable constraints:
- PostgreSQL 18 (not 16, not 17—I wanted latest JSONB features and pgvector compatibility for tomorrow)
- SQLAlchemy Core (NOT ORM—I refused to hide my procedural SQL logic behind ORM magic)
- Authentik (self-hosted IAM with granular groups, not just a basic OAuth proxy)
- Caddy outside Docker (bare metal reverse proxy for performance)
- Apache Superset (initially... more on that pivot later)
The Final Stack
After strategically pivoting away from Superset (1GB RAM was too heavy for my 4GB VPS), the production stack became:
| Component | Technology | Purpose |
|---|---|---|
| Backend | FastAPI (async) | High-performance REST API |
| Database | PostgreSQL 18 | Partitioned tables, JSONB, arrays, pgvector-ready |
| Cache | Valkey 8.0 | Redis-compatible in-memory store |
| Frontend | Streamlit | Interactive data visualization (replaced Superset) |
| Security | Authentik + Caddy | Self-hosted IAM with proxy auth |
| Infrastructure | Docker Compose | Containerized deployment |
Key Features
1. The "Sismograph"
Unlike traditional analytics showing cumulative totals, my Sismograph visualizes real-time activity pulses. It calculates deltas between data snapshots to reveal when traffic actually spikes, not just how many views you have total.
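The delta logic behind the Sismograph is simple enough to sketch in a few lines of Python (a minimal illustration; the function and field names are mine, not the project's):

```python
from datetime import datetime

def activity_pulses(snapshots):
    """Turn cumulative view counts into per-interval deltas.
    `snapshots` is a list of (timestamp, total_views), oldest first."""
    pulses = []
    for (t_prev, v_prev), (t_curr, v_curr) in zip(snapshots, snapshots[1:]):
        # Each pulse is the activity that happened *between* two snapshots
        pulses.append((t_curr, v_curr - v_prev))
    return pulses

history = [
    (datetime(2026, 1, 10, 8, 0), 1000),
    (datetime(2026, 1, 10, 14, 0), 1025),
    (datetime(2026, 1, 10, 20, 0), 1190),  # the evening spike stands out
]
print(activity_pulses(history))
```

Plotting those deltas instead of the raw totals is what makes a quiet morning and a viral evening look different on the chart.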
2. Author DNA
Automatic thematic classification of content:
- "Expertise Tech" (SQL, PostgreSQL, Docker)
- "Human & Career" (feedback, learning, growth)
- "Culture & Agile" (management, performance)
The system analyzes titles and tags, counting keyword matches to determine dominant themes.
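A minimal sketch of that keyword-matching approach (the theme keyword sets here are illustrative stand-ins, not the project's actual lists):

```python
from collections import Counter

# Hypothetical keyword map; the real theme lists live in the project
THEMES = {
    "Expertise Tech": {"sql", "postgresql", "docker", "python"},
    "Human & Career": {"feedback", "learning", "growth", "career"},
    "Culture & Agile": {"management", "performance", "agile"},
}

def dominant_theme(title, tags):
    """Count keyword hits per theme over the title words and tags,
    and return the dominant theme (or None if nothing matches)."""
    words = set(title.lower().split()) | {t.lower() for t in tags}
    scores = Counter({theme: len(words & keywords)
                      for theme, keywords in THEMES.items()})
    theme, hits = scores.most_common(1)[0]
    return theme if hits else None

print(dominant_theme("Tuning PostgreSQL with Docker", ["sql", "performance"]))
```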
3. Future-Proof Architecture
I added Vector(1536) columns via pgvector—even though I'm not using embeddings yet. When I decide between BGE, Gemma, Qwen… the schema is ready.
Why? Because AI-assisted architecture makes preparing for tomorrow free today. I didn't spend 3 days writing schema migrations manually, so I have no emotional attachment preventing me from pivoting.
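As a sketch of what such a column looks like in PostgreSQL DDL (table and column names here are hypothetical, not the generated schema):

```sql
-- Illustrative only: names are assumptions, not the project's schema
CREATE EXTENSION IF NOT EXISTS vector;

ALTER TABLE articles
    ADD COLUMN title_embedding vector(1536);
```

The column sits empty until an embedding model is chosen; the point is that adding it now costs nothing and removes a future migration.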
Demo
🔗 Live Platform:
- API Documentation: analytics.weeklydigest.me/docs
- Dashboard: streamlit.weeklydigest.me (requires Authentik authentication: login: judge, password: Github~Challenge/2k26)
- Source Code: GitHub Repository
Architecture Overview
The platform implements a proxy-based forward authentication model:
User Request
↓
Caddy Reverse Proxy (bare metal)
↓
Authentik Verification (SSO, groups: Admin/Judge)
↓
Protected Service (Streamlit/API)
Security Benefits:
✅ Applications remain "auth-agnostic" (zero authentication code in app)
✅ Centralized identity management with granular RBAC
✅ Single Sign-On across all subdomains
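In Caddy, this forward-auth pattern looks roughly like the following (a hedged sketch based on Authentik's documented Caddy integration; the domain, ports, and outpost address are placeholders, not the production values):

```caddyfile
# Placeholder domain and ports — not the production configuration
streamlit.example.com {
    forward_auth localhost:9000 {
        uri /outpost.goauthentik.io/auth/caddy
        copy_headers X-Authentik-Username X-Authentik-Groups
    }
    # Only requests approved by Authentik reach the app
    reverse_proxy localhost:8501
}
```

The application behind `reverse_proxy` never sees a login form; it just receives identity headers from requests that Authentik has already approved.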
Resource Optimization Story
Initial deployment included Apache Superset (1GB RAM), which proved too heavy. I made a strategic architectural pivot:
❌ Removed: Apache Superset (1GB)
✅ Added: Authentik IAM (600MB) + Custom Streamlit dashboard (512MB)
💡 Result: 10% memory footprint reduction + better security + custom UX
Because the code was AI-generated, pivoting took hours, not days. No sunk cost fallacy—just constraint optimization.
My Experience with GitHub Copilot CLI
No, I Didn't Code for 10 Days Straight
I worked mostly in the evenings (2–3 hours per session), plus one Saturday afternoon and evening, and one Sunday morning — roughly 30 hours total to migrate from a SQLite-based CLI to a production-grade cloud platform with SSO.
My Secret Workflow: Three-Stage Delegation
I didn't talk directly to Copilot CLI. I used a cascade of intelligence:
1. Claude/Gemini (The Architect): Brainstorming and constraint definition. I discussed requirements ("PostgreSQL 18 mandatory", "Core not ORM", "API not CLI"), and it structured my fuzzy ideas into precise technical prompts.
2. GitHub Copilot CLI (The Implementer): I fed it the optimized prompts plus source files (@devto_tracker.py, @content_collector.py, ...). It generated a 57-page technical document, the PostgreSQL schema, FastAPI endpoints, and Docker configs.
3. Me (The Guardian): I validated business-logic preservation and enforced technical constraints.
Working with Copilot as a layered system
I didn’t “chat” with GitHub Copilot CLI. I treated it as an execution layer inside a broader workflow.
Before Copilot ever saw the code, I clarified non-negotiable constraints using a general-purpose model (Claude, sometimes Gemini or ChatGPT): PostgreSQL 18 (not 16 or 17), SQLAlchemy Core instead of an ORM, Authentik and Caddy outside Docker, Streamlit replacing Superset once memory pressure became an issue. These decisions were made upfront and never negotiated later.
Only once the intent was explicit did I involve Copilot CLI. I pointed it at the real codebase and asked it to extract documentation, schemas, and implementation details. In one pass, it produced a 50+ page technical document describing architecture, data flows, algorithms, and business rules — without me writing a single line of code.
The final step was purely human: enforcing invariants. Whenever Copilot proposed a local optimization that conflicted with system-level intent — such as dropping reaction-level history in favor of aggregates — the answer was simply no. Aggregation was allowed only on top of preserved raw data, never instead of it.
What mattered here wasn’t prompt cleverness. It was clarity of constraints. Once those were explicit, Copilot became extremely effective — not as a decision-maker, but as an execution engine.
The quality of the outcome didn’t come from better prompts, but from better invariants.
That structure worked well — until one architectural decision made it clear where responsibility really sits.
The Moment I Had to Remind AI Who's Boss
There was one moment where the limits of delegation became very clear.
At one point, Copilot suggested simplifying the database schema by dropping the reactions table and keeping only aggregated totals per article. From a purely technical standpoint, the suggestion made sense: fewer tables, fewer joins, simpler queries.
But that optimization would have broken something fundamental. Without temporal granularity in reactions, my weighted follower attribution algorithm collapses. The extra tables weren’t accidental complexity — they were deliberate, even if they made the schema heavier.
I didn’t “argue” with the AI or try to outsmart it. I simply restated the constraint: the schema was not up for simplification. This wasn’t a performance issue, it was an architectural one. Copilot adjusted immediately and moved on.
The lesson wasn’t that the AI was wrong. It was that local optimization without systemic intent is just guesswork. AI optimizes syntax; humans guard semantics.
Zero Lines Written, 100% Generated, 100% Controlled
- Technical Documentation: 57 pages in one pass (2 hours vs 2-3 days)
- SQL Schema: 26KB, 18 tables, partitioning, JSONB, arrays, pgvector-ready
- FastAPI Endpoints: 14 routes, async, SQLAlchemy Core
- Authentik Integration: Complete Docker Compose setup
- Tests: pytest suite with 82% coverage
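As a hedged illustration of the schema style described above (range partitioning, JSONB, arrays; all table and column names are my own assumptions, not the generated 26KB schema):

```sql
-- Illustrative sketch only; names are not from the project
CREATE TABLE article_snapshots (
    article_id   bigint      NOT NULL,
    captured_at  timestamptz NOT NULL,
    views        integer     NOT NULL DEFAULT 0,
    tags         text[],            -- native array column
    raw_payload  jsonb,             -- full API response, queryable
    PRIMARY KEY (article_id, captured_at)
) PARTITION BY RANGE (captured_at);

CREATE TABLE article_snapshots_2026_01
    PARTITION OF article_snapshots
    FOR VALUES FROM ('2026-01-01') TO ('2026-02-01');
```

Monthly partitions keep the snapshot history append-friendly while old months can be detached or archived without touching hot data.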
I wrote zero lines of Python code. I wrote prompts, I validated architectures, I corrected trajectories.
The Non-Negotiable Invariants
When I said "PostgreSQL 18," it was non-negotiable. Not on a whim, but because I wanted:
- Improved JSONB performance
- Future pgvector compatibility
When I demanded "SQLAlchemy Core," it was to preserve exact existing SQL patterns—like the "proximity search" finding the closest snapshot within a 6-hour tolerance window.
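The proximity-search logic itself can be sketched in plain Python (the production version runs as a SQLAlchemy Core query; the names and data shapes here are illustrative):

```python
from datetime import datetime, timedelta

TOLERANCE = timedelta(hours=6)

def closest_snapshot(snapshots, target):
    """Return the snapshot whose timestamp is nearest to `target`,
    or None if nothing falls within the 6-hour tolerance window.
    Each snapshot is a (timestamp, payload) tuple."""
    candidates = [s for s in snapshots if abs(s[0] - target) <= TOLERANCE]
    if not candidates:
        return None
    return min(candidates, key=lambda s: abs(s[0] - target))

snaps = [
    (datetime(2026, 1, 10, 8, 0), {"views": 120}),
    (datetime(2026, 1, 10, 20, 0), {"views": 150}),
]
print(closest_snapshot(snaps, datetime(2026, 1, 10, 18, 0)))
```

The tolerance window is the part an ORM abstraction would have obscured: a match that is "close but not close enough" must return nothing, not the nearest row regardless of distance.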
AI generates the "how." You must keep firm ownership of the "why" and the "what."
Impact Metrics
| Task | Traditional Estimate | Actual (AI-assisted) | Time Saved |
|---|---|---|---|
| Technical Documentation | 2-3 days | 2 hours | ~90% |
| SQLite → PostgreSQL Migration | 3-4 days | 4 hours | ~85% |
| FastAPI Development | 5-7 days | 6 hours | ~80% |
| IAM Configuration (Authentik) | 2 days | 3 hours | ~75% |
| Total | 12-16 days | ~30 hours | ~80% |
But the real gain isn't time—it's optionality. When I realized Superset consumed too much RAM, I pivoted to Streamlit in hours, not days. Because I hadn't "written" the code, I had no emotional attachment to what had to be discarded.
Beyond Code: The Infrastructure Blueprints
One of the most revealing moments of this challenge wasn’t about writing better prompts or cleaner Python. It was realizing that code alone was not the artifact worth sharing.
Using GitHub Copilot CLI, I deliberately piloted the AI to export and document the production chassis itself—not just the application logic, but the architectural constraints that make the system actually run on a 4GB VPS.
What I Exported
Instead of pushing isolated source files, I created an anonymized Deployment Blueprint in /deploy/production/, capturing the real operating context:
- docker-compose.yml — Service orchestration with explicit memory ceilings (FastAPI, Streamlit, Valkey)
- Caddyfile — Reverse proxy configuration encoding the SSO flow (Caddy → Authentik → applications)
- deploy_analytics.sh — A zero-downtime deployment script with validation steps
- .env.example — A complete environment template, with every secret replaced by {{CHANGE_ME}}
This wasn’t about reproducibility for its own sake—it was about making constraints visible.
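A hedged sketch of what such memory ceilings look like in Compose (service names, images, and limits are placeholders, not the production values):

```yaml
# Hypothetical excerpt; services and limits are illustrative only
services:
  api:
    image: analytics-api:latest
    mem_limit: 512m
  dashboard:
    image: analytics-dashboard:latest
    mem_limit: 512m
  valkey:
    image: valkey/valkey:8.0
    mem_limit: 256m
    # Cap the in-memory store below its container ceiling
    command: ["valkey-server", "--maxmemory", "200mb"]
```

On a 4GB VPS, making these ceilings explicit in the blueprint is exactly what "making constraints visible" means.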
Defensive Documentation
I also required Copilot to generate what I call defensive documentation: a README that clearly defines boundaries, not just capabilities.
What this directory is NOT:
- Not a backup — this is an architectural snapshot, not a recovery plan
- Not plug-and-play — domains, networks, and volumes must be adapted per environment
- Not containing secrets — all sensitive values have been intentionally scrubbed
This distinction matters. AI didn’t “decide” what was safe to publish, deployable, or acceptable.
It followed instructions.
That, to me, is the real lesson: a modern architect doesn’t delegate responsibility to AI—they use it to enforce clarity, reproducibility, and accountability across the entire system.
Conclusion: From Developer to Prompt Architect
Being an AI architect isn’t about delegating thinking — it’s about orchestrating it.
What this experience taught me is that I’m no longer just a developer who writes code. I’ve become an architect who writes constraints, curates business logic, and decides where complexity is acceptable — and where it isn’t.
GitHub Copilot CLI isn’t my coding assistant. It’s an execution engine. It implements what I specify, quickly and relentlessly, but it doesn’t own the intent. When architectural decisions mattered — like preserving reaction-level granularity in my data model — the responsibility stayed firmly with me.
The real shift isn’t that AI writes code for us. It’s that it forces us to be explicit about our decisions. The clearer the intent, the less the AI needs to “think” — and the more effective it becomes.
As this system scales from thousands to tens of thousands of records, I’m less worried about the code I wrote and more confident in the constraints I defined. Those constraints can evolve. Prompts can be rewritten. Architecture can be re-expressed without dragging years of accidental technical debt behind it.
The future of development isn’t AI replacing developers. It’s developers moving upstream — from implementation to strategic architecture — with AI handling the translation.
GitHub Copilot CLI Challenge — January 2026. 22 commits, 0 lines written by hand, 25 hours of actual work, 7,078+ records migrated, production-grade security deployed.



