DEV Community

Adam El Kabir

Echoflow (🧠 + πŸ“ + πŸ€–)

GitHub Copilot CLI Challenge Submission

This is a submission for the GitHub Copilot CLI Challenge

Echoflow - Active Recall Powered by AI

Taking notes is good. Being tested on them intelligently is even better.


A Personal Introduction (first post)

Hi! I'm Adam, a young self-taught student who spent months deliberately avoiding AI coding tools.

Why? Because I wanted to "earn" my skills first: to understand what I was building without shortcuts, to struggle through bugs, and to truly grasp the fundamentals. I was convinced that using AI would somehow make me... less of a developer.

But then this GitHub community challenge came along, and I thought:
"Maybe I just needed an excuse πŸ˜…"

So here we are. Echoflow is my first real project built almost entirely with GitHub Copilot CLI, and honestly? It's been transformative. Not because it wrote perfect code, but because it let me stay in the flow: thinking, iterating, and building without constantly context-switching.


What is Echoflow?

Echoflow is an AI-powered Active Recall tool designed to solve a problem I face constantly:

I write tons of notes... and then I just read them. That's it.

The idea is simple: instead of passively rereading notes, Echoflow uses AI to quiz you on what you've written. It's like having a study buddy who remembers everything you learned and asks you the right questions at the right time.

Core Features

  • πŸ“ Markdown-powered note-taking (because plain text is so 2010)
  • πŸ€– AI-generated questions based on your notes (powered by OpenRouter)
  • πŸ’¬ Conversational quizzing - chat with AI to deepen understanding
  • πŸ“Š Progress tracking - AI analyzes your strengths and weaknesses (not perfectly implemented yet)
  • 🎨 Dark mode UI - because your eyes deserve better
  • πŸ”’ Privacy-first - your notes stay yours (Supabase RLS)

Here's a demo

The app is pretty intuitive.
After authenticating, you'll find some pre-filled categories:

Inside each category, you'll find some pre-filled notes (open a category by clicking on it):

If you are in a hurry (or feel comfortable with your notes), you can quiz yourself directly by clicking the "Quiz" button. A modal chat will appear with a relevant question about your note:

The app is now more feature-rich than ever! It includes full CRUD operations for both categories and notes, along with powerful tools like a multi-select mode for targeted LLM quizzing and a contextual feedback loop that makes the LLM reference its previous conclusions, keeping conversations consistent. I invite you to explore the project yourself; you can find a deep dive into the mechanics and capabilities on the "Learn More" and "Features" pages (available before authentication).

Try It Out (password for the demo account: N7!qA3@Zp#L9mE$R)

You can try it out here without signing up. Just click "Demo Account" with the provided password and start creating notes and quizzing yourself (using the single or multiple-selection feature).

Why Active Recall?

Active recall is one of the most effective learning techniques backed by cognitive science. Instead of rereading notes (which feels productive but isn't), you actively retrieve information from memory. Echoflow automates this process with AI.


The Process: From Idea to Deployment

The BMade Method

I started with the BMade methodology, which breaks the development process into three phases:

  1. Brainstorm - Define the problem, target audience, and core features.
  2. Map - Design the architecture, tech stack, and data flow.
  3. Make - Implement the features iteratively, testing and refining as you go.

GitHub Copilot CLI strictly followed this plan: no feature creep, no random refactors, just the PRD executed step by step.

The Tech Stack

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚      🌐 Data Acquisition (Input)    β”‚
β”‚  β”œβ”€ Next.js Web Application         β”‚
β”‚  └─ Markdown/Rich-text Editor       β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                  β”‚
                  β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚        πŸ” Auth & Persistence        β”‚
β”‚  β”œβ”€ Supabase (PostgreSQL + Auth)    β”‚
β”‚  β”œβ”€ Row Level Security (RLS)        β”‚
β”‚  └─ Vector Storage                 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                  β”‚
                  β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚       πŸ€– AI Logic (Echoflow)        β”‚
β”‚  β”œβ”€ OpenRouter API Gateway          β”‚
β”‚  β”œβ”€ Prompt Engineering (Context)    β”‚
β”‚  └─ Streaming Responses (SSE)       β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                  β”‚
                  β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚     πŸŽ“ Active Recall (Output)       β”‚
β”‚  β”œβ”€ Dynamic Quiz Interface          β”‚
β”‚  β”œβ”€ Smart Search Tags (JSON)        β”‚
β”‚  └─ Token/Energy Quota Tracking     β”‚ <- I haven't implemented this yet.
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                  β”‚
                  β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚        ☁️ Infrastructure            β”‚
β”‚  β”œβ”€ Vercel (Edge Functions)         β”‚
β”‚  └─ Monorepo (Next.js + Scripts)    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Copilot CLI generated a prd.md file for me:

And an architecture.md file:

The "Orchestrator" Strategy: Controlling the AI Flow

To ensure Echoflow remained consistent, scalable, and professional, I didn't just "chat" with the AI β€” I engineered its environment.

I implemented a strict instruction framework via the .github/copilot-instructions.md file. This file acts as the project's source of truth and enforces the following:

  • Strict Tech Stack Enforcement β€” Ensures the project follows the chosen stack (Next.js 15 App Router, Tailwind CSS 4.0, and pnpm).
  • Architectural Integrity β€” Defaults to Server Components where appropriate and enforces Zod validation for API routes.
  • The BMade Workflow β€” Integrates the BMade methodology into the AI prompts to preserve a "think-before-code" approach.
  • Commit Discipline β€” Ensures each change is well-documented and traceable.

MCP: Context7 Integration for Real-Time Documentation

I've also set up the Context7 MCP (Model Context Protocol) with the /mcp command. Context7 injects real-time, version-specific documentation into the AI's context whenever I mention a library or framework. This provides several benefits:

  • Real-time documentation access β€” the AI can fetch official docs for 60,000+ libraries instantly.
  • Reduced hallucinations β€” Context7 supplies the exact API syntax for the target version, avoiding outdated or incorrect guidance.
  • Version-accurate examples β€” it returns code examples that match the versions I'm using (e.g., Next.js 14+), preventing deprecated patterns.
  • Smoother workflow β€” no manual copy/paste: the AI retrieves the relevant documentation automatically when prompted (for example with use context7).
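For readers who want to try it, a typical Context7 server entry looks roughly like this. The exact file location and shape depend on your MCP client; this assumes the npx-distributed @upstash/context7-mcp package:

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```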

By defining the rules of the game before the first line of code was written, I turned the AI from a simple assistant into a specialized senior engineer tailored for Echoflow.

SKILL.md: A Living Document of AI Feedback Analysis

I recently learned about the concept of a SKILL.md file from this insightful thread. The idea is to keep a dedicated file where I analyze the AI's feedback on my code, identify patterns in its suggestions, and reflect on how to improve my prompts and code structure.

I've always wanted to try this, so I installed some relevant skills:

A Humble Disclaimer

I'm a learner. This is my first real full-stack project, and I'm sure there are "creative architectural choices" in here that would make senior devs cringe πŸ˜…

Some things I'm still figuring out:

  • Could my database schema be better normalized? Maybe.
  • Could my analysis of AI feedback be more sophisticated? Definitely.
  • Are there edge cases I haven't considered? Absolutely.
  • Is security airtight? Probably not enough (but I did implement RLS and follow best practices 🀞).

I probably made some "questionable" decisions in the codebase. But for a first try, I'm pretty proud of the result and the process I went through. The goal was to learn, experiment, and create something useful, not to create a perfect codebase.

If you find bugs or have suggestions, I'm all ears; teaching me is teaching the next generation of devs (but teach me first, please).


The Power of "Vibe-Coding"

Here's what I learned: GitHub Copilot CLI enables 100% "vibe-coding" when used correctly.

What does that mean?

Instead of:

  1. Thinking of a feature
  2. Opening an editor
  3. Writing code
  4. Switching to terminal to test
  5. Switching back to fix bugs
  6. Repeat 47 times

I could just:

  1. Think of a feature
  2. Describe it to Copilot CLI
  3. Watch it implement, test, and iterate
  4. Stay in the flow

To be honest, I did fix a few bugs manually. Sometimes hopping into the code editor, with a quick assist from Copilot Chat, was faster than explaining the full context to the CLI. While I had to switch contexts a few times, the time lost was negligible compared to the massive speed boost the CLI provided.

The "Flow State" Advantage

When you stay in the terminal:

  • No context switching between IDE and browser
  • No manual file navigation
  • No "where was I?" moments
  • Just pure creation mode

It's like pair programming with someone who never gets tired, never judges your questions, and always has the docs ready (but you still have to understand the docs yourself).


By the Numbers (GitHub Copilot CLI's analysis of DEVELOPMENT_LOG.md)

  • Development Time: ~12 hours (spread over 5 days + 1 last to finalize)
  • Lines of Code: ~5,000 (TypeScript + SQL)
  • Copilot CLI Features Used: 30+
  • Coffee Consumed: β˜•β˜•β˜• ... (too many)
  • Stack Overflow Visits: 2 (down from ~50 per project)
  • "Aha!" Moments: Countless

What I Built

| Feature | Status | Notes |
| --- | --- | --- |
| πŸ” Authentication (Google OAuth) | βœ… | Supabase Auth with RLS |
| πŸ“ Note Management (CRUD) | βœ… | Markdown support + syntax highlighting |
| πŸ—‚οΈ Category Organization | βœ… | Color-coded, filterable |
| πŸ€– Single-Note Quizzing | βœ… | AI asks questions, you answer |
| 🎯 Multi-Note Quizzing | βœ… | AI quizzes across multiple notes |
| πŸ’¬ Conversational AI | βœ… | Chat-based learning |
| πŸ“Š Progress Tracking | βœ… | AI analyzes strengths/weaknesses |
| 🎨 Dark Mode UI | βœ… | Glassmorphism design |
| 🌊 Streaming Responses | βœ… | Progressive AI rendering |
| πŸ”’ Demo Account | βœ… | Try without signing up |

Technical Highlights

1. AI Model Rotation System

// For freemium accounts (default): automatically tries free models until one succeeds
const FREE_MODELS = [
    "meta-llama/llama-3.2-3b-instruct:free",
    "z-ai/glm-4.5-air:free",
    "stepfun/step-3.5-flash:free",
    "meta-llama/llama-3.3-70b-instruct:free",
    "qwen/qwen-3-235b-a22b:free",
    "mistralai/mistral-small-3.1-24b:free",
    "google/gemma-3-4b-instruct:free",
];

// For premium accounts (also used by the demo account)
const PREMIUM_MODELS = [
    "openai/gpt-4o-mini:paid", // generally this one is used
    "mistralai/mistral-7b-instruct:paid",
];
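The rotation itself boils down to a fallback loop: try each model in order, return the first success, and only fail once every model has errored. This is an illustrative sketch, not the actual Echoflow implementation; `callModel` stands in for the real OpenRouter fetch call:

```typescript
// Illustrative fallback loop over a list of model IDs.
// `callModel` is a placeholder for the actual OpenRouter request.
async function askWithFallback(
    models: string[],
    prompt: string,
    callModel: (model: string, prompt: string) => Promise<string>,
): Promise<string> {
    let lastError: unknown;
    for (const model of models) {
        try {
            return await callModel(model, prompt); // first success wins
        } catch (err) {
            lastError = err; // rate-limited or down: try the next model
        }
    }
    throw new Error(`All models failed: ${String(lastError)}`);
}
```

The nice property is that free-tier rate limits become invisible to the user: the request only fails if every model in the list does.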

2. Streaming SSE Implementation

// Progressive AI response rendering: relay the upstream AI stream chunk by chunk
const reader = upstreamResponse.body!.getReader(); // upstream fetch() response from OpenRouter
const stream = new ReadableStream({
    start(controller) {
        reader.read().then(function push({ done, value }) {
            if (done) return controller.close();
            controller.enqueue(value);
            return reader.read().then(push);
        });
    },
});
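To actually reach the browser as server-sent events, a stream like this gets wrapped in a `Response` with SSE headers. A small sketch (the helper name is mine, not Echoflow's):

```typescript
// Sketch: wrap a ReadableStream in an SSE response from a route handler
function sseResponse(stream: ReadableStream<Uint8Array>): Response {
    return new Response(stream, {
        headers: {
            "Content-Type": "text/event-stream", // tells the browser to keep the connection open
            "Cache-Control": "no-cache, no-transform", // prevent proxies from buffering chunks
            Connection: "keep-alive",
        },
    });
}
```

On the client, the chunks can then be consumed progressively as they arrive, which is what makes the AI's answer appear to "type itself out".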

3. Row Level Security (RLS)

-- Users can only see their own notes
CREATE POLICY "Users can view own notes"
  ON notes FOR SELECT
  USING (auth.uid() = user_id);

4. JSONB Feedback Storage

// Flexible AI feedback structure example
ai_feedback: {
  analysis: "The user struggles with recall of key concepts, but shows strength in understanding relationships between ideas.",
  weaknesses: "Really stupid at X topic.",
  conclusion: "I need to review X topic more often, maybe with a different approach.",
}
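Since JSONB columns are untyped on the database side, it helps to pin the shape down in the app. A hedged TypeScript sketch, with field names taken from the example above (the interface and guard are my illustration, not the actual Echoflow code):

```typescript
// Shape of the ai_feedback JSONB column, inferred from the example above
interface AiFeedback {
    analysis: string;    // what the AI observed about recall performance
    weaknesses: string;  // topics the learner struggles with
    conclusion: string;  // suggested next step for the learner
}

// A small type guard so untyped JSONB rows can be narrowed safely
function isAiFeedback(value: unknown): value is AiFeedback {
    if (typeof value !== "object" || value === null) return false;
    const v = value as Record<string, unknown>;
    return ["analysis", "weaknesses", "conclusion"].every(
        (k) => typeof v[k] === "string",
    );
}
```

A guard like this keeps the flexibility of JSONB (the AI can evolve its feedback format) while the frontend still gets type-safe access.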

What I Learned

About AI Coding Tools

  • They don't make you "lazy"; they make you productive
  • The quality of output depends on the quality of input (prompt engineering matters)
  • Staying in the terminal is way more efficient than I expected

I've also learned how to prompt correctly! With the right context, the CLI enables 100% vibe-coding.

For a skilled developer (or an aspiring one), the real limit isn’t how fast you can type but how clearly you can communicate your intent.

Here's an example of part of a prompt I used:

The screenshot is hard to read, but my method was:

  • Explain the idea (or issues) precisely
  • Share the relevant file using the "@" symbol
  • Add details if needed (like links)

Yeah, I'm not a professional πŸ˜…, but that was my "prompt cycle".

But it's important to have a good understanding of the underlying code and architecture. It's not about blindly accepting AI suggestions, but about guiding the AI with clear prompts and making informed decisions based on its output.

About programming in general

This challenge was a deep dive back into the frontend ecosystem. After focusing heavily on Backend architecture during my studies, this project was the perfect opportunity to bridge the gap and sharpen my React skills.

  • Modern Stack Discovery: It was my first time getting my hands dirty with Next.js 15 and shadcn/ui. The speed of iteration and the developer experience they provide are truly game-changing for a solo builder.

  • Infrastructure & Auth: Integrating Supabase felt seamless, though setting up Google OAuth provided a rewarding challenge in managing secure authentication flows.

  • AI Orchestration: Finally using OpenRouter gave me the model flexibility I've always wanted. I had to learn prompt engineering on the fly; it was fascinating to see how subtle tweaks in instructions could drastically shift the AI's behavior (you can see my logic in api/ai/generate-questions/route.ts). There is something almost artistic about controlling AI output through code, and it made me deeply appreciate the importance of clear, structured communication with LLMs.
    I can already see the immense potential for creating highly customized AI interactions in the future.


What's Next?

After a well-deserved break, I will continue improving Echoflow based on user feedback, and I plan to add more features built around best practices for learning faster and more effectively with AI.


Final Thoughts

The ultimate irony: I spent my time learning development, only to build a tool that helps me learn even more πŸ€”πŸ˜Ž

Made with πŸ’™ (and lots of β˜•) by Adam

Powered by GitHub Copilot CLI πŸ˜‰

https://github.com/ElAkab
https://twitter.com/El_Akab
