
Tom Schoutens


CodePop - an AI-powered quiz generator

GitHub Copilot CLI Challenge Submission

This is a submission for the GitHub Copilot CLI Challenge

What I Built

CodePop generates interactive multiple-choice quizzes on coding topics using AI. Simply enter "Quiz me about X" where X is your chosen coding topic, and get 5 dynamically generated questions with instant feedback and scoring.

It was a fun experience to incorporate an LLM into an app. I am currently studying for a certification, and I like that I can pick any coding topic and learn it in a fun way.

Features

  • AI-Powered Question Generation: Uses LLM (Ollama or cloud API) to create factual, verifiable quiz questions
  • Smart Topic Validation: Supports 19 coding topics across backend, frontend, and mobile development
  • Intelligent Caching: Stores generated questions in SQLite to avoid repetition within a session
  • Difficulty Levels: Each question categorized as easy, medium, or hard
  • Graceful Degradation: Shows partial quiz if LLM fails, with helpful notifications
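
The graceful-degradation idea from the list above can be sketched as follows. This is an illustrative sketch, not CodePop's actual code: the `Question` shape and the `buildQuiz` helper are assumptions.

```typescript
interface Question {
  id: number;
  text: string;
  options: string[];
  answerIndex: number;
  difficulty: "easy" | "medium" | "hard";
}

interface QuizResponse {
  questions: Question[];
  notice?: string; // set when the quiz is incomplete
}

// Serve whatever cached questions exist; top up via the LLM, and fall back
// to a partial quiz with a helpful notice instead of an error if it fails.
async function buildQuiz(
  cached: Question[],
  generate: () => Promise<Question[]>,
  target = 5
): Promise<QuizResponse> {
  if (cached.length >= target) {
    return { questions: cached.slice(0, target) };
  }
  try {
    const fresh = await generate();
    return { questions: [...cached, ...fresh].slice(0, target) };
  } catch {
    // LLM failed: show a partial quiz rather than nothing.
    return {
      questions: cached,
      notice: `Only ${cached.length} of ${target} questions available right now.`,
    };
  }
}
```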

Supported Topics

Backend: Java, Python, Node.js, C#, Go, Rust, PHP

Frontend: JavaScript, TypeScript, React, Vue, Angular, HTML, CSS, Svelte

Mobile: Swift, Kotlin, React Native, Flutter

Subtopics are supported too (e.g., "Java Spring Boot", "React hooks").
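
One way subtopic validation could work is prefix matching against the supported base topics: a sketch under that assumption, not CodePop's actual validator.

```typescript
// Supported base topics from the lists above, normalized to lowercase.
const SUPPORTED = [
  "java", "python", "node.js", "c#", "go", "rust", "php",
  "javascript", "typescript", "react", "vue", "angular",
  "html", "css", "svelte",
  "swift", "kotlin", "react native", "flutter",
];

// A topic is valid if it matches a base topic exactly, or starts with one
// followed by a space (so "Java Spring Boot" and "React hooks" pass).
function isSupportedTopic(input: string): boolean {
  const topic = input.trim().toLowerCase();
  return SUPPORTED.some(
    (base) => topic === base || topic.startsWith(base + " ")
  );
}
```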

Demo

GitHub: https://github.com/mastatomba/codepop

Screenshots

The user enters a coding (sub)topic and clicks "Start Quiz".

The user goes through the quiz flow and gets immediate feedback.

After the quiz ends, the user is presented with a result screen.

The user can continue with the same topic, or start a new quiz.

How It Works

  1. User enters a coding topic
  2. Frontend loads previously asked questions from browser session storage
  3. Backend validates topic against supported list
  4. Backend checks SQLite for existing questions (excluding already-asked ones)
  5. If fewer than 5 questions are available, the LLM generates new questions with explanations
    • Backend passes ALL existing question texts to LLM to avoid duplicates
    • LLM sees complete history and generates questions on different aspects
  6. New questions stored in database for future use
  7. Frontend saves question IDs to session storage
  8. User answers 5 multiple-choice questions
  9. Score displayed at the end
  10. Taking the same topic again shows different questions (within the same session)
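
The session-storage bookkeeping in steps 2 and 7 could look like the sketch below. The storage interface is injected so the same functions work with the browser's `sessionStorage` or a test double; the key names and function names are illustrative, not CodePop's actual code.

```typescript
// Minimal key-value interface matching the part of Web Storage we need.
interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Step 2: load the IDs of questions already asked for this topic.
function loadAskedIds(store: KeyValueStore, topic: string): number[] {
  const raw = store.getItem(`codepop:asked:${topic}`);
  return raw ? (JSON.parse(raw) as number[]) : [];
}

// Step 7: merge the new question IDs into the stored set, so the backend
// can exclude them on the next quiz for the same topic.
function saveAskedIds(
  store: KeyValueStore,
  topic: string,
  newIds: number[]
): void {
  const merged = Array.from(
    new Set([...loadAskedIds(store, topic), ...newIds])
  );
  store.setItem(`codepop:asked:${topic}`, JSON.stringify(merged));
}

// In the browser: saveAskedIds(window.sessionStorage, "react", [12, 34]);
```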

My Experience with GitHub Copilot CLI

I wrote down my idea in a Markdown file, which I shared with Copilot CLI to start brainstorming. This phase went really well, and together we created the architecture and decision docs.

Later on, I implemented some features together with Copilot CLI, such as the Ollama quiz-master LLM.

When testing the app, I sometimes got errors with questions containing code snippets, since we used JSON as the LLM output format. Together with Copilot CLI, we made the parsing more robust.
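
A common way to harden this kind of parsing, sketched below as an assumption rather than CodePop's exact fix: models often wrap the payload in markdown code fences or add prose around it, so slicing to the outermost object literal before calling `JSON.parse` avoids many failures.

```typescript
// Extract and parse the first-to-last brace span from raw LLM output.
// Throws if no object literal can be found at all.
function parseLlmJson(raw: string): unknown {
  const start = raw.indexOf("{");
  const end = raw.lastIndexOf("}");
  if (start === -1 || end <= start) {
    throw new Error("No JSON object found in LLM output");
  }
  return JSON.parse(raw.slice(start, end + 1));
}
```

This handles surrounding prose and code fences, but not invalid escapes inside question text; a stricter fix would also validate the parsed object against the expected quiz schema.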

Plan mode works fine at the beginning of a session, but I noticed that after continuing the conversation, going back to plan mode does not prevent it from writing files.

In the end, I had a good experience working with GitHub Copilot CLI, and I will continue to use it for my personal projects.
