DEV Community

yuki uix


How I Solved My Own Pain Point with AI: A Frontend Dev's GitLab Hackathon Diary

I recently submitted a project to the GitLab AI Hackathon 2026. Built it in 3 days. This isn't a tutorial or a "look what I built" post — it's a diary of what actually happened: the problem I found, how I thought about it, what I built, where I got stuck, and what I walked away understanding.


The Problem: A Page That Exhausted Me

It started with a course detail page.

The page had to serve multiple countries, multiple user roles, all from a single codebase. The UI logic depended on 8 state dimensions: 4 user states (organization membership, employee status, certification, enrollment) and 4 course states (organization, pricing, subscription support, quote support).

These 8 dimensions combined to determine what banner to show, how to display pricing, whether the CTA button was visible, clickable, or should trigger a tooltip or flyout — and in what priority order these checks should run.
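To make the scale concrete, here's a minimal sketch of those dimensions as types. The field names are my guesses at what the shapes might look like, not the project's actual code:

```typescript
// Hypothetical sketch of the 8 state dimensions described above.
// Field names are illustrative, not the real codebase.
interface UserState {
  isOrgMember: boolean; // organization membership
  isEmployee: boolean;  // employee status
  isCertified: boolean; // certification
  isEnrolled: boolean;  // enrollment
}

interface CourseState {
  organization: string | null; // course organization
  hasPrice: boolean;           // pricing set
  supportsSubscription: boolean;
  supportsQuote: boolean;
}

type PageInput = { user: UserState; course: CourseState };

// Treating each dimension as binary already gives 2^8 = 256 raw
// combinations feeding the banner / pricing / CTA decisions.
const rawCombinations = 2 ** 8;
```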

The logic itself wasn't the hard part. What exhausted me was this: the same business intent was being described four separate times.

The BA wrote it in a requirements doc. UX annotated it in Figma. I translated it into TypeScript hooks. QA rewrote it as plain-text test cases.

None of these formats were compatible. When something changed, all four roles had to update their version and sync in a meeting. The real logic lived in collective memory — not in any single file that could be version-controlled.

Every time I needed to modify one condition, I'd spend a full working day just locating what might be affected, then make the change, then worry I'd broken something else.

I started calling this the Specification Gap: the same business intent, lost in translation across every role handoff, with no single source of truth.


The Insight: What Architecture Drawings Taught Me

Before becoming a frontend developer, I studied architecture for five years. The discipline that stayed with me wasn't drawing — it was the ability to move from a vague brief to a concrete, executable plan. That's what architectural training drills into you.

When I was thinking about the Specification Gap, something clicked: CAD drawings solve this problem in construction.

One set of structural drawings is read by architects, structural engineers, contractors, and clients. Each extracts what they need — spatial relationships, load calculations, dimensions, visual outcomes — but it's one file. Update it once, everyone is working from the latest version.

Software development is missing the equivalent. BA documents, Figma files, TypeScript hooks, test cases — that's four separate "drawings," not one.

The direction became clear: I didn't need another document format. I needed a single source of truth that every role could read from, with AI acting as the translation layer.


The Architecture: Getting the Layers Right First

Before the hackathon, I had already been working on this in my real codebase. The key insight wasn't about AI — it was about structure.

I identified five distinct layers that the existing logic could be reorganized into:

  • types.ts — TypeScript interfaces and enums only
  • rules.ts — rule table: input conditions → scenario name, ordered by priority
  • presentation.ts — scenario name → UI output mapping
  • resolver.ts — orchestration only
  • utils.ts — pure helper functions

The core principle: rules.ts decides what situation we're in. presentation.ts decides what to show. These two must never mix.

This separation matters because it makes each layer's responsibility unambiguous. Business rule changed? Touch rules.ts. UI variant changed? Touch presentation.ts. Both changed? Edit them separately, independently.
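A minimal sketch of the split, using illustrative field and scenario names rather than the project's real ones:

```typescript
// rules.ts — decides WHAT situation we're in, ordered by priority.
type Scenario = "notInOrg" | "alreadyEnrolled" | "default";

interface Rule {
  priority: number;
  when: (input: { isOrgMember: boolean; isEnrolled: boolean }) => boolean;
  scenario: Scenario;
}

const rules: Rule[] = [
  { priority: 1, when: (i) => !i.isOrgMember, scenario: "notInOrg" },
  { priority: 2, when: (i) => i.isEnrolled, scenario: "alreadyEnrolled" },
  { priority: 3, when: () => true, scenario: "default" }, // fallback
];

// presentation.ts — decides WHAT to show for each scenario.
const presentation: Record<Scenario, { banner?: string; ctaVisible: boolean }> = {
  notInOrg: { banner: "Join the organization to access this course", ctaVisible: false },
  alreadyEnrolled: { banner: "You are enrolled", ctaVisible: false },
  default: { ctaVisible: true },
};

// resolver.ts — orchestration only: first matching rule wins.
function resolve(input: { isOrgMember: boolean; isEnrolled: boolean }) {
  const rule = rules
    .slice()
    .sort((a, b) => a.priority - b.priority)
    .find((r) => r.when(input))!; // fallback always matches
  return presentation[rule.scenario];
}
```

The resolver stays trivially small because all the decision-making lives in the rule table; adding a scenario never means editing orchestration code.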

I validated this by copying the existing test file and replacing the hook calls with the specDriven implementation. All tests passed. Then I manually checked the UI with different user accounts in the browser — no regressions.

What had previously taken a full working day to locate and safely modify could now be covered by a plain-language description, with test cases as a guarantee.

That felt like acceleration with quality assurance — not just moving faster, but moving faster without losing confidence.


The Hackathon: Turning the Idea into a GitLab Flow

The GitLab AI Hackathon theme was "You Orchestrate. AI Accelerates." The requirement was clear: not a chatbot. Agents that react to triggers and take action.

My idea was straightforward: let developers describe a UI scenario in plain English inside a GitLab Issue, and have a flow automatically update specDriven/rules.ts, open a Merge Request, and post a structured summary back to the Issue.

The flow has four components:

read_existing_rules (DeterministicStepComponent) — reads specDriven/rules.ts from the repo. If the file doesn't exist yet, returns NO_RULES_FILE_FOUND to signal first-rule creation downstream.

scenario_parser (AgentComponent) — takes the natural language description and existing rules, outputs structured JSON classifying the scenario as a gate condition, user state condition, or course state condition, and determines correct priority placement.

spec_generator (AgentComponent) — generates the complete updated rules.ts, commits it to a feature branch, opens a Merge Request.

post_comment (AgentComponent) — posts a structured summary to the triggering Issue: plain-English business summary, presentation.ts code snippet, Jest test cases covering the new scenario and adjacent regression, plus a next steps checklist with the MR link.

Why TypeScript instead of YAML for the spec file? This lives in a frontend repo. TypeScript gives type checking, IDE autocompletion, and as const satisfies Rule[] to preserve literal type precision. It should feel native to the developers who read it.
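A small sketch of that pattern with illustrative names. One detail worth noting: because `as const` produces a readonly tuple, the `satisfies` target needs to be `readonly Rule[]`:

```typescript
interface Rule {
  priority: number;
  scenario: string;
}

// `as const` preserves literal types ("notInOrg" rather than string);
// `satisfies readonly Rule[]` still type-checks the shape without widening.
const rules = [
  { priority: 1, scenario: "notInOrg" },
  { priority: 2, scenario: "default" },
] as const satisfies readonly Rule[];

// The scenario union falls out of the table for free:
type Scenario = (typeof rules)[number]["scenario"]; // "notInOrg" | "default"

// Which makes lookups keyed by scenario autocomplete-friendly:
const label: Record<Scenario, string> = {
  notInOrg: "Join your organization first",
  default: "Enroll now",
};
```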

Why open a Merge Request instead of committing directly to main? Security by design. Anyone who can comment on an Issue can trigger the flow. Only people with merge permissions can apply the change. AI handles the translation work. Humans make the final call.


The Debugging: What Actually Took Time

Honest breakdown: in those 3 days, I spent more time debugging the GitLab Flow schema than writing business logic.

Issue 1: Schema format wasn't in one place

GitLab Duo Agent Platform is still relatively new. Documentation was spread across the official blog, the v1 schema spec, and the hackathon-specific template, and the three didn't always agree.

The inputs format tripped me up early. I was writing:

```yaml
inputs:
  - "context:goal"
```

The correct format is:

```yaml
inputs:
  - from: "context:goal"
    as: "user_goal"
```

And the hackathon platform requires wrapping the entire v1 config in a definition: key — which only appeared in the hackathon template, not the main schema docs.

Issue 2: create_file_with_contents ≠ commit

The flow Action log showed "Create file: success." The file wasn't in the repo.

Turns out create_file_with_contents only prepares the content. You need create_commit to actually write it to a branch. The correct three-step sequence:

  1. create_file_with_contents — prepare content
  2. create_commit — write to feature branch
  3. create_merge_request — open MR from that branch
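Sketched as a flow step, with a loud caveat: the three tool names are from the sequence above, but every other field here is my assumption about the schema, not the actual config:

```yaml
# Illustrative only — step and field names other than the three tools
# are assumptions, not the real hackathon flow schema.
- name: spec_generator
  type: AgentComponent
  tools:
    - create_file_with_contents   # 1. prepare the new rules.ts content
    - create_commit               # 2. write it to a feature branch
    - create_merge_request        # 3. open an MR from that branch
```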

Issue 3: Priority inference was inconsistent

My first prompt described priority placement too loosely. Gate conditions (like organization membership checks) were sometimes placed at priority 2 instead of priority 1.

The fix was an explicit four-tier hierarchy in the prompt:

  1. GATE conditions (identity, org membership) → always priority 1, shift all existing rules down
  2. USER STATE conditions (enrollment, certification) → after gate conditions
  3. COURSE STATE conditions (pricing, subscription) → after user state
  4. FALLBACK → always last
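The same hierarchy, expressed as insertion logic rather than prompt text. This is a hypothetical helper showing the intended behavior, not what the agent actually emits:

```typescript
type Tier = "GATE" | "USER_STATE" | "COURSE_STATE" | "FALLBACK";

interface Rule {
  priority: number;
  tier: Tier;
  scenario: string;
}

const TIER_ORDER: Tier[] = ["GATE", "USER_STATE", "COURSE_STATE", "FALLBACK"];

// Insert a new rule at the end of its tier, then renumber priorities 1..n
// so every rule after it shifts down by one.
function insertRule(rules: Rule[], newRule: Omit<Rule, "priority">): Rule[] {
  const rank = (t: Tier) => TIER_ORDER.indexOf(t);
  // The first existing rule in a later tier marks the insertion point.
  const idx = rules.findIndex((r) => rank(r.tier) > rank(newRule.tier));
  const insertAt = idx === -1 ? rules.length : idx;
  const next = [
    ...rules.slice(0, insertAt),
    { ...newRule, priority: 0 }, // placeholder, renumbered below
    ...rules.slice(insertAt),
  ];
  return next.map((r, i) => ({ ...r, priority: i + 1 }));
}
```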

Issue 4: context:issue_iid wasn't reliably available

The flow trigger payload includes the Issue IID, but it's embedded in the goal string as Context: {Issue IID: X} — not available as a standalone named variable. Using context:issue_iid as an input returned empty, causing the post_comment step to silently fail.

Fix: have the agent extract the IID directly from the goal string.
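A sketch of that extraction. The goal-string format is from the post; the regex and helper name are mine:

```typescript
// The IID arrives embedded in the goal string, e.g.
// "Update the rules ... Context: {Issue IID: 42}"
// rather than as a standalone named variable.
function extractIssueIid(goal: string): number | null {
  const match = goal.match(/Issue IID:\s*(\d+)/);
  return match ? Number(match[1]) : null;
}
```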


The Result: Three Issues, Three MRs, One Rules File

Three test runs, three different scenarios:

Run 1: User not in the correct organization → flow creates specDriven/rules.ts from scratch, first rule at priority 1, MR opened.

Run 2: User already enrolled → flow reads existing file, classifies as user state condition, inserts at priority 2, existing rule shifts to priority 3, second MR opened.

Run 3: Course price not yet set → flow classifies as course state condition, inserts at priority 3, all previous rules shift, third MR opened.
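Pieced together, the accumulated table would look roughly like this. The scenario identifiers are my paraphrase of the three runs, and the fallback row is assumed from the priority hierarchy rather than shown in the runs:

```typescript
// specDriven/rules.ts after runs 1–3 (illustrative)
const rules = [
  { priority: 1, scenario: "userNotInOrganization" }, // Run 1: gate
  { priority: 2, scenario: "userAlreadyEnrolled" },   // Run 2: user state
  { priority: 3, scenario: "coursePriceNotSet" },     // Run 3: course state
  { priority: 4, scenario: "fallback" },              // always last
] as const;
```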

Each run posted a structured comment to the triggering Issue. Three MRs. Three clean diffs. One rules.ts that accumulated the full business logic incrementally — built from natural language, without touching a file manually, without a single alignment meeting.


What I Actually Learned

AI's correct role is eliminating translation work, not replacing judgment.

Every step in this flow involves translation: natural language → structured rules, rules → test cases, rules → plain-English business summary. These are high-volume, low-ambiguity conversions that humans do slowly and inconsistently. AI does them fast and consistently.

But the MR review, the decision to merge, the presentation.ts changes — those stay with the developer. Anything that requires contextual judgment and accountability stays human. That boundary matters.

When the repetitive work gets lighter, you can go deeper on the hard stuff.

The part of frontend work I find genuinely interesting is system design, cross-role collaboration patterns, architectural decisions. The part that exhausted me was spending a day locating and safely modifying a single condition in a complex state machine.

Spec-driven architecture + AI automation doesn't eliminate my job. It removes the part that was getting in the way of the part I actually want to do.

A deadline is the most underrated productivity tool.

I'd been thinking about this spec-driven approach for a while. The hackathon is why it exists as a working thing now instead of a note in a doc. Three days, part-time, real constraints. Find a deadline, even an artificial one.


What's Next

The flow has obvious gaps: presentation.ts changes are still suggested in comments rather than committed automatically, multi-component awareness isn't there yet, and the priority inference still needs tuning for edge cases.

But for three working days starting from a real problem, I'll take it.

Project submission: https://devpost.com/software/spec-driven-scenario-updater

GitLab repo: https://gitlab.com/gitlab-ai-hackathon/participants/35552697

If you've run into the Specification Gap in your own team — BA docs, Figma annotations, TypeScript hooks, and QA test cases all describing the same thing in four incompatible formats — I'd genuinely like to hear how you're dealing with it.
