This is a submission for the GitHub Copilot CLI Challenge
What I Built
mrktr is a terminal-based reseller price research dashboard built with Go, Bubble Tea, and Lip Gloss. It lets resellers compare prices across eBay, Mercari, and Amazon without leaving the terminal.
I flip stuff on marketplaces sometimes, and I got tired of opening a dozen browser tabs just to compare prices and figure out if a deal is actually worth it after fees. So I built a dashboard that does all of that in one place.
How It Works
Type a product name (e.g., "iPhone 14 Pro"), hit Enter, and mrktr queries live search APIs to find listings across marketplaces. Results land in a sortable, filterable table showing platform, price, condition, and listing status, with a row-by-row reveal animation. A real-time statistics panel calculates min, max, average, median, P25/P75 percentiles, standard deviation, and coefficient of variation, complete with sparkline trend visualization. A built-in profit calculator lets you enter your cost and instantly see net margins after platform-specific fees (eBay's 13.25%, Mercari's 10%, Amazon's 15%, Facebook's 5%).
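Using the fee rates above, the core of that calculation is small. A minimal sketch, assuming flat percentage fees as listed (the real math lives in `types/fees.go` and may model more than this; the function names here are illustrative):

```go
package main

import "fmt"

// Flat marketplace fee rates from the post; the real calculator may
// also account for per-listing or payment-processing fees.
var feeRates = map[string]float64{
	"eBay":     0.1325,
	"Mercari":  0.10,
	"Amazon":   0.15,
	"Facebook": 0.05,
}

// netProfit returns what's left after the platform's cut and your cost basis.
func netProfit(platform string, salePrice, cost float64) float64 {
	return salePrice*(1-feeRates[platform]) - cost
}

func main() {
	// Selling at $750 with a $500 cost basis:
	for _, p := range []string{"eBay", "Mercari", "Amazon", "Facebook"} {
		fmt.Printf("%-8s net: $%.2f\n", p, netProfit(p, 750, 500))
	}
}
```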
Key Features
- Multi-marketplace search with a 3-provider fallback chain (Brave → Tavily → Firecrawl)
- Conservative query expansion using a local TF-IDF product index for vague queries
- Inline predictive suggestions with ghost text from search history and product catalog
- Three stats views: Summary, Distribution (histogram), and Market (per-platform breakdown)
- Profit calculator with real platform fee structures and "best net platform" recommendation
- CSV/JSON export and clipboard copy for individual listings
- Search history persisted across sessions with relative timestamps
- Animated intro, row-by-row result reveals, value tweening between searches, and a reduce-motion accessibility toggle
- Vim-style navigation (j/k, Tab cycling, / to search, c for calculator)
Architecture
The app follows the Elm Architecture (Model-View-Update) pattern that Bubble Tea encourages:
```
mrktr/
├── main.go          # Entry point, alt screen + mouse support
├── model.go         # Application state, message types, initialization
├── update.go        # All state transitions and side effects
├── view.go          # Pure rendering, no side effects
├── view_panels.go   # Panel render methods (search, results, stats, calc, history)
├── view_intro.go    # Animated ASCII art splash screen
├── styles.go        # Lip Gloss styles with adaptive light/dark colors
├── keys.go          # Keybinding definitions
├── history.go       # JSON-backed persistent search history
├── export.go        # CSV and JSON export logic
├── api/             # Search providers, price parser, query suggestions
│   ├── search.go    # Provider fallback chain coordinator
│   ├── brave.go     # Brave Search API provider
│   ├── tavily.go    # Tavily API provider
│   ├── firecrawl.go # Firecrawl API provider
│   ├── parse.go     # Regex price extraction + platform detection
│   └── suggest.go   # TF-IDF product index for query expansion
├── types/           # Shared domain types
│   ├── listing.go   # Listing, Statistics, ProfitCalculation
│   ├── fees.go      # Platform fee calculations
│   ├── sort.go      # Multi-field sorting
│   └── filter.go    # Platform/condition/status filtering
└── idea/            # Extended statistics and visualization
    ├── stats_model.go        # ExtendedStatistics with percentiles, histograms
    ├── stats_distribution.go # Histogram rendering
    ├── stats_market.go       # Per-platform stat breakdowns
    └── stats_animation.go    # Skeleton loading and value tweening
```
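Stripped of the framework, the MVU loop that `model.go`, `update.go`, and `view.go` implement looks like this. A dependency-free miniature (Bubble Tea's real interfaces also return `tea.Cmd`s for side effects; this only shows the state → update → render cycle):

```go
package main

import "fmt"

// Msg is anything that can change state (key presses, API results, ticks).
type Msg interface{}

type searchDoneMsg struct{ results []string }

// Model holds all application state in one place (model.go's role).
type Model struct {
	query   string
	results []string
	loading bool
}

// Update is the only place state changes (update.go's role).
func Update(m Model, msg Msg) Model {
	switch msg := msg.(type) {
	case searchDoneMsg:
		m.loading = false
		m.results = msg.results
	}
	return m
}

// View renders state to a string, with no side effects (view.go's role).
func View(m Model) string {
	if m.loading {
		return "searching..."
	}
	return fmt.Sprintf("%d results for %q", len(m.results), m.query)
}

func main() {
	m := Model{query: "iPhone 14 Pro", loading: true}
	fmt.Println(View(m)) // searching...
	m = Update(m, searchDoneMsg{results: []string{"eBay $749", "Mercari $699"}})
	fmt.Println(View(m)) // 2 results for "iPhone 14 Pro"
}
```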
The codebase is ~9,300 lines of Go across 40+ source files, with 200+ tests (118 test functions with table-driven subtests) covering parsers, statistics calculations, sorting/filtering, fee math, history persistence, animations, and UI state transitions.
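The table-driven pattern those tests use looks roughly like this. A standalone sketch (the `parsePrice` stand-in and its cases are hypothetical; the repo's versions live in `*_test.go` files and drive `t.Run` subtests):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parsePrice is a hypothetical stand-in for the extractor in api/parse.go:
// strip currency symbols and separators, then parse the remainder.
func parsePrice(s string) (float64, bool) {
	cleaned := strings.NewReplacer("$", "", ",", "").Replace(strings.TrimSpace(s))
	v, err := strconv.ParseFloat(cleaned, 64)
	return v, err == nil
}

func main() {
	// The same case table would drive t.Run subtests in a test file;
	// it runs here as a plain loop so the sketch is self-contained.
	cases := []struct {
		name string
		in   string
		want float64
		ok   bool
	}{
		{"plain", "$749.99", 749.99, true},
		{"thousands separator", "$1,299.00", 1299, true},
		{"no currency symbol", "749", 749, true},
		{"malformed", "call for price", 0, false},
	}
	for _, tc := range cases {
		got, ok := parsePrice(tc.in)
		if got != tc.want || ok != tc.ok {
			panic(fmt.Sprintf("%s: got %v, %v", tc.name, got, ok))
		}
	}
	fmt.Println("all cases pass")
}
```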
Demo
GitHub Repository: github.com/keiranhaax/mrktr
Screenshots
Search results with real-time statistics and sparkline trend:
Profit calculator showing net margins after platform fees:
Price distribution histogram view:
Detail overlay with per-platform market breakdown:
User Flow
1. Launch: `go run .` from the `mrktr/` directory (requires at least one API key: `BRAVE_API_KEY`, `TAVILY_API_KEY`, or `FIRECRAWL_API_KEY`).
2. Search: Type a product name in the search panel. Inline ghost-text suggestions appear from your history and a built-in product catalog. Press `Tab` to accept a suggestion or `Enter` to search.
3. Browse Results: Results appear row-by-row with an animation. Navigate with `j`/`k` or arrow keys. Press `s` to cycle sort fields (price, platform, condition, status), `S` to reverse direction, `f` to open the filter bar.
4. Analyze Stats: The statistics panel updates in real time. Press `1`/`2`/`3` to switch between the Summary view (sparkline + percentiles), Distribution view (price histogram), and Market view (per-platform breakdowns).
5. Calculate Profit: Press `c` to focus the profit calculator. Enter your cost and see net profit at min/avg/max prices after platform fees. Press `p` to cycle platforms and compare fee structures. The "Best Net @ Avg" line tells you which platform yields the highest profit.
6. Act on Results: Press `Enter` to open the detail overlay, then `Enter` again to open the listing URL in your browser. Press `y` to copy the URL, `Y` to copy the full listing, `e` to export all results as CSV, or `E` for JSON.
7. History: Previous searches are saved with timestamps and result counts. Press `Tab` to reach the history panel, navigate with `j`/`k`, and press `Enter` to re-run a past search.
Quick Start
```shell
git clone https://github.com/keiranhaax/mrktr.git
cd mrktr

# Set at least one API key
export BRAVE_API_KEY="your-key-here"

# Run
go run .
```
My Experience with GitHub Copilot CLI
This project was built with AI assistance the whole way through, mainly GitHub Copilot CLI and Claude Code. Both run in the terminal, which made a big difference for a TUI project where I'm constantly switching between writing code and running `go run .` to check the output.
How the Tools Helped
The whole thing went from nothing to ~9,300 lines with 200+ tests over a weekend. I don't think I could have done that without the CLI tools handling a lot of the repetitive parts. Here's what that actually looked like:
Scaffolding. I started by having the AI set up the Bubble Tea MVU structure: state in `model.go`, transitions in `update.go`, rendering in `view.go`. It got the conventions right out of the gate (message types with a `Msg` suffix, `tea.Cmd` for side effects, pure `View()` functions), so I could jump straight to building features instead of wiring up boilerplate.
API providers. The three search providers (Brave, Tavily, Firecrawl) all follow the same shape: an isolated function, gated on an env var, returning `[]types.Listing`. I got the first one working manually, then the AI helped stamp out the other two while adapting to each API's response format. The fallback chain in `api/search.go` went through a few rounds of back-and-forth to get the error handling right.
Tests. 200+ tests is a lot for a weekend project, and I didn't write each one by hand. I'd write one or two table-driven tests to establish the pattern, then have the AI expand coverage with edge cases (malformed prices, missing fields, zero results, that kind of thing). The `api/parse_test.go` and `types/` test files are dense because of this. Golden snapshot tests for the stats panel rendering were AI-assisted too.
Animations. The intro animation, sparkline rendering, gradient text, and profit bars all involve fiddly math (color interpolation, easing curves, tweening). I'd describe the visual effect I wanted and the AI would spit out the math. This was probably the area where AI saved the most time, because I'd have spent ages looking up interpolation formulas otherwise.
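For the curious, the tweening math described here is mostly linear interpolation plus an easing curve. A minimal sketch (function names are mine, not the repo's):

```go
package main

import "fmt"

// lerp linearly interpolates between a and b as t goes from 0 to 1.
func lerp(a, b, t float64) float64 { return a + (b-a)*t }

// easeOutCubic decelerates toward the end of the animation, which is
// what makes a stats value "settle" instead of snapping to its new number.
func easeOutCubic(t float64) float64 {
	u := 1 - t
	return 1 - u*u*u
}

func main() {
	// Tween a displayed average price from $500 to $650 over 10 frames.
	from, to := 500.0, 650.0
	for frame := 0; frame <= 10; frame++ {
		t := float64(frame) / 10
		fmt.Printf("frame %2d: $%.2f\n", frame, lerp(from, to, easeOutCubic(t)))
	}
}
```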
Refactoring. As the codebase grew, the AI helped extract the `api/` package from inline code, pull types into `types/`, and later split out the `idea/` package for extended statistics. Being able to rename or move things across multiple files without breaking tests made it easy to keep the structure clean as I went.
What Worked
- Staying in the terminal. For a TUI project this was huge. I never had to context-switch out of my editor/shell to get help. Just ask, get an answer, keep going.
- Pattern replication. Once I had one provider, one panel renderer, or one set of table-driven tests working, the AI could stamp out more of the same reliably. That's where a lot of the test coverage comes from.
- Cheap experiments. Features like the TF-IDF query expansion, histogram view, and reduce-motion toggle started as "let's see if this works" ideas. AI made it cheap to try things, and the ones that worked became features.
What the AI Couldn't Do for Me
The design decisions were all mine. Which panels to include, how Tab should work (it cycles panels normally, but accepts search suggestions when you're in the search bar), what fees each platform charges, the visual hierarchy. The color palette (#7D56F4 → #EA80FC), rounded vs. thick borders for active panels, adaptive colors for light/dark terminals: I picked all of that. AI can generate code fast, but it doesn't have opinions about what looks good.
Same for the trickier architectural stuff. Keeping views pure, using generation counters to prevent stale animation ticks, cancelling in-flight searches when a new one starts: those decisions came from understanding how Bubble Tea actually works, not from prompting.
Using Multiple Tools
I used both Copilot CLI and Claude Code on this project, and they're good at different things. Copilot CLI was better for quick inline completions and shell commands. Claude Code was better for big multi-file refactors and cranking out test suites. Using both made sense.
If you're building TUI apps, having an AI assistant in a terminal pane next to your running app is a really good setup. Worth trying if you haven't.
Built with: Bubble Tea · Lip Gloss · Bubbles



