Most developers stop when their AI assistant says "I can't do that." I didn't.
Check out the full repo at
https://github.com/jmoncayo-pursuit/market-data-api.
For a recent take-home assignment I built a financial market data microservice with FastAPI, Docker Compose, PostgreSQL, Redis, Kafka, and pytest. The real engineering happened in agent mode with Cursor AI, where I taught the tool to recover from CI failures and document each fix.
Purpose-Driven AI Orchestration
This was not casual "vibe coding." It was intentional:
- Drafted a PRD from the assignment requirements and referred to it in every session
- Defined project rules before writing code
- Fed Cursor AI official GitHub Actions docs, like "Accessing workflow context"
- Vigilantly tracked file creation to avoid duplicate filenames
- Reviewed and validated every AI-generated fix
Each prompt had a clear goal. Each response was reviewed. No autopilot.
Project Overview
Stack
- FastAPI with dependency injection
- PostgreSQL via SQLAlchemy ORM
- Redis caching and job status store
- Apache Kafka with confluent-kafka-python
- Docker Compose orchestrating all services
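To make the dependency-injection point concrete, here is a stripped-down sketch of the kind of wiring involved; the connection strings, endpoint, and query below are illustrative placeholders, not the repo's actual code.

```python
# Illustrative wiring only -- connection strings, paths, and the query are placeholders.
import redis
from fastapi import Depends, FastAPI, HTTPException
from sqlalchemy import create_engine, text
from sqlalchemy.orm import Session, sessionmaker

app = FastAPI()
engine = create_engine("postgresql://user:pass@db:5432/market")
SessionLocal = sessionmaker(bind=engine)

def get_db():
    # Yield a SQLAlchemy session and always close it afterwards.
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

def get_cache() -> redis.Redis:
    # One Redis client per request via dependency injection.
    return redis.Redis(host="redis", port=6379, decode_responses=True)

@app.get("/prices/{symbol}")
def latest_price(symbol: str,
                 db: Session = Depends(get_db),
                 cache: redis.Redis = Depends(get_cache)):
    cached = cache.get(f"price:{symbol}")
    if cached is not None:
        return {"symbol": symbol, "price": float(cached), "source": "cache"}
    row = db.execute(
        text("SELECT price FROM prices WHERE symbol = :s ORDER BY ts DESC LIMIT 1"),
        {"s": symbol},
    ).first()
    if row is None:
        raise HTTPException(status_code=404, detail="symbol not found")
    cache.set(f"price:{symbol}", float(row.price), ex=60)
    return {"symbol": symbol, "price": float(row.price), "source": "db"}
```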
Testing
- 278+ comprehensive tests with full integration coverage
- Endpoint tests covering authentication flows
- Service layer tests for Kafka producer and consumer patterns
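The authentication-flow tests looked roughly like this; the import path, header name, and endpoint are assumptions for illustration, not the repo's real names.

```python
# Illustrative auth tests -- module path, header, and endpoint are assumed.
from fastapi.testclient import TestClient

from app.main import app  # hypothetical module path

client = TestClient(app)

def test_rejects_missing_api_key():
    # No credentials at all should yield 401 Unauthorized.
    response = client.get("/prices/AAPL")
    assert response.status_code == 401

def test_rejects_invalid_api_key():
    # A present-but-wrong key should yield 403 Forbidden.
    response = client.get("/prices/AAPL", headers={"X-API-Key": "wrong-key"})
    assert response.status_code == 403
```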
CI/CD
- GitHub Actions workflow with dynamic retry logic
- Solved rate limiter initialization failures in CI
- Handled API authentication mismatches (401 and 403 errors)
- Added Redis connection timeout handling
- Mocked Kafka services to avoid external dependencies
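The Kafka mocking can be done with a plain pytest fixture. The patch target and helper function below are guesses at where a producer might live, so treat this as a sketch of the pattern rather than the project's actual fixture.

```python
# Sketch of mocking the Kafka producer so CI needs no real broker.
# The patch target and publish_price helper are hypothetical names.
from unittest.mock import MagicMock, patch

import pytest

@pytest.fixture
def mock_kafka_producer():
    with patch("app.services.kafka_producer.Producer") as producer_cls:
        instance = MagicMock()
        producer_cls.return_value = instance
        yield instance

def test_publish_does_not_need_a_broker(mock_kafka_producer):
    from app.services.kafka_producer import publish_price  # hypothetical helper

    publish_price("AAPL", 187.12)

    # The service should have handed the message to the (mocked) producer.
    mock_kafka_producer.produce.assert_called_once()
    mock_kafka_producer.flush.assert_called_once()
```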
Observability
- Prometheus metrics integration for request rates and error counts
- Grafana dashboard showcasing service health and throughput
- Structured logging formatted for ELK consumption
- Health check endpoints for service readiness
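A minimal version of the metrics and health-check wiring, using prometheus_client directly; the metric names and the /health payload are illustrative, not the service's exact definitions.

```python
# Illustrative metrics + readiness wiring; metric names are made up.
import time

from fastapi import FastAPI, Request, Response
from prometheus_client import CONTENT_TYPE_LATEST, Counter, Histogram, generate_latest

app = FastAPI()

REQUESTS = Counter("http_requests_total", "Total HTTP requests",
                   ["method", "path", "status"])
LATENCY = Histogram("http_request_duration_seconds", "Request latency", ["path"])

@app.middleware("http")
async def record_metrics(request: Request, call_next):
    start = time.perf_counter()
    response = await call_next(request)
    LATENCY.labels(path=request.url.path).observe(time.perf_counter() - start)
    REQUESTS.labels(method=request.method, path=request.url.path,
                    status=str(response.status_code)).inc()
    return response

@app.get("/metrics")
def metrics():
    # Prometheus scrapes this endpoint for request rates and error counts.
    return Response(generate_latest(), media_type=CONTENT_TYPE_LATEST)

@app.get("/health")
def health():
    # Used by Docker Compose and GitHub Actions health-check steps.
    return {"status": "ok"}
```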
Docs & Deliverables
- Swagger UI and OpenAPI spec
- Complete Postman collection with environment configs
- Alembic migrations for schema management
- GitHub Actions workflows with health check steps
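Schema changes ran through Alembic. A migration for, say, a prices table would look something like the sketch below; the table, columns, and revision ids are assumptions, not the repo's actual migration.

```python
"""Create the prices table (illustrative migration, not the repo's)."""
import sqlalchemy as sa
from alembic import op

# Revision identifiers used by Alembic; real values are generated per migration.
revision = "0001_create_prices"
down_revision = None

def upgrade() -> None:
    op.create_table(
        "prices",
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("symbol", sa.String(16), nullable=False),
        sa.Column("price", sa.Numeric(18, 6), nullable=False),
        sa.Column("ts", sa.DateTime(timezone=True), server_default=sa.func.now()),
    )

def downgrade() -> None:
    op.drop_table("prices")
```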
Architecture
Data Flow
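In broad strokes, market-data events flow from the API into a Kafka topic, a consumer persists them to PostgreSQL, and Redis serves cached reads and job status. Here is a minimal confluent-kafka sketch of that produce/consume path; the topic name, config values, and message shape are my own illustration rather than the service's exact code.

```python
# Illustrative produce/consume pair; topic, config, and message shape are made up.
import json

from confluent_kafka import Consumer, Producer

producer = Producer({"bootstrap.servers": "kafka:9092"})

def publish_tick(symbol: str, price: float) -> None:
    # Publish one market-data event to the topic.
    producer.produce("market-ticks", key=symbol,
                     value=json.dumps({"symbol": symbol, "price": price}))
    producer.flush()

def consume_ticks() -> None:
    consumer = Consumer({
        "bootstrap.servers": "kafka:9092",
        "group.id": "market-data-writer",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["market-ticks"])
    try:
        while True:
            msg = consumer.poll(1.0)
            if msg is None or msg.error():
                continue
            tick = json.loads(msg.value())
            # In the real service this is where the row is written via SQLAlchemy.
            print(f"persist {tick['symbol']} @ {tick['price']}")
    finally:
        consumer.close()
```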
Troubleshooting CI Failures
To give Cursor AI the context it needed, I added this snippet to my internal docs so it could follow our custom CI monitoring flow:
```bash
# 1. List the most recent failed run
gh run list --status failure --limit 1 \
  --json databaseId,status,conclusion,createdAt,url > last_run.json \
  && cat last_run.json

# 2. View logs of the failed run
gh run view <RUN_ID> --log-failed > last_run.log \
  && tail -100 last_run.log

# 3. Compute a sleep duration from the last run's total time
#    (the run timing API reports milliseconds, so divide by 2000
#    to wait roughly half the previous duration in seconds)
duration_ms=$(gh api "repos/{owner}/{repo}/actions/runs/<RUN_ID>/timing" \
  --jq '.run_duration_ms')
sleep $(( duration_ms / 2000 ))

# 4. Rerun only the failed jobs
gh run rerun <RUN_ID> --failed
```
Cursor AI learned to watch logs, wait intelligently, retry failures, and avoid wasted prompts. It even adapted when the Redis connection timed out or the rate limiter threw event loop errors.
Timeline and Complexity
- Built a production-grade microservice with a streaming data pipeline
- Implemented full CI/CD with health checks and self-healing retries
- Created a 278+ test suite covering all service layers
- Integrated observability, docs, and migrations in under two weeks
Real Learning Outcomes
- Learned to handle Redis connection failures with retries and backoff
- Mastered Kafka consumer group management and mocking techniques
- Implemented async/await patterns for high throughput
- Built robust error handling for external API dependencies
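The Redis retry-with-backoff pattern, stripped to its core, looks like the sketch below; the attempt count and delays are arbitrary choices for illustration.

```python
# Minimal retry-with-exponential-backoff sketch for flaky Redis connections.
import time

import redis

def get_redis(max_attempts: int = 5, base_delay: float = 0.5) -> redis.Redis:
    for attempt in range(1, max_attempts + 1):
        try:
            client = redis.Redis(host="redis", port=6379, socket_connect_timeout=2)
            client.ping()  # force a round trip so connection failures surface here
            return client
        except redis.ConnectionError:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))  # 0.5s, 1s, 2s, 4s, ...
```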
Conclusion
It is time to rethink what an AI-native builder can deliver. By defining clear goals, keeping a living PRD, feeding the right documentation, and guiding each AI step, you can ship production-level code with AI as your teammate. Demand more of your tools, hold them to your standards, and you will build resilient systems that keep moving forward, no matter what.