I've been experimenting a lot with AI-assisted coding tools like Claude Code and Cursor.
One thing I noticed is that code quality checks usually run late, in CI: linting, type checks, tests, security scans, and coverage all happen after the code is already written.
That workflow works for humans, but it feels awkward when AI is generating the code: the feedback arrives long after the model could have acted on it.
So I started experimenting with running the entire quality pipeline locally during development and exposing the results in a way that AI tools can use to iterate on fixes.
That experiment became a small project called LucidShark.
What LucidShark does
LucidShark is a local-first CLI code quality pipeline designed to work well with AI coding workflows.
Key ideas:
- Runs entirely from the CLI
- Local-first (no SaaS or external service)
- Configuration as code via a repo config file
- Integrates with Claude Code via MCP
- Generates a quality overview that can be committed to git
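As a rough sketch of what "configuration as code" could look like here, something like the file below would live in the repo root. Note that the file name, keys, and schema are my own illustration, not LucidShark's documented format:

```yaml
# Hypothetical repo config -- names and schema are illustrative, not official
checks:
  lint: true
  types: true
  tests: true
  security: true
  coverage:
    min: 80          # fail the pipeline below this coverage percentage
output:
  report: quality-report.md   # overview file that could be committed to git
```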
It orchestrates common quality checks such as:
- linting
- type checking
- tests
- security scans
- coverage
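The orchestration idea behind that list can be sketched in a few lines: run each checker as a subprocess and aggregate the results into one machine-readable report that an agent can consume. This is a generic illustration under my own assumptions (the tool names and report shape are made up), not LucidShark's actual implementation:

```python
import json
import subprocess

# Hypothetical check registry; a real tool would read this from a config file.
CHECKS = {
    "lint": ["ruff", "check", "."],
    "types": ["mypy", "."],
    "tests": ["pytest", "-q"],
}


def run_checks(checks):
    """Run each check, capture its output, and build one aggregated report."""
    report = {}
    for name, cmd in checks.items():
        try:
            proc = subprocess.run(cmd, capture_output=True, text=True)
            report[name] = {
                "ok": proc.returncode == 0,
                "output": proc.stdout + proc.stderr,
            }
        except FileNotFoundError:
            # Checker not installed; record that instead of crashing.
            report[name] = {"ok": False, "output": f"{cmd[0]} not found"}
    return report


if __name__ == "__main__":
    # An AI agent could read this JSON and decide what to fix next.
    print(json.dumps(run_checks(CHECKS), indent=2))
```

The design point is the single structured report: instead of a developer eyeballing five separate tool outputs, an agent gets one JSON object it can iterate over.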
Example usage
```shell
pip install lucidshark
lucidshark init
lucidshark scan
```
Language and tool support is still fairly limited, but Python and Java projects should work reasonably well.
Why I built it
The main goal is to explore workflows where AI agents can read quality results and fix issues automatically, instead of developers running the checks manually later.
The project is still early and I'm mostly looking for feedback from people experimenting with AI coding workflows.
GitHub: https://github.com/toniantunovi/lucidshark
Docs: https://lucidshark.com