I believe the best way to learn is by doing. But in a busy job, you don’t always get to try new tech on company time. That can’t be an excuse—our field moves fast, and AI is one of the biggest shifts right now.
My day-to-day didn’t include AI work, so I turned to my personal website repo as a safe sandbox: serhatozdursun/resume. It’s the perfect “lab mouse.” I wanted to explore AI-assisted quality gates: the things we already rely on in QA—static analysis, unit tests, code reviews—augmented with an assistant that understands the diff and helps us ship better.
This post walks through the pipeline I built: SonarCloud + Jest coverage + Danger + OpenAI, wired together with Secrets so I can tune behavior without touching YAML.
## What you get
✅ SonarCloud for static analysis + PR decoration + coverage dashboards.
✅ A coverage gate on changed code (e.g., 80% on lines you touched).
✅ AI unit test suggestions generated directly from the diff (copy-paste-ready Jest/TS).
✅ AI code review that posts a concise diff snippet (old → new) with an explanation.
(No fragile line numbers; we still add inline notes when mapping is reliable.)
✅ Secret-driven knobs so you can change thresholds/toggles from CI Secrets—no PRs to change policy.
## Architecture (bird’s-eye view)
- Tests & coverage: Jest runs and writes coverage/lcov.info.
- SonarCloud: runs as usual for code smells and coverage on new code.
- Danger + OpenAI (same job):
  - Reads the PR diff.
  - Computes new-lines coverage per changed file from the LCOV report.
  - If it falls below MIN_FILE_COVERAGE, fails the PR and posts AI test ideas.
  - Regardless of coverage, asks OpenAI to:
    - propose copy-pasteable Jest tests, and
    - write a short code review with actionable notes and a tiny diff snippet.
- All behavior is toggled by environment variables you define as Secrets.
## One workflow for both PRs and main (AI runs only for PRs)
I keep one file so everything lives in one place. The workflow triggers on PRs and on pushes to main, but the AI step is gated so it only runs on PRs.
```yaml
name: Code Quality (Tests + SonarCloud + AI Review)

on:
  pull_request:
    types: [opened, synchronize, reopened, ready_for_review]
  push:
    branches: [main]

permissions:
  contents: read
  pull-requests: write
  statuses: write

# Optional hardening: avoid duplicate runs while a PR is updated
concurrency:
  group: qg-${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  test-and-analyze:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with: { fetch-depth: 0 }

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: '22'
          cache: 'yarn'

      - name: Install deps
        run: yarn install --frozen-lockfile

      - name: Run unit tests (with coverage)
        run: yarn test --coverage

      - name: SonarCloud Scan
        uses: SonarSource/sonarcloud-github-action@v5
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}

      - name: Upload coverage report (PRs only)
        if: ${{ github.event_name == 'pull_request' }}
        uses: actions/upload-artifact@v4
        with:
          name: coverage-html
          path: coverage/lcov-report/

      - name: Install Danger deps
        if: ${{ github.event_name == 'pull_request' }}
        run: yarn add -D danger lcov-parse openai

      - name: Run AI coverage-aware review
        if: ${{ github.event_name == 'pull_request' }}
        env:
          # Required
          DANGER_GITHUB_API_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          # Optional knobs (set via Secrets to avoid editing YAML)
          # MIN_FILE_COVERAGE: ${{ secrets.MIN_FILE_COVERAGE }}                     # default 80
          # MAX_FILES_TO_ANALYZE: ${{ secrets.MAX_FILES_TO_ANALYZE }}               # default 3
          # OPENAI_MODEL: ${{ secrets.OPENAI_MODEL }}                               # default gpt-4o-mini
          # DANGER_ALWAYS_NEW_COMMENT: ${{ secrets.DANGER_ALWAYS_NEW_COMMENT }}     # default off
          # AI_REVIEW_ENABLED: ${{ secrets.AI_REVIEW_ENABLED }}                     # default 1
          # AI_REVIEW_INLINE: ${{ secrets.AI_REVIEW_INLINE }}                       # default 1
          # AI_REVIEW_BLOCK_ON_FINDINGS: ${{ secrets.AI_REVIEW_BLOCK_ON_FINDINGS }} # default off
          # AI_REVIEW_MAX_CHARS: ${{ secrets.AI_REVIEW_MAX_CHARS }}                 # default 100000
          # AI_REVIEW_MAX_FINDINGS: ${{ secrets.AI_REVIEW_MAX_FINDINGS }}           # default 8
        run: npx danger ci --dangerfile dangerfile.ts
```
Why this pattern?
- A single file keeps maintenance low.
- The AI steps run only on PRs (via `if:` guards), so merging to main doesn’t re-comment closed PRs.
- You still get tests + Sonar on pushes to main.
## What dangerfile.ts actually does
1) Coverage gate on changed code
- We parse coverage/lcov.info with lcov-parse.
- We fetch the PR diff and compute coverage over new/modified instrumented lines.
- If that effective coverage is below MIN_FILE_COVERAGE (default 80), we:
  - fail the PR,
  - post a table showing per-file new-lines coverage and whole-file coverage, and
  - ask the AI for test ideas (see next section).
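To make the gate concrete, here is a minimal sketch of the per-file computation. The function names and the line-hit shape are illustrative (not the actual dangerfile code), but the logic matches the description above: only instrumented lines that the PR touched count toward the percentage.

```typescript
// Sketch of the new-lines coverage gate (illustrative names).
// `lineHits` maps instrumented line numbers to hit counts, as derived from LCOV;
// `changedLines` is the set of added/modified line numbers from the PR diff.
function newLinesCoverage(
  lineHits: Map<number, number>,
  changedLines: Set<number>
): number | null {
  let instrumented = 0;
  let covered = 0;
  for (const line of changedLines) {
    const hits = lineHits.get(line);
    if (hits === undefined) continue; // not instrumented (comments, types, …)
    instrumented++;
    if (hits > 0) covered++;
  }
  // No instrumented changed lines → nothing to gate on for this file.
  return instrumented === 0 ? null : (covered / instrumented) * 100;
}

// The gate itself: trip only when there is something to measure.
function belowThreshold(coverage: number | null, min = 80): boolean {
  return coverage !== null && coverage < min;
}
```

Returning `null` for files with no instrumented changed lines matters: a PR that only touches comments or type declarations should never fail the gate.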
2) AI unit test suggestions (copy-paste ready)
The prompt instructs the model: “You’re a senior TS/React dev. Output one Markdown code block: a runnable Jest (TS) test file. Keep it minimal but cover branches and edge cases implied by the diff. Use a file name like src/tests/.auto.test.ts.”
You can always generate suggestions, or only when coverage dips below the threshold—toggle with Secrets.
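For illustration, the prompt can be assembled from the diff roughly like this. This is a sketch: `buildTestPrompt`, the `fileName` parameter, and the exact wording are assumptions, not the real dangerfile code.

```typescript
// Illustrative prompt builder for the test-suggestion step.
// `fileName` is the changed file's base name; wording is a placeholder.
function buildTestPrompt(fileName: string, diff: string): string {
  return [
    "You're a senior TS/React dev.",
    "Output one Markdown code block: a runnable Jest (TS) test file.",
    "Keep it minimal but cover branches and edge cases implied by the diff.",
    `Use a file name like src/tests/${fileName}.auto.test.ts.`,
    "",
    "Unified diff of the change:",
    diff,
  ].join("\n");
}

// In the dangerfile this prompt goes to the chat completions API, roughly:
//   const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
//   const res = await client.chat.completions.create({
//     model: process.env.OPENAI_MODEL ?? "gpt-4o-mini",
//     messages: [{ role: "user", content: buildTestPrompt(name, diff) }],
//   });
```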
3) AI code review (snippet-first + optional inline)
- We enumerate the added lines per file and ask the AI to return JSON findings:

```json
{
  "file": "src/pages/index.tsx",
  "index": 0,
  "severity": "warn",
  "title": "...",
  "body": "..."
}
```

- For each finding, we build a concise diff snippet centered on that added line and post it as a file-scoped comment, with an explanation and a suggested fix. (This avoids wrong line numbers; the snippet shows the exact old → new change.)
- If line mapping is available, we also add an inline note as a secondary signal.
- If you set AI_REVIEW_BLOCK_ON_FINDINGS="1", any fail-severity finding fails the PR.
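A defensive parse of the model’s reply keeps malformed output from ever breaking the build. A sketch, assuming the Finding shape mirrors the JSON above (the helper names are mine, not the dangerfile’s):

```typescript
// Shape mirrors the JSON findings the model is asked to return.
interface Finding {
  file: string;
  index: number;
  severity: "info" | "warn" | "fail";
  title: string;
  body: string;
}

// Parse defensively: a malformed reply degrades to "no findings",
// never a crashed Danger run. Caps the list at AI_REVIEW_MAX_FINDINGS.
function parseFindings(raw: string, maxFindings = 8): Finding[] {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    return [];
  }
  if (!Array.isArray(data)) return [];
  const valid = (f: any): f is Finding =>
    typeof f?.file === "string" &&
    Number.isInteger(f?.index) &&
    ["info", "warn", "fail"].includes(f?.severity) &&
    typeof f?.title === "string" &&
    typeof f?.body === "string";
  return data.filter(valid).slice(0, maxFindings);
}

// Blocking is opt-in: only a "fail" severity plus the secret flag fails the PR.
function shouldBlock(findings: Finding[], blockOnFindings: boolean): boolean {
  return blockOnFindings && findings.some((f) => f.severity === "fail");
}
```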
![AI review](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qh5jt29g4dfnaa6l1pvl.png)
## “Knobs” behind Secrets (and not just on GitHub)
The behavior is entirely env-driven, so you can change policy without opening PRs:
Required:
- DANGER_GITHUB_API_TOKEN (or GITHUB_TOKEN) — lets Danger comment & set statuses.
- OPENAI_API_KEY — used for test ideas + AI code review.

Optional (defaults already in dangerfile.ts):
- MIN_FILE_COVERAGE → 80
- MAX_FILES_TO_ANALYZE → 3
- OPENAI_MODEL → gpt-4o-mini
- DANGER_ALWAYS_NEW_COMMENT → off (set "1" to force a fresh comment per push)
- AI_REVIEW_ENABLED → "1"
- AI_REVIEW_INLINE → "1"
- AI_REVIEW_BLOCK_ON_FINDINGS → off (set "1" to enforce blocking)
- AI_REVIEW_MAX_CHARS → 100000
- AI_REVIEW_MAX_FINDINGS → 8
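These defaults are easy to centralize in a small helper; here is one way to sketch it (the helper names and the CONFIG object are illustrative, not the real dangerfile code):

```typescript
// Read a numeric knob from the environment, falling back to the default
// when the variable is unset or not a valid number.
function envNum(name: string, fallback: number): number {
  const raw = process.env[name];
  const n = raw === undefined || raw === "" ? NaN : Number(raw);
  return Number.isFinite(n) ? n : fallback;
}

// Boolean flags follow the "1" convention used by the knobs above.
function envFlag(name: string, fallback: boolean): boolean {
  const raw = process.env[name];
  return raw === undefined || raw === "" ? fallback : raw === "1";
}

// Every behavior knob in one place, all overridable from CI Secrets.
const CONFIG = {
  minFileCoverage: envNum("MIN_FILE_COVERAGE", 80),
  maxFilesToAnalyze: envNum("MAX_FILES_TO_ANALYZE", 3),
  model: process.env.OPENAI_MODEL ?? "gpt-4o-mini",
  reviewEnabled: envFlag("AI_REVIEW_ENABLED", true),
  reviewInline: envFlag("AI_REVIEW_INLINE", true),
  blockOnFindings: envFlag("AI_REVIEW_BLOCK_ON_FINDINGS", false),
  maxChars: envNum("AI_REVIEW_MAX_CHARS", 100000),
  maxFindings: envNum("AI_REVIEW_MAX_FINDINGS", 8),
};
```

Because everything funnels through `process.env`, changing a threshold is a Secrets update, not a YAML or dangerfile PR.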
Not using GitHub Actions? The exact same env names work in other CI/CDs—store them in that platform’s secret manager:
- AWS CodeBuild/CodePipeline → AWS Secrets Manager / SSM Parameter Store
- Bitbucket Pipelines → Repository Variables (secured)
- Azure DevOps → Library Variable Groups and/or Azure Key Vault
- GitLab CI → CI/CD Variables (masked/protected)
- CircleCI → Project Environment Variables or Contexts
The “secret knobs” pattern is portable; only the secret store changes.
## Optional hardening
- Concurrency (already in the YAML above) prevents duplicate runs while developers push more commits.
- Fail early if LCOV is missing (add this right before the Danger step if you prefer a clear error):

```yaml
- name: Assert LCOV present
  if: ${{ github.event_name == 'pull_request' }}
  run: |
    test -f coverage/lcov.info || (echo "coverage/lcov.info not found" && exit 1)
```

- Pin tool versions (optional):

```bash
npx -y danger@<pinned> ci --dangerfile dangerfile.ts
```
## Rollout plan
- Start with suggestions only (no blocking).
- Set MIN_FILE_COVERAGE="80"; raise to 85–90 if the team is comfortable.
- Keep snippet-first review; enable inline (AI_REVIEW_INLINE="1") for extra hints.
- Flip AI_REVIEW_BLOCK_ON_FINDINGS="1" when you’re ready for strict enforcement.
## Final thoughts
This setup lets you practice AI-augmented code quality in a real repo without disrupting anything. SonarCloud continues to do the heavy lifting for static analysis and coverage dashboards; Danger + OpenAI add targeted, actionable feedback on the code you changed today—and can enforce gates when you’re ready.
- Add the workflow.
- Drop in the dangerfile.ts.
- Set your Secrets.
- Open a PR and watch the reviewer go to work.