DEV Community

137Foundry


5 Open Source Linters and Static Analysis Tools for AI-Assisted Codebases

Static analysis has always been worth running. When AI coding assistants are generating a significant share of your codebase, it becomes essential. The suggestions these tools produce default to common patterns from training data - not your project's specific conventions, not the architectural decisions your team made six months ago, not the security guidance that applies to your industry.

Linters and static analysis tools encode rules that catch the gap between "code that runs" and "code that fits." Here are the ones worth having in your pipeline when AI is doing a meaningful share of the writing.

Developer working with code analysis tools in a terminal window
Photo by Daniil Komov on Pexels

1. ESLint

ESLint is the standard for JavaScript and TypeScript static analysis. Every major framework ecosystem has an eslint configuration package tailored to its conventions, and the plugin ecosystem covers everything from React hook rules to accessibility requirements to security vulnerability detection.

For AI-assisted TypeScript projects, the settings worth configuring beyond the defaults are @typescript-eslint/no-explicit-any, which prevents the AI from using any as an escape hatch when it cannot infer a type correctly, and TypeScript's strictNullChecks compiler option (a tsconfig setting rather than a lint rule), which forces the AI's code to handle nullable values explicitly rather than assuming they are always defined.

The eslint-plugin-security plugin adds rules that catch common security misconfigurations: object injection sinks, unsafe uses of regular expressions, and non-literal calls to require. AI tools produce all of these regularly when working from common tutorial-style examples in their training data.

Run ESLint in CI with --max-warnings 0 so new warnings introduced by AI suggestions block merge rather than accumulating.
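As a sketch, the flat-config wiring for these checks might look like the following - this assumes the typescript-eslint and eslint-plugin-security packages are installed, and the exact export names depend on the versions you use:

```javascript
// eslint.config.js - a minimal sketch, not a drop-in config
import tseslint from "typescript-eslint";
import security from "eslint-plugin-security";

export default tseslint.config(
  ...tseslint.configs.recommended,
  security.configs.recommended,
  {
    rules: {
      // block "any" as an escape hatch in AI-suggested code
      "@typescript-eslint/no-explicit-any": "error",
    },
  }
);
```

Pair this with "strict": true in tsconfig.json (which includes strictNullChecks), and invoke ESLint in CI as eslint . --max-warnings 0.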

2. Pylint

Pylint is the most configurable Python linter available. It checks code style, type consistency, and common programming errors, and produces a score per module that tracks quality over time. The per-module scoring is particularly useful for monitoring whether AI-generated additions are raising or lowering the quality score for specific parts of the codebase.

Pylint's type inference is less strict than Mypy's, which makes it a better starting point for projects that do not have full type annotations yet. For AI-assisted Python codebases, the rules most worth enabling are W0611 (unused imports - AI tools frequently import things that were relevant to an earlier version of the suggestion), W0102 (dangerous default value - a common AI mistake with mutable default arguments), and C0302 (module too long - AI-generated additions tend to pile into an existing module without the natural pressure that prompts a human to split files).

```shell
# Install and run pylint with a minimum score threshold
pip install pylint
pylint src/ --fail-under=8.0
```

The --fail-under flag makes Pylint fail CI if the score drops below your threshold - useful for ensuring AI contributions do not degrade the overall code quality score.
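W0102 deserves a concrete illustration, since the failure mode is easy to miss in review. The sketch below is generic Python, not tied to any particular codebase:

```python
# What W0102 (dangerous-default-value) catches: a mutable default is
# created once, at function definition time, and shared across calls.
def tag_bad(name, tags=[]):  # pylint flags this line
    tags.append(name)
    return tags

# The fix pylint is pushing you toward: default to None, build per call.
def tag_good(name, tags=None):
    if tags is None:
        tags = []
    tags.append(name)
    return tags

print(tag_bad("a"))   # ['a']
print(tag_bad("b"))   # ['a', 'b'] - state leaked from the first call
print(tag_good("b"))  # ['b'] - each call gets a fresh list
```

The bad version looks correct in isolation, which is exactly why a mechanical check beats eyeballing it.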

"One thing AI tools reliably miss is the project-specific context that defines what 'correct' means for your codebase. A linter configuration is one of the few ways to encode that context mechanically." - Dennis Traina, 137Foundry

3. Semgrep

Semgrep is a multi-language static analysis tool with pattern-matching rules that look similar to the code they match. The free tier includes the full engine and access to the community rule registry, which has thousands of rules covering security, correctness, and best practices for most major languages.

What distinguishes Semgrep from standard linters is the ability to write custom rules in minutes. If AI suggestions keep introducing a pattern your team has decided to avoid - a deprecated internal API, a data access pattern that does not go through your caching layer, a logging call that might capture PII - you can write a rule that flags it in CI.

```yaml
# Custom Semgrep rule: flag direct DB queries that bypass the ORM
rules:
  - id: no-raw-sql-in-views
    # "..." matches any string literal passed as the first argument
    pattern: db.execute("...", ...)
    message: "Use the ORM query builder instead of raw SQL in view functions"
    languages: [python]
    severity: ERROR
    paths:
      include:
        - "**/views/*.py"
```

This kind of custom rule is the primary way to encode team conventions that standard linters do not cover.
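Assuming the rule is saved as no-raw-sql.yml (the filename here is illustrative), running it is a single command - --config and --error are standard Semgrep flags, with --error making any finding exit nonzero so CI fails:

```shell
# Run the custom rule against the codebase; nonzero exit on findings
semgrep --config no-raw-sql.yml --error src/
```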

4. golangci-lint

golangci-lint is the standard meta-linter for Go projects. It runs multiple linters in parallel and is significantly faster than running each linter individually. The default configuration includes a sensible set of checks; the full list of available linters covers security, performance, style, and correctness.

For Go codebases where AI is generating code, the linters worth enabling explicitly are gosec (security checks, including SQL injection and path traversal patterns), gocritic (code critique checks that go beyond style), and unparam (detects function parameters that are always called with the same value, a pattern AI generates when it produces overly generic functions).

```yaml
# .golangci.yml
linters:
  enable:
    - gosec
    - gocritic
    - unparam
    - errcheck
    - staticcheck
  disable:
    - exhaustruct  # too strict for most projects

issues:
  exclude-rules:
    - path: "_test.go"
      linters:
        - gosec
```

golangci-lint integrates directly with GitHub Actions and produces output that maps cleanly to GitHub's pull request annotation format.
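A minimal GitHub Actions job using the official golangci/golangci-lint-action might look like this sketch - the action versions are illustrative, so pin whatever is current for your repository:

```yaml
# .github/workflows/lint.yml - sketch, versions illustrative
name: lint
on: [pull_request]
jobs:
  golangci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: stable
      - uses: golangci/golangci-lint-action@v6
```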

5. Checkstyle (Java)

Checkstyle is the standard Java static analysis tool for enforcing coding standards. It checks code against a configurable rule set and produces reports that integrate with Maven, Gradle, and most CI systems.

For Java codebases with AI assistance, Checkstyle's most useful contributions are enforcing Javadoc requirements (AI generates undocumented methods frequently), checking cyclomatic complexity thresholds (AI tends to generate complex conditional logic when simpler approaches would work), and enforcing import ordering rules (AI pulls in whatever imports it thinks are needed without regard for your project's import conventions).

Google and Sun both publish Checkstyle configurations that are commonly used as starting points. For most teams, starting with one of these and tightening specific rules over time is more practical than writing a configuration from scratch.

```xml
<!-- Maven plugin configuration for Checkstyle in CI -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-checkstyle-plugin</artifactId>
  <version>3.3.0</version>
  <configuration>
    <configLocation>google_checks.xml</configLocation>
    <failsOnError>true</failsOnError>
    <consoleOutput>true</consoleOutput>
  </configuration>
  <executions>
    <execution>
      <goals><goal>check</goal></goals>
    </execution>
  </executions>
</plugin>
```

Code quality metrics dashboard showing linting results across multiple files
Photo by Daniil Komov on Pexels

Using These Tools Together

The most effective approach is layered. Fast local checks run pre-commit to catch the most common issues before pushing. Linters and static analysis run in CI on every PR and block merge if they fail. Trend tracking at the repository level catches systemic drift that point-in-time checks miss.
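For the local layer, the pre-commit framework is one common way to wire fast checks. The sketch below assumes the mirrors-eslint and pylint hook repositories; the revs are illustrative, so pin real tags:

```yaml
# .pre-commit-config.yaml - sketch, revs illustrative
repos:
  - repo: https://github.com/pre-commit/mirrors-eslint
    rev: v9.0.0
    hooks:
      - id: eslint
  - repo: https://github.com/pylint-dev/pylint
    rev: v3.2.0
    hooks:
      - id: pylint
```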

Automated analysis does not replace code review - it redirects it. When tools handle surface-level pattern checking, human reviewers can focus on the questions tools cannot answer: does this code fit the design, does it make the right trade-offs for this context, does it introduce dependencies that will complicate future changes. That redirection is especially valuable when AI assistance is increasing the raw volume of code under review, because reviewers' time becomes the scarcest resource in the system.

The web development team at 137Foundry sets up these kinds of quality pipelines as part of building production applications - so the tooling is in place before the codebase grows to the point where retrofitting it becomes expensive. For the broader governance considerations around AI tools in production workflows, the guide on using AI coding tools without technical debt covers the process and policy side of keeping AI-assisted codebases maintainable.
