DEV Community

dohko

7 Agentic Coding Patterns That Replace Manual Dev Workflows (2026 Edition)

AI coding tools crossed a line in 2026. They're no longer just suggesting the next line — they're running multi-step workflows autonomously. File creation, test writing, debugging, deployment.

Here are 7 patterns where agentic coding actually works better than doing it yourself, with code you can adapt.

1. Autonomous Test Generation

Instead of writing tests manually, describe what you want tested and let the agent generate + run + fix them.

# agent_test_gen.py — Prompt pattern for autonomous test generation
SYSTEM_PROMPT = """You are a test generation agent. Given a function:
1. Analyze the function signature and docstring
2. Generate pytest tests covering: happy path, edge cases, error cases
3. Run the tests
4. Fix any failures
5. Return the final passing test file

Rules:
- Minimum 5 test cases per function
- Include at least 1 edge case and 1 error case
- Use descriptive test names: test_<function>_<scenario>
"""

def generate_tests(source_file: str, function_name: str) -> str:
    """Agent loop: generate → run → fix → repeat."""
    import subprocess

    source = open(source_file).read()
    prompt = f"Generate tests for `{function_name}` in:\n```python\n{source}\n```"

    max_attempts = 3
    for attempt in range(max_attempts):
        test_code = call_llm(SYSTEM_PROMPT, prompt)

        # Write and run
        with open("test_generated.py", "w") as f:
            f.write(test_code)

        result = subprocess.run(
            ["pytest", "test_generated.py", "-v"],
            capture_output=True, text=True
        )

        if result.returncode == 0:
            return test_code  # All tests pass

        # Feed errors back to the agent
        prompt = f"Tests failed:\n{result.stdout}\n{result.stderr}\nFix them."

    return test_code  # Best effort after max attempts

Why it works: The agent iterates. It writes tests, runs them, reads failures, and fixes — something autocomplete can't do.
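The snippet above (and several below) calls a `call_llm` helper that the post never defines. A minimal backend-agnostic sketch, where the injectable `backend` callable is my own convention rather than anything from the original:

```python
from typing import Callable, Optional

def call_llm(system: str, user: str,
             backend: Optional[Callable[[str, str], str]] = None) -> str:
    """Send a system + user prompt to whichever chat backend is injected.

    Keeping the provider behind a plain callable means the agent loop can be
    unit-tested with a fake backend and swapped to any SDK in production.
    """
    if backend is None:
        raise RuntimeError("Inject a backend, e.g. a thin wrapper around your provider's SDK")
    return backend(system, user)
```

In tests you pass a lambda; in production you pass a one-line wrapper around your provider's chat endpoint.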

2. Multi-File Refactoring Agent

Rename a function and the agent updates every import, test, and reference across the codebase.

# refactor_agent.py
import ast
from pathlib import Path

def find_all_references(root: str, old_name: str) -> list[dict]:
    """Scan codebase for all references to a symbol."""
    references = []
    for path in Path(root).rglob("*.py"):
        source = path.read_text()
        try:
            tree = ast.parse(source)
            for node in ast.walk(tree):
                if isinstance(node, ast.Name) and node.id == old_name:
                    references.append({
                        "file": str(path),
                        "line": node.lineno,
                        "col": node.col_offset,
                        "type": "name"
                    })
                elif isinstance(node, ast.ImportFrom):
                    for alias in node.names:
                        if alias.name == old_name:
                            references.append({
                                "file": str(path),
                                "line": node.lineno,
                                "type": "import"
                            })
        except SyntaxError:
            continue
    return references

def refactor(root: str, old_name: str, new_name: str):
    """Find and replace all references safely."""
    refs = find_all_references(root, old_name)
    print(f"Found {len(refs)} references across {len(set(r['file'] for r in refs))} files")

    # Group by file for batch edits
    by_file = {}
    for ref in refs:
        by_file.setdefault(ref["file"], []).append(ref)

    for filepath, file_refs in by_file.items():
        content = Path(filepath).read_text()
        # Simple text replacement; a production version rewrites via the AST
        # to avoid clobbering names that merely contain the old name
        updated = content.replace(old_name, new_name)
        Path(filepath).write_text(updated)
        print(f"  Updated {filepath} ({len(file_refs)} refs)")

Key insight: The AST scan finds references that grep would miss (aliased imports, string references in decorators). An agentic version would also run tests after each file change to catch regressions.
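The AST rewriting the comment alludes to can be sketched with `ast.NodeTransformer` and `ast.unparse` (Python 3.9+). This is a minimal illustration, not a production tool; note that `ast.unparse` drops comments and formatting, which is why real refactoring tools use concrete-syntax libraries like LibCST:

```python
import ast

class RenameSymbol(ast.NodeTransformer):
    """Rename exact Name matches; names like `prefetch` stay untouched
    when renaming `fetch`, which a plain string replace would clobber."""
    def __init__(self, old_name: str, new_name: str):
        self.old_name, self.new_name = old_name, new_name

    def visit_Name(self, node: ast.Name) -> ast.Name:
        if node.id == self.old_name:
            node.id = self.new_name
        return node

def rename_in_source(source: str, old_name: str, new_name: str) -> str:
    """Return source with the symbol renamed via AST rewriting."""
    tree = RenameSymbol(old_name, new_name).visit(ast.parse(source))
    return ast.unparse(tree)
```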

3. Self-Healing CI Pipeline

When CI fails, the agent reads the error, proposes a fix, and opens a PR.

# .github/workflows/self-heal.yml
name: Self-Healing CI
on:
  workflow_run:
    workflows: ["CI"]
    types: [completed]

jobs:
  auto-fix:
    if: ${{ github.event.workflow_run.conclusion == 'failure' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Get failure logs
        run: |
          gh run view ${{ github.event.workflow_run.id }} --log-failed > failure.log
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Agent fix
        run: |
          python3 agent_fix.py failure.log
          # agent_fix.py reads the log, identifies the issue,
          # applies a fix, runs tests locally, commits if green

      - name: Create PR
        if: success()
        run: |
          git checkout -b fix/auto-heal-${{ github.run_id }}
          git add -A
          git commit -m "fix: auto-heal CI failure"
          git push origin HEAD
          gh pr create --title "fix: auto-heal CI failure" --body "Automated fix for CI failure in run ${{ github.event.workflow_run.id }}"

Reality check: This works well for dependency issues, type errors, and linting failures. It won't fix logic bugs — but it handles 40-60% of CI failures in practice.
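A first-pass triage inside a hypothetical `agent_fix.py` might look like this. The categories and regexes are illustrative, tuned to pip/mypy/flake8-style output; anything unrecognized is treated as a likely logic bug and escalated to a human:

```python
import re

# Map recognizable failure signatures to a fix strategy.
FAILURE_PATTERNS = [
    ("dependency", re.compile(r"ModuleNotFoundError|No matching distribution")),
    ("type_error", re.compile(r"\berror:.*\[[\w-]+\]")),           # mypy-style
    ("lint",       re.compile(r"\b[EWF]\d{3}\b|would reformat")),  # flake8 / black
]

def classify_failure(log: str) -> str:
    """Return the first matching failure category, or 'unknown'."""
    for category, pattern in FAILURE_PATTERNS:
        if pattern.search(log):
            return category
    return "unknown"  # probably a logic bug: do not auto-fix
```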

4. Documentation Sync Agent

Keep docs in sync with code automatically. When a function signature changes, the agent updates the docs.

# doc_sync_agent.py
import ast
import re

def extract_functions(source: str) -> dict:
    """Extract function signatures and docstrings."""
    tree = ast.parse(source)
    functions = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = [a.arg for a in node.args.args]
            docstring = ast.get_docstring(node) or ""
            functions[node.name] = {
                "args": args,
                "docstring": docstring,
                "lineno": node.lineno
            }
    return functions

def check_doc_drift(source_file: str, doc_file: str) -> list[str]:
    """Find functions where docs don't match code."""
    source = open(source_file).read()
    docs = open(doc_file).read()

    functions = extract_functions(source)
    drifts = []

    for name, info in functions.items():
        # Check if function is documented
        if name not in docs:
            drifts.append(f"MISSING: `{name}({', '.join(info['args'])})` not in docs")
            continue

        # Check if args match
        for arg in info["args"]:
            if arg != "self" and arg not in docs:
                drifts.append(f"DRIFT: `{name}` param `{arg}` missing from docs")

    return drifts
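Detecting drift is only half the job. A small helper (my own sketch, with a hypothetical Markdown convention for doc entries) can draft the missing entry for the agent to fill in:

```python
def draft_doc_entry(name: str, args: list[str]) -> str:
    """Render a Markdown stub for an undocumented function.

    `self` is skipped; the agent (or a human) replaces each TODO with a
    real description rather than this helper inventing one.
    """
    params = [a for a in args if a != "self"]
    lines = [f"### `{name}({', '.join(params)})`", ""]
    lines += [f"- `{a}`: TODO" for a in params]
    return "\n".join(lines) + "\n"
```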

5. Dependency Audit Agent

Scan your package.json or requirements.txt, check for vulnerabilities, outdated packages, and license issues — then fix them.

# dep_audit_agent.py
import json
import subprocess

def audit_python_deps() -> dict:
    """Run a full dependency audit."""
    # Check for known vulnerabilities. pip-audit exits non-zero when it
    # finds any, so parse stdout instead of gating on the return code.
    vuln_result = subprocess.run(
        ["pip-audit", "--format", "json"],
        capture_output=True, text=True
    )
    vulns = json.loads(vuln_result.stdout) if vuln_result.stdout.strip() else []

    # Check for outdated
    outdated_result = subprocess.run(
        ["pip", "list", "--outdated", "--format", "json"],
        capture_output=True, text=True
    )
    outdated = json.loads(outdated_result.stdout)

    return {
        "vulnerabilities": len(vulns),
        "outdated": len(outdated),
        "critical": [v for v in vulns if v.get("severity") == "critical"],
        "update_candidates": [
            f"{p['name']}: {p['version']} → {p['latest_version']}"
            for p in outdated[:10]
        ]
    }

def auto_fix_vulnerabilities(audit: dict) -> list[str]:
    """Attempt to fix critical vulnerabilities by updating packages."""
    fixes = []
    for vuln in audit["critical"]:
        pkg = vuln["name"]
        fixed_version = (vuln.get("fix_versions") or [None])[0]
        if fixed_version:
            subprocess.run(["pip", "install", f"{pkg}>={fixed_version}"])
            fixes.append(f"Updated {pkg} to {fixed_version}")
    return fixes
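Blindly installing the latest version of everything is risky. One common policy, assumed here rather than taken from the original, is to auto-apply only upgrades that stay within the same major version and route major bumps to a human-reviewed PR:

```python
def is_safe_update(current: str, latest: str) -> bool:
    """Auto-apply only upgrades that keep the same major version.

    Assumes semver-ish 'X.Y.Z' strings; anything non-numeric is treated
    as unparseable and never auto-updated.
    """
    try:
        return int(current.split(".")[0]) == int(latest.split(".")[0])
    except ValueError:
        return False
```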

6. Code Review Agent

Automate first-pass code review. The agent checks for patterns, not just syntax.

# review_agent.py
REVIEW_CHECKLIST = """
Review this PR diff for:
1. **Security**: SQL injection, hardcoded secrets, unsafe deserialization
2. **Performance**: N+1 queries, missing indexes, unbounded loops
3. **Reliability**: Missing error handling, race conditions, resource leaks
4. **Maintainability**: Functions >50 lines, magic numbers, missing types

For each issue found, provide:
- Severity: 🔴 critical / 🟡 warning / 🔵 suggestion
- File and line number
- What's wrong
- How to fix it (with code)

If the code looks good, say so. Don't invent problems.
"""

def review_pr(diff: str) -> str:
    """Run an AI code review on a diff."""
    response = call_llm(
        system=REVIEW_CHECKLIST,
        user=f"Review this diff:\n```diff\n{diff}\n```"
    )
    return response

Pro tip: Run this as a GitHub Action on every PR. It catches 80% of the issues a human reviewer would flag, and it responds in seconds instead of hours.
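Large PRs blow past context limits, so a useful pre-processing step is to split the unified diff per file and review each chunk separately. The parsing below assumes standard `git diff` output with `diff --git a/... b/...` headers:

```python
def split_diff_by_file(diff: str) -> dict[str, str]:
    """Split a unified git diff into per-file chunks keyed by path."""
    chunks: dict[str, str] = {}
    current_file, lines = None, []
    for line in diff.splitlines():
        if line.startswith("diff --git"):
            if current_file:
                chunks[current_file] = "\n".join(lines)
            current_file = line.rsplit(" b/", 1)[-1]  # path after the last " b/"
            lines = []
        lines.append(line)
    if current_file:
        chunks[current_file] = "\n".join(lines)
    return chunks
```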

7. Feature Flag Cleanup Agent

Old feature flags are tech debt. An agent can find flags that are 100% rolled out and remove them.

# flag_cleanup_agent.py
import re
from pathlib import Path

def find_stale_flags(codebase: str, flag_config: dict) -> list[dict]:
    """Find feature flags that are fully rolled out and can be removed."""
    stale = []

    for flag_name, config in flag_config.items():
        if config.get("rollout_percentage") != 100:
            continue
        if config.get("age_days", 0) < 14:
            continue  # Wait at least 2 weeks after full rollout

        # Find all references in code
        references = []
        for path in Path(codebase).rglob("*.py"):
            content = path.read_text()
            if flag_name in content:
                lines = [
                    (i+1, line.strip()) 
                    for i, line in enumerate(content.split("\n")) 
                    if flag_name in line
                ]
                references.extend([
                    {"file": str(path), "line": ln, "code": code}
                    for ln, code in lines
                ])

        if references:
            stale.append({
                "flag": flag_name,
                "rollout": "100%",
                "age_days": config["age_days"],
                "references": references
            })

    return stale
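Finding stale flags is the detection half. The removal half can start by inlining each check to the literal `True` and letting a follow-up pass (or the agent) collapse the dead branches. This sketch assumes a hypothetical `is_enabled("flag")` call style:

```python
import re

def inline_flag(source: str, flag_name: str) -> str:
    """Replace every fully-rolled-out flag check with the literal True.

    A later simplification pass then collapses `if True:` branches and
    deletes the dead `else` arms.
    """
    pattern = re.compile(
        rf"""(?:\w+\.)?is_enabled\(\s*["']{re.escape(flag_name)}["']\s*\)"""
    )
    return pattern.sub("True", source)
```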

The Pattern Behind the Patterns

Every agentic workflow follows the same loop:

Analyze → Act → Verify → Fix → Repeat

The difference from traditional automation is step 4: when something goes wrong, the agent adapts. It reads the error, adjusts its approach, and tries again. That's what makes it agentic — not the AI, but the feedback loop.
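The loop can be written down as one higher-order function; the four callables are placeholders for whatever each pattern above plugs in (test generation, refactoring, CI repair, and so on):

```python
from typing import Any, Callable, Optional, Tuple

def agent_loop(
    analyze: Callable[[], Any],
    act: Callable[[Any], Any],
    verify: Callable[[Any], Tuple[bool, str]],
    fix: Callable[[Any, str], Any],
    max_attempts: int = 3,
) -> Optional[Any]:
    """Analyze → Act → Verify → Fix → Repeat, capped to bound cost."""
    plan = analyze()
    for _ in range(max_attempts):
        result = act(plan)
        ok, feedback = verify(result)
        if ok:
            return result
        plan = fix(plan, feedback)  # the agentic step: adapt, then retry
    return None  # budget exhausted; hand off to a human
```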

Key Takeaways

  1. Start with CI/CD — self-healing pipelines have the clearest ROI
  2. Always verify — agents that act without checking their work are dangerous
  3. Cap iterations — set max_attempts to avoid infinite loops (and costs)
  4. Log everything — you need to audit what the agent did
  5. Human in the loop — use PRs, not direct commits, for agent changes

This is part of the "AI Engineering in Practice" series — practical guides for developers building with AI. Follow for more.
