Git hooks are a practice we see more and more in projects. We often adopt them because we've seen them elsewhere, or because they seem like a good idea to ensure code quality. But is that really the case?
Before going further, let's explain what a git hook is. Git can run scripts at key points in its workflow, such as when you create a commit or push (pre-commit, pre-push, and so on).
💡 Husky is a great tool to discover and create your first hooks
With a pre-commit hook, a script runs every time you execute the "git commit" command. If it fails, the commit is aborted, and you're forced to fix the problem before you can commit.
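As a concrete illustration, here is a minimal sketch of what such a script could look like. The "DO NOT COMMIT" marker is an assumed team convention, not a git feature — the point is only to show the mechanism: the hook runs, and a non-zero exit aborts the commit.

```shell
#!/bin/sh
# .git/hooks/pre-commit — minimal sketch (the blocked marker is an
# illustrative convention). Git runs this script before creating the
# commit; exiting non-zero aborts it.

has_forbidden_marker() {
  # Look for a "DO NOT COMMIT" marker anywhere in the staged diff.
  git diff --cached | grep -q 'DO NOT COMMIT'
}

if has_forbidden_marker; then
  echo "Staged changes contain 'DO NOT COMMIT'; aborting the commit." >&2
  exit 1
fi
```

In a real project, this is where teams typically plug in their linter or test runner instead.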
To make collaboration easier within the team, you might think "Great, I'll be able to check that":
- all my tests pass
- my linter passes
- my formatter passes
- my commit name follows conventions
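That last check, for example, is typically done with a commit-msg hook. A minimal sketch might look like this — the accepted prefixes are an assumed convention (conventional-commits style), not something git enforces:

```shell
#!/bin/sh
# .git/hooks/commit-msg — minimal sketch; the accepted prefixes are an
# assumed team convention, adapt them to yours.
# Git invokes this hook with the path to the commit message file as $1.

check_commit_msg() {
  # Accept messages like "feat: add login form" or "fix(api): handle 404".
  grep -qE '^(feat|fix|docs|chore|refactor|test)(\([^)]+\))?: .+' "$1"
}

if [ -n "${1:-}" ] && ! check_commit_msg "$1"; then
  echo "Commit message does not follow the convention (e.g. 'feat: ...')." >&2
  exit 1
fi
```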
That's where the problems start. Because yes, the feedback will be faster, but at what cost?
## A commit is not a delivery
With hooks, we run tests, the linter, the formatter, and commit name validation every time we want to commit. Making a commit becomes slow and frustrating. We're in our flow, we've made progress on our feature, and we have to wait for everything to pass before we can share our work.
And what happens in the end? We bypass the hooks with --no-verify, just to get back to a fast dev flow.
💡 Git has a --no-verify option on its commands to skip hooks
The other problem is that hooks push us toward "perfect" code at every commit. But a commit is not a delivery. Let's take a concrete example: we're working in TDD, and we've just written a red test that describes the behavior we want to implement. We want to commit this test before moving on to the implementation. Except that the pre-commit hook runs the build, the linter, or the type check; it fails, and the commit is blocked. We're forced to either implement right away or use --no-verify. The red-green-refactor cycle, which relies precisely on intermediate states, becomes painful.
That's the whole point of a commit: saving a state so we can go back if needed. With overly strict hooks, we're asked for perfect code at every save. We lose that freedom.
And the problem doesn't stop at the commit. Even when the pre-commit hook is limited to lint and build, we often find tests in a pre-push hook. Result: we can commit locally, but can't push until the tests pass. If we're doing pair or mob programming and want to share a work-in-progress state, it's blocked. If we need a colleague's opinion on an approach before continuing, it's blocked. We fall back into the same logic: "perfect" code just to be able to collaborate.
And that's not the only problem...
## Double pain, zero gain
The other issue we create with hooks is duplication. The same checks run locally via hooks, then a second time in the pipeline (GitHub Actions, GitLab CI, etc.). We wait for the hooks to pass, then wait for CI to run the exact same thing. We don't save time, we lose it.
And it's not just a matter of time. We end up maintaining checks in two places: the hook config and the pipeline. We update an option in a command on the CI side but forget to do it on the hook side, or vice versa. The two diverge, and we only notice when something breaks.
On top of that, we risk falling into the "it works on my machine" trap, or the exact opposite. The local environment and CI are not the same thing. Tool versions, dependencies, config... there are many reasons the result could be different. We add cognitive load trying to understand why it passes locally but fails in the pipeline, or vice versa.
Ultimately, it's the CI that decides whether we can merge or not. It's the source of truth. If we trust it to block a merge, why duplicate that work locally?
But then, if hooks cause so many problems, why do we keep using them?
## What we're really compensating for
The real problem is our CI's feedback loop. The pipeline takes 20 minutes, we get no notification on failure, and most importantly, everything stays opaque: one big step mixes a bunch of things, and to find out what broke we have to dig through the logs. It's frustrating.
Hooks are a response to that frustration. We want faster, clearer feedback, closer to us. And that makes sense.
In any case, hooks are a band-aid, not a solution. We're treating the symptom but not the cause. The real question isn't "how to get faster feedback locally", it's "why is my feedback slow and unclear in the first place".
If the pipeline takes 20 minutes, adding hooks locally doesn't make the pipeline faster. We've just moved the problem. The day the hook passes but CI fails, we're at the same point as before, except we've wasted more time.
And if CI feedback isn't clear, it's rarely a tooling problem. It's often that we have one big step doing too many things, that errors are buried in logs, that nobody took the time to make it readable. Hooks don't solve any of that.
The real question is: how do we reduce the feedback loop time and make failures clearer? And the answer is rarely "add another layer locally". So what do we do?
## Fixing the source
Rather than compensating locally, we should invest in the Developer eXperience of our pipeline.
First, split CI into clear, parallelized steps. One step for lint, one for tests, one for the build. If lint fails, we see it immediately — no need to dig through logs of a monolithic step. And if steps run in parallel, we reduce total time.
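As an illustration, such a split pipeline in GitHub Actions could look like the sketch below. The job names and npm scripts are assumptions about the project; what matters is that the three jobs are independent, so they run in parallel and each failure is reported on its own:

```yaml
# .github/workflows/ci.yml — sketch of clear, parallelized steps.
name: CI
on: [push]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run lint   # a lint failure is visible at a glance
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run build
```

Jobs with no `needs` dependency between them run in parallel, so total pipeline time shrinks to that of the slowest job.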
Next, have notifications when CI fails. We shouldn't have to manually check whether the pipeline passed. A Slack message, an email, a GitHub notification... whatever, but we need to know quickly.
And finally, make errors readable. When something fails, we should understand why in a few seconds, not after 5 minutes of reading logs. It's a one-time effort that benefits the whole team, unlike hooks which remain an individual solution.
💡 If you still want a hook while improving CI, lint-staged lets you run formatting and linting only on staged files, with automatic fixing. It doesn't block the flow — it's an acceptable compromise during the transition.
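As a sketch, a minimal lint-staged setup in package.json could look like this — the globs and commands are assumptions about your stack:

```json
{
  "lint-staged": {
    "*.{js,ts}": ["eslint --fix", "prettier --write"]
  }
}
```

A Husky pre-commit hook then simply runs `npx lint-staged`, which applies those commands only to the files you actually staged.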
But be careful not to put everything in the hook. Formatting and linting work well in pre-commit because they're fast and auto-fixable: run, fix, move on. Secret detection is another legitimate case: if CI catches it, it's already too late — the secret is in the git history. The pre-commit hook is the only place where blocking makes sense. Tests are different. They're slow, they fail for legitimate reasons (a red test in TDD for example), and they have nothing to auto-fix. Putting them in a hook, whether pre-commit or pre-push, is falling back into the problem we're trying to solve: blocking the flow to force "perfect" code before being able to collaborate. Tests are CI's job.
But with the arrival of AI agents, the problem takes on a whole new dimension.
## AI amplifies the problem, the commit changes its role
With AI, the commit is no longer just a save — it's a supervision mechanism. AI moves fast, it touches many files, and we don't read every line in real time. We need small increments to stay in control: reviewing a readable diff rather than 20 files changed at once, doing a precise revert if the AI goes in the wrong direction. The faster the AI goes, the more we need to commit often. And the more we commit, the more hooks become a bottleneck. It's a paradox: the tool meant to "ensure quality" is precisely what prevents the mechanism that lets us control the quality of the AI's work.
And when a developer uses --no-verify, you might think they're in a rush or being careless. But when an AI — with no ego, no impatience, no laziness — also bypasses hooks, that says something else. Concrete example: we ask the AI to add a field to a form. An unrelated test fails in the pre-commit hook. The AI can't commit. We told it not to touch the tests. But the hook blocks it. So it does what a dev would do: it works around it. It adds code to make the test pass without directly modifying it, or it uses --no-verify. The result: the AI spirals, code quality degrades, and we end up having to revert everything because we couldn't commit before things went off the rails. If even an emotionless agent bypasses hooks, the mechanism is the problem, not the people.
Yet, we still want the AI to run tests, to check the lint. The problem isn't the checks — it's that git hooks are a single mechanism for two actors with opposing needs. Humans want freedom: commit an intermediate state, try something, go back. AI needs checks, but adapted to it, with feedback we control.
The AI still needs to actually run those checks, though. With Claude Code for example, you can write instructions in the project's CLAUDE.md that the agent follows every session, like "run tests after every modification". And if the conversation context gets compacted and instructions might be lost, you can put them in a dedicated skill to make sure they're always there.
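For example, such instructions could be a short section in the project's CLAUDE.md — the exact wording here is illustrative:

```markdown
## Working agreements
- Run `npm run test` after every modification.
- Commit in small increments so each change stays reviewable.
- Never bypass checks with `--no-verify`.
```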
The remaining question is: what happens when it fails? That's where it gets interesting. Tools like Claude Code offer lifecycle hooks internal to the agent: you can configure a script that runs when a command fails (for example tests), and that injects context into the agent's conversation. Concretely, when tests fail, instead of blocking the commit, the hook guides the AI: "Analyze whether this failure is related to your changes. If yes, fix it. If not, inform the developer." It's not a rigid rule, it's guidance. The AI receives the context and makes a judgment. If the failing test is directly related to what it just modified, it fixes it on its own and continues. If it's an unrelated test, it commits to save the current state and informs us: "This test failed but it has nothing to do with my changes. Want to look at it together?"
The git hook applies a blind rule — it passes or it blocks, for everyone, without distinction. The agent hook provides context and leaves judgment to the AI, while returning control to the developer. And it doesn't stop at local: the agent can also react to CI events, receive a failure notification, and analyze or fix the error automatically.
Concretely, with Claude Code, it looks like this in the project's .claude/settings.json:
```json
{
  "hooks": {
    "PostToolUseFailure": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "if": "Bash(npm run test|vitest)",
            "command": "echo '{\"additionalContext\": \"Some tests failed. Analyze whether these failures are related to your changes. If yes, fix them. If not, commit your changes and inform the developer about the failing tests unrelated to your work.\"}'"
          }
        ]
      }
    ]
  }
}
```
The hook only triggers when npm run test or vitest fails. It doesn't inject a rule — it injects context that the AI uses to decide on the course of action.
💡 To go further on Claude Code hooks (PostToolUseFailure, additionalContext, matcher, etc.), the full documentation is available here: https://code.claude.com/docs/hooks-guide
## One tool, multiple contexts
What I'm describing here applies to a specific context: a team in a company, with CI, and developers who work together daily. In this setting, hooks compensate for a problem we have the means to solve differently.
But git hooks didn't come from nowhere. They were designed for projects where contributors are numerous, at very different skill levels, and where nobody has a grasp on the entire project. That's exactly the case with open source.
On an open source project, the constraints are different. Contributors don't necessarily know the project's conventions, they have very different local setups, and they sometimes only contribute once. An occasional contributor who opens a PR isn't going to spend 20 minutes figuring out why the pipeline failed. If a hook fixes their formatting and flags a lint error before they even push, that's time saved for everyone: for them and for the maintainers who review.
In this context, hooks don't compensate for a bad CI. They serve as an accessible safety net that reduces noise in PRs and accelerates feedback for people who might only contribute once. It's a legitimate use, and that's why we find them in many well-maintained open source projects. Next.js, Angular, webpack, Storybook, NestJS, Prisma... all use hooks to run formatting and linting on staged files before each commit. Not tests, not the build. Just what's fast and auto-fixable.
Git hooks aren't a bad tool. In open source, they're a proven safety net. But in a company team, with CI and the means to invest in it, using them to compensate for a slow feedback loop is putting on a band-aid. The problem isn't local — it's in the pipeline. That's where we need to act.