DEV Community

Dzianis Karviha

Claude Code in Production: 40% Productivity Increase on a Large Project

Introduction

Over the past 4 months since August 2025, I've been actively integrating Claude Code into my development workflow. After this period of experimentation and practice, I believe Claude Code is now mature enough to be successfully integrated into production workflows on projects of various scales.

I'm a solo maintainer of a 350k+ LOC codebase (PHP, TypeScript/React, React Native, Terraform, Python) with 10+ years of commercial software development experience. Since August, 80%+ of all code changes have been written by Claude Code: generated, then corrected by Claude Code after my review, with only minimal manual refactoring. Every change is reviewed by me, and I've shipped many updates of different sizes this way.

I've compiled my experience with Claude Code into this guide. I'll share my approach to using Claude Code in the development workflow, starting from the basic tools and building up to a complete overview of the workflows I use.

This guide focuses on integrating Claude Code into existing large projects. While my experience is as a solo maintainer, the approaches described apply to engineering processes in general and can be adapted for teams. The principles should also work well with other tools like Cursor CLI or Codex.

This guide complements the official Claude Code documentation by focusing on real-world integration patterns. For API reference and basic usage, see https://code.claude.com/docs.

1. Before Claude Code

Like most software developers today, I've been using GitHub Copilot as an intelligent autocomplete tool. And Claude Code wasn't my first attempt to delegate complete tasks to an LLM.

In early 2025, I tried to adopt Cursor IDE (https://cursor.com/) for my workflow. It's built on top of VS Code. The disadvantage of working in an unfamiliar IDE outweighed all the benefits I gained from Cursor. I'm a big fan of JetBrains products and have been using IntelliJ IDEA for a decade. I tried using both IDEs—IntelliJ IDEA for regular work and Cursor for delegating tasks to the LLM—but this approach led to constant context switching and loss of focus. Eventually, I stopped using Cursor entirely.

Then I discovered Claude Code. This tool has a completely different design. Instead of providing a full IDE, it's a terminal-based CLI tool with a thin integration layer for your existing IDE. This allows you to integrate LLM assistance without changing your development environment. Such a simple setup provides huge opportunities for customization and fits various workflows.

2. Challenges

Integrating Claude Code into an existing codebase comes with two core challenges.

Context window limits

Claude Code has a 200k token context window—less than 5% of this project's codebase. You must carefully select which information to provide: include code that shows your patterns; exclude anything irrelevant. Too little context leads to incorrect implementations; too much leads to context pollution and degraded performance.

Balancing speed, quality, and oversight

Implementation with an LLM should be at least as fast as doing it yourself—otherwise, you lose the benefit. This creates a tradeoff:

  • Quality: Early on, Claude produces inconsistent code that doesn't follow your patterns—forcing refactoring or detailed prompt crafting.
  • Oversight: Small tasks can run with minimal supervision; architectural changes need close attention.

The challenge is developing intuition for when to step in and when to let Claude work autonomously.

3. Claude Code Installation

This guide is focused on Claude Code. To use it, you need a Claude.ai account and to install Claude Code on your system.
Follow the official guide https://code.claude.com/docs/en/quickstart#step-1:-install-claude-code.

Important: All code you work with in Claude Code is sent to Anthropic's servers for processing. Before using it on any project, get explicit consent from the project owner or customer, and review Anthropic's privacy policy; this applies especially to proprietary code or projects with strict data handling requirements. Do not use Claude Code on a project until that consent is received.

To try out the tool, you'll need to buy at least a Pro Plan subscription costing $20/month. For regular usage, you'll likely need a Max subscription ($100/month). Claude sets usage limits per 5-hour window—with the Pro plan, I'd hit the limit within 1 hour of active work. The Max plan (5x the capacity) fits my workflow well, and I usually use it close to its limits.

Here is the official Claude documentation https://platform.claude.com/docs/en/ and the Claude Code documentation https://code.claude.com/docs/en. I'll reference specific parts in the subsequent sections.

4. Building Blocks

This section is not documentation for Claude Code features, but rather a short overview in which I highlight specific Claude Code tools and aspects which are important for my use cases. You might find different ways to apply them for your scenarios.

Simple prompt

It's the simplest way to use Claude Code: run `claude` in your working directory, type any prompt, press Enter, and wait for the result. Claude Code will try to perform the task you've just provided.

CLAUDE.md file

https://platform.claude.com/docs/en/agent-sdk/modifying-system-prompts#method-1-claude-md-files-project-level-instructions

The CLAUDE.md file is a simple but powerful feature of Claude Code. The idea is straightforward: CLAUDE.md content is always included in the context when Claude Code works with files in its directory. In CLAUDE.md, you can mention documentation files, source code files, or any other references Claude should consider. There's no special syntax—just write naturally, like: "See docs/git-commit-format.md when preparing Git commits." Claude Code will include referenced files only when relevant to the current task.

Claude Code also supports nested CLAUDE.md files. For example, given the following structure:

root
|_CLAUDE.md
|_dir1
  |_CLAUDE.md
  |_dir2
    |_CLAUDE.md
    |_MyComponent.tsx

When Claude Code reads the MyComponent.tsx file, it includes root/CLAUDE.md, root/dir1/CLAUDE.md, and root/dir1/dir2/CLAUDE.md in the context.

This hierarchy lets you organize context by scope: project-wide conventions in the root CLAUDE.md, module-specific patterns in subdirectory files. Claude gets both general and specific context when working with module files.
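For illustration, a module-level CLAUDE.md might look like this (the module, file names, and rules below are hypothetical):

```markdown
# Tasks module

Task management inside projects: CRUD, comments, assignments.

- Components follow the same container/presentational split as the rest of the frontend.
- See ./docs/task-state-machine.md for allowed task status transitions.
- All mutations go through the hooks in ./actions; never call the API client directly from views.
```

Each nested file should stay short: it only needs to add what the parent CLAUDE.md files don't already cover.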

Subagents

https://code.claude.com/docs/en/sub-agents

The official doc says:

> Custom subagents in Claude Code are specialized AI assistants that can be invoked to handle specific types of tasks. They enable more efficient problem-solving by providing task-specific configurations with customized system prompts, tools and a separate context window.

Each subagent has its own context, which helps reduce the context usage of the main conversation. Subagents also have other features, such as limiting which tools are available to the agent. In many guides on the Internet, you'll find descriptions of pipelines built from subagents, like: Requirements Writer -> Requirements Reviewer -> Software Architect -> Backend Developer -> Frontend Engineer -> Code Reviewer -> etc.

Personally, I didn't find this approach to work well. That doesn't mean it cannot be effective, but I found it beneficial to have a single conversation holding the feature context (which, of course, is constrained by the context window size). Mostly, I use subagents for code review. During a code review, the subagent runs with its own context (the base process just provides an overview of what should be reviewed), so it's not biased by the decisions made during the implementation.

There are many public collections of Claude Code subagents, for example https://github.com/VoltAgent/awesome-claude-code-subagents.

However, I suggest building your own subagents (and other prompts) designed specifically for your project rather than using public ones. Generic prompts won't understand your codebase patterns and conventions. See the "It's all about context" section for more on why minimal, project-specific prompts work better.

Technically, a subagent is just a markdown file located in .claude/agents directory or its subdirectories. Claude Code uses it as a base for building the context in the child process.

Claude autonomously decides when to invoke a specific subagent (based on the subagent's description). You can also explicitly ask Claude to use a specific subagent ("use the backend-code-reviewer subagent for this task", and so on).

The following subagents are used on my project:

backend-code-reviewer
frontend-code-reviewer
mobile-code-reviewer

When I say "separate context," I mean that subagents run in their own context window, independent from your main conversation. The main process shares only what you explicitly pass to the subagent — it doesn't inherit the full conversation history. This keeps the subagent focused and unbiased by previous decisions, which is especially valuable for code review.
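As a sketch, a minimal review subagent definition might look like the following (the file content is hypothetical; the frontmatter fields follow the format described in the subagents documentation):

```markdown
---
name: backend-code-reviewer
description: Reviews backend changes for pattern violations, missing tests, and security issues. Use after completing backend implementation work.
---

You are a senior backend engineer reviewing a change in this codebase.

Review only the files passed to you. Check that:

- the change follows the module structure and patterns described in CLAUDE.md;
- new logic is covered by unit tests;
- database migrations match the entity changes.

Report findings ordered by severity. Do not fix the code yourself.
```

Save it as .claude/agents/backend-code-reviewer.md, and Claude Code will pick it up automatically.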

MCP server

https://code.claude.com/docs/en/mcp

MCP (Model Context Protocol) allows you to connect Claude Code with external sources: your task tracker, a monitoring tool, Slack, etc.

While you can achieve the same result with a set of bash scripts and skills, MCP is the standard protocol, understandable by many LLMs.

Each MCP server provides a set of tools that Claude Code may use when it needs to.

Here is a specification of tools in the custom MCP server I implemented for getting the data from YouTrack (the task tracker we use on the project).

{
  tools: [
      {
          name: 'get_issue',
          description: 'Get YouTrack issue details including title, description, and attached files',
          inputSchema: {
              type: 'object',
              properties: {
                  issueId: {
                      type: 'string',
                      description: 'The YouTrack issue ID (e.g., PROJECT-123)',
                  },
              },
              required: ['issueId'],
          },
      },
      {
          name: 'get_issue_comments',
          description: 'Get all comments for a YouTrack issue',
          inputSchema: {
              type: 'object',
              properties: {
                  issueId: {
                      type: 'string',
                      description: 'The YouTrack issue ID (e.g., PROJECT-123)',
                  },
              },
              required: ['issueId'],
          },
      },
      {
          name: 'get_attachment_content',
          description: 'Get the content of a specific attachment from a YouTrack issue',
          inputSchema: {
              type: 'object',
              properties: {
                  issueId: {
                      type: 'string',
                      description: 'The YouTrack issue ID (e.g., PROJECT-123)',
                  },
                  attachmentId: {
                      type: 'string',
                      description: 'The attachment ID to retrieve',
                  },
              },
              required: ['issueId', 'attachmentId'],
          },
      },
      {
          name: 'get_comment_attachment_content',
          description: 'Get the content of a specific attachment from a YouTrack issue comment',
          inputSchema: {
              type: 'object',
              properties: {
                  issueId: {
                      type: 'string',
                      description: 'The YouTrack issue ID (e.g., PROJECT-123)',
                  },
                  commentId: {
                      type: 'string',
                      description: 'The comment ID',
                  },
                  attachmentId: {
                      type: 'string',
                      description: 'The attachment ID to retrieve',
                  },
              },
              required: ['issueId', 'commentId', 'attachmentId'],
          },
      },
  ],
}

So, when Claude Code has a YouTrack issue number, it can fetch the issue description using the get_issue tool and all comments using get_issue_comments; when there are attachments, it reads them using the get_attachment_content or get_comment_attachment_content tools.

This approach means you don't have to copy-paste requirements and discussion threads; you just reference your task tracking or log monitoring tool, and Claude Code pulls the relevant details itself.

There are already many published official MCP servers. If an official MCP server is not available, I prefer to write my own (with Claude Code, you don't even need to study the MCP specification to do it).
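To make the flow concrete, here is a sketch of how such a server might map those tool calls to REST endpoints. The endpoint paths are my assumptions based on the public YouTrack REST API; a real MCP server would also handle authentication and wrap responses in the MCP result format:

```typescript
// Hypothetical sketch: dispatching the MCP tool calls above to YouTrack REST paths.
type ToolCall = { name: string; args: Record<string, string> };

function toYouTrackPath(call: ToolCall): string {
  const { issueId, commentId, attachmentId } = call.args;
  switch (call.name) {
    case "get_issue":
      return `/api/issues/${issueId}`;
    case "get_issue_comments":
      return `/api/issues/${issueId}/comments`;
    case "get_attachment_content":
      return `/api/issues/${issueId}/attachments/${attachmentId}`;
    case "get_comment_attachment_content":
      return `/api/issues/${issueId}/comments/${commentId}/attachments/${attachmentId}`;
    default:
      throw new Error(`Unknown tool: ${call.name}`);
  }
}

console.log(toYouTrackPath({ name: "get_issue", args: { issueId: "DEV-1234" } }));
// → /api/issues/DEV-1234
```

The server would fetch the resulting path with the YouTrack base URL and a permanent token, then return the body as the tool result.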

Skills

https://code.claude.com/docs/en/skills

Skills are a relatively new tool, introduced in Claude Code in mid-October 2025.
They're a different dimension of presenting project knowledge compared with subagents and CLAUDE.md files. According to the Claude Code documentation, skills are expected to describe atomic operations/updates you perform on the project, like implementing-unit-test, implementing-controller, creating-database-migration.

Unlike subagents, skills are used in the same conversation from which they are called (so skills use the same context as the main process). Unlike commands, skills are invoked by Claude Code autonomously (though you may also ask for a skill explicitly). Within the skill directory, you can add custom scripts (bash, Python, or any other language) that perform specific tasks, and the skill will know how to use them.

You can decompose the patterns you use and the approaches applied to your project into small pieces and describe each as a separate skill.

For example, take a REST web application. To implement a single API method, Claude Code should understand the following:

  • How controllers are implemented (how routes are set up, how an HTTP request is mapped to a request DTO)
  • How user authentication is done (how to get the current user id and data, etc.)
  • How database transactions are configured for requests
  • How domain services are implemented
  • How domain entities are implemented
  • How data is persisted
  • How events and subscriptions work, if these patterns are used

When this information is not provided via a skill, Claude Code will try to infer the patterns from the existing source code and documentation—and in most scenarios, it does this successfully. Skills offer a more structured way to encode your patterns.

Note: Earlier, skills seemed to be invoked only rarely; in fact, Claude Code did read the SKILL.md content but just didn't indicate that the skill was running. This has improved in recent versions. I'd suggest experimenting with skills on your projects—along with Plugins (https://code.claude.com/docs/en/plugins), they can be a good way to share patterns within your organization.
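As an illustration, a SKILL.md for one such atomic operation might look like this (the name/description frontmatter is the documented skill format; the project-specific steps and commands below are hypothetical, assuming a Symfony/Doctrine backend):

```markdown
---
name: creating-database-migration
description: Use when creating or changing database tables. Describes how migrations are generated and verified in this project.
---

Migrations live in backend/migrations and are generated from entity changes.

1. Update the entity class first.
2. Generate the migration: `./bin/console doctrine:migrations:diff`.
3. Review the generated SQL; never edit already-applied migrations.
4. Apply it locally: `./bin/console doctrine:migrations:migrate`.
```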

Slash commands

https://code.claude.com/docs/en/slash-commands

Claude Code provides a set of standard slash commands (like /clear for clearing the current context, /exit, etc.).

Also, you can implement your custom slash commands, which are just shortcuts for your prompts.

Instead of typing:

Commit all uncommitted backend changes.

See backend/doc/git-commit-format.md for reference.

and other, possibly much longer, prompts, you might just type /commit:backend.
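Technically, a custom slash command is just a markdown file under .claude/commands, where subdirectories create namespaces. So /commit:backend could be defined roughly like this (a hypothetical sketch):

```markdown
<!-- .claude/commands/commit/backend.md -->
Commit all uncommitted backend changes.

See backend/doc/git-commit-format.md for reference.
```

Commands can also accept arguments via the $ARGUMENTS placeholder, as the fast workflow prompt later in this guide shows.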

5. It's all about context

Context management is the single most important skill for effective Claude Code usage.

Why context matters

This section is short, but important. The way you work with context has a high impact on the result.

The following article clearly states the problem https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents.

You should always look for ways to provide as little information as is required for the LLM agent, to maximize the chances that it completes the task successfully.

An important principle that is stated in the article above:

> Regardless of how you decide to structure your system prompt, you should be striving for the minimal set of information that fully outlines your expected behavior. ... Given the rapid pace of progress in the field, "do the simplest thing that works" will likely remain our best advice for teams building agents on top of Claude.

That's why generic public prompt collections (like those mentioned in the Building Blocks section) often don't work well—they're too abstract for your specific codebase.

I won't share prescriptive prompt engineering rules—they change as Claude improves. Instead, focus on principles: keep documents to 50-100 lines, start with the bare minimum, test on real tasks, iterate based on results. For syntax, best practices and detailed guidance, see the official documentation at https://code.claude.com/docs. Over time, you'll build intuition for what works.

Claude Code has a 200K token context limit. When reached, it automatically compacts by summarizing—often losing important details. I've experienced forgotten tasks and degraded performance after compaction. The practical solution: don't reach the limit. Clear context with /clear and start fresh when tasks are complete.

6. Workflows implementation

After understanding the challenges and building blocks, this section shows how I combine them into daily workflows. I'll describe my main workflows and provide specific examples of how I work with tasks.

By workflow, I mean a repeatable process for completing tasks. It includes:

  • My actions: reviewing plans, making decisions, providing approvals
  • Claude Code commands: initiating tasks, triggering reviews, committing
  • Other tools: task tracker integration, code review subagents

Each workflow follows these steps: gather context → clarify requirements → prepare implementation plan → developer reviews the plan → implement planned updates → Claude Code and developer review the implementation → commit. Key gates (plan review, code review) ensure I maintain control over the output.

6.1 Implementation constraints

This section describes the constraints I follow when working with Claude Code. These guide my decisions about LLM autonomy.

Having full control over the design decisions

When building a quick prototype, you might not be worried about the quality of the generated source code, and you might give the LLM the opportunity to design and implement solutions on its own, but for a large project, this approach will fail—Claude needs human guidance for design decisions.

So, for medium and large tasks, I prepare implementation plans myself. For small tasks, I ask Claude Code to prepare the implementation plan first, and carefully review it before accepting.

Implementing each task within the Claude Code context window

As covered earlier, context compaction degrades performance. I split tasks to fit within a single context window. This also aligns with CI practices—smaller tasks mean smaller, more focused commits.

6.2 Repository and project structure

6.2.1 Modular structure

You need to help Claude Code understand the structure of your project so it can efficiently navigate it and find the needed functionality more quickly.

I recommend using a feature-based structure for your project instead of a flat or component-type-based structure.

Example of feature-based structure:

src/
|_projects/
  |_projects/
  |_tasks/
    |_actions/
    |_list/
    |_view/
    |_comments/
      |_actions/
      |_view/
|_notifications/
  |_ ...

Example of component-type-based structure:

src/
|_hooks/
|_pages/
|_services/
|_types/
|_...

A feature-based structure is easier for both humans and programs to understand. And this structure follows the high cohesion principle. A modular structure simplifies the gathering of relevant context for an LLM agent — by reading the project documentation and files from a specific directory, it can get the information about the full feature implementation.

6.2.2 CLAUDE.md files and project documentation

My root level CLAUDE.md has the following structure:

## Technical Stack overview

[Just list of used frameworks, main libraries, database, etc]

## Architecture overview

### Feature module structure

[Short description of typical feature-module structure]

### Key Feature modules

[List of key root-level modules with 1-line description]

### Key Patterns

[Statement of key patterns used in the source code, with references to documents that provide more detail on how each pattern is applied in the project (uses progressive disclosure)]

### Guides

[The set of links to documentation files in the format:
* `See ./filepath.md for ...`
referencing docs on testing strategy, Git commit format, etc. Claude Code will read these docs when needed for the current task.]

### HARD RULES

[The list of unstructured rules that I add to tweak Claude Code behavior (when I notice it consistently failing at some tasks). You can also type `#` to quickly add a rule to this section from your current conversation.
When this list grows, you might revise and structure it: create separate documents, etc.]

I store all project documents in the same repository in the doc/ directory.

A modular structure also lets you use nested CLAUDE.md files—each level adds relevant context automatically. You might create a CLAUDE.md file at each level; when Claude Code accesses src/projects/tasks/comments/actions/UpdateButton.tsx, it will add these files to the context:

  • CLAUDE.md
  • src/projects/CLAUDE.md
  • src/projects/tasks/CLAUDE.md
  • src/projects/tasks/comments/actions/CLAUDE.md

All these files complement each other, providing highly relevant data to the context.

Just a note: I do not recommend creating a CLAUDE.md for every single module in your project. Start without module-level CLAUDE.md files and add one only when you see issues in how some implementation is understood. Also, if Claude Code needs to access a directory frequently, adding a CLAUDE.md with a condensed description of the module can make it work faster. But assume by default that Claude Code should understand the implementation based on your project-level documentation (remember the rule: use the simplest solution that works).

6.2.3 Monorepo setup

Beyond organizing individual modules, the overall repository structure also matters. I found it useful to work with all project components in a monorepo setup. This way you can easily reference backend source code when working on frontend, etc.

Technically, each component lives in its own repository; I've just created a separate wrapper repository that contains all the components. The final structure looks like the following:

monorepo
|_.claude
|_docs
|_CLAUDE.md
|_backend
  |_docs
  |_CLAUDE.md
  |_...
|_frontend
  |_docs
  |_CLAUDE.md
  |_...
|_mobile
  |_docs
  |_CLAUDE.md
  |_...
|_infrastructure
  |_docs
  |_CLAUDE.md
  |_...

So, I work with all components in a single IDE window. All Claude Code agents/commands/skills are located in the .claude directory at the root.

monorepo/docs and monorepo/CLAUDE.md contain documents that are applicable to the whole project. And each component also has a set of documentation specific to it.
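If you set up a similar wrapper, one simple option (a sketch; Git submodules would also work if you want the wrapper to pin component versions) is to clone each component inside the wrapper and exclude the component directories from the wrapper's Git tracking:

```gitignore
# Wrapper repo tracks only shared docs, CLAUDE.md, and .claude config;
# each component directory is a separate clone with its own Git history.
/backend/
/frontend/
/mobile/
/infrastructure/
```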

7. Workflows

So, here is the main part of this guide, describing the main workflows I use for feature development.

By workflow, I mean the process in which I interact with Claude Code when working on specific tasks. This section focuses on feature development workflows, including code review. Other workflows — such as documentation generation and requirements gathering — are outside the scope of this guide and may be covered in future articles.

I've created a set of Claude Code commands that initiate these workflows and that are used during the implementation.

Each workflow I build has the following high-level steps:

  • Gathering relevant context
  • Clarifying uncertainties
  • Building an implementation plan
  • Reviewing the implementation plan
  • Implementing the task based on the verified plan
  • Reviewing the implementation
  • Committing the result

Let me explain with examples.

Choosing between workflows

There's no well-defined criterion for choosing between fast and full workflows. It comes down to intuition: if I'm confident that Claude Code will deliver a good result with the fast workflow, I use it. Otherwise, I use the full workflow. This intuition develops over time as you apply these workflows to different tasks and learn what works for your project.

7.1 Fast workflow

Fast workflow is initiated just with one command:

/workflows:fast <issueId>

This workflow is used for clear, straightforward tasks:

  • Tasks that can be implemented just by following examples in the existing codebase and don't require many changes or additions
  • Simple bug fixes
  • etc.

I can also provide some additional info to the command, like:

  • /workflows:fast DEV-1234 See component X implementation and implement in the same way
  • /workflows:fast DEV-1234, likely the defect occurs due to ...

The prompt for the command is pretty simple. With some project-specific details omitted, it looks like:

1) Find YouTrack issue id from $ARGUMENTS
2) Read the issue description, all comments, and all attachments
3) Carefully review the existing source code base to understand the problem
4) In case you have questions, try to find the answers in the issue description, in the source code, and in the project documentation
5) Use `AskUserTool` (a built-in Claude Code tool for asking clarification questions) if you have questions. If you have no questions, just skip this step.
6) Prepare the implementation plan and present it to the user
7) In case the user has notes, update the plan and present it again
8) Start implementation after explicit approval only
9) Use code review subagents before providing the result to the review
10) When everything is ready, ask the user to review the update

So, this prompt actually builds the following process:

(Diagram: Fast Workflow)

Claude Code will read the specified task tracker issue, and, along with project documentation, will use it to understand and solve the problem.

In this workflow, I cover my constraint of having full control over the implementation by reviewing the implementation plan prepared by Claude Code. The plan is currently printed directly to the terminal, which makes it efficient to review for minor changes.

When the implementation is complete, I call /approved command (a custom slash command I created), which triggers Claude Code to run code review subagents and prepare the commit.

This type of workflow is a great candidate for future automation, for example:

  • The issue is created in YouTrack
  • Claude Code is tagged
  • Claude Code prepares the implementation plan for the update and updates the issue status
  • The developer reviews it, and after all the notes are fixed, the developer accepts it
  • Claude Code implements the update and creates a merge request
  • A temporary environment is launched where this update can be verified before merging into the main branch

7.2 Full workflow

And this is the main workflow that I'm using on the project. I'll describe the main parts of it.

7.2.1 Feature directory

I create a feature directory for each task I'm working on. All the feature directories are located in the /features directory of the monorepo. The name of the feature directory is the identifier of the issue in the task tracker.

monorepo
|_features
  |_DEV-1456
    |_implementation-plan.md
    |_to-do.md
  |_...

Each feature directory basically has the following documents.

  • implementation-plan.md
  • to-do.md

It can also have additional files: documents, images, UML diagrams; generally, anything that Claude Code might use for task implementation.

After the task is completed, the feature directory stays in git history—Claude Code can reference it for future updates. Some artifacts may also be copied to the source code module itself as documentation.

7.2.2 Implementation plan

The implementation plan document describes the main implementation decisions. It contains only as much detail as is required for successful task implementation.

For bigger tasks, I prepare UML diagrams, reference them in the implementation plan, and add clarifications as needed. In most scenarios, I add clarifications directly to the UML diagram (using notes) and skip creating the implementation plan.

For smaller tasks, I can skip the implementation plan as well and write some details just in the to-do document.

7.2.3 To-do document

This is the main feature directory document containing the implementation plan decomposed into subtasks. Each subtask (D-1, D-2, etc.) contains a checklist of steps for Claude Code to implement.

Each subtask has its own identifier, which is used further for quick reference.

Example of a simple to-do document:

## DEV-1234. Implement debug mode

### D-1. DebugModeConfiguration entity & service

- [x] create entity
- [x] implement unit tests
- [x] create database migration and apply it
- [x] repository for the entity
- [x] Create DebugModeConfigurationService service
- [x] Create unit tests for service
- [x] Commit without review

...

### D-4. Activation strategy

- [x] implement activation strategy class
- [x] update monolog config to use it
- [x] create unit tests
- [x] integration test: verify that configuration actually applied to the container
- [x] Wait for review
- [x] Commit

...

Some notes:

  • This to-do document is created as an addendum to the implementation plan or UML diagram, so it doesn't have detailed information;
  • For the unit tests, I'm just briefly describing the scenarios that should be tested
  • Subtasks may have nested structure - usually, when a subtask involves the update of several application components (backend, frontend, mobile app, etc); in that case, they have identifiers like D-2.1, D-2.2.
  • Depending on the task size and on how typical it is, I can explicitly specify that a subtask may be committed right after it's implemented. Review is skipped for straightforward tasks that have previously been implemented several times without issues. (Potential issues can still be caught during code review in the GitLab UI.)

How is subtask size chosen?

As mentioned previously, my target is to implement each subtask within a single context window, so I'm guided by that constraint: decompose the task into pieces that Claude Code can implement within a single context window.

But basically, I split the task into subtasks the same way I previously did when applying continuous integration practices. One subtask, one small meaningful commit.

7.2.4 Implementation solution & to-do documents review commands

I built two commands that are used for the validation of my inputs before the actual implementation is run:

  • implementation solution review
  • to-do checklist review

These commands are run when the implementation plan/to-do document/UML diagram/other inputs are ready.

Implementation solution review command

/workflows:full:review-solution DEV-1234

The goal of this command is to verify that the designed implementation matches the patterns used in the project, naming standards, design principles, etc. It also verifies that the implementation covers all the requirements.
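The command file itself isn't reproduced here; a minimal sketch of what such a command definition could contain (the steps below are illustrative, not the exact file):

~~~markdown
### 1. Read the inputs

Get the YouTrack issue number from $ARGUMENTS. Read the issue, the
implementation-plan.md, and any diagrams in the feature directory.

### 2. Compare against project standards

Check the designed classes against CLAUDE.md, the doc/ guidelines, and
similar existing modules: naming, layering, design principles.

### 3. Verify requirement coverage

Map every requirement from the issue to an element of the plan.
Report anything that is not covered.

### 4. Report

Print a verification summary. Do NOT modify any files.
~~~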

To-do document review

/workflows:full:review-todo DEV-1234

This command reads the original requirements from the task tracker and checks that the implementation plan and to-do document cover them completely. It also verifies that the implementation order is optimal. This catches missing items or misunderstood requirements early, before any code is written.

This is an example of successful review output:

Verification Results for DEV-1234

✅ Verification Passed

YouTrack Requirements Covered:
- ✓ Deep Link Handling - covered in T-4.3 Deep Linking
- ✓ First Invitation (account creation) - covered in I-4 Team Invitation First-time
  scenario
- ✓ Invalid Token error handling - covered in I-4.1 (ResourceNotFoundException)
- ✓ Token Reuse error - covered in I-4.1 (InvitationCodeAlreadyAcceptedException)
- ✓ Token Expiration - covered in I-4.1 (InvitationCodeExpiresException)

Critical Components Present:
- ✓ Database migrations - I-2 (TeamInvitationCode::project), I-3.1 (SessionToken
  entity)
- ✓ Backend entities & services - AuthenticationService,
  TeamInvitationAcceptanceService
- ✓ Symfony security integration - I-3.2 with authenticator configuration
- ✓ API controllers - AuthenticationController, TeamInvitationAcceptanceController
- ✓ Client App API service - T-4.2 (teamInvitationService)
- ✓ UI screens - InvitationScreen, LandingScreen
- ✓ Deep linking - T-4.3 with preference storage and banner

Task Order:
- ✓ Logical order: package rename → entity/migration → authentication → team
  invitation → frontend
- ✓ Backend API completed before frontend implementation
- ✓ Migrations before code using new schema

Scope:
- ✓ To-do correctly scopes to Use Case 1 only (first-time/account creation)
- ✓ Use Case 2 deferred to DEV-1235 as specified

The to-do list is structurally complete. Claude Code can proceed with
implementation using existing patterns and skills.

Here is an example of output where the command highlights some issues:

✅ What's Good

Logout (L-1) - Complete:
- Backend: AuthenticationService.deleteSession, UseCases, Controller (DELETE /clientapp/logout) ✓
- Frontend: ConfirmDeletePopup, Settings menu, API service ✓
- E2E tests ✓

Login (L-2) - Structure Present:
- OneTimeCode entity and repository ✓
- OneTimeCodeService (createForEmail, redeemToken) ✓
- OneTimeCodeUseCases ✓
- OneTimeCodeController ✓
- Integration tests ✓
- Frontend login screen with forms ✓

❌ Critical Missing Items

1. Missing Event System Components for Email Sending

1. The diagram shows email sending requires:
   - OneTimeCodeCreated event class
   - OneTimeCodeEventListener (subscribes to event, triggers sender)
   - OneTimeCodeSender (infrastructure - actually sends the email)

The to-do mentions "publishes event" in tests but doesn't explicitly include creating the event listener and sender. Without these, no email will be sent.

Location: After OneTimeCodeService in L-2.1

Suggested additions:
- OneTimeCodeCreated event
- OneTimeCodeEventListener.onCreated()
- OneTimeCodeSender.sendOneTimeCode() (infrastructure)
2. Two Separate API Endpoints Needed

2. Diagram shows OneTimeCodeController has TWO endpoints:
   - POST /clientapp/authentication/onetimecode (request code)
   - POST /clientapp/authentication/withonetimcode (redeem code)

The to-do just mentions "OneTimeCodeController" without distinguishing. This might be fine if Claude Code infers both methods, but it's worth noting.

⚠️ Minor Observations

- Request DTOs (RequestOneTimeCodeRequest, RedeemCodeRequest) not mentioned - Claude Code can infer these
- expiresAt not in entity diagram - appears to be calculated from createdAt + 1 hour

  ---
Recommendation: The event system components (listener + sender) are critical for the "email with login code is sent" acceptance criterion. Do you want me to add these explicitly to the to-do.md?

7.2.5 Task implementation command

/workflows:full:implement DEV-1234 D-1.1

Where DEV-1234 is the identifier of the issue in the task tracker, and D-1.1 is the identifier of the subtask in the to-do document.

The command has the following content:

### 1. Read Youtrack issue

Get Youtrack issue number from $ARGUMENTS, the format is DEV-xxxx. Read Youtrack issue description, all issue comments and attachments.

### 2. Read feature documentation

1. Define the subtask identifier from $ARGUMENTS
2. Read implementation-plan.md
3. Read to-do.md
4. Read all documents starting with subtask identifier: pattern `<subtaskId>-*.md`
5. If needed, read implementation overview of previously implemented subtasks, the format is `<subtaskId>-implementation-overview.md`

### 3. Study related source code

Deeply study existing source code to understand the implementation.

### 4. Ask clarification questions

Before asking questions, try to find the answers in the project's documentation, in the requirements, and in the existing source code.

If you cannot find the answer, use `AskUserTool` to ask your questions.

### 5. Prepare the implementation plan

Based on the project documentation, task documentation, source code you studied, prepare the implementation plan and save it to `<subtaskId>-implementation-plan.md` file (separate from your task-level implementation-plan.md).

In the implementation plan:
* save all the questions you had and user's answers
* describe all classes that will be created
* all methods that will be updated
* main technical decisions
* do not duplicate the documentation prepared in CLAUDE.md and in doc/ folders, reference these documents instead
* mention skills you are going to use to implement specific part of your plan (write `Using skill-name, I'll implement...`)

When you create the implementation plan document, ask the user to review it and STOP. Do not make any changes before the user's approval.
When the user has notes, read the implementation document and read all notes starting with `COMMENT: ` string, update the implementation plan and ask the user to review the updated plan. STOP, do not make any changes before explicit approval.

### 6. Implement the changes

When the plan is explicitly approved by the user, implement all the changes. Only if the subtask's to-do list states `Commit without review` are you allowed to create a git commit. In all other scenarios (nothing specified explicitly, or `wait for review` specified), stop and ask the user to review your uncommitted changes.


### 7. Commit your changes

Once you have received approval, commit your changes. Use the component-specific skill:
* `committing-backend-changes` - for backend
* `committing-frontend-changes` - for frontend
....


### 8. Prepare implementation overview

When the changes are committed, create implementation overview. Describe the functionality implemented with the references to created / modified files.
Do not repeat the implementation detail - just reference the files with short description of what was done.
Reference related project documentation that was used during the implementation.
File name: `<subtaskId>-implementation-overview.md`

### 9. Update to-do document and commit changes

Update to-do document - check all completed to-do items. Commit all the changes in the monorepo you did.

This prompt implements the following process:

Full Workflow

So, Claude Code gathers the context using the following info:

  • task tracker issue description, comments, and attachments
  • task documentation
  • implementation details of the previous subtasks

To control the implementation, I review the implementation plan prepared by Claude Code. It's saved to a separate markdown file (D-1.1-implementation-plan.md in this example), so it's easier to review and edit than doing so directly in the terminal.

If something is not correct, I add comments directly to this markdown file and run the /workflows:full:answered command.
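For example, a reviewed plan excerpt with my notes might look like this (the class and endpoint names here are illustrative):

~~~markdown
## Repository

I'll create PreferencesRepositoryImpl in src/Preferences/Infrastructure/...

COMMENT: reuse the existing base repository class here, as in other modules

## API

POST /preferences will accept the following payload...

COMMENT: the endpoint must also invalidate the cached /me response
~~~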

Once all the details are approved, I run Claude Code in "accept all changes" mode — accepting all tool calls without manual confirmation for each one.

The user can execute Claude Code in the following modes (they are switched by pressing Shift+Tab in the Terminal):

normal - Claude Code asks permission before making any change to the files

auto-accept - Claude Code won't ask permission for most of the changes

plan - Claude Code doesn't make any changes and just prints the implementation plan to the terminal; it can also be accessed with the standard /plan command. To make any changes, the user needs to switch to normal or auto-accept mode (Claude Code offers this when the plan is complete).

Claude Code can also be run with the --dangerously-skip-permissions flag (claude --dangerously-skip-permissions). In that mode, it makes all the changes without asking at all.

By "accept all changes" mode above, I mean auto-accept mode. Usually I run Claude Code only in this mode and rarely use plan mode: my prompts already instruct Claude Code to ask its questions and not start implementation until an explicit command is received. But this is my personal preference.

What's important here is that the development of each subtask starts with an empty context. Claude Code can recreate all the context for every next subtask based on the documents in the feature directory. That way, each subtask can be accomplished within a single context window. (Again, this is not a hard rule; it's just my own constraint.)

The implementation overview serves as a bridge between subtasks. Since each subtask starts with a fresh context (/clear), Claude Code reads the implementation overviews from previous subtasks to understand what was already built. This allows it to continue working without needing the full conversation history from earlier sessions.
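As an illustration, such an overview file might look like this (a simplified, hypothetical example based on the logout subtask above):

~~~markdown
# L-1.1 Implementation Overview

Implemented the logout backend API.

- AuthenticationService::deleteSession - removes the session token
  (backend/src/Auth/Domain/AuthenticationService.php)
- DELETE /clientapp/logout - controller + use case
  (backend/src/Auth/Api/AuthenticationController.php)
- Integration test: AuthenticationApiTest::testLogout

Related documentation: backend/docs/use-cases.md, backend/docs/authorization.md
~~~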

Here is a demonstration of how Claude Code gathers the context in this workflow:


/workflows:full:implement is running… DEV-1234 L-1.2

⏺ Read(features/DEV-1234/to-do.md)
⎿  Read 103 lines

⏺ Search(pattern: "features/DEV-1234/*.md")
⎿  Found 6 files (ctrl+o to expand)

⏺ Let me read all the related documentation and the CLAUDE.md files for backend and frontend.

⏺ Read(features/DEV-1234/L-1.1-implementation-plan.md)
⎿  Read 115 lines

⏺ Read(features/DEV-1234/L-1.1-implementation-overview.md)
⎿  Read 78 lines

⏺ Read(features/DEV-1234/L-0-implementation-plan.md)
⎿  Read 111 lines

⏺ Read(frontend/CLAUDE.md)
⎿  Read 51 lines

⏺ Read(backend/CLAUDE.md)
⎿  Read 168 lines

⏺ youtrack - get_issue (MCP)(issueId: "DEV-1234")
⎿  {
"customFields": [
{
… +177 lines (ctrl+o to expand)

⏺ youtrack - get_issue (MCP)(issueId: "DEV-1233")
⎿  {
"customFields": [
{
… +260 lines (ctrl+o to expand)

⏺ Let me read DEV-1235 (Settings Screen) for more context on the menu requirements, and explore the existing frontend code structure.

⏺ youtrack - get_issue (MCP)(issueId: "DEV-1235")
⎿  {
....

7.2.6 Examples

So, basically, all my workflows are built on top of these /workflows:fast, /workflows:full:* commands. The inputs to the workflows might be different based on the type and size of the task.

In general, I just choose the form of task presentation that is most convenient for me to think about and design the implementation with. This is driven by my habits rather than by any Claude Code-specific constraints I found (so you can experiment with the inputs you usually use for your tasks).

Here are several examples:

Small-sized and medium-sized tasks that are pretty typical for the project

An implementation plan in that scenario is usually not prepared, and some implementation notes are added just to the to-do.md file.

### P-1. Preferences configuration

#### P-1.1 Frontend. Create feature toggle

- [x] Create ToggledFeature::PREFERENCES_PAGE (for isDefaultTeam)
- [x] Commit

#### P-1.2 Backend. Create a preferences API method

- [x] Create enum DisplayTasksFilter.php with values: TODAY, PAST_DUE, NO_DUE_DATE, ALL
- [x] Create src\Preferences\Preferences\Domain\Preferences.php (+ repository)

~~~
- $showCompletedTasks: true;
- $displayListOfTasksInStatus: "DUE_TODAY"
~~~

- [x] Create migration and execute
- [x] Create API POST /preferences - Api, Application (for isDailyMailEnabled use DailyMailService):
  - create PreferencesService
  - use DailyMailService for isDailyMailEnabled
  - create PreferencesPresenter
  - create PreferencesUseCase

~~~json
{
  "isDailyMailEnabled": true,
  "showCompletedTasks": true,
  "displayListOfTasksInStatus": DisplayTasksFilter,
}
~~~

- [x] Create PreferencesApiTest::testSavePreferences - 1 test
- [x] Update /me

~~~json
{
  ...,
  "_preferences": {
    "showCompletedTasks": true, // default false
    "displayListOfTasksInStatus": "DUE_TODAY" // default "DUE_TODAY"
  }
}
~~~

- [x] Update integration test for GET /me method to test _preferences (1 test for default values and 1 test for customized values)
- [x] Wait for review
- [x] Commit

#### P-1.3 Frontend. Create preferences page
...

#### P-1.4 Mobile App. Create preferences screen
...
Medium-to-large-sized tasks

I prepare the implementation plan myself; for some parts of it, I may have a conversation with Claude Code to brainstorm different ideas and implementation options. Claude Code then prepares a summary that I paste into the implementation plan.

The structure usually is the following:

## DEV-1234. Task Name

Package: src/Contacts/BulkImport

### API method for downloading CSV template

GET /contacts/bulk/template.csv will return CSV file template with all possible columns. Generate the document on the fly just in the controller.

### API method for CSV file uploading

POST /contacts/bulk/import

Example of request:
~~~json
{
  ...
}
~~~

Classes:

**Application/ContactsCsvParser**

Responsibilities:
* parses input CSV into PHP array of array{ firstName: string, lastName: string, ... }
* validates max rows requirement (with validation exception in case of issue)
* validates that firstName and lastName are present and have values
* validates extra columns

Unit tests:
* valid CSV parsing
* missing first name column - validation exception
* CSV file with extra column - validation exception
* ...

**Application/BulkContactsImporter**

Responsibilities:
....

Notes:

  • I'm currently preparing the list of unit test scenarios on my own. Previously, it was prepared by Claude Code, and sometimes the tests were too detailed. It's definitely possible to tune Claude Code to write a good set of unit and integration tests, but I've decided that providing exact scenarios for unit and integration tests is also a good form of documentation for Claude Code.

  • The issue description in the task tracker already has detailed information regarding requirements about required columns in the CSV file, so the implementation plan does not duplicate these requirements, but references them. Claude Code understands such references well, though referencing requirements by their ID should work even better for complex requirements.

Large-sized tasks

For complex tasks involving significant architectural changes, I prepare UML class diagrams for backend implementation. The implementation plan serves as an appendix to the UML documents, describing solutions that aren't obvious from the diagrams alone. Usually, UML is enough, and the implementation plan is skipped - I can add all clarifications needed with notes to a class diagram.

The UML file is provided in two forms: as an image (PNG) and in XML format. Claude Code can analyze both — the image for visual understanding and the XML for extracting exact method signatures and argument names.

I use Visual Paradigm for UML—its CLI commands allow automated export to PNG and XML, which integrates well with Claude Code. In the implementation plan, I reference diagrams using Visual Paradigm's identifier. I've created a skill (diagram-exporting-diagram) that runs Visual Paradigm CLI commands to generate PNG and XML files from these references.

I also use Claude Code to review UML diagrams during design — checking for consistency with existing patterns and completeness of the design.

This UML-based approach is the most effective workflow I've found for complex tasks; it deserves a dedicated article, which I plan to write in the future. The to-do document in this case becomes quite simple, just referencing the names of the classes presented on the diagram and listing test scenarios:

## DEV-1234. Sharing Client Portal Access

Diagram: ClientApp, package Access

### T-1. Backend API

- [ ] ClientAppAccess, repository, migration
- [ ] ClientAppAccessService. Unit Tests:
  - [ ] creates ClientAppAccess when not exist
  - [ ] publishes event
  - [ ] removes ClientAppAccess when contact is not provided
  - [ ] publishes event
- [ ] ClientAppAccessUseCases, use ProjectPermission.READ_CLIENT_PORTAL_CONFIGURATION
- [ ] ClientAppAccessPresenter
- [ ] ClientAppAccessController + integration tests
  - [ ] test that data is created
  - [ ] test that existing records are removed
- [ ] Implement ClientPortalAccessEventListener with 1 unit test and 1 integration test for this logic
- [ ] wait for review
- [ ] commit

...

Claude Code reads the diagram and fills in the implementation details — class properties, method signatures, relationships — based on what it sees in the UML.

7.3 Code review

Code review is a cross-cutting concern that applies to all workflows described above. It's implemented via dedicated subagents (backend-code-reviewer, frontend-code-reviewer, mobile-code-reviewer — see Building Blocks section).

The workflows are set up in a way that code review subagents are called automatically:

  • Before providing results for my review
  • Before committing the result (for "commit without review" subtasks)

Since subagents run in their own context window, the reviewer is not biased by the decisions made during implementation — it evaluates the code independently.

Additionally, code review with Claude Code is integrated into the commit-stage CI pipeline, providing an extra layer of verification before changes are merged.

7.4 Working on subtasks in parallel

To speed up the implementation of a task, I run several subtasks in parallel. The Claude Code documentation recommends using git worktrees for this, and it's probably the cleanest way to do it. On my project, however, this approach is currently not used because of the backend app integration tests design: I cannot run multiple instances of the integration test environment on my local machine.
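For completeness, the worktree flow from the documentation looks roughly like this (the paths and branch names below are illustrative); each worktree is an independent checkout, so a separate Claude Code instance can run in each:

```shell
# Demo in a throwaway repository; in a real project you would run
# `git worktree add` from your existing checkout.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial"

# One worktree and branch per subtask; then run `claude` inside each directory.
git worktree add -b feature/DEV-1234 "$repo-DEV-1234"
git worktree list
```

When the subtask is merged, `git worktree remove` cleans the checkout up again.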

Instead, I run multiple Claude Code instances in separate terminal tabs within my IDE. Each tab is named with the YouTrack issue ID or subtask ID for easy tracking. Usually, 2-3 instances run in parallel during the development phase.

Since my monorepo is a wrapper directory containing separate git repos for each component (backend, frontend, infrastructure), parallel instances working on different components don't conflict — each commits to its own repository.

For coordination:

  • I order subtasks carefully so parallel ones don't have dependencies
  • When a subtask depends on another that's still in progress, I can proceed with the assumption that the API will be implemented as described in the requirements
  • If all subsequent subtasks are blocked by the current one, I work on documentation for the next task instead

While multiple Claude Code instances could technically work in parallel and commit to the same repo (working on different files), I don't practice this to avoid potential conflicts.

7.5 Troubleshooting Common Failures

Here's a short overview of how I handle common failures.

Repeated incorrect code

When Claude Code produces incorrect implementations repeatedly, my response depends on the type of failure.

Immediate fixes:

  • I add // REVIEW: comments directly in the source code to point out specific issues
  • I type correction notes in the terminal for quick adjustments
  • For complete misalignment with my design, I roll back changes, /clear, and revise the task description or project documentation before retrying

Preventing recurrence:

I use # at the start of my input to quickly add a rule to CLAUDE.md (Claude Code appends the message to the nearest CLAUDE.md file), or I add a rule directly — this is my most common fix for recurring issues.
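Such rules are usually one-liners; for illustration, rules accumulated in CLAUDE.md this way might look like (these specific rules are invented as examples):

~~~markdown
- After creating a migration, always apply it to the test database as well
- Use the project's clock abstraction instead of constructing dates directly
- Never edit generated API client files; regenerate them instead
~~~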

For repeated failures in specific operations, I create a dedicated skill. Example: Claude kept forgetting to apply migrations to the test database and verify that all migrations were applied. A backend-implementing-migration skill with step-by-step verification fixed this permanently.
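A condensed sketch of what such a skill might look like (illustrative, not the actual file):

~~~markdown
---
name: backend-implementing-migration
description: Creates a database migration and verifies it is applied to both the dev and test databases.
---

## Instructions

1. Generate the migration class
2. Apply it to the dev database
3. Apply it to the test database
4. Verify that no migrations are pending in either database; if any are,
   STOP and report the problem
~~~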

Context pollution

When Claude starts exploring wrong modules or mixing up context:

Recognition: I monitor what Claude includes in context — it describes its process. When I notice it exploring wrong areas (e.g., modules with similar naming), I /clear it immediately rather than trying to correct the course.

Prevention by design: The workflow structure prevents most context pollution architecturally:

  • Each subtask starts with /clear
  • Sessions end with commit + implementation overview
  • Implementation overviews preserve progress across sessions — when I /clear and start fresh, Claude reads the overview instead of needing full conversation history

Plan gaps discovered during implementation

Sometimes the implementation plan has gaps that only become apparent during coding.

When I spot issues, I add // REVIEW: comments directly in the source code, type notes in the terminal, or update the UML diagram to address the problem. Claude Code reads these comments and changes, makes the necessary fixes, and updates the implementation plan and overview retrospectively.

API hallucinations

Claude may invent non-existent methods or use incorrect API signatures — both for external libraries and my own codebase.

What catches them:

  • Static code analysis (type checking)
  • Unit and integration tests

These run before code review, catching most issues automatically.

When they slip through:

  • I look up the actual API reference manually
  • I provide Claude with links to current documentation
  • I ask Claude to read my custom components and study their interfaces before using them

8. Full Structure Example

Here is just a summary of the Claude Code configuration files built based on the principles and information in the previous sections.

8.1. My Current Setup

This section provides the full structure of my project setup. This structure implements everything described in the previous sections.

The example includes only the configuration that is relevant to the development workflows I described.

monorepo/
|_.claude/
|   |_agents/
|   |   |_backend/
|   |   |   |_code-reviewer.md
|   |   |_frontend/
|   |   |   |_code-reviewer.md
|   |   |_mobile/
|   |       |_code-reviewer.md
|   |_commands/
|   |   |_workflows/
|   |       |_full/
|   |       |   |_implement.md
|   |       |   |_approved.md
|   |       |   |_review.md
|   |       |_fast.md
|   |_skills/
|       |_backend-applying-authorization/
|       |_backend-implementing-console-command/
|       |_backend-implementing-command-integration-test/
|       |_backend-implementing-controller/
|       |_backend-implementing-domain-entity/
|       |_backend-implementing-domain-service/
|       |_backend-implementing-file-type/
|       |_backend-implementing-migration/
|       |_backend-implementing-presenter/
|       |_backend-implementing-repository/
|       |_backend-implementing-request-mapper/
|       |_backend-implementing-use-case/
|       |_creating-feature-directory/
|       |_diagram-exporting-diagram/
|       |_diagram-querying-xml/
|       |_frontend-implementing-breadcrumbs/
|       |_frontend-implementing-api-service/
|       |_frontend-implementing-feature-toggle/
|       |_frontend-implementing-form/
|       |_frontend-implementing-form-input/
|       |_frontend-implementing-platform-layout/
|       |_frontend-implementing-settings-module/
|       |_frontend-using-data-loader/
|       |_implementing-skill/
|_backend/
|   |_docs/
|   |   |_codestyle.md
|   |   |_coding-standards.md
|   |   |_domain-entity.md
|   |   |_domain-services.md
|   |   |_git-commit-format.md
|   |   |_integration-tests.md
|   |   |_unit-tests.md
|   |   |_monitoring.md
|   |   |_presenter.md
|   |   |_repositories.md
|   |   |_testing-strategy.md
|   |   |_use-cases.md
|   |   |_authorization.md
|   |   |_data-mappers.md
|   |   |_module-structure.md
|   |   |_event-bus.md
|   |_CLAUDE.md
|_frontend/
|   |_docs/
|   |   |_...
|   |_CLAUDE.md
|_features/
|   |_DEV-1234/
|   |   |_implementation-plan.md
|   |   |_to-do.md
|   |   |_diagram.png
|   |   |_diagram.xml
|   |_DEV-1235/
|       |_...
|_CLAUDE.md

8.2. Better options for Claude configuration organization

As I mentioned previously, for storing the Claude Code configuration I created a separate "wrapper" project that contains the .claude/ directory and the CLAUDE.md file. You might want to go further, especially at the organization level.

Plugins are a unified packaging mechanism that bundles multiple components together. A single plugin's structure looks like the following:

 plugin-name/
  ├── .claude-plugin/
  │   ├── plugin.json      # Manifest with metadata
  │   └── hooks.json       # Optional hooks config
  ├── .claude/
  │   ├── commands/        # Slash commands
  │   ├── agents/          # Subagents
  │   ├── skills/          # Agent skills
  │   └── .mcp.json        # MCP servers
  └── marketplace.json     # For distribution

You can use plugins within your organization to apply standardized processes across your teams.
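For illustration, the plugin manifest is a small JSON file; a minimal, hypothetical example:

~~~json
{
  "name": "backend-workflows",
  "description": "Standardized backend development workflows and skills",
  "version": "0.1.0"
}
~~~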

For a single project, this also makes sense. Currently, I have a pretty well-structured process of feature development. But I also have a set of occasional agents/skills/commands for different tasks — preparing requirements, generating an overview for the client, etc. Related tools can be structured into plugins.

See the documentation here https://code.claude.com/docs/en/plugins.

9. Getting Started

The previous sections described the workflows and structure I use. This section shows how to build toward that setup — whether you're working solo or introducing it to a team.

9.1 For Individual Developers

Take time to build intuition

Don't expect immediate productivity gains. The first weeks are an investment in learning how Claude Code behaves with your specific codebase, understanding its strengths and limitations, and developing a sense for which tasks it handles well autonomously versus which need close supervision.

At the same time, I believe it can be introduced safely from day one, without decreasing your current performance.

Start with the foundation

Before diving into complex workflows, set up the basic structure described in Section 8:

  • Create your CLAUDE.md files with project-specific context
  • Configure basic code review subagents
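A review subagent can start very small; an illustrative sketch of a file like .claude/agents/backend/code-reviewer.md (the contents here are hypothetical):

~~~markdown
---
name: backend-code-reviewer
description: Reviews uncommitted backend changes against project standards. Use after implementing backend code.
---

Review the uncommitted changes in backend/:

1. Check compliance with the coding standards documented in backend/docs/
2. Verify that new classes follow the module structure
3. Check test coverage for new logic
4. Report findings as a list: blocking issues first, then suggestions
~~~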

Identify patterns and create skills upfront

Before starting real tasks, analyze your codebase for recurring patterns. It's more effective to create skills for your most common patterns upfront:

  • Review your typical feature implementation: what components do you usually create?
  • Identify the common operations: adding entities, creating API endpoints, implementing forms, etc.
  • Create a skill for each pattern with examples from your existing code

Begin with simple tasks

Once your skills are in place, start with straightforward, well-defined tasks:

  • Simple bug fixes with clear reproduction steps
  • Adding a new field to an existing entity
  • Implementing a feature similar to existing ones

Run one task at a time. Watch how Claude Code approaches the problem, what context it gathers, where it succeeds, and where it struggles. In case of failure, try to understand why it happened, tune Claude Code configuration (create a doc, update CLAUDE.md, etc), and start the process from the beginning.

Run tasks in parallel only when you're confident in your current workflows

Start tasks in parallel only if you are confident enough that your workflows are stable.

Iterate continuously

Your setup will never be "done." As you work:

  • Add skills when you notice repeated patterns
  • Update documentation when Claude Code makes incorrect assumptions
  • Refine workflows based on what actually works for your project

The goal is continuous improvement, not perfection from day one.

9.2 For Teams

Choose a Champion

Designate one person to lead the introduction. This champion should:

  • Have time to experiment and iterate on the setup
  • Be able to document patterns and create initial skills

The champion builds the foundation that the rest of the team will use.

Centralize the configuration

Teams benefit significantly from a centralized approach:

  • Shared CLAUDE.md files ensure consistent context across all team members
  • Common skills library prevents duplicate effort
  • Standardized workflows make code reviews predictable
  • Unified code review subagents enforce team standards

Spread across team members

Once the foundation is solid:

  1. Start with developers who are enthusiastic about the tooling
  2. Pair them with the champion for initial onboarding
  3. Gather feedback and refine the setup
  4. Gradually expand to the rest of the team

Don't force adoption.

Spread across the organization

For multi-team organizations:

  • Share successful patterns between teams via plugins
  • Create organization-wide skills for common infrastructure and patterns (auth, logging, etc.)
  • Establish cross-team standards for Claude Code configuration

9.3 Bonus: A Skill for Creating Skills

When building your initial skills library, having a meta-skill that creates other skills can speed up the process significantly. Here's the actual skill I use:

---
name: implementing-skill
description: Creates concise Claude Code skills from source documents, existing source code, code examples, or prompts. Extracts only project-specific patterns. Target 50-100 lines.
---

# Implementing Concise Claude Code Skills

Creates focused skills containing only project-specific patterns, conventions, and gotchas.

## When to Use

- User wants to create skill from documentation, code examples, source code or description
- Codifying project-specific patterns into reusable skill

## Core Philosophy

**Assume Claude is an expert programmer.** Only include what's unique to this project:

✅ **Include:**
- Project-specific naming/structure conventions
- Unique file locations and organization
- Critical gotchas (e.g., `@psalm-suppress TooManyArguments`)
- Project-specific validation commands
- Non-obvious patterns from codebase

❌ **Exclude:**
- Standard language/framework knowledge Claude already knows
- Generic programming patterns
- Verbose explanations of common concepts
- Information fully covered in referenced example files

**Target: 50-100 lines** (150 max)

## Instructions

### 1. Analyze All Available Context

Extract patterns from whatever user provides:

**Source types:**
- Documentation files (e.g., `backend/doc/repository-creation.md`)
- Code examples (e.g., "based on TeamDomainRepository.php")
- User prompt describing workflow
- UML diagrams or technical specs
- Mix of above

**Read and identify:**
- What's project-specific vs generic?
- What patterns repeat across examples?
- What gotchas or special annotations appear?
- What validation/workflow steps are unique?

### 2. Check for Conflicts & Better Solutions

Before creating skill, verify:

**Check existing skills:**
- Glob `.claude/skills/*/SKILL.md` and `*/config.json`
- Name/description overlaps? → Propose extending/merging
- Keyword conflicts? → Suggest different keywords
- Similar trigger scenarios? → Consolidate or differentiate

**Is skill the right solution?**
- Simple one-time pattern? → Add to `CLAUDE.md` instead
- Reactive automation? → Use hook (`.claude/hooks/`)
- User-invoked workflow? → Create slash command (`.claude/commands/`)
- Complex repetitive pattern? → Skill is appropriate

**If conflicts/alternatives found:** Present options to user, get approval before proceeding.

### 3. Extract Project-Specific Patterns Only

From all sources, extract ONLY:
- Unique naming conventions (e.g., `{EntityName}Repository`, `{EntityName}RepositoryImpl`)
- Specific file paths/structure (e.g., `{Module}/Domain/`, `{Module}/Infrastructure/`)
- Critical annotations (e.g., `@psalm-suppress`, `@extends`, specific doc comments)
- Project validation commands (e.g., `docker compose exec php composer check`)
- When to trigger (keywords, scenarios)

### 4. Generate Skill Name & Ask Minimal Questions

**Generate name automatically:**
- Component-specific: `{component}-{action}-{object}`
  - Examples: `backend-implementing-repository`, `frontend-creating-component`
- General: `{action}-{object}`
  - Examples: `creating-feature-directory`, `reviewing-skill`
- Format: lowercase-with-hyphens, gerund form (-ing), max 64 chars

**Ask 1-2 questions max, only if:**
- Skill name is ambiguous (confirm generated name)
- Critical pattern unclear from all sources (rarely needed)

Extract everything else from context automatically.

### 5. Create Concise SKILL.md (50-100 lines)

**Structure:**
~~~markdown
---
name: {skill-name}
description: {What + when, max 164 chars}
---

# {Title}

{One sentence: what it creates/does}

## When to Use

- {Trigger keyword scenario 1}
- {Trigger keyword scenario 2}

## Instructions

### 1. {First Step - usually context gathering}

Extract from context (docs/UML/prompt/code):
- {What to extract}
- Only ask if critical info missing

### 2. {Main Action Step}

**{File/Component Type 1}:** `{project/specific/path}`
- {Project-specific pattern}
- {Critical gotcha if any}

**{File/Component Type 2}:** `{project/specific/path}`
- {Pattern unique to this project}

**Reference:** `{path/to/example/file}`

### 3. {Project-Specific Patterns Section}

**{Pattern Name}** (only if non-obvious):
\~~~{language}
{Minimal code showing unique project pattern}
\~~~

### 4. Validate

\~~~bash
{project-specific validation command}
\~~~

### 5. {Optional: Updating Existing}

{Brief: Read → Update → Validate}
~~~

**Apply ruthless filtering:**
- Each line: "Is this project-specific?" → No? Delete.
- Each example: "Would Claude know this?" → Yes? Just reference file instead.
- Keep critical patterns, skip explanations.

### 6. Create config.json

**File:** `{skill-name}/config.json`
~~~json
{
  "keywords": ["keyword1", "keyword2"],
  "hint": "☝️ {Brief hint when to use this skill}"
}
~~~

**Keywords:** Words that trigger hint in user prompts (case-insensitive, whole word match)
**Hint:** Short message suggesting skill usage (shown via hook when keywords match)

### 7. Create Templates (Optional)

Only if template:
- Contains non-obvious project structure
- Saves significant time vs writing from scratch

Otherwise: reference existing example files.

### 8. Validate & Report

1. Check line count: 50-100 ideal, 150 absolute max
2. Remove any generic content that snuck in
3. Invoke `reviewing-skill`
4. Report created files and next steps

## Naming Conventions

- `backend-{action}-{object}` (e.g., `backend-implementing-repository`)
- `frontend-{action}-{object}` (e.g., `frontend-creating-hook`)
- `infrastructure-{action}-{object}` (e.g., `infrastructure-deploying-lambda`)
- General: `{action}-{object}` (e.g., `creating-feature`, `reviewing-skill`)

Always gerund form (-ing) for action.

Ironically, this skill doesn't follow its own "50-100 lines" recommendation. That target is not a hard rule, but rather a constraint for Claude Code while it implements a skill: it forces the model to keep the information short and provide only the necessary details.
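To make the `config.json` keyword mechanism from step 6 concrete, here is a minimal sketch of the matching logic a `UserPromptSubmit` hook could run. This is illustrative only: the function name and the exact hook wiring are my assumptions, but the `config.json` shape (`keywords` plus `hint`) and the case-insensitive whole-word matching follow the skill above. In practice, the hook would read the prompt from stdin and print matching hints to stdout.

```python
import json
import re
from pathlib import Path


def match_hints(prompt: str, skills_dir: Path) -> list[str]:
    """Return hints from every skill whose keywords appear in the prompt.

    Scans {skills_dir}/*/config.json; a keyword matches on a
    case-insensitive whole-word basis, as the skill specifies.
    """
    hints = []
    for config_path in sorted(skills_dir.glob("*/config.json")):
        config = json.loads(config_path.read_text())
        for keyword in config.get("keywords", []):
            if re.search(rf"\b{re.escape(keyword)}\b", prompt, re.IGNORECASE):
                hints.append(config.get("hint", ""))
                break  # one hint per skill is enough
    return hints
```

Wired into a hook, this turns the passive skills library into an active one: instead of hoping Claude picks the right skill, the hint nudges it at prompt time.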

10. Results

Subjectively, I'd estimate the performance gain at 30-40% compared with the period before introducing Claude Code. That's definitely more than a week saved per month.

I decided to gather statistics from my git history over the last two years.

I understand that git history alone is not a valid metric for measuring performance. At the same time, I think it correlates with productivity when compared across long periods on the same codebase, with similar types of tasks done during those periods.

I've chosen the following periods to compare:

  • May - August 2024
  • August - October 2024
  • October 2024 - January 2025
  • January - March 2025
  • March - June 2025
  • June - August 2025
  • October - December 2025

September 2025 is excluded because it was a period of experimentation with Claude Code, so it's not relevant for comparison.

All statistics are aggregated by week. Weeks with fewer than 10 git commits are excluded from the calculation (to filter out vacations, etc.). Each of the periods above has 9-11 weeks matching this requirement.
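The weekly filter described above can be sketched in a few lines. This is an assumed reconstruction of the methodology, not my actual script: commit dates would come from something like `git log --pretty=%ad --date=format:%Y-%m-%d`, bucketed by ISO week.

```python
from collections import Counter
from datetime import date

MIN_COMMITS_PER_WEEK = 10  # drop vacation weeks, etc.


def active_weeks(commit_dates: list[date]) -> dict[tuple[int, int], int]:
    """Return {(iso_year, iso_week): commit_count} for qualifying weeks only."""
    counts = Counter(
        (d.isocalendar()[0], d.isocalendar()[1]) for d in commit_dates
    )
    return {week: n for week, n in counts.items() if n >= MIN_COMMITS_PER_WEEK}
```

Grouping by ISO week (rather than calendar month) keeps the buckets the same size across periods, which is what makes the per-week averages comparable.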

Generally, all these periods represent ongoing development of the same application. At the same time, some of them include features that may skew the LOC statistics.

Here is a summary comparison table for the statistics, along with some graphs. More detailed data is available as interactive graphs via the links at the end of this section.

| Metric | May-Aug'24 | Aug-Oct'24 | Oct-Jan'25 | Jan-Mar'25 | Mar-Jun'25 | Jun-Aug'25 | Oct-Dec'25 |
|---|---|---|---|---|---|---|---|
| Weeks included | 11 | 9 | 11 | 10 | 11 | 11 | 11 |
| Avg Commits per Week | 30 | 25 | 30 | 25 | 26 | 28 | 50 |
| Avg Prod Deployments per Week | 21 | 19 | 20 | 19 | 22 | 22 | 28 |
| Total NET LOC - Min | 1,419 | 374 | 499 | 959 | 2,288 | 1,369 | 2,801 |
| Total NET LOC - Max | 3,413 | 3,306 | 3,815 | 3,870 | 7,062 | 4,297 | 12,143 |
| Total NET LOC - Avg | 2,590 | 1,681 | 2,045 | 2,481 | 3,975 | 2,506 | 5,947 |
| Source NET LOC - Min | 925 | 377 | 454 | 765 | 2,042 | 1,061 | 988 |
| Source NET LOC - Max | 2,871 | 3,094 | 3,696 | 3,132 | 5,349 | 3,025 | 6,799 |
| Source NET LOC - Avg | 2,125 | 1,480 | 1,701 | 2,016 | 3,267 | 1,822 | 3,473 |
| Test NET LOC - Min | 88 | -232 | 45 | 159 | 129 | 132 | 296 |
| Test NET LOC - Max | 924 | 661 | 886 | 953 | 1,713 | 1,988 | 5,091 |
| Test NET LOC - Avg | 434 | 145 | 344 | 463 | 707 | 505 | 2,043 |

The following metrics are used:

  • Weeks Included - the number of weeks used for the calculation
  • Avg Commits per Week - not a direct measure of performance. Commits with Claude Code became more atomic (previously, I could combine several changes into a single commit to save time). That is also better from a CI/CD principles perspective and makes automated code review easier, since each commit has a clear scope.
  • Avg Prod Deployments per Week - an approximate number of production deployments, counted as the number of git tags created during the period. A git tag marks the version of a production build. Not all tags are actually deployed to production; on the other hand, several components are deployed from a single git repo under one tag.
  • Source NET LOC (Min/Max/Avg) - calculated for source files only as NumberOfAdded minus NumberOfDeleted
  • Test NET LOC (Min/Max/Avg) - the same, calculated for test files only
  • Total NET LOC - source files plus test files combined
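As a sketch of how the NET LOC metrics above can be derived, the following parses `git log --numstat` rows (added, deleted, path, tab-separated) and splits them into source and test buckets. The path patterns are hypothetical examples, not my actual classification rules; adjust them to your repo layout.

```python
def net_loc(numstat_lines: list[str]) -> dict[str, int]:
    """Compute NET LOC (added - deleted) from `git log --numstat` rows.

    Each row looks like: 'added<TAB>deleted<TAB>path'.
    Binary files report '-' for both counts and are skipped.
    """
    totals = {"source": 0, "test": 0}
    for line in numstat_lines:
        added, deleted, path = line.split("\t")
        if added == "-":  # binary file
            continue
        # Example classification rules; replace with your own conventions.
        is_test = "/tests/" in path or path.endswith(".test.ts")
        bucket = "test" if is_test else "source"
        totals[bucket] += int(added) - int(deleted)
    totals["total"] = totals["source"] + totals["test"]
    return totals
```

Run per week over the filtered commit ranges, this yields exactly the Min/Max/Avg rows in the table above.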

And here is a graph comparing the NET LOC statistics of October - December 2025 with the other periods:

Net LOC Statistic

For more detailed metrics, see the interactive charts:

Weekly Statistic | Period Statistic

11. What's next?

Claude Code may be used in numerous scenarios for optimizing development workflows.

Requirements gathering

Requirements often arrive lacking technical detail. Claude Code can help structure this process by asking clarification questions upfront, preparing requirements in a standardized format, and suggesting implementation options.

This reduces back-and-forth communication and speeds up development — requirements arrive ready for implementation planning.

Automate simple updates implementation via CI

This would let me implement and deploy small, straightforward updates directly from the task manager through a CI pipeline — no manual coding required.

The implementation path is clear but requires CI/CD pipeline improvements and dedicated time.

Custom Tools for development workflow

I believe the current performance boost will plateau soon. While the workflow is efficient, jumping between terminal tabs, markdown files, and source code affects focus significantly.

The idea is to build a dashboard that visualizes task input (to-do files) and orchestrates Claude Code instances.

This is still exploratory — I'll explore existing tooling first. If you've built something similar, I'd be interested to hear about your approach.


Author: Dzianis Karviha (https://www.linkedin.com/in/dzianis-karviha)
