Evan-dong

10 Battle-Tested Claude Code Practices

Recently, I came across a repository called "claude-code-best-practice" that shot to the top of GitHub Trending with over 20k stars. The author compiled 84 best practices for Claude Code, covering everything from prompting to CLAUDE.md configuration, skills management, and debugging strategies.

Repository: https://github.com/shanraisshan/claude-code-best-practice

As a long-time Claude Code user, I spent over two hours going through every single practice. Some I was already using, others were brilliantly insightful, and a few made me realize I'd been doing things the hard way. I've selected the 10 most practical tips that I've personally tested and verified.

  1. Keep CLAUDE.md Under 60 Lines

This was my biggest mistake when starting out.

When I first began using Claude Code last year, I crammed nearly 500 lines into CLAUDE.md—project descriptions, coding standards, API documentation, everything. The result? Claude would selectively ignore rules, especially those toward the end of the file.

Later, I learned from Boris Cherny (creator of Claude Code) that frontier LLMs can reliably follow about 150-200 instructions, and Claude Code's system prompt already uses around 50. That doesn't leave much room. According to public information, the HumanLayer team keeps it under 60 lines, with 300 as the hard limit.

My current approach: only include information Claude might overlook—build commands, test commands, branch naming conventions, and project-specific architectural decisions. If Claude can infer it from reading the code, don't put it in CLAUDE.md. If you have too many rules, split them into multiple files under .claude/rules/ and load them on demand. Critical rules can be wrapped in XML-style emphasis tags so they're harder to ignore.
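As a sketch of what a lean CLAUDE.md can look like (the commands, paths, and rules below are illustrative, not from a real project or from the repository):

```markdown
# Project notes for Claude

## Commands
- Build: `pnpm build`
- Unit tests: `pnpm test` (e2e: `pnpm test:e2e`)

## Conventions
- Branch names: `feat/<ticket>-<slug>` or `fix/<ticket>-<slug>`
- All DB access goes through `src/db/repository.ts`; never query directly

<critical>
Never commit directly to main. Always open a PR.
</critical>
```

Everything else—style, architecture, API shapes—Claude can learn by reading the code.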

  2. Use Plan Mode for Complex Tasks

Boris himself has mentioned this multiple times, and it should be common knowledge for Claude Code users by now.

Before tackling complex tasks, press Shift+Tab twice to enter Plan Mode. In this mode, Claude only researches and plans without writing code. Once the plan is confirmed, switch back to Normal Mode for execution.

I used to dive straight into coding, which often resulted in Claude writing a bunch of code in the wrong direction, forcing me to start over. Now I plan first, then execute—much more efficient. Anthropic's official recommended workflow has four steps: Explore → Plan → Implement → Commit. Small tasks can be done directly, but anything moderately complex should go through planning.

  3. Let Claude Interview You First

Give Claude a simple requirement description and let it use the AskUserQuestion tool to interview you, clarifying all the details. After the interview, start a new session for execution.

When I was building an API, Claude asked me about "how to handle concurrent requests" and "what's the timeout strategy"—things I hadn't even considered initially. Its questions often help you discover edge cases you missed.

The key is to start a new session after the interview. The lengthy conversation from the interview process clutters the context and actually degrades subsequent execution quality.
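A prompt along these lines kicks off the interview (the wording and the project described are mine, not from the repository):

```text
I want to build a rate-limited REST API for our internal billing service.
Before writing any code, use the AskUserQuestion tool to interview me until
you have every detail you need: scope, edge cases, error handling, timeouts,
and performance expectations. Do not start implementing.
```

Once the requirements are pinned down, ask Claude to summarize them, copy the summary, and paste it into a fresh session for execution.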

  4. Demand a Rewrite for Mediocre Solutions

This is my favorite tip from Boris's team.

When Claude gives you a solution that works but isn't elegant, don't patch it up. Just say: "knowing everything you know now, scrap this and implement the elegant solution." Claude will redesign the solution based on its complete understanding of the problem. I've tried this several times, and the rewritten version is consistently better than the patched version.

Similarly, you can say "prove to me this works" to have Claude diff the current branch against main to verify the changes are correct.
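The "prove it works" check usually comes down to a three-dot diff of the feature branch against main. A minimal sketch, demonstrated in a throwaway repo so it's self-contained (the file and branch names are illustrative):

```shell
# Set up a disposable repo with a main branch and a feature branch.
dir=$(mktemp -d)
cd "$dir"
git init -q -b main
git config user.email "you@example.com"
git config user.name "you"
echo "hello" > app.txt
git add app.txt && git commit -qm "initial"
git switch -qc feature/fix
echo "hello, fixed" > app.txt
git commit -qam "apply fix"

# Three-dot diff: only the changes made on this branch since it forked
# from main, which is exactly what you want Claude to walk you through.
git diff main...HEAD
```

Asking Claude to narrate this diff line by line is a cheap way to catch changes you didn't intend.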

  5. Paste the Bug and Say "Fix"—Don't Micromanage

This might be the most counterintuitive of all 84 practices.

When you encounter a bug, paste the error message to Claude and say one word: "fix." Don't guide it on how to fix it, don't speculate on the cause, don't prescribe a solution. Claude's debugging ability is stronger than most people imagine—the more you micromanage, the more likely you are to lead it astray. In my experience, letting Claude fix it directly has an 80%+ success rate.

If it doesn't work after two attempts, stop insisting. Use /clear to reset the context and approach it from a different angle. Anthropic officially recommends restarting if corrections exceed two attempts.

I used to over-explain when fixing bugs, adding tons of descriptions. Now I see that was unnecessary.

  6. Say "Use Subagents" in Your Prompt

Simply include "use subagents" in your prompt, and Claude will split the task among multiple subagents for parallel processing.

This is especially useful for code reviews and large-scale refactoring. According to public sources, someone used 9 parallel subagents for code review, each focusing on a different quality dimension. I've used it for cross-file renaming—Claude spawned 3 subagents working in parallel, much faster than single-threaded execution, and the main context wasn't polluted by search results.

Pro tip: When creating subagents, make them feature-specific (like "frontend component agent") rather than generic ("QA agent"). The more specific the function, the more precise the context, and the better the results.
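For a persistent, feature-specific subagent, Claude Code reads definitions from `.claude/agents/`. A minimal sketch—the frontmatter fields follow the documented format, but the name, tool list, and instructions are illustrative:

```markdown
---
name: frontend-component-agent
description: Reviews and refactors React components only. Use for UI work.
tools: Read, Grep, Glob
---

You work exclusively on files under src/components/.
Enforce the design-system tokens; never introduce inline styles.
```

The narrow description is what lets Claude route UI work to this agent instead of a generic one.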

  7. Structure Skills as Folders with a Gotchas Section

Many people (including my former self) create a Skill by writing a single SKILL.md file and calling it done.

The repository emphasizes that Skills should be complete folder structures: a main SKILL.md file plus references/, scripts/, and examples/ subdirectories. This progressive disclosure is key—Claude only reads subdirectory content when needed, rather than cramming everything into context at once.
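A quick way to scaffold that structure (the skill name and contents are illustrative):

```shell
# Create the folder layout: main file plus on-demand subdirectories.
SKILL_DIR=".claude/skills/academic-writing"
mkdir -p "$SKILL_DIR/references" "$SKILL_DIR/scripts" "$SKILL_DIR/examples"

# The main file holds only core rules and an index; details live in
# references/ and are loaded only when Claude needs them.
cat > "$SKILL_DIR/SKILL.md" <<'EOF'
---
name: academic-writing
description: Core rules and an index. Details live in references/.
---
# Academic Writing
- Core rules only; see references/ for the corpus and checklists.
EOF

ls -R "$SKILL_DIR"
```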

After restructuring my academic writing Skill into a folder format, the improvement was significant. Previously, all specifications were crammed into one file, and Claude often missed details. Now the main file only contains core rules and an index, with corpus and checklists in references/.

Another long-term valuable technique: create a Gotchas (pitfalls record) section in each Skill, documenting failure modes every time Claude makes a mistake. Over time, this becomes the highest signal-to-noise content. My academic writing Skill documents over a dozen "AI-sounding" patterns—adding this section significantly improved first-draft quality.
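A Gotchas section can be as simple as an append-only list at the bottom of SKILL.md, one entry per observed failure (the entries below are illustrative, not from my actual skill):

```markdown
## Gotchas
- Never open a paragraph with "Moreover," or "Furthermore,"—it reads as AI-generated.
- Citations use author-year style, never numeric brackets.
- "Utilize" is banned; write "use".
```

Each time Claude repeats a mistake, one new line here prevents the whole class of errors going forward.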

  8. Manually Compact at 50% Context Usage

This is the most critical workflow-level practice.

Claude Code has what's called an "agent dumb zone"—when context usage exceeds 60-70%, performance noticeably degrades, with Claude ignoring instructions and making basic coding errors. The repository recommends manually executing /compact at 50%, rather than waiting for automatic compaction, which is often too late.

I used to run sessions until exhaustion or only compact around 10% remaining. Now I compact at 50% or /clear for a new session—much better results. Use /statusline to monitor usage in real-time. Boris's team's script even uses color coding: green for safe, yellow for caution, red for danger.
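The color-coded gauge boils down to a threshold function. A toy sketch—this is not Boris's actual script, and the thresholds mirror the 50%/70% guidance above rather than any verified implementation (a real Claude Code statusline script would also need to read the session data the CLI passes it):

```shell
# Map context usage percentage to a traffic-light color.
context_color() {
  pct=$1
  if [ "$pct" -lt 50 ]; then
    printf 'green'    # safe: keep working
  elif [ "$pct" -lt 70 ]; then
    printf 'yellow'   # caution: run /compact soon
  else
    printf 'red'      # danger: compact or /clear now
  fi
}

context_color 30; echo
context_color 55; echo
context_color 80; echo
```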

  9. Press Esc Esc to Rollback When Off Track

What do you do when Claude goes off track? Most people's instinct is to correct it within the current session.

The repository recommends pressing Esc twice (or using /rewind) to rollback directly to the previous checkpoint. Trying to correct drift within the same context often makes it worse, because the erroneous reasoning is still in context, and Claude gets led by its own mistaken logic.

My current habit: if it goes off track, Esc Esc to rollback. If it drifts twice on the same issue, /clear and restart.

  10. Don't Use Complex Workflows for Small Tasks

The repository offers a refreshingly clear-headed suggestion: native Claude Code handles small tasks better than any complex workflow.

I used to make this mistake, running through the complete Plan → Execute → Review workflow even for renaming a variable. In reality, you can just say it in one sentence. Those complex workflows (Superpowers, Spec Kit, BMAD-METHOD, etc.) are designed for large tasks involving multiple files and steps. For things that take three to five minutes, native Claude Code is fastest.

Beyond the Official Claude Code: Alternative Access Methods

While the official Claude Code CLI offers an excellent experience, there are other ways to access Claude's capabilities that you might want to explore:

OpenRouter provides unified API access to multiple AI models including Claude, making it easy to integrate Claude into your custom workflows and tools. It's particularly useful for developers who want to build their own AI-powered applications with flexible model switching.
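OpenRouter exposes an OpenAI-compatible chat-completions endpoint, so a request for a Claude model is just a JSON payload. A minimal sketch—the model identifier shown is an assumption, so check OpenRouter's model list for the current Claude IDs:

```shell
# Build the request body for OpenRouter's chat-completions endpoint.
# "anthropic/claude-sonnet-4" is an assumed model id; verify before use.
payload=$(cat <<'EOF'
{
  "model": "anthropic/claude-sonnet-4",
  "messages": [
    {"role": "user", "content": "Summarize this diff for a PR description."}
  ]
}
EOF
)
echo "$payload"

# To actually send it (requires an OpenRouter API key):
# curl -s https://openrouter.ai/api/v1/chat/completions \
#   -H "Authorization: Bearer $OPENROUTER_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$payload"
```

Because the format matches OpenAI's, most existing client libraries work by pointing the base URL at OpenRouter.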

Evolink is another aggregation platform that exposes Claude and other frontier models through a single interface, adding team collaboration features, cost controls, and routing with fallbacks for more consistent access during peak times.

Both platforms offer competitive pricing and features beyond the standard Claude Code experience, giving you more flexibility in how you use Claude across projects.

These 10 practices are what I've filtered from the original 84; the complete list is worth reading through in the repository. The author also compiled a side-by-side comparison of 8 mainstream workflows and a collection of Boris Cherny's interview links, so it's incredibly information-dense.

If you found this useful, please share it so more people can benefit from these insights!