Every week, I run small experiments with context-driven development (CDD), AI tools, and imdone - a tool I built that keeps your backlog right in your source code so you never lose context.
This week, I'm putting Claude Code to the test on a real-world feature: allowing developers to select which JIRA project to add issues to when using imdone-cli.
## Why This Feature Matters
On my team, we constantly juggle multiple JIRA projects. We have our main development project, but we also need to create tickets for the infrastructure team, file bugs in different projects, and coordinate across team boundaries.
Previously, switching between projects meant manually editing configuration files or working around limitations. This new feature eliminates that friction, making cross-team collaboration smoother.
## The Context-Driven Development Approach
The key to working effectively with AI coding assistants is giving them the right context. Here's how I structured the task:
### 1. Define the Goal Clearly

**Goal:** When a user runs `imdone add`, they can choose which project to add the new issue to.

### 2. Specify Constraints

- Prompt for the project key on `imdone add`
- Allow a `--project-key` option for command-line usage
- Use Test-Driven Development (TDD)
- Ensure all existing tests continue to pass

### 3. Provide Relevant Files

I included the key files Claude Code would need to understand:

- `bin.mjs` - where CLI commands start
- `add-issue.js` - the use case implementation
- Existing tests, so it could follow the established testing patterns
This context allowed Claude Code to understand not just what to build, but how to build it within the existing architecture.
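For illustration, here's roughly how a task like this can be laid out as a markdown card in the backlog. This is a hypothetical sketch of the layout, not the exact card from the video - imdone reads plain markdown, so the structure is up to you:

```md
## Allow selecting the JIRA project on `imdone add`

**Goal:** When a user runs `imdone add`, they can choose which
project to add the new issue to.

**Constraints:**
- Prompt for the project key on `imdone add`
- Allow a `--project-key` option for command-line usage
- Use TDD; all existing tests must continue to pass

**Relevant files:** `bin.mjs`, `add-issue.js`, existing tests
```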
## The Development Journey

### 00:23 - Getting Started with Claude Code
I started by opening the task in my backlog (stored as markdown in my source code) and asking Claude Code to complete it. The first challenge? Claude Code couldn't initially see my backlog folder because it was in a separate git repository.
**Lesson learned:** Be explicit with file paths when working across repository boundaries.
### 06:02 - Claude Code Begins Implementation
Once Claude Code had the context, it immediately:
- Analyzed the existing code structure
- Found the test folder
- Started implementing tests using TDD
- Added the `--project-key` CLI option (see the wiring sketch below)
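The video doesn't linger on the wiring, but if `bin.mjs` uses commander (a common choice for Node CLIs, and an assumption on my part), registering the flag would look something like this:

```js
// Hypothetical sketch of the flag wiring in bin.mjs, assuming commander
import { Command } from 'commander'
import { addIssue } from './usecases/add-issue.js' // hypothetical import path

const program = new Command()

program
  .command('add')
  .description('Add a new issue to the backlog')
  .option('--project-key <key>', 'JIRA project key for the new issue')
  .action(async (options) => {
    // commander camelCases the flag, so options.projectKey is only set
    // when --project-key is passed; otherwise the use case can prompt
    await addIssue(options)
  })

program.parse()
```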
Watching it work was fascinating - it understood the patterns in my codebase and followed them consistently.
### 09:05 - First Test Runs and Debugging
The first test run revealed some issues with the test setup. Claude Code needed a few iterations to get the mocking correct for the inquirer prompts. This is where AI-assisted development gets interesting - it's not magic, but it's a productive back-and-forth.
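The final test code isn't shown on screen, but the shape of the fix is familiar. A minimal sketch of stubbing `inquirer.prompt` with sinon, assuming a mocha-style test runner (the project's actual framework may differ):

```js
// Hypothetical sketch: stub inquirer so tests never block on a real prompt
import assert from 'node:assert'
import inquirer from 'inquirer'
import sinon from 'sinon'
import { addIssue } from '../usecases/add-issue.js' // hypothetical import path

describe('imdone add with multiple projects', () => {
  afterEach(() => sinon.restore())

  it('uses the project the user picks from the prompt', async () => {
    // Simulate the user selecting DEMO from the list prompt
    sinon.stub(inquirer, 'prompt').resolves({ projectKey: 'DEMO' })

    const issue = await addIssue({}) // no --project-key flag passed

    assert.equal(issue.projectKey, 'DEMO')
  })
})
```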
### 14:00 - Code Review
While Claude Code worked, I reviewed the implementation:
```js
// The new project key selection flow
const projectKeyOptions = await this.jiraAdapter.projectKeys()
const selectedProjectKey = options.projectKey ||
  (projectKeyOptions.length > 1
    ? await this.promptForProjectKey(projectKeyOptions)
    : projectKeyOptions[0]?.key)
```

Clean, logical, and it handled multiple scenarios:

- Explicit `--project-key` flag
- Interactive prompt when multiple projects exist (sketched below)
- Default to the single project when only one is configured
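The prompt helper itself wasn't on screen long enough to transcribe, but given the flow above, a `promptForProjectKey` built on inquirer's list prompt would look roughly like this. Note that only `key` is visible on the options in the reviewed code - the display `name` is my assumption:

```js
// Hypothetical sketch: promptForProjectKey as a method on the use case
async promptForProjectKey(projectKeyOptions) {
  const { projectKey } = await inquirer.prompt([
    {
      type: 'list',
      name: 'projectKey',
      message: 'Which JIRA project should this issue go to?',
      choices: projectKeyOptions.map((option) => ({
        // `name` on an option is assumed; `key` is what the flow above reads
        name: option.name ? `${option.key} - ${option.name}` : option.key,
        value: option.key
      }))
    }
  ])
  return projectKey
}
```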
### 18:38 - All Tests Passing!
After a few iterations, all 14 tests passed. This is where TDD really shines - we had confidence that the new feature didn't break existing functionality.
### 20:10 - Manual Testing
I built and linked the CLI locally to test the real user experience:
```bash
npm run build
npm link
imdone add
```
The interactive prompt appeared, showing both my SCRUM and DEMO projects. I selected DEMO, chose "Story" as the issue type, and created a test issue.
It worked! The issue was created in the correct project with the proper template applied.
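For scripted use, the new flag should skip the prompt entirely - that's exactly what the `--project-key` constraint was for:

```bash
# Non-interactive: pass the project key directly
imdone add --project-key DEMO
```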
### 23:51 - Discovering an Edge Case

During testing, I noticed that sprint selection wasn't working correctly for the secondary project. The root cause? The `getActiveSprints()` function was joining all configured project keys with a comma instead of filtering by the selected project.
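In sketch form, the bug looked roughly like this (a reconstruction from what I saw on screen, not the verbatim source):

```js
// Before (reconstructed): every configured project key was joined into one
// query, so sprints from all projects came back regardless of the selection
// const sprints = await getActiveSprints(config.projectKeys.join(','))

// After: query only the project the user actually selected
const sprints = await getActiveSprints(selectedProjectKey)
```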
This is a great example of why manual testing matters - even with comprehensive unit tests, edge cases emerge in real usage.
### 27:00 - Real-World Challenges
Just as I was about to have Claude Code fix the edge case, I hit the credit limit. This is a real constraint when using paid AI services - you need to budget for it.
## Key Takeaways

### What Worked Well

**Context-Driven Structure:** Defining goals, constraints, and relevant files upfront gave Claude Code everything it needed to succeed.

**TDD Approach:** Having tests as guardrails meant we could iterate confidently without breaking existing functionality.

**Code Quality:** Claude Code followed existing patterns and wrote clean, readable code that fit naturally into the codebase.
### Challenges and Limitations

**Setup Friction:** Initial path issues and repository boundaries required manual intervention.

**Edge Cases:** The AI implemented the happy path well, but edge cases still required human discovery and iteration.

**Cost Considerations:** Running out of credits mid-task is a real concern with usage-based pricing.
### When to Use AI Assistants
AI coding assistants like Claude Code excel at:
- Implementing well-defined features
- Following existing patterns
- Writing comprehensive tests
- Handling boilerplate and setup code
They struggle with:
- Ambiguous requirements
- Complex architectural decisions
- Edge cases that require domain knowledge
- Understanding cross-repository dependencies
## The Bigger Picture: Context-Driven Development
This experiment reinforces my belief in keeping context close to code. By storing my backlog as markdown files in my repository, I can:
- Quickly provide context to AI assistants
- Never lose track of why decisions were made
- Link tasks directly to the code they affect
- Version control everything - requirements, code, and tests together
This is what imdone enables, and it's why I built it.
## Try It Yourself

Want to experiment with this workflow?

1. **Install imdone-cli:** `npm install -g imdone-cli` (multi-project support is available in v0.27.0 and later)
2. **Check out Claude Code** (requires Anthropic API access)
3. **Structure your tasks** with goals, constraints, and file references
4. **Iterate and learn** - every experiment teaches you something new
## What's Next?
I'll continue with Claude Code to fix the sprint selection edge case. The structure and approach are solid - it's just a matter of adding one more constraint and letting it iterate.
Next week, I'll run another experiment. Maybe I'll compare different AI assistants, or dive deeper into a complex refactoring challenge.
## Join the Conversation
Have you used Claude Code or other AI coding assistants? What's worked well for you? What challenges have you faced?
Drop a comment below with your experiences - I'm especially interested in:
- How you structure tasks for AI assistants
- Patterns you've discovered for effective AI collaboration
- Edge cases or limitations you've encountered
If you found this useful, give it a ❤️ and follow me for weekly experiments at the intersection of AI, development workflows, and better tooling.
## Video Timeline
Jump to specific sections:
- 00:00 - Introduction to weekly CDD experiments
- 00:23 - Starting with Claude Code and defining the task
- 03:02 - Setting up the context and files
- 06:02 - Claude Code begins TDD implementation
- 09:05 - First test runs and debugging
- 14:00 - Code review: how Claude implemented the feature
- 18:38 - All tests passing!
- 19:00 - Building and linking for manual testing
- 20:10 - Live demo: selecting projects with the new feature
- 21:56 - Success! Creating issues in different projects
- 23:51 - Discovering edge case with sprint selection
- 27:00 - Running into credit limits (real challenges)
- 28:00 - Reflection and next steps
*This post is part of my weekly context-driven development experiment series. Check out previous experiments and follow along as I explore better ways to build software with AI assistance.*