Scrum shouldn't be a reporting chore
Here's a pattern I've seen at every company that "does Scrum":
Sprint planning is a calendar ritual. The backlog is a graveyard of tickets nobody reads. Daily standups are people reading yesterday's status off a screen while everyone else zones out. And the PM tool? A bloated browser app that takes 8 seconds to load, existing primarily so someone in management can export a burndown chart to a slide deck.
The tools aren't built for the people doing the work. They're built for the people watching the work get done.
For developers, the workflow is brutal. You're in the zone, deep in your editor, and then you need to update a ticket status. Context-switch to a heavy browser tab, wait for it to load, click through three dropdowns, and by the time you're back in your terminal you've forgotten what you were doing.
I got tired of it. So I built something different.
What I built: Lasimban (羅針盤)
Lasimban means "compass" in Japanese. It's a Scrum-specialized task management tool — not a generic project management Swiss army knife, but a tool that actually understands the Scrum framework.
The structure is opinionated by design. Epics break down into Product Backlog Items (PBIs), PBIs live in sprints, sprints contain tasks. This isn't a flexible "use it however you want" board — it's Scrum Guide constraints baked into the data model. You can't accidentally create a 6-week sprint or have orphan tasks floating in the void.
A few things that matter to me:
Native 4-language support (EN, JA, VI, TL). I work with teams spanning Tokyo and Manila. Language shouldn't be a barrier to using your own project management tool. This isn't bolted-on i18n — it's built in from day one.
Real-time sync via GraphQL subscriptions. When someone in Manila moves a task, the board updates instantly in Tokyo. No refresh button, no "someone else modified this item" conflicts.
DSU (Daily Stand-Up) mode. Tasks untouched for 48 hours glow red — they're probably blocked. Tasks completed in the last 24 hours glow green. Your daily standup becomes a 30-second visual scan instead of a 15-minute status recital.
"Scrum is a Game" — Why I added confetti to a B2B SaaS
This is where people usually raise an eyebrow. Confetti? In a B2B tool? Hear me out.
Scrum, at its core, is a multiplayer game. You have a team, a timebox, a goal, and a set of rules. There are rounds (sprints), a score (velocity), and win conditions (sprint goals met). The problem is that every tool on the market treats it like a spreadsheet exercise. Check the box, move the card, generate the report.
So I leaned into the game metaphor:
- Starting a sprint triggers a sailing departure animation. You're setting off on a voyage.
- Completing a task plays a sparkle effect. Small, quick, satisfying.
- Completing a PBI drops confetti. Because shipping a feature should feel like something.
- Completing a sprint launches a rocket animation.
None of this is flashy for the sake of it. It's about dopamine loops. The same reason every game gives you visual feedback when you accomplish something — it reinforces the behavior. Completing tasks should feel good, not feel like paperwork.
I also added keyboard navigation shortcuts. Press g followed by a key to jump anywhere — g b for the backlog, g s for sprints, g d for the dashboard. Your hands never leave the keyboard. Because if you're building a tool for developers, it should respect how developers actually work.
The hack: MCP integration — never leave your IDE
This is the part I'm most excited about technically.
MCP (Model Context Protocol) is an open standard that lets AI assistants connect to external tools and data sources. Think of it as a universal API layer between LLMs and your applications. If your editor has an AI assistant (Cursor, Claude Code, GitHub Copilot, etc.), MCP lets that assistant talk directly to Lasimban.
The motivation is simple: developers live in the IDE. If I can bring the Scrum board into the editor through AI, there's zero context-switching. You ask your AI assistant "what's left in this sprint?" and it pulls the answer from Lasimban without you ever opening a browser.
Here's how I built it.
Stateless HTTP, not SSE
The MCP spec defines SSE (Server-Sent Events) streaming as a transport option, but I went with stateless HTTP — POST request in, JSON response out, connection closed.
Why? The backend runs on Cloud Run, which scales containers from 0 to 10 based on request volume. SSE requires long-lived connections and session management — a client connects, holds the connection open, and the server pushes events over time. That's fundamentally at odds with Cloud Run's request-based scaling model. You'd pay for idle containers holding open connections and need sticky sessions or external session storage.
Stateless HTTP means every request is independent. Container spins up, handles the request, spins down. Auto-scaling just works. Simple to operate, cost-effective, and fully compliant with the MCP 2025-03-26 spec's Streamable HTTP transport.
The implementation uses mark3labs/mcp-go v0.45.0 mounted on a Gin router. Single endpoint: POST /mcp, speaking JSON-RPC.
Markdown responses for LLM readability
Every tool response is formatted as Markdown text, not raw JSON.
This is deliberate. LLMs understand and summarize Markdown far better than they parse nested JSON objects. When the AI pulls sprint details, it gets a formatted overview with burnup/burndown data laid out in a way that's easy to reason about. A dedicated Presenter layer handles the conversion — the Usecase layer returns domain objects, and the Presenter formats them into Markdown strings.
The result: when you ask "summarize the current sprint," the AI gives you a coherent paragraph, not a JSON dump.
API key authentication
Authentication uses lsb_-prefixed API keys — Base62-encoded, 32 bytes of entropy. The plaintext key is shown exactly once at creation, then stored as a SHA-256 hash. Each user can have up to 3 keys, revocable anytime.
This is the same pattern GitHub and Stripe use for their API keys. Prefixed so you can identify them in logs and secret scanners, hashed so a database breach doesn't compromise access.
Here's the clever part: after API key authentication, the server generates the same user context as a regular browser login. That means zero new business logic was needed for MCP. The entire auth flow reuses existing infrastructure.
11 tools, read-heavy by design
The MCP server exposes 11 tools:
7 read tools:
- `list_projects` — all projects the user has access to
- `list_sprints` — sprints for a project
- `list_product_backlog_items` — PBIs with status filtering
- `list_backlog_statuses` — available status options
- `get_sprint_details` — full sprint data including burndown metrics
- `get_product_backlog_item` — detailed PBI view
- `get_task` — individual task details

4 write tools:
- `update_task_status` — change a task's status
- `create_pbi` — create a new PBI (with priority and Markdown description support)
- `create_task` — create a new task (with Markdown description and self-assign option)
- `update_pbi_status` — update a PBI's backlog status
Read-heavy, write-light. The AI can observe everything and make targeted changes — creating items and updating statuses — but can't restructure or delete existing data.
Every tool calls the existing Usecase layer directly. No new business logic was written for MCP — the tools are thin wrappers that authenticate, call the same functions the web app uses, and format the output as Markdown.
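A thin wrapper in that sense might look like this. Everything here is a hypothetical stand-in (the real tools are registered through mcp-go, and the real usecases are the ones the GraphQL resolvers call), but the three-step shape, authenticate, delegate, present, is the point:

```go
package main

import (
	"errors"
	"fmt"
)

type User struct{ ID string }

// authenticate stands in for the existing API-key auth described earlier.
func authenticate(apiKey string) (User, error) {
	if apiKey == "" {
		return User{}, errors.New("missing API key")
	}
	return User{ID: "u-1"}, nil
}

// updateTaskStatusUsecase stands in for the existing Usecase-layer function;
// in the real app the web client hits the same code path.
func updateTaskStatusUsecase(u User, taskID, status string) error {
	return nil
}

// updateTaskStatusTool is the MCP-facing wrapper: no business logic of its
// own, just auth, delegation, and a Markdown-formatted result for the LLM.
func updateTaskStatusTool(apiKey, taskID, status string) (string, error) {
	user, err := authenticate(apiKey)
	if err != nil {
		return "", err
	}
	if err := updateTaskStatusUsecase(user, taskID, status); err != nil {
		return "", err
	}
	return fmt.Sprintf("Task **%s** moved to `%s`.", taskID, status), nil
}

func main() {
	out, err := updateTaskStatusTool("lsb_example", "LSMB-42", "done")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```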
What this actually looks like in practice
From your editor, you can do things like:
- "Summarize the current sprint status" — the AI pulls sprint details, burndown data, and gives you a status overview
- "Which PBIs have the most remaining tasks?" — the AI identifies where bottlenecks are hiding
- "Mark task LSMB-42 as done" — status update from your IDE, no browser needed
The full workflow becomes: check your task, implement the code, update the status — all without leaving the editor. Your Scrum board becomes ambient information rather than a destination.
Tech stack
Quick overview for the curious:
- Backend: Go (Gin framework, clean architecture)
- Frontend: Next.js
- API: GraphQL for the web client, JSON-RPC for MCP
- Real-time: GraphQL subscriptions over WebSocket
- Infrastructure: Cloud Run (0-to-10 auto-scaling)
- MCP: mark3labs/mcp-go, stateless Streamable HTTP
What's next
The latest update already expanded MCP tools to 11 — you can now create PBIs (with priority levels), create tasks with Markdown descriptions, and update PBI statuses directly from your IDE. The longer-term vision: AI facilitating actual Scrum events. Imagine an AI that auto-summarizes your sprint review based on completed PBIs, or suggests retrospective improvements based on sprint metrics patterns.
The goal isn't to replace the Scrum Master — it's to remove the clerical overhead so teams can focus on the conversations that actually matter.
If you want to try it out, Lasimban has a free tier at lasimban.team.
I'm curious — what's the most annoying thing about your current agile tool? The thing that makes you think "why is this so hard?" every single time. I'd love to hear what pain points other developers are hitting, especially if you're working across time zones or languages.

