Jira is powerful — but painfully slow for keyboard-driven workflows.
As a daily Vim user, I am not satisfied with the experience. After years of trying to embrace it, I finally decided to build my own solution: a keyboard-first Jira client powered by a custom Vim engine, built with Compose Multiplatform.
To be clear: this is not a Jira replacement. You can still use Jira normally. I fetch Jira data through the official API and present it in a nicer, Vim-friendly way.
The Problem
- UI changes very frequently
- UI is clunky and slow
- Too many elements are shoved in your face, even ones you never use
- Planning is hard
- Mouse-heavy workflow
The Goals
Have a keyboard-first navigation system, with the option to also use a mouse. Data is fetched from the official Jira API and displayed in a multiplatform application on your desktop or mobile phone.
Since the data is the same, this lets me switch between this app and Jira whenever I want. Of course, while building this app, I still use Jira for features the app doesn't support yet.
The app UI is modern, configurable, and supports small screens (phones) and large screens (tablets + laptops). It is powered by Compose Multiplatform. More on that later.
The architecture
Before starting with the UI, I need a Vim-like engine working: at least the basic usage for now, improving as I go. I need a way to capture keystrokes on my keyboard and convert them into events like move-up, move-down, click, filter, assign task, etc. This means I need to listen to keystrokes and handle them properly.
To understand how to do this nicely, we need to know how Vim works and how it is used in the terminal.
At a high level, the system is composed of:
- VimEngine (mode-aware input processor)
- Mode Parsers (Normal, Command)
- MVI ViewModel layer
- Remote + Local data sources
- Compose Multiplatform UI
Vim Modes
Vim has many modes like:
- normal: while navigating a file
- insert: while writing to a file
- command: when running commands in a file
- visual: selecting text in a file
- v-line: selecting lines in a file
and a few more. For now, I will only handle the normal and command modes.
Each mode handles keystrokes differently
Normal Mode Parser
This mode needs to parse and optionally handle keystrokes as soon as they are received. For example, if the user presses j, it should be understood as move down; k moves up, and so on.
This is simple; we could do something like this:
```kotlin
fun handle(c: Char): VimAction? {
    return when (c) {
        'j' -> VimAction.MoveDown
        'k' -> VimAction.MoveUp
        else -> null
    }
}
```
What happens if I want to handle gg (move to the top of the file)? You might think I can simply add it to the when statement, like this:
```kotlin
fun handle(c: Char): VimAction? {
    return when (c) {
        // ...
        'gg' -> VimAction.MoveToTop // doesn't compile: 'gg' is not a valid Char
        // ...
    }
}
```
But our handle function only receives one character at a time, so we need to cache previous keystrokes. Then it becomes something like this:
```kotlin
val buffer = StringBuilder()

fun handle(c: Char): VimAction? {
    when (c) {
        'j' -> {
            buffer.clear()
            return VimAction.MoveDown
        }
        'k' -> {
            buffer.clear()
            return VimAction.MoveUp
        }
    }
    buffer.append(c)
    when (buffer.toString()) {
        "gg" -> {
            buffer.clear()
            return VimAction.MoveToTop
        }
    }
    return null
}
```
There are more complicated cases, for example when the user hits g by mistake and then wants to hit j.
Here is what would happen, step by step:
```
buffer = ""
user pressed: g
no match -> buffer = "g"
user pressed: j
no match -> buffer = "gj"
```
At this point, no matter what the user hits, the buffer keeps growing. We could clear the buffer when the user presses Esc, but that is not the Vim way to handle it.
Partial Pattern Matching
If you try this in Vim, hitting g then j would move down. How so?
This is called partial matching. gj is not a valid keybinding, but j itself is, so g is ignored and j runs.
If no exact match is found, we try the last n-1 characters of the buffer (n = number of characters in the buffer); if that also has no match, we try n-2, and so on until no characters remain. If we find a partial match, we return the corresponding Vim action and clear the buffer.
This type of pattern matching is needed in normal mode parser.
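A minimal sketch of this suffix-based matching could look like the following. The keymap contents and class shape are illustrative, not the app's actual API; the one addition beyond the text above is that a buffer which is still a prefix of some binding (like a lone g) is kept instead of dropped, so gg can complete.

```kotlin
sealed interface VimAction {
    data object MoveDown : VimAction
    data object MoveUp : VimAction
    data object MoveToTop : VimAction
}

class NormalModeParser(
    private val keymap: Map<String, VimAction> = mapOf(
        "j" to VimAction.MoveDown,
        "k" to VimAction.MoveUp,
        "gg" to VimAction.MoveToTop,
    ),
) {
    private val buffer = StringBuilder()

    fun handle(c: Char): VimAction? {
        buffer.append(c)
        var candidate = buffer.toString()
        // Try the full buffer first, then progressively shorter suffixes.
        while (candidate.isNotEmpty()) {
            keymap[candidate]?.let {
                buffer.clear()
                return it
            }
            // Still a prefix of some binding (e.g. "g" before "gg"):
            // keep the buffer and wait for more input.
            if (keymap.keys.any { it.startsWith(candidate) }) return null
            candidate = candidate.drop(1)
        }
        buffer.clear()
        return null
    }
}
```

With this, g followed by j resolves to MoveDown, and g followed by g resolves to MoveToTop.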
Command Mode Parser
Here we need a predefined grammar for our commands. Let's start with a simple and familiar one:
[verb] [target] (args)
For example:
- :status done (change the task status to done)
- :assign bob (assign the task to bob)
In this mode, we don't want to parse on each keystroke. Instead, we keep appending to a buffer and wait for the user to hit Enter to apply the command, or Esc to cancel it.
We should have a predefined vocabulary of verbs, plus verb-to-target mappings, so we know what we can handle and what we can't.
Using the power of sealed interfaces, we can do this nicely:
```kotlin
sealed interface CommandVerb {
    fun toVimAction(target: String?, args: List<Arg>): VimAction?

    data object Status : CommandVerb {
        override fun toVimAction(target: String?, args: List<Arg>): VimAction? {
            if (target.isNullOrBlank()) return null
            val taskStatus = TaskStatus.getFrom(target)
                ?: return VimAction.Error("Unknown status=$target")
            return VimAction.MoveTaskTo(taskStatus)
        }
    }

    data object Assign : CommandVerb {
        override fun toVimAction(target: String?, args: List<Arg>): VimAction? {
            if (target.isNullOrBlank()) return null
            return VimAction.AssignTo(target.lowercase())
        }
    }

    data object Help : CommandVerb {
        override fun toVimAction(target: String?, args: List<Arg>): VimAction {
            return VimAction.ShowHelp
        }
    }

    companion object {
        fun getFrom(verbName: String): CommandVerb? {
            return when (verbName.lowercase()) {
                "status" -> Status
                "assign" -> Assign
                "help" -> Help
                else -> null
            }
        }
    }
}
```
Using a regex, we parse the command and extract the verb:
```kotlin
val verb = getVerbFromCmd(cmd)
CommandVerb.getFrom(verb)?.toVimAction(target, args)
```
We need to take care of edge cases like:
- What happens if the verb is invalid?
- What happens if the target is invalid? And so on.
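As a self-contained sketch of how the regex split and the verb lookup could fit together, here is a simplified version (the regex, the flattened verb table, and the args type are my assumptions; the article's real code uses the full CommandVerb hierarchy above):

```kotlin
sealed interface VimAction {
    data class AssignTo(val user: String) : VimAction
    data object ShowHelp : VimAction
    data class Error(val message: String) : VimAction
}

// Simplified verb table; the real app also handles "status", etc.
fun toVimAction(verb: String, target: String?, args: List<String>): VimAction? =
    when (verb.lowercase()) {
        "assign" -> target?.let { VimAction.AssignTo(it.lowercase()) }
        "help" -> VimAction.ShowHelp
        else -> VimAction.Error("Unknown verb=$verb")
    }

// [verb] [target] (args): verb is required, target and args are optional.
val commandRegex = Regex("""^\s*(\S+)(?:\s+(\S+))?(?:\s+(.*\S))?\s*$""")

fun parseCommand(cmd: String): VimAction? {
    val match = commandRegex.find(cmd) ?: return null
    val (verb, target, rest) = match.destructured  // unmatched groups come back as ""
    val args = rest.split(' ').filter { it.isNotBlank() }
    return toVimAction(verb, target.ifBlank { null }, args)
}
```

An invalid verb surfaces as VimAction.Error, which the UI can show as a notification instead of silently ignoring the command.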
Creating The Engine
Now that our mode parsers are ready, we can start developing the engine.
Our engine takes a NormalModeParser and a CommandModeParser as parameters; it also needs to know the current Vim mode so it knows where to send keystrokes.
```kotlin
class VimEngine(
    private val normalModeParser: ModeParser,
    private val commandModeParser: ModeParser,
) {
    val mode: VimMode = // something

    private val modeToParser = mapOf(
        VimMode.Normal to normalModeParser,
        VimMode.Command to commandModeParser,
    )

    fun handleKey(key: VimKey) {
        val parser = modeToParser[mode.value]
        val vimAction = parser?.parse(key)
        // handle vimAction
    }
}
```
But wait, how should we change the Vim mode, and who is responsible for that?
What makes the most sense is that the engine holds the vimMode and exposes it as a flow. I decided to make the parser return the Vim mode change; this way, VimAction only represents an action to be performed later by the UI, and the mode change is only meant for the VimEngine to see.
So I updated the mode parser function to return a ParseResult:
```kotlin
data class ParseResult(
    val action: VimAction? = null,
    val nextMode: VimMode? = null,
)

fun handle(c: Char): ParseResult {
    // ...
}
```
Then the handleKey function in VimEngine becomes:
```kotlin
fun handleKey(key: VimKey) {
    val parser = modeToParser[mode.value]
    val parseResult = parser?.parse(key)
    parseResult?.nextMode?.let { nextMode ->
        scope.launch { _mode.emit(nextMode) }
    }
    parseResult?.action?.let { action ->
        scope.launch { emit(action) }
    }
}
```
What I have currently:
- User hits keys on the keyboard → they get handled by NormalModeParser
- When the user hits :, NormalModeParser sets nextMode = VimMode.Command in the ParseResult, so the mode switches
- The next keystrokes are handled by CommandModeParser
So far so good.
Later I can easily add:
- more parsers
- more keybindings to normal mode
- more commands to command mode
```
     ┌─────────────┐
     │  Vim Engine │
     └──────┬──────┘
            │
    ┌───────┴─────────┐
    ▼                 ▼
┌──────────────┐  ┌──────────────┐
│    Normal    │  │   Command    │
│     Mode     │  │     Mode     │
│    Parser    │  │    Parser    │
└──────────────┘  └──────────────┘
```
Hook To UI
Once the engine was stable, the next challenge was integrating it cleanly with the Compose UI layer.
I am using Compose, where handling keystrokes is straightforward: I listen with the onKeyEvent modifier, get the key, and send it to the ViewModel, which forwards it to the VimEngine.
I had a screen with a vertical list of tasks, and I was navigating through them with Vim keybindings, switching between modes, etc. All was working.
But then I wanted to show task details side by side with the task list. The standard these days in Compose is a multi-pane view: on the left the tasks list, on the right the task details view. I did this and it looked nice.
```
┌─────────────────────────────────────────────────┐
│                                                 │
│   ┌───────────────┐   ┌───────────────────┐     │
│   │               │   │                   │     │
│   │               │   │                   │     │
│   │  tasks list   │   │   task details    │     │
│   │               │   │                   │     │
│   │               │   │                   │     │
│   └───────────────┘   └───────────────────┘     │
│                                                 │
└─────────────────────────────────────────────────┘
```
Why I Chose Multiple Vim Engines per Pane
When I introduced the multi-pane layout (tasks list + task details), an interesting architectural question appeared:
Who owns the keyboard behavior when multiple panes are visible?
Each pane had very different interaction semantics:
- Task list → navigation-heavy (j, k, gg, filtering…)
- Task details → editing, actions, different commands
- Future panes → unknown behaviors
I considered three approaches.
Option 1 — Dynamic Keymap Switching
Switch keybindings whenever focus changes.
Pros:
- Single engine instance
- Simple mental model initially

Cons:
- Keymap mutation at runtime
- Harder to reason about state
- Becomes fragile as the number of panes grows

This felt convenient short-term but risky long-term.
Option 2 — Swap Parsers Inside One Engine
Keep one engine but replace its parsers based on the focused pane.
Pros:
- Still one engine
- Some separation of behavior (different parsers for different focused panes)

Cons:
- The engine becomes focus-aware, which prevents it from being reused in other projects
- Parser lifecycle becomes harder to track
- Increased coupling between UI and engine

This improved separation slightly but still mixed responsibilities.
Option 3 — Multiple Vim Engines (Chosen)
Each focusable pane owns its own VimEngine instance.
The ViewModel simply routes keystrokes to the currently focused pane's engine.
Pros:
- Strong isolation between panes
- Each pane can evolve independently
- No runtime mutation of keymaps
- Simpler mental model per engine
- Future-proof for more panes
- Enables parser sharing when desired

Cons:
- More engine instances in memory
- Slightly more wiring in the ViewModel

For this application, the tradeoff was clearly worth it.
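The routing itself is small. Here is one way it could look, with a stand-in KeyHandler interface instead of the real VimEngine, and illustrative pane names:

```kotlin
enum class Pane { TasksList, TaskDetails }

// Minimal stand-in for the real VimEngine, enough to show the routing idea.
fun interface KeyHandler {
    fun handleKey(c: Char)
}

class PaneRouter(private val engines: Map<Pane, KeyHandler>) {
    var focusedPane: Pane = Pane.TasksList
        private set

    fun focus(pane: Pane) {
        focusedPane = pane
    }

    // Only the focused pane's engine ever sees the keystroke.
    fun onKey(c: Char) = engines.getValue(focusedPane).handleKey(c)
}
```

Because each engine is focus-unaware, adding a third pane means adding one more map entry, not touching any existing engine.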
The Key Design Principle
The decision was guided by one rule:
Keyboard behavior is contextual UI state, not global application state.
By giving each pane its own engine:
- focus becomes the only routing concern
- engines remain pure and predictable
- adding new panes does not increase complexity of existing ones
In practice, this made the system much easier to extend than the single-engine approaches.
Why This Matters for Future Growth
This design unlocks several things almost for free:
- Different Vim capabilities per pane
- Experimental keymaps in isolated areas
- Plugin-like future architecture
- Potential extraction of the Vim engine as a reusable library
Most importantly, it keeps the architecture honest: each UI surface owns its own interaction model.
Small But Important Optimization
Even though I use multiple engines, parsers themselves can still be shared when behavior overlaps. This avoids unnecessary duplication while preserving isolation where it matters.
```
      Key Press
          │
          ▼
 ┌────────────────┐
 │   ViewModel    │
 │ (focus aware)  │
 └───────┬────────┘
         │
  ┌──────┴───────────────┐
  │                      │
  ▼                      ▼
┌─────────────────┐   ┌─────────────────┐
│ Tasks List Pane │   │  Task Details   │
│    VimEngine    │   │    VimEngine    │
└────────┬────────┘   └────────┬────────┘
         │                     │
         ▼                     ▼
  Normal / Command      Normal / Command
      Parsers               Parsers
```
The complete flow looks like this
```
User presses key
        │
        ▼
┌──────────────────────┐
│      Compose UI      │
│    onKeyEvent(...)   │
└──────────┬───────────┘
           │
           ▼
┌──────────────────────┐
│      ViewModel       │
│ (focus-aware router) │
└──────────┬───────────┘
           │ routes by focus
           ▼
┌──────────────────────┐
│      VimEngine       │
│  (mode-aware parse)  │
└──────────┬───────────┘
           │ emits
           ▼
┌──────────────────────┐
│      VimAction       │
└──────────┬───────────┘
           │ mapped to
           ▼
┌──────────────────────┐
│     ScreenIntent     │
└──────────┬───────────┘
           │ handled by
           ▼
┌──────────────────────┐
│   ScreenInteractor   │
│   (business logic)   │
└──────────┬───────────┘
           │ produces
           ▼
┌──────────────────────┐
│    Reducer (MVI)     │
│  uses currentState   │
└──────────┬───────────┘
           │ emits
           ▼
┌──────────────────────┐
│      New State       │
└──────────┬───────────┘
           │
           ▼
  Compose UI recomposes
```
Switching Focus
For the ViewModel to know which pane is focused, I created a state for it and saved it in the ViewModel. Then, based on the emitted Vim actions, I know when to switch focus.
For example:
- When the user presses Enter on a task in the tasks list pane → switch focus to the task details pane
- When the user presses Esc in the task details pane → switch focus back to the tasks list pane
- I even went a step further and added Vim-like keybindings for this in NormalModeParser:
  - ctrl-l: switch focus to the pane on the right (in this case, task details)
  - ctrl-h: switch focus to the pane on the left (in this case, the tasks list)

Of course, using mouse clicks here also works and switches focus properly.
Reminder: this is not a Vim-only app, but a Vim-based one, so the mouse still works as expected (touch as well, on mobile devices).
UI Features
- Tasks list: auto-scroll when the user hits j, k, gg, G
- Animated task status updates, with a loading indicator while the update is in progress
- Stacked notification system: show info, error, and warning notifications in the top-right corner of the app, auto-disappearing after 3 seconds
- Highlight the focused pane
- Popup showing all available keybindings and what each does
More Vim Features
Here is a list of Vim features I also support:
- Repeatable actions: repeat the last command by pressing .
- In command mode: pressing arrow-up/arrow-down cycles back and forward through previously executed commands
- Each parser has its own buffer
- VimEngine exposes the buffer of the currently active parser, so its content can be shown in the UI
- NormalModeParser:
  - default keybindings
  - extra keybindings, to support configurable keybindings later
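One simple way the defaults-plus-extras split could work is a merge where user-supplied bindings win over defaults. This is a sketch of that idea, with illustrative action and map names (the app's real configuration format isn't shown in the article):

```kotlin
enum class Action { MoveDown, MoveUp, MoveToTop }

// Built-in bindings shipped with the app.
val defaultKeymap: Map<String, Action> = mapOf(
    "j" to Action.MoveDown,
    "k" to Action.MoveUp,
    "gg" to Action.MoveToTop,
)

// User-supplied "extra" bindings override defaults on conflict,
// because the right-hand side of Map.plus wins.
fun resolveKeymap(extra: Map<String, Action>): Map<String, Action> =
    defaultKeymap + extra
```

The parser then only ever sees the resolved map, so configurability costs nothing at keystroke-handling time.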
Data Layer
This app is intended to also work offline. So I need a local data source and a remote data source (Jira). Both expose my domain-level models.
- Remote data source (interface)
  - Real implementation: abstracts away the Jira models and returns only my domain-level models
  - Fake implementation for testing
- Local data source (interface)
  - In-memory implementation: stores and caches data in memory
  - DB implementation (to be done): persists to a database; needed for offline support
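In interface form, that split could look roughly like this. The Task model and method names are illustrative, not the app's actual contracts, and the real implementations would likely be suspend functions backed by Ktor/SQL:

```kotlin
// Hypothetical domain model; the real app maps Jira payloads into something like this.
data class Task(val id: String, val title: String, val status: String)

interface RemoteDataSource {   // backed by the Jira API in the real implementation
    fun fetchTasks(): List<Task>
}

interface LocalDataSource {    // in-memory today, DB-backed later for offline mode
    fun cache(tasks: List<Task>)
    fun cached(): List<Task>
}

class InMemoryLocalDataSource : LocalDataSource {
    private val store = mutableListOf<Task>()

    override fun cache(tasks: List<Task>) {
        store.clear()
        store += tasks
    }

    override fun cached(): List<Task> = store.toList()
}
```

Because both sources speak the same domain models, the UI layer never needs to know whether a task came from Jira or from the cache.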
Claude Code
It’s 2026; not using AI in the development workflow would be a missed opportunity.
I used LLM tooling (Claude Code) strategically to accelerate implementation while keeping full architectural ownership and code review responsibility.
My rule was simple:
AI can generate — but I design, verify, and own the system.
What AI helped the most
UI Scaffolding
Since the early focus of the project was the interaction model rather than visual polish, I used Claude to scaffold several UI components.
With well-scoped prompts and clear context, most components were generated correctly in one pass. This allowed me to:
- move faster in early iterations
- avoid spending time on repetitive Compose boilerplate
- keep focus on the Vim engine and state flow
As the product matures, I expect to take more manual control over UX refinement.
API Layer
The Jira integration layer is something I’ve implemented many times professionally, so it was a good candidate for delegation.
I provided Claude with:
- Jira API documentation
- my project structure conventions
- interface contracts (real + fake implementations)
- error-handling expectations
- domain model mappings
Because the constraints were explicit, the generated code was:
- Clean
- Testable
- Idiomatic
- and required only light review adjustments
This is exactly the kind of work where AI currently provides the most leverage.
Unit Testing
The Vim engine and parsers have many edge cases, and comprehensive unit testing is essential but time-consuming.
My workflow was:
- I defined the test scenarios
- Claude generated the test implementations
- I reviewed and refined them
- GitHub Actions enforce them on every PR
This gave me broad test coverage quickly while maintaining confidence in correctness.
What AI Did Not Own
The following remained fully manual:
- overall architecture
- Vim engine design
- mode system
- state flow (MVI)
- focus routing model
- concurrency decisions
AI accelerated the build — but the system design decisions remained human-driven.
Takeaway
Used carelessly, AI can produce fragile systems.
Used deliberately, it becomes a powerful force multiplier.
In this project, the goal was never to replace engineering judgment — only to remove unnecessary friction from the implementation process.
Things To Improve
This is still the first version of the app. The core interaction model is working well, but several areas need to mature before this could be considered production-ready.
High priority
Proper Jira API authentication
Proper Jira API authentication
Currently the app uses a simple API token. Supporting OAuth and improving token handling will be required for real-world usage.

Offline mode support
The data layer is designed for it, but the database implementation and sync strategy still need to be completed.

Smarter cache invalidation
Right now caching is basic. As usage grows, I will need more deliberate invalidation and refresh strategies to avoid stale task data.
Medium priority
Extract VimEngine as a reusable library
The engine is already mostly decoupled. With some cleanup it could become a standalone module usable in other projects.

More Vim motions and text objects
The current implementation focuses on navigation and commands. Expanding motion coverage will improve muscle-memory compatibility for heavy Vim users.
Longer-term explorations
Performance tuning under heavy key input
As the number of panes and commands grows, I want to measure and optimize keystroke latency and buffering behavior.

Plugin-style extensibility
The multi-engine design opens the door for pane-specific extensions. I’m interested in exploring how far this model can scale.
Final thoughts
What started as a small experiment has quietly become part of my daily workflow. I now use it at work during the daily standup (I am the scrum master, I can do that 😁).
Simple operations, like filtering tasks or moving an issue from todo to done, are now muscle memory. For example, md (move to done) is often faster than reaching for the mouse and navigating multiple menus.
Interestingly, my teammates initially assumed I was using some existing tool rather than something custom-built. That reaction alone was a strong signal that the interaction model is heading in the right direction.
There is still plenty of work ahead, but the core bet is already paying off: when keyboard interaction is treated as a first-class architectural concern, the entire experience changes.
The goal was never to replace Jira, only to make working with it finally feel fast.