I wrote about Skaro before, but since then the project has changed quite a lot.
Version 2.0 is not a cosmetic update. It is a deeper rethink of how the product should work and what kind of collaboration model it should support.
So instead of posting a simple changelog, I want to explain what Skaro actually is, why I keep building it, and where I want to take it next.
The question I get the most
Whenever I show a new AI tool for software development, I get the same question almost immediately:
How is this different from Cursor, Codex, Claude Code, and why build something else at all?
That is a fair question. But I think the problem is in the framing.
Most people try to place Skaro into an existing category: editor, coding agent, code generator, or just another UI around an LLM.
That is not how I think about it.
Skaro was never meant to be just another utility that helps write code faster. And it was not meant to be just a chat where you occasionally paste pieces of a project.
The idea is different.
What Skaro is
In short, Skaro is a collaborator and a workspace for building software projects together with AI.
Not in the sense of replacing the developer.
Not in the sense of asking the model to do everything on its own.
But in the sense of real collaboration, where the human and the AI have different roles, and those roles are separated in a natural way.
The human leads the project. The human owns the intent, the key decisions, the architecture, the constraints, and the meaning behind the work.
The AI acts as engineering leverage. It helps discuss ideas, shape documents, break work into stages, define tasks, implement changes, restore context when you come back, and push things toward completion.
That is the model that makes sense to me.

Why I felt this tool needed to exist
Once you start building software with AI seriously, one problem shows up very quickly.
Human + AI collaboration produces a large amount of decisions, assumptions, agreements, intermediate conclusions, corrections, and context. You cannot keep all of that in your head. And you cannot scatter it across random chats either.
At some point, a project starts to fall apart not because there is not enough code, but because the thread gets lost:
- why a certain decision was made
- what the architecture depends on
- what was already agreed on
- how one task relates to another
- which constraints were already established
That is why I think an AI-assisted project needs external memory.
Not a detached knowledge base somewhere on the side, but a working structure that lives close to the project itself and stays visible during everyday work.
Why artifacts are at the center of Skaro
That is what artifacts are for in Skaro.
Artifacts are not documents for the sake of documents. And they are not bureaucracy layered on top of development.
They are the recorded output of what was already discussed, designed, and agreed on: architecture, plans, milestones, tasks, and other important project decisions.
That gives the human a stable thread to follow.
And it gives the AI something better than a blank slate every time. The model can work from an existing foundation instead of guessing from a short prompt.
For me, this is one of the key ideas.
If a project is developed together with AI, then an important part of the context should live not only in the author’s head and not only in chat history, but in a clear structure inside the repository.
That is what gives the work continuity.
This is what the .skaro folder structure looks like: (trimmed)
```
.skaro
│   config.yaml
│   constitution.md
│   devplan.md
│   secrets.yaml
│   state.yaml
│   token_usage.yaml
│   usage_log.jsonl
│
├───architecture
│   │   adr-001-using-fastapi-as-web-framework.md
│   │   adr-002-simplified-layered-monolith-as-architectural-pattern.md
│   │   adr-003-using-psutil-for-system-metrics.md
│   │   adr-004-no-database-in-favor-of-stateless-architecture.md
│   │   adr-005-no-docker-in-favor-of-native-windows-run.md
│   │   adr-006-no-authentication-and-authorization-for-public-....md
│   │   adr-007-using-pydantic-settings-for-config.md
│   │   adr-008-sync-psutil-calls-in-async-endpoints.md
│   │   adr-009-testing-strategy
│   │   architecture.md
│   │   chat-conversation.json
│   │
│   └───diagrams
├───chat
│       tasks.json
│
├───docs
│       review-results.json
│
├───features
├───milestones
│   ├───01-foundation
│   │   │   milestone.md
│   │   │   order.json
│   │   │
│   │   ├───config-module
│   │   │   │   clarifications.md
│   │   │   │   plan.md
│   │   │   │   spec.md
│   │   │   │   tasks.md
│   │   │   │   tests-confirmed
│   │   │   │   tests.json
│   │   │   │   verify.yaml
│   │   │   │
│   │   │   └───stages
│   │   │       └───stage-01
│   │   │               AI_NOTES.md
│   │
├───models_cache
│       groq.json
│
├───ops
└───templates
        adr-template.md
        ai-notes-template.md
        architecture-template.md
        constitution-template.md
        devplan-template.md
        plan-template.md
        security-checklist.md
        spec-template.md
```
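To make the "external memory" idea concrete, here is a minimal sketch of how a structure like this can be turned into prompt-ready context. This is my own illustration, not Skaro's actual code, and the choice of which artifacts count as "core" is an assumption:

```python
from pathlib import Path

# Artifacts to load first; names mirror the .skaro tree above.
# Which files count as "core" is an assumption for this sketch.
CORE_ARTIFACTS = ["constitution.md", "devplan.md", "architecture/architecture.md"]

def build_context(skaro_dir: str) -> str:
    """Concatenate the core project artifacts into one prompt-ready string.

    Purely illustrative -- Skaro's real context assembly may differ.
    """
    root = Path(skaro_dir)
    sections = []
    for rel in CORE_ARTIFACTS:
        path = root / rel
        if path.is_file():
            # Prefix each artifact with its path so the model can cite it.
            sections.append(f"## {rel}\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(sections)
```

The point of the sketch is only that the artifacts are plain files in the repository, so assembling context for a model is a trivial read, not an export from some external knowledge base.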
What the workflow is supposed to look like
The workflow I believe in looks roughly like this:
First, the human and the AI think through the project and create the core architecture documents.
Then those documents are used to shape a plan, break it into milestones, and define concrete tasks.
Only after that does implementation begin, again with AI, but no longer in a chaotic way. It happens with a recorded foundation behind it.
This is also why chat matters to me — but only where discussion is actually needed.
The point is not “the product has chat.”
There is already enough chat in the world.
The point is that architecture, plans, tasks, and other working materials can be discussed right where they live, and the result can be saved directly into the repository.
That keeps the workflow coherent. Context stays attached to the thing it belongs to.
What changed in Skaro 2.0
With version 2.0, my goal was not to add isolated features.
The goal was to improve the overall process of human + AI collaboration around a software project.
Not “a few more screens,” but a product that makes this way of working more understandable, more natural, and more usable in day-to-day practice.
Here are the changes that matter most to me.
Chat is now embedded where the work happens
In Skaro 2.0, chat is available wherever discussion with AI is actually part of the workflow.
That matters not because “there is now a chat inside the product.” Plenty of tools already have that.
What matters is that the discussion now happens in the context of artifacts, tasks, and working pages. You do not have to leave the workflow or move important conversations into a disconnected place.
The model gets a real starting point before implementation
Before executing a task, the LLM receives an initial project context and can then request the files it needs for implementation.
That is an important step toward consistency.
The model is not just given a short prompt and expected to work almost blindly. It starts from a defined context and can retrieve the relevant material within the scope of the task.
That makes the implementation more coherent and reduces the number of cases where generated changes drift away from decisions that were already made.
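I do not describe Skaro's internal protocol here, but the general pattern is easy to sketch: the model gets a compact initial context, replies with file requests until it has what it needs, and only then implements. Everything in this sketch (the function, the JSON message shapes, the loop limit) is hypothetical:

```python
import json
from pathlib import Path

def run_task(llm, task: str, initial_context: str, repo: Path, max_rounds: int = 5):
    """Let the model request files before implementing.

    `llm` is any callable mapping a prompt string to a reply string.
    The JSON request format is hypothetical, not Skaro's actual one.
    """
    prompt = (
        f"{initial_context}\n\nTask: {task}\n"
        'Reply with {"need_files": [...]} or {"patch": "..."}.'
    )
    for _ in range(max_rounds):
        reply = json.loads(llm(prompt))
        if "patch" in reply:
            return reply["patch"]  # model has enough context to implement
        # Model asked for more context: append the requested files verbatim.
        for rel in reply.get("need_files", []):
            path = repo / rel
            body = path.read_text(encoding="utf-8") if path.is_file() else "<missing>"
            prompt += f"\n\n### {rel}\n{body}"
    raise RuntimeError("model never produced a patch")
```

The design choice this illustrates is pull over push: instead of stuffing the whole repository into the prompt, the model pulls only the files relevant to the task, which keeps context focused and token usage bounded.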
Auto-commit when a task is completed
There is now an option to enable automatic commits after task completion.
This does not look like a headline feature, but in practice it is genuinely useful.
When your workflow is moving quickly, small things like this reduce friction and help maintain momentum.
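For the curious, such an auto-commit step can be as simple as shelling out to `git` after the task finishes. The commit-message convention below is my assumption, not Skaro's actual format:

```python
import subprocess

def auto_commit(repo_dir: str, task_id: str, summary: str) -> None:
    """Stage and commit all changes after a task completes.

    The task(...) message convention is illustrative only.
    """
    # Stage everything the task touched, including new files.
    subprocess.run(["git", "add", "-A"], cwd=repo_dir, check=True)
    subprocess.run(
        ["git", "commit", "-m", f"task({task_id}): {summary}"],
        cwd=repo_dir,
        check=True,
    )
```

One commit per completed task also gives the project history the same granularity as the task list, which makes it easier to trace a change back to the artifact that motivated it.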
Statistics now live on a separate page
This made the interface cleaner and easier to reason about.
Core workflows no longer compete for space on the same screen, and model usage and activity metrics are easier to inspect in a dedicated place.
The start page reflects project state better
The start page now includes a Kanban-style task board.
This is not just a nicer overview. It gives a quick read on the current project state, shows where work stopped, and helps you get back into the flow without extra navigation.
For a tool that is supposed to help run a project, not just generate code fragments, that matters a lot.
The interface became more flexible and calmer
Version 2.0 adds lightweight theme customization. You can pick one of the preset accent colors or set your own.
More broadly, the UI was improved to feel more stable and predictable during daily use.
I do not think of interface design as decoration. If a product is meant to be a real workspace, it should feel calm, readable, and consistent.
So in 2.0 I paid attention not only to what was added, but also to how the product feels while you work in it every day.
Known issues were also fixed
A less glamorous but important part of the work in 2.0 was fixing known issues that were getting in the way of the product being used the way it was intended.
For me, that kind of work matters just as much as shipping new features.
If a tool wants to become a long-term workspace, it has to be solid in everyday use, not only interesting as an idea.
Where I want to take the project next
The core of Skaro will remain Open Source.
That is a principle for me.
The underlying idea — building software with AI around artifacts, project memory, and a tighter link between discussion and implementation — should stay open.
At the same time, I want to build a separate workspace layer around Skaro for teams and companies.
That means a broader collaboration environment: access control, analytics, cost visibility, boards, shared workflows, and other capabilities that matter when the product is used by a team rather than a single developer.
That feels like the next logical layer.
But the open source core is not going away, and it is not meant to become a closed showcase.
Final thoughts
I do not see Skaro as an attempt to make “just another AI coding tool.”
And I do not see it as another shell around a model.
I see it as a working environment where humans and AI build software projects together, while key decisions, plans, milestones, and tasks are stored in a clear structure so the project does not dissolve into memory and scattered chat threads.
That is what Skaro 2.0 was rebuilt for.
If this model resonates with you, I would genuinely like to hear your thoughts.
And if you are already building software with AI and have run into the same problems — project memory, decision consistency, context loss — I would be especially interested in comparing approaches.
GitHub: https://github.com/skarodev/skaro
Website: https://skaro.dev
Docs: https://docs.skaro.dev
Telegram: https://t.me/skarodev
Discord: https://discord.gg/zUv6AHuJwD



