Ola Prøis

Beyond Open Source: Why AI-Assisted Projects Need Open Method

A few weeks ago, someone on Hacker News called my project "open weights."

I'd just shared Ferrite, a markdown editor with 100% AI-generated Rust code. The source was on GitHub, MIT licensed, the whole deal. But this commenter argued that without sharing the prompts and process that created it, I was essentially doing the AI equivalent of releasing model weights without the training data. The code was visible, but the inputs weren't.

At first I pushed back. The code compiles. It runs. Anyone can fork it, modify it, contribute. Isn't that what open source means?

But the comment stuck with me. And the more I thought about it, the more I realized they had a point.

The gap in "open source"

Open source was designed for a world where humans wrote code. The implicit assumption was always: if you can read the code, you can understand how it was made. A skilled developer could look at a function and reverse-engineer the thinking behind it. The code was the process, more or less.

(This is idealized, of course. Plenty of open source projects have opaque decision-making - closed maintainer discussions, undocumented tribal knowledge. But the potential for transparency was there. The code contained enough signal to reconstruct intent.)

AI-assisted development breaks that assumption.

When Claude writes a function based on my prompt, the code tells you what it does, but not why it exists in that form. Was this the first attempt or the fifteenth? What constraints did I give? What did I reject along the way? What context did the AI have about the rest of the codebase?

The code is a snapshot. The process is invisible.

This isn't necessarily a problem for using the software. But it's a problem if you want to learn from it, build on it, or understand the decisions behind it. Traditional open source invited you into the workshop. AI-assisted open source shows you the finished product and locks the door.

What would "open method" look like?

After that HN comment, I went back and documented everything. The actual workflow is now public: how I use multiple AIs for ideation, how I structure Product Requirements Documents, how I break tasks into subtasks, how I write handover prompts between sessions, how I maintain an AI context file so the model knows the codebase patterns.

The full thing is in the repo under docs/ai-workflow/. It's not polished. Some of it is rough notes. But it's real.

And here's what I learned: sharing this stuff is harder than sharing code.

Code has conventions. Linters. Tests. There's a shared understanding of what "good" looks like. But AI prompts? There's no standard format. No agreed-upon level of detail. Do I share every failed attempt? Just the successful ones? The iterative refinements?

I landed on sharing:

  • The workflow documentation (how I actually work with AI, step by step)
  • Historical PRDs (the requirements documents I feed to the AI)
  • Handover prompts (what I tell the AI at the start of each session)
  • Task breakdowns (how features get decomposed into implementable chunks)

Is this enough? I don't know. But it's more than just the code.
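
For orientation, here's roughly how those four pieces could map onto a repo layout. Treat the file and folder names as illustrative — this is a sketch of the idea, not a listing of what's actually inside Ferrite's docs/ai-workflow/ today:

```text
docs/ai-workflow/            # illustrative layout — actual names may differ
├── workflow.md              # step-by-step description of how I work with the AI
├── prds/                    # historical Product Requirements Documents fed to the AI
│   └── some-feature.md
├── handover-prompts/        # context given to the AI at the start of each session
│   └── session-template.md
└── task-breakdowns/         # features decomposed into implementable chunks
    └── some-feature.md
```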

Why this matters beyond my project

I keep seeing AI-assisted projects pop up on GitHub. Some are explicit about it, some aren't. And I think we're heading toward a world where a significant chunk of open source code will be AI-generated or AI-assisted.

If that's true, we need to figure out what transparency means in this context.

The conversation is already starting. Mozilla.ai recently argued that "without an AI Coding policy that promotes transparency alongside innovation, Open Source codebases are going to struggle." They've implemented PR templates asking contributors to disclose their level of AI usage. Coder has published formal guidelines requiring disclosure when AI is the primary author, plus verification evidence that the code actually works.
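
To make "disclosure" concrete, here's a rough sketch of what such a block in a PR template might look like. This is my own illustration, not Mozilla.ai's or Coder's actual wording:

```markdown
<!-- AI usage disclosure — illustrative sketch, not any project's official template -->
## AI assistance

- [ ] No AI was used in this PR
- [ ] AI assisted (autocomplete or suggestions I reviewed line by line)
- [ ] AI was the primary author (I prompted, reviewed, and tested the result)

**Tools used:** <!-- e.g. Claude, Copilot, Cursor -->
**Verification:** <!-- how you confirmed it works: tests run, manual checks -->
```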

This is good progress. But disclosure - saying "AI wrote this" - is different from method - sharing how it was written. Both matter, but they serve different purposes. Disclosure helps reviewers calibrate their trust. Method helps others learn, replicate, and build on your approach.

Traditional open source licenses don't cover either one. MIT and GPL talk about distribution, modification, attribution. They say nothing about documenting your process. There's no legal requirement to share your prompts.

But there's a difference between legal requirements and community norms. Open source thrived because of a culture of sharing knowledge, not just code. READMEs, contribution guides, architectural decision records, commit messages that explain why - all of this is technically optional but practically essential.

I'm arguing we need to extend that culture to AI-assisted development. Not as a legal requirement, but as a community expectation. If you're releasing AI-generated code, consider releasing the method too.

There's also a selfish reason to do this: open method protects the author. If your code has quirks or unconventional patterns, the method shows why. It proves you guided the AI - that you made architectural decisions, rejected bad suggestions, iterated toward something intentional. Without it, people might assume you just pasted output without thinking. With it, you're demonstrating the craft behind the code.

What to call it

"Open weights" doesn't quite fit. That term has a specific meaning in ML - releasing model parameters without training data or code. It's about what you're withholding, not what you're sharing.

"Reproducible development" is closer, but sounds academic. And true reproducibility might be impossible anyway - you can't perfectly reproduce an AI interaction. Run the same prompt twice and you'll get different output. This isn't about deterministic reproduction like a build script. It's about conceptual reproduction - understanding the approach well enough to build something similar, or to pick up where someone left off.

I've been thinking of it as "open method" - sharing not just the code, but the process that created it. The prompts, the workflow, the decisions. Enough that someone else could understand not just what you built, but how you built it.

This parallels how academia handles research. "Open science" doesn't just mean publishing results. It means sharing data, methodology, analysis code. A paper without methodology gets rejected - you can't just say "trust me, the results are valid." Yet in software, we accept the binary without the lab notes all the time. We're used to it because the code was the lab notes. With AI-assisted development, that's no longer true.

Software development with AI needs the same shift academia made: recognize that outputs alone aren't enough.

Practical suggestions

If you're releasing AI-assisted code and want to practice "open method":

Document your workflow. Not a polished tutorial, just an honest description of how you actually work. What tools? What prompts? What does iteration look like?

Save your PRDs. If you write requirements documents for the AI, keep them. They're the closest thing to "training data" for your specific project.

Keep handover context. Whatever you tell the AI at the start of sessions - system prompts, context files, architectural notes - consider making it available.

Note significant decisions. When you rejected an AI suggestion or chose between approaches, a quick note about why helps future readers.

None of this needs to be perfect. The bar should be "useful to someone trying to understand how this was built," not "publishable academic paper."
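
To make the handover-context suggestion concrete, here's a minimal sketch of what one of those session-start files might contain. The headings are my own, not a standard format — adapt them to your project:

```markdown
<!-- handover-prompt.md — illustrative structure, not a standard -->
# Session handover: <feature or task name>

## Where we are
What was built in the last session and what state the code is in.

## Codebase context
Key modules, patterns, and conventions the AI should respect
(error handling style, module layout, naming).

## This session's goal
The single task to complete, with acceptance criteria.

## Constraints and rejected approaches
Things already tried and decided against, so they don't get re-suggested.
```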

The tradeoffs

I should acknowledge: there are reasons not to share everything.

Prompts can reveal proprietary thinking - your secret sauce for getting good results. Sharing failed iterations might expose security vulnerabilities or embarrassing dead ends. Some companies have legitimate IP concerns about their AI workflows.

This isn't all-or-nothing. You can share your general approach without revealing every prompt. You can document the workflow without exposing sensitive business logic. The goal is enough transparency to be useful, not a livestream of your entire development process.

But I'd argue the default should shift toward openness, especially for open source projects. If you're already sharing the code, sharing the method is a natural extension.

What's still missing

I don't think sharing my workflow documentation solves this problem. It's one example. What we actually need are conventions - the way we have conventions for READMEs, for commit messages, for contribution guides.

What should an "open method" disclosure look like? Is there a standard format? Should it live in the repo, in the PR, somewhere else? How much detail is enough - the final working prompt, or the fifteen failed attempts before it?

We also lack tooling. We have git for code, but we don't have good version control for chat sessions. My "handover prompts" are markdown files I manually maintain. That works, but it's friction. The first tool that makes capturing AI development context as easy as git commit will unlock a lot more openness.

Here's a concrete starting point: what if GitHub repos had an AI_METHOD.md alongside README.md and CONTRIBUTING.md? A standard template that answers: What AI tools were used? What's the general workflow? Where can I find example prompts or PRDs? It's not a perfect solution, but it's a convention - and conventions are how communities coordinate.
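
A first draft of that template could be as simple as the sketch below — something to react to, not a finished standard. The section names are just a starting point:

```markdown
<!-- AI_METHOD.md — draft template, sections are a suggestion -->
# AI Method

## Tools
Which AI tools/models were used, and for what (ideation, code generation, review).

## Workflow
A short description of the loop: how requirements become prompts become code.

## Artifacts
Where to find example prompts, PRDs, handover files, and task breakdowns in this repo.
```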

I don't have all the answers yet. But I think starting to share, even imperfectly, is how we'll figure it out. The early open source movement didn't have perfect conventions either. They emerged through practice, through people trying things and seeing what worked.

If AI-assisted development is going to become normal, we need to normalize showing our work. Not because there's anything wrong with using AI, but because transparency builds trust, enables learning, and strengthens the open source ecosystem we all benefit from.

The code is the output. The method is the craft. Both can be open.


I'm curious what you think. If you were reviewing an AI-assisted PR, what documentation would actually help you trust it? What would you want to see in an "open method" disclosure? Let's figure this out together in the comments.


I wrote more about the specific workflow in The AI Development Workflow I Actually Use, and the story of building Ferrite in I shipped an 800-star Markdown editor without knowing Rust. The full workflow documentation is in the Ferrite repo.
