Disclaimer: This post isn't a manifesto against modularity, but rather a description of a temporary approach for solo developers actively using LLMs.
This feels bad on so many levels.
First: user/developer experience. Using regions won't help you when you don't know which regions exist, and generic regions are more visual noise than anything else.
A long script adds more distraction than a short one; that is why people recommend keeping classes and functions small.
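For readers who haven't used them: the "regions" under discussion are editor folding markers. A minimal sketch in Python (VS Code and PyCharm fold on these comment markers; other editors may differ, and the functions here are hypothetical examples):

```python
# region: data loading
def load_points(path):
    """Read one 'x,y' pair per line into a list of (x, y) tuples."""
    with open(path) as f:
        return [tuple(map(float, line.split(","))) for line in f if line.strip()]
# endregion

# region: math helpers
def centroid(points):
    """Average of a list of (x, y) tuples."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))
# endregion
```

The markers cost nothing at runtime; whether they help or just add noise is exactly the disagreement in this thread.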
Second: function/class visibility. When all the code is in one file, how do you handle classes and functions that are only used by specific classes? How do you handle functions and classes that could have generic names in a module? You don't want 80+ character names in your code, right?
Third: useless AI information. The goal of context engineering is to give the LLM just enough information to generate the desired output, no more and no less. By feeding the LLM the whole application every time, you are filling the context window with superfluous information.
I looked at your project and you don't even implement your own advice. Why do you want to convince other people it is a good idea?
Thank you for the critique and the opportunity to clarify my thought. You're absolutely right: in an ideal world, during long-term development or on a large team, modularity is the norm, and I'm not advocating abandoning generally accepted practices. But when you're working alone and using AI, the opposite can be true. My article isn't about writing perfect, scalable code for a SaaS product. It's about a pain point: how to quickly iterate on a project where 90% of the work is prompt engineering and AI-assisted logic testing, and 10% is the code itself.
You're right: if you look at my repository, you'll notice that I use modules. This is a key point I should have made clear. I use modules for the final product I'm preparing for release. But the iterative development process for this demo - that is, the stage where I was trying to get the LLM to generate complex mathematical logic - took place in a monolith (see the commit history). I used modularity for the cleanliness of the final code, but the monolith for generation speed.
First: You're right, for humans, excess visual noise is bad. But here we're fighting a double whammy: bad UX for humans versus inefficiency for the AI. For me, in this scenario, Ctrl+F and regions are faster than constantly switching between 15 files and forcing the LLM to re-learn the dependency graph. This is a temporary, pragmatic solution for rapid iteration.
Second: In a multi-module structure, this is critical. In my monolithic scenario, where I work alone and test the logic, function privacy is less important than overall visibility. If a function is needed only by class A, I simply leave it in the same file rather than waste time creating a separate module that the LLM will still have to reassemble into the context.
Third: Yes, context overloading is expensive. However, in my case, when I ask the LLM to rewrite or extend existing 5,000-line logic, it's easier to give it the entire working code so it can see all the local variables and constants than to try to manually compose a perfect prompt that includes three files, five imports, and an explanation of where a global constant it can't see lives. Also, it's easier to upload one file than ten, and some platforms have file upload limits, so there's a fine line between context overloading and the free limit. Moreover, the AI will still read ALL the files you send it, so it doesn't care whether everything is in one place or not.
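On the "easier to upload one file than ten" point: one pragmatic middle ground is keeping the codebase modular but flattening it into a single upload-ready file when talking to the LLM. A rough sketch (not the author's actual tooling; `flatten` and its arguments are hypothetical):

```python
from pathlib import Path

def flatten(src_dir, out_file):
    """Concatenate every .py module under src_dir into one file,
    with a header comment so the LLM can see where each piece came from."""
    parts = []
    for path in sorted(Path(src_dir).rglob("*.py")):
        parts.append(f"# ===== {path} =====\n{path.read_text()}")
    Path(out_file).write_text("\n\n".join(parts))
```

This keeps real module boundaries in the repository while still giving the model one contiguous block of context per prompt.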
I was talking about the workflow of an LLM developer, not about best practices in general. Monolithic is for AI iteration, modularity is for release.
The workflow doesn't make much sense. You keep working on the application until it is done, and then you make it modular?
What do you do when you want to change the code - move everything back into a single file?
Use a project file like CLAUDE.md to give enough information about your project. You can add module information using sub-files, and Claude Code or opencode will only read those when they think they need them.
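For readers unfamiliar with this setup: Claude Code reads a CLAUDE.md file from the project root as persistent project context. A hypothetical sketch (the project, module names, and notes here are invented for illustration):

```markdown
# Project: plotting demo

Python 3.12, single entry point in `main.py`.

## Modules
- `geometry.py` - point/line math; invariants documented in `geometry.md`
- `ui.py` - widget layout; do not change public signatures without asking

## Conventions
- No global mutable state; pass config objects explicitly
- Run the linter before committing generated code
```

The sub-file references (like `geometry.md`) are the mechanism being suggested: the agent pulls them in only when a task touches that module.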
From what I read I think that you are not using the right AI tools. I would invest time in finding the right tool instead of trying to go with a solution that is half-baked.
Why combine multiple files into one if I need to change part of my app, not the whole thing? You already have the architecture, and you don't need to radically change the API (unless you suddenly decide to change the GUI framework, which is pointless at this stage), so it doesn't make sense. Combining them into a single file just to improve button placement is pointless, as is writing math.
You suggest using auxiliary files (like CLAUDE.md) to describe the structure. This is a great solution for helping the AI understand the project at the start or during refactoring. However, when I generate new, complex logic, I need the AI to continuously see all the local variables and functions it accesses. Adding documentation to prompts or separate files often results in the AI ignoring these instructions, because its context is already overloaded with an additional 100 lines of documentation. It becomes even more confused about the architecture, especially if that documentation contains even a single error. I haven't tested this, but I suspect it would fail.
You're saying I'm using the tools incorrectly. That may be true, but I'm basing my approach on the fact that current LLMs work best as a "super-code-completion" tool in a single file, not as a "project manager." My approach is to minimize the likelihood that the AI will "forget" about external dependencies when I ask it to write complex code. In this scenario, simply following its replace/delete instructions in a single file is often faster than initiating complex AI refactorings through the project description system.
When I mentioned changes, I was thinking about big changes, not little ones. The fact that you think it is pointless seems to indicate you will never consider rewrites once a project is split into modules.
A note on your local variables: global variables are a bad thing in an application. And your main function is 3,000 lines. I see function definitions in a while loop, and loops nested in loops. Have you run static analysis tools on the code to check its quality?
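For context on why linters flag function definitions inside loops: closures created in a loop capture the loop variable itself, not its value at that iteration (pylint reports this as `cell-var-from-loop`). A minimal illustration:

```python
# Pitfall: every lambda closes over the same variable i,
# so they all see its final value after the loop ends.
callbacks = []
for i in range(3):
    callbacks.append(lambda: i)
print([f() for f in callbacks])  # [2, 2, 2], not [0, 1, 2]

# Fix: bind the current value via a default argument.
fixed = [lambda i=i: i for i in range(3)]
print([f() for f in fixed])  # [0, 1, 2]
```

This is exactly the kind of bug that is easy to miss in a 3,000-line function and trivial for a static analyzer to catch.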
The forgetting has to do with having too many tokens in the context window. The thing is, there are multiple ways to avoid it: for example, run sub-agents in their own context windows. Another is to let the AI come up with a plan, review and store that plan, then run it.
The main point I'm trying to make is that blaming the tool, combined with a lack of knowledge about it, leads you to this hair-raising solution of using a single file.
Learn more before you start using it for complex things. AI is like an airplane: you don't just sit in the pilot's seat and start flying. First you go in a simulator, then a small airplane, and then bigger and bigger ones.
I never claimed to stay in a monolith forever. My approach is iterative development with AI, not an architectural standard. I use the monolith as a draft. After the AI has generated working logic (which can take dozens of iterations), I refactor to remove global variables, nested loops, and so on. You're right, if I were going to work on the code manually for six months, I'd start with modules. But here we're talking about generating 90% of the code in one month.
You're right, static analysis would reveal many problems in this "draft." But I intentionally skip this step during the AI generation stage. I don't run linters on code that the AI wrote three minutes ago and that hasn't yet passed the final check. It's like writing code on paper - you won't check the spelling until you've finished the draft. I use the monolith as a testing ground for the AI, not as a production-ready artifact. If you pay attention, the last commit was made three days ago, and the refactoring commit is five days old. Moreover, its name says "WIP," which means I haven't finished it yet.
Your airplane analogy is very accurate, and I agree that learning is important. However, I believe my "simulator" is precisely that single-file monolith: it's the simplest setup and the least demanding in terms of the LLM's understanding of external context. I'm not "blaming" the tool; I adapt the code structure to how the tool currently best perceives information - as a single block. This isn't a sign of ignorance, but an engineering compromise between speed and clarity, dictated by the current capabilities of LLMs (at least the free tiers). Launching sub-agents, creating plans - how much of my limit will that eat? I'm not one of those people willing to pay $20 for perfect code. I prefer to do everything quickly and cheaply, even if not with high quality, so I can see whether my idea even works. And on the final iteration with an AI, the function you worked so hard on may turn out to be completely useless and have to be removed; at that point you are deleting not just part of the code, but also the money you spent generating it.
Finally someone said it! 🙌
I've been thinking the same thing while working with AI. The moment I break code into 20 different files, Claude and GPT start hallucinating imports, forgetting functions, and losing context.
But here's the catch: 1000-line scripts are great for prototyping. For production? Nightmare.
My current workflow: prototype in one massive script → get it working → THEN refactor with AI's help. Best of both worlds.
What do you think - refactor at the end or never refactor? 🤔
Finally someone said it! 🙌
I've been doing the same thing lately. When I'm prototyping with Cursor, keeping everything in one monolithic script actually helps the AI understand the full context. The moment I split things into modules, the AI starts hallucinating imports and forgetting function signatures.
But yeah, once the prototype is stable, refactoring into modules becomes this satisfying "paying back technical debt" session.
Question: How do you decide when it's time to break the monolith?
Thank you! You can break the monolith when you're generally confident the project has achieved most of its goals. I always create a short roadmap, and once most of the features are completed, you can break the project up if you're confident your API won't undergo any major changes.
This is a really thoughtful post — and I appreciate that you clearly framed it as a context-specific approach, not a manifesto against modularity. That nuance matters.
I especially agree with this line:
“AI is not a project manager. It is an executor.”
That’s such an important distinction in the LLM era.
For solo developers who iterate fast and heavily rely on AI for refactoring or generation, a monolithic script does reduce cognitive overhead. One file = one context window. No dependency graph gymnastics. No “wait, which file defines this?” moments. From an AI-collaboration standpoint, that’s a very real productivity boost.
That said, I see the monolith not as an anti-pattern — but as a phase.
In rapid prototyping:
Speed > architecture purity
Context completeness > separation of concerns
Iteration > long-term scalability
But once a project stabilizes, modularity becomes less about structure aesthetics and more about:
Testability
Replaceability of components
Onboarding clarity
Long-term maintainability
I also liked your point about “illusion of structure.” I’ve seen projects where splitting into 10 folders created more mental overhead than clarity. Structure without boundaries is just fragmentation.
One thing I’d add:
Instead of thinking monolith vs modular, maybe we think in terms of cognitive load management — for both humans and AI. The real question becomes:
At this stage of the project, what structure minimizes friction?
For solo + AI-driven experimentation, a well-organized monolithic script with logical sections (regions, clear grouping, strong naming) can absolutely be the most pragmatic choice.
Thanks!