EmberNoGlow
Modularity - An Overrated Anti-Pattern? The Power of the Monolithic Script in the Age of AI.

Disclaimer: This post isn't a manifesto against modularity, but rather a description of a temporary approach for solo developers actively using LLMs during rapid iteration and prototyping. For production, large teams, and long-term projects, modularity remains the gold standard.


Hello everyone 👋

I'd like to share my experience (based on my project), or rather some advice on why refactoring is a potential bottleneck for you (and for the AI), and how to handle it if you're faced with breaking a monolithic 1,000-line script into modules.

The Problem

We've all seen this path: a tiny script suddenly balloons to 500 lines. Then comes the moment when, following the advice of seasoned colleagues, we start the "proper refactoring": creating utils/, parsers/, and proliferating __init__.py files. But in an era where our main co-author is an LLM, this beauty becomes a hindrance.

But why is modularity your source of friction?

If your code works in a single file, that's good. But your project grows. The line count increases. You start getting confused. You decide to refactor. You do it manually because you understand that an AI's advice will turn into another "find -> copy -> paste" quest. But suddenly, you realize something is wrong.

  • You have too many files to keep track of.
  • You realize it's still messy.
  • It's an illusion of structure.
  • It introduces import dependencies between files.

When Technical Problems Begin

Dependencies

You end up creating a dedicated file just to list global variables. Your project has bloated: half the files do little besides importing dependencies. It looks structured from the outside, but it's internal chaos.

Example: Your script A now depends on the function process_data() from the module utils.helpers.data_cleaner. For this to work, A needs from utils.helpers.data_cleaner import process_data. Now, if you rename data_cleaner to sanitizer, you are forced to change it in a dozen places, whereas in a monolith, you would only need to change def process_data to def sanitize.
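A minimal sketch of that rename friction, using the module path from the example above (all names are illustrative):

```python
# Modular layout (hypothetical): every importer must name the module explicitly.
#
#   # utils/helpers/data_cleaner.py
#   def process_data(rows): ...
#
#   # script_a.py
#   from utils.helpers.data_cleaner import process_data
#
# Rename data_cleaner.py to sanitizer.py and every such import line breaks.
# In a monolith, the rename is a single local edit at the definition site:

def sanitize(rows):
    """Formerly process_data(); renamed in place, no imports to chase."""
    return [r.strip() for r in rows if r.strip()]

print(sanitize(["  a ", "", "b "]))  # ['a', 'b']
```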

Submitting to AI

If you rely heavily on AI assistants for code generation, managing a modular project can become cumbersome. The AI might struggle to keep track of complex dependency graphs, leading to errors in code generation or suggestions that break existing functionality.

Key point: AI is not a project manager. It is an executor. It works best with a complete, self‑contained context — a single file often provides this more reliably than a collection of interconnected modules.

The Monolith as a Solution

If you have one file, it's beneficial for several reasons:

  1. Local dependencies are resolved. You declare everything - classes, functions, variables - in one file.
  2. It's easier to send the AI one file than multiple files.
  3. It's easier for the AI to parse one file than to reason about the availability of local dependencies from specific project directories.

Pitfalls

A single file can be perceived as a problem simply because it gets huge.

  1. You get confused.
  2. You have to use Ctrl+F to jump to a specific class or function.
  3. Others find it hard to understand.

When monolithic design is a bad idea

  1. Teamwork.
  2. Code that will be maintained for a long time.
  3. Projects with clearly defined components (web API, database, data processing).

But these problems have a solution.

Use regions. Seriously, it's simple, it's fast, and it costs only a few extra characters per block. By using #region and #endregion (alternatives exist in other languages too), you can easily break the code into blocks. For example, you can declare all classes in #region Classes and helper functions in #region Helpers. It's convenient. Practical. Clean. And with IDE extensions (for example, Region Maker for VS Code), it becomes an excellent experience.
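A quick Python sketch of how those region markers might look (class and function names are made up; VS Code's folding recognizes these comment markers):

```python
# region Classes
class Parser:
    """Toy parser, present only to illustrate region grouping."""
    def parse(self, text):
        return text.split()
# endregion

# region Helpers
def normalize(tokens):
    """Lowercase every token."""
    return [t.lower() for t in tokens]
# endregion

print(normalize(Parser().parse("Hello World")))  # ['hello', 'world']
```

Each region folds to a single line in the editor, so a long file collapses into a scannable outline.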

Conclusion

The best code is the code that isn't written, but seriously - choose what works best for you. Good luck!

Top comments (18)

david duymelinck • Edited

This feels bad on so many levels.

First: user/developer experience. Using regions won't help you when you don't know which regions there are. Generic regions are more visual noise than anything else.
A long script adds more distraction than a short one; that is why people recommend keeping classes and functions small.

Second: function/class visibility. When all the code is in one file, how do you handle classes and functions that are only used by specific classes? How do you handle functions and classes that could have generic names in a module? You don't want 80-plus-character names in your code, right?

Third: useless AI information. The goal of context engineering is to give the LLM just enough information to generate the wanted output, no more, no less. By feeding the LLM the whole application every time, you are filling the context window with superfluous information.

I looked at your project and you don't even implement your own advice. Why do you want to convince other people it is a good idea?

EmberNoGlow • Edited

Thank you for the critique and the opportunity to clarify my thought. You're absolutely right: in an ideal world, during long-term development or on a large team, modularity is the norm, and I'm not advocating abandoning generally accepted practices. But when you're working alone and using AI, the opposite can be true. My article isn't about writing perfect, scalable code for a SaaS product. It's about a pain point: how to quickly iterate on a project where 90% of the work is prompt engineering and AI-assisted logic testing, and 10% is the code itself.

You're right, if you look at my repository, you'll notice that I use modules. This is a key point I should have made clear. I use modules for the final product I'm preparing for release. But the iterative development process for this demo - that is, the stage where I'm trying to get the LLM to generate complex mathematical logic - took place in a monolith (if you look at the commit history). I used modularity for the cleanliness of the final code, but the monolith for generation speed.

First: You're right, for humans, excess visual noise is bad. But here we're fighting a double whammy: bad UX for humans versus inefficiency for AI. For me, in this scenario, Ctrl+F and regions are faster than constantly switching between 15 files and forcing the LLM to re-learn the dependency graph. This is a temporary, pragmatic solution for rapid iteration.

Second: In a multi-module structure, this is critical. In my monolithic scenario, where I work alone and test the logic, function privacy is less important than overall visibility. If a function is needed only by class A, I simply leave it in the same file, rather than waste time creating a separate module that the LLM will still have to reassemble into the context.

Third: Yes, context overloading is expensive. However, in my case, when I ask the LLM to rewrite or extend existing 5,000-line logic, it's easier to give it the entire working code so it can see all the local variables and constants than to try to manually compose a perfect prompt that includes three files, five imports, and explains the location of a global constant it can't see. Also, it's easier to upload one file than 10, and some platforms have file upload limits, so there's a fine line between context overloading and the free limit. Moreover, the AI will still read ALL the files you send it, so it doesn't care whether everything is in one place or not.

I was talking about the workflow of an LLM-assisted developer, not about best practices in general. The monolith is for AI iteration; modularity is for release.

david duymelinck

The workflow doesn't make much sense. You keep working on the application until it is done, and then you make it modular?
What do you do when you want to change the code? Move everything back into a single file?

Use a project file like CLAUDE.md to give enough information about your project. You can add module information using sub files. And claude code or opencode will only read those when it thinks it needs them.
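A minimal sketch of what such a CLAUDE.md might look like (project details and file names are invented for illustration; Claude Code can pull in sub-files via @path references, loading them only when relevant):

```markdown
# CLAUDE.md

## Project
CSV-cleaning CLI that produces summary reports. (Illustrative example.)

## Layout
- app/main.py      entry point and argument parsing
- app/cleaning.py  row validation and normalization
- app/report.py    aggregation and output formatting

## Conventions
- Python 3.11, type hints everywhere
- No global mutable state; pass configuration explicitly

## Module details
See @docs/cleaning.md for the normalization rules.
```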

From what I read I think that you are not using the right AI tools. I would invest time in finding the right tool instead of trying to go with a solution that is half-baked.

EmberNoGlow

Why combine multiple files into one if I need to change part of my app, not the whole thing? You already have the architecture, and you don't need to radically change the API (unless you suddenly decide to change the GUI framework, which is pointless at this stage), so it doesn't make sense. Combining them into a single file just to improve button placement is pointless, and the same goes for rewriting the math.

You suggest using auxiliary files (like CLAUDE.md) to describe the structure. This is a great solution for helping the AI understand the project at the start or during refactoring. However, when I generate new, complex logic, I need the AI to continuously see all the local variables and functions it accesses. Adding documentation to prompts or separate files often results in the AI ignoring these instructions, as its context gets overloaded with an additional 100 lines of documentation. It becomes even more confused about the architecture, especially when such documentation might contain a single error. I haven't tested it, but I suspect it would fail.

You're saying I'm using the tools incorrectly. That may be true, but I'm basing my approach on the fact that current LLMs work best as a "super-code-completion" tool in a single file, not as a "project manager." My approach is to minimize the likelihood that the AI will "forget" about external dependencies when I ask it to write complex code. In this scenario, simply following its replace/delete instructions in a single file is often faster than initiating complex AI refactorings through a project description system.

david duymelinck

When I mentioned changes, I was thinking about big changes, not little ones. The fact that you think it is pointless seems to indicate you will never consider rewrites once a project is split into modules.

A note on your local variables. Global variables are a bad thing in an application. And your main function is 3,000 lines. I see function definitions inside a while loop. Loops within loops. Have you run static analysis tools on the code to check its quality?

The forgetting has to do with having too many tokens in the context window. The thing is, there are multiple ways to avoid it: for example, run sub-agents in their own context windows. Another is to let the AI come up with a plan, review and store it, then run the plan.

The main point I'm trying to make is that blaming the tool, combined with a lack of knowledge about it, leads you to this hair-raising solution of using a single file.
Learn more before you start using it for complex things. AI is like an airplane: you don't just sit in the pilot's seat and start flying. First you go in a simulator, then a small airplane, and then bigger and bigger ones.

EmberNoGlow

I never claimed to stay in a monolith forever. My approach is iterative development with AI, not an architectural standard. I use the monolith as a draft. After the AI has generated working logic (which can take dozens of iterations), I refactor to remove global variables, nested loops, and so on. You're right, if I were going to work on the code manually for six months, I'd start with modules. But here we're talking about generating 90% of the code in one month.

You're right, static analysis would reveal many problems in this "draft." But I intentionally skip this step during the AI generation stage. I don't run linters on code that the AI wrote three minutes ago and that hasn't yet passed the final check. It's like writing code on paper: you won't check the spelling until you've finished the draft. I use the monolith as a testing ground for the AI, not as a production-ready artifact. If you pay attention, the last commit was made three days ago, and the refactoring commit is five days old. Moreover, its name says "WIP," which means I haven't finished it yet.

Your airplane analogy is very accurate. I agree that learning is important. However, I believe my "simulator" is precisely that single-file monolith. It's the simplest and least demanding in terms of the LLM's understanding of external context. I'm not "blaming" the tool; I adapt the code structure to how the tool currently best perceives information: as a single block. This isn't a sign of ignorance, but an engineering compromise between speed and clarity, dictated by the current capabilities of LLMs (at least the free tiers). Launching sub-agents, creating a plan: how much will this eat into my limit? I'm not one of those people willing to pay $20 for perfect code. I prefer to do everything quickly and cheaply, though unfortunately not with high quality, so I can see if my idea even works. On the last iteration with an AI, you might suddenly find that the function you worked so hard on is completely useless and has to be removed; in that case, you are deleting not only part of the code, but also your money.

david duymelinck • Edited

You are contradicting yourself. You stated before the project is in the final stage. And in the latest comment it is WIP?

Wouldn't it be better if you didn't need to transform global variables into local ones, didn't need to fix cyclomatic complexity, didn't need to break the code up into modules?
To me that is not a draft; those are ink smudges on a wet beer coaster.
Would you accept the current code from someone you are mentoring?
If you allow an LLM to write bad code, it will continue to write bad code, because that is how you taught it to behave.

The airplane analogy is not about the project, but about the way you work with LLM's.
I'm totally on board with trying the free route. In fact, the first thing I did when diving into AI was test which LLMs I could run on my laptop. And I discovered you can run quite accurate models on a laptop, if you are willing to sacrifice some speed.
Kimi has a free tier that allows you to run a tool like opencode.
If you are willing to code subagents there is hugging face transformers.
You don't need to use the models that are relying on subscriptions.
I paid for a month of Claude use to see how it differs from the free models. It is a subscription, so you can cancel it after you have paid. And I'm careful to stay within the limits of the subscription.

AI is like any other tool, if you want it easy, you pay. If you want it cheap you have to put in some elbow grease.
A single script might help with a single LLM, but with the options that are available it is not the best way to use AI as a code generator. It shouldn't be in the top 10 ways to generate code.

EmberNoGlow

"Final" for an LLM is when the logic works. "WIP" for a human is when I bring this working draft up to the standards you describe. I use it to quickly check whether the idea is even worth my time. If the idea fails, I throw out 1,000 lines, not 10 modules created using a complex context. And what if you look at it from the end user's perspective? This statement may seem bad (even possibly very bad), but the end user doesn't care whether everything is in one file or 10 modules, whether there are super-large nested loops or not; they're using the finished product. WIP is less about functionality than about the quality of the code itself. I work on the project alone in my free time, so I certainly don't have enough time to simply make the code beautiful in such a short period.

My approach to the LLM here is as if it were a junior intern working under my strict but quick review. I let it "mess up" the draft because I need proof of concept, not perfect code. I don't teach the LLM to write poorly; I use it as the fastest way to create a working prototype, which I then clean up, albeit slowly.

I appreciate your advice on free and local models, but while you say some models can even run on a laptop, you don't mention how powerful they are. What if I have old hardware? These are excellent options for those willing to invest the time to configure Transformers. My approach is based on minimal configuration and maximum inference speed using commercial (but limited) services. For me, as a developer who wants to quickly test a hypothesis, the overhead of setting up local infrastructure or complex agent scheduling outweighs the benefit of an ideal context. I choose a cheap and fast, albeit messy, MVP.

leob • Edited

Yeah, makes zero sense to me what the author advocates - I've been breaking code into multiple files (with self-describing names) since I took my first baby steps in programming - so much easier to navigate your code, it's a no-brainer ...

EmberNoGlow

I agree with your position: in traditional development and teamwork, modularity is the right solution.

My point, however, concerns the development workflow with an LLM. When we use AI as the primary engine for generating complex, interconnected logic, it's more difficult for it to effectively process a dependency graph scattered across 10 files than a single, coherent piece of code where all local constants and functions are immediately visible.

This isn't a rejection of good design, but a pragmatic solution for speeding up AI iteration. In the release (as I mentioned), I'll return to modularity, but for the generation phase, a monolith proved more effective.

leob • Edited

"it's more difficult for it to effectively process a dependency graph scattered across 10 files" - but is that really the case? I believe that modern tools like Copilot/Cursor/Claude are pretty good at working with projects containing dozens of files - but, if you say that for your particular use case a 'monolith' worked better, then I believe you!

EmberNoGlow

Thank you! You're right, modern tools have become much better at working with multi-file projects.

However, the problem I encountered was precisely that the LLM forgot about side effects between files. For example, Copilot constantly forgot to call a function that updated variables in another script before calling the function in the current file. This led to errors that I had to fix manually. When I sent the patch back, it acknowledged the error, but in the next iteration, to my surprise, it fell into the same trap. I don't know if the problem was my laziness in not explaining the error in more detail, or if I somehow mis-phrased the prompt.
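A minimal, hypothetical reproduction of that failure mode (the two "files" are shown as commented sections in one script; all names are invented). The assistant would call read_threshold() without first calling refresh_state(), which lives in the other file:

```python
# state.py (hypothetical): shared mutable state plus its updater.
_state = {}

def refresh_state():
    """Must run before read_threshold(), or the value is missing."""
    _state["threshold"] = 42

def read_threshold():
    return _state.get("threshold")  # None if refresh_state() was skipped

# main.py (hypothetical): the call the assistant repeatedly dropped.
def compute():
    refresh_state()            # forgetting this line makes the next one fail
    return read_threshold() * 2

print(compute())  # 84
```

With both pieces in one file, the ordering dependency sits directly in the model's context instead of in a file it has to remember exists.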

leob

Makes sense - you just found a practical solution to a real and practical problem. "In theory" it should have worked with multiple files; in reality it didn't. I'm a pragmatist myself: when confronted with an issue and I see a quick workaround, I'm inclined to go with it, unless it would violate an important piece of the architecture. Theory is for the academics!

Harsh

Finally someone said it! 🙌

I've been thinking the same thing while working with AI. The moment I break code into 20 different files, Claude and GPT start hallucinating imports, forgetting functions, and losing context.

But here's the catch: 1000-line scripts are great for prototyping. For production? Nightmare.

My current workflow: prototype in one massive script → get it working → THEN refactor with AI's help. Best of both worlds.

What do you think - refactor at the end or never refactor? 🤔

Harsh

Finally someone said it! 🙌

I've been doing the same thing lately. When I'm prototyping with Cursor, keeping everything in one monolithic script actually helps the AI understand the full context. The moment I split things into modules, the AI starts hallucinating imports and forgetting function signatures.

But yeah, once the prototype is stable, refactoring into modules becomes this satisfying "paying back technical debt" session.

Question: How do you decide when it's time to break the monolith?

EmberNoGlow

Thank you! You can break up the monolith when you're generally confident the project has achieved most of its goals. I always create a short roadmap, and once most of the features are complete, you can split the project, provided you're confident your API won't undergo any major changes.

Hala Kabir

This is a really thoughtful post — and I appreciate that you clearly framed it as a context-specific approach, not a manifesto against modularity. That nuance matters.

I especially agree with this line:

“AI is not a project manager. It is an executor.”

That’s such an important distinction in the LLM era.

For solo developers who iterate fast and heavily rely on AI for refactoring or generation, a monolithic script does reduce cognitive overhead. One file = one context window. No dependency graph gymnastics. No “wait, which file defines this?” moments. From an AI-collaboration standpoint, that’s a very real productivity boost.

That said, I see the monolith not as an anti-pattern — but as a phase.

In rapid prototyping:

  • Speed > architecture purity
  • Context completeness > separation of concerns
  • Iteration > long-term scalability

But once a project stabilizes, modularity becomes less about structure aesthetics and more about:

  • Testability
  • Replaceability of components
  • Onboarding clarity
  • Long-term maintainability

I also liked your point about “illusion of structure.” I’ve seen projects where splitting into 10 folders created more mental overhead than clarity. Structure without boundaries is just fragmentation.

One thing I’d add:
Instead of thinking monolith vs modular, maybe we think in terms of cognitive load management — for both humans and AI. The real question becomes:

At this stage of the project, what structure minimizes friction?

For solo + AI-driven experimentation, a well-organized monolithic script with logical sections (regions, clear grouping, strong naming) can absolutely be the most pragmatic choice.

EmberNoGlow

Thanks!