Darkø Tasevski

A Case Against Abstraction

Recently, I was knee-deep in a very complex project. The problem wasn't just the size of the codebase; it was the endless forest of indirection. Factory functions, Providers, Managers, Registries, Mixins: everywhere I turned, there was another layer. Following the flow of data felt less like tracing logic and more like spelunking through a cave system with no map.

At one point, I leaned on an LLM to help me debug. And it failed. Not because the model was weak, but because the architecture was so fragmented and implicit that even an AI couldn't piece it together. Abstraction, the thing we've always leaned on to tame complexity, had itself become the source of complexity.

In this specific case, abstraction didn't clarify; it obscured. For both me and the AI.


Yesterday's Cure, Today's Disease

"Sufficiently advanced abstractions are indistinguishable from obfuscation." — @raganwald

Historically, abstraction was our main defense against cognitive overload. Humans can only hold so much detail in working memory, so we built neat layers that hid what we didn't need to see. Encapsulation, state management, factories, and the rest weren't just patterns; they were survival mechanisms.

But in the AI era, the tradeoffs have shifted.

  • We hit cognitive limits quickly, while an LLM can plow through raw detail without ever getting tired.
  • What an LLM can't handle is fragmentation: implicit behavior scattered across dozens of files, context broken into 15 layers of indirection.

The result? Abstraction no longer reduces cognitive load. It multiplies it. It takes effort that should go into problem-solving and reroutes it into archaeology.


Case Study: When the Russian Dolls Collapse

Note: The project details and code samples in this case study have been obfuscated and anonymized. The patterns, complexity, and architectural issues described here are real, but the identifiers and structures are adjusted so the example remains relatable without exposing proprietary code.

To see how this plays out, here's a snapshot from a real-world TypeScript/React project I worked on. It's not a hypothetical; it's a cautionary tale of what happens when abstractions pile up unchecked.

  • 40,000+ files in total, ~5,800 lines in core files
  • 233+ files matching abstraction patterns (Providers, Managers, Factories, Handlers)
  • 20+ Redux reducers with complex interdependencies
  • 15+ mixin compositions in the components layer
  • State object with 80+ properties, spanning UI, business logic, networking, and persistence
  • 800+ unit test files with 200-400 lines each
  • 200+ E2E test files with 300-500 lines each

Abstraction Layer Explosion

A simple user action could bounce through a chain like:

```
User Action
  → Component
  → Action
  → Reducer
  → Manager
  → Provider
  → Backend
```

That's not architecture; that's bureaucracy. Debugging usually required going through at least 6 to 8 files just to locate the root cause.
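The hop chain can be sketched in TypeScript. Every name here is invented to mirror the pattern, not taken from the real project; the point is that each layer only forwards the call without adding behavior:

```typescript
// Hypothetical reconstruction of the hop chain for one user action.
// Each layer records itself, then forwards the call unchanged.
const trace: string[] = [];

const backend = {
  save(item: string) { trace.push("Backend"); return item; },
};

class EntityProvider {
  save(item: string) { trace.push("Provider"); return backend.save(item); }
}

class EntityManager {
  constructor(private provider = new EntityProvider()) {}
  save(item: string) { trace.push("Manager"); return this.provider.save(item); }
}

const manager = new EntityManager();

function entityReducer(action: { type: string; payload: string }) {
  trace.push("Reducer");
  return manager.save(action.payload);
}

function dispatch(action: { type: string; payload: string }) {
  trace.push("Action");
  return entityReducer(action);
}

// The component handler a user click lands in:
function onSaveClick(item: string) {
  trace.push("Component");
  return dispatch({ type: "entity/save", payload: item });
}

onSaveClick("draft-1");
console.log(trace.join(" -> "));
// prints "Component -> Action -> Reducer -> Manager -> Provider -> Backend"
```

Six stack frames, one line of actual work at the bottom.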

Example chain:

```
Some data
  → DataManager
  → DataProvider
  → EntityManager
  → StateManager
```

Each layer existed to "decouple", but the combined effect was Russian-doll indirection where nothing was visible without peeling back four wrappers.


Custom Frameworks on Top of Frameworks

The team even rolled its own inheritance system to work around Immutable.js limitations:

```typescript
import { Map, Record } from 'immutable';

// InheritableImmutableRecord and mergeImmutableRecordDefaults are the
// team's custom layer, built to allow static `defaultValues` inheritance
// on top of Immutable.js Records.
class BusinessEntity extends InheritableImmutableRecord {
  static defaultValues = {
    id: null,
    name: '',
    config: Map(),
    state: Record({}),
    // ... 80+ more properties
  }
}

mergeImmutableRecordDefaults(BusinessEntity)
```

This meant every developer had to understand:

  1. Immutable.js internals.
  2. The custom inheritance layer.
  3. The domain logic built on top.

The implicit behavior was so deep that neither humans nor LLMs could reason about it without constant back-and-forth exploration.
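For illustration, here is a rough sketch of what a helper like mergeImmutableRecordDefaults has to do: walk the static side of the prototype chain and merge every ancestor's `defaultValues` into the subclass. Plain objects stand in for Immutable.js Records, and the class names are invented; the real implementation was more involved.

```typescript
type Defaults = Record<string, unknown>;

interface EntityClass {
  defaultValues?: Defaults;
}

function mergeImmutableRecordDefaults(cls: EntityClass): Defaults {
  const merged: Defaults = {};
  const chain: EntityClass[] = [];
  let current: object | null = cls;
  // Collect the static prototype chain root-first so subclass values win.
  while (current && current !== Function.prototype) {
    chain.unshift(current as EntityClass);
    current = Object.getPrototypeOf(current);
  }
  for (const link of chain) {
    Object.assign(merged, link.defaultValues ?? {});
  }
  cls.defaultValues = merged;
  return merged;
}

// Invented example classes:
class Base {
  static defaultValues: Defaults = { id: null, name: "" };
}
class BusinessEntity extends Base {
  static defaultValues: Defaults = { name: "untitled", config: {} };
}

mergeImmutableRecordDefaults(BusinessEntity);
// BusinessEntity.defaultValues is now { id: null, name: "untitled", config: {} }
```

Even in this stripped-down form, the merge order, the prototype walk, and the in-place mutation of a static field are all invisible at the call site, which is exactly why the real thing was so hard to reason about.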


State Bloat

The main app state was a monolith with 80+ properties across unrelated domains:

```typescript
interface ApplicationState {
  containers: List<Container>
  totalContainers: number
  dataItems: Map<ID, DataItem>

  // UI state
  containerRect: Rect
  scrollbarOffset: number
  isDebugModeEnabled: boolean

  // Business logic
  formFields: Map<string, FormField>
  attachments: Map<string, Attachment>
  validationErrors: Map<string, ValidationError>

  // Connection state
  connectionState: ConnectionState
  apiService: ApiService | null

  // ... 60+ more properties
}
```

Any change meant wading through 500+ lines and dozens of imports. New developers were paralyzed, and AI assistance was worse than useless: it would hallucinate or give superficial answers because it couldn't hold the whole structure in context.


Pattern Proliferation

Each entity type copied the most complex existing pattern:

```
FormFieldManager + FormFieldProvider + FormFieldValueManager + FormFieldValueProvider
DataManager + DataProvider
BookmarkManager + BookmarkProvider
CommentManager + CommentProvider
```

The result was not reusability but 4x duplication with inconsistent interfaces. Even if an LLM parsed one chain, knowledge didn't transfer to others.
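To make the inconsistency concrete, here is a sketch (all names and shapes invented for illustration) of how the "same" operation could look different in every chain:

```typescript
// Three Managers with the same responsibility — persist a thing —
// but a different verb, signature, and return shape in each one.
class DataManager {
  create(item: { id: string }) { return { saved: item.id }; }
}

class BookmarkManager {
  // Same job, different verb, returns an array instead of an object:
  addBookmark(id: string) { return [id]; }
}

class CommentManager {
  // Same job again, now async and keyed differently:
  async submit(body: string) { return { ok: true, body }; }
}
```

A human or an LLM that learns one chain has learned nothing reusable about the next.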


When LLMs Hit the Wall

This over-abstraction wasn't just hard for me. It crippled my ability to collaborate with AI.

  • Context fragmentation: A single feature spanned 20+ files and thousands of lines, more than even the largest context windows can practically handle.
  • Implicit flows: State changes rippled through hidden chains like:

```
dispatch(updateEntity(entity))
  → entityReducer updates state
  → EntityManager.validateEntity()
  → EntityProvider.syncToBackend()
  → DataManager.handleChange()
  → StateManager.notifySubscribers()
  → multiple UI components re-render
```

No LLM could trace that end-to-end without losing coherence.

  • Scattered logic: Validation in Models, error handling in Reducers, sync logic in Providers. No single place contained the truth.

Observed impact:

  • Bug diagnosis took 8-10x longer.
  • LLM explanations were 70-85% less accurate.
  • Refactoring suggestions were blocked by tangled dependencies.
  • Average time to onboard a new developer: 3-4 months (vs. 2-3 weeks in cleaner codebases).

The architecture didn't just slow humans; it actively blinded AI.


Test Suite Complexity and Flakiness

You might think: "Well, maybe the AI could still piece things together from the test suite." Unfortunately, no. The tests were just as over-engineered as the production code and far more fragile.

Thousands of tests existed, but instead of providing confidence, they became a constant source of pain.

Why the Tests Were a Crisis

  • Massive mocking requirements: Testing DataManager meant mocking 6+ dependencies plus an entire 80-property state tree.
  • Timing-dependent abstraction chains: Async flows cascaded through Managers, Providers, Reducers, and event emitters. Slight variations caused flakiness.
  • Implicit synchronization: Tests failed unless state propagation across 5+ abstraction layers happened within arbitrary timeouts.
  • Retry culture: E2E tests routinely required retries and 30-second waits, masking systemic fragility.

Example:

```typescript
// Adding a property to DataManager required updating:
// 1. Manager mock
// 2. Provider mock
// 3. EntityManager mock
// 4. ReducerCallbacks mock
// 5. Component test wrappers
// 6. All fixtures
```

A "simple" new feature meant updating 10-20 test files.

Quantified Pain

  • ~80% of test runtime was spent on setup, teardown, mocking, and retries, not actual logic.
  • Debugging a single flaky test often took 3+ hours.
  • New developers needed 2-3 months before they could write reliable tests.
  • Test maintenance overhead: Nearly 50% of development time was spent debugging and keeping tests working.

When Abstraction Actually Works

Before we dive into solutions, let's acknowledge that abstraction isn't inherently evil. There are cases where it genuinely helps:

Good Abstraction Examples

  1. Domain Boundaries: Separating user management from payment processing
  2. Cross-cutting Concerns: Logging, error handling, authentication
  3. Complex Algorithms: When the implementation details would obscure the business logic
  4. External APIs: Wrapping third-party services with consistent interfaces
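As a sketch of point 4, here is what a wrapper that earns its keep might look like. `legacyGeoSdk` is an invented stand-in for a third-party dependency; the shape of the wrapper is the point.

```typescript
// Awkward vendor API: positional args, stringly-typed result.
const legacyGeoSdk = {
  lookup(q: string, cc: string): string { return `${q},${cc}:52.52,13.40`; },
};

interface Coordinates { lat: number; lon: number }

// The wrapper hides the vendor's quirks, not the domain logic.
function geocode(city: string, country: string): Coordinates {
  const raw = legacyGeoSdk.lookup(city, country);
  const [lat, lon] = raw.split(":")[1].split(",").map(Number);
  return { lat, lon };
}

const berlin = geocode("Berlin", "DE"); // { lat: 52.52, lon: 13.4 }
```

This is the hiding of irrelevant detail the next section describes: callers get a typed result, and only one file knows the vendor's string format.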

The Key Difference

Good abstraction reduces cognitive load by hiding irrelevant details. Bad abstraction increases cognitive load by hiding relevant details.


Why Humans + AI Both Need Directness

Humans think better with clarity. AI works better with explicitness. Abstraction, when it hides more than it reveals, hurts both.

In the pre-AI era, abstraction bought us simplicity. In the AI era, it taxes us three times over:

  1. Humans pay in cognitive load.
  2. Machines pay in broken context.
  3. Tests pay in fragility and flakiness.

The best architecture for both is direct, explicit, and domain-driven, not endlessly abstracted.


Practical Solutions

Audit Abstractions Like Dependencies

If a layer doesn't clearly earn its keep, remove it.

Before:

```typescript
class DataManager {
  constructor(private provider: DataProvider) {}

  create(dataItem: DataItem) {
    return this.provider.createDataItem(dataItem);
  }
}

class DataProvider {
  createDataItem(dataItem: DataItem) {
    return backend.save(dataItem);
  }
}
```

After:

```typescript
class DataService {
  async create(dataItem: DataItem) {
    return backend.save(dataItem);
  }
}
```

Two files collapsed into one. No loss of clarity. Massive gain in readability.


Favor Services Over Factories

Before:

```typescript
const service = ServiceFactory.get('data');
service.create(dataItem);
```

After:

```typescript
dataService.create(dataItem);
```

The factory added nothing but indirection.


Architect for AI Collaboration

Assume your pair programmer is an LLM. Flatten structures, slice state into domains, and make naming explicit.

Before (monolithic state):

```typescript
interface ApplicationState {
  containers: List<Container>;
  dataItems: Map<ID, DataItem>;
  uiState: UIState;
  connection: ConnectionState;
  // ... dozens more
}
```

After (domain slices):

```typescript
interface AppState {
  page: PageState;
  data: DataState;
  ui: UIState;
  sync: SyncState;
}
```

A model can now read and explain one slice without drowning in an 80-property blob.
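The same goes for the code that touches the state. A minimal sketch, with types invented to match the shape above: a reducer over one slice never sees the rest.

```typescript
interface UIState { isDebugModeEnabled: boolean; scrollbarOffset: number }
interface AppState { ui: UIState /* page, data, sync elided */ }

// A reducer scoped to the UI slice never sees the other 70+ properties:
function uiReducer(ui: UIState, action: { type: string }): UIState {
  switch (action.type) {
    case "ui/toggleDebug":
      return { ...ui, isDebugModeEnabled: !ui.isDebugModeEnabled };
    default:
      return ui;
  }
}

const initial: AppState = {
  ui: { isDebugModeEnabled: false, scrollbarOffset: 0 },
};
const next = uiReducer(initial.ui, { type: "ui/toggleDebug" });
```

Everything a reader, human or model, needs to review this change fits in one screen.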


Stay Aware of AI's Limits

As I wrote in my post on mental models and prompt engineering, LLMs are pattern-matchers, not reasoners. They inherit bias, they hallucinate, and they choke on hidden indirection. Don't anthropomorphize them, and don't expect them to reconstruct intent buried under five layers of Providers and Managers.


The 5-Minute Rule

If you can't explain what an abstraction does in 5 minutes to a new developer (or AI), it's too complex. Simplify or remove it.


Abstraction Isn't Dead, But the Defaults Have Changed

Abstraction isn't going away. It was never the enemy. The problem is that too much of it, stacked without discipline, turns into an obstacle rather than a tool. What I'm describing here isn't guidance only for AI-assisted programming; it's a principle we should strive for whether or not we're coding alongside AI agents: clarity beats unnecessary indirection.

But in the AI era, the tradeoffs have become even sharper. Every abstraction has to earn its place.

  • Does it genuinely reduce cognitive load?
  • Does it make the code more straightforward for humans, machines, and the test suite?
  • Can you explain its purpose in 5 minutes or less?

If the answer is no, then the abstraction is adding weight, not lifting it.

This case against abstraction isn't absolute. It's contextual. But the context has shifted. With or without AI, the best code is rarely the most abstract; it's the most direct.
