Rad Code

Why AI Needs Human Oversight for Architecture: A Real Refactoring Story

How a simple authentication refactor taught me that AI assistants are great at code, but need human guidance for architectural decisions

This article is based on my experience refactoring the authentication system in the heyradcode/do-not-stop project.

The Task

I had a shared authentication package (@do-not-stop/shared-auth) that was being used by both my frontend (React web) and mobile (React Native) apps. Interestingly, this package was originally created by my AI assistant during a "vibe coding" session - I was just going with the flow and letting it build the structure. The code had some duplication - both projects were manually wiring up the same hooks and API clients. Simple task: consolidate the duplicated code.

What the AI Suggested First

When I asked my AI assistant to consolidate, it immediately jumped to a factory pattern:

// AI's first suggestion
import { useAccount, useSignMessage } from 'wagmi';
// createAuthApiClient, createUseNonce, createUseVerifySignature and
// createAuthProvider are further factory helpers inside the shared package

export const createEthereumAuth = ({ apiUrl, storageAdapter, onTokenSuccess }) => {
  const apiClient = createAuthApiClient(apiUrl);
  const useNonce = createUseNonce(apiClient);
  const useVerifySignature = createUseVerifySignature(apiClient, onTokenSuccess);

  const AuthProvider = createAuthProvider({
    useAccountHook: useAccount,
    useSignMessageHook: useSignMessage,
    useNonce,
    useVerifySignature,
    storageAdapter,
  });

  return { AuthProvider, useNonce, useVerifySignature };
};

At first glance, this seems reasonable. It removes duplication, right? But it's still passing everything around. The AI assistant kept adding layers:

  • Factory functions that return other factories
  • Parameters that get passed through multiple levels
  • "Backward compatibility" exports "just in case"
  • Bridge files that re-export things (sketched below)
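
To make that layering concrete, here is roughly what it looked like. This is a paraphrased sketch, not the actual files; the API client shape is assumed from the AI's suggestion above:

// hooks/createUseNonce.ts - a factory that returns a hook, so every
// caller has to thread the apiClient through (illustrative only)
import { useEffect, useState } from 'react';

// Assumed shape of the API client for this sketch
type AuthApiClient = { getNonce: (address: string) => Promise<string> };

export const createUseNonce = (apiClient: AuthApiClient) =>
  function useNonce(address?: string) {
    const [nonce, setNonce] = useState<string | null>(null);
    useEffect(() => {
      if (address) apiClient.getNonce(address).then(setNonce);
    }, [address, apiClient]);
    return nonce;
  };

// hooks/index.ts - a "bridge" file that exists only to re-export the factory
export { createUseNonce } from './createUseNonce';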

The Human Intervention

When I looked at the code, I kept asking simpler questions:

"Why do we need createAuthProvider? Can't we just use AuthProvider directly?"

"Why pass apiClient when we can set it globally and reuse it?"

"Why re-export hooks through bridge files when we can import directly?"

Each question stripped away another unnecessary layer.

The Final Solution

Instead of factories and parameters, we used global configuration:

// config.ts - configure once
setApiBaseUrl(API_URL);
setTokenSuccessCallback(callback);
setStorageAdapter(adapter);

// AuthContext.tsx - just use directly
import { useAccount, useSignMessage } from 'wagmi';
import { useNonce, useVerifySignature } from '../hooks';

export const AuthProvider = ({ children }) => {
  const { address } = useAccount();  // No parameters!
  const { signMessage } = useSignMessage();
  // ...
}

// App.tsx - simple import
import { AuthProvider } from '@do-not-stop/shared-auth';
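
For the curious, the global setters don't need much machinery inside the shared package. Here is a minimal sketch of what api.ts can look like: module-level state plus a thin fetch wrapper. The setter names come from the config above; everything else is an assumption, and the actual internals of @do-not-stop/shared-auth may differ:

// api.ts - module-level configuration, set once per app at startup
type StorageAdapter = {
  getItem: (key: string) => Promise<string | null>;
  setItem: (key: string, value: string) => Promise<void>;
};

let apiBaseUrl = '';
let tokenSuccessCallback: ((token: string) => void) | undefined;
let storageAdapter: StorageAdapter | undefined;

export const setApiBaseUrl = (url: string) => { apiBaseUrl = url; };
export const setTokenSuccessCallback = (cb: (token: string) => void) => { tokenSuccessCallback = cb; };
export const setStorageAdapter = (adapter: StorageAdapter) => { storageAdapter = adapter; };

// Accessors the hooks and AuthProvider use instead of taking parameters
export const getStorageAdapter = () => storageAdapter;
export const notifyTokenSuccess = (token: string) => tokenSuccessCallback?.(token);

// One shared client that every hook imports directly
export const apiFetch = async (path: string, init?: RequestInit) => {
  const res = await fetch(`${apiBaseUrl}${path}`, init);
  return res.json();
};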

The difference:

  • ❌ Before: Factory pattern, 6+ parameters, bridge files, re-exports
  • ✅ After: Global setters, direct imports, zero parameters

The Final Architecture

Here's what I ended up with:

packages/shared-auth/
  ├── api.ts              # Singleton API client
  ├── hooks/
  │   ├── useNonce.ts     # Direct hook (uses shared client)
  │   └── useVerifySignature.ts
  └── contexts/
      └── AuthContext.tsx # Direct component (uses hooks directly)

frontend/src/config.ts    # setApiBaseUrl(), setStorageAdapter()
mobile/src/config.ts      # Same, different adapter

App.tsx                   # import { AuthProvider } from 'shared-auth'
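
The two config.ts files are the only place the apps diverge, and they stay tiny. Roughly like this (the adapter wiring below is an assumption; AsyncStorage is a common React Native choice, not necessarily what the project uses, and each app reads its API URL from its own env handling):

// frontend/src/config.ts - web build backs the adapter with localStorage
import { setApiBaseUrl, setStorageAdapter } from '@do-not-stop/shared-auth';

setApiBaseUrl('https://api.example.com'); // placeholder URL
setStorageAdapter({
  getItem: async (key: string) => localStorage.getItem(key),
  setItem: async (key: string, value: string) => { localStorage.setItem(key, value); },
});

// mobile/src/config.ts - React Native backs it with AsyncStorage instead
import AsyncStorage from '@react-native-async-storage/async-storage';
import { setApiBaseUrl, setStorageAdapter } from '@do-not-stop/shared-auth';

setApiBaseUrl('https://api.example.com'); // placeholder URL
setStorageAdapter({
  getItem: (key: string) => AsyncStorage.getItem(key),
  setItem: (key: string, value: string) => AsyncStorage.setItem(key, value),
});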

Zero factories. Zero parameters. Zero bridge files.
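
And to make "zero parameters" concrete, a direct hook in this layout just imports the shared client. Something like this sketch, where the endpoint path, response shape, and apiFetch helper are assumptions carried over from the api.ts sketch above, not the project's actual API:

// hooks/useNonce.ts - imports the shared client directly; nothing is injected
import { useEffect, useState } from 'react';
import { apiFetch } from '../api';

export const useNonce = (address?: string) => {
  const [nonce, setNonce] = useState<string | null>(null);

  useEffect(() => {
    if (!address) return;
    apiFetch(`/auth/nonce?address=${address}`).then((data) => setNonce(data.nonce));
  }, [address]);

  return { nonce };
};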

What I Learned

The breakthrough questions I kept asking were all about simplification:

  1. "Can this be removed?" - I asked about every layer, every file, every export
  2. "Why pass this as a parameter?" - When the AI suggested passing hooks, I asked why not import directly
  3. "Where is this actually used?" - I found many files that were just re-exporting
  4. "What's the minimum I need?" - I stripped it down to just configuration + direct usage

Conclusion

AI is excellent at:

  • Writing code
  • Implementing patterns it's seen before
  • Fixing syntax errors
  • Refactoring within existing patterns

AI struggles with:

  • Questioning whether patterns are needed
  • Simplifying beyond existing patterns
  • Understanding when "less is more"
  • Architectural judgment

The solution? Use AI for implementation, but keep a human in the loop for architectural decisions. When AI suggests adding complexity, ask: "Can I do this more simply?"

The best code is often the code you don't write. AI doesn't always know that.


This article is based on my real refactoring session with an AI coding assistant while working on the heyradcode/do-not-stop project. The authentication system works great now, and the codebase is simpler than when I started.

Top comments (1)

shemith mohanan

This really hits home 👏 — AI assistants are amazing at producing code that works, but not necessarily code that makes sense long-term. That “factory pattern spiral” you described is exactly what happens when LLMs optimize for reuse instead of clarity.

Loved your line about asking “Can this be removed?” — that’s such an underrated skill. Simplification is architecture.

I’ve been building an AI automation tool (BusinessAdBooster.pro) and ran into a similar issue — the AI kept over-engineering pipelines until I manually stripped it back to a single config layer. Sometimes less abstraction = more maintainability. 🙌