Day 8: Modular Design for AI Development - When Architecture Enables Velocity

Clay Roach

The Plan: Add some UI improvements and testing fixes.

The Reality: "Holy shit, we just cracked the code for AI-assisted development at scale!"

Welcome to Day 8 of building an AI-native observability platform in 30 days. The day started with routine UI enhancements but evolved into a strategic breakthrough in modular architecture that will accelerate the remaining 22 days of development.

The Insight That Changed Everything

Working with Claude Code and GitHub Copilot daily, I kept hitting the same wall: context overflow. AI tools work best with focused, isolated problems, but real applications are interconnected webs of dependencies.

The breakthrough came when I realized: we're not building for humans anymore. We're building for AI-assisted development, which requires a fundamentally different architectural approach.

Traditional vs AI-Optimized Architecture

Traditional Development optimizes for:

  • Human comprehension of large codebases
  • Shared state and cross-cutting concerns
  • DRY principles that create dependencies
  • Monolithic understanding

AI-Optimized Development requires:

  • Minimal context per component
  • Clear, isolated boundaries
  • Interface-first contracts
  • Single responsibility focus

The Modular Pattern That Works

Here's the pattern we established today:

import { Context, Effect, Layer, Schema } from "effect"

// Clear, well-documented contract - AI only needs this
// (Effect-TS service tag: the second type parameter is the service shape)
export class TraceAnalyzer extends Context.Tag("TraceAnalyzer")<
  TraceAnalyzer,
  {
    // Single responsibility: analyze traces for anomalies
    readonly analyzeTraces: (
      traces: ReadonlyArray<TraceRecord>
    ) => Effect.Effect<AnalysisResult, AnalysisError, never>

    // Clear inputs/outputs with validation
    readonly detectAnomalies: (
      input: DetectionRequest
    ) => Effect.Effect<AnomalyReport, DetectionError, never>
  }
>() {}

// Implementation can be generated independently
export const TraceAnalyzerLive = Layer.succeed(
  TraceAnalyzer,
  TraceAnalyzer.of({
    analyzeTraces: (traces) =>
      Effect.gen(function* () {
        // AI focuses only on this package's responsibility
        yield* Schema.decodeUnknown(TraceRecordArray)(traces)
        // ... analysis logic isolated to this context
      }),
    detectAnomalies: (input) =>
      Effect.gen(function* () {
        // ... detection logic, equally isolated
      })
  })
)

When I need to generate or modify the TraceAnalyzer, AI only needs to understand:

  1. The specific interface contract
  2. Effect-TS patterns (consistent across all packages)
  3. The single responsibility of trace analysis

No database schemas. No UI components. No authentication logic.

This reduces context from ~10,000 lines to ~200 lines. That's a 50x reduction in cognitive load.
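To see the isolation in practice, here's a minimal consumer sketch, assuming the class-tag pattern above (traces is a placeholder input, not real data):

// Resolve the service from context, then call it - no knowledge of internals
const program = Effect.gen(function* () {
  const analyzer = yield* TraceAnalyzer
  return yield* analyzer.analyzeTraces(traces) // traces: placeholder
})

// Wire in the live implementation only at the edge of the app
Effect.runPromise(program.pipe(Effect.provide(TraceAnalyzerLive)))

The consumer never imports the implementation, so swapping TraceAnalyzerLive for a test layer changes nothing upstream.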

UI Enhancement: Making the Invisible Visible

While establishing modular patterns, we also shipped a practical improvement: encoding type visualization for our trace data.

The Problem

Our platform ingests OpenTelemetry data in both JSON and Protobuf formats. They're functionally equivalent, but distinguishing them matters for:

  • Performance debugging: Protobuf is typically 3-5x faster to parse
  • Data flow analysis: Understanding which services use which formats
  • Compliance requirements: Some environments mandate specific encodings
  • Optimization opportunities: Identifying conversion bottlenecks
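For context, recording that field at ingest can be as simple as branching on the OTLP/HTTP Content-Type header. A minimal sketch (detectEncoding is a hypothetical helper, not our actual ingest code):

// OTLP/HTTP clients send application/json or application/x-protobuf
function detectEncoding(contentType: string | undefined): 'json' | 'protobuf' {
  return contentType?.includes('application/json') ? 'json' : 'protobuf'
}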

The Solution: Color-Coded Intelligence

We added visual indicators that make encoding types immediately obvious:

{
  title: 'Encoding',
  dataIndex: 'encoding_type', 
  key: 'encoding_type',
  width: 100,
  render: (encoding: string) => {
    const isJson = encoding === 'json';
    return (
      <Tag color={isJson ? 'orange' : 'blue'}>
        {isJson ? 'JSON' : 'Protobuf'}
      </Tag>
    );
  },
  filters: [
    { text: 'JSON', value: 'json' },
    { text: 'Protobuf', value: 'protobuf' },
  ],
  onFilter: (value, record) => record.encoding_type === value
}

The color psychology is intentional:

  • Orange (JSON): Warm, approachable, development-friendly
  • Blue (Protobuf): Cool, efficient, production-optimized

(Screenshot: the encoding type column rendered in the trace table UI)

Now when investigating performance issues, the encoding type is immediately visible alongside duration and error status. Visual debugging FTW.

Testing That Actually Tests Reality

One lesson from yesterday's protobuf debugging: test with real infrastructure, not mocks.

Our integration tests now use TestContainers to spin up actual ClickHouse instances:

describe('JSON OTLP Integration', () => {
  it('should handle JSON OTLP data and track encoding type correctly', async () => {
    // Real ClickHouse container, not mocks
    const storage = new SimpleStorage(testConfig);

    // Test with actual JSON OTLP structure
    const jsonTraces: DatabaseTraceRecord[] = [{
      trace_id: 'json-test-trace-1',
      encoding_type: 'json',
      start_time: new Date().toISOString()
        .replace('T', ' ')
        .replace(/\.\d{3}Z$/, '.000000000'), // ClickHouse DateTime64 precision
      // ... rest of actual trace data
    }];

    await storage.writeTracesToSimplifiedSchema(jsonTraces);

    // Verify encoding type persisted correctly
    // (ClickHouse client binds params as {name:Type}, not '?')
    const result = await storage.client.query({
      query: 'SELECT encoding_type FROM otel.traces WHERE trace_id = {traceId:String}',
      query_params: { traceId: 'json-test-trace-1' },
      format: 'JSONEachRow'
    });

    const rows = await result.json();
    expect(rows[0].encoding_type).toBe('json');
  });
});

Result: All 6/6 integration tests passing with both JSON and Protobuf validation.
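For reference, the container setup behind testConfig looks roughly like this - a sketch assuming the Node testcontainers API and Vitest-style hooks; the real config shape SimpleStorage expects may differ:

import { GenericContainer, StartedTestContainer } from 'testcontainers';

let container: StartedTestContainer;
let testConfig: { host: string; port: number };

beforeAll(async () => {
  // Start a disposable ClickHouse server for the whole suite
  container = await new GenericContainer('clickhouse/clickhouse-server:latest')
    .withExposedPorts(8123) // HTTP interface
    .start();

  testConfig = {
    host: container.getHost(),
    port: container.getMappedPort(8123),
  };
}, 60_000);

afterAll(async () => {
  await container.stop();
});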

The DateTime Precision Battle

ClickHouse's DateTime64 type expects nanosecond precision, but JavaScript gives us milliseconds. This tiny detail caused hours of "Cannot parse input" errors.

The fix was surgically precise:

// Transform: 2025-08-21T15:30:45.123Z
// Into:      2025-08-21 15:30:45.000000000
const clickHouseDateTime = isoString
  .replace('T', ' ')
  .replace(/\.\d{3}Z$/, '.000000000');
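One refinement worth considering (my own assumption, not what shipped today): pad the millisecond component instead of zeroing it, so sub-second ordering survives the round trip:

// Variant that keeps the milliseconds: 2025-08-21T15:30:45.123Z
// becomes 2025-08-21 15:30:45.123000000
const toClickHouseDateTime64 = (iso: string): string =>
  iso.replace('T', ' ').replace(/\.(\d{3})Z$/, '.$1000000');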

Lesson: Observability platforms live in the details. Get the fundamentals rock solid.

Documentation-Driven Development Pays Off

Today reinforced why we write specs before code. Our documentation synchronization process caught several gaps:

What We Updated

  • Storage Package: Docs now reflect current single-path architecture (not outdated dual-ingestion)
  • UI Package: Enhanced with encoding type feature documentation
  • Operational Procedures: Complete build/run/test/deploy workflows
  • Implementation Status: Central tracking of what's built vs. what's planned

The Start-Day Agent Enhancement

We enhanced our AI workflow agent to automatically review package documentation at session start:

## Session Familiarization Process
1. Read Implementation Status: Review current package states
2. Review Package Interfaces: Scan docs for API contracts and dependencies  
3. Check Operational State: Review procedures for current build/test status

This prevents duplicate work and maintains continuity across development sessions. AI managing AI development workflow.

Week 1: Foundation Complete ✅

Today marks Week 1 completion of our 30-day challenge:

Implemented & Battle-Tested

  • Infrastructure: Docker, ClickHouse, OTel Collector, MinIO
  • Storage Package: Complete OTLP ingestion with comprehensive testing
  • Server Package: Real-time APIs with protobuf/JSON support
  • UI Package: Electron + React with encoding visualization
  • Development Workflow: AI-assisted agents and documentation sync

📋 Remaining (Optimized for AI Development)

  • Week 2: LLM Manager + AI Analyzer (enable AI features)
  • Week 3: UI Generator + Config Manager (advanced AI features)
  • Week 4: Deployment + Production readiness

The modular patterns we established today will enable rapid AI-assisted development of these remaining packages.

The AI Development Velocity Multiplier

Here's what makes this architectural approach so powerful for AI development:

Before (Traditional)

  • AI needs 5,000+ lines of context to understand how to modify a component
  • Changes risk breaking unrelated functionality
  • Testing requires understanding entire system interactions
  • Development is sequential due to tight coupling

After (Modular + AI-Optimized)

  • AI needs 200-300 lines to understand a component's interface and patterns
  • Changes are isolated to single components
  • Testing focuses on interface contracts, not system integration
  • Development can be parallelized across multiple AI sessions
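Concretely, "testing the contract" can mean swapping the live layer for a stub that satisfies the same interface (a sketch; the fixture values are hypothetical):

// Stub layer honoring the same contract - consumers can't tell the difference
const TraceAnalyzerTest = Layer.succeed(
  TraceAnalyzer,
  TraceAnalyzer.of({
    analyzeTraces: () => Effect.succeed(fixtureAnalysisResult),  // hypothetical fixture
    detectAnomalies: () => Effect.succeed(fixtureAnomalyReport), // hypothetical fixture
  })
)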

The Multiplication Effect

With 5 remaining packages to implement:

  • Traditional approach: 5 packages × 5 days each = 25 days
  • AI-optimized approach: 5 packages × 2 days each = 10 days (with parallel development)

We just bought ourselves 15 extra days for polish, optimization, and advanced features.

Tomorrow: LLM Manager Implementation

Day 9 will implement the LLM Manager package - the foundation enabling all other AI features. Using today's modular patterns, we'll:

  1. Define clear interfaces for GPT, Claude, and local Llama integration
  2. Implement context optimization for different model capabilities
  3. Build comprehensive testing with model mocking
  4. Enable the AI Analyzer and UI Generator packages
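Sketching ahead with today's pattern (purely speculative; names and shapes are placeholders until Day 9):

// Speculative contract for the LLM Manager, following the same tag pattern
export class LLMManager extends Context.Tag("LLMManager")<
  LLMManager,
  {
    // One entry point per responsibility; model routing stays internal
    readonly complete: (
      request: CompletionRequest
    ) => Effect.Effect<CompletionResponse, LLMError, never>
  }
>() {}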

Key Takeaways for AI-Assisted Development

  1. Interface-First Architecture: Define contracts before implementation
  2. Single Responsibility Components: One focused purpose per package
  3. Consistent Patterns: Same architectural approach across all components
  4. Real Infrastructure Testing: TestContainers > mocks for observability platforms
  5. Documentation Synchronization: Keep specs aligned with implementation

The biggest insight: We're not just building an observability platform. We're proving that AI-assisted development can achieve enterprise velocity at startup speed.


This is Day 8 of the "30-Day AI-Native Observability Platform" series. Follow along as we prove AI-assisted development can compress 12-month enterprise timelines into 30 focused days.

Coming up: Day 9 - LLM Manager with Multi-Model Orchestration


Stack: TypeScript, Effect-TS, ClickHouse, OpenTelemetry, React, Electron

Repo: otel-ai

AI Tools: Claude Code, GitHub Copilot, custom workflow agents

Top comments (0)