
Glen Baker


Refactoring a God Object Detector That Was Itself a God Object

Originally published on Entropic Drift


The Irony

My code analysis tool debtmap has a god object detector. It scans Rust codebases looking for structs with too many methods, too many fields, and too many responsibilities. When it finds one, it generates recommendations for splitting the monolith into focused modules.

The irony? The god object detector was itself a god object.

4,362 lines in god_object_detector.rs. 3,461 lines in god_object_analysis.rs. Mixed concerns everywhere—AST traversal interleaved with scoring algorithms, classification logic coupled to I/O, recommendation generation tangled with threshold checks.

I decided to fix this using Stillwater's "Pure Core, Imperative Shell" architecture. This post documents the nine-phase refactoring journey from specs 181a through 181i.

When Your Tool Turns on You

The best part? I didn't discover this problem through code review or intuition. Debtmap itself flagged its own god object detector as the #1 and #2 most critical technical debt items in the entire codebase:

#1 SCORE: 159.2 [CRITICAL]
├─ LOCATION: ./src/organization/god_object_detector.rs
├─ IMPACT: -100 complexity, -219.3 maintainability improvement
├─ METRICS: 4362 lines, 163 functions, avg complexity: 3.1
├─ GOD OBJECT: 55 methods, 5 fields, 8 responsibilities (score: 0.9)
   Suggested: 2 recommended module splits
├─ ACTION: Split by analysis phase: 1) Data collection 2) Pattern detection
   3) Scoring/metrics 4) Reporting. Keep related analyses together.
└─ WHY THIS MATTERS: File has 8 distinct responsibilities across 55 methods.
   High coupling makes changes risky and testing difficult. Splitting by
   responsibility will improve maintainability and reduce change impact.

#2 SCORE: 66.3 [CRITICAL]
├─ LOCATION: ./src/organization/god_object_analysis.rs
├─ IMPACT: -61 complexity, -290.9 maintainability improvement
├─ METRICS: 3304 lines, 159 functions, avg complexity: 1.9
├─ GOD OBJECT: 142 methods, 33 fields, 12 responsibilities (score: 1.0)
   Suggested: 4 recommended module splits
├─ ACTION: URGENT: 3304 lines, 159 functions! Split by data flow:
   1) Input/parsing 2) Core logic/transformation 3) Output/formatting.
   Create 6 focused modules with <30 functions each.
└─ WHY THIS MATTERS: File has 12 distinct responsibilities across 142 methods.

That's dogfooding validation that the analysis works. So I took debtmap's advice and refactored the god object detector.

Automating the Refactoring with Prodigy

Here's where it gets interesting: I didn't manually implement all nine refactoring phases. I used Prodigy, my AI workflow orchestration tool, to automate the entire process.

One command processed all nine specs:

prodigy run workflows/implement.yml --map "specs/181*"

Prodigy discovered all 9 spec files (181a through 181i) and for each one:

  1. Implemented the spec using Claude
  2. Validated completion with automatic checking
  3. Ran tests (just test) to ensure nothing broke
  4. Ran linters (just fmt-check && just lint) to maintain quality
  5. Created commits with clear messages
  6. Recovered from incomplete implementations automatically

The results:

  • 3 hours 59 minutes total execution time
  • 9 specs processed sequentially
  • 19 commits created across the phases
  • All tests passed on first attempt for every spec
  • Automatic recovery when validation fell short (specs 181b, 181c, 181e, 181i)

The recovery mechanism in action (spec 181i):

⚠️ Validation incomplete: 45.0% complete (threshold: 100.0%)
🔄 Attempting to complete implementation (attempt 1/5)
⚠️ Validation still incomplete: 92.0% complete
🔄 Attempting to complete implementation (attempt 2/5)
⚠️ Validation still incomplete: 99.7% complete
🔄 Attempting to complete implementation (attempt 3/5)
✅ Validation passed: 100.0% complete

Each spec went through validation before moving to the next phase. If validation showed gaps, Prodigy automatically attempted to complete the implementation up to 5 times. This meant I could walk away while the refactoring ran, confident that each phase would be fully completed and tested before the next began.

The merge workflow:
After all 9 specs completed, Prodigy:

  1. Ran CI checks to ensure everything still worked
  2. Merged the worktree branch to master
  3. Cleaned up the temporary worktree
  4. Created a clean commit history

Total automation from spec to production-ready code.

The Numbers

Before (v0.7.0):

  • 3 monolithic files
  • 7,823 lines of code
  • 110 functions (many with mixed concerns)
  • I/O, business logic, and orchestration all tangled together

After (v0.9.0):

  • 14 focused modules
  • 4,149 lines of code (~47% reduction)
  • 45 pure functions with clear separation
  • 97 tests passing in 0.03 seconds

The code is shorter, clearer, and more testable. Here's how I got there.

Phase 1: Foundation Analysis (Spec 181a)

Before touching any code, I needed to understand what I had. I analyzed all 8,762 lines across 6 files and classified every function:

  • Pure functions: 53 (48%) - No side effects, deterministic
  • I/O operations: 14 (13%) - AST traversal, file access
  • Orchestration: 29 (26%) - Composing other functions
  • Mixed: 11 (10%) - The problem children that needed splitting

This analysis identified the target architecture:

god_object/
├── Pure Core (business logic)
│   ├── types.rs (~200 lines)
│   ├── thresholds.rs (~100 lines)
│   ├── predicates.rs (~150 lines)
│   ├── scoring.rs (~200 lines)
│   ├── classifier.rs (~200 lines)
│   └── recommender.rs (~250 lines)
│
├── Orchestration
│   └── detector.rs (~250 lines)
│
└── I/O Shell
    └── ast_visitor.rs (existing)

I also established baseline benchmarks for critical paths to ensure the refactoring didn't degrade performance.
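
To make "didn't degrade performance" checkable, a baseline can be captured with a few lines of criterion. A minimal sketch - the benchmark name, module paths, and inputs here are illustrative, and the pure scoring function it exercises is introduced in Phase 3:

// benches/god_object.rs - hypothetical baseline benchmark (criterion crate).
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use debtmap::organization::god_object::scoring::calculate_god_object_score;
use debtmap::organization::god_object::thresholds::GodObjectThresholds;

fn bench_scoring(c: &mut Criterion) {
    let thresholds = GodObjectThresholds::default();
    c.bench_function("calculate_god_object_score", |b| {
        b.iter(|| {
            calculate_god_object_score(
                black_box(30),
                black_box(20),
                black_box(8),
                black_box(1500),
                &thresholds,
            )
        })
    });
}

criterion_group!(benches, bench_scoring);
criterion_main!(benches);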

Phase 2: Extract Types and Thresholds (Spec 181b)

The first actual code change: extract all data structures and configuration into dedicated modules.

Before: Types scattered throughout god_object_analysis.rs (648 lines)

After: Organized into focused type modules:

// core_types.rs - 258 lines
pub struct GodObjectAnalysis {
    pub is_god_object: bool,
    pub method_count: usize,
    pub field_count: usize,
    pub responsibility_count: usize,
    pub god_object_score: f64,
    pub confidence: GodObjectConfidence,
    // ...
}

// classification_types.rs - 121 lines
pub enum GodObjectType {
    MethodHeavy,
    FieldHeavy,
    ResponsibilityHeavy,
    LargeSize,
}

// split_types.rs - 173 lines
pub struct ModuleSplit {
    pub suggested_name: String,
    pub methods_to_move: Vec<String>,
    pub structs_to_move: Vec<String>,
    pub responsibility: String,
    // ...
}

// thresholds.rs - 59 lines
pub struct GodObjectThresholds {
    pub max_methods: usize,
    pub max_fields: usize,
    pub max_traits: usize,
    pub max_lines: usize,
    pub max_complexity: u32,
}

This phase weighed in at 728 insertions and 730 deletions - a nearly 1:1 swap that laid the foundation for everything that followed.

Key insight: Pure data types with no behavior make testing and reasoning trivial. All validation and computation moved to separate pure functions.
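
In practice, that means test setup is just a struct literal - no builder, no validation side effects. The values below are illustrative:

// Plain data: constructing a custom configuration needs no machinery.
let strict = GodObjectThresholds {
    max_methods: 15,
    max_fields: 10,
    max_traits: 5,
    max_lines: 300,
    max_complexity: 100,
};
// `strict` can be passed straight into the pure functions extracted in
// the later phases (scoring, predicates, classification).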

Phase 3: Extract Pure Scoring Functions (Spec 181c)

Next, I extracted all scoring algorithms into pure, testable functions.

Before: Scoring logic mixed with mutation and state:

// Inside a 500-line impl block
fn calculate_score(&mut self, analysis: &mut Analysis) {
    let mut score = 0.0;
    if self.method_count > threshold {
        score += self.method_factor();
        self.violations.push(Violation::MethodCount);
    }
    // ... more mutation
    analysis.score = score;
}

After: Pure functions with explicit inputs and outputs:

/// Calculate god object score from method, field, responsibility counts, and LOC.
///
/// **Pure function** - deterministic, no side effects.
pub fn calculate_god_object_score(
    method_count: usize,
    field_count: usize,
    responsibility_count: usize,
    lines_of_code: usize,
    thresholds: &GodObjectThresholds,
) -> f64 {
    let method_factor = (method_count as f64 / thresholds.max_methods as f64).min(3.0);
    let field_factor = (field_count as f64 / thresholds.max_fields as f64).min(3.0);
    let responsibility_factor = (responsibility_count as f64 / 3.0).min(3.0);
    let size_factor = (lines_of_code as f64 / thresholds.max_lines as f64).min(3.0);

    let mut violation_count = 0;
    if method_count > thresholds.max_methods { violation_count += 1; }
    if field_count > thresholds.max_fields { violation_count += 1; }
    if responsibility_count > thresholds.max_traits { violation_count += 1; }
    if lines_of_code > thresholds.max_lines { violation_count += 1; }

    let base_score = method_factor * field_factor * responsibility_factor * size_factor;

    match violation_count {
        1 => base_score.max(30.0),
        2 => base_score.max(50.0),
        _ => base_score.max(70.0),
    }
}

Testing becomes trivial:

#[test]
fn test_scoring_deterministic() {
    let thresholds = GodObjectThresholds::default();
    let score1 = calculate_god_object_score(30, 20, 8, 1500, &thresholds);
    let score2 = calculate_god_object_score(30, 20, 8, 1500, &thresholds);
    assert_eq!(score1, score2); // Always passes - pure function
}

#[test]
fn test_violation_based_scoring() {
    let thresholds = GodObjectThresholds::default();

    // Single violation
    let score = calculate_god_object_score(25, 5, 2, 200, &thresholds);
    assert!(score >= 30.0);

    // Two violations
    let score = calculate_god_object_score(25, 15, 2, 200, &thresholds);
    assert!(score >= 50.0);

    // Three+ violations
    let score = calculate_god_object_score(25, 15, 8, 200, &thresholds);
    assert!(score >= 70.0);
}

This phase added 443 lines of pure scoring logic and removed 384 lines of mixed-concern code from the monolith.

Phase 4: Extract Detection Predicates (Spec 181d)

All boolean checks became composable predicate functions.

Before: Checks scattered through methods:

if analysis.method_count > self.max_methods
    || analysis.field_count > self.max_fields
    || analysis.responsibility_count > self.max_traits {
    // ... 50 lines of nested logic
}

After: Named predicates that document intent:

/// Check if method count exceeds threshold.
pub fn exceeds_method_threshold(count: usize, threshold: usize) -> bool {
    count > threshold
}

/// Check if field count exceeds threshold.
pub fn exceeds_field_threshold(count: usize, threshold: usize) -> bool {
    count > threshold
}

/// Check if counts indicate a god object.
pub fn is_god_object(
    method_count: usize,
    field_count: usize,
    responsibility_count: usize,
    thresholds: &GodObjectThresholds,
) -> bool {
    exceeds_method_threshold(method_count, thresholds.max_methods)
        || exceeds_field_threshold(field_count, thresholds.max_fields)
        || exceeds_responsibility_threshold(responsibility_count, thresholds.max_traits)
}

/// Check if struct count and domain count indicate cross-domain mixing.
pub fn is_hybrid_god_module(struct_count: usize, domain_count: usize) -> bool {
    struct_count > 15 && domain_count > 3 && struct_count > domain_count * 3
}

These predicates are composable—you can combine them with && and || to build complex detection logic. They're also self-documenting—the function name tells you exactly what it checks.
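
For instance, a project-specific check can be assembled from the same building blocks. A hypothetical composition:

/// Hypothetical composition: flag only structs that exceed both the
/// method and field thresholds, or that mix domains as a hybrid module.
pub fn is_severe_god_object(
    method_count: usize,
    field_count: usize,
    struct_count: usize,
    domain_count: usize,
    thresholds: &GodObjectThresholds,
) -> bool {
    (exceeds_method_threshold(method_count, thresholds.max_methods)
        && exceeds_field_threshold(field_count, thresholds.max_fields))
        || is_hybrid_god_module(struct_count, domain_count)
}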

Phase 5: Extract Classification Logic (Spec 181e)

Classification determines confidence levels and groups methods by responsibility.

Before: Classification mixed with analysis iteration:

fn analyze(&mut self, ast: &Ast) -> GodObjectAnalysis {
    // ... 200 lines of AST traversal
    let mut confidence = GodObjectConfidence::NotGodObject;
    let mut violations = 0;
    if self.methods > threshold { violations += 1; }
    if self.fields > threshold { violations += 1; }
    // ... more mutation
    confidence = match violations {
        5 => GodObjectConfidence::Definite,
        3..=4 => GodObjectConfidence::Probable,
        _ => GodObjectConfidence::Possible,
    };
    // ... another 100 lines
}

After: Pure classification functions:

/// Determine confidence level from score and metrics.
///
/// Maps threshold violations to confidence levels:
/// - 5 violations → Definite
/// - 3-4 violations → Probable
/// - 1-2 violations → Possible
/// - 0 violations → NotGodObject
pub fn determine_confidence(
    method_count: usize,
    field_count: usize,
    responsibility_count: usize,
    lines_of_code: usize,
    complexity_sum: u32,
    thresholds: &GodObjectThresholds,
) -> GodObjectConfidence {
    let mut violations = 0;

    if method_count > thresholds.max_methods { violations += 1; }
    if field_count > thresholds.max_fields { violations += 1; }
    if responsibility_count > thresholds.max_traits { violations += 1; }
    if lines_of_code > thresholds.max_lines { violations += 1; }
    if complexity_sum > thresholds.max_complexity { violations += 1; }

    match violations {
        5 => GodObjectConfidence::Definite,
        3..=4 => GodObjectConfidence::Probable,
        1..=2 => GodObjectConfidence::Possible,
        _ => GodObjectConfidence::NotGodObject,
    }
}

/// Group methods by their inferred responsibility domain.
pub fn group_methods_by_responsibility(
    methods: &[String],
) -> HashMap<String, Vec<String>> {
    let mut groups: HashMap<String, Vec<String>> = HashMap::new();

    for method in methods {
        let responsibility = infer_responsibility(method);
        groups.entry(responsibility).or_default().push(method.clone());
    }

    groups
}

The classification module added detailed domain extraction, struct grouping, and responsibility inference—all as pure, testable functions.
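
The infer_responsibility function used above isn't shown here; a hypothetical sketch of the idea, matching on method-name prefixes (the real implementation uses richer domain patterns):

/// Hypothetical sketch of prefix-based responsibility inference.
fn infer_responsibility(method: &str) -> String {
    let name = method.to_lowercase();
    if name.starts_with("parse") || name.starts_with("read") {
        "parsing".to_string()
    } else if name.starts_with("calculate") || name.starts_with("score") {
        "scoring".to_string()
    } else if name.starts_with("format") || name.starts_with("write") {
        "formatting".to_string()
    } else {
        "general".to_string()
    }
}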

Phase 6: Extract Recommendation Logic (Spec 181f)

Recommendation generation creates refactoring suggestions.

Before: Recommendations generated during analysis with mutation:

fn analyze(&mut self, ast: &Ast) -> GodObjectAnalysis {
    // ... analysis code
    for domain in domains {
        let mut split = ModuleSplit::new();
        split.name = format!("{}_{}", self.name, domain);
        split.add_methods(&domain_methods);
        self.splits.push(split); // Mutation
    }
    // ...
}

After: Pure recommendation functions:

/// Suggest module splits based on struct name patterns (domain-based grouping).
///
/// Groups structs by domain and creates split recommendations for groups with
/// more than one struct.
pub fn suggest_module_splits_by_domain(structs: &[StructMetrics]) -> Vec<ModuleSplit> {
    let mut grouped: HashMap<String, Vec<String>> = HashMap::new();
    let mut line_estimates: HashMap<String, usize> = HashMap::new();
    let mut method_counts: HashMap<String, usize> = HashMap::new();

    for struct_metrics in structs {
        let domain = classify_struct_domain(&struct_metrics.name);
        grouped
            .entry(domain.clone())
            .or_default()
            .push(struct_metrics.name.clone());
        *line_estimates.entry(domain.clone()).or_insert(0) +=
            struct_metrics.line_span.1 - struct_metrics.line_span.0;
        *method_counts.entry(domain).or_insert(0) += struct_metrics.method_count;
    }

    grouped
        .into_iter()
        .filter(|(_, structs)| structs.len() > 1)
        .map(|(domain, structs)| {
            let estimated_lines = line_estimates.get(&domain).copied().unwrap_or(0);
            let method_count = method_counts.get(&domain).copied().unwrap_or(0);
            ModuleSplit {
                suggested_name: format!("config/{}", domain),
                structs_to_move: structs,
                responsibility: domain.clone(),
                estimated_lines,
                method_count,
                rationale: Some(format!(
                    "Structs grouped by '{}' domain to improve organization",
                    domain
                )),
                // ...
            }
        })
        .collect()
}

The recommender module provides four different strategies for generating split recommendations, all composable and testable in isolation.
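
The classify_struct_domain helper used above follows the same pattern. A hypothetical sketch, not the real implementation:

/// Hypothetical sketch: map a struct name to a domain by substring match.
fn classify_struct_domain(name: &str) -> String {
    let lowered = name.to_lowercase();
    for domain in ["config", "settings", "cache", "metrics", "parser"] {
        if lowered.contains(domain) {
            return domain.to_string();
        }
    }
    "misc".to_string()
}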

Phase 7: Create Orchestration Layer (Spec 181g)

With all the pure functions extracted, I needed a way to compose them.

The orchestration pattern:

pub struct GodObjectDetector {
    pub(crate) max_methods: usize,
    pub(crate) max_fields: usize,
    pub(crate) max_responsibilities: usize,
    pub(crate) location_extractor: Option<UnifiedLocationExtractor>,
}

impl GodObjectDetector {
    /// Analyze a type for god object patterns using pure functions.
    pub fn analyze_comprehensive(
        &self,
        item: &syn::Item,
        source: &str,
    ) -> Option<GodObjectAnalysis> {
        // 1. Extract metrics (I/O boundary)
        let type_analysis = TypeVisitor::analyze_item(item, source)?;

        // 2. Create thresholds (configuration)
        let thresholds = GodObjectThresholds {
            max_methods: self.max_methods,
            max_fields: self.max_fields,
            max_traits: self.max_responsibilities,
            max_lines: 500,
            max_complexity: 150,
        };

        // 3. Calculate score (pure function)
        let score = calculate_god_object_score(
            type_analysis.method_count,
            type_analysis.field_count,
            type_analysis.responsibilities.len(),
            type_analysis.lines_of_code,
            &thresholds,
        );

        // 4. Determine confidence (pure function)
        let confidence = determine_confidence(
            type_analysis.method_count,
            type_analysis.field_count,
            type_analysis.responsibilities.len(),
            type_analysis.lines_of_code,
            type_analysis.complexity_sum,
            &thresholds,
        );

        // 5. Check if god object (pure predicate)
        let is_god_object = is_god_object(
            type_analysis.method_count,
            type_analysis.field_count,
            type_analysis.responsibilities.len(),
            &thresholds,
        );

        // 6. Assemble result
        Some(GodObjectAnalysis {
            is_god_object,
            method_count: type_analysis.method_count,
            field_count: type_analysis.field_count,
            responsibility_count: type_analysis.responsibilities.len(),
            lines_of_code: type_analysis.lines_of_code,
            god_object_score: score,
            confidence,
            // ...
        })
    }
}

The key insight: Orchestration composes pure functions in sequence. It's the only place where the order of operations matters. Everything else is deterministic transformation.
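
Calling it looks like this - a hypothetical usage sketch from inside the crate (the fields are pub(crate), and the threshold values are illustrative):

// Parse a file with syn and run the detector over each top-level item.
let source = std::fs::read_to_string("src/lib.rs")?;
let file = syn::parse_file(&source)?;

let detector = GodObjectDetector {
    max_methods: 20,
    max_fields: 15,
    max_responsibilities: 5,
    location_extractor: None,
};

for item in &file.items {
    if let Some(analysis) = detector.analyze_comprehensive(item, &source) {
        if analysis.is_god_object {
            println!("god object: score {:.1}", analysis.god_object_score);
        }
    }
}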

Phase 8: Update Public API (Spec 181h)

With the new modular structure complete, I needed to maintain backward compatibility for external code.

Strategy:

  • Keep old file names but mark as deprecated
  • Re-export new functions through old interface
  • Add #[deprecated] attributes with migration guidance
  • Create legacy_compat module for transition period

// legacy_compat.rs
#[deprecated(since = "0.9.0", note = "Use classifier::group_methods_by_responsibility")]
pub fn group_methods_by_responsibility_with_domain_patterns(
    methods: &[String],
) -> HashMap<String, Vec<String>> {
    crate::organization::god_object::classifier::group_methods_by_responsibility(methods)
}

This allowed external tests to continue working while migrating to the new API at their own pace.
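
For callers, migration is a one-line change; the shim keeps old code compiling (with a deprecation warning) until it's updated:

// Old call site: still compiles, but warns at build time.
let groups = group_methods_by_responsibility_with_domain_patterns(&methods);

// New call site: import from the modular structure instead.
use crate::organization::god_object::classifier::group_methods_by_responsibility;
let groups = group_methods_by_responsibility(&methods);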

Phase 9: Complete Pure Function Migration (Spec 181i)

The final phase: ensure the GodObjectDetector no longer calls the old monolithic code.

Before: Detector delegated to legacy implementations

After: Detector composes pure functions from the modular structure

260 of 261 tests passed (99.6%). The single failure was an edge case in responsibility granularity, addressed in follow-up work.

The complete architecture:

src/organization/god_object/
├── types.rs              (23 lines)   - Re-exports all types
├── core_types.rs         (258 lines)  - Core data structures
├── classification_types.rs (121 lines) - Classification enums
├── split_types.rs        (173 lines)  - Split recommendations
├── metrics_types.rs      (20 lines)   - Metrics tracking
├── thresholds.rs         (59 lines)   - Configuration
├── predicates.rs         (150 lines)  - Boolean checks (pure)
├── scoring.rs            (443 lines)  - Scoring algorithms (pure)
├── classifier.rs         (600 lines)  - Classification logic (pure)
├── recommender.rs        (700 lines)  - Recommendations (pure)
├── detector.rs           (500 lines)  - Orchestration
├── ast_visitor.rs        (450 lines)  - AST traversal (I/O)
├── legacy_compat.rs      (68 lines)   - Backward compatibility
└── metrics.rs            (367 lines)  - Metrics tracking

Total: 4,149 lines (down from 7,823)

Key Takeaways

1. Pure Functions Make Testing Trivial

Before refactoring, testing required mocking file I/O, setting up AST fixtures, and managing mutable state. After refactoring:

#[test]
fn test_confidence_classification() {
    let thresholds = GodObjectThresholds::default();

    assert_eq!(
        determine_confidence(30, 20, 8, 1500, 300, &thresholds),
        GodObjectConfidence::Definite
    );
}

No mocks. No fixtures. Just inputs and outputs.

2. Separation Enables Composition

With pure functions extracted, I could compose them in different ways:

// Quick check
let is_god = is_god_object(methods, fields, responsibilities, &thresholds);

// Detailed analysis
let score = calculate_god_object_score(methods, fields, responsibilities, loc, &thresholds);
let confidence = determine_confidence(methods, fields, responsibilities, loc, complexity, &thresholds);

// Full pipeline
let analysis = detector.analyze_comprehensive(item, source);

The same pure functions power all three paths. No duplication.

3. Module Size Matters

Breaking a 4,362-line file into modules averaging 277 lines each made everything easier to reason about. Each module fits in your head.

4. Break Down Big Problems, Then Automate

I designed 9 incremental specs to break the refactoring into focused phases. Each spec was just a markdown file describing one transformation. But the workflow (implement.yml) did the heavy lifting:

commands:
  - claude: "/prodigy-implement-spec $ARG"
    validate:
      claude: "/prodigy-validate-spec $ARG"
      threshold: 100
      on_incomplete:
        claude: "/prodigy-complete-spec $ARG --gaps ${validation.gaps}"
        max_attempts: 5

  - shell: "just test"
    on_failure:
      claude: "/prodigy-debug-test-failure --spec $ARG"
      max_attempts: 5

  - shell: "just fmt-check && just lint"
    on_failure:
      claude: "/prodigy-lint ${shell.output}"
      max_attempts: 5

merge:
  - claude: "/prodigy-merge-master"
  - claude: "/prodigy-ci"
  - claude: "/prodigy-merge-worktree ${merge.source_branch} ${merge.target_branch}"

For each of the 9 specs, the workflow:

  1. Implemented it with Claude
  2. Validated it reached 100% completion (recovered automatically if incomplete, up to 5 attempts)
  3. Ran tests (debugged and fixed failures automatically, up to 5 attempts)
  4. Ran linting (fixed formatting/lint issues automatically, up to 5 attempts)
  5. Created commits only after all checks passed

After all 9 specs completed:

  1. Merged master into the worktree (handled conflicts if any)
  2. Ran CI and fixed any integration issues
  3. Merged back to master with clean history

The key insight: Specs describe what to do (one focused change at a time). Workflows describe how to do it safely (with validation, testing, recovery). Together they enable reliable automation.

This is better than manual incremental work because you get the benefits of small steps without the context-switching cost of implementing them one at a time over days. You also get guarantees (tests pass, validation complete) that humans often skip when tired.

The Results

Code Quality:

  • ✅ 97 tests passing in 0.03 seconds
  • ✅ Zero clippy warnings
  • ✅ 45 pure functions with clear separation
  • ✅ Each module under 1,000 lines (most under 300)

Maintainability:

  • ✅ Functions average 20 lines or less
  • ✅ Clear module boundaries
  • ✅ No circular dependencies
  • ✅ Easy to extend with new analysis types

Testability:

  • ✅ Pure functions need no mocks
  • ✅ Deterministic test results
  • ✅ Fast test execution (0.03s for 97 tests)
  • Easy to add property tests (see the sketch below)
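
A proptest-style determinism check costs only a few lines - a sketch, assuming the proptest crate:

use proptest::prelude::*;

proptest! {
    // Sketch: identical inputs must always produce identical scores.
    #[test]
    fn score_is_deterministic(
        methods in 0usize..200,
        fields in 0usize..100,
        responsibilities in 0usize..20,
        loc in 0usize..10_000,
    ) {
        let thresholds = GodObjectThresholds::default();
        let a = calculate_god_object_score(methods, fields, responsibilities, loc, &thresholds);
        let b = calculate_god_object_score(methods, fields, responsibilities, loc, &thresholds);
        prop_assert_eq!(a, b);
    }
}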

Conclusion

The god object detector is no longer a god object. It's a collection of focused modules with clear responsibilities, composable pure functions, and a clean separation between I/O and business logic.

Three tools working together made this possible:

  1. Debtmap detected the problem - The tool caught its own worst code, validating that the analysis works in production
  2. Stillwater provided the architecture - Pure Core/Imperative Shell pattern gave a clear refactoring strategy
  3. Prodigy automated the implementation - 9 specs executed in 4 hours with automatic validation and testing

This is the development workflow I want: tools that find problems, patterns that solve them, and automation that implements solutions. No manual grunt work, no forgetting to run tests, no skipping validation steps.

The Stillwater architecture delivered:

  • Pure Core - Business logic as pure functions (scoring, predicates, classification)
  • Orchestration - Composing pure functions into pipelines (detector)
  • Imperative Shell - I/O isolated to boundaries (ast_visitor)

The result is testable (97 tests in 0.03s), maintainable (focused modules averaging 277 lines), and most importantly, it doesn't commit the sin it's designed to detect.

The full code is available in debtmap v0.9.0. All three tools are open source:

  • Debtmap - Code analysis and technical debt detection
  • Stillwater - Functional programming patterns for Rust
  • Prodigy - AI workflow orchestration

If you're fighting a god object in your own codebase, I hope this refactoring journey provides a roadmap. And if you want to automate your own refactorings, the workflow files that orchestrated this work are in the debtmap repository.

Have you refactored a god object before? What patterns worked for you? Open an issue to share your experience.


Want more content like this? Follow me on Dev.to or subscribe to Entropic Drift for posts on AI-powered development workflows, Rust tooling, and technical debt management.

Check out my open-source projects:

  • Debtmap - Technical debt analyzer
  • Prodigy - AI workflow orchestration
