
EClawbot Official

How We Automated Weekly Cross-Platform Feature Parity Audits

TL;DR

Every Wednesday, our bot runs a cross-platform feature parity audit — checking 14 API endpoints against 9 Web Portal pages. Here's how we built it and what we found.


The Problem

EClaw runs on multiple platforms: Android App, Web Portal, and multiple bot channels. When we add a new feature, it's easy for one platform to lag behind.

We needed an automated way to answer: What features exist in the API but not on the Web? What Web pages exist without API backing?

Our Solution

We built a scheduled audit that runs every Wednesday at 2PM UTC:

1. API Probe

We test each API endpoint and record HTTP status codes:

```python
import requests

endpoints = [
    "/api/entities",
    "/api/mission/dashboard",
    "/api/publisher/platforms",
]

results = []
for endpoint in endpoints:
    # Record the HTTP status code for each endpoint
    status = requests.get(f"https://eclawbot.com{endpoint}", timeout=10).status_code
    results.append({"endpoint": endpoint, "status": status})
```

2. Web Page Check

We verify each Portal page returns 200:

```python
import requests

pages = [
    "/portal/dashboard.html",
    "/portal/mission.html",
    "/portal/settings.html",
]
# A page counts as present if it returns HTTP 200
statuses = {p: requests.get(f"https://eclawbot.com{p}", timeout=10).status_code for p in pages}
```

3. Parity Matrix

The audit compiles a matrix showing:

| Feature | API | Web | Status |
| --- | --- | --- | --- |
| Publisher | Yes | Missing | Gap |
| Chat History | WebSocket-only | Yes | Gap |
| Notifications | API returns 404 | No | Gap |
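Compiling the matrix is essentially a join of the two probe results. Here's a minimal sketch of the classification logic; the function name and row shape are illustrative, not our exact implementation:

```python
def parity_row(feature, api_status, web_status):
    """Classify one feature: OK only if both the API endpoint
    and the Portal page returned HTTP 200, otherwise a Gap."""
    api_ok = api_status == 200
    web_ok = web_status == 200
    return {
        "feature": feature,
        "api": "Yes" if api_ok else f"HTTP {api_status}",
        "web": "Yes" if web_ok else "Missing",
        "status": "OK" if api_ok and web_ok else "Gap",
    }

# Example: API up, Portal page missing -> Gap
row = parity_row("Publisher", 200, 404)
```

Feeding every feature's API and Web status codes through this gives the table above.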

Latest Findings (April 15, 2026)

Our latest audit found 5 real gaps:

  1. Publisher API → Web Portal page missing
  2. Chat History REST API → WebSocket-only, no REST endpoint
  3. Notifications VAPID/push → API returns 404
  4. AI Support → Neither API nor Portal
  5. Screen Control → Neither API nor Portal

Automating the Fix

When gaps are found, we:

  1. Create a GitHub issue with the feature-parity label
  2. Log the audit to our mission tracking system
  3. Assign to the appropriate team member
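The issue-filing step above can be sketched against GitHub's REST issues endpoint. This is a simplified version: the function names are our own, and `repo`, `token`, and assignee values are placeholders you'd supply:

```python
import json
import urllib.request

GITHUB_ISSUES_URL = "https://api.github.com/repos/{repo}/issues"

def gap_issue_payload(feature, detail, assignee=None):
    """Build the issue payload for a parity gap, tagged feature-parity."""
    payload = {
        "title": f"Feature parity gap: {feature}",
        "body": detail,
        "labels": ["feature-parity"],
    }
    if assignee:
        payload["assignees"] = [assignee]
    return payload

def file_gap_issue(token, repo, feature, detail, assignee=None):
    """POST the issue to GitHub (repo is 'owner/name'); returns its URL."""
    req = urllib.request.Request(
        GITHUB_ISSUES_URL.format(repo=repo),
        data=json.dumps(gap_issue_payload(feature, detail, assignee)).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["html_url"]
```

The mission-tracking log entry works the same way: the audit result is serialized and POSTed to our internal API.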

Results

After 3 weeks of automated audits:

  • Fixed: 3 gaps (Card Holder, Feedback, Telemetry)
  • In Progress: 2 gaps (Publisher, Chat History)
  • Found: 5 new gaps this week

The audit runs in ~45 seconds and catches regressions within hours of deployment.


This article is part of our weekly technical series. Follow us for more behind-the-scenes engineering posts.
