Chudi Nnorukam

Posted on • Edited on • Originally published at chudi.dev

How I Built Unified Bug Bounty Scanning Across HackerOne, Intigriti, and Bugcrowd



I found the same vulnerability on three different programs. Same IDOR pattern, same impact, same proof-of-concept.

Wrote three completely different reports. HackerOne wanted structured sections with their severity dropdown. Intigriti expected different field names and inline severity justification. Bugcrowd had its own template that matched neither.

That specific tedium--of reformatting the same finding three times--is exactly what automation should eliminate.

Multi-platform bug bounty integration requires a unified internal findings model that transforms to platform-specific formats at submission time. Store vulnerabilities once in a canonical structure with all possible fields. When submitting, platform formatters extract relevant data and restructure it for HackerOne, Intigriti, or Bugcrowd's expected format. One truth, three presentations.


## Why Not Just Use Each Platform's API Directly?

Direct API integration seems simpler at first:

```javascript
// The naive approach: platform branching everywhere
if (platform === 'hackerone') {
  await hackeroneAPI.submitReport(finding);
} else if (platform === 'intigriti') {
  await intigritiAPI.submitReport(finding);
} else if (platform === 'bugcrowd') {
  await bugcrowdAPI.submitReport(finding);
}
```

But then every piece of code needs platform awareness. Testing agents need to know which platform. Validation needs platform context. Storage needs platform-specific schemas.

The complexity explodes.

Instead, I built a unified findings model at the core. Every agent works with this model. Platform awareness only exists at two boundaries:

  1. Ingestion: When pulling program scope from platforms
  2. Submission: When sending reports to platforms

Everything between is platform-agnostic.
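As a sketch (the names here are mine, not the author's actual code), the two boundaries can be expressed as a single adapter interface, so everything in between only ever sees the unified model:

```typescript
// Hypothetical adapter interface: the only place platform names appear.
type Platform = 'hackerone' | 'intigriti' | 'bugcrowd';

interface PlatformAdapter {
  platform: Platform;
  ingestScope(programHandle: string): Promise<string[]>; // asset identifiers in
  submitReport(findingId: string): Promise<string>;      // external report ID out
}

// The core pipeline receives an adapter and never branches on platform.
async function runPipeline(adapter: PlatformAdapter, program: string): Promise<string[]> {
  const assets = await adapter.ingestScope(program);
  // ...platform-agnostic scanning, validation, and review happen here...
  return assets;
}
```

Adding a fourth platform then means writing one more adapter, not touching the core.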

In [part 1](https://chudi.dev/blog/bug-bounty-automation-architecture), I described the 4-tier agent architecture. The Reporter Agent handles submission--it's the only agent that knows about platform differences.


## What Does the Unified Findings Model Look Like?

A finding has everything any platform might need:

```typescript
interface Finding {
  // Core identification
  id: string;
  sessionId: string;
  targetAssetId: string;

  // Vulnerability details
  title: string;
  description: string;          // Markdown supported
  vulnerabilityType: VulnType;  // XSS, IDOR, SQLi, etc.

  // Severity
  cvssVector: string;           // Full CVSS v3.1 vector
  cvssScore: number;            // Calculated from vector
  severity: 'critical' | 'high' | 'medium' | 'low' | 'informational';

  // Proof
  poc: {
    steps: string[];            // Reproduction steps
    curl?: string;              // Raw curl command
    script?: string;            // Python/JS script
  };

  // Evidence
  evidence: {
    screenshots: string[];      // File paths or base64
    requestResponse: string[];  // HTTP exchanges
    hashes: string[];           // SHA-256 for authenticity
  };

  // Metadata
  confidence: number;           // 0.0 - 1.0
  status: FindingStatus;        // new, validating, reviewed, submitted
  createdAt: Date;
  platform?: string;            // Set at submission time
  externalId?: string;          // Platform's report ID after submission
}
```

This model captures everything. Not every field is used by every platform--but all fields are available for any platform that needs them.

> [!NOTE]
> The CVSS vector is stored as a string (e.g., `CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N`). The score is calculated from this vector. Storing both allows quick sorting by score while preserving the detailed breakdown.
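Parsing such a vector into its individual metrics is straightforward; here's a minimal sketch of my own (the actual score calculation follows the FIRST CVSS v3.1 formula, which a library would normally handle):

```typescript
// Parse a CVSS v3.1 vector string into a metric map, e.g.
// "CVSS:3.1/AV:N/AC:L/..." becomes { AV: 'N', AC: 'L', ... }
function parseCvssVector(vector: string): Record<string, string> {
  const [prefix, ...metrics] = vector.split('/');
  if (prefix !== 'CVSS:3.1') {
    throw new Error(`Unsupported CVSS vector: ${prefix}`);
  }
  return Object.fromEntries(metrics.map((m) => m.split(':') as [string, string]));
}
```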


## How Do Platform-Specific Formatters Work?

Each platform has a formatter that transforms the unified model:

```typescript
// Simplified formatter pattern
interface PlatformFormatter {
  format(finding: Finding): PlatformReport;
  validate(report: PlatformReport): ValidationResult;
  submit(report: PlatformReport): Promise<SubmissionResult>;
}
```
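As a concrete sketch, a HackerOne formatter might map the unified model like this. The output field names here are my illustration, not HackerOne's documented API, and `FindingLike` is a narrowed stand-in for the full `Finding` interface:

```typescript
// Illustrative subset of the unified Finding model.
type FindingLike = {
  title: string;
  description: string;
  severity: string;
  poc: { steps: string[] };
};

// Hypothetical formatter: flattens the canonical finding into the
// shape one platform expects. Field names are placeholders.
function formatForHackerOne(finding: FindingLike): Record<string, unknown> {
  const steps = finding.poc.steps.map((s, i) => `${i + 1}. ${s}`).join('\n');
  return {
    title: finding.title,
    vulnerability_information: `${finding.description}\n\n## Steps to Reproduce\n${steps}`,
    severity_rating: finding.severity,
  };
}
```

An Intigriti or Bugcrowd formatter would read the same `Finding` but emit its own field names; the core never changes.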

The freshness score decreases over time. A program launched 1 hour ago gets higher priority than one launched 20 hours ago.
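The post doesn't give the decay formula; an exponential decay with a tunable half-life would behave as described (a 1-hour-old program scores far higher than a 20-hour-old one). A minimal sketch, with `halfLifeHours` as my assumed parameter:

```typescript
// Freshness decays exponentially with program age (in hours).
// halfLifeHours is an assumed tunable, not from the original post.
function freshnessScore(ageHours: number, halfLifeHours = 6): number {
  return Math.pow(0.5, ageHours / halfLifeHours);
}
```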

---

## What Are the Platform Authentication Differences?

Each platform authenticates differently:

**HackerOne**: Basic auth with username + API token

```javascript
const credentials = btoa(username + ':' + apiToken);
const headers = {
  'Authorization': 'Basic ' + credentials,
  'Content-Type': 'application/json'
};
```
**Intigriti**: An OAuth-style flow with refresh tokens

```javascript
const headers = {
  'Authorization': 'Bearer ' + accessToken,
  'X-API-Key': apiKey
};
```

**Bugcrowd**: Yet another structure, with the API key in a header

```javascript
const headers = {
  'Authorization': 'Token ' + token,
  'Content-Type': 'application/vnd.bugcrowd+json'
};
```


The credential manager stores these separately and handles refresh for each:

```typescript
class CredentialManager {
  async getCredentials(platform: string): Promise<Credentials> {
    const creds = await this.loadFromSecureStorage(platform);

    if (this.needsRefresh(creds)) {
      return await this.refresh(platform, creds);
    }

    return creds;
  }
}
```
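`needsRefresh` isn't shown in the post; a plausible version (my sketch, with a hypothetical credential shape) checks token expiry with a safety buffer so tokens are refreshed before they actually lapse:

```typescript
// Hypothetical credential shape: static API keys have no expiry,
// OAuth-style access tokens carry an epoch-ms expiresAt.
type StoredCredentials = { accessToken: string; expiresAt?: number };

// Refresh when the token is within bufferMs of expiring.
function needsRefresh(creds: StoredCredentials, bufferMs = 60_000): boolean {
  if (creds.expiresAt === undefined) return false; // static keys never expire
  return Date.now() + bufferMs >= creds.expiresAt;
}
```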


This connects to auth error recovery in [part 3](https://chudi.dev/blog/bug-bounty-failure-learning). When auth fails, the system attempts credential refresh before escalating.

---

## How Does the Unified Model Handle Platform-Specific Fields?

Some platforms have unique requirements not covered by the base model.

Solution: extensible metadata

```typescript
interface Finding {
  // ... standard fields ...

  platformMetadata?: {
    hackerone?: {
      weakness_id?: string;         // HackerOne's weakness taxonomy ID
      structured_scope_id?: string;
    };
    intigriti?: {
      submission_type?: string;     // Intigriti-specific field
    };
    bugcrowd?: {
      bounty_table_entry?: string;  // Bugcrowd payout tier
    };
  };
}
```


Formatters check for platform-specific metadata and use it if present. Otherwise, they derive the needed values from standard fields.
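A sketch of that check-then-derive pattern (the mapping table and its values are my placeholders, not HackerOne's real taxonomy):

```typescript
// Illustrative fallback mapping from vulnerability type to weakness ID.
const WEAKNESS_BY_TYPE: Record<string, string> = {
  IDOR: 'insecure-direct-object-reference',
  XSS: 'cross-site-scripting',
};

// Prefer the platform-specific metadata when present; otherwise derive
// a value from the standard vulnerabilityType field.
function hackeroneWeakness(
  vulnerabilityType: string,
  metadata?: { weakness_id?: string },
): string | undefined {
  return metadata?.weakness_id ?? WEAKNESS_BY_TYPE[vulnerabilityType];
}
```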

---

## What's the Report Submission Flow?

From validated finding to platform submission:

```
Validated Finding (0.85+ confidence)
        ↓
Human Review Queue
        ↓  [Human approves]
Formatter transforms to platform format
        ↓
Budget Manager confirms API availability
        ↓
Platform API submission
        ↓
External ID captured
        ↓
Status → 'submitted'
```


All submissions go through [human-in-the-loop review (part 5)](https://chudi.dev/blog/bug-bounty-human-in-the-loop). Automation prepares; humans decide.
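Put together, the flow reads as a short pipeline. This sketch makes the human gate explicit; the helper functions passed in via `deps` are assumptions of mine, not the system's real code:

```typescript
// Hypothetical submission pipeline: every step before the API call can
// abort, and nothing is submitted without explicit human approval.
async function submitFinding(
  finding: { confidence: number; status: string },
  deps: {
    requestHumanApproval: () => Promise<boolean>;
    formatAndSubmit: () => Promise<string>; // returns the platform's external ID
  },
): Promise<string | null> {
  if (finding.confidence < 0.85) return null;            // below validation bar
  if (!(await deps.requestHumanApproval())) return null; // humans decide
  const externalId = await deps.formatAndSubmit();
  finding.status = 'submitted';
  return externalId;
}
```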

---

## Where Does This Series Go Next?

This is part 4 of a 5-part series on building bug bounty automation:

1. [Architecture & Multi-Agent Design](https://chudi.dev/blog/bug-bounty-automation-architecture)
2. [From Detection to Proof: Validation & False Positives](https://chudi.dev/blog/bug-bounty-validation-false-positives)
3. [Failure-Driven Learning: Auto-Recovery Patterns](https://chudi.dev/blog/bug-bounty-failure-learning)
4. **One Tool, Three Platforms: Multi-Platform Integration** (you are here)
5. [Human-in-the-Loop: The Ethics of Security Automation](https://chudi.dev/blog/bug-bounty-human-in-the-loop)

Next up: why humans still make every submission decision, and how mandatory review gates protect researcher reputation.

---

Maybe platform differences aren't obstacles. Maybe they're forcing functions--requiring a cleaner internal model that happens to work anywhere, because it had to.

