Dwayne McDaniel for GitGuardian

Originally published at blog.gitguardian.com

The Quest to Minimize False Positives Reaches Another Significant Milestone

A few months ago, we introduced GitGuardian's FP Remover, our proprietary machine learning model that operates with complete data privacy. Unlike third-party LLM services such as OpenAI's, our model processes all data internally, meaning your sensitive information never leaves your environment and is never used for model training. What's more, our own pre-trained LLM is not just safer: it's also more powerful. It has already proven its worth by cutting false positives by half, setting a new standard in secrets detection.

Today, we're thrilled to announce FP Remover V2. This updated version identifies ~60% more false positives compared to the previous version while maintaining near-100% precision. 

This achievement stems directly from analyzing feedback from our largest customers, who often manage incident backlogs of tens of thousands of cases.

Their input helped us enhance our model's pattern recognition capabilities, which led to a dramatic improvement in false positive identification: up to 300% better performance for certain cases*.

These improvements mean security teams can now focus their time and energy on addressing genuine security threats rather than sifting through false alarms.

GitGuardian's recent Voice of Practitioners 2024 survey highlighted a pressing issue: when asked "What, if any, are the main challenges with your current AppSec tools?", respondents identified "high false positive rate" as the top concern, with 26% of the 1,000 participants citing this issue.

This feedback validates how false positives undermine AppSec efforts and reinforces our commitment to innovating solutions that benefit users daily.

This blog post explains how we achieved these results.

*Results vary by customer depending on their unique data characteristics.

Our Feedback-Driven Approach

Far from operating in an isolated lab environment, our team combined user feedback with careful data analysis, iterating to find the model that improved recall without compromising the near-perfect precision requirement.

Customer annotations and real-world feedback were essential in identifying false positives that V1 missed. The team systematically analyzed this data to understand patterns specific to different companies and environments.

💡

GitGuardian's secrets detection engine begins by casting a wide net, capturing numerous potential matches including many generic secrets. The engine then carefully filters these candidates to eliminate false positives while maintaining high recall. While this process is already very efficient, some ambiguous results inevitably slip through initial filtering. That's where our machine learning model steps in, expertly handling these edge cases.
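To make that funnel concrete, here's a simplified Python sketch of a staged pipeline. It is purely illustrative, not our production engine: the generic pattern, the cheap filter, and the toy stand-in for the ML model are all simplifications.

```python
# Simplified sketch of a staged detection funnel (illustrative only).
# Stage 1 casts a wide net with a generic pattern; stage 2 applies cheap
# filters; stage 3 stands in for the ML model handling edge cases.
import re

GENERIC_SECRET = re.compile(
    r"""(?:password|secret|token|passphrase)\w*\s*[:=]\s*['"]?(?P<candidate>[^\s'"]+)""",
    re.IGNORECASE,
)

def wide_net(text: str) -> list[str]:
    """Stage 1: capture anything assigned to a secret-like name."""
    return [m.group("candidate") for m in GENERIC_SECRET.finditer(text)]

def cheap_filters(candidate: str) -> bool:
    """Stage 2: discard candidates that are too short to be credentials."""
    return len(candidate) >= 8

def looks_like_false_positive(candidate: str) -> bool:
    """Stage 3 stand-in: the real FP Remover is a pre-trained LLM scoring
    the candidate in context; this toy rule just flags date-like strings."""
    return bool(re.fullmatch(r"[\d.\-]+", candidate))

def detect(text: str) -> list[str]:
    survivors = [c for c in wide_net(text) if cheap_filters(c)]
    return [c for c in survivors if not looks_like_false_positive(c)]

print(detect("password_expiration_date: '2024-01-01'"))  # [] -- the date is discarded
print(detect("db_password = 'xK9#mP2$vL5q'"))            # ['xK9#mP2$vL5q']
```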

To better understand how FP Remover V2 improves filtering, here are some hand-picked examples of edge cases:

  • dates in password expirations: `password_expiration_date: '2024-01-01'`
  • error codes: `PasswordExpired(200014)`
  • function calls: ``KUBECTL_ROOT_PASSWORD=`kubectl get secrets/rta-arangodb-root` ``
  • encoding algorithm names: `call vault.add_to_vault('username', 'passphrase', 'AES-GCM');`
  • and more unlucky matches, like the type annotation `password:str\n`

For a pattern-matching algorithm, the matched string in each of these examples is very hard to distinguish from a true positive hard-coded secret. Telling them apart requires a kind of "code understanding" that normally only a developer has. While our previous model would have failed to discard these, the new version now efficiently filters out these false positives.
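To see why, consider a toy demonstration: the naive "password-like" regex below, written just for this post, fires on every one of the edge cases above, even though none of them contains a real secret.

```python
# Toy demo (not our detection engine): a naive password-pattern regex
# flags every edge case above, illustrating why pure pattern matching
# needs an ML layer behind it.
import re

NAIVE_PATTERN = re.compile(r"(?i)pass\w*\W{1,5}(?P<match>[\w\-]+)")

edge_cases = [
    "password_expiration_date: '2024-01-01'",
    "PasswordExpired(200014)",
    "KUBECTL_ROOT_PASSWORD=`kubectl get secrets/rta-arangodb-root`",
    "call vault.add_to_vault('username', 'passphrase', 'AES-GCM');",
    r"type password:str\n",
]

for line in edge_cases:
    m = NAIVE_PATTERN.search(line)
    print(f"flagged {m.group('match')!r} in: {line}")
```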

Another key improvement focused on the model's pattern recognition: the team analyzed patterns that commonly triggered false positives in customer environments and used these insights to enhance model training. 

⚠️

It's important to note that customer data is never directly used for training: we only apply these insights to our own datasets.

Here are examples of such patterns:

  • secrets' identifiers (or what we would call a secret's "name") in configuration files: V2 recognizes secret references stored in configuration files or repositories as benign entries that might previously have triggered false alarms:

```yaml
variables:
  - name: CONFIG_SECRET_NAME
    value: project_secret_1632522223366
```

  • secrets' filepaths or registries: often the path itself isn't a secret but an identifier; V2 can accurately discern this distinction.

```yaml
variables:
  RESOURCE_TYPE: "secrets"
  VAULT_ADDR: https://vault.prod.company_name.com
  SECRET_PATH: "company_name/XYZ1234567/AB9876543/prod/secrets"
```

  • variables holding secret values: through enhanced context understanding, V2 recognizes from contextual clues when a variable name, rather than an actual credential, has been flagged.

```bash
AZURE_TOKEN=$AZURE_SAS_TOKEN_SFCPROJECTDEV2345_STAGE
```

  • plugin versions: V2 improves identification of numeric or alphanumeric sequences that aren't sensitive data but rather version identifiers.

```yaml
# plugins.yml
plain-credentials:139.ved2b_9cf7587b
```

This enhancement enabled the model to better recognize similar patterns across diverse, large-scale datasets. Assessments on public data revealed over 85% recall while maintaining precision, findings that our internal data tests confirmed.
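For clarity on what those two numbers measure here: we treat "is a false positive" as the positive class. Recall is then the share of real false positives the model catches, and precision is the share of discarded findings that truly were false positives; precision must stay near 100%, or real secrets would be silently dropped. Here's a quick sketch with made-up numbers:

```python
def precision_recall(is_fp: list[bool], flagged: list[bool]) -> tuple[float, float]:
    tp = sum(a and b for a, b in zip(is_fp, flagged))      # FPs correctly discarded
    fp = sum(b and not a for a, b in zip(is_fp, flagged))  # real secrets wrongly discarded
    fn = sum(a and not b for a, b in zip(is_fp, flagged))  # FPs that slipped through
    return tp / (tp + fp), tp / (tp + fn)

# Made-up example: 8 of 10 annotated findings are false positives; the model
# discards 7 of them and touches no real secret.
is_fp   = [True] * 8 + [False] * 2
flagged = [True] * 7 + [False] * 3
print(precision_recall(is_fp, flagged))  # (1.0, 0.875) -> 100% precision, 87.5% recall
```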

What's Next for FP Remover

Looking ahead, our promising results have encouraged the ML team to expand this pattern recognition technique to include new elements, such as Docker tags. While our current precision rate is remarkably high, we're working to find an even better tradeoff between precision and recall.

Beyond obvious false positives, the team is also addressing the challenge of 'dummy' passwords (your typical `db_connect(username="john doe", password="john_doe_pwd")`) and similar non-sensitive test credentials in upcoming V3 and V4 releases.

Conclusion

FP Remover V2 doesn't just correct past inefficiencies: it sets a precedent for future advancements in our classification problem. We know that false positives aren't just minor annoyances, but resource-draining misdirections that divert focus from genuine threats.

On the GitGuardian Platform, the auto-ignore playbook helps streamline your workflow by automatically ignoring incidents our machine learning model identifies as false positives. This feature is enabled by default for real-time detection of generic secrets. You can also apply this filtering to your existing incidents by running a historical scan across your perimeter.
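If you want to audit what the playbook has ignored, incidents can be listed programmatically through the GitGuardian REST API. The sketch below is only indicative: the endpoint path and filter parameters shown are assumptions, so check the API reference at https://api.gitguardian.com/docs for the authoritative names.

```python
# Minimal sketch of auditing auto-ignored incidents via the REST API.
# Endpoint path and filter names below are illustrative assumptions;
# consult https://api.gitguardian.com/docs for the authoritative reference.
import os
import requests

resp = requests.get(
    "https://api.gitguardian.com/v1/incidents/secrets",
    headers={"Authorization": f"Token {os.environ['GITGUARDIAN_API_KEY']}"},
    params={"status": "IGNORED", "per_page": 50},  # filter values assumed
    timeout=30,
)
resp.raise_for_status()
for incident in resp.json():
    print(incident["id"], incident.get("ignore_reason"))
```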

Whether you're an engineer struggling with alert fatigue, or a CISO looking to boost your security team's efficacy, FP Remover V2 represents a compelling leap forward, heralding a new chapter in the fight against false positives.
