Yuri Borges

I'm 18 and Built an Open-Source Camera That Cryptographically Proves Photos Are Real

In 2026, generating a photorealistic fake image takes seconds. The C2PA standard (Adobe, Microsoft, Google) solves this with Content Credentials — but only on Samsung S25+ and Pixel 10. The other 3 billion Android phones have nothing.

I'm 18, from Brazil, and I built TrueShot to change that.

What happens when you take a photo

  1. 14 physical sensors are sampled at the exact instant of the shutter — accelerometer, gyroscope, magnetometer, barometer, light, proximity, gravity, rotation vectors, and more
  2. SHA-256 hash is computed on the JPEG bytes up to the EOI marker
  3. ECDSA P-256 signs the manifest via hardware-backed Android Keystore (StrongBox preferred, TEE fallback)
  4. The signed manifest is appended after the JPEG EOI marker — standard image viewers ignore post-EOI data, so the photo displays normally everywhere
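The steps above can be sketched in a few lines of Python (the app itself is Kotlin; this is language-agnostic pseudocode). The `TSHT` magic marker, the 4-byte length prefix, and the `sign` callback are illustrative assumptions of this sketch, not TrueShot's actual wire format:

```python
import hashlib
import json

EOI = b"\xff\xd9"  # JPEG End Of Image marker

def sign_and_append(jpeg: bytes, sensors: dict, sign) -> bytes:
    """Hash the image up to EOI, sign a manifest, append it after EOI.

    `sign` stands in for the hardware-backed ECDSA P-256 Keystore call.
    """
    end = jpeg.rfind(EOI) + len(EOI)  # end of the displayable image
    digest = hashlib.sha256(jpeg[:end]).hexdigest()
    manifest = json.dumps({"image_sha256": digest, "sensors": sensors}).encode()
    # Viewers stop rendering at EOI, so everything appended after it is
    # invisible to them ("TSHT" + length are made-up framing for this sketch).
    return (jpeg[:end] + b"TSHT"
            + len(manifest).to_bytes(4, "big") + manifest + sign(manifest))
```

Because only bytes after EOI are added, the file still opens as a normal JPEG in any viewer.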

Change one pixel → the hash breaks. Forge the signature → computationally infeasible without the device's hardware-backed key.

Anyone can verify in a browser at true-shot.vercel.app/verify. The image never leaves your browser.
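The hash check on the verifier side can be sketched the same way. The `TSHT` framing below is the same made-up assumption as above, and a real verifier would additionally check the ECDSA signature (in the browser, via WebCrypto):

```python
import hashlib
import json

EOI = b"\xff\xd9"    # JPEG End Of Image marker
MAGIC = b"TSHT"      # made-up framing marker for this sketch

def hash_matches(data: bytes) -> bool:
    """Recompute the image digest and compare it to the embedded manifest."""
    cut = data.rfind(MAGIC)
    if cut == -1 or not data[:cut].endswith(EOI):
        return False
    length = int.from_bytes(data[cut + 4 : cut + 8], "big")
    manifest = json.loads(data[cut + 8 : cut + 8 + length])
    # Changing even one pixel changes this digest and fails the comparison.
    return hashlib.sha256(data[:cut]).hexdigest() == manifest["image_sha256"]
```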

The part I think is new

Sensor-based screen recapture detection

Every published method for detecting photos-of-screens uses visual analysis — moiré patterns, CNNs, Vision Transformers. The problem: modern OLED screens don't produce moiré. High-PPI displays don't cause aliasing. Visual methods are losing the arms race.

TrueShot does something different: it cross-correlates physical sensor readings to detect anomalies consistent with screen photography. No image analysis at all.

| Scenario | Score | Flagged? |
| --- | --- | --- |
| Normal photo (daylight) | 20 | No |
| Normal photo (dark room) | 30 | No |
| Screen capture (daylight) | 70 | Yes |
| Screen capture (dark room) | 85 | Yes |

10 signals: focus distance, light/ISO mismatch, magnetometer magnitude, gyroscope stability, color gain blue-suppression, scene flicker, proximity, ambient darkness, step counter, and compound signals.
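A minimal sketch of how such a weighted heuristic could look, covering a few of the signals above. The field names, weights, and the 60-point threshold are illustrative guesses of mine, not TrueShot's actual values:

```python
def recapture_score(s: dict) -> int:
    """Cross-correlate raw sensor readings; no pixels are examined."""
    score = 0
    if s["focus_distance_m"] < 0.5:       score += 15  # screens are photographed close up
    if s["lux"] < 50 and s["iso"] < 400:  score += 20  # bright subject inside a dark room
    if abs(s["mag_uT"] - 50) > 30:        score += 15  # field far from Earth-typical ~50 uT
    if s["gyro_stddev"] < 0.01:           score += 10  # unnaturally steady hand
    if s["blue_gain"] < s["red_gain"]:    score += 10  # display white point skews color gains
    if s["flicker_hz"] in (60, 120):      score += 15  # display refresh flicker in the scene
    return score  # flag as a likely recapture past some threshold, e.g. >= 60
```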

The approach works regardless of screen technology — LCD, OLED, MicroLED — because it never looks at the image content.

Cross-device corroboration without communication

Three reporters photograph the same protest on three different phones. Nobody pairs devices. Nobody sets anything up.

Later, an editor drops all three photos on the web verifier. JavaScript extracts the manifests and compares barometric pressure, timestamps, GPS, and ambient conditions.

Consistent readings from independent devices are strong evidence of the same event. Zero servers. Zero cloud. Everything happens in the browser.
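In sketch form, corroboration reduces to tolerance checks across the extracted manifests. The field names, the two-minute window, and the 1 hPa pressure tolerance are assumptions of mine for illustration:

```python
def corroborate(manifests: list[dict], window_s: float = 120.0,
                tol_hpa: float = 1.0) -> bool:
    """True when every capture agrees with the first on time and pressure."""
    base = manifests[0]
    return all(
        abs(m["timestamp"] - base["timestamp"]) <= window_s            # same moment
        and abs(m["pressure_hpa"] - base["pressure_hpa"]) <= tol_hpa   # same altitude/weather
        for m in manifests[1:]
    )
```

GPS and ambient-light agreement would be further terms of the same shape.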

Tech stack

  • Kotlin 2.1, Jetpack Compose, CameraX 1.4
  • Hilt for DI, Room for persistence
  • Android Keystore (ECDSA P-256, SHA-256)
  • Vanilla JS + WebCrypto API for the web verifier
  • 14 Gradle modules, ~5,700 lines of Kotlin
  • Zero C++, zero ML models, zero third-party SDKs

What it honestly does NOT do

  • Does NOT detect deepfakes or AI-generated content
  • Does NOT guarantee content truthfulness — a staged scene photographed with TrueShot is authentic as a capture
  • Key attestation chain is included but not validated against Google Root CA yet
  • Screen detection is heuristic, not definitive — it can produce false positives on macro photography in dark rooms

Full threat model: THREAT_MODEL.md

Privacy

  • Zero analytics, zero tracking, zero cloud
  • GPS off by default, opt-in only
  • No Firebase, no Crashlytics, no third-party SDKs
  • Device ID is anonymous (SHA-256 of public key, not IMEI)
  • Web verifier processes everything in-browser

Try it

MIT licensed. I'm preparing a paper on the sensor correlation approach for IEEE WIFS 2026 (deadline July 15). Feedback welcome, especially if you see attack vectors I'm missing.
