I've been building an Android app that overlays aircraft labels on a live camera feed when you point your phone at the sky. You fetch a plane's position from an ADS-B API, you have your own GPS location, and the goal is to draw a label at the correct pixel on screen.
The problem sounds straightforward until you try to implement it. There are four distinct coordinate spaces between the aircraft's GPS position and that pixel, and confusing any two of them gives you wrong output with no exception and no warning. The math compiles fine either way.
I looked for an existing resource that covers the whole chain, but everything I found treats one of those coordinate spaces in isolation rather than integrating them all together. So here is everything I implemented to get it working, written up for you!
This post walks through the entire pipeline from first principles: every formula, every sign rule, and a full numerical walkthrough using real captured values so you can verify your own implementation against known results.
The Pipeline at a Glance
The full transformation chain looks like this:
Geodetic (lat, lon, alt)
↓ Stage 1 — flat-earth approximation
ENU — East, North, Up (metres)
↓ Stage 2 — R⊤ from the Android rotation vector sensor
Device frame (dX, dY, dZ)
↓ Stage 3 — one sign flip
Camera frame (Cx, Cy, Cz)
↓ Stage 4 — perspective divide + FOV normalisation
Screen pixels (xpx, ypx)
Each stage is a separate coordinate system with its own axis convention. Confusing any two of them produces a result that looks plausible, points in roughly the right direction, and is still wrong.
Stage 1: Geodetic to ENU
What ENU is
ENU stands for East-North-Up. It is a local Cartesian frame centred on your position. The aircraft's ENU vector is its displacement from you in metres along three orthogonal axes: East, North, and Up. The user is always at the origin.
This frame is called a local tangent plane because it approximates Earth's surface as flat in the immediate vicinity of the observer. That approximation is valid for distances well under 100 km, which covers the entire ADS-B reception range for any ground station or aircraft receiver.
The flat-earth conversion
Given:
- User position: (φ_user, λ_user) in degrees, at altitude h_user metres
- Aircraft position: (φ_ac, λ_ac) in degrees
- R_E = 6,371,000 m (mean Earth radius)
- Aircraft altitude from the API in feet, converted to metres: h_ac = h_ft × 0.3048

The ENU components are:

E = (λ_ac − λ_user) × (π·R_E/180) × cos(φ_user)
N = (φ_ac − φ_user) × (π·R_E/180)
U = h_ac − h_user
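As a minimal sketch of Stage 1 in Java (Android's other first-class language) — the class and method names here are my own, not from any library:

```java
/** Flat-earth geodetic -> ENU conversion. Names are illustrative. */
final class GeoToEnu {
    static final double EARTH_RADIUS_M = 6_371_000.0;
    static final double METRES_PER_DEGREE = Math.PI * EARTH_RADIUS_M / 180.0; // ~111,195 m

    /** Returns {E, N, U} in metres of the aircraft relative to the user. */
    public static double[] toEnu(double userLatDeg, double userLonDeg, double userAltM,
                                 double acLatDeg, double acLonDeg, double acAltM) {
        // cos(user latitude) corrects for converging meridians (see below)
        double e = (acLonDeg - userLonDeg) * METRES_PER_DEGREE
                 * Math.cos(Math.toRadians(userLatDeg));
        double n = (acLatDeg - userLatDeg) * METRES_PER_DEGREE;
        double u = acAltM - userAltM;
        return new double[] { e, n, u };
    }

    /** ADS-B APIs report altitude in feet; convert to metres first. */
    public static double feetToMetres(double feet) {
        return feet * 0.3048;
    }
}
```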
Why the cosine appears in E but not N
This is the single most common implementation mistake. The cos(φ_user) term exists because meridians (lines of constant longitude) are not parallel — they converge toward the poles.
At the equator, one degree of longitude spans:

π·R_E/180 ≈ 111,195 m

At latitude φ, the same one degree of longitude spans only:

(π·R_E/180) × cos(φ)

Lines of latitude, by contrast, are parallel. One degree of latitude spans the same arc length regardless of where you are. That is why the N component has no cosine correction.
At 24.86° N (the reference location used in the walkthrough below), cos(φ_user) ≈ 0.907, so a one-degree longitude difference corresponds to about 9.3% fewer metres than the same latitude difference. An implementation that omits this factor will produce East-West positions that are correct at the equator and increasingly wrong as latitude increases, with no visible error at low latitudes to alert you.
Derived quantities
Once you have (E, N, U), range, bearing, and elevation follow directly:

d_3D = √(E² + N² + U²)
β = atan2(E, N), normalised to [0°, 360°)
ε = atan2(U, √(E² + N²))

Note the argument order in atan2(E, N) for bearing: North is the reference direction, so it goes in the second position. Swapping the arguments gives angles measured from East (the maths convention) instead of compass bearings from North.
These values are not consumed by the projection math directly, but they are invaluable for debugging. If the projected screen position is wrong, comparing computed bearing and elevation against a known map position immediately tells you whether the error is in Stage 1 or in a later stage.
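The derived quantities can be sketched like this (again, method names are mine):

```java
/** Bearing, elevation and range from an ENU displacement. Illustrative names. */
final class EnuAngles {
    /** Compass bearing in degrees, [0, 360): 0 = North, 90 = East. */
    public static double bearingDeg(double e, double n) {
        double b = Math.toDegrees(Math.atan2(e, n)); // East first, North second
        return (b + 360.0) % 360.0;                  // normalise to [0, 360)
    }

    /** Elevation above the horizon in degrees. */
    public static double elevationDeg(double e, double n, double u) {
        return Math.toDegrees(Math.atan2(u, Math.hypot(e, n)));
    }

    /** Straight-line (slant) range in metres. */
    public static double range(double e, double n, double u) {
        return Math.sqrt(e * e + n * n + u * u);
    }
}
```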
Accuracy
At 50 nautical miles (approximately 93 km, the practical ADS-B range limit), the relative error is around 0.11%, corresponding to roughly 100 metres of positional error. At the projection level, this is under 2 pixels on a 1080-wide screen with a 66-degree horizontal FOV. The approximation is entirely adequate for this application.
For distances above 100 km, the Haversine formula gives the exact great-circle distance on the sphere (angles in radians):

a = sin²(Δφ/2) + cos(φ_user)·cos(φ_ac)·sin²(Δλ/2)
d = 2·R_E·atan2(√a, √(1−a))
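A direct translation of the Haversine formula, as a sketch:

```java
/** Great-circle ground distance via the Haversine formula (spherical Earth). */
final class Haversine {
    static final double EARTH_RADIUS_M = 6_371_000.0;

    public static double distanceM(double lat1Deg, double lon1Deg,
                                   double lat2Deg, double lon2Deg) {
        double dPhi = Math.toRadians(lat2Deg - lat1Deg);
        double dLam = Math.toRadians(lon2Deg - lon1Deg);
        double phi1 = Math.toRadians(lat1Deg);
        double phi2 = Math.toRadians(lat2Deg);
        double a = Math.sin(dPhi / 2) * Math.sin(dPhi / 2)
                 + Math.cos(phi1) * Math.cos(phi2) * Math.sin(dLam / 2) * Math.sin(dLam / 2);
        // atan2 form is numerically stable even for antipodal points
        return 2.0 * EARTH_RADIUS_M * Math.atan2(Math.sqrt(a), Math.sqrt(1.0 - a));
    }
}
```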
Stage 2: ENU to Device Frame via the Rotation Matrix
What the rotation matrix does
Android's TYPE_ROTATION_VECTOR sensor fuses the accelerometer, gyroscope, and magnetometer to output a quaternion representing the phone's orientation relative to Earth. Calling SensorManager.getRotationMatrixFromVector(R, values) converts that quaternion to a 3×3 rotation matrix stored in a FloatArray(9) in row-major order:

R = | R[0] R[1] R[2] |
    | R[3] R[4] R[5] |
    | R[6] R[7] R[8] |
The column interpretation is the most important thing to understand about this matrix. Each column of R is a unit vector describing where one device axis points in the ENU world frame:
- Column 0: world direction of the device's X axis (right edge of phone)
- Column 1: world direction of the device's Y axis (top edge of phone)
- Column 2: world direction of the device's Z axis (out of screen, toward your face)

In other words, R transforms vectors from device frame into ENU world frame:

v_world = R · v_device
Going the other direction: R transpose
The projection engine needs the inverse transform. Given an ENU vector (the aircraft's position relative to the user), we want its representation in the device frame so we can reason about whether it is in front of or behind the camera.
Because R is orthonormal (it is a rotation matrix, so R⁻¹ = Rᵀ), the inverse is just the transpose:

v_device = Rᵀ · v_world

Expanding this using the row-major index layout of Android's FloatArray(9):

dX = R[0]·E + R[3]·N + R[6]·U
dY = R[1]·E + R[4]·N + R[7]·U
dZ = R[2]·E + R[5]·N + R[8]·U

Note the stride-3 indexing: the rows of Rᵀ are the columns of R.
Magnitude preservation as a sanity check
Because R is orthonormal, it preserves vector magnitudes exactly:

‖Rᵀ·v‖ = ‖v‖

This means after computing (dX, dY, dZ), you can immediately check:

√(dX² + dY² + dZ²) = √(E² + N² + U²) = d_3D
If these do not match (accounting for floating-point rounding), there is a matrix indexing error somewhere. This check costs essentially nothing and catches the most common implementation mistake immediately.
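Stage 2 plus the sanity check can be sketched like this; the array layout matches getRotationMatrixFromVector(), but the helper names are mine:

```java
/** ENU -> device frame via R-transpose, with the magnitude sanity check. */
final class WorldToDevice {
    /**
     * r is the 9-element row-major rotation matrix from
     * SensorManager.getRotationMatrixFromVector(). Returns {dX, dY, dZ}.
     */
    public static float[] toDevice(float[] r, double e, double n, double u) {
        // Columns of R are rows of R-transpose: hence the stride-3 indexing
        float dx = (float) (r[0] * e + r[3] * n + r[6] * u);
        float dy = (float) (r[1] * e + r[4] * n + r[7] * u);
        float dz = (float) (r[2] * e + r[5] * n + r[8] * u);
        return new float[] { dx, dy, dz };
    }

    /** Rotation preserves length; a mismatch means a matrix indexing bug. */
    public static boolean magnitudeOk(float[] d, double e, double n, double u) {
        double before = Math.sqrt(e * e + n * n + u * u);
        double after = Math.sqrt((double) d[0] * d[0] + (double) d[1] * d[1]
                               + (double) d[2] * d[2]);
        return Math.abs(before - after) < 1e-3 * Math.max(1.0, before); // float rounding
    }
}
```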
Stage 3: Device Frame to Camera Frame
Axis conventions
Android's sensor frame and the camera frame have different axis conventions for the Z direction:
| Axis | Device frame (dZ) | Camera frame (Cz) |
|---|---|---|
| Meaning | Out of screen, toward your face | Into the scene, away from the lens |
These point in opposite directions. The full mapping for a phone held upright in portrait mode is:
| Device axis | Physical meaning | Camera axis |
|---|---|---|
| dX | Right edge of phone | Cx (camera right) |
| dY | Top edge of phone | Cy (camera up) |
| dZ | Out of screen | Cz = −dZ (negate) |

So the camera frame components are:

Cx = dX
Cy = dY
Cz = −dZ

The negation on dZ is the only required correction for portrait mode. No matrix, no remap call, just one sign flip.
The occlusion test
An aircraft is behind the camera if and only if Cz ≤ 0. At Cz = 0 the perspective division is undefined, and for Cz < 0 it produces a nonsensical result mirrored to the wrong side of the image plane. Any point with Cz ≤ 0 must be classified as off-screen before proceeding.
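Stage 3 is tiny; as a sketch (names mine):

```java
/** Device frame -> camera frame (portrait mode) plus the behind-camera test. */
final class DeviceToCamera {
    /** Returns {Cx, Cy, Cz}: only the Z axis flips sign in portrait. */
    public static float[] toCamera(float dX, float dY, float dZ) {
        return new float[] { dX, dY, -dZ };
    }

    /** True when the point is behind the image plane and must not be projected. */
    public static boolean isBehind(float cz) {
        return cz <= 0f;
    }
}
```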
Stage 4: Perspective Projection to Screen Pixels
The pinhole camera model
A point (Cx, Cy, Cz) in camera space projects onto the image plane via similar triangles. Setting the focal length to 1 (normalised):

x′ = Cx / Cz
y′ = Cy / Cz

This is the perspective divide. An object twice as far away (Cz doubled) produces half the projected displacement from centre. This is what gives AR overlays their correct sense of depth and scale.
Field of view normalisation
The perspective divide alone gives values whose scale depends on the camera's focal length. Normalising by the half-FOV tangent maps the visible frustum to [−1, 1] (Normalised Device Coordinates, NDC):

NDCx = Cx / (Cz · tan(θH/2))
NDCy = Cy / (Cz · tan(θV/2))

where θH and θV are the horizontal and vertical field of view angles. An aircraft is inside the visible frustum when:

Cz > 0 and |NDCx| ≤ 1 and |NDCy| ≤ 1

For a typical Android rear camera in portrait mode: θH ≈ 66°, θV ≈ 50°.
NDC to screen pixels
Given screen dimensions W × H in pixels:

xpx = (NDCx + 1) / 2 × W
ypx = (1 − NDCy) / 2 × H

The (1 − NDCy) in the second formula is the Y-axis flip. Screen coordinates have (0, 0) at the top-left corner, with y increasing downward. Camera Y points up. These are opposite conventions, and the formula corrects for it. If you write (NDCy + 1) / 2 instead, aircraft above the horizon appear below screen centre and vice versa.
The full projection formula
Substituting NDC back in:

xpx = (Cx / (Cz · tan(θH/2)) + 1) / 2 × W
ypx = (1 − Cy / (Cz · tan(θV/2))) / 2 × H
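Stage 4 end to end, as a sketch (names and the return-array convention are mine):

```java
/** Camera frame -> screen pixels: perspective divide, FOV normalisation, Y flip. */
final class Projector {
    /** Returns {xPx, yPx, ndcX, ndcY}. Caller must ensure cz > 0 first. */
    public static double[] project(double cx, double cy, double cz,
                                   double fovHdeg, double fovVdeg,
                                   double widthPx, double heightPx) {
        double ndcX = cx / (cz * Math.tan(Math.toRadians(fovHdeg) / 2.0));
        double ndcY = cy / (cz * Math.tan(Math.toRadians(fovVdeg) / 2.0));
        double xPx = (ndcX + 1.0) / 2.0 * widthPx;
        double yPx = (1.0 - ndcY) / 2.0 * heightPx; // flip: camera Y up, screen Y down
        return new double[] { xPx, yPx, ndcX, ndcY };
    }

    /** Frustum test from the text: in front of camera and inside [-1, 1] NDC. */
    public static boolean onScreen(double cz, double ndcX, double ndcY) {
        return cz > 0 && Math.abs(ndcX) <= 1.0 && Math.abs(ndcY) <= 1.0;
    }
}
```

A point dead ahead (Cx = Cy = 0) lands exactly at screen centre, which is a quick manual check worth running on your own implementation.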
Equivalence to the camera intrinsics matrix
This formula is identical to the standard camera intrinsics model p = K·[Cx Cy Cz]ᵀ followed by the divide by Cz. The intrinsics matrix for this projection is:

K = | fx   0  cx |
    |  0 −fy  cy |
    |  0   0   1 |

where:

fx = W / (2 · tan(θH/2))
fy = H / (2 · tan(θV/2))
cx = W / 2
cy = H / 2

The −fy encodes the Y-axis flip. With W = 1080 and θH = 66°: fx ≈ 831.5 px.
The scalar formula and the matrix formulation are the same computation. Understanding the equivalence matters when you want to incorporate lens distortion correction later, which requires working in the intrinsics framework.
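A small check of that equivalence, as a sketch (names mine): the pixel coordinate via K and the divide by Cz must agree with the scalar Stage 4 formula.

```java
/** Verifies the scalar projection equals the intrinsics-matrix route. */
final class IntrinsicsCheck {
    public static double fx(double w, double fovHdeg) {
        return w / (2.0 * Math.tan(Math.toRadians(fovHdeg) / 2.0));
    }

    public static double fy(double h, double fovVdeg) {
        return h / (2.0 * Math.tan(Math.toRadians(fovVdeg) / 2.0));
    }

    /** x pixel via K: first row of K·[Cx Cy Cz], then divide by Cz. */
    public static double xViaK(double cxCam, double czCam, double w, double fovHdeg) {
        return (fx(w, fovHdeg) * cxCam + (w / 2.0) * czCam) / czCam;
    }

    /** x pixel via the scalar formula from Stage 4. */
    public static double xViaScalar(double cxCam, double czCam, double w, double fovHdeg) {
        double ndcX = cxCam / (czCam * Math.tan(Math.toRadians(fovHdeg) / 2.0));
        return (ndcX + 1.0) / 2.0 * w;
    }
}
```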
Off-Screen Direction Classification
When an aircraft is not in the frustum, a good AR overlay shows an edge indicator pointing toward it. The decision tree is:
- Cz ≤ 0: aircraft is behind the camera. Show a BEHIND indicator or suppress.
- Cz > 0 and NDCx < −1: off the left edge.
- Cz > 0 and NDCx > 1: off the right edge.
- Cz > 0 and NDCy > 1: above the top edge.
- Cz > 0 and NDCy < −1: below the bottom edge.

For the indicator screen position (with edge padding p):
LEFT: x = p, y = clamp(ypx, p, H-p)
RIGHT: x = W-p, y = clamp(ypx, p, H-p)
UP: x = clamp(xpx, p, W-p), y = p
DOWN: x = clamp(xpx, p, W-p), y = H-p
The xpx, ypx values from the projection formula (extrapolated even when off-screen) give the correct rotation angle for the indicator arrow, so compute them regardless of frustum status.
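The decision tree and the clamped indicator positions above can be sketched as (enum and helper names are mine):

```java
/** Edge classification and indicator placement for off-screen aircraft. */
final class EdgeIndicator {
    public enum Dir { ON_SCREEN, BEHIND, LEFT, RIGHT, UP, DOWN }

    public static Dir classify(double cz, double ndcX, double ndcY) {
        if (cz <= 0) return Dir.BEHIND;     // behind the camera: never project
        if (ndcX < -1) return Dir.LEFT;
        if (ndcX > 1) return Dir.RIGHT;
        if (ndcY > 1) return Dir.UP;        // camera Y up: ndcY > 1 is above frame
        if (ndcY < -1) return Dir.DOWN;
        return Dir.ON_SCREEN;
    }

    /** Clamped indicator position with edge padding p. */
    public static double[] position(Dir d, double xPx, double yPx,
                                    double w, double h, double p) {
        switch (d) {
            case LEFT:  return new double[] { p,     clamp(yPx, p, h - p) };
            case RIGHT: return new double[] { w - p, clamp(yPx, p, h - p) };
            case UP:    return new double[] { clamp(xPx, p, w - p), p };
            case DOWN:  return new double[] { clamp(xPx, p, w - p), h - p };
            default:    return new double[] { xPx, yPx };
        }
    }

    private static double clamp(double v, double lo, double hi) {
        return Math.max(lo, Math.min(hi, v));
    }
}
```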
Full Numerical Walkthrough
You can skip this boring part
Real captured values from a live debugging session.
Given
| Symbol | Description | Value |
|---|---|---|
| φ_user, λ_user | User position | 24.8600° N, 80.9813° E |
| φ_ac, λ_ac | Aircraft position | 24.9321° N, 81.0353° E |
| h_ac | Aircraft altitude | 18,000 ft = 5,486.4 m |
| W × H | Screen | 1080 × 1997 px |
| θH, θV | Camera FOV | 66°, 50° |
| Azimuth / Pitch / Roll | Phone orientation | 33.0°, −4.3°, −6.5° |
Stage 1: Geodetic to ENU
Stage 2: Bearing and elevation
Stage 3: Rotation matrix
The rotation matrix captured from getRotationMatrixFromVector() during the session:
Applying Rᵀ via column indexing:
Applying the sign fix:
Magnitude check:
Against the d_3D computed in Stage 1, the 12 m difference is rounding from truncating intermediate values to 3 decimal places. ✓
Stage 4: Perspective projection
Both NDC values are large negatives because Cz < 0: the aircraft is behind the camera in this captured frame (the phone was not pointing at the aircraft). The off-screen indicator fires, and the horizontal direction resolves to RIGHT.
This is the correct result. At azimuth 33.0° with the aircraft at bearing 37.0°, the aircraft is only 4° away horizontally. The pitch of −4.3° with elevation 29.4° means the phone is pointing far too low — the net vertical angle to the aircraft is 33.7°, well above the 25° half-FOV for the vertical axis. The phone needs to tilt up considerably to bring the aircraft into frame.
Error Budget
Every approximation in the pipeline introduces a bounded error. These are independent, so the total worst-case screen error is roughly their sum.
| Source | Max positional error | Screen error at 50 nm | Mitigation |
|---|---|---|---|
| Flat-earth vs. Haversine | ~130 m at 100 km | < 2 px | Acceptable; use Haversine beyond 100 km |
| Spherical vs. WGS-84 ellipsoid | ~55 m at 100 km | < 1 px | Negligible |
| User altitude = 0 | up to the user's elevation in metres | 1–5 px near mountains | Read GPS altitude |
| Sensor latency (normal pan) | ~0.1° | < 3 px | Acceptable |
| Sensor latency (fast pan, 60°/s) | ~0.6° | ~16 px | Low-pass filter |
| Lens distortion (not corrected) | < 10 px at corners | < 10 px | Camera2 distortion coefficients |
The flat-earth error derivation: expanding the exact great-circle distance in a series and retaining the leading correction term shows the relative error grows quadratically with distance. At 93 km this is about 0.11%.
Quick Reference
ENU (flat-earth)
E = Δλ × (π·RE/180) × cos(φ_user)
N = Δφ × (π·RE/180)
U = h_aircraft - h_user
Distance and bearing from ENU
d_3D = sqrt(E² + N² + U²)
β = atan2(E, N) → [0, 360)
ε = atan2(U, sqrt(E² + N²))
World to device (R transpose — column indexing)
dX = R[0]·E + R[3]·N + R[6]·U
dY = R[1]·E + R[4]·N + R[7]·U
dZ = R[2]·E + R[5]·N + R[8]·U
Device to camera (portrait mode)
Cx = dX, Cy = dY, Cz = -dZ
Perspective projection
NDCx = Cx / (Cz · tan(θH/2))
NDCy = Cy / (Cz · tan(θV/2))
Screen pixels (origin top-left)
xpx = (NDCx + 1) / 2 × W
ypx = (1 - NDCy) / 2 × H
Visible when: Cz > 0 AND |NDCx| ≤ 1 AND |NDCy| ≤ 1
Default FOV (portrait, typical rear camera)
θH ≈ 66°, θV ≈ 50°
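The whole quick reference fits in one function. A sketch in Java, with illustrative names and the return-array convention being my own choice:

```java
/** End-to-end: geodetic positions + rotation matrix -> screen pixel + visibility. */
final class AircraftProjector {
    static final double R_E = 6_371_000.0;
    static final double M_PER_DEG = Math.PI * R_E / 180.0;

    /**
     * r: row-major 3x3 rotation matrix from the rotation-vector sensor.
     * Returns {xPx, yPx, visible (1 or 0), Cz}.
     */
    public static double[] project(double uLat, double uLon, double uAltM,
                                   double aLat, double aLon, double aAltM,
                                   float[] r, double fovH, double fovV,
                                   double w, double h) {
        // Stage 1: geodetic -> ENU (flat earth)
        double e = (aLon - uLon) * M_PER_DEG * Math.cos(Math.toRadians(uLat));
        double n = (aLat - uLat) * M_PER_DEG;
        double u = aAltM - uAltM;
        // Stage 2: ENU -> device frame via R-transpose (column indexing)
        double dX = r[0] * e + r[3] * n + r[6] * u;
        double dY = r[1] * e + r[4] * n + r[7] * u;
        double dZ = r[2] * e + r[5] * n + r[8] * u;
        // Stage 3: device -> camera (portrait): one sign flip
        double cX = dX, cY = dY, cZ = -dZ;
        // Stage 4: perspective divide + FOV normalisation + pixel mapping
        double ndcX = cX / (cZ * Math.tan(Math.toRadians(fovH) / 2.0));
        double ndcY = cY / (cZ * Math.tan(Math.toRadians(fovV) / 2.0));
        double xPx = (ndcX + 1.0) / 2.0 * w;
        double yPx = (1.0 - ndcY) / 2.0 * h;
        boolean visible = cZ > 0 && Math.abs(ndcX) <= 1 && Math.abs(ndcY) <= 1;
        return new double[] { xPx, yPx, visible ? 1 : 0, cZ };
    }
}
```

With an identity rotation matrix the device axes coincide with ENU, so the camera looks straight down; an aircraft directly below an elevated observer then projects to the exact screen centre, which is a convenient end-to-end self-test.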
Here is also the full math reference document that I have created: Math Reference Document
Questions about any stage of the pipeline or anything else are welcome.
Hope this was helpful.




