A few months ago I found myself repeatedly opening the same clunky workflow: download a GTFS ZIP, unzip it, open stops.txt in a spreadsheet, try to make sense of route shapes from raw lat/lon coordinates. There had to be a better way.
So I built TransitLens — a GTFS viewer and analysis tool that runs entirely in the browser. No server, no installation, no account required. Drop in a ZIP or paste a feed URL and you're looking at an interactive map of every route and stop in seconds.
Here's what I learned along the way.
## Why browser-only?
The obvious approach would be a backend: upload the file, parse it server-side, return JSON. But that creates real friction for the audience I was building for — transit agencies and developers who are often cautious about sending feed data to a third-party server. Some feeds contain schedule data that agencies consider sensitive before publication.
Running everything client-side solves this cleanly. The file never leaves the browser. There's nothing to sign up for, nothing to trust.
The tradeoff: you're parsing potentially large ZIP files (some GTFS feeds are 50MB+) entirely in JavaScript, in the main thread or a web worker.
## Parsing GTFS in the browser
GTFS is just a ZIP of CSV files. The parsing pipeline looks like this:
- Unzip — using JSZip to decompress in the browser
- Parse CSVs — streaming through stops.txt, routes.txt, trips.txt, stop_times.txt, calendar.txt, shapes.txt
- Build indexes — route → trips → stop_times → stops, so map interactions are fast
- Render — Leaflet.js for the map, with route polylines from shapes or interpolated from stop sequences
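I won't paste TransitLens's actual code here, but a minimal sketch of the parse-and-index steps might look like this (the helper names are mine, and the CSV split is deliberately naive — a real GTFS parser also has to handle quoted fields containing commas and newlines):

```javascript
// Naive CSV → array of row objects keyed by header column.
// Does NOT handle quoted fields; illustrative only.
function parseCsv(text) {
  const [header, ...rows] = text.trim().split("\n");
  const cols = header.split(",");
  return rows.map((row) => {
    const values = row.split(",");
    return Object.fromEntries(cols.map((c, i) => [c, values[i]]));
  });
}

// Build the route → trips index so a map click resolves quickly.
function indexTripsByRoute(trips) {
  const byRoute = new Map();
  for (const trip of trips) {
    if (!byRoute.has(trip.route_id)) byRoute.set(trip.route_id, []);
    byRoute.get(trip.route_id).push(trip);
  }
  return byRoute;
}

// Example with a tiny trips.txt fragment:
const trips = parseCsv(
  "route_id,service_id,trip_id\nA,WK,t1\nA,WK,t2\nB,WK,t3"
);
const byRoute = indexTripsByRoute(trips);
```

The same indexing pattern repeats for trip → stop_times and stop_time → stop, which is what makes clicking around the map feel instant.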
The biggest challenge was memory. stop_times.txt for a large city feed can have millions of rows. The trick was to build only the indexes you need at load time, and defer full trip expansion until the user actually inspects a route.
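The deferral idea is easier to see in code. This is an assumed simplification of the approach, not the tool's actual implementation: at load time, group the raw stop_times lines by trip_id without materializing an object per row, and only parse a trip's rows when the user actually opens it.

```javascript
// Load time: one pass over stop_times.txt, keeping raw lines grouped
// by trip_id. No per-row objects are allocated yet.
function groupRowsByTrip(stopTimesText) {
  const lines = stopTimesText.trim().split("\n");
  const header = lines[0].split(",");
  const tripCol = header.indexOf("trip_id");
  const byTrip = new Map();
  for (let i = 1; i < lines.length; i++) {
    const tripId = lines[i].split(",")[tripCol];
    if (!byTrip.has(tripId)) byTrip.set(tripId, []);
    byTrip.get(tripId).push(lines[i]); // keep the raw line; parse later
  }
  return { header, byTrip };
}

// Inspect time: expand one trip into full row objects on demand.
function expandTrip({ header, byTrip }, tripId) {
  return (byTrip.get(tripId) || []).map((line) => {
    const values = line.split(",");
    return Object.fromEntries(header.map((c, i) => [c, values[i]]));
  });
}
```

For a feed with millions of stop_times rows, the difference between "an object per row up front" and "a string per row, parsed on demand" is the difference between a tab that works and one that crashes.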
## Features that turned out to matter
I thought the map would be the main feature. It turns out the table view matters just as much. Transit developers often need to cross-reference route IDs, stop sequences, and service patterns — and a structured table with filtering is faster than the map for that kind of work.
The service calendar view also surprised me. Visualizing which days have how many trips, color-coded by service level (reduced, normal, peak), made it easy to spot calendar anomalies that are invisible in the raw calendar_dates.txt file.
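The core of that view is just a join between trips and calendar.txt service rows. A sketch of the counting logic (assumed, simplified — calendar_dates.txt exceptions would then adjust these counts on specific dates):

```javascript
const WEEKDAYS = [
  "monday", "tuesday", "wednesday", "thursday",
  "friday", "saturday", "sunday",
];

// Count scheduled trips per weekday: trips roll up by service_id,
// then each calendar.txt row fans its trip count out to the days
// it marks active ("1").
function tripsPerWeekday(calendarRows, trips) {
  const tripsPerService = new Map();
  for (const t of trips) {
    tripsPerService.set(
      t.service_id,
      (tripsPerService.get(t.service_id) || 0) + 1
    );
  }
  const counts = Object.fromEntries(WEEKDAYS.map((d) => [d, 0]));
  for (const svc of calendarRows) {
    const n = tripsPerService.get(svc.service_id) || 0;
    for (const day of WEEKDAYS) {
      if (svc[day] === "1") counts[day] += n;
    }
  }
  return counts;
}
```

Color-coding is then just thresholds on these counts relative to the feed's median day.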
Quality alerts were a late addition but quickly became one of the most-used features — things like routes with only one stop, trips with no shape, or stops with missing names. Small data issues that are easy to miss in raw CSV but immediately obvious when surfaced as warnings.
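To make the checks concrete, here is a sketch of the kind of rules involved (a hypothetical rule set with names of my choosing, not the tool's exact alert list):

```javascript
// Run a few structural checks over the parsed feed and return
// human-readable alerts.
function qualityAlerts({ routes, trips, stops, stopTimesByTrip }) {
  const alerts = [];

  // Routes where some trip serves one stop or fewer.
  for (const route of routes) {
    const routeTrips = trips.filter((t) => t.route_id === route.route_id);
    const stopCounts = routeTrips.map(
      (t) => (stopTimesByTrip.get(t.trip_id) || []).length
    );
    if (stopCounts.some((n) => n <= 1)) {
      alerts.push({
        level: "warning",
        msg: `route ${route.route_id} has a trip with one stop or fewer`,
      });
    }
  }

  // Trips with no shape reference.
  for (const trip of trips) {
    if (!trip.shape_id) {
      alerts.push({ level: "info", msg: `trip ${trip.trip_id} has no shape` });
    }
  }

  // Stops with an empty name.
  for (const stop of stops) {
    if (!stop.stop_name) {
      alerts.push({
        level: "warning",
        msg: `stop ${stop.stop_id} is missing a name`,
      });
    }
  }
  return alerts;
}
```

Each rule is cheap because it runs against the indexes that already exist for the map, so the whole pass adds essentially nothing to load time.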
## Spatial analysis with GeoJSON/KML overlays
One feature I didn't originally plan: importing GeoJSON or KML polygon files as overlay layers. It grew out of a specific need — being able to drop a service area boundary on top of the route map to visually check coverage. Now it's one of the more distinctive features; most GTFS tools don't do this.
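The "check coverage" part doesn't need a GIS library: a ray-casting point-in-polygon test against the polygon's outer ring is enough for a rough answer. A sketch of that check (illustrative, not TransitLens's implementation; GeoJSON rings are arrays of [lon, lat] pairs, and this ignores holes and antimeridian edge cases):

```javascript
// Ray-casting point-in-polygon test against a single ring of
// [lon, lat] pairs.
function pointInRing([lon, lat], ring) {
  let inside = false;
  for (let i = 0, j = ring.length - 1; i < ring.length; j = i++) {
    const [xi, yi] = ring[i];
    const [xj, yj] = ring[j];
    const crosses =
      yi > lat !== yj > lat &&
      lon < ((xj - xi) * (lat - yi)) / (yj - yi) + xi;
    if (crosses) inside = !inside;
  }
  return inside;
}

// Fraction of stops that fall inside a service-area boundary.
function coverage(stops, ring) {
  const inside = stops.filter((s) =>
    pointInRing([Number(s.stop_lon), Number(s.stop_lat)], ring)
  );
  return inside.length / stops.length;
}
```

Visually, the overlay does the same job — stops outside the boundary polygon stand out immediately on the map.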
## What's live today
- Interactive map of all routes and stops
- Route inspector — stops, service days, trip patterns
- Filter panel — by mode, agency, service day, time of day
- Service calendar — trip counts and exceptions by day
- Table view — routes and stops with data quality alerts
- GeoJSON/KML polygon overlays for spatial analysis
- Works with any valid static GTFS feed — just drop in a ZIP or paste a URL
You can try it at app.transit-lens.com — no signup, just load a feed.
## What's next
I'm working on better handling of very large feeds, deeper trip pattern analysis, and improving the quality alert coverage. If you work with GTFS or transit data and have a use case that isn't covered, I'd genuinely like to hear about it.
TransitLens is at transit-lens.com. The live tool is at app.transit-lens.com.