I saved a record at 5 PM here in India.
When I viewed that same data as a user in the UAE, it still showed 5 PM.
It was wrong. It should have been 3:30 PM.
That specific bug forced me to stop guessing and actually understand how machines measure time. It turns out, it’s shockingly simple compared to the mess of timezones we deal with.
Everything starts from one fixed moment: the Unix Epoch (Jan 1, 1970, 00:00:00 UTC).
From that exact second, computers just start counting. Every timestamp is nothing more than "how many seconds (or milliseconds, depending on the system) have passed since 1970." That's it. Just a number.
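You can even look at that number straight from PostgreSQL. For example, in a psql session (the exact value obviously depends on when you run it):

```sql
-- Seconds elapsed since 1970-01-01 00:00:00 UTC: a plain number, nothing else
SELECT extract(epoch FROM now());
```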
The biggest realization I had is what’s not inside that number.
There is no timezone hiding in a Unix timestamp. It’s just a raw integer. The timestamp 0 always refers to the same instant in reality. It stays the same everywhere. The timezone only gets involved when you convert that number into a readable date.
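You can watch that conversion happen in psql: the same number 0 renders as two different clock readings but stays the same instant. (Asia/Kolkata and Asia/Dubai are just the standard IANA names for the Indian and UAE timezones.)

```sql
SET TIME ZONE 'Asia/Kolkata';
SELECT to_timestamp(0);   -- 1970-01-01 05:30:00+05:30

SET TIME ZONE 'Asia/Dubai';
SELECT to_timestamp(0);   -- 1970-01-01 04:00:00+04
-- Two different clock faces, one and the same instant.
```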
This is exactly where I messed up with PostgreSQL.
I was using TIMESTAMP (that is, timestamp without time zone). I was basically saving a picture of a clock. I saved "5 PM," and the database blindly showed "5 PM" to everyone, never knowing that my 5 PM and Dubai's 5 PM are two different moments.
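Here is roughly what I had, reduced to a sketch. The table name and the sample date are made up for illustration:

```sql
CREATE TABLE events_naive (happened_at TIMESTAMP);   -- timestamp WITHOUT time zone

-- I write the row from an Indian session...
SET TIME ZONE 'Asia/Kolkata';
INSERT INTO events_naive (happened_at) VALUES ('2024-06-01 17:00:00');

-- ...and a UAE session reads it back unchanged: still "5 PM"
SET TIME ZONE 'Asia/Dubai';
SELECT happened_at FROM events_naive;   -- 2024-06-01 17:00:00
```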
If I had used TIMESTAMPTZ, the database would have been smarter. It would have taken my 5 PM India time, converted it to the universal machine baseline (UTC 11:30 AM), and stored that.
Then, when the UAE user asked for the data, the database would have seen their timezone and done the math automatically: 11:30 AM + 4 hours = 3:30 PM.
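A sketch of the same flow with TIMESTAMPTZ, using the same made-up table and date:

```sql
CREATE TABLE events (happened_at TIMESTAMPTZ);       -- timestamp WITH time zone

-- Indian session: "5 PM" is read as IST and stored as the instant 11:30 UTC
SET TIME ZONE 'Asia/Kolkata';
INSERT INTO events (happened_at) VALUES ('2024-06-01 17:00:00');

-- UAE session: the stored UTC instant is converted on the way out
SET TIME ZONE 'Asia/Dubai';
SELECT happened_at FROM events;   -- 2024-06-01 15:30:00+04
```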
It’s a tiny piece of engineering, but it’s the difference between a system that breaks across borders and one that keeps the whole world in sync.
Store in UTC. View in Local.