Arijit Ghosh
15 Things I Learned in Tech (That Actually Matter)

There’s a point where engineering stops being about writing correct code and starts being about making irreversible decisions under uncertainty.

At that stage, the hardest problems aren’t:

  • syntax
  • frameworks
  • algorithms

They’re:

  • trade-offs
  • failure modes
  • long-term consequences

Everything below comes from systems I’ve built, scaled, broken, and rebuilt. This is not theory. This is what reality taught me—sometimes expensively.


1. Tech Debt Is a Strategic Lever (If You Treat It Like One)

Early in my career, I thought tech debt was failure. Later, I realized it’s often the reason anything ships at all.

In one project, we had a narrow window to launch. We:

  • hardcoded configurations
  • duplicated logic
  • skipped abstraction layers

It worked. We got users.

Six months later:

  • every change required touching multiple places
  • onboarding new engineers took longer
  • bugs multiplied in unpredictable ways

The system wasn’t collapsing—but it was slowing us down invisibly.

The real problem wasn’t the debt. It was that we:

  • didn’t track it
  • didn’t prioritize repayment
  • didn’t quantify its cost

Personal insight:
Tech debt behaves like financial debt. If you don’t measure the interest, you’ll never know when it becomes unpayable.


2. Speed Is More Important Than Perfection

Perfection feels productive. Speed is actually productive.

I once spent weeks refining a feature:

  • optimized edge cases
  • clean abstractions
  • elegant architecture

Users ignored most of it.

In another case, we shipped a rough version in days:

  • incomplete flows
  • limited edge handling
  • basic UI

But we learned:

  • what users actually needed
  • what they ignored
  • what broke immediately

That feedback shaped the system far more than internal discussions ever could.

Personal insight:
Speed is not about rushing—it’s about reducing the time between decision and feedback.


3. Observability Is the Difference Between Control and Guessing

I’ve been in incidents where:

  • dashboards looked fine
  • alerts were unclear
  • logs were useless

We didn’t debug—we guessed.

That’s when I realized logging alone isn’t enough. You need observability as a system property.

A well-observed system lets you:

  • trace a request across services
  • correlate failures with inputs
  • understand system behavior under load

In practice, observability flows from instrumentation to insight: every request emits structured events, those events are collected centrally, and incidents are debugged by querying them rather than by guessing.
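As a minimal sketch of that idea (the handler, fields, and service names are illustrative, not from any real system): every request is tagged with a trace ID, and every log line carries it plus timing, so a failure can be followed across services.

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("orders")

def handle_request(payload, trace_id=None):
    """Handle one request, emitting a structured log line tagged with a trace ID."""
    trace_id = trace_id or str(uuid.uuid4())
    start = time.monotonic()
    try:
        return {"status": "ok", "items": len(payload.get("items", []))}
    finally:
        # One JSON event per request: correlatable, queryable, not a bare string.
        log.info(json.dumps({
            "trace_id": trace_id,  # propagate this to downstream services
            "duration_ms": round((time.monotonic() - start) * 1000, 2),
            "input_size": len(payload.get("items", [])),
        }))

handle_request({"items": [1, 2, 3]})
```

Downstream calls would pass the same `trace_id`, which is what makes cross-service correlation possible.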

Personal insight:
If you can’t explain a failure quickly, you don’t understand your system.


4. System Design Is Where Most Mistakes Become Permanent

I once worked on a system that looked clean at the code level—but was fundamentally flawed in design.

Problems:

  • shared databases across services
  • tight coupling between modules
  • no clear ownership boundaries

Scaling exposed everything:

  • deployments became risky
  • small changes caused cascading failures
  • teams blocked each other

We didn’t rewrite code—we had to rethink the architecture.

The contrast I wish I had understood earlier: messy code inside a well-bounded service is a local annoyance, while clean code on top of shared databases and unclear ownership is a structural trap.
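One way to picture the ownership-boundary point (a toy sketch, not the original system): each service keeps its storage private and exposes changes only through an interface, instead of letting other teams reach into its tables.

```python
class InventoryService:
    """Owns inventory data; nothing else touches its storage directly."""
    def __init__(self):
        self._stock = {}  # private storage: this is the ownership boundary

    def add_stock(self, sku, qty):
        self._stock[sku] = self._stock.get(sku, 0) + qty

    def reserve(self, sku, qty):
        """The only supported way for other services to change stock."""
        if self._stock.get(sku, 0) < qty:
            return False
        self._stock[sku] -= qty
        return True

class OrderService:
    """Depends on InventoryService's interface, not on its tables."""
    def __init__(self, inventory):
        self.inventory = inventory

    def place_order(self, sku, qty):
        return "confirmed" if self.inventory.reserve(sku, qty) else "rejected"

inv = InventoryService()
inv.add_stock("widget", 5)
orders = OrderService(inv)
```

With the boundary in place, the storage behind `InventoryService` can be migrated without touching `OrderService`; with a shared database, every schema change is everyone's problem.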

Personal insight:
Code can be refactored. Architecture often requires migration.


5. Deep Foundations Outlast Shiny Frameworks

Frameworks made me productive early on. But when systems broke, frameworks didn’t help—fundamentals did.

Problems I faced:

  • slow queries → needed database indexing knowledge
  • latency issues → needed networking understanding
  • race conditions → needed concurrency fundamentals

Engineers who understood fundamentals:

  • debugged faster
  • designed better systems
  • adapted across stacks

Personal insight:
Abstractions are helpful—until they leak. Fundamentals are what you fall back on.


6. The “Happy Path” Is a Lie We Tell Ourselves

In development:

  • inputs are valid
  • systems are stable
  • flows are predictable

In production:

  • users refresh mid-request
  • networks drop
  • services time out
  • retries overlap

I’ve seen a simple bug bring down a workflow because:

  • two requests updated the same record simultaneously

That led to inconsistent state.
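A common guard against that kind of lost update is optimistic locking: each record carries a version number, and a write succeeds only if the version has not changed since it was read. A minimal in-memory sketch (the record shape is hypothetical):

```python
class ConflictError(Exception):
    """Raised when another writer got there first."""

store = {"order-1": {"status": "pending", "version": 1}}

def update(record_id, changes, expected_version):
    """Apply changes only if nobody else wrote since we read the record."""
    record = store[record_id]
    if record["version"] != expected_version:
        raise ConflictError("record changed; re-read and retry")
    record.update(changes)
    record["version"] += 1
    return record

# First writer wins; a second writer holding the stale version must retry.
update("order-1", {"status": "paid"}, expected_version=1)
```

In a real database the same idea is a conditional `UPDATE ... WHERE version = ?`, and a zero-row result means conflict.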

Personal insight:
Concurrency and failure are the default—not edge cases.


7. Code Is Read Far More Than It Is Written

The hardest bugs I’ve solved weren’t complex—they were unclear.

I’ve spent hours understanding:

  • what a function was supposed to do
  • why a certain abstraction existed
  • how different parts interacted

Bad code increases:

  • cognitive load
  • onboarding time
  • error probability

Good code reduces decision-making friction.

Personal insight:
Readable code is not a luxury—it’s a scaling mechanism for teams.


8. Testing Is What Enables You to Move Fast Safely

In one system, we had minimal tests. Every deployment felt risky.

We hesitated to:

  • refactor
  • optimize
  • even fix bugs

Because we didn’t know what we’d break.

Later, in a well-tested system:

  • we deployed frequently
  • refactored confidently
  • caught issues early

How I now think about testing layers: many fast unit tests at the base, fewer integration tests above them, and a small number of end-to-end checks at the top.
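A fast unit test is the cheapest of those layers, cheap enough to run on every change. A toy example (the function under test is hypothetical):

```python
def apply_discount(price, percent):
    """Return the discounted price; reject nonsense inputs early."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid price or percent")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(100.0, 0) == 100.0
    try:
        apply_discount(-1, 20)
        assert False, "expected ValueError"
    except ValueError:
        pass

test_apply_discount()
```

Tests like this are what make "refactor confidently" a sentence you can say out loud.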

Personal insight:
Testing isn’t about correctness—it’s about confidence under change.


9. Users Are Fundamentally Unpredictable

Users don’t follow flows. They explore, misuse, and stress your system.

I’ve seen:

  • massive inputs in small fields
  • repeated rapid actions
  • usage patterns that bypass intended design

Systems that assumed “normal behavior” failed quickly.
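Defensive validation at the boundary is the cheapest guard against that: bound and normalize input before it reaches anything expensive. A toy sketch (the limits are illustrative):

```python
MAX_NAME_LEN = 100  # illustrative limit, not a standard

def validate_name(raw):
    """Normalize and bound user input before it touches storage."""
    if not isinstance(raw, str):
        raise ValueError("name must be a string")
    name = raw.strip()
    if not name:
        raise ValueError("name is empty")
    if len(name) > MAX_NAME_LEN:
        raise ValueError(f"name exceeds {MAX_NAME_LEN} characters")
    return name
```

The same discipline applies to rate limits and idempotency keys: assume the rapid repeated click, not the single careful one.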

Personal insight:
Design for worst-case behavior—not average behavior.


10. Storage Mechanics Dictate Your Ceiling

At scale, storage becomes the bottleneck.

I’ve faced issues like:

  • slow queries under load
  • unindexed data causing timeouts
  • uneven data distribution

One system hit a scaling wall because:

  • a single table became too large
  • query latency degraded sharply as the table grew

We had to redesign with partitioning: splitting rows across smaller tables by a key, so each query scans only one manageable slice.
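The core of hash partitioning fits in a few lines (the partition count and key are illustrative). Note that Python's built-in `hash()` is randomized per process for strings, so a stable digest is used for routing instead:

```python
import hashlib

PARTITIONS = 4  # assumed partition count for illustration

def partition_for(user_id: str) -> int:
    """Route a row to a partition by hashing its key.

    sha256 gives a stable result across processes, unlike built-in hash().
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % PARTITIONS
```

The same key always lands in the same partition, which is exactly what lets each query touch one slice instead of the whole table.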

Personal insight:
You can’t scale beyond what your storage model allows.


11. Scaling Compute Is Easy. Scaling State Is Hard

Adding servers is trivial. Managing shared state is not.

Problems I’ve encountered:

  • inconsistent data across nodes
  • synchronization delays
  • conflicting updates

Stateless systems scale easily: any node can serve any request, so capacity is just more nodes behind a load balancer.

Stateful systems introduce complexity: the data a request needs lives somewhere specific, so it must be routed, replicated, or synchronized.
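A toy illustration of the difference: two stateless nodes always agree, while two nodes each keeping an in-memory counter drift apart the moment traffic is split between them.

```python
# Stateless: the answer depends only on the request, so any node agrees.
def stateless_handler(request):
    return sum(request["values"])

# Stateful: each node accumulates its own counter, so nodes diverge.
class StatefulNode:
    def __init__(self):
        self.total = 0

    def handle(self, request):
        self.total += sum(request["values"])
        return self.total

node_a, node_b = StatefulNode(), StatefulNode()
node_a.handle({"values": [1, 2]})  # load balancer sent this request here...
node_b.handle({"values": [5]})     # ...and this one to the other node
# The two nodes now hold different totals for "the" counter.
```

Reconciling those diverged totals is the synchronization problem in miniature, and it is why state, not compute, is where the hard work lives.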

Personal insight:
State is where distributed systems become hard.


12. Caching Is a Tool, Not a Shortcut

Caching once improved performance dramatically in a system I worked on.

Then came:

  • stale data bugs
  • invalidation issues
  • inconsistent responses

The hardest problem wasn’t adding cache—it was knowing when to invalidate it.
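A minimal TTL cache makes the trade explicit: until an entry expires or is explicitly invalidated, readers may see stale data (the TTL and keys here are illustrative).

```python
import time

class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._data = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:  # expired: treat as a miss
            del self._data[key]
            return None
        return value

    def set(self, key, value):
        self._data[key] = (value, time.monotonic() + self.ttl)

    def invalidate(self, key):
        """Must be called on every write to the source of truth."""
        self._data.pop(key, None)

cache = TTLCache(ttl_seconds=60)
cache.set("user:1", {"name": "Ada"})
cache.invalidate("user:1")  # e.g. right after the user record is updated
```

The `invalidate` call is the whole game: miss one write path and you have the stale-data bugs described above.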

Personal insight:
Caching trades correctness for performance—use it consciously.


13. A System Is Only as Good as Its Adoption

I’ve built systems that were technically impressive—but unused.

And simple systems that:

  • solved one problem well
  • were easy to adopt
  • spread quickly

The difference wasn’t engineering quality—it was user value.

Personal insight:
Adoption is the ultimate validation of engineering decisions.


14. Resilience Is Not Optional

Failures happen:

  • services go down
  • networks fail
  • dependencies break

I’ve seen systems:

  • crash completely
  • or degrade gracefully

The difference is design.
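Graceful degradation can be as simple as a fallback around a flaky dependency: retry a couple of times with backoff, then serve a degraded answer instead of crashing. A sketch (the failing service and fallback values are made up):

```python
import time

def with_fallback(call, fallback, retries=2, backoff=0.1):
    """Try a flaky dependency a few times, then degrade instead of dying."""
    for attempt in range(retries + 1):
        try:
            return call()
        except ConnectionError:
            if attempt < retries:
                time.sleep(backoff * (2 ** attempt))  # exponential backoff
    return fallback  # degraded but alive

def flaky_recommendations():
    raise ConnectionError("recommendation service is down")

# The page still renders, just without personalization.
result = with_fallback(flaky_recommendations, fallback=["top-sellers"])
```

Circuit breakers take the same idea one step further by skipping the call entirely while the dependency is known-bad.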

Personal insight:
A system that fails gracefully is better than one that works perfectly—until it doesn’t.


15. Ownership Extends Beyond Code

The biggest shift in my thinking was this:

Writing code is only the beginning.

I’ve built systems that:

  • worked perfectly
  • but were hard to deploy
  • difficult to debug
  • painful to maintain

That’s not success.

True ownership includes:

  • how the system behaves in production
  • how easily others can work with it
  • how it evolves over time

Personal insight:
You don’t own code—you own outcomes.


Final Reflection

If there’s one pattern across all of this, it’s:

Reality is always more complex than your design.

  • systems fail
  • users surprise you
  • scale exposes weaknesses

Your job is not to eliminate these realities.

Your job is to:

  • anticipate them
  • design for them
  • adapt to them

Because in the end, great engineering isn’t about writing perfect systems.

It’s about building systems that survive imperfection.
