Prateek Mangalgi
I Built a Real-Time Video Calling App Using WebRTC in React Native, And It Was Harder Than I Expected

Most developers have used apps like Zoom, Google Meet, or WhatsApp calls.

But building one?

That’s a completely different story.

When I started working on my React Native WebRTC app, I thought:

“It’s just video streaming, right?”

I was wrong.

Very wrong.

The Moment I Realized This Isn’t Just Another App

In a normal app:

You send a request → server responds

In WebRTC:

Two devices talk directly

No middleman.
No API response cycle.
No “simple backend”.

Just two peers trying to:

  • Discover each other
  • Negotiate a connection
  • Exchange network details
  • Stream audio/video in real time

That’s when it hit me:

This isn’t frontend or backend.
This is network-level engineering.

What WebRTC Actually Does

WebRTC (Web Real-Time Communication) allows devices to communicate peer-to-peer with ultra-low latency.

Which means:

  • Your phone can stream video directly to another device
  • No media is routed through a server
  • Encryption is built in

And that’s powerful.

But also complex.

The Architecture Behind My WebRTC App

My app follows a classic but important structure:

1. Client Layer (React Native)

This is where everything starts.

The app:

  • Captures camera & microphone
  • Displays local & remote video
  • Handles UI for calling

React Native made it easier to build a cross-platform app for iOS and Android while still getting native WebRTC performance.

But UI was the easy part.
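The capture-and-render side can be sketched in a few lines. This assumes the react-native-webrtc library (the usual WebRTC binding for React Native); `LocalPreview` is just an illustrative component name:

```javascript
// Sketch: capture local media and render it, assuming react-native-webrtc.
import React, { useEffect, useState } from 'react';
import { mediaDevices, RTCView } from 'react-native-webrtc';

export function LocalPreview() {
  const [stream, setStream] = useState(null);

  useEffect(() => {
    // Ask for camera + microphone; resolves to a MediaStream.
    mediaDevices
      .getUserMedia({ audio: true, video: { facingMode: 'user' } })
      .then(setStream)
      .catch(console.warn);
  }, []);

  // RTCView renders a MediaStream via its URL.
  return stream ? <RTCView streamURL={stream.toURL()} style={{ flex: 1 }} /> : null;
}
```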

2. Signaling Server (The Unsung Hero)

Here’s the biggest misconception:

“WebRTC is peer-to-peer, so no server needed.”

Wrong.

You do need a signaling server.

Its job:

  • Help users find each other
  • Exchange connection data (SDP)
  • Share ICE candidates

Without signaling, peers can’t even start talking.

WebRTC does NOT define how signaling works; you have to build it yourself.

In my project, this became the coordination layer.
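Stripped to its essence, a signaling server is just a message relay: it forwards offers, answers, and ICE candidates between peers it knows about. A minimal in-memory sketch (a real one would do the same over WebSockets; `SignalingRelay` and the peer IDs are made up for illustration):

```javascript
// Minimal in-memory signaling relay: forwards messages between registered peers.
class SignalingRelay {
  constructor() {
    this.peers = new Map(); // peerId -> message handler
  }

  register(peerId, onMessage) {
    this.peers.set(peerId, onMessage);
  }

  send(from, to, message) {
    const deliver = this.peers.get(to);
    if (!deliver) throw new Error(`unknown peer: ${to}`);
    deliver({ from, ...message });
  }
}

// Usage: User A pushes an offer to User B through the relay.
const relay = new SignalingRelay();
const inboxB = [];
relay.register('userA', () => {});
relay.register('userB', (msg) => inboxB.push(msg));
relay.send('userA', 'userB', { type: 'offer', sdp: 'v=0 ...' });
// inboxB now holds: { from: 'userA', type: 'offer', sdp: 'v=0 ...' }
```

The relay never inspects the SDP; it only routes it. That is the whole job of the coordination layer.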

3. Peer Connection (The Core Engine)

Once signaling is done:

A peer connection is created

Devices exchange:

  • Offer
  • Answer
  • ICE candidates

This is where the magic happens.

The connection shifts from server-mediated signaling to direct device-to-device communication.
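The caller's side of that negotiation, sketched with react-native-webrtc's RTCPeerConnection and a hypothetical `signaling.send(message)` helper standing in for whatever signaling transport you built (the callee mirrors this with `createAnswer`):

```javascript
// Sketch: caller-side offer/answer handshake, assuming react-native-webrtc.
import { RTCPeerConnection } from 'react-native-webrtc';

async function startCall(signaling, localStream) {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
  });

  // Send our audio/video tracks to the remote peer.
  localStream.getTracks().forEach((track) => pc.addTrack(track, localStream));

  // Trickle ICE: forward each candidate as it is discovered.
  pc.onicecandidate = ({ candidate }) => {
    if (candidate) signaling.send({ type: 'candidate', candidate });
  };

  // The caller creates the offer and ships it over signaling.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send({ type: 'offer', sdp: pc.localDescription });

  // When the answer arrives: await pc.setRemoteDescription(answer);
  return pc;
}
```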

4. NAT Traversal (The Hidden Complexity)

Real-world networks are messy.

Devices sit behind:

  • Routers
  • Firewalls
  • NATs

So WebRTC uses:

  • STUN servers → discover your public IP and port
  • TURN servers → relay media if a direct connection fails

Without these, many connections simply wouldn’t work.
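From the app's point of view, all of this is just configuration handed to RTCPeerConnection. A sketch: the STUN URL below is a well-known public Google server, while the TURN entry is a pure placeholder (you have to run or rent your own TURN server, e.g. with coturn):

```javascript
// ICE server configuration for RTCPeerConnection.
const rtcConfig = {
  iceServers: [
    // STUN: lets the device discover its public IP/port.
    { urls: 'stun:stun.l.google.com:19302' },
    // TURN: relays media when no direct path works.
    // Placeholder host and credentials; substitute your own server.
    {
      urls: 'turn:turn.example.com:3478',
      username: 'demo',
      credential: 'secret',
    },
  ],
};
```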

5. Real-Time Media Flow

Once connected:

  • Audio and video streams flow directly between peers
  • No backend is involved in media transfer
  • Latency stays extremely low
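On the receiving side, this is a single event on the peer connection. A sketch, where `pc` is an RTCPeerConnection and `setRemoteStream` is a hypothetical state setter feeding an RTCView:

```javascript
// Sketch: attach the remote stream when tracks arrive from the other peer.
pc.ontrack = ({ streams }) => {
  // streams[0] is the remote MediaStream; hand it to <RTCView streamURL={...} />.
  setRemoteStream(streams[0]);
};
```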

This is why WebRTC is used in:

  • Video calls
  • Live collaboration
  • Telemedicine apps

The Full Flow

Here’s how a call actually happens:

  • User A creates an offer
  • The signaling server relays the offer to User B
  • User B responds with an answer
  • Both peers exchange ICE candidates
  • The peer connection is established
  • Media streams directly between devices

Not simple.

But beautiful.

What Made This Project Challenging

This wasn’t just coding.

It was understanding systems.

1. Debugging is painful

You’re not debugging functions.

You’re debugging:

  • Network states
  • ICE failures
  • Connection negotiation
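Most of that debugging starts with listening to state changes. A sketch, where `pc` is a hypothetical RTCPeerConnection; these handlers are standard WebRTC API:

```javascript
// Sketch: log every state transition; these logs are the first thing
// to check when a call fails.
pc.oniceconnectionstatechange = () => {
  // 'checking' then 'connected' is the happy path; 'failed' usually means
  // no candidate pair worked (often a missing TURN server).
  console.log('ICE state:', pc.iceConnectionState);
};
pc.onconnectionstatechange = () => {
  console.log('Connection state:', pc.connectionState);
};
pc.onsignalingstatechange = () => {
  console.log('Signaling state:', pc.signalingState);
};
```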

2. It’s asynchronous chaos

Everything happens in events:

  • Offers
  • Answers
  • Candidates
  • Streams

Miss one step → connection fails silently.

3. Documentation is scattered

WebRTC isn’t beginner-friendly.

You don’t “learn it once”.

You experience it over time.

What This Project Taught Me

Before this project, I thought:

“Apps are about APIs and UI.”

Now I know:

Some systems live below that layer.

WebRTC taught me:

  • Real-time systems are fundamentally different
  • Architecture matters more than code
  • Networking knowledge is underrated
  • Peer-to-peer is powerful but complex

From App Developer to System Thinker

This project changed how I think.

I stopped asking:

“How do I build this feature?”

And started asking:

“How does communication actually happen?”

That shift is what separates:

Developers from Engineers

Final Thought

Building a WebRTC app isn’t about video calling.

It’s about understanding:

  • Communication protocols
  • Network behavior
  • Real-time systems

And once you understand that…

You stop seeing apps as screens.

You start seeing them as systems.

GitHub Link: https://github.com/prateek-mangalgi-dev18/WebRTC-react-native-app

Portfolio Link: https://prateek-mangalgi.vercel.app/
