Malik Chohra

Posted on • Originally published at codemeetai.substack.com

From React to React Native: what web devs get wrong on day one

I built three React Native apps before I really understood it.

The first took me three weeks to ship something that should have taken three days. The second, I shipped fast by ignoring half the platform constraints and paying for it later. The third was the boilerplate I wish I'd had on day one.

This was back in 2019, when React Native was still new, and I always assumed that jumping from React Native to ReactJS for websites would be smooth. It actually was. The other direction is a different story.

Since then, I've seen so many web developers jump into mobile apps and discover it's not the same. There are dependencies to manage, performance is a whole new topic, and package management has its own quirks. And that's before we even touch native integration or writing native code from scratch. There are so many mistakes waiting at that level, and I'll try to simplify life for you.

If you're a React developer planning to build a mobile app, this is the piece I'd hand you on day zero. What actually transfers from the web? What absolutely doesn't? Where does Expo fit in? What to learn first, and what to skip? And, because most "React to React Native" guides are written as if AI were still stuck in 2022, what shipping AI features inside a mobile app actually looks like.

"It's just React, right?"

Yes, it's JSX. Yes, it's hooks. Yes, your component model carries over. That's about a third of what makes up "shipping a working app."

The other parts, layout, navigation, storage, builds, debugging, and deployment, are different enough that pretending they aren't is the single biggest reason web devs give up on RN in week two.

So let me split it cleanly.

What transfers from React (the good news)

If you've shipped React on the web, these all carry over more or less untouched:

  • JSX and the component model. Same mental model. <MyComponent prop={value} /> is <MyComponent prop={value} />.
  • Hooks. useState, useEffect, useMemo, useCallback, useReducer, useContext, custom hooks all work the same.
  • TypeScript. Same setup, same tsconfig.json (almost). Expo gives you a working TS template by default.
  • State management. Zustand, Redux, Jotai. All work in RN. TanStack Query works. (If you're choosing between them, I broke down the trade-offs in Redux vs Zustand vs MobX in React Native.)
  • Most utility libraries. zod, date-fns, lodash, dayjs, uuid, all fine. Anything with no DOM dependency.
  • Patterns. Composition, lifting state up, container/presentational, render props if you're into that. All the same.

That's the part that lulls you into thinking "this is going to be smooth."

What doesn't transfer (the painful part)

Here's where week two starts.

No DOM elements. <div> doesn't exist. Neither does <span>, <button>, or <input>. You get <View>, <Text>, <Pressable>, <TextInput>. Every piece of text on the screen has to live inside a <Text>; putting a string directly in a <View> crashes the app at runtime. And by the way, if you lazy-load screens and never test one of them, that crash will surface hard after deployment, because the deployment pipeline is so different from the web. You can patch with an over-the-air (OTA) update, but more on that below.
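As a rough cheat sheet, the element mapping can be written down as a plain lookup table. The table itself is only an illustration of the mental model, not a real API; the names on the right are the actual react-native exports:

```typescript
// Illustrative lookup table: which RN primitive replaces which DOM element.
const domToRN: Record<string, string> = {
  div: 'View',
  span: 'Text',
  p: 'Text',
  button: 'Pressable',
  input: 'TextInput',
  img: 'Image',
  ul: 'FlatList', // for long lists; ScrollView for short ones
  a: 'Pressable', // there is no <a href>; navigation goes through the router
};

export function rnEquivalent(tag: string): string {
  // Anything without a direct equivalent ends up as a plain container.
  return domToRN[tag] ?? 'View';
}
```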

No CSS. No stylesheets, no media queries, no cascading, no :hover (there's no hover on mobile). You write style objects in JS, or you use a Tailwind equivalent like NativeWind, which I strongly recommend because it keeps you on familiar ground.
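To make the difference concrete, here's one card style written both ways; the class string assumes NativeWind's Tailwind-compatible naming:

```typescript
// Web CSS:  .card { padding: 16px; border-radius: 8px; background: #fff; }
// RN style object: camelCase keys, unitless numbers (density-independent points).
export const card = {
  padding: 16,
  borderRadius: 8,
  backgroundColor: '#fff',
};

// Passed as <View style={card}>. With NativeWind the same thing becomes a
// class string on the component: <View className="p-4 rounded-lg bg-white">.
```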

No React Router. Use Expo Router. It's file-based, feels closest to Next.js's app router, and is now the default for new Expo projects.
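Since Expo Router derives routes from file paths, the project structure is the routing table. A minimal layout looks roughly like this (file names illustrative):

```
app/
  _layout.tsx        // root layout: providers, stack/tab chrome
  index.tsx          // route "/"
  settings.tsx       // route "/settings"
  profile/
    [id].tsx         // dynamic route, e.g. "/profile/123"
```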

No browser APIs. No localStorage, no window, no document. You'll use AsyncStorage (or MMKV for performance), and react-native-reanimated for anything animated. There's no <a href>. There's no scroll event the same way. There's no getElementById.
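The localStorage habit that breaks first is synchronous access. Here's a sketch of the async pattern, with an in-memory Map standing in for AsyncStorage so the snippet is self-contained; the real module is @react-native-async-storage/async-storage and exposes the same getItem/setItem shape:

```typescript
// Stand-in for AsyncStorage so this sketch runs anywhere; the real module
// has the same async getItem/setItem signatures.
const store = new Map<string, string>();
const AsyncStorage = {
  getItem: async (key: string) => store.get(key) ?? null,
  setItem: async (key: string, value: string) => { store.set(key, value); },
};

// Unlike localStorage, every read is async and values must be serialized.
export async function saveSession(token: string, userId: number) {
  await AsyncStorage.setItem('session', JSON.stringify({ token, userId }));
}

export async function loadSession(): Promise<{ token: string; userId: number } | null> {
  const raw = await AsyncStorage.getItem('session');
  return raw ? JSON.parse(raw) : null;
}
```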

Forms work differently. <TextInput> doesn't auto-handle the keyboard. You manage focus, dismissal, keyboard-avoiding behavior, autocorrect, and autocapitalize. Keyboard handling on mobile is a small engineering problem in itself.

Images load asynchronously. You don't <img src=...> and forget. You think about caching, placeholders, error states, and image sizes. expo-image handles most of this.

Animations are different. CSS transitions don't exist. The Reanimated library is the standard, and it runs animations on the UI thread (separate from your JS thread), which is actually better than the web, but you have to learn worklets. React Native has two threads, JS and UI; animations should run on the UI thread to keep the app smooth. (For high-performance graphics specifically, RN Skia is worth knowing too.)

Build and deploy aren't Vercel. No git push and you're live. You use EAS Build for cloud builds, EAS Submit to push to the App Store and Play Store, and you wait for app review. EAS Update lets you ship JS-only patches over the air without going through review again. That's the closest thing to "deploy on push."

Debugging is different. Flipper is deprecated. React Native DevTools is the new standard, and it's actually decent now. But native crashes (the kind that surface as a stack trace from Java or Objective-C) require different muscle memory.

That's the surface area you didn't know you didn't know.

Expo vs bare React Native

This is the first real fork in the road.

Bare React Native means you have a native iOS project and a native Android project sitting next to your JS code. You can install any native module, customize anything, and you'll probably spend Saturday afternoons fighting CocoaPods, Gradle, and Xcode signing certificates.

Expo (managed) means you write JS only, install Expo-compatible native modules, and let EAS handle native builds in the cloud. You get OTA updates, a working dev client, and you're shipping to TestFlight in a day instead of a week.

If you're a web developer starting out: use Expo. Don't believe people who tell you it's "for prototypes." It's not 2020 anymore. Expo's ecosystem covers nearly everything you'll actually need (camera, notifications, biometrics, in-app purchases, deep linking, file system). Expo's managed workflow is the best RN developer experience available right now.

I tried the bare RN route on app two. I lost a weekend to an Xcode signing error that turned out to be a typo in app.json. I went back to Expo for app three and have not looked back.

What to learn first (the priority list)

If you have one week to come up to speed before starting, here's the order:

  1. Expo Router. File-based routing, layouts, and dynamic routes. Read the Expo Router docs, they're short.
  2. NativeWind. Tailwind for RN. Lets you keep your CSS muscle memory and skip writing StyleSheet.create({ ... }) for every component.
  3. The core RN primitives. View, Text, Pressable, ScrollView, FlatList, TextInput, Image. Know what each is for and when to use which.
  4. AsyncStorage (or MMKV). Your localStorage replacement. MMKV is faster but adds native code; AsyncStorage is fine for most cases.
  5. Reanimated basics. useSharedValue, useAnimatedStyle, withTiming, withSpring. You don't need to master worklets on day one, but you do need this for any real interaction.
  6. EAS Build and EAS Update. Your build and deploy story. Ten minutes of reading saves you hours.

That's enough to ship a real app. Everything else, learn when you need it.

What to avoid (the trap list)

These are the things that cost me time. Don't repeat them.

  • Don't try to make React Router work. Expo Router exists. Use it. (I still use React Navigation in my own projects because it's what I'm most familiar with, but Expo Router is king now.)
  • Don't write StyleSheet.create from scratch when NativeWind solves it for you. You'll be slower, your code will read worse, and you'll resist refactoring. You can also lean on a design-system library: faster and easier.
  • Don't disable Hermes. It's the default RN engine now: faster startup, smaller bundle, better debugging. You shouldn't need to touch this.
  • Don't use setInterval for animations. Use Reanimated. The frame drops will tell you why.
  • Don't ignore the keyboard. Test every screen with the keyboard open. KeyboardAvoidingView is the minimum; react-native-keyboard-controller is what I actually use in production now.
  • Don't ship without offline handling. Phones lose signal in elevators, on planes, on the subway. Check NetInfo and have a fallback. If you want a deeper pattern, I wrote about offline support and caching in Expo with custom queuing.
  • Don't assume iOS and Android behave the same. Safe-area insets, permissions UX, file system paths, system gestures, they diverge in places that matter. Test both.
  • Don't ship API keys in your app. This is the single biggest mistake I see web devs make moving over. Your .env ships in your bundle. Anyone can decompile it. You need a backend proxy for any third-party API call that requires a secret key. I wrote a longer piece on secure storage patterns in Expo that covers what to do with the secrets you do need on-device.

That last one matters more than ever now, because of the AI part.

The AI part nobody talks about

If you're building a mobile app today and there's no AI feature on your roadmap, you're either underestimating where the market is or you have a very specific reason. Most web devs come into RN with an LLM feature already on the spec.

Here's the honest version of what changes when you ship AI inside a mobile app.

Streaming LLM responses without dropping frames. On the web, you stream tokens into a <div> and let the browser paint. On mobile, your JS thread renders into a <Text>, and if you re-render too aggressively you drop frames. The pattern is to batch tokens before pushing them into React state, not to call setState for every chunk that comes off the stream.

API key management. I said it above and I'll say it again, because this is where most first-AI-mobile-app projects ship something insecure. You cannot put your OpenAI / Anthropic / whoever-else API key in your app. It will be extracted within minutes if anyone cares. You need a backend proxy, even a tiny one. A Cloudflare Worker or a Vercel function fronting the AI provider, with rate limiting per device, is the minimum.
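Here's a sketch of what that minimum can look like, shaped like a Cloudflare Worker fetch handler. The header name, upstream URL, and the in-memory rate counter are illustrative assumptions; production rate limiting would live in KV or a Durable Object, not in isolate memory:

```typescript
// Minimal AI proxy sketch: the secret stays in `env`, never in the app bundle.
type Env = { ANTHROPIC_API_KEY: string };

// Per-isolate request counter; illustration only, resets on every cold start.
const hits = new Map<string, number>();

export const worker = {
  async fetch(req: Request, env: Env): Promise<Response> {
    if (req.method !== 'POST') {
      return new Response('method not allowed', { status: 405 });
    }
    const deviceId = req.headers.get('x-device-id') ?? 'anonymous';
    const count = (hits.get(deviceId) ?? 0) + 1;
    hits.set(deviceId, count);
    if (count > 100) {
      return new Response('rate limited', { status: 429 });
    }
    // Attach the secret server-side and forward the request body upstream.
    return fetch('https://api.anthropic.com/v1/messages', {
      method: 'POST',
      headers: {
        'content-type': 'application/json',
        'x-api-key': env.ANTHROPIC_API_KEY,
        'anthropic-version': '2023-06-01',
      },
      body: req.body,
    });
  },
};

export default worker;
```

From the app, you call this Worker's URL with expo/fetch, exactly as the streaming hook later in this post does with its `apiUrl`.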

Generative UI on mobile. This is the gap. On the web, Tambo and Vercel AI SDK UI let you have an LLM render React components on the fly. On mobile, there's nothing equivalent that's stable yet. (Aside: I'm building one, open source. More on that another time.)

On-device inference is possible, barely. llama.rn, Core ML, MLKit can run small models locally for specific use cases (transcription, classification, simple chat). But for anything resembling Claude or GPT-4 quality, you're still calling an API. Plan for that.

Here's a representative snippet, a streaming chat hook the way I'd write it for a mobile app, with the buffer pattern that keeps frames smooth:

// useStreamingChat.ts: pattern from the AI Mobile Launcher boilerplate
// Note: streaming fetch in RN needs expo/fetch (Expo SDK 52+) or a polyfill.
import { useState, useRef, useCallback } from 'react';
import { fetch as expoFetch } from 'expo/fetch';

type Message = { role: 'user' | 'assistant'; content: string };

export function useStreamingChat(apiUrl: string) {
  const [messages, setMessages] = useState<Message[]>([]);
  const bufferRef = useRef('');
  const flushTimerRef = useRef<ReturnType<typeof setTimeout> | null>(null);

  const flush = useCallback(() => {
    if (!bufferRef.current) return;
    const chunk = bufferRef.current;
    bufferRef.current = '';
    setMessages((prev) => {
      const last = prev[prev.length - 1];
      if (last?.role !== 'assistant') return prev;
      return [
        ...prev.slice(0, -1),
        { ...last, content: last.content + chunk },
      ];
    });
  }, []);

  const send = useCallback(
    async (input: string) => {
      setMessages((prev) => [
        ...prev,
        { role: 'user', content: input },
        { role: 'assistant', content: '' },
      ]);

      // Hit your backend proxy. Never the AI API directly from the device.
      const res = await expoFetch(`${apiUrl}/chat`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ input }),
      });

      const reader = res.body!.getReader();
      const decoder = new TextDecoder();

      while (true) {
        const { value, done } = await reader.read();
        if (done) break;
        bufferRef.current += decoder.decode(value);
        // Batch UI updates at ~60fps instead of every token.
        if (!flushTimerRef.current) {
          flushTimerRef.current = setTimeout(() => {
            flush();
            flushTimerRef.current = null;
          }, 16);
        }
      }
      flush();
    },
    [apiUrl, flush]
  );

  return { messages, send };
}

That pattern alone, buffering tokens and flushing at ~60fps instead of on every chunk, fixes most of the dropped-frames issues new RN devs hit when they first try streaming.

The shortcut

If you're at day zero with React Native and you want to ship an AI-powered mobile app, here's the boring truth: the first two weeks are setup. Routing, styling system, secure API proxy, streaming UI, auth, EAS pipeline, build configs, app icons, splash screens. You'll do all of this before you write your first feature.

I built AI Mobile Launcher because I'd done that two-week setup three times in a row. It's an Expo + React Native boilerplate with:

  • React Navigation with auth screens scaffolded
  • Reanimated patterns ready
  • Backend proxy for OpenAI / Anthropic / OpenRouter (deploy to Cloudflare Workers in one command)
  • Streaming chat UI with the frame-safe pattern from the snippet above
  • EAS Build and EAS Update preconfigured
  • App icon, splash screen, and store metadata templated
  • RevenueCat, authentication, onboarding flows, a design system with react-native-restyle, and a scalable high-performance architecture built on the U-AMOS system; read about it here:

    I spent 6 months losing fights with AI in React Native. Then I built U-AMOS.

    The memory system that cut hallucinations 93% and token costs 91% across my own projects — and why the broader ecosystem is converging on the same pattern.


It's the boilerplate I'd hand my past self on day zero. If you're a web dev planning your first AI mobile app, it cuts the setup phase from two weeks to an afternoon.

Use it, fork it, ignore it. The goal is to not lose two weeks to plumbing.

One last thing

If you're a web dev planning to build a mobile app: stop reading the "React Native vs Flutter" arguments. The framework isn't your bottleneck. The surface area you don't know yet (keyboard handling, native builds, store submissions, AI key management) is.

Pick Expo. Ship something small. Hit a wall. Read the docs for that wall. Repeat.

That's the whole path.
