DEV Community

Aleksei Kharitonov


I Built an AI Chat in React Native in 30 Lines — Here's the Library That Made It Possible

You know that feeling when you need to add an AI chat to your React Native app, and suddenly you're dealing with polyfills, manual EventSource implementations, token counting, streaming state management, and a dozen edge cases nobody warned you about?

Yeah. I had that feeling too. So I built react-native-ai-kit — one package that handles all of it.


The Problem

React Native doesn't have the browser EventSource API. If you want streaming AI responses (and you do — nobody wants to stare at a blank screen for 10 seconds while GPT thinks), you're stuck with:

  • Writing your own SSE parser from scratch
  • Managing streaming state with useRef and useEffect spaghetti
  • Handling abort/cleanup on unmount manually
  • Building chat UI from scratch every time
  • Adapting response formats for every different LLM provider

Existing libraries either lock you into a single provider, don't handle streaming properly, or give you a hook and say "good luck with the UI."
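To see what you end up hand-rolling without a library, here's a rough sketch (my own illustration, not this package's internals) of parsing one network chunk of an OpenAI-style SSE stream:

```typescript
// Each network chunk may contain several "data: ..." lines, a "[DONE]"
// sentinel, and JSON fragments split mid-object across chunks.
function parseSSEChunk(chunk: string): string[] {
  const tokens: string[] = [];
  for (const line of chunk.split('\n')) {
    if (!line.startsWith('data: ')) continue;
    const payload = line.slice('data: '.length).trim();
    if (payload === '[DONE]') break;
    try {
      const parsed = JSON.parse(payload);
      const delta = parsed.choices?.[0]?.delta?.content;
      if (delta) tokens.push(delta);
    } catch {
      // Partial JSON split across chunks: one of those edge cases
      // nobody warns you about. A real parser has to buffer this.
    }
  }
  return tokens;
}
```

And that still ignores reconnection, buffering across chunk boundaries, and cleanup on unmount.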

The Solution

react-native-ai-kit gives you everything in one install:

  • SSE client with automatic reconnection
  • React hooks for streaming and chat management
  • Ready-made UI components
  • Parsers for structured data extraction

And it works with any LLM backend — OpenAI, Anthropic, Ollama, your own API. No framework lock-in.


30 Lines. That's It.

import { useChat, ChatList, ChatBubble } from 'react-native-ai-kit';

function ChatScreen() {
  const { messages, sendMessage, isStreaming } = useChat({
    apiUrl: 'https://api.openai.com/v1/chat/completions',
    systemPrompt: 'You are a helpful assistant.',
    headers: {
      'Content-Type': 'application/json',
      // EXPO_PUBLIC_ vars are embedded in the client bundle;
      // proxy real keys through your own backend in production
      Authorization: `Bearer ${process.env.EXPO_PUBLIC_OPENAI_KEY}`,
    },
  });

  return (
    <ChatList
      messages={messages}
      renderMessage={(msg) => (
        <ChatBubble
          message={msg}
          variant={msg.role === 'user' ? 'user' : 'assistant'}
        />
      )}
    />
  );
}

No polyfills. No manual state management. No EventSource hacks. Just a working AI chat.


What's Inside

useChat — The All-in-One Hook

Handles messages, history, streaming, retry, and token tracking.

const {
  messages,      // Message[]
  sendMessage,   // (content: string) => void
  isStreaming,   // boolean
  tokenUsage,    // { promptTokens, completionTokens, totalTokens } | null
  retry,         // () => void
  stop,          // () => void
  clear,         // () => void
  error,         // Error | null
} = useChat(config);

Token usage comes for free — track costs without extra API calls.
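For example, a small hypothetical helper (mine, not part of the library; the per-token prices are placeholders, not real rates) that turns `tokenUsage` into a rough cost estimate:

```typescript
// Shape returned by useChat's tokenUsage, per the hook's signature above.
type TokenUsage = { promptTokens: number; completionTokens: number; totalTokens: number };

// Placeholder prices in USD per 1M tokens: check your provider's pricing page.
function estimateCostUSD(
  usage: TokenUsage,
  pricePerMInput = 0.15,
  pricePerMOutput = 0.6,
): number {
  return (
    (usage.promptTokens / 1_000_000) * pricePerMInput +
    (usage.completionTokens / 1_000_000) * pricePerMOutput
  );
}
```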

useAIStream — Low-Level Control

When you need raw streaming access without the chat management layer:

const { text, status, send, abort, reset } = useAIStream({
  apiUrl: 'https://api.anthropic.com/v1/messages',
  // ...
});

UI Components

  • ChatList — FlatList with auto-scroll
  • ChatBubble — User/assistant message variants with avatar support
  • StreamingText — Animated typing effect with deferred Markdown rendering
<StreamingText
  text={streamingText}
  showCursor
  renderContent={(text) => <Markdown>{text}</Markdown>}
/>

Markdown rendering is deferred until the stream completes — no more layout jumps from headings appearing mid-token.

Parsers — Extract Structured Data

Pure functions you can use anywhere — in hooks, utilities, middleware:

import { extractJSON, extractToolCalls, extractReasoning } from 'react-native-ai-kit';

// JSON from fenced code blocks or raw text
extractJSON('Result: ```json\n{"score": 0.95}\n```')
// => { score: 0.95 }

// Function calling data
extractToolCalls({ tool_calls: [{ type: 'function', function: { name: 'search', arguments: '{"q":"test"}' } }] })
// => [{ type: 'function', name: 'search', input: { q: 'test' } }]

// Chain-of-thought (DeepSeek-style)
extractReasoning('<think>Analyzing...</think>The answer is 42.')
// => { reasoning: 'Analyzing...', content: 'The answer is 42.' }

Any Backend, Zero Lock-In

The default config works with OpenAI out of the box. For anything else, two functions are all you need:

const { messages, sendMessage } = useChat({
  apiUrl: 'https://my-custom-api.com/chat',
  buildRequestBody: (msgs) => ({
    messages: msgs,
    model: 'my-model',
    stream: true,
  }),
  parseResponse: (chunk) => {
    const parsed = JSON.parse(chunk);
    return parsed.result;
  },
  headers: { Authorization: 'Bearer TOKEN' },
});

Works with Anthropic, Ollama, LangChain, Hugging Face, or literally any endpoint that returns streaming text.
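As an illustration, here's a sketch of an Ollama config built from those same two functions. It assumes Ollama's documented `/api/chat` streaming format, where each newline-delimited JSON object carries a `message.content` field; the model name is just an example.

```typescript
// Sketch: pointing the hook at a local Ollama server.
const ollamaConfig = {
  apiUrl: 'http://localhost:11434/api/chat',
  buildRequestBody: (msgs: { role: string; content: string }[]) => ({
    model: 'llama3', // any model you've pulled locally
    messages: msgs,
    stream: true,
  }),
  // Each stream chunk is a JSON object like {"message":{"content":"..."},"done":false}
  parseResponse: (chunk: string): string => {
    const parsed = JSON.parse(chunk);
    return parsed.message?.content ?? '';
  },
};
```

Pass `ollamaConfig` into `useChat` the same way as the custom-API example above.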


Clean Cancellation

Streams are automatically aborted on unmount. Manual cancellation is one call:

<Button onPress={stop} title="Stop generating" />

No dangling connections. No memory leaks.


Why Not Just Use...

| | react-native-ai-kit | openai-sdk | vercel/ai | react-native-sse |
| --- | --- | --- | --- | --- |
| React Native SSE | Yes | No (Node.js only) | Limited | Yes |
| Chat hooks | Yes | No | Yes | No |
| UI components | Yes | No | No | No |
| Response parsers | Yes | No | Partial | No |
| Any LLM backend | Yes | OpenAI only | Yes | Yes |
| Token tracking | Yes | Yes | Partial | No |

This isn't a competitor to Vercel AI SDK — it's the React Native piece that's missing from that ecosystem.


Installation

npm install react-native-ai-kit

Requires React Native >= 0.71 and React >= 18. Works with Expo and bare React Native.


What's Next

I'm actively developing this. Coming soon:

  • Image/vision support for multimodal models
  • Conversation persistence helpers
  • More UI component variants
  • Anthropic and Ollama preset configs

If you're building AI features in React Native, I'd love your feedback. Check out the repo — stars, issues, and PRs are all welcome.



If this saved you a few hours of SSE debugging, consider dropping a star — it helps others find it too.
