Originally published at programmingcentral.hashnode.dev

Building a Real-Time AI Chat UI with SwiftUI: The Ultimate Guide to Streaming Tokens and @Observable

The explosion of Large Language Models (LLMs) has changed what users expect from a chat interface. Gone are the days of waiting for a spinning loader to finish. Modern AI apps feel alive—they stream responses token by token, mimicking a real-time conversation.

But how do you build a UI that stays buttery smooth while receiving dozens of updates per second? The answer lies in the synergy between SwiftUI’s declarative paradigm and the Observation framework.

In this guide, we’ll dive into the reactive foundation of AI chat interfaces, exploring how to handle asynchronous data streams and build a high-performance chat bubble UI that scales.

The Reactive Foundation for AI Chat

Traditional apps work on a request-response cycle. AI apps work on a streaming cycle. When you query a model like GPT-4 or a local Core ML model, the data arrives incrementally via an AsyncSequence.

To handle this, your UI needs to be "reactive." Instead of manually updating a text label every time a new word arrives, we describe what the UI should look like based on the current state. SwiftUI then handles the heavy lifting of re-rendering only the parts of the screen that changed.
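
For instance, if your networking layer exposes the reply as an AsyncSequence of string chunks, consuming it is just a loop. A minimal sketch, where tokenStream is a hypothetical stand-in for whatever sequence your provider actually gives you (SSE bytes from URLSession, an MLX generator, a Core ML wrapper):

// `tokenStream` is a hypothetical placeholder for your real provider's
// AsyncSequence of incremental chunks.
func printReply(from tokenStream: AsyncStream<String>) async {
    for await token in tokenStream {
        // Each iteration delivers one incremental chunk, not the full reply.
        print(token, terminator: "")
    }
}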

Why the @Observable Macro is a Game Changer

With iOS 17, Apple introduced the @Observable macro, which is a massive leap forward for AI-driven apps. Unlike the older ObservableObject protocol, @Observable provides:

  1. Granular Updates: SwiftUI now tracks exactly which properties a view's body reads. If your ChatViewModel has ten properties but your chat bubble only reads currentMessage, the bubble won't re-render when the other properties change. This is vital for performance during high-frequency token streaming (see the sketch after this list).
  2. Less Boilerplate: No more @Published wrappers. The compiler synthesizes the observation code for you.
  3. Concurrency Friendly: It integrates cleanly with Swift Concurrency; combined with @MainActor, it's easy to guarantee that background AI tasks never mutate UI state off the main thread.
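
To make point 1 concrete, here is a tiny sketch (with hypothetical names): the view's body reads only title, so SwiftUI re-renders it when title changes and ignores rapid tokenCount mutations entirely.

import SwiftUI
import Observation

@Observable
final class StatsModel {
    var title = "Chat"
    var tokenCount = 0  // Mutated dozens of times per second while streaming
}

struct TitleView: View {
    let model: StatsModel

    var body: some View {
        // body reads only `title`, so updates to `tokenCount`
        // never trigger a re-render of this view.
        Text(model.title)
    }
}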

Managing AI State with Swift Concurrency

To keep the UI responsive, we must offload AI inference or API calls to background tasks. Here is how we structure a modern ChatViewModel using @Observable and @MainActor.

import Foundation
import Observation

@Observable
final class ChatViewModel {
    var messages: [ChatMessage] = []          // Completed turns, user and AI
    var currentAIMessageContent: String = ""  // Buffer for the reply currently streaming in
    var isLoading: Bool = false               // True while a reply is in flight

    struct ChatMessage: Identifiable, Hashable {
        let id = UUID()
        let content: String
        let isUser: Bool
    }

    @MainActor
    func appendToken(_ token: String) {
        currentAIMessageContent += token
    }

    @MainActor
    func startNewAIMessage() {
        isLoading = true
        currentAIMessageContent = ""
    }

    @MainActor
    func finishAIMessage() {
        if !currentAIMessageContent.isEmpty {
            messages.append(ChatMessage(content: currentAIMessageContent, isUser: false))
        }
        currentAIMessageContent = ""
        isLoading = false
    }
}

By marking these methods with @MainActor, we guarantee that state changes happen on the main thread, preventing race conditions while the AI model streams tokens in the background.
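
Wiring it together, a send method can kick off the stream in a Task and feed tokens back through these @MainActor mutators. This is a sketch under one assumption: tokenStream(for:) is a hypothetical placeholder standing in for your real streaming backend (OpenAI API, MLX, Core ML, and so on).

extension ChatViewModel {
    // Hypothetical placeholder for a real streaming backend.
    func tokenStream(for prompt: String) -> AsyncStream<String> {
        AsyncStream { continuation in
            for word in ["Hello", ", ", "world", "!"] {
                continuation.yield(word)
            }
            continuation.finish()
        }
    }

    @MainActor
    func send(_ prompt: String) {
        messages.append(ChatMessage(content: prompt, isUser: true))
        startNewAIMessage()

        Task {
            // Awaiting the stream suspends rather than blocks, so the
            // main thread stays responsive between tokens.
            for await token in tokenStream(for: prompt) {
                appendToken(token)
            }
            finishAIMessage()
        }
    }
}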

Implementing the Chat Bubble UI

The visual core of any chat app is the bubble. We need a flexible component that aligns to the right for the user and the left for the AI, with support for text wrapping and dynamic colors.

The Message Sender Logic

First, we define an enum to handle our styling logic:

enum MessageSender {
    case user
    case ai
}

The ChatBubbleView Component

Here is a robust implementation of a chat bubble designed for iOS 17. It uses HStack and Spacer to handle alignment and fixedSize to manage text wrapping.

struct ChatBubbleView: View {
    let message: String
    let sender: MessageSender

    var body: some View {
        HStack {
            if sender == .ai {
                messageContent
                Spacer() // Pushes AI message to the left
            } else {
                Spacer() // Pushes User message to the right
                messageContent
            }
        }
        .padding(.horizontal, 10)
    }

    private var messageContent: some View {
        Text(message)
            .font(.body)
            .padding(.horizontal, 12)
            .padding(.vertical, 8)
            .background(sender == .user ? Color.blue : Color.gray.opacity(0.2))
            .foregroundColor(sender == .user ? .white : .primary)
            .cornerRadius(15)
            .frame(maxWidth: 280, alignment: sender == .ai ? .leading : .trailing)
            // Allows the bubble to grow vertically but stay constrained horizontally
            .fixedSize(horizontal: false, vertical: true)
    }
}

Why This Works for AI

  1. The Spacer Trick: By placing a Spacer conditionally in an HStack, we create a flexible alignment system that feels natural on any screen size.
  2. Dynamic Wrapping: The .frame(maxWidth: 280) ensures that long AI responses don't stretch across the entire screen, which is a common UI pitfall. The .fixedSize modifier allows the text to wrap into multiple lines without being truncated.
  3. Accessibility: By using .font(.body), the UI automatically respects the user's Dynamic Type settings, ensuring your AI assistant is accessible to everyone.
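
To tie the pieces together, here is a minimal (hypothetical) ChatScreen that renders the finished messages, shows the in-flight reply straight from the streaming buffer, and auto-scrolls as tokens arrive using the common ScrollViewReader approach:

import SwiftUI

struct ChatScreen: View {
    @State private var viewModel = ChatViewModel()

    var body: some View {
        ScrollViewReader { proxy in
            ScrollView {
                LazyVStack(spacing: 8) {
                    ForEach(viewModel.messages) { message in
                        ChatBubbleView(message: message.content,
                                       sender: message.isUser ? .user : .ai)
                    }
                    // The in-flight reply renders directly from the streaming buffer.
                    if viewModel.isLoading {
                        ChatBubbleView(message: viewModel.currentAIMessageContent,
                                       sender: .ai)
                            .id("streaming")
                    }
                }
            }
            // Keep the newest token visible as the buffer grows (iOS 17 onChange).
            .onChange(of: viewModel.currentAIMessageContent) {
                proxy.scrollTo("streaming", anchor: .bottom)
            }
        }
    }
}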

Conclusion

Building a professional AI chat UI in SwiftUI is about more than just drawing boxes; it’s about managing the flow of data. By leveraging the @Observable macro and Swift’s structured concurrency, you can build an interface that handles rapid-fire token streaming without a single frame drop.

As AI models get faster, the efficiency of your UI state management will become your app's biggest competitive advantage.

Let's Discuss

  1. How are you handling "Auto-Scroll" in your SwiftUI chat views when new tokens arrive—do you prefer ScrollViewReader or a custom solution?
  2. With the shift to the @Observable macro, have you noticed a significant performance boost in your streaming-heavy apps compared to ObservableObject?

Leave a comment below and let's build better AI interfaces together!

The concepts and code demonstrated here are drawn directly from the comprehensive roadmap laid out in the ebook SwiftUI for AI Apps: building reactive, intelligent interfaces that respond to model outputs, stream tokens, and visualize AI predictions in real time. You can find it here: Leanpub.com

Also check out the other programming & AI ebooks on Python, TypeScript, C#, Swift, and Kotlin: Leanpub.com

Book 1: Core ML & Vision Framework.
Book 2: Apple Intelligence & Foundation Models.
Book 3: Natural Language & Speech.
Book 4: SwiftUI for AI Apps.
Book 5: Create ML Studio.
Book 6: MLX Swift & Local LLMs.
Book 7: visionOS & Spatial AI.
Book 8: Swift + OpenAI & LangChain.
Book 9: CoreData, CloudKit & Vector Search.
Book 10: Shipping AI Apps to the App Store.
