Building an AI-powered app isn't like building a standard CRUD application. In a traditional app, navigation is linear: you tap a button, you push a view, you go back. But AI is different. It’s asynchronous, it’s non-linear, and it’s often unpredictable.
If you’re still using imperative navigationController.pushViewController logic to handle multi-step AI processes—like generating images, streaming LLM tokens, or processing documents—your architecture is likely on the verge of collapsing under its own complexity.
To build world-class AI experiences, we have to move toward State-Driven Navigation. Here is how you can leverage SwiftUI and modern Swift concurrency to build robust, reactive AI workflows.
Why Imperative Navigation Fails AI
In a typical AI workflow, the UI needs to react to the engine, not just the user. Imagine an image generation app:
- User enters a prompt.
- AI starts generating (Asynchronous state).
- AI returns a result, but it might be an error or a request for more info (Conditional branching).
- User refines the output (Looping state).
If you try to manage this with "if-this-then-push" logic, you end up with "Massive View Controller" syndrome. SwiftUI’s declarative paradigm offers a better way: The UI is a direct reflection of the workflow state.
The Foundation: Workflow as a State Machine
The most effective way to model an AI workflow is as a Finite State Machine (FSM). Instead of thinking about "screens," think about "states."
- States: PromptInput, Generating, Reviewing, Error.
- Events: SubmitPrompt, GenerationComplete, Retry.
- Transitions: The logic that moves you from one state to another.
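The states, events, and transitions above can be sketched as a pure Swift function. This is a minimal illustration, not code from the app that follows: the case names mirror the list, and returning `nil` for an invalid transition is just one possible convention.

```swift
// Hypothetical state and event names, taken from the list above.
enum WorkflowState: Equatable {
    case promptInput
    case generating
    case reviewing
    case error(String)
}

enum WorkflowEvent {
    case submitPrompt
    case generationComplete
    case generationFailed(String)
    case retry
}

// A pure transition function: given the current state and an event,
// return the next state, or nil if the transition is invalid.
func transition(_ state: WorkflowState, _ event: WorkflowEvent) -> WorkflowState? {
    switch (state, event) {
    case (.promptInput, .submitPrompt):            return .generating
    case (.generating, .generationComplete):       return .reviewing
    case (.generating, .generationFailed(let msg)): return .error(msg)
    case (.error, .retry), (.reviewing, .retry):   return .promptInput
    default:                                       return nil
    }
}
```

Because the function is pure, every legal and illegal move in the workflow can be checked with plain equality assertions, with no UI involved.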
By binding your SwiftUI NavigationStack to a central state object, your app becomes a reactive engine. When the AI model finishes its work, it updates the state, and the UI automatically "navigates" to the next step.
Thread Safety with Actors and Sendable
AI operations are computationally expensive. Whether you are running Core ML locally or hitting a remote API, you must handle concurrency safely.
1. Isolating AI State with Actors
AI models often hold internal mutable states (like cached tensors). Accessing these from multiple threads is a recipe for crashes. We use Actors to ensure that only one task interacts with the AI engine at a time.
```swift
@available(iOS 18.0, *)
actor CoreMLInferenceEngine {
    private var model: MyImageModel

    init() async {
        self.model = await loadModel()
    }

    func performInference(input: MLInput) async throws -> MLOutput {
        // Serialized execution ensures thread safety
        print("Processing AI Task...")
        try await Task.sleep(for: .seconds(2))
        return MLOutput(image: UIImage(systemName: "sparkles")!)
    }
}
```
2. Ensuring Data Safety with Sendable
When passing data between your background AI actor and the @MainActor (the UI), your data types must conform to Sendable. This tells the compiler that the data is safe to move across concurrency boundaries without causing race conditions.
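Here is a minimal sketch of that boundary crossing. The type and actor names (`InferenceResult`, `InferenceEngine`) are illustrative, not part of the article's API:

```swift
// A value type whose stored properties are all Sendable, so the
// compiler can verify it is safe to pass across actor boundaries.
struct InferenceResult: Sendable {
    let label: String
    let confidence: Double
}

actor InferenceEngine {
    func classify(_ input: String) async -> InferenceResult {
        // Placeholder for real model work.
        InferenceResult(label: input.uppercased(), confidence: 0.92)
    }
}

@MainActor
func updateUI(with result: InferenceResult) {
    // Safe: InferenceResult is Sendable, so moving it from the engine's
    // executor to the main actor cannot introduce a data race.
    print("Showing \(result.label) (\(result.confidence))")
}
```

If `InferenceResult` held a mutable reference type instead, the compiler would reject the `Sendable` conformance, which is exactly the race-condition check you want at build time.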
Implementing the Pattern: The Smart Document Scanner
Let’s look at a practical implementation using a NavigationStack and a NavigationPath. This pattern allows you to programmatically "drive" the user through a multi-step process.
The State-Driven Coordinator
```swift
@Observable
class AIWorkflowCoordinator {
    var path = [WorkflowStep]()
    var isProcessing = false

    enum WorkflowStep: Hashable {
        case processing(Document)
        case review(Document)
    }

    func startWorkflow(with doc: Document) async {
        // 1. Move to processing state
        path.append(.processing(doc))

        // 2. Perform AI Task
        do {
            let processedDoc = try await processWithAI(doc)
            // 3. Transition to review state
            path.append(.review(processedDoc))
        } catch {
            // Handle error state
        }
    }
}
```
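The coordinator above leaves the error branch as a comment. One way to fill it, sketched here with a simplified, SwiftUI-free coordinator and a hypothetical `.failed` step (neither appears in the article's code), is to swap the spinner step for a recovery step rather than popping anything manually:

```swift
// Hypothetical sketch: the step enum gains a failure case, so an
// error becomes just another navigable state.
enum WorkflowStep: Hashable {
    case processing(String)   // document id, simplified to String
    case review(String)
    case failed(String)       // error message
}

final class WorkflowPath {
    private(set) var path: [WorkflowStep] = []

    func startWorkflow(id: String, process: (String) throws -> String) {
        path.append(.processing(id))
        do {
            let processed = try process(id)
            path.append(.review(processed))
        } catch {
            // Replace the spinner step with a recovery step:
            // the UI transitions because the state changed, not
            // because anyone called pop or dismiss.
            path[path.count - 1] = .failed("\(error)")
        }
    }
}
```

The same `navigationDestination` switch then handles the failure case like any other screen.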
The SwiftUI View Hierarchy
In your view, you simply observe the path. SwiftUI handles the transitions automatically.
```swift
struct DocumentScannerView: View {
    @State private var coordinator = AIWorkflowCoordinator()

    var body: some View {
        NavigationStack(path: $coordinator.path) {
            CaptureView() // The Root
                .navigationDestination(for: AIWorkflowCoordinator.WorkflowStep.self) { step in
                    switch step {
                    case .processing(let doc):
                        ProcessingSpinnerView(document: doc)
                    case .review(let doc):
                        ReviewView(document: doc)
                    }
                }
        }
    }
}
```
Why This Matters for UX
When navigation is tied to state, the user experience feels "intelligent."
- Reactivity: If a stream of text comes in from an LLM, the UI can transition the moment the first token arrives.
- Robustness: If the network drops, the state machine moves to .error, and the UI instantly shows a recovery screen. No manual "pop" or "dismiss" required.
- Testability: You can test your entire AI workflow logic by simply asserting state transitions without ever touching a UI test.
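The testability point can be demonstrated with plain assertions and no UI at all. This self-contained sketch injects the AI call so a stub can stand in for the model; all names here are illustrative, not taken from the article's code:

```swift
enum Step: Hashable { case processing(String), review(String) }

final class Coordinator {
    private(set) var path: [Step] = []
    private let process: (String) async throws -> String

    // The AI call is injected, so tests can substitute a stub.
    init(process: @escaping (String) async throws -> String) {
        self.process = process
    }

    func start(_ doc: String) async {
        path.append(.processing(doc))               // spinner state
        if let result = try? await process(doc) {
            path.append(.review(result))            // review state
        }
    }
}

// The "UI test" reduces to asserting on state transitions:
let sut = Coordinator { doc in doc + "-ocr" }       // stubbed AI call
await sut.start("receipt")
assert(sut.path == [.processing("receipt"), .review("receipt-ocr")])
```

Because the path array is the single source of truth for navigation, asserting on it covers exactly what the user would see, without launching a simulator.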
Conclusion
The shift from imperative to declarative navigation is the "secret sauce" for modern AI applications. By treating your workflow as a state machine and leveraging Swift’s concurrency tools like Actors and @Observable, you create interfaces that are as dynamic as the models powering them. Stop fighting the navigation stack and start letting your state drive the experience.
Let's Discuss
- How are you currently handling long-running AI tasks in your UI? Do you prefer modal sheets or a linear navigation stack?
- Have you run into "race conditions" when passing AI model outputs between background threads and the Main Actor? How did you solve them?
The concepts and code demonstrated here are drawn directly from the roadmap laid out in the ebook SwiftUI for AI Apps: Building reactive, intelligent interfaces that respond to model outputs, stream tokens, and visualize AI predictions in real time. You can find it here: Leanpub.com or Amazon.
Check out the other programming ebooks on Python, TypeScript, C#, and Swift as well: Leanpub.com or Amazon.