DEV Community

Programming Central

Posted on • Originally published at programmingcentral.hashnode.dev

Mastering AI UX: How to Animate Confidence Scores and Probability Distributions with Swift 6

Artificial Intelligence is often criticized for being a "black box." When a model tells a user an image is a "cat," the user is forced to take it at face value. But what if the model is only 51% sure? In modern app development, showing the result isn't enough; we need to show the process.

Animating confidence scores and probability distributions transforms static AI outputs into a living "thought process." By leveraging Swift 6 and SwiftUI, we can build interfaces that breathe life into raw data, fostering user trust through transparency and real-time interpretability.

Why Animate AI Interpretability?

Visualizing the internal state of an AI model serves four critical purposes:

  1. Enhanced User Trust: Seeing how a model arrives at a decision makes the system feel less like magic and more like a tool.
  2. Real-time Feedback: In live scenarios like object detection or transcription, flickering scores signal ambiguity, prompting the user to adjust their input.
  3. Better Debugging: Developers can instantly see where a model struggles or identify edge cases where confidence drops.
  4. Engaging UX: Dynamic visualizations make applications feel responsive and intelligent, rather than static and rigid.

The Architecture: SwiftUI Meets Swift 6 Concurrency

To build these reactive interfaces, we need a robust pipeline that handles heavy AI inference without freezing the UI. This is where the Swift 6 concurrency model shines.

1. Asynchronous Inference with async/await

AI model inference is computationally expensive. Using async/await, we can perform these operations on background threads, ensuring the main thread remains free to handle user interactions.
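As a minimal sketch of this idea, a blocking classifier call (standing in for a real Core ML model; both function names here are hypothetical) can be wrapped in a detached task so it never blocks the main thread:

```swift
// Hypothetical synchronous classifier standing in for real model inference.
func classifySync(_ text: String) -> [String: Double] {
    // ...expensive inference would run here...
    return ["Positive": 0.9, "Negative": 0.1]
}

// Wraps the blocking call so it runs off the main thread.
// Callers simply `await` the result; the UI stays responsive.
func classify(_ text: String) async -> [String: Double] {
    await Task.detached(priority: .userInitiated) {
        classifySync(text)
    }.value
}
```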

2. State Safety with Actors

When streaming predictions (like during live video processing), data races are a constant threat. Swift 6 Actors provide a safe way to manage mutable state, ensuring that our prediction history and latest scores are updated in a thread-safe environment.
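A small sketch of such an actor (the `PredictionStore` name and its smoothing window are assumptions, not part of any framework): every call is serialized by the actor, so concurrent video frames cannot corrupt the history.

```swift
// Hypothetical actor that serializes access to streaming prediction state.
actor PredictionStore {
    private(set) var history: [Double] = []
    private(set) var latestScore: Double = 0.0

    // Appends a new confidence score; actor isolation serializes calls,
    // so concurrent frames from a video pipeline cannot race.
    func record(score: Double) {
        latestScore = score
        history.append(score)
    }

    // A rolling average over recent scores smooths flicker
    // before the UI animates the value.
    func smoothedScore(window: Int = 5) -> Double {
        let recent = history.suffix(window)
        guard !recent.isEmpty else { return 0.0 }
        return recent.reduce(0, +) / Double(recent.count)
    }
}
```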

3. Reactive UI with @Observable

The new @Observable macro in SwiftUI simplifies the bridge between our AI logic and the view layer. It allows the UI to automatically react to changes in confidence scores with minimal boilerplate and maximum performance.

Implementation: Building a Reactive AI ViewModel

Let’s look at how to implement a safe, concurrent pipeline for a text classification model.

import SwiftUI
import Observation

// The output must be Sendable to move safely between threads
struct TextClassifierOutput: Sendable {
    let prediction: String
    let confidenceScores: [String: Double]
}

@Observable
@MainActor
final class AIOutputViewModel {
    var currentPrediction: TextClassifierOutput?
    var isLoading: Bool = false

    func performPrediction(for input: String) async {
        isLoading = true
        defer { isLoading = false }

        do {
            // Simulate an async AI inference call
            let output = try await simulateInference(input: input)
            currentPrediction = output
        } catch {
            print("Prediction failed: \(error)")
        }
    }

    private func simulateInference(input: String) async throws -> TextClassifierOutput {
        try await Task.sleep(for: .milliseconds(300))
        return TextClassifierOutput(
            prediction: "Positive", 
            confidenceScores: ["Positive": 0.92, "Negative": 0.08]
        )
    }
}

Animating the Confidence Score

Once the data is flowing, we use SwiftUI’s declarative animation engine to visualize it. Instead of a number jumping from 0.7 to 0.9, we use a Gauge or a custom ProgressView with a .animation() modifier to create a fluid transition.

struct ConfidenceView: View {
    @State private var viewModel = AIOutputViewModel()
    @State private var textInput: String = ""

    var body: some View {
        VStack(spacing: 20) {
            TextField("Enter text to analyze", text: $textInput)
                .textFieldStyle(.roundedBorder)

            Button("Analyze Sentiment") {
                Task { await viewModel.performPrediction(for: textInput) }
            }

            if let prediction = viewModel.currentPrediction {
                let confidence = prediction.confidenceScores["Positive"] ?? 0.0

                VStack {
                    Text("Confidence: \(confidence, format: .percent)")
                        .font(.headline)

                    // The Gauge automatically animates its needle/bar 
                    // when the underlying value changes.
                    Gauge(value: confidence) {
                        Text("Sentiment")
                    }
                    .gaugeStyle(.linearCapacity)
                    .tint(confidence > 0.7 ? .green : .orange)
                    .animation(.spring(duration: 0.8), value: confidence)
                }
                .padding()
            }
        }
        .padding()
    }
}

Moving Beyond Single Scores: Probability Distributions

A single score tells only half the story. To truly show the "AI’s mind," we should visualize the full probability distribution. If a model is classifying an image as a "Dog," but "Wolf" is a close second at 45%, the user needs to see that uncertainty.

By iterating through the confidenceScores dictionary and animating a series of bars, we create a "live" distribution chart. When the input changes, the bars shift and grow, providing a clear visual representation of the model's shifting certainty.
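One way to sketch that chart (the `DistributionView` name is an assumption; it reuses the `confidenceScores` dictionary shape from the ViewModel above) is a stack of capsule bars whose widths track each class probability, with a single `.animation(_:value:)` modifier driving every bar:

```swift
import SwiftUI

// Hypothetical bar chart of the full class distribution.
// Assumes scores are in 0...1, as produced by the ViewModel above.
struct DistributionView: View {
    let scores: [String: Double]

    var body: some View {
        VStack(alignment: .leading, spacing: 12) {
            // Sort by label so bars keep a stable order as values shift.
            ForEach(scores.sorted(by: { $0.key < $1.key }), id: \.key) { label, score in
                VStack(alignment: .leading, spacing: 4) {
                    Text("\(label): \(score, format: .percent)")
                        .font(.caption)
                    // Bar width is proportional to the class probability.
                    GeometryReader { geo in
                        Capsule()
                            .fill(score > 0.5 ? Color.green : Color.orange)
                            .frame(width: geo.size.width * score)
                    }
                    .frame(height: 10)
                }
            }
        }
        // One modifier animates every bar whenever the dictionary changes.
        .animation(.spring(duration: 0.6), value: scores)
    }
}
```

Because `[String: Double]` is `Equatable`, the dictionary itself can serve as the animation trigger value, so a new prediction animates all bars in a single, coordinated transition.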

Conclusion: The New Standard for AI UX

The era of "black box" AI is ending. As developers, our job is to build the bridge between complex neural networks and human intuition. By leveraging Swift 6 Actors for safety, async/await for performance, and SwiftUI animations for clarity, we can create AI applications that are not just smart, but transparent and trustworthy.

Let's Discuss

  1. Do you think showing low confidence scores decreases a user's trust in an app, or does the transparency actually improve it?
  2. What are some other creative ways to visualize AI uncertainty beyond progress bars and gauges?

Leave a comment below and let’s talk about the future of AI-driven interfaces!

The concepts and code demonstrated here are drawn directly from the comprehensive roadmap laid out in the ebook SwiftUI for AI Apps: building reactive, intelligent interfaces that respond to model outputs, stream tokens, and visualize AI predictions in real time. You can find it here: Leanpub.com or Amazon.
Check out the other programming ebooks on Python, TypeScript, C#, and Swift as well: Leanpub.com or Amazon.
