Part 4 of 4 — The final part of this series. Parts 1, 2, and 3 cover the Ktor client setup, error-handling layer, and RequestState pattern that this part extends.
The first three parts built the foundation — a configured Ktor client, a typed error-handling layer, and a state management pattern that carries data cleanly from repository to UI. For a large class of screens, that's everything you need.
But some screens need more. A feed that loads pages of data as the user scrolls. A chat screen that receives real-time messages over a WebSocket connection while simultaneously loading paginated history. A screen that refreshes six independent data sources at once and needs to wait for all of them before hiding the pull-to-refresh indicator.
These aren't edge cases — they're common in any serious mobile app. And the good news is that none of them require new primitives. They all extend the same architecture you've already seen.
This part covers GenericPagingSource for pagination, PagingStateHandler for rendering every pagination state cleanly in the UI, WebSocketClientConnection and observeWebSocketMessages for real-time data, and RefreshManager for coordinated multi-source refreshes.
Pagination with GenericPagingSource
Writing a PagingSource for each paginated endpoint is repetitive. The structure is always the same: fetch a page, extract items from the response, determine the next page key, handle errors. The only things that change are the endpoint and the types.
GenericPagingSource eliminates that repetition entirely.
class GenericPagingSource<T : Any, R : Any>(
private val fetchPage: suspend (Int) -> ResponseResult<R>,
private val extractItems: (R) -> List<T>,
private val getNextPageKey: (R) -> Int?,
private val onError: (String, Int?, String?, ResponseResult.ErrorType?) -> Unit = { _, _, _, _ -> },
private val onEmpty: () -> Unit = {}
) : PagingSource<Int, T>() {
override fun getRefreshKey(state: PagingState<Int, T>): Int? {
return state.anchorPosition?.let { anchorPosition ->
state.closestPageToPosition(anchorPosition)?.prevKey?.plus(1)
?: state.closestPageToPosition(anchorPosition)?.nextKey?.minus(1)
}
}
override suspend fun load(params: LoadParams<Int>): LoadResult<Int, T> {
return try {
val position = params.key ?: FIRST_PAGE_INDEX
when (val result = fetchPage(position)) {
is ResponseResult.Success -> {
val data = result.data
?: return LoadResult.Error(Exception("Empty response"))
LoadResult.Page(
data = extractItems(data),
prevKey = if (position == FIRST_PAGE_INDEX) null else position - 1,
nextKey = getNextPageKey(data)
)
}
is ResponseResult.Empty -> {
onEmpty()
LoadResult.Page(data = emptyList(), prevKey = null, nextKey = null)
}
is ResponseResult.Error -> {
onError(result.message, result.code, result.serverMessage, result.errorType)
LoadResult.Error(Exception(result.message))
}
}
} catch (e: Exception) {
LoadResult.Error(e)
}
}
companion object {
private const val FIRST_PAGE_INDEX = 1
}
}
Five parameters define the entire pagination behaviour for any endpoint:
- fetchPage — a suspend function that takes a page number and returns a ResponseResult<R>. This is always a repository call. It returns ResponseResult — the same type every other repository function in this architecture returns. GenericPagingSource handles all three outcomes: success with data, empty response, and error.
- extractItems — given the full response type R, pull out the List<T> of actual items. This handles the typical API envelope pattern where your response carries { results: [...], nextPage: 2 }.
- getNextPageKey — return the next page number, or null when you've reached the last page.
- onError / onEmpty — optional callbacks for side effects, like showing a snackbar or updating a separate piece of UI state.
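For reference, the kind of repository function fetchPage typically wraps is an ordinary safeRequest call from the earlier parts. A minimal sketch, assuming the endpoint path, the DTO names, and the safeRequest helper shape from Parts 1 and 2:

```kotlin
// Hypothetical repository function; the endpoint path, DTO names, and the
// safeRequest helper are assumptions carried over from earlier parts.
class ItemRepository(private val client: HttpClient) {

    suspend fun fetchPaginatedItems(
        page: Int,
        pageSize: Int,
        type: String?
    ): ResponseResult<PaginatedItemsDto> = safeRequest {
        client.get("items") {
            parameter("page", page)
            parameter("page_size", pageSize)
            type?.let { parameter("type", it) }
        }.body()
    }
}

// Response envelope matching the extractItems / getNextPageKey lambdas used later.
@Serializable
data class PaginatedItemsDto(
    val details: List<ItemDetailsDto> = emptyList(),
    val nextPage: Int? = null
)
```

The important part is that nothing here is pagination-specific: it's the same repository shape as any other call, which is exactly why GenericPagingSource can consume it unchanged.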
Using It in a ViewModel
A paginated list with type filtering, in about fifteen lines:
fun fetchItemsByType(type: String?): Flow<PagingData<ItemDetails>> {
return Pager(
config = PagingConfig(
pageSize = 10,
prefetchDistance = 1,
enablePlaceholders = false
),
pagingSourceFactory = {
GenericPagingSource(
fetchPage = { page ->
repository.fetchPaginatedItems(page = page, pageSize = 10, type = type)
},
extractItems = { it.details },
getNextPageKey = { it.nextPage }
)
}
).flow.cachedIn(viewModelScope)
}
cachedIn(viewModelScope) is important. Without it, the PagingData flow recreates from scratch on every subscription — including on recomposition. For a content-heavy feed where the user scrolls quickly, tune prefetchDistance accordingly:
val feedItems: Flow<PagingData<FeedItem>> = Pager(
config = PagingConfig(
pageSize = 10,
prefetchDistance = 5, // start loading next page when 5 items from the end
enablePlaceholders = false
),
pagingSourceFactory = {
GenericPagingSource(
fetchPage = { page ->
feedRepository.fetchFeed(page = page, pageSize = 10)
},
extractItems = { it.results },
getNextPageKey = { it.nextPage }
)
}
).flow.cachedIn(viewModelScope)
Pagination That Also Updates Other State
The fetchPage lambda is just a suspend function — there's nothing stopping you from doing more inside it. Consider a chat screen where the first page response also carries metadata about the conversation:
val messages: Flow<PagingData<ChatMessage>> = Pager(
config = PagingConfig(pageSize = 10, prefetchDistance = 5, enablePlaceholders = false),
pagingSourceFactory = {
GenericPagingSource(
fetchPage = { page ->
val response = chatRepository.fetchMessages(chatId = chatId, page = page)
// On first page only — extract metadata and bootstrap dependent state
if (response.isSuccess && _uiState.value.chatMeta.isOwnChat == null) {
val body = response.getOrNull()
_uiState.update {
it.copy(
chatMeta = ChatMeta(
isOwnChat = body?.isOwnChat,
isFeedbackPending = body?.isFeedbackPending,
isClosed = body?.isClosed
)
)
}
if (body?.isOwnChat == true && body.isFeedbackPending == true) {
fetchFeedbackOptions()
}
initializeSocketConnection()
}
response
},
extractItems = { it.results ?: emptyList() },
getNextPageKey = { it.nextPage }
)
}
).flow.cachedIn(viewModelScope)
The guard _uiState.value.chatMeta.isOwnChat == null ensures bootstrapping runs exactly once — on the first page load, not on every subsequent page or refresh. This is a production pattern worth internalising: pagination and dependent state initialisation co-located where they belong, without tangling the ViewModel's init block with timing dependencies.
Rendering Pagination State with PagingStateHandler
GenericPagingSource takes care of fetching pages. But the Paging library produces more than just data — it produces load states. There's an initial load, an append load as the user scrolls, a prepend load for reverse-layout lists, a refresh triggered by pull-to-refresh, and error states for each. Each one needs a corresponding UI.
PagingStateHandler encapsulates all of that.
@Composable
fun <T : Any> PagingStateHandler(
lazyPagingItems: LazyPagingItems<T>,
modifier: Modifier = Modifier,
initialLoadingContent: @Composable (Modifier) -> Unit = { DefaultLoading(it) },
refreshLoadingContent: @Composable (Modifier) -> Unit = initialLoadingContent,
appendLoadingContent: @Composable (Modifier) -> Unit = refreshLoadingContent,
prependLoadingContent: @Composable (Modifier) -> Unit = appendLoadingContent,
initialErrorContent: @Composable BoxScope.(Modifier, Throwable, () -> Unit) -> Unit = { mod, error, retry ->
DefaultInitialError(mod, error, retry)
},
appendErrorContent: @Composable ColumnScope.(Modifier, Throwable, () -> Unit) -> Unit = { mod, error, retry ->
DefaultAppendError(mod, error, retry)
},
prependErrorContent: @Composable ColumnScope.(Modifier, Throwable, () -> Unit) -> Unit = { mod, error, retry ->
DefaultPrependError(mod, error, retry)
},
emptyContent: @Composable BoxScope.(Modifier) -> Unit = { DefaultEmptyContent(it) },
fullyLoadedContent: @Composable ColumnScope.(Modifier) -> Unit = { },
showFullyLoaded: Boolean = true,
content: @Composable (LazyPagingItems<T>) -> Unit
) {
val loadState = lazyPagingItems.loadState
val isInitialLoading = loadState.refresh is LoadState.Loading
val isInitialError = loadState.refresh is LoadState.Error
val isEmpty = loadState.refresh is LoadState.NotLoading && lazyPagingItems.itemCount == 0
val isFullyLoaded = loadState.append.endOfPaginationReached && !isInitialLoading && lazyPagingItems.itemCount > 0
Box(modifier = modifier) {
when {
isInitialError -> {
val error = (loadState.refresh as LoadState.Error).error
initialErrorContent(Modifier.fillMaxSize(), error) { lazyPagingItems.retry() }
}
isEmpty -> {
emptyContent(Modifier.fillMaxSize())
}
isInitialLoading -> {
initialLoadingContent(Modifier.fillMaxSize())
}
else -> {
Column(modifier = Modifier.fillMaxSize()) {
when (loadState.prepend) {
is LoadState.Loading -> prependLoadingContent(Modifier.fillMaxWidth())
is LoadState.Error -> {
val error = (loadState.prepend as LoadState.Error).error
prependErrorContent(Modifier.fillMaxWidth(), error) { lazyPagingItems.retry() }
}
else -> {}
}
Box(modifier = Modifier.weight(1f)) { content(lazyPagingItems) }
when {
loadState.append is LoadState.Loading ->
appendLoadingContent(Modifier.fillMaxWidth())
loadState.append is LoadState.Error -> {
val error = (loadState.append as LoadState.Error).error
appendErrorContent(Modifier.fillMaxWidth(), error) { lazyPagingItems.retry() }
}
isFullyLoaded && showFullyLoaded ->
fullyLoadedContent(Modifier.fillMaxWidth())
}
}
}
}
// Refresh overlay — shown when pull-to-refresh fires while items are already visible
if (loadState.refresh is LoadState.Loading && !isInitialLoading) {
refreshLoadingContent(Modifier.align(Alignment.TopCenter))
}
}
}
The State Machine
Five scenarios, resolved in priority order:
- Initial error — the very first load failed. The entire content area is replaced with an error UI and a retry button
- Empty — first load succeeded but returned nothing. Shows an empty placeholder. This is a successful state, not an error — the distinction matters for the message you show the user
- Initial loading — first load in progress. Shows a full-area shimmer or skeleton
- Active — normal operating state. A Column manages prepend loading/error at the top, the content lambda in the middle, and append loading/error/fully-loaded at the bottom
- Refresh overlay — pull-to-refresh while items are visible. A lightweight indicator overlaid at the top, not replacing the existing content
The content Lambda Pattern
In practice the content lambda is almost always left empty:
item("PagingFooter") {
PagingStateHandler(
lazyPagingItems = feedItems,
initialLoadingContent = { FeedLoadingSkeleton(count = 10) },
refreshLoadingContent = { FeedLoadingSkeleton(count = 2) },
emptyContent = {
EmptyPlaceholder(
message = stringResource(Res.string.no_items_found),
icon = painterResource(Res.drawable.empty_icon)
)
},
modifier = Modifier.fillMaxWidth().animateItem(),
content = { /* items rendered in the LazyColumn above */ }
)
}
PagingStateHandler is placed as a single item after all the real list items in the LazyColumn. The actual items are rendered in the items { } block above it — as usual. PagingStateHandler handles only state UI: the shimmer on first load, the error view, the empty placeholder, the append spinner.
Per-Screen Customisation
Every parameter except lazyPagingItems has a default. Override only what's different:
// Card-based list — custom skeletons, no "all loaded" footer
PagingStateHandler(
lazyPagingItems = items,
initialLoadingContent = { CardListSkeleton(cardHeight = 132.dp, count = 12) },
appendLoadingContent = { mod -> CardListSkeleton(modifier = mod, count = 2) },
refreshLoadingContent = { CardListSkeleton() },
emptyContent = { EmptyState(modifier = Modifier.padding(top = 84.dp)) },
showFullyLoaded = false,
modifier = Modifier.fillMaxWidth().animateItem(),
content = { /* items rendered above */ }
)
// Chat screen — message skeletons, prepend handled for reverse layout
item("ChatPagingFooter") {
PagingStateHandler(
lazyPagingItems = chatMessages,
initialLoadingContent = { MessageSkeleton(count = 7) },
refreshLoadingContent = { MessageSkeleton(count = 3) },
modifier = Modifier.fillMaxWidth().animateItem(),
content = { /* messages rendered above */ }
)
}
This is the same principle as displayResultWithDefaults from Part 3 — convention over configuration, with full override capability.
Real-Time Data with WebSockets
WebSocketClientConnection
WebSocketClientConnection is a self-contained WebSocket manager. You give it a URL and a coroutine scope, and it handles the rest — connection establishment, message routing, ping/pong, reconnection on failure, and clean shutdown.
class WebSocketClientConnection(
private val socketUrl: String,
private val autoReconnect: Boolean = true,
private val scope: CoroutineScope,
// ...
) {
private val _connectionState = MutableStateFlow<ConnectionState>(ConnectionState.Disconnected)
val connectionState: StateFlow<ConnectionState> = _connectionState.asStateFlow()
private val _messages = MutableSharedFlow<String>()
val messages: SharedFlow<String> = _messages.asSharedFlow()
private val outgoingMessages = Channel<String>(Channel.BUFFERED)
sealed class ConnectionState {
data object Disconnected : ConnectionState()
data object Connecting : ConnectionState()
data object Connected : ConnectionState()
data class Error(val error: Exception) : ConnectionState()
}
}
The public surface is small by design: two observable flows (connectionState, messages), one method for sending (sendMessage), one for shutdown (disconnect). All the complexity — mutex-protected session, separate coroutines for incoming and outgoing processing, exponential backoff reconnection — stays internal.
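The send and shutdown half of that surface isn't shown above. A minimal sketch of the shape implied by the description (the close reason and the NonCancellable launch are assumptions, not the article's exact implementation):

```kotlin
// Sketch only; details are assumptions consistent with the internals described here.
fun sendMessage(message: String) {
    // Queue the message; the outgoing-processing coroutine drains the channel
    // once a session exists, so callers never touch the session directly.
    outgoingMessages.trySend(message)
}

fun disconnect() {
    reconnectJob?.cancel() // stop any pending reconnect attempts
    // NonCancellable so the close frame is still sent even if the owning scope
    // (e.g. viewModelScope in onCleared) is already being torn down.
    scope.launch(NonCancellable) {
        sessionMutex.withLock {
            session?.close(CloseReason(CloseReason.Codes.NORMAL, "Client disconnect"))
            session = null
        }
        _connectionState.value = ConnectionState.Disconnected
    }
}
```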
Connection and Reconnection
fun connect() {
if (_connectionState.value is ConnectionState.Connected ||
_connectionState.value is ConnectionState.Connecting) return
reconnectJob = scope.launch {
var retryCount = 0
var retryDelay = INITIAL_RETRY_DELAY_MS // 1000ms
while (isActive) {
try {
_connectionState.value = ConnectionState.Connecting
httpClient.webSocket(urlString = socketUrl, request = {
headers {
append("Authorization", userRepository.userTokenData.value?.accessToken ?: "")
append("Accept-Language", appLocalizationRepository.languagePreference.value.serverCode)
}
}) {
sessionMutex.withLock { session = this }
_connectionState.value = ConnectionState.Connected
retryCount = 0
retryDelay = INITIAL_RETRY_DELAY_MS
startMessageProcessing()
suspendCancellableCoroutine<Unit> { continuation ->
continuation.invokeOnCancellation { scope.launch { cleanup() } }
}
}
} catch (e: IOException) {
handleConnectionError(e, retryCount, retryDelay)
if (!shouldRetryConnection(retryCount)) break
retryCount++
retryDelay = (retryDelay * BACKOFF_MULTIPLIER).toLong().coerceAtMost(MAX_RETRY_DELAY_MS)
delay(retryDelay.milliseconds)
} catch (e: Exception) {
handleConnectionError(e, retryCount, retryDelay)
break // Non-IO exceptions are not retried
}
}
}
}
Key design decisions:
- suspendCancellableCoroutine keeps the coroutine and the session alive without busy-waiting. The actual work happens in startMessageProcessing(), which launches two child coroutines — one for incoming, one for outgoing
- IOException is retried; other exceptions are not. Network errors are transient. Programming errors, protocol violations, and coroutine cancellations are not. This prevents the reconnection loop from spinning on non-recoverable errors
- Exponential backoff is capped — 1s → 5s max, stops after 5 consecutive failures
- The auth token is read at connection time, not construction time — same principle as defaultRequest in the HTTP client
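The constants behind that policy could be defined as follows. The names match those referenced in connect(); the values are assumptions consistent with the behaviour described (1 s initial delay, capped at 5 s, 5 attempts, growth factor assumed):

```kotlin
companion object {
    // Values are assumptions matching the backoff behaviour described above.
    private const val INITIAL_RETRY_DELAY_MS = 1_000L // first retry after 1s
    private const val MAX_RETRY_DELAY_MS = 5_000L     // backoff is capped at 5s
    private const val MAX_RETRY_ATTEMPTS = 5          // give up after 5 consecutive failures
    private const val BACKOFF_MULTIPLIER = 2.0        // growth factor (assumed)
}

// Retry only while auto-reconnect is enabled and the attempt budget remains.
private fun shouldRetryConnection(retryCount: Int): Boolean =
    autoReconnect && retryCount < MAX_RETRY_ATTEMPTS
```

With a 2.0 multiplier the delay sequence is 1s, 2s, 4s, then pinned at 5s until the attempt budget runs out.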
Incoming Message Processing
private suspend fun processIncomingMessages(session: DefaultClientWebSocketSession) {
try {
for (frame in session.incoming) {
when (frame) {
is Frame.Text -> _messages.emit(frame.readText())
is Frame.Close -> break
is Frame.Ping -> if (session.isActive) session.send(Frame.Pong(frame.data))
else -> { /* ignore binary frames */ }
}
}
} catch (e: EOFException) {
if (autoReconnect && scope.isActive) handleMessageProcessingError(e)
} catch (e: CancellationException) {
throw e // Normal shutdown — rethrow so cancellation propagates; cleanup still runs in finally
} catch (e: Exception) {
if (autoReconnect && scope.isActive) handleMessageProcessingError(e)
} finally {
cleanup()
}
}
Ping/pong is handled manually in addition to the plugin-level pingInterval. Responding to server-initiated pings keeps the connection alive from the server's perspective, while pingInterval handles client-initiated keepalives. Both together mean the connection survives aggressive network management on Android and iOS.
observeWebSocketMessages — Decoding and Routing in One Call
Note: The message routing here is specific to a backend envelope format with type and data fields. Your backend may use a different shape — only the envelope parsing would need to change.
inline fun <reified T : Any?> observeWebSocketMessages(
client: WebSocketClientConnection?,
scope: CoroutineScope,
crossinline onMessage: (T?) -> Unit,
crossinline onProcessing: (T?) -> Unit = {},
crossinline onError: (Throwable) -> Unit = {},
crossinline onConnectionStateChanged: (WebSocketClientConnection.ConnectionState) -> Unit = {}
) {
if (client == null) return
val json = JsonConfig.parser
scope.launch {
client.messages.collect { rawMessage ->
try {
val envelope = json.decodeFromString<DefaultResponseDto<JsonElement>>(rawMessage)
when (envelope.type) {
SocketResponseType.DATA.key -> {
envelope.data?.let { payload ->
onMessage(json.decodeFromJsonElement<T>(payload))
}
}
SocketResponseType.PROCESSING.key -> {
envelope.data?.let { payload ->
onProcessing(json.decodeFromJsonElement<T>(payload))
}
}
else -> onProcessing(null)
}
} catch (e: Exception) {
onProcessing(null)
onError(e)
}
}
}
scope.launch {
client.connectionState.collect { state -> onConnectionStateChanged(state) }
}
client.connect()
}
The function is reified so it decodes the JSON payload into T without a class reference. The two-stage decode — first into DefaultResponseDto<JsonElement>, then from JsonElement into T — handles the envelope pattern: the outer wrapper carries a type field, and the inner data is the actual payload.
The function also calls client.connect(). Observing the messages is the signal to connect — no separate connect call needed. It's idempotent: connect() returns early if a connection is already in progress.
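For reference, the envelope types that the two-stage decode relies on would look roughly like this. The field defaults and enum keys are assumptions inferred from the routing code, not the article's actual definitions:

```kotlin
// Hypothetical envelope shape, inferred from the routing above.
@Serializable
data class DefaultResponseDto<T>(
    val type: String? = null, // routing discriminator: "data", "processing", ...
    val data: T? = null       // raw payload, decoded into T in a second step
)

enum class SocketResponseType(val key: String) {
    DATA("data"),
    PROCESSING("processing")
}
```

Keeping data as JsonElement at the first decode stage is what lets a single envelope type serve every screen: the outer shape is fixed, and only the second decode varies by T.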
Using It in a ViewModel
private fun initializeSocketConnection() {
if (uiState.value.chatMeta.isOwnChat != true) return
webSocketClient = chatRepository.createWebSocketConnection(
chatId = chatId,
scope = viewModelScope
)
observeWebSocketMessages<ChatMessageDto?>(
client = webSocketClient,
scope = viewModelScope,
onMessage = { dto -> dto?.toDomain()?.let { appendMessageFromSocket(it) } },
onProcessing = { dto -> updateProcessingIndicator(dto?.toDomain()) },
onError = { error -> Logger.debug("WebSocket error: ${error.message}") },
onConnectionStateChanged = { state -> Logger.debug("WebSocket state: $state") }
)
}
override fun onCleared() {
super.onCleared()
webSocketClient?.disconnect()
}
The scope cancellation from onCleared() automatically stops the collecting coroutines, but disconnect() sends a proper WebSocket close frame to the server. Without it, the server may not know the client left and keeps the session allocated.
The repository side stays minimal:
override fun createWebSocketConnection(chatId: Int, scope: CoroutineScope): WebSocketClientConnection {
val url = "${WSS_URL}ws/chat/$chatId/"
return WebSocketClientConnection(socketUrl = url, scope = scope)
}
Coordinated Refresh with RefreshManager
typealias RefreshManagerFactory = (
scope: CoroutineScope,
onStart: () -> Unit,
onEnd: () -> Unit
) -> RefreshManager
class RefreshManager(
private val coroutineScope: CoroutineScope,
private val minDurationMillis: Long = 1500L,
private val onRefreshStart: () -> Unit,
private val onRefreshEnd: () -> Unit
) {
private val jobs = mutableListOf<Job>()
@Volatile
private var isRefreshing = false
fun add(jobProvider: () -> Job) { jobs.add(jobProvider()) }
fun addSuspending(block: suspend () -> Unit) {
jobs.add(coroutineScope.launch {
try { block() }
catch (e: Exception) { Logger.error("Error in refresh block: ${e.message}") }
})
}
fun start() {
if (isRefreshing) return
isRefreshing = true
coroutineScope.launch {
onRefreshStart()
try {
val delayJob = launch { delay(minDurationMillis) }
jobs.add(delayJob)
jobs.joinAll()
} finally {
onRefreshEnd()
jobs.clear()
isRefreshing = false
}
}
}
}
Register jobs with add() or addSuspending() — note that each job starts running as soon as it's registered, not when start() is called. Calling start() fires onRefreshStart(), launches a minimum-duration delay alongside the already-running jobs, waits for every job and the delay to finish, then fires onRefreshEnd().
The minDurationMillis of 1500ms prevents the jarring flash of a spinner that appears and disappears in 200ms. If all API calls complete in 400ms, the refresh indicator still shows for the full 1.5 seconds.
fun refreshAllData() {
val refreshManager = refreshManagerFactory(
viewModelScope,
onStart = { _uiState.update { it.copy(isRefreshing = true) } },
onEnd = { _uiState.update { it.copy(isRefreshing = false) } }
)
viewModelScope.launch {
refreshManager.add { fetchWeatherData() }
refreshManager.add { fetchTaskReminders() }
refreshManager.add { fetchNotificationCount() }
refreshManager.add { fetchBanners() }
refreshManager.add { fetchMarketPrices() }
refreshManager.addSuspending { resetInputState() }
refreshManager.start()
}
}
refreshManagerFactory is a typealias rather than a direct constructor call. In tests, you swap the factory for a synchronous or mock implementation without coroutine timing concerns. The typealias keeps injection clean without needing a full interface.
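Concretely, the two factory bindings might look like this. The production one simply delegates to the constructor; a test double drops the minimum duration (a sketch, since the article doesn't show the actual bindings):

```kotlin
// Production binding: delegate straight to the constructor, keeping the 1.5s minimum.
val productionFactory: RefreshManagerFactory = { scope, onStart, onEnd ->
    RefreshManager(
        coroutineScope = scope,
        onRefreshStart = onStart,
        onRefreshEnd = onEnd
    )
}

// Test binding: zero minimum duration, so tests never wait out the delay.
val testFactory: RefreshManagerFactory = { scope, onStart, onEnd ->
    RefreshManager(
        coroutineScope = scope,
        minDurationMillis = 0L,
        onRefreshStart = onStart,
        onRefreshEnd = onEnd
    )
}
```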
Pagination + WebSockets: How They Coexist on the Same Screen
The real payoff of this architecture is how naturally these pieces compose. A chat screen combining paginated history with live incoming messages is a common requirement — here's how both data paths coexist without interfering.
Paginated history flows through GenericPagingSource as LazyPagingItems. New socket messages are stored in a separate socketMessages list in UiState. The typing indicator lives in its own processingMessage field. The two paths update independent state fields:
private fun appendMessageFromSocket(message: ChatMessage) {
_uiState.update { state ->
val updated = state.socketMessages.toMutableList()
updated.add(0, message) // prepend for reversed layout
state.copy(socketMessages = updated)
}
}
private fun updateProcessingIndicator(message: ChatMessage?) {
_uiState.update { state -> state.copy(processingMessage = message) }
}
A single LazyColumn renders four sections in order:
LazyColumn(reverseLayout = isOwnChat) {
// 1. Typing indicator — from WebSocket processing state
item {
AnimatedVisibility(visible = processingMessage?.isTyping == true) {
TypingIndicator()
}
}
// 2. Real-time messages — arrived via WebSocket, not yet in pagination
if (socketMessages.isNotEmpty()) {
itemsIndexed(
items = socketMessages,
key = { index, msg -> "${msg.id}-socket-$index" }
) { _, message ->
MessageBubble(message = message)
}
}
// 3. Historical messages — loaded via pagination
items(
count = chatMessages.itemCount,
key = { index -> chatMessages.peek(index)?.id ?: index }
) { index ->
chatMessages[index]?.let { MessageBubble(message = it) }
}
// 4. Pagination state — initial loading, append, empty, errors
item("ChatPagingFooter") {
PagingStateHandler(
lazyPagingItems = chatMessages,
initialLoadingContent = { MessageSkeleton(count = 7) },
refreshLoadingContent = { MessageSkeleton(count = 3) }
)
}
}
With reverseLayout = true, new socket messages prepended to the front appear at the bottom. Historical paged messages appear above as the user scrolls up. The typing indicator sits at the bottom above the newest message.
The two data sources are completely independent. If the socket drops and reconnects, the paginated history is unaffected. If the user scrolls back through history, new messages keep arriving at the bottom. The paths don't interfere because they never share the same data structure.
Putting It All Together
Looking back across all four parts, here's the complete architecture:
A single, consistently configured HttpClient — auth, token refresh, retries, timeouts, and logging declared once, injected everywhere as a singleton.
A complete error-handling boundary at safeRequest — every failure mode Ktor can produce is caught, classified, and returned as a typed ResponseResult with user-facing messages and error type enums.
A UI state machine in RequestState — the full lifecycle of any request (idle, loading, success, error, empty) flows through executeNetworkCall and executeNetworkCallWithState into UiState, and renders through displayResult and displayResultWithDefaults with animated transitions.
Infrastructure for the hard cases — GenericPagingSource for paginated lists with no boilerplate per endpoint, PagingStateHandler for rendering every load state with per-screen customisable defaults, WebSocketClientConnection for self-healing real-time connections, observeWebSocketMessages for typed message decoding and routing, and RefreshManager for coordinated multi-source refresh with minimum display duration.
Every screen follows the same pattern. The repository makes a typed call. The ViewModel runs it through executeNetworkCall or executeNetworkCallWithState. The UI observes a StateFlow<UiState> and renders through displayResult or PagingStateHandler. No special-casing for loading states, no inconsistent error handling, no try-catch scattered across ViewModel functions.
When a new feature needs to be added, the developer writes a repository function that returns ResponseResult<T>, wires it through executeNetworkCallWithState, stores the result in a RequestState field in UiState, and renders it with displayResultWithDefaults. The path is always the same. The only new code is the code that's actually unique to the feature.
That's the real value of this architecture — not any single clever piece, but the consistency it creates across a codebase that will grow, be maintained by different developers, and need to absorb new requirements over time.
Thanks for reading the full series. If you found it useful, sharing it with your team or leaving a comment goes a long way. All feedback welcome.