Welcome to Part 11 of the *Flutter Interview Questions* series! This part is a beast -- 50 questions across three critical domains that interviewers love to probe. Event loop ordering puzzles, Futures that never complete, 500MB RAM debugging, jank diagnosis, shader compilation deep-dives, the full rendering pipeline from setState to pixels, Navigator.push in build(), nested Navigators, and go_router gotchas. Master this material and you will stand out in any senior Flutter interview. This is part 11 of a 14-part series -- bookmark it now so you can revisit before your next interview.
What's in this part?
- Async & concurrency brain teasers (event loop ordering, Future.wait vs forEach, Isolate crashes, Streams)
- Zone error handling and Completer deadlocks
- Performance & memory puzzles (500MB RAM debugging, jank profiling, shader compilation, Impeller)
- Rendering pipeline from setState to pixels on screen
- Image loading, caching, and OOM fixes
- Widget recycling and Element reuse
- Navigation & routing gotchas (Navigator.push in build, nested navigators, go_router, deep links, auth guards)
- Double-tap navigation bugs and Bloc/Provider state lifetime with routes
SECTION 1: ASYNC & CONCURRENCY BRAIN TEASERS
Q1: What's the output order of this code?
void main() {
print('1');
Future(() => print('2'));
print('3');
Future.microtask(() => print('4'));
}
What the interviewer is REALLY testing:
Whether you understand the Dart event loop, the difference between the microtask queue and the event queue, and the order in which synchronous code, microtasks, and futures execute.
Answer:
The output order is: 1, 3, 4, 2.
Here is exactly why, step by step:
1. print('1') -- synchronous, runs immediately. Output: 1
2. Future(() => print('2')) -- schedules a callback on the event queue. Does NOT run now.
3. print('3') -- synchronous, runs immediately. Output: 3
4. Future.microtask(() => print('4')) -- schedules a callback on the microtask queue. Does NOT run now.
5. main() finishes. The event loop takes over. It always drains the microtask queue before picking the next event from the event queue.
6. The microtask queue holds print('4') -- it runs. Output: 4
7. The microtask queue is empty. The event queue holds print('2') -- it runs. Output: 2
The rule: Synchronous code > Microtasks > Event queue tasks.
A more complex variant:
void main() {
print('1');
Future(() => print('2'));
scheduleMicrotask(() => print('3'));
Future(() => print('4')).then((_) => print('5'));
Future.microtask(() => print('6'));
print('7');
}
// Output: 1, 7, 3, 6, 2, 4, 5
// .then() on a completed Future schedules as microtask,
// but Future(() => ...) hasn't completed until its event runs,
// so .then() of '5' runs as a microtask right after '4' completes.
Q2: What happens if you await a Future that never completes? Does the app freeze?
What the interviewer is REALLY testing:
Whether you understand that await does NOT block the thread -- it suspends only the current async function and returns control to the event loop.
Answer:
The app does NOT freeze. The UI remains responsive.
When you write:
Future<void> fetchData() async {
final result = await Completer<String>().future; // Never completes
print(result); // Never reached
}
What actually happens:
1. await transforms the function into a state machine at compile time.
2. When the await is hit, the function suspends -- it registers a callback to be invoked when the Future completes.
3. Control returns to the event loop immediately.
4. The event loop continues processing UI events, animations, taps, etc.
5. Since the Future never completes, the callback is never invoked, and everything after the await simply never executes.
The danger is NOT freezing -- it is a memory leak. The suspended function's stack frame, the Completer, and any variables captured in the closure are all retained in memory forever because the garbage collector sees them as still reachable (the Future holds a reference to the callback).
// This leaks:
class _MyState extends State<MyWidget> {
@override
void initState() {
super.initState();
_loadData();
}
Future<void> _loadData() async {
final data = await neverCompletingFuture(); // State object is retained
setState(() {}); // Never called, but 'this' is captured
}
}
What DOES freeze the app is a synchronous infinite loop:
void main() {
while (true) {} // THIS freezes everything — blocks the event loop
}
Q3: Can two Futures run in parallel in Dart? Isn't Dart single-threaded? Explain the paradox.
What the interviewer is REALLY testing:
Whether you understand concurrency vs parallelism, and how Dart's single-threaded event loop achieves concurrent I/O without true parallelism.
Answer:
Yes, two Futures CAN run concurrently. No, Dart's main isolate is single-threaded. There is no paradox -- the confusion is between concurrency and parallelism.
Concurrency = multiple tasks making progress in overlapping time periods.
Parallelism = multiple tasks executing at the exact same instant on different CPU cores.
Dart Futures achieve concurrency without parallelism for I/O operations:
// These two HTTP calls run CONCURRENTLY:
final results = await Future.wait([
http.get(Uri.parse('https://api.example.com/users')),
http.get(Uri.parse('https://api.example.com/posts')),
]);
Here is what happens internally:
1. http.get call #1 asks the OS to open a socket and send a request. Dart registers a callback and returns immediately.
2. http.get call #2 does the same -- asks the OS, registers a callback, returns immediately.
3. Both network requests are now in flight simultaneously at the OS/kernel level.
4. The Dart event loop is free to handle other events while waiting.
5. When response #1 arrives, the OS notifies Dart via its I/O completion mechanism (epoll/kqueue/IOCP), and the callback is queued on the event queue.
6. When response #2 arrives, the same thing happens.
The I/O is happening outside Dart's thread -- in the OS kernel, in the network card's firmware, etc. Dart's single thread only runs the Dart code (callbacks), and that code runs one piece at a time.
For true parallelism of CPU-bound work, you need Isolates:
// TRUE parallelism — different CPU cores:
final result1 = Isolate.run(() => heavyComputation(data1));
final result2 = Isolate.run(() => heavyComputation(data2));
final results = await Future.wait([result1, result2]);
Summary table:
| Scenario | Concurrent? | Parallel? | Mechanism |
|---|---|---|---|
| Future.wait with I/O | Yes | Only the I/O itself, at the OS level | Event loop + OS kernel |
| Future.wait with CPU work | Yes | No | Still one thread |
| Multiple Isolates | Yes | Yes | Separate threads/processes |
Q4: What happens if you call setState inside a Stream listener and the stream emits 100 events in 1 second?
What the interviewer is REALLY testing:
Whether you understand that setState triggers a rebuild, but Flutter batches rebuilds within a single frame, and what the real-world performance implications are.
Answer:
Flutter does NOT rebuild the widget 100 times. It batches.
When you call setState(), Flutter marks the element as dirty and schedules a frame (if one is not already scheduled). Multiple setState() calls between frames result in only one rebuild.
StreamSubscription? _sub;
@override
void initState() {
super.initState();
_sub = rapidStream.listen((event) {
setState(() {
_latestValue = event;
});
});
}
At 60fps, a frame is ~16.67ms. If 100 events arrive in 1 second:
- Frame 1 (0-16ms): Maybe 1-2 events arrive. setState called 1-2 times. Widget rebuilds once with the latest value.
- Frame 2 (16-33ms): Maybe 1-2 more events. Widget rebuilds once.
- This continues -- roughly 60 rebuilds over 1 second, not 100.
However, there are real problems:
State consistency: You only see the LAST value set before each frame. Intermediate values are lost. If you need to process every event (e.g., summing them), you must accumulate outside setState.
Backpressure: If event processing is slow, events queue up in memory.
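If every event matters (the summing case above), the fix is to accumulate outside the setState callback. A minimal sketch, assuming an int-valued stream:

```dart
int _sum = 0;       // accumulator -- updated for EVERY event
int _shownSum = 0;  // what the UI currently displays

void _onEvent(int value) {
  _sum += value;        // no event is ever lost, even between frames
  setState(() {
    _shownSum = _sum;   // each frame shows the latest total
  });
}
```

Even if ten events land between two frames, all ten are added to _sum; only the rendering of the total is coalesced.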
Best practice -- throttle or debounce:
// Using stream transformation:
_sub = rapidStream
.throttleTime(Duration(milliseconds: 16)) // from the rxdart package
.listen((event) {
setState(() => _latestValue = event);
});
// Or use a manual approach:
bool _frameScheduled = false;
void _onEvent(dynamic event) {
_latestValue = event; // Update value immediately (no setState)
if (!_frameScheduled) {
_frameScheduled = true;
SchedulerBinding.instance.addPostFrameCallback((_) {
_frameScheduled = false;
if (mounted) setState(() {});
});
}
}
Memory leak if you forget to cancel:
@override
void dispose() {
_sub?.cancel(); // CRITICAL — otherwise the listener keeps the State alive
super.dispose();
}
Q5: What's the difference between Future.wait and Future.forEach?
What the interviewer is REALLY testing:
Whether you understand concurrent vs sequential execution of Futures, and when to choose each.
Answer:
| | Future.wait | Future.forEach |
|---|---|---|
| Execution | Concurrent -- all Futures start immediately | Sequential -- each waits for the previous to complete |
| Return type | Future&lt;List&lt;T&gt;&gt; -- all results | Future&lt;void&gt; -- no results collected |
| Error handling | Completes with the FIRST error; by default (eagerError: false) it waits for all futures first, with eagerError: true it fails immediately | Stops at the first error |
| Use case | Independent operations (parallel API calls) | Dependent/ordered operations (sequential DB writes) |
// Future.wait — ALL start immediately, complete concurrently:
final results = await Future.wait([
fetchUser(), // Starts now
fetchPosts(), // Starts now (doesn't wait for fetchUser)
fetchComments(), // Starts now
]);
// results[0] = user, results[1] = posts, results[2] = comments
// Future.forEach — Sequential, one at a time:
await Future.forEach<String>(
['file1.txt', 'file2.txt', 'file3.txt'],
(file) async {
await uploadFile(file); // Waits for each upload before starting next
},
);
Gotcha with Future.wait error handling:
// By default (eagerError: false), Future.wait waits for ALL futures
// to finish, then completes with the FIRST error. Later errors are
// silently dropped:
try {
  await Future.wait([
    Future.delayed(Duration(seconds: 1), () => throw 'Error A'),
    Future.delayed(Duration(seconds: 5), () => print('B completes')),
  ]);
} catch (e) {
  print(e); // 'Error A' -- but not caught until B finishes at 5 seconds
}
// To fail fast instead:
await Future.wait(
  [future1, future2],
  eagerError: true, // Complete with the first error as soon as it occurs
);
// Note: even with eagerError: true, the other futures keep running --
// Future.wait cannot cancel them.
Related alternatives:
// Future.any — returns the FIRST to complete, ignores the rest:
final fastest = await Future.any([api1(), api2(), api3()]);
// Stream.fromFutures — process results as they arrive:
await for (final result in Stream.fromFutures([api1(), api2()])) {
print(result); // Prints each result as its Future completes
}
Q6: What happens if an Isolate throws an unhandled exception?
What the interviewer is REALLY testing:
Whether you know that isolates are independent and their crashes do not bring down the main isolate -- plus how to handle errors properly.
Answer:
The isolate terminates. The main isolate does NOT crash. Isolates are fully independent -- they have their own memory heap, event loop, and error zone.
However, the behavior depends on HOW you spawned the isolate:
Scenario 1: Using Isolate.run() (recommended)
try {
final result = await Isolate.run(() {
throw Exception('Boom!');
});
} catch (e) {
// Exception IS caught here — Isolate.run propagates errors
print('Caught: $e');
}
Isolate.run() wraps the isolate in error handling and forwards the exception to the calling Future. Clean and simple.
Scenario 2: Using Isolate.spawn() with error port
final receivePort = ReceivePort();
final errorPort = ReceivePort();
final exitPort = ReceivePort();
errorPort.listen((error) {
  // error is a List: [errorMessage, stackTrace]
  print('Isolate error: ${error[0]}');
  print('Stack: ${error[1]}');
});
await Isolate.spawn(
  myEntryPoint,
  receivePort.sendPort,
  onError: errorPort.sendPort,
  onExit: exitPort.sendPort,
);
Scenario 3: Using Isolate.spawn() WITHOUT error port
// DANGEROUS — the error is SILENTLY SWALLOWED
await Isolate.spawn(myFunction, message);
// If myFunction throws, the isolate dies silently.
// No crash, no log, no error — just silence. Very hard to debug.
Scenario 4: Using compute() (Flutter's wrapper)
try {
final result = await compute(riskyFunction, input);
} catch (e) {
// Exception IS propagated — compute() handles this
print('Caught: $e');
}
Key insight for interviews: Always emphasize that isolate crashes are isolated (pun intended). The main isolate keeps running. This is fundamentally different from threads in Java/C++ where an unhandled exception can corrupt shared state.
Q7: Why can't you pass a closure to Isolate.spawn?
What the interviewer is REALLY testing:
Whether you understand that isolates have separate memory heaps and closures capture references to objects in the parent isolate's heap, which cannot be shared.
Answer:
You CAN pass a closure to Isolate.spawn in modern Dart (2.15+), but with strict restrictions. The historical restriction was that only top-level or static functions were allowed. Let me explain the nuance.
The core problem -- closures capture context:
class MyService {
final HttpClient client;
void startIsolate() {
// This closure captures 'this' (MyService instance)
// and 'client' — objects on the MAIN isolate's heap
Isolate.spawn((message) {
client.get(...); // ERROR — client lives in the other isolate's memory!
}, 'hello');
}
}
Isolates do NOT share memory. When you send data to an isolate, Dart deep-copies it (or transfers it, in the case of TransferableTypedData) through a SendPort. Since Dart 2.15, most ordinary Dart objects can be sent this way:
- Primitives (int, double, bool, String, null)
- Lists/Maps/Sets of sendable values
- Plain Dart objects (deep-copied field by field)
- SendPort and Capability
- TransferableTypedData
What CANNOT be sent: objects backed by native resources (Socket, open files), ReceivePort (send its SendPort instead), and types explicitly blocked with @pragma('vm:isolate-unsendable').
What works and what doesn't:
// WORKS — top-level function, no captured state:
void isolateEntry(String message) => print(message);
await Isolate.spawn(isolateEntry, 'hello');
// WORKS — closure that captures only sendable data:
final name = 'Flutter';
await Isolate.spawn((port) {
print(name); // String is sendable — Dart copies it
}, sendPort);
// FAILS — closure captures non-sendable object:
final socket = await Socket.connect('example.com', 80);
await Isolate.spawn((port) {
socket.write('GET /'); // Socket is a native resource — cannot be copied
}, sendPort);
// FAILS — closure captures a reference to a class with native resources:
final controller = StreamController<int>();
await Isolate.spawn((_) {
controller.add(1); // StreamController is not sendable
}, null);
Isolate.run is easier -- it handles the communication for you:
// Isolate.run copies the argument AND the return value:
final result = await Isolate.run(() {
// This closure must not capture non-sendable objects.
// The return value must also be sendable.
return expensiveComputation();
});
Interview-winning point: sendability is enforced at runtime, not at compile time. Isolate.spawn throws an ArgumentError when the closure's captured state cannot be serialized -- which is why these bugs often surface only when the code path actually runs.
Q8: What happens if you have a StreamBuilder inside a widget that rebuilds -- does it create multiple subscriptions?
What the interviewer is REALLY testing:
Whether you understand how StreamBuilder manages its subscription lifecycle and what happens when the parent rebuilds, potentially passing a new stream instance.
Answer:
It depends on whether the same stream instance or a new stream instance is passed on rebuild.
Case 1: Same stream instance -- NO new subscription:
class _MyState extends State<MyWidget> {
final _stream = FirebaseFirestore.instance
.collection('items')
.snapshots(); // Created once, stored in state
@override
Widget build(BuildContext context) {
return StreamBuilder(
stream: _stream, // Same instance every rebuild
builder: (context, snapshot) => Text('${snapshot.data}'),
);
}
}
StreamBuilder checks oldWidget.stream == newWidget.stream in didUpdateWidget. If the stream is the same object (by identity), it keeps the existing subscription. Safe.
Case 2: New stream instance on every rebuild -- CREATES NEW SUBSCRIPTION:
@override
Widget build(BuildContext context) {
return StreamBuilder(
// BUG: .snapshots() creates a NEW Stream object every build!
stream: FirebaseFirestore.instance.collection('items').snapshots(),
builder: (context, snapshot) => Text('${snapshot.data}'),
);
}
What happens:
- First build: StreamBuilder subscribes to stream A.
- Parent rebuilds (e.g., setState elsewhere).
- didUpdateWidget fires: oldWidget.stream != newWidget.stream (different objects).
- StreamBuilder cancels the subscription to stream A.
- StreamBuilder subscribes to stream B (the new instance).
- You see ConnectionState.waiting again briefly -- a flash/flicker.
- This repeats on every rebuild.
This causes:
- Unnecessary network requests (Firestore re-fetches each time).
- UI flicker (snapshot resets to waiting state).
- Performance degradation.
The fix -- always store the stream in state:
late final Stream<QuerySnapshot> _stream;
@override
void initState() {
super.initState();
_stream = FirebaseFirestore.instance.collection('items').snapshots();
}
@override
Widget build(BuildContext context) {
return StreamBuilder(
stream: _stream, // Stable reference
builder: (context, snapshot) => ...,
);
}
StreamBuilder's internal lifecycle:
initState() -> subscribe to stream
didUpdateWidget() -> if stream changed: cancel old, subscribe new
-> if stream same: do nothing
dispose() -> cancel subscription
Q9: What's the difference between stream.listen() and StreamBuilder -- which is better and when?
What the interviewer is REALLY testing:
Whether you understand declarative vs imperative stream handling and can articulate when each approach is appropriate.
Answer:
| | stream.listen() | StreamBuilder |
|---|---|---|
| Style | Imperative | Declarative |
| Lifecycle management | Manual -- you must cancel in dispose() | Automatic -- handled internally |
| Widget rebuilds | Must call setState() manually | Rebuilds automatically |
| Where to use | initState, services, BLoCs, non-UI code | Inside build(), directly in the widget tree |
| Multiple listeners | Easy to set up multiple listeners | One stream per StreamBuilder |
| Error handling | onError callback | snapshot.hasError |
Use StreamBuilder when:
// The stream directly drives UI — clean, declarative, no manual cleanup:
@override
Widget build(BuildContext context) {
return StreamBuilder<User>(
stream: authService.userChanges(),
builder: (context, snapshot) {
if (snapshot.connectionState == ConnectionState.waiting) {
return CircularProgressIndicator();
}
if (snapshot.hasError) return ErrorWidget(snapshot.error!);
if (!snapshot.hasData) return LoginScreen();
return HomeScreen(user: snapshot.data!);
},
);
}
Use stream.listen() when:
// 1. You need to perform side effects (not just update UI):
@override
void initState() {
super.initState();
_sub = connectivityStream.listen((status) {
if (status == ConnectivityResult.none) {
ScaffoldMessenger.of(context).showSnackBar(
SnackBar(content: Text('No internet!')),
);
}
setState(() => _isOnline = status != ConnectivityResult.none);
});
}
// 2. In non-widget code (services, BLoCs, repositories):
class AuthBloc {
  late final StreamSubscription<String?> _sub;
  AuthBloc(AuthRepository authRepo) {
    _sub = authRepo.tokenStream.listen((token) {
      if (token == null) emit(AuthLoggedOut());
    });
  }
  void dispose() => _sub.cancel();
}
// 3. When you need fine-grained control:
_sub = stream.listen(
(data) { /* onData */ },
onError: (e, st) { /* handle error */ },
onDone: () { /* stream closed */ },
cancelOnError: false, // Keep listening after errors
);
_sub.pause(); // Pause when backgrounded
_sub.resume(); // Resume when foregrounded
The common bug -- forgetting to cancel listen():
// MEMORY LEAK:
@override
void initState() {
super.initState();
myStream.listen((data) {
setState(() => _data = data); // 'this' is captured!
});
// No variable saved — can NEVER cancel this subscription.
// If widget is disposed, setState throws "setState called after dispose".
}
Q10: What happens if you use yield* vs yield in a Stream generator?
What the interviewer is REALLY testing:
Whether you understand delegation in async generators and the difference between emitting a single value vs forwarding an entire stream.
Answer:
yield emits a single value. yield* delegates to another Stream (or Iterable), forwarding all its values.
// yield — emits one value at a time:
Stream<int> countToThree() async* {
yield 1;
yield 2;
yield 3;
}
// Output: 1, 2, 3
// yield* — delegates to another stream, forwarding ALL its values:
Stream<int> countToSix() async* {
yield* countToThree(); // Forwards 1, 2, 3
yield* Stream.fromIterable([4, 5, 6]); // Forwards 4, 5, 6
}
// Output: 1, 2, 3, 4, 5, 6
Critical difference -- yield* WAITS for the delegated stream to finish:
Stream<String> merged() async* {
// SEQUENTIAL — stream2 doesn't start until stream1 completes:
yield* stream1(); // Waits for stream1 to close
yield* stream2(); // Only then starts stream2
}
// If you want to INTERLEAVE two streams, yield* is WRONG:
// Use StreamGroup or Rx merge instead:
Stream<String> interleaved() {
return StreamGroup.merge([stream1(), stream2()]);
}
Subtle gotcha -- yield* and error propagation:
Stream<int> outer() async* {
try {
yield* innerStreamThatThrows(); // Error propagates to this try/catch
} catch (e) {
yield -1; // Can recover and continue yielding
}
yield 999; // This DOES execute after catching
}
Without yield* you would write verbose forwarding code:
// Without yield* — manual forwarding (ugly):
Stream<int> countToSix() async* {
await for (final value in countToThree()) {
yield value; // Manual forwarding
}
await for (final value in Stream.fromIterable([4, 5, 6])) {
yield value;
}
}
For sync generators (Iterable), the same applies:
Iterable<int> flat(List<List<int>> nested) sync* {
for (final list in nested) {
yield* list; // Delegates to each inner iterable
}
}
Q11: Explain the event loop with a real scenario: user taps button, HTTP call fires, timer runs, microtask queued.
What the interviewer is REALLY testing:
Whether you can trace the exact execution order through the event loop with a concrete, realistic scenario.
Answer:
Let me walk through a real scenario step by step:
class _MyState extends State<MyWidget> {
void _onTap() {
print('A: Tap handler start');
Future(() => print('B: Event queue task'));
scheduleMicrotask(() => print('C: Microtask 1'));
Timer(Duration.zero, () => print('D: Timer.zero'));
http.get(Uri.parse('https://api.example.com/data')).then((response) {
print('E: HTTP response received');
scheduleMicrotask(() => print('F: Microtask after HTTP'));
});
Future.microtask(() => print('G: Microtask 2'));
setState(() {
print('H: Inside setState');
});
print('I: Tap handler end');
}
}
Execution order:
Phase 1: Synchronous code runs to completion
A: Tap handler start <- Synchronous
H: Inside setState <- setState callback is called SYNCHRONOUSLY
I: Tap handler end <- Synchronous
At this point, these are queued:
- Microtask queue: [C, G]
- Event queue: [B, D, rebuild-frame]
- Pending I/O: HTTP request is in flight (OS level)
Phase 2: Drain microtask queue
C: Microtask 1 <- Microtask queue (FIFO)
G: Microtask 2 <- Microtask queue (FIFO)
Phase 3: Process event queue (one event at a time, drain microtasks between each)
B: Event queue task <- Event queue
D: Timer.zero <- Event queue (Timer.zero ~ Future(), both go to event queue)
[Widget rebuild happens] <- Scheduled by setState, processed as a frame event
Phase 4: Much later... HTTP response arrives (OS signals I/O completion)
E: HTTP response received <- Event queue (I/O completion event)
F: Microtask after HTTP <- Microtask queue (drained before next event)
Complete order: A, H, I, C, G, B, D, [rebuild], ...(time passes)..., E, F
Visual model of the event loop:
+----------------------------------+
| EVENT LOOP |
| |
| 1. Run synchronous code |
| | |
| 2. Microtask queue empty? |
| NO -> run next microtask |
| -> go to step 2 |
| YES | |
| 3. Event queue empty? |
| NO -> run next event |
| -> go to step 2 |
| YES -> wait for events |
| -> go to step 2 |
+----------------------------------+
Key insight: Between EVERY event, ALL microtasks are drained. This is why microtask abuse can freeze the UI -- if microtasks keep scheduling more microtasks, the event queue is starved.
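That starvation can be demonstrated in a few lines of pure Dart -- a sketch of what NOT to do (in a Flutter app this would freeze the UI, since frames are event-queue work):

```dart
import 'dart:async';

// Each microtask schedules another, so the microtask queue never
// drains and the event queue (timers, taps, frames) is starved:
void keepBusy() {
  scheduleMicrotask(keepBusy);
}

void main() {
  Timer(const Duration(milliseconds: 100), () => print('I never run'));
  keepBusy(); // the timer above will never get a chance to fire
}
```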
Q12: What happens if compute() function tries to access a static variable that references a Flutter binding?
What the interviewer is REALLY testing:
Whether you understand that isolates have completely separate memory, no shared statics, and no access to Flutter's engine bindings (which are initialized per-isolate and only in the main/root isolate).
Answer:
It crashes. The static variable exists in the new isolate, but Flutter bindings (WidgetsBinding, ServicesBinding, etc.) are NOT initialized in spawned isolates.
// In main isolate:
class AppConfig {
static final directory = getApplicationDocumentsDirectory(); // Uses PathProvider
static final prefs = SharedPreferences.getInstance(); // Uses platform channel
}
// This will FAIL:
final result = await compute(doWork, inputData);
void doWork(InputData data) {
// CRASH: PathProvider uses MethodChannel -> needs ServicesBinding -> not initialized
final dir = AppConfig.directory;
// CRASH: SharedPreferences also uses MethodChannel
final prefs = AppConfig.prefs;
// CRASH: Accessing rootBundle
final json = rootBundle.loadString('assets/config.json');
}
Why this happens:
- Isolates have separate heaps. Static variables are NOT shared -- each isolate gets its own copy, initialized fresh.
- Flutter bindings (
WidgetsFlutterBinding.ensureInitialized()) are only called in the main isolate. - Platform channels (MethodChannel) only work in the main isolate because they communicate with the platform engine, which is bound to the main isolate.
What works in an isolate:
void doWork(InputData data) {
// Pure Dart computation — FINE:
final sorted = data.items..sort();
final encoded = jsonEncode(sorted);
final hash = sha256.convert(utf8.encode(encoded));
// dart:io for file access — FINE (no Flutter binding needed):
final file = File('/path/to/file.txt');
final contents = file.readAsStringSync();
// dart:convert, dart:math, dart:typed_data — all FINE
}
The workaround -- pass data, not references:
// CORRECT: Read data in main isolate, pass to compute:
final directory = await getApplicationDocumentsDirectory();
final prefs = await SharedPreferences.getInstance();
final configJson = await rootBundle.loadString('assets/config.json');
final result = await compute(
doWork,
WorkInput(
directoryPath: directory.path, // Pass the String, not the Future
userName: prefs.getString('name') ?? '',
config: configJson,
),
);
Since Dart 2.19+ / Flutter 3.7+: BackgroundIsolateBinaryMessenger allows platform channel access from background isolates, but you must explicitly initialize it:
void doWork(RootIsolateToken token) {
BackgroundIsolateBinaryMessenger.ensureInitialized(token);
// Now platform channels work — but still no WidgetsBinding, no BuildContext, etc.
}
// Spawn:
final token = RootIsolateToken.instance!;
await Isolate.run(() => doWork(token));
Q13: Why does FutureBuilder trigger twice (snapshot.connectionState)?
What the interviewer is REALLY testing:
Whether you understand FutureBuilder's lifecycle and the common mistake of creating Futures inside build().
Answer:
FutureBuilder triggers its builder at minimum twice by design:
1. First call: ConnectionState.waiting -- the Future is not yet complete.
2. Second call: ConnectionState.done -- the Future has completed (with data or an error).
FutureBuilder<String>(
future: fetchData(),
builder: (context, snapshot) {
print('Build: ${snapshot.connectionState}'); // Called at least twice
switch (snapshot.connectionState) {
case ConnectionState.waiting:
return CircularProgressIndicator(); // First call
case ConnectionState.done:
if (snapshot.hasError) return Text('Error: ${snapshot.error}');
return Text('Data: ${snapshot.data}'); // Second call
default:
return SizedBox.shrink();
}
},
)
But the REAL problem -- it can trigger many MORE times if you create the Future inside build():
// BUG: Creates a NEW Future on every rebuild!
@override
Widget build(BuildContext context) {
return FutureBuilder<String>(
future: fetchData(), // NEW Future every time build() is called!
builder: (context, snapshot) => ...,
);
}
What happens:
- First build: FutureBuilder subscribes to Future A. Builder called with waiting.
- Future A completes. Builder called with done.
- Parent rebuilds (ANY reason -- setState elsewhere, keyboard appears, etc.).
- build() runs again. fetchData() creates Future B (a new instance).
- didUpdateWidget: old Future != new Future, so FutureBuilder resets.
- Builder called with waiting AGAIN. The loading spinner reappears!
- Future B completes. Builder called with done.
- Repeat on every parent rebuild.
The fix -- create the Future in initState:
class _MyState extends State<MyWidget> {
late final Future<String> _dataFuture;
@override
void initState() {
super.initState();
_dataFuture = fetchData(); // Created ONCE
}
@override
Widget build(BuildContext context) {
return FutureBuilder<String>(
future: _dataFuture, // Same instance every rebuild
builder: (context, snapshot) => ...,
);
}
}
Another subtle trigger -- a Future that is already resolved:
// Future.value completes in a microtask, not truly synchronously:
final future = Future.value('instant');
FutureBuilder(
future: future,
builder: (context, snapshot) {
// STILL called twice:
// 1. ConnectionState.waiting (before microtask runs)
// 2. ConnectionState.done (after microtask runs)
// Because Future.value schedules completion as a microtask.
},
)
Q14: What happens if you create a Timer.periodic in initState but don't cancel in dispose?
What the interviewer is REALLY testing:
Whether you understand widget lifecycle, resource cleanup, and how retained references cause memory leaks and crashes.
Answer:
Three bad things happen:
1. Memory leak:
The Timer holds a reference to the callback closure. The closure captures this (the State object). The State object holds references to its widget, element, and all state variables. None of this can be garbage collected.
2. "setState called after dispose" error:
@override
void initState() {
super.initState();
Timer.periodic(Duration(seconds: 1), (timer) {
setState(() { // CRASH after widget is disposed!
_counter++;
});
});
}
// Navigate away -> widget disposed -> timer still fires ->
// setState() throws: "setState() called after dispose()"
3. Zombie behavior -- logic keeps executing:
Timer.periodic(Duration(seconds: 5), (timer) {
_sendAnalyticsEvent(); // Keeps sending even after user left the screen
_pollServer(); // Keeps polling even though nobody's watching
debugPrint('Still alive! Counter: ${_counter++}');
});
The correct pattern:
class _MyState extends State<MyWidget> {
Timer? _timer;
@override
void initState() {
super.initState();
_timer = Timer.periodic(Duration(seconds: 1), (timer) {
if (!mounted) return; // Extra safety check
setState(() => _counter++);
});
}
@override
void dispose() {
_timer?.cancel(); // CRITICAL
super.dispose();
}
}
Even better -- use a framework that handles lifecycle:
// In Riverpod — autoDispose handles cleanup:
final counterProvider = StreamProvider.autoDispose<int>((ref) {
return Stream.periodic(Duration(seconds: 1), (i) => i);
// Automatically cancelled when no one is listening
});
// Or use TickerProviderStateMixin for animations:
class _MyState extends State<MyWidget> with SingleTickerProviderStateMixin {
// AnimationController uses the vsync from the ticker,
// and gets disposed when you call controller.dispose()
}
How to detect this in production:
- Flutter's debug mode prints the "setState after dispose" error to the console.
- Use dart:developer or the DevTools memory profiler to find leaked State objects.
- Use FlutterError.onError to capture these errors in crash reporting.
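A sketch of that last point -- routing framework errors to a reporter. Here reportToCrashReporter is a hypothetical hook; substitute your crash-reporting SDK's call:

```dart
void main() {
  FlutterError.onError = (FlutterErrorDetails details) {
    FlutterError.presentError(details); // keep the default console output
    // Hypothetical hook -- replace with your crash-reporting SDK:
    reportToCrashReporter(details.exception, details.stack);
  };
  runApp(const MyApp());
}
```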
Q15: Explain: Why does this code cause a deadlock? (Completer scenario)
What the interviewer is REALLY testing:
Whether you understand that the Dart event loop is single-threaded and a synchronous block waiting for a Future that can only complete via the event loop creates an unresolvable dependency.
Answer:
The deadlock scenario:
void main() {
final completer = Completer<String>();
// Schedule completion on the event queue:
Future(() {
completer.complete('done');
});
// DEADLOCK: Synchronously wait for the completer's future
// This is pseudo-code — Dart doesn't have a true synchronous wait
// But here's a scenario that demonstrates the concept:
while (!completer.isCompleted) {
// Spin-waiting: This loop BLOCKS the event loop.
// The Future(() => completer.complete('done')) is on the event queue.
// But the event loop can never process it because we're blocking here.
// -> Infinite loop. Deadlock.
}
}
A more realistic Flutter deadlock:
// Scenario: Two Completers waiting on each other
final completerA = Completer<String>();
final completerB = Completer<String>();
Future(() async {
final valueB = await completerB.future; // Waiting for B
completerA.complete('A got: $valueB');
});
Future(() async {
final valueA = await completerA.future; // Waiting for A
completerB.complete('B got: $valueA');
});
// Neither completes:
// - Task 1 awaits completerB -> suspends
// - Task 2 awaits completerA -> suspends
// - completerA.complete() only runs AFTER completerB completes
// - completerB.complete() only runs AFTER completerA completes
// -> Classic circular dependency deadlock.
// The app doesn't FREEZE (event loop still runs), but these
// two Futures are permanently suspended.
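One way to surface such a stuck Future in practice is to put a timeout on the await, so the circular wait fails loudly instead of suspending silently forever. A sketch reusing the two completers from the example above:

```dart
import 'dart:async';

Future<void> main() async {
  final completerA = Completer<String>();
  final completerB = Completer<String>();
  try {
    // If B is part of a circular wait, this throws after 5 seconds
    // instead of suspending forever:
    final valueB =
        await completerB.future.timeout(const Duration(seconds: 5));
    completerA.complete('A got: $valueB');
  } on TimeoutException {
    // The deadlock is now visible as a loggable error:
    print('Deadlock suspected: completerB never completed');
  }
}
```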
Another practical blocking bug -- a synchronous wait that starves the event loop:
// dart:io exposes blocking calls like sleep() (meant for scripts):
import 'dart:async';
import 'dart:io';
void main() {
  final completer = Completer<void>();
  Timer(Duration(seconds: 1), () {
    print('Timer fired'); // Fires LATE -- only after sleep() returns
    completer.complete();
  });
  // sleep() blocks the whole thread -- no timer can fire meanwhile:
  sleep(Duration(seconds: 5));
  print('After sleep'); // Prints after 5 seconds
  // The 1-second timer could not run during those 5 seconds because
  // sleep() blocked the event loop -- it finally fires ~4 seconds late.
}
The key principle: In a single-threaded event loop model, you must NEVER synchronously block while waiting for something that requires the event loop to deliver. Since Dart code runs in the event loop, any blocking synchronous wait prevents the very mechanism needed to deliver the result.
How to avoid:
// WRONG: Synchronous wait patterns
while (!isReady) { /* spin */ } // Blocks event loop
sleep(Duration(seconds: 1)); // Blocks event loop
// CORRECT: Asynchronous patterns
await completer.future; // Suspends, doesn't block
await Future.delayed(Duration(seconds: 1)); // Suspends, doesn't block
Q16: What happens when you have an async* generator that yields values, but no one is listening?
What the interviewer is REALLY testing:
Whether you understand lazy evaluation in Dart streams -- that async* generators are pull-based and pause when there is no listener.
Answer:
Nothing happens. The generator does NOT execute until someone subscribes.
Stream<int> generateNumbers() async* {
print('Generator started'); // Never printed if no one listens!
for (int i = 0; i < 1000000; i++) {
print('Yielding $i');
yield i;
}
print('Generator done');
}
void main() {
final stream = generateNumbers();
// At this point, NOTHING has executed. No print output.
// The function body hasn't started running.
// Only when someone subscribes:
stream.listen((value) {
print('Received: $value');
});
// NOW the generator starts running.
}
Even more interesting -- pause/resume behavior:
Stream<int> gen() async* {
for (int i = 0; i < 10; i++) {
print('About to yield $i');
yield i;
print('Resumed after yield $i');
}
}
final sub = gen().listen((v) => print('Got $v'));
// After receiving a few values:
sub.pause();
// The generator SUSPENDS at the next yield point.
// It does NOT keep generating and buffering values.
sub.resume();
// Generator continues from where it paused.
sub.cancel();
// Generator stops. The rest of the loop does NOT execute.
This is why async* is memory-safe for large sequences:
// This does NOT load 1 billion items into memory:
Stream<int> hugeStream() async* {
for (int i = 0; i < 1000000000; i++) {
yield i; // Only generated on demand
}
}
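Because the generator is pull-based, stream combinators bound how much work ever runs. A quick runnable sketch: take(3) cancels the generator at its yield point once three values have been delivered, so the billion-iteration loop never executes beyond i = 2:

```dart
Stream<int> hugeStream() async* {
  for (int i = 0; i < 1000000000; i++) {
    yield i; // generated on demand only
  }
}

Future<void> main() async {
  // Only the first three values are ever produced; the generator
  // is cancelled at the yield point once take(3) is satisfied.
  final first = await hugeStream().take(3).toList();
  print(first); // [0, 1, 2]
}
```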
Q17: What's the difference between a broadcast stream and a single-subscription stream, and what breaks if you get it wrong?
What the interviewer is REALLY testing:
Whether you know the two stream types, their behavioral differences, and the specific errors that occur when misused.
Answer:
| | Single-subscription | Broadcast |
|---|---|---|
| Listeners | Exactly ONE | Multiple (zero or more) |
| Buffering | Buffers events until a listener subscribes | Events are LOST if no one is listening |
| Lifecycle | Can only be listened to once. Second listen throws. | Can be listened to any number of times |
| Examples | File.openRead(), HttpClient.send(), async* generators, Stream.periodic | StreamController.broadcast(), DOM/platform event streams |
What breaks:
// ERROR: Listening twice to single-subscription stream:
final stream = File('big.txt').openRead();
stream.listen((bytes) => print('Listener 1'));
stream.listen((bytes) => print('Listener 2'));
// Throws: "Stream has already been listened to."
// FIX: Convert to broadcast:
final broadcastStream = stream.asBroadcastStream();
broadcastStream.listen((bytes) => print('Listener 1'));
broadcastStream.listen((bytes) => print('Listener 2'));
// Works! Both listeners receive all events.
// SURPRISE: Lost events on broadcast stream:
final controller = StreamController<int>.broadcast();
controller.add(1); // LOST — no one is listening yet!
controller.add(2); // LOST
controller.stream.listen((v) => print(v)); // Subscribes now
controller.add(3); // Received: 3
// GOTCHA with StreamBuilder and broadcast streams:
// StreamBuilder subscribes in initState and unsubscribes in dispose.
// If the stream is broadcast and events fire between dispose and re-subscribe
// (e.g., during a rebuild), those events are lost.
// For single-subscription streams, StreamBuilder can cause errors
// if the widget rebuilds with a new StreamBuilder pointing to the same stream:
// The old subscription is cancelled, new one is created -> "already listened" error!
// This is another reason to use broadcast streams with StreamBuilder.
When to use which:
- Single-subscription: When data is produced FOR a specific consumer (file read, HTTP response, compute result).
- Broadcast: When data is produced REGARDLESS of consumers (user events, sensor data, state changes).
Q18: What happens if a microtask schedules another microtask indefinitely?
What the interviewer is REALLY testing:
Whether you understand that the microtask queue is drained completely before ANY event queue task runs, so infinite microtask scheduling starves the event loop.
Answer:
The app freezes completely. The UI becomes unresponsive, animations stop, taps are ignored, and timers never fire.
void evil() {
scheduleMicrotask(() {
scheduleMicrotask(() {
scheduleMicrotask(() {
// ... infinite chain
evil(); // Each microtask schedules another
});
});
});
}
void main() {
evil();
// These NEVER execute:
Future(() => print('Event queue task')); // Starved
Timer(Duration(seconds: 1), () => print('Timer')); // Starved
// UI rebuilds -> Starved
// Tap events -> Starved
}
Why: The event loop algorithm is:
- Run all microtasks until the microtask queue is empty.
- Only THEN pick one event from the event queue.
- Go to step 1.
If step 1 never completes (microtasks keep adding more microtasks), step 2 is never reached.
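A bounded version of the same rule is easy to verify: every queued microtask runs before the first event-queue task, no matter the order in which they were scheduled:

```dart
import 'dart:async';

void main() {
  Future(() => print('event')); // goes on the event queue first
  for (var i = 0; i < 3; i++) {
    scheduleMicrotask(() => print('microtask $i'));
  }
  // Output: microtask 0, microtask 1, microtask 2, event
  // -- the microtask queue is fully drained before the event runs.
}
```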
This is different from event queue flooding:
// Event queue flooding does NOT freeze the app:
void floodEventQueue() {
for (int i = 0; i < 1000000; i++) {
Future(() => print('Event $i'));
}
}
// The UI still works because between each event,
// microtasks (including UI framework tasks) get to run.
Real-world scenario where this can happen accidentally:
int counter = 0;
Stream<int> badStream() async* {
  while (true) {
    yield counter++;
    // async* schedules event delivery via microtasks. If the listener
    // consumes every value synchronously and the loop never awaits
    // anything on the event queue, this tight loop can starve it.
  }
}
Q19: How does Zone relate to error handling in async code, and why does runZonedGuarded matter?
What the interviewer is REALLY testing:
Whether you understand Dart's Zone system and how it enables catching errors from async operations that would otherwise be unhandled.
Answer:
A Zone is an execution context that persists across async callbacks. It intercepts and can handle:
- Uncaught errors (async errors that nobody awaits)
- Timer creation
- Microtask scheduling
- Print statements
// Problem: This error is UNCAUGHT — no try/catch can catch it:
void main() {
try {
Future(() => throw Exception('Async error'));
} catch (e) {
// NEVER REACHED — the Future runs later, outside this try/catch
print('Caught: $e');
}
}
// Solution: runZonedGuarded catches async errors:
void main() {
runZonedGuarded(() {
Future(() => throw Exception('Async error'));
}, (error, stackTrace) {
print('Zone caught: $error'); // THIS catches it
});
}
Flutter's bootstrap uses this:
void main() {
runZonedGuarded(() {
WidgetsFlutterBinding.ensureInitialized();
// Catch Flutter framework errors:
FlutterError.onError = (details) {
FirebaseCrashlytics.instance.recordFlutterFatalError(details);
};
// Catch platform dispatcher errors:
PlatformDispatcher.instance.onError = (error, stack) {
FirebaseCrashlytics.instance.recordError(error, stack, fatal: true);
return true;
};
runApp(MyApp());
}, (error, stackTrace) {
// Catch everything else — truly unhandled async errors:
FirebaseCrashlytics.instance.recordError(error, stackTrace, fatal: true);
});
}
Zone properties -- storing context for async operations:
// Attach user ID to all async operations in this zone:
runZoned(() {
fetchData(); // Inside fetchData, Zone.current['userId'] == '123'
}, zoneValues: {
'userId': '123',
});
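Reading the value back works from any async callback started inside that zone, because zone values survive across await points. A runnable sketch (fetchData here is a stand-in for the hypothetical function above):

```dart
import 'dart:async';

Future<void> fetchData() async {
  await Future<void>.delayed(const Duration(milliseconds: 10));
  // Zone values survive across await points:
  final userId = Zone.current['userId'] as String?;
  print('Fetching for user $userId'); // Fetching for user 123
}

void main() {
  runZoned(() {
    fetchData();
  }, zoneValues: {'userId': '123'});
}
```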
Q20: What is the difference between Isolate.spawn, Isolate.run, and compute? When would you pick each?
What the interviewer is REALLY testing:
Whether you understand the evolution of isolate APIs and can choose the right one for the job.
Answer:
| | Isolate.spawn | Isolate.run | compute |
|---|---|---|---|
| Dart version | All versions | Dart 2.19+ | Flutter only |
| Communication | Manual (SendPort/ReceivePort) | Automatic (single result) | Automatic (single result) |
| Lifecycle | Long-lived -- you manage it | One-shot -- runs and exits | One-shot -- runs and exits |
| Error handling | Manual (onError port) | Auto-propagated to caller | Auto-propagated to caller |
| Multiple messages | Yes | No -- one return value | No -- one return value |
| Use case | Worker isolates, background services | Quick one-off heavy computation | Same as Isolate.run (Flutter legacy) |
// Isolate.run — simplest for one-shot work:
final result = await Isolate.run(() {
return jsonDecode(hugeJsonString); // Runs in background, returns result
});
// compute — Flutter's version (predates Isolate.run):
final result = await compute(jsonDecode, hugeJsonString);
// Functionally identical to Isolate.run for single-argument functions.
// Isolate.spawn — for long-lived workers:
final receivePort = ReceivePort();
await Isolate.spawn(_worker, receivePort.sendPort);
void _worker(SendPort mainPort) {
final workerPort = ReceivePort();
mainPort.send(workerPort.sendPort); // Send port back to main
workerPort.listen((message) {
// Process multiple messages over time
final result = heavyWork(message);
mainPort.send(result);
});
}
When to use each:
- Isolate.run / compute: Image processing, JSON parsing, sorting large lists -- any heavy one-off computation.
- Isolate.spawn: Background music processing, real-time data transformation, persistent worker that processes a queue of tasks.
Recommendation: Use Isolate.run for new code. Use Isolate.spawn only when you need ongoing bidirectional communication.
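For completeness, here is a sketch of the main-isolate side of the Isolate.spawn handshake shown above: the first message back is the worker's SendPort, and every later message is a result. The echo-style worker body is a trivial stand-in for real heavy work:

```dart
import 'dart:isolate';

void _worker(SendPort mainPort) {
  final workerPort = ReceivePort();
  mainPort.send(workerPort.sendPort); // handshake: send our inbox back
  workerPort.listen((message) => mainPort.send('processed: $message'));
}

Future<void> main() async {
  final fromWorker = ReceivePort();
  await Isolate.spawn(_worker, fromWorker.sendPort);

  SendPort? toWorker;
  fromWorker.listen((message) {
    if (message is SendPort) {
      toWorker = message;       // first message: the worker's inbox
      toWorker!.send('task-1'); // now we can submit work
    } else {
      print(message);           // results arrive here: 'processed: task-1'
    }
  });
}
```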
SECTION 2: PERFORMANCE & MEMORY PUZZLES
Q1: Your app uses 500MB RAM -- how do you debug this? Walk through the exact steps.
What the interviewer is REALLY testing:
Whether you have hands-on experience with Flutter DevTools memory profiler and can systematically diagnose memory issues rather than guessing.
Answer:
Here is my exact debugging process, step by step:
Step 1: Confirm the problem -- open DevTools Memory tab
flutter run --profile # Must use profile mode for accurate memory readings
Open DevTools -> Memory tab. Look at the memory chart. Identify:
- Is memory climbing steadily (leak)?
- Is there a sudden spike (large allocation)?
- What is the baseline vs peak?
Step 2: Take heap snapshots
- Click "Take Heap Snapshot" at a known clean state (e.g., app just loaded home screen).
- Navigate to the screen suspected of leaking.
- Navigate back.
- Take another snapshot.
- Compare the two snapshots. If objects from that screen are still retained, you have a leak.
Step 3: Identify retained objects using the dominator tree
- Sort by "Retained Size" (not shallow size -- retained = the memory freed if this object were GC'd).
- Look for suspicious entries:
  - Large Uint8List -> images not disposed
  - _Element objects accumulating -> widget tree leak
  - State objects from disposed widgets -> subscription/timer leak
  - ImageCache growing -> cache not bounded
Step 4: Trace the retaining path
- Select a suspicious object.
- View "Retaining Path" -- this shows the chain of references preventing GC.
- Common patterns:
Root -> StreamSubscription -> closure -> State -> Element -> RenderObject -> Image
This means a stream subscription is holding your State alive.
Step 5: Common culprits and fixes
// CULPRIT 1: Images — the biggest RAM eater
// Problem: Loading full-resolution images
Image.network('https://example.com/huge-4k-photo.jpg')
// Fix: Use cacheWidth/cacheHeight to decode at display size:
Image.network(
'https://example.com/huge-4k-photo.jpg',
cacheWidth: 300, // Decode at 300px wide, not 4000px
)
// CULPRIT 2: ImageCache too large
// Default: max 1000 images, max 100MB
PaintingBinding.instance.imageCache.maximumSize = 50; // Limit count
PaintingBinding.instance.imageCache.maximumSizeBytes = 50 << 20; // 50MB
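If the cache already holds stale decoded images, you can also drop them explicitly -- useful in response to a memory-pressure callback. A sketch using the real ImageCache API:

```dart
import 'package:flutter/painting.dart';

void onMemoryPressure() {
  // Drop all cached decoded images that are not currently displayed:
  PaintingBinding.instance.imageCache.clear();
  // Also drop references to images that are still live in the tree,
  // so they can be re-decoded (possibly at a smaller size) on next use:
  PaintingBinding.instance.imageCache.clearLiveImages();
}
```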
// CULPRIT 3: Undisposed controllers
class _MyState extends State<MyWidget> {
final _controller = TextEditingController();
final _scrollController = ScrollController();
final _animController = AnimationController(vsync: this);
@override
void dispose() {
_controller.dispose();
_scrollController.dispose();
_animController.dispose(); // MUST dispose all controllers
super.dispose();
}
}
// CULPRIT 4: Large lists held in state
// Fix: Use pagination, lazy loading, or dispose old data
Step 6: Automate detection
// Detect leaked disposables with the leak_tracker package:
import 'package:leak_tracker/leak_tracker.dart';
void main() {
  LeakTracking.start(); // collects leaked objects for later inspection
  runApp(MyApp());
}
Q2: Why is ListView.builder better than ListView with children? What happens internally?
What the interviewer is REALLY testing:
Whether you understand lazy rendering, the viewport/sliver protocol, and the real performance implications at the framework level.
Answer:
ListView (with children:) creates ALL child widgets and their elements/render objects immediately, even those offscreen:
// BAD for large lists — creates ALL 10,000 widgets upfront:
ListView(
children: List.generate(10000, (i) => ListTile(title: Text('Item $i'))),
)
ListView.builder only creates widgets that are visible + a small buffer (cacheExtent):
// GOOD — only creates ~15-20 widgets at a time:
ListView.builder(
itemCount: 10000,
itemBuilder: (context, index) => ListTile(title: Text('Item $index')),
)
What happens internally with ListView.builder:
- The Viewport determines the visible pixel range (e.g., 0px to 800px).
- SliverList (the sliver) asks its SliverChildBuilderDelegate to build children lazily.
- The delegate calls itemBuilder ONLY for indices whose items fall within viewport + cacheExtent.
- cacheExtent defaults to 250px -- so items slightly offscreen are pre-built for smooth scrolling.
- As the user scrolls, items scrolling OUT of the viewport+cache are destroyed (element deactivated, render object detached).
- Items scrolling IN are built fresh.
The concrete difference:
| (10,000 items) | ListView(children:) | ListView.builder |
|---|---|---|
| Widget objects created | 10,000 | ~20 (visible + cache) |
| Element objects in tree | 10,000 | ~20 |
| RenderObject layout/paint | 10,000 laid out | ~20 laid out |
| Initial build time | O(n) -- very slow | O(visible) -- instant |
| Memory | O(n) -- all items in memory | O(visible) -- constant |
| Scroll performance | Smooth (already built) | Smooth (build cost per item is tiny) |
Key subtlety -- widget creation vs rendering:
// Even ListView(children:) only PAINTS visible items.
// But it still BUILDS and LAYS OUT all items. The build and
// layout cost is the real problem, not painting.
When ListView(children:) is actually fine:
- Small lists (under 20-30 items).
- When all items must be measured upfront (e.g., Wrap-like behavior).
- When children have complex state that would be lost on rebuild.
ListView.builder also reuses Elements where it can:
When the item at a given slot rebuilds with a widget of the same type and key, Flutter updates the existing Element (and its State) in place instead of recreating it. Unlike Android's RecyclerView, Flutter does not keep a pool of offscreen ViewHolders -- Elements that scroll out of the cache region are deactivated and discarded -- but because building a handful of lightweight widgets per frame is cheap, the net effect is similar.
Q3: You have a list of 10,000 items -- the scroll is janky. Walk through your debugging process.
What the interviewer is REALLY testing:
Whether you can systematically diagnose jank rather than making random optimizations.
Answer:
Step 1: Measure, don't guess
flutter run --profile # MUST be profile mode — debug mode is always janky
Open DevTools -> Performance tab. Record while scrolling. Look for frames exceeding 16ms (the red bars).
Step 2: Identify the bottleneck -- build, layout, paint, or raster?
The performance overlay shows two graphs:
- UI thread (top): build + layout + paint work
- Raster thread (bottom): GPU compositing + shader compilation
If UI thread is slow -> problem is in your Dart code (build/layout).
If Raster thread is slow -> problem is in rendering (shaders, opacity layers, clip paths).
Step 3: If UI thread is janky -- analyze build costs
Open DevTools -> Performance -> select a slow frame -> look at the flame chart.
Common findings and fixes:
// PROBLEM 1: Not using ListView.builder
// Fix: Switch from ListView(children:) to ListView.builder
// PROBLEM 2: Expensive build method per item
Widget build(BuildContext context) {
// BAD: Heavy computation in build
final processedData = heavyTransform(rawData[index]); // Move this to initState, or cache it
return ExpensiveWidget(data: processedData);
}
// PROBLEM 3: Unnecessary rebuilds — parent setState rebuilds all items
// Fix: Extract item into a separate StatelessWidget (or const):
class _ItemWidget extends StatelessWidget {
const _ItemWidget({required this.item}); // const constructor
final Item item;
@override
Widget build(BuildContext context) => ListTile(title: Text(item.name));
}
// PROBLEM 4: Images loading synchronously or at full resolution
// Fix:
Image.network(
url,
cacheWidth: 100, // Decode at display size
cacheHeight: 100,
frameBuilder: (context, child, frame, loaded) {
if (!loaded) return const SizedBox(height: 100); // Placeholder
return child;
},
)
// PROBLEM 5: Items have variable height — layout is expensive
// Fix: Use itemExtent for fixed-height items:
ListView.builder(
itemExtent: 72.0, // Fixed height — skips measuring each child
itemCount: 10000,
itemBuilder: (context, i) => ListTile(title: Text('Item $i')),
)
// Or use prototypeItem for consistent heights determined by one item:
ListView.builder(
prototypeItem: ListTile(title: Text('Prototype')),
itemCount: 10000,
itemBuilder: (context, i) => ListTile(title: Text('Item $i')),
)
Step 4: If Raster thread is janky -- analyze rendering costs
// PROBLEM: Opacity/ColorFilter on complex subtrees
// BAD:
Opacity(opacity: 0.5, child: ComplexWidget())
// GOOD:
FadeTransition(opacity: animation, child: ComplexWidget())
// Or use color's alpha instead of Opacity widget
// PROBLEM: ClipRRect on every item
// BAD:
ClipRRect(borderRadius: BorderRadius.circular(8), child: ...)
// GOOD: Use decoration instead:
DecoratedBox(
decoration: BoxDecoration(borderRadius: BorderRadius.circular(8)),
child: ...,
)
Step 5: Add RepaintBoundary if items cause unnecessary repaints
ListView.builder(
itemBuilder: (context, index) {
return RepaintBoundary( // Each item paints independently
child: _ItemWidget(item: items[index]),
);
},
)
Q4: What causes "jank" and what exactly is "shader compilation jank"? How does Impeller solve it?
What the interviewer is REALLY testing:
Whether you understand the rendering pipeline at the GPU level and Flutter's historical Skia problem.
Answer:
Jank = any frame that takes longer than 16.67ms (at 60fps) or 8.33ms (at 120fps) to produce. The user sees a visible stutter or hitch.
Shader compilation jank is a specific type of jank that happens when the GPU encounters a new visual effect for the first time:
- Flutter uses a graphics library (Skia or Impeller) to draw UI.
- Skia converts drawing commands into GPU shaders (small programs that run on the GPU).
- Shaders must be compiled before they can execute.
- Shader compilation happens on the raster thread and can take 20-200ms.
- This only happens the FIRST TIME a particular visual effect is rendered.
- After compilation, the shader is cached -- subsequent frames are fast.
Why it is unpredictable:
// First time a user sees this animation -> shader compiles -> jank
// Second time -> cached -> smooth
AnimatedContainer(
decoration: BoxDecoration(
gradient: LinearGradient(...),
borderRadius: BorderRadius.circular(20),
boxShadow: [BoxShadow(blurRadius: 10)],
),
// The combination of gradient + border radius + shadow requires
// a specific shader that may not have been compiled yet.
)
The old Skia approach -- SkSL warmup (pre-Impeller):
# Capture shaders during a test run:
flutter run --profile --cache-sksl --purge-persistent-cache
# After exercising the app, press 'M' in terminal to export:
flutter build apk --bundle-sksl-path flutter_01.sksl.json
This pre-compiles shaders at build time. But it is fragile -- different GPUs need different shaders, and you cannot capture every possible visual combination.
How Impeller solves this:
Impeller (the default renderer on iOS since Flutter 3.10, and rolled out as the Android default in later stable releases) takes a fundamentally different approach:
- All shaders are pre-compiled at BUILD TIME. Impeller knows every shader it will ever need because it uses a fixed set of shader programs.
- No runtime shader compilation. Zero. Ever. The "first frame" problem disappears.
- Simpler shader architecture. Instead of generating specialized shaders for each visual combination (like Skia), Impeller uses general-purpose shaders with uniform parameters.
- Tessellation on CPU. Complex paths (rounded rects, etc.) are converted to triangles on the CPU, then sent to the GPU as simple triangle draws.
| | Skia | Impeller |
|---|---|---|
| Shader compilation | Runtime (lazy) | Build time (AOT) |
| First-frame jank | Yes | No |
| Shader count | Thousands (generated per-effect) | Dozens (pre-defined) |
| Backend | OpenGL/Vulkan | Metal (iOS), Vulkan/OpenGL (Android) |
Q5: Why does calling setState on a parent cause ALL children to rebuild? How to prevent it?
What the interviewer is REALLY testing:
Whether you understand the widget rebuild mechanism and the specific techniques to scope rebuilds.
Answer:
When you call setState() on a widget, Flutter marks that element as dirty. During the next frame, build() is called on that element. The build method returns a new widget tree. Flutter then recursively reconciles (diffs) this new tree against the old tree.
The key insight: build() returns NEW widget instances for all children. Even if the data hasn't changed, new widget objects are created, and Flutter must check each one.
class ParentWidget extends StatefulWidget { ... }
class _ParentState extends State<ParentWidget> {
int _counter = 0;
@override
Widget build(BuildContext context) {
print('Parent build');
return Column(
children: [
Text('Counter: $_counter'), // Needs rebuild
const ExpensiveChart(), // Does NOT need rebuild
UserProfile(user: widget.user), // Does NOT need rebuild
HeavyList(items: widget.items), // Does NOT need rebuild
],
);
}
}
When setState(() => _counter++) is called, ALL four children are "rebuilt" in the sense that build() creates new widget objects for them. However, Flutter's reconciliation may short-circuit if it detects the widget hasn't changed.
Prevention techniques (from cheapest to most architectural):
1. const constructors -- cheapest and most effective:
// If a widget and all its descendants are const, Flutter SKIPS rebuild entirely:
const ExpensiveChart() // Same identical instance. Flutter: "nothing changed, skip."
// This works because const objects are compile-time constants —
// same arguments -> same object identity -> widget == oldWidget -> skip rebuild.
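Conceptually, this short-circuit lives in the framework's reconciliation step, Element.updateChild. A simplified sketch of the decision (illustrative, not the actual framework source -- the real method also handles slots and keys):

```dart
import 'package:flutter/widgets.dart';

// Simplified sketch of the reconciliation decision:
Element? updateChildSketch(Element? child, Widget? newWidget) {
  if (child != null && newWidget != null) {
    if (identical(child.widget, newWidget)) {
      return child; // const / same instance: skip the entire subtree
    }
    if (Widget.canUpdate(child.widget, newWidget)) {
      child.update(newWidget); // same runtimeType & key: update in place
      return child;
    }
  }
  // Otherwise the old child is deactivated and the new widget inflated
  // into a fresh Element (omitted here).
  return null;
}
```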
2. Extract child widgets into separate StatefulWidgets:
// BEFORE: Everything rebuilds when counter changes
class _ParentState extends State<ParentWidget> {
int _counter = 0;
@override
Widget build(BuildContext context) {
return Column(
children: [
Text('$_counter'),
ExpensiveWidget(data: data), // Rebuilds unnecessarily!
],
);
}
}
// AFTER: Only the counter widget rebuilds
class _ParentState extends State<ParentWidget> {
@override
Widget build(BuildContext context) {
return Column(
children: [
CounterWidget(), // Has its own setState — isolated
ExpensiveWidget(data: data), // Only rebuilds if data changes
],
);
}
}
3. Use the "child" parameter pattern (Builder pattern):
// AnimatedBuilder, ValueListenableBuilder, etc. use this:
ValueListenableBuilder<int>(
valueListenable: _counter,
builder: (context, count, child) {
// 'child' is NOT rebuilt — it was created outside the builder
return Row(
children: [
Text('$count'), // Only this rebuilds
child!, // Preserved across rebuilds
],
);
},
child: const ExpensiveWidget(), // Built once, passed through
)
4. Use selective state management:
// BlocBuilder with buildWhen:
BlocBuilder<MyBloc, MyState>(
buildWhen: (previous, current) => previous.name != current.name,
builder: (context, state) => Text(state.name),
)
// Riverpod select:
final name = ref.watch(userProvider.select((u) => u.name));
// Provider Selector:
Selector<UserModel, String>(
selector: (_, user) => user.name,
builder: (_, name, __) => Text(name),
)
Q6: What's the cost of using MediaQuery.of(context) vs MediaQuery.sizeOf(context)?
What the interviewer is REALLY testing:
Whether you understand how InheritedWidget dependency tracking works and why granular subscriptions matter.
Answer:
MediaQuery.of(context) subscribes the calling widget to ALL MediaQuery changes.
MediaQuery.sizeOf(context) subscribes ONLY to size changes.
// EXPENSIVE — rebuilds on ANY MediaQuery change:
@override
Widget build(BuildContext context) {
final size = MediaQuery.of(context).size;
// This widget now rebuilds when ANY of these change:
// - Screen size (rotation, resize)
// - Padding (keyboard appears)
// - Text scale factor
// - Platform brightness (dark mode toggle)
// - View insets (keyboard)
// - Gesture settings
// - High contrast mode
// - Bold text
// ...and more
}
// CHEAP — rebuilds ONLY when size changes:
@override
Widget build(BuildContext context) {
final size = MediaQuery.sizeOf(context);
// Only rebuilds when width or height changes.
// Keyboard appearing? Text scale change? Dark mode? -> No rebuild.
}
Why does this matter?
The most common scenario: keyboard appears. When the soft keyboard opens:
- MediaQuery.viewInsets changes (new bottom inset).
- MediaQuery.padding may change.
- MediaQuery.size does NOT change (screen is same size).
If your widget uses MediaQuery.of(context).size, it STILL rebuilds when the keyboard appears, because .of(context) subscribes to the entire MediaQueryData object. Any field change triggers a rebuild.
// Available granular accessors (Flutter 3.10+):
MediaQuery.sizeOf(context) // Only size
MediaQuery.paddingOf(context) // Only padding
MediaQuery.viewInsetsOf(context) // Only view insets (keyboard)
MediaQuery.textScalerOf(context) // Only text scale
MediaQuery.platformBrightnessOf(context) // Only brightness
MediaQuery.orientationOf(context) // Only orientation
How it works internally (simplified):
MediaQuery is an InheritedModel, not just an InheritedWidget. InheritedModel allows widgets to subscribe to specific "aspects" of the data:
// MediaQuery.of — subscribes to ALL aspects:
static MediaQueryData of(BuildContext context) {
return context.dependOnInheritedWidgetOfExactType<MediaQuery>()!.data;
// dependOnInheritedWidgetOfExactType -> registers dependency on the
// entire InheritedWidget. ANY change -> rebuild.
}
// MediaQuery.sizeOf — subscribes to only the 'size' aspect:
static Size sizeOf(BuildContext context) {
return InheritedModel.inheritFrom<MediaQuery>(context, aspect: _MediaQueryAspect.size)!.data.size;
// inheritFrom with aspect -> registers dependency on ONLY that aspect.
// Only notified when updateShouldNotifyDependent returns true for 'size'.
}
Real-world impact: In a complex app, a single MediaQuery.of(context) at a high level can cause hundreds of widgets to rebuild when the keyboard appears. Replacing it with MediaQuery.sizeOf(context) eliminates all of those unnecessary rebuilds.
Q7: Explain RepaintBoundary -- when does it help and when does it make things WORSE?
What the interviewer is REALLY testing:
Whether you understand the repaint mechanism, compositing layers, and the tradeoff between repaint isolation and memory cost.
Answer:
What RepaintBoundary does:
It creates a separate compositing layer for its subtree. When a child needs repainting, only that layer is repainted -- the parent layer is untouched. Without it, repainting one widget can cascade repaints up to the nearest ancestor RepaintBoundary.
When it HELPS:
// Scenario: An animation next to static content
Row(
children: [
RepaintBoundary(
child: SpinningLogo(), // Repaints 60 times/sec
),
ExpensiveStaticChart(), // Now does NOT repaint when logo spins
],
)
Without RepaintBoundary, every frame the logo spins, Flutter would repaint the ENTIRE Row, including the expensive chart. With it, only the logo's layer is repainted.
Good use cases:
- Animations next to static content.
- Scrollable list items (ListView.builder adds these automatically).
- Frequently updating widgets (clocks, progress bars, real-time data).
- Custom painters that repaint often.
When it makes things WORSE:
// BAD: RepaintBoundary on every tiny widget
Column(
children: List.generate(100, (i) => RepaintBoundary(
child: Text('Item $i'), // Tiny widget — overhead > benefit
)),
)
Why it can be worse:
- Memory cost: Each RepaintBoundary creates an OffsetLayer, and cached layers are backed by GPU textures (bitmaps). For a 300x50 list item at 3x device pixel ratio, that is 900x150x4 bytes = ~540KB per item. 100 items = 54MB of GPU memory just for layer textures.
- Compositing cost: More layers means more work for the compositor to merge layers together. If you have 200 layers, the GPU must composite 200 textures every frame.
- When everything repaints anyway: If the parent and children all repaint at the same rate, RepaintBoundary adds overhead with zero benefit.
// BAD: Everything inside this subtree repaints together anyway
RepaintBoundary(
child: AnimatedWidget(
child: AnotherAnimatedWidget(
child: YetAnotherAnimatedWidget(),
),
),
)
// All three animate simultaneously. No repaint is saved.
// You just added layer overhead for nothing.
How to verify if RepaintBoundary is helping:
// Enable repaint rainbow in DevTools or:
import 'package:flutter/rendering.dart';
debugRepaintRainbowEnabled = true;
// Each repaint changes the overlay color. If a region changes color
// every frame, it's repainting. RepaintBoundary should isolate this
// to only the animated region.
Flutter adds RepaintBoundary automatically in these places:
- Each item in ListView.builder / GridView.builder
- Each Route (so one page repainting doesn't affect the page below)
- The root of the app
- EditableText (text fields)
Q8: What happens if you use a heavy computation in build()? How does this affect frame rendering?
What the interviewer is REALLY testing:
Whether you understand that build() runs on the UI thread and blocking it directly delays frame delivery.
Answer:
Heavy computation in build() blocks the UI thread, causing dropped frames (jank).
The rendering pipeline per frame is:
Vsync signal (every 16.67ms)
-> Build phase (run build() for dirty widgets)
-> Layout phase (compute sizes and positions)
-> Paint phase (generate display list)
-> Compositing (send to raster thread)
ALL of these run on the UI thread, sequentially. If build() takes 50ms, the total frame time exceeds 16.67ms, and frames are dropped.
// BAD: O(n^2) sort in build — called every rebuild:
@override
Widget build(BuildContext context) {
// This runs on the UI thread. 10,000 items -> ~50ms -> 3 dropped frames
final sorted = List.of(items)..sort((a, b) => complexComparison(a, b));
final filtered = sorted.where((item) => expensiveFilter(item)).toList();
return ListView.builder(
itemCount: filtered.length,
itemBuilder: (_, i) => Text(filtered[i].name),
);
}
The fixes, from simplest to most robust:
// FIX 1: Cache the computation — don't recompute on every build:
class _MyState extends State<MyWidget> {
late List<Item> _sortedItems;
@override
void initState() {
super.initState();
_sortedItems = _computeSorted(widget.items);
}
@override
void didUpdateWidget(MyWidget oldWidget) {
super.didUpdateWidget(oldWidget);
if (oldWidget.items != widget.items) {
_sortedItems = _computeSorted(widget.items);
}
}
List<Item> _computeSorted(List<Item> items) {
return List.of(items)..sort((a, b) => complexComparison(a, b));
}
@override
Widget build(BuildContext context) {
return ListView.builder(
itemCount: _sortedItems.length,
itemBuilder: (_, i) => Text(_sortedItems[i].name),
);
}
}
// FIX 2: Move to isolate for truly heavy work:
void _onDataChanged(List<Item> newItems) async {
final sorted = await Isolate.run(() {
return List.of(newItems)..sort((a, b) => complexComparison(a, b));
});
setState(() => _sortedItems = sorted);
}
// FIX 3: Use a select/memo pattern with state management:
// Riverpod example:
final sortedItemsProvider = Provider<List<Item>>((ref) {
final items = ref.watch(rawItemsProvider);
return List.of(items)..sort((a, b) => complexComparison(a, b));
// Riverpod caches this — only recomputes when rawItemsProvider changes
});
Rule of thumb for build():
- build() should take < 1ms for simple widgets, < 4ms for complex screens.
- NEVER do: file I/O, JSON parsing, sorting large lists, image processing, regex on large strings.
- Build should be a pure function: inputs (state + widget properties) -> output (widget tree). No side effects, no heavy computation.
Q9: Why is it bad to create objects (like BoxDecoration) inside build()? Or is it a myth?
What the interviewer is REALLY testing:
Whether you can distinguish between real performance problems and cargo-cult optimization, and whether you understand const propagation.
Answer:
It is mostly a myth for simple objects. But it becomes real for specific cases.
The "myth" part -- object allocation in Dart is nearly free:
@override
Widget build(BuildContext context) {
// This creates a new BoxDecoration every build. Is it bad?
return Container(
decoration: BoxDecoration(
color: Colors.blue,
borderRadius: BorderRadius.circular(8),
),
);
}
Dart's garbage collector is generational. Short-lived objects (created and discarded within a frame) are collected in the young-generation space, which is essentially free. Creating a BoxDecoration takes nanoseconds. For this level of allocation, there is NO measurable performance impact.
When it IS a real problem:
1. Preventing const optimization:
// WITHOUT const — new object every build, Flutter must diff:
Container(
decoration: BoxDecoration(color: Colors.blue),
child: Text('Hello'),
)
// WITH const — same object identity, Flutter SKIPS diffing:
const DecoratedBox(
decoration: BoxDecoration(color: Colors.blue),
child: Text('Hello'),
)
// Since the widget is identical (same instance), Flutter skips
// calling build on this entire subtree. This IS a real optimization
// for deep trees.
2. Objects with expensive constructors:
// BAD: RegExp compilation in build — actually expensive:
@override
Widget build(BuildContext context) {
final regex = RegExp(r'complex|pattern|with|many|alternatives');
// RegExp compilation is O(pattern_length). Do this once.
return Text(input.replaceAll(regex, '***'));
}
// BAD: Creating ImageFilter in build:
@override
Widget build(BuildContext context) {
return BackdropFilter(
filter: ImageFilter.blur(sigmaX: 10, sigmaY: 10), // Allocates native resource
child: child,
);
}
// ImageFilter allocates a native Skia object. Not free.
3. Creating objects that trigger work downstream:
// BAD: New TextEditingController every build -> resets text field state:
@override
Widget build(BuildContext context) {
return TextField(
controller: TextEditingController(), // BUG! Resets on every build
);
}
Summary:
| Object | In build()? | Why |
|--------|------------|-----|
| BoxDecoration, EdgeInsets, TextStyle | OK (prefer const) | Cheap allocation, GC handles it |
| Colors.blue, BorderRadius.circular(8) | OK (prefer const) | Tiny objects |
| RegExp, ImageFilter | Avoid | Expensive to construct |
| Controllers (TextEditing, Animation, Scroll) | NEVER | Stateful resources -- must live in State |
| Lists from .map().toList() | OK for small lists | GC handles it |
| Large sorted/filtered lists | Avoid | O(n log n) per frame adds up |
The real rule: Use const wherever possible. Not because allocation is expensive, but because const enables Flutter to skip entire subtrees during reconciliation.
Q10: What's the difference between const and final for performance in Flutter widgets?
What the interviewer is REALLY testing:
Whether you understand compile-time constants vs runtime immutability, and specifically how const widgets enable Flutter's reconciliation optimization.
Answer:
| | const | final |
|---|---|---|
| When resolved | Compile time | Runtime |
| Identity | All const objects with the same values are canonicalized to one instance (same memory address) | Different objects even with the same values |
| Widget rebuilds | Flutter detects identical(oldWidget, newWidget) -> skips rebuild entirely | Flutter falls back to type/key checks and Element.update -> may still rebuild |
| Memory | One instance in memory regardless of how many times it is used | New instance each time the constructor runs |
// const — same instance everywhere, compile-time constant:
const Text('Hello') // Widget A
const Text('Hello') // Widget B — SAME object as A (identical)
// final — runtime immutable, but new instance each time:
final text = Text('Hello'); // New object each time this line runs
The performance implication in Flutter's element reconciliation:
@override
Widget build(BuildContext context) {
return Column(
children: [
// CONST: If parent rebuilds, Flutter checks:
// identical(oldWidget, newWidget) -> true -> SKIP updateChild entirely
const Text('Static Label'),
// NON-CONST: Flutter checks:
// identical(oldWidget, newWidget) -> false (different instances)
// Then: oldWidget.runtimeType == newWidget.runtimeType -> true
// Then: oldWidget.key == newWidget.key -> true
// Then: calls updateChild -> calls widget.canUpdate -> true
// Then: calls Element.update(newWidget) -> compares properties
Text('Counter: $_counter'),
],
);
}
For const Text('Static Label'):
- On rebuild, the EXACT SAME OBJECT is returned.
- Flutter's updateChild sees identical(old, new) == true and skips immediately.
- Zero comparison work. Zero rebuild of children.
For non-const Text('Counter: $_counter'):
- On rebuild, a NEW Text object is created with the new counter value.
- Flutter must check type, key, then call update, then diff properties.
- Still fast (Text is simple), but NOT zero-cost.
Where const makes the biggest difference:
// A deeply nested const tree — ENTIRE subtree skipped on rebuild:
const Card(
child: Padding(
padding: EdgeInsets.all(16),
child: Column(
children: [
Icon(Icons.star, size: 48),
SizedBox(height: 8),
Text('Featured', style: TextStyle(fontSize: 24)),
Text('This entire card is a compile-time constant'),
],
),
),
)
// This is ONE canonical object in memory. No matter how many times the parent
// rebuilds, this entire 7-widget subtree is skipped at zero cost.
When const is impossible:
// Cannot be const — depends on runtime value:
Text('Hello ${user.name}') // Runtime string interpolation
Padding(padding: EdgeInsets.all(spacing)) // Runtime variable
Icon(isSelected ? Icons.star : Icons.star_border) // Runtime condition
Q11: Your app crashes with OOM on low-end devices when loading images -- how do you fix it?
What the interviewer is REALLY testing:
Whether you understand image memory management, decoding costs, and practical solutions for memory-constrained environments.
Answer:
The core problem: A 4000x3000 JPEG is 2MB on disk but 48MB in memory when decoded (4000 x 3000 x 4 bytes per pixel). Load 10 such images = 480MB. Low-end devices with 2GB total RAM will OOM.
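The arithmetic behind that claim, as a throwaway sketch (RGBA, 4 bytes per pixel):

```dart
// Decoded bitmap size = width x height x 4 bytes (RGBA8888).
int decodedBytes(int width, int height) => width * height * 4;

void main() {
  print(decodedBytes(4000, 3000)); // 48000000 bytes ~= 48MB for ONE photo
  print(decodedBytes(300, 225) * 10); // ten thumbnails: ~2.7MB total
}
```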
Fix 1: Decode at display size (most impactful):
// BAD: Decodes at full resolution (4000x3000 = 48MB):
Image.network('https://example.com/photo.jpg')
// GOOD: Decodes at display size (300x225 = 270KB):
Image.network(
'https://example.com/photo.jpg',
cacheWidth: 300, // Decode at this width
cacheHeight: 225, // Decode at this height
)
// BETTER: Calculate from constraints:
LayoutBuilder(builder: (context, constraints) {
final devicePixelRatio = MediaQuery.devicePixelRatioOf(context);
return Image.network(
'https://example.com/photo.jpg',
cacheWidth: (constraints.maxWidth * devicePixelRatio).toInt(),
);
})
Fix 2: Limit the image cache:
// Default cache is 1000 images / 100MB — too much for low-end devices:
void main() {
PaintingBinding.instance.imageCache.maximumSize = 50; // Max 50 images
PaintingBinding.instance.imageCache.maximumSizeBytes = 30 << 20; // 30MB max
runApp(MyApp());
}
Fix 3: Evict images when navigating away:
@override
void dispose() {
// Evict a specific image. The cache key is the ImageProvider, not a URL string,
// so use ImageProvider.evict():
NetworkImage('https://example.com/large-photo.jpg').evict();
// Or clear entire cache when leaving an image-heavy screen:
imageCache.clear();
super.dispose();
}
Fix 4: Use thumbnail URLs from your backend:
// Request appropriately sized images from CDN:
String getThumbnailUrl(String originalUrl, int width) {
// Cloudinary example:
return originalUrl.replaceFirst('/upload/', '/upload/w_$width,c_scale/');
// imgix example:
return '$originalUrl?w=$width&auto=format';
}
Fix 5: Use cached_network_image with resize:
CachedNetworkImage(
imageUrl: url,
memCacheWidth: 300, // Memory cache at this resolution
maxWidthDiskCache: 600, // Disk cache at this resolution
placeholder: (_, __) => const SizedBox(
height: 200,
child: Center(child: CircularProgressIndicator()),
),
fadeInDuration: Duration.zero, // Skip fade to reduce compositing work
)
Fix 6: For lists with many images -- ensure images are freed when offscreen:
// ListView.builder automatically destroys offscreen items.
// But if you use keepAlive: true (AutomaticKeepAliveClientMixin),
// images stay in memory! Only use keepAlive when necessary.
// Also: use the 'gaplessPlayback' flag to prevent flicker:
Image.network(url, gaplessPlayback: true)
Fix 7: For truly massive image grids (like a gallery):
// Use a package like flutter_staggered_grid_view with:
// 1. Low-res thumbnails in the grid
// 2. Full-res only when user taps to view
// 3. precacheImage for the next image in a viewer
// 4. Explicitly evict when moving to a different image
// Pre-cache the next image in a gallery viewer:
precacheImage(
ResizeImage(NetworkImage(nextUrl), width: screenWidth.toInt()),
context,
);
Q12: What is widget recycling and does Flutter actually recycle widgets like RecyclerView in Android?
What the interviewer is REALLY testing:
Whether you understand the critical difference between Flutter's element reuse model and Android's ViewHolder pattern.
Answer:
Flutter does NOT recycle widgets. It recycles Elements (and their associated State and RenderObjects).
This is a fundamental architectural difference from Android:
Android RecyclerView:
Scrolling down:
ViewHolder (View) for Item 0 scrolls off-screen -> placed in recycler pool
Item 10 needs a view -> takes ViewHolder from pool -> rebinds data
The SAME View object is reused with different data.
Flutter ListView.builder:
Scrolling down:
Widget for Item 0 scrolls off-screen -> Widget is discarded (it's immutable, cheap)
Element for Item 0 -> deactivated, placed in the inactive element list
Item 10 needs to be built -> itemBuilder returns a NEW Widget
Flutter checks: can this new Widget reuse an existing Element?
If same widget TYPE and KEY -> YES: Element is updated with new widget (recycled)
If not -> Element is disposed, new Element created
What exactly gets recycled:
ListView.builder(
itemBuilder: (context, index) {
return ListTile( // New Widget every time — cheap, immutable
title: Text(items[index].name),
);
},
)
When you scroll:
- Item 0 (a ListTile) scrolls offscreen.
- Item 15 scrolls onscreen.
- itemBuilder creates a new ListTile widget for index 15.
- Flutter finds the deactivated Element that was for item 0.
- Since both are ListTile (same type, no key), Flutter REUSES the Element.
- The Element updates its reference from the old widget to the new widget.
- The Element's RenderObject is also reused -- just updates its properties.
What this means in practice:
// STATE is tied to Elements, not Widgets:
class _ItemState extends State<ItemWidget> {
bool isExpanded = false; // This state lives in the Element
@override
Widget build(BuildContext context) {
return ListTile(
title: Text(widget.title),
trailing: Icon(isExpanded ? Icons.expand_less : Icons.expand_more),
onTap: () => setState(() => isExpanded = !isExpanded),
);
}
}
// BUG: If Item 0 was expanded and Item 15 reuses its Element,
// Item 15 will appear expanded! The state was "recycled" with the Element.
// FIX: Use a Key to force fresh Elements:
ListView.builder(
itemBuilder: (context, index) {
return ItemWidget(
key: ValueKey(items[index].id), // Unique key -> no Element reuse across different items
title: items[index].name,
);
},
)
Summary:
| | Android RecyclerView | Flutter ListView.builder |
|---|---|---|
| What's recycled | View objects | Element + State + RenderObject |
| Explicitly managed | Yes (ViewHolder, onBindViewHolder) | Automatic by framework |
| State bugs | Less common (you manually bind all data) | Possible if you forget Keys |
| Pool size | Configurable | Automatic (deactivated Elements) |
Q13: Why does Navigator.push sometimes cause memory leaks? How to detect and fix?
What the interviewer is REALLY testing:
Whether you understand route lifecycle, how pushed routes retain the previous route in memory, and how closures/callbacks in routes can prevent garbage collection.
Answer:
The leak mechanisms:
1. Route stack retention -- pushed routes keep previous routes alive:
// Every push ADDS to the stack. Nothing is released until popped.
Navigator.push(context, route1); // Stack: [home, route1]
Navigator.push(context, route2); // Stack: [home, route1, route2]
Navigator.push(context, route3); // Stack: [home, route1, route2, route3]
// All four routes are in memory simultaneously.
// If each route has images/data, memory grows linearly.
Fix: Use pushReplacement when appropriate:
// Login -> Home: Don't keep login screen in memory
Navigator.pushReplacement(context, MaterialPageRoute(
builder: (_) => HomeScreen(),
));
// Or push and remove all previous routes:
Navigator.pushAndRemoveUntil(
context,
MaterialPageRoute(builder: (_) => HomeScreen()),
(route) => false, // Remove ALL previous routes
);
2. Callback references preventing GC:
// Screen A pushes Screen B with a callback:
Navigator.push(context, MaterialPageRoute(
builder: (_) => ScreenB(
onComplete: (result) {
// This closure captures Screen A's State ('this').
// Even after Screen A is popped, if Screen B holds this callback,
// Screen A's State cannot be garbage collected.
setState(() => _result = result);
},
),
));
// Fix: Use Navigator.pop with a result instead:
// Screen A:
final result = await Navigator.push<String>(
context,
MaterialPageRoute(builder: (_) => ScreenB()),
);
if (result != null) setState(() => _result = result);
// Screen B:
Navigator.pop(context, 'the result');
3. Global references to BuildContext from routes:
// BAD: Storing context in a global/singleton:
class NavigationService {
static BuildContext? lastContext;
}
// In any screen:
NavigationService.lastContext = context;
// This keeps the ENTIRE route's Element tree alive indefinitely!
4. StreamSubscriptions / Timers not cancelled:
// If Screen B starts a stream listener pointing back to Screen A's data,
// and Screen A is popped without cancelling, the listener retains
// references to Screen A's objects.
How to detect:
// 1. DevTools Memory tab — take snapshots before and after navigation:
// Push Screen B -> Pop Screen B -> Take snapshot
// Search for Screen B's State class. If it exists, it leaked.
// 2. Use Flutter's built-in leak detector:
// In debug mode, disposed widgets that are still referenced will show warnings.
// 3. Manual check — override dispose and verify it's called:
@override
void dispose() {
debugPrint('ScreenB disposed'); // If this never prints after popping, it leaked
super.dispose();
}
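A fourth check you can sketch yourself uses dart:core's WeakReference (available since Dart 2.17). GC timing is not guaranteed, so a non-null target is only a hint of a leak, not proof; the helper below is hypothetical, not a framework API:

```dart
import 'dart:async';

WeakReference<Object>? _screenBRef;

// Call this right after popping ScreenB, passing its State object.
void watchForLeak(Object screenBState) {
  _screenBRef = WeakReference(screenBState);
  Timer(const Duration(seconds: 10), () {
    // If the State became unreachable and GC has run, target is now null.
    final stillReachable = _screenBRef?.target != null;
    print(stillReachable
        ? 'ScreenB State still reachable: possible leak'
        : 'ScreenB State collected: no leak');
  });
}
```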
Q14: What happens if you have 50 AnimationControllers active simultaneously?
What the interviewer is REALLY testing:
Whether you understand how AnimationControllers work with the Ticker system and the actual CPU/GPU cost of many simultaneous animations.
Answer:
50 AnimationControllers with active Tickers will all tick on every frame. Each frame (every 16.67ms at 60fps), the framework calls each ticker's callback, which updates the controller's value, which notifies all listeners, which typically call setState or trigger repaints.
The actual cost breakdown per frame:
- Ticker callbacks: 50 callbacks. Each is trivially cheap (~microseconds). Not a problem.
- Value notification: Each controller notifies its listeners. If each has 1 listener, that is 50 listener calls. Still cheap.
- setState / rebuild: If each animation drives a separate widget via setState, that is 50 setState calls. Flutter batches these into one frame, but build() runs for all 50 dirty widgets. This can be expensive depending on widget complexity.
- Repaint: If each animated widget is in the same repaint boundary, ALL repaint as one unit. If each has its own RepaintBoundary, 50 separate paints. More memory but better isolation.
- Compositing: 50 animated layers must be composited by the GPU. Moderate cost.
When 50 is fine:
// 50 simple animations (opacity fades, size changes) with RepaintBoundaries:
// Total frame time: ~2-4ms. Well within 16ms budget.
// The GPU handles simple compositing easily.
When 50 causes jank:
// 50 animations each triggering rebuild of a complex widget tree:
AnimatedBuilder(
animation: controller,
builder: (context, child) {
// If this build method is expensive (5ms each),
// 50 x 5ms = 250ms per frame -> massive jank
return ComplexLayoutWithShadowsAndClips(
value: controller.value,
);
},
)
Best practices for many simultaneous animations:
// 1. Use AnimatedBuilder with child parameter — most important:
AnimatedBuilder(
animation: controller,
child: const ExpensiveChild(), // Built ONCE, not every frame
builder: (context, child) {
return Transform.translate(
offset: Offset(0, controller.value * 100),
child: child, // Reused — not rebuilt
);
},
)
// 2. Prefer Transform/Opacity (compositing-only) over rebuilds:
// Transform, Opacity, and similar "compositing" widgets can animate
// without rebuilding or repainting. They just change the compositing matrix.
FadeTransition(opacity: animation, child: child) // Cheap — compositing only
// vs
AnimatedBuilder(
animation: animation,
builder: (_, __) => Opacity(opacity: animation.value, child: child), // Forces rebuild
)
// 3. Use a single AnimationController with multiple Tweens:
final controller = AnimationController(duration: Duration(seconds: 2), vsync: this);
final fadeAnim = CurvedAnimation(parent: controller, curve: Interval(0.0, 0.5));
final slideAnim = CurvedAnimation(parent: controller, curve: Interval(0.5, 1.0));
// One controller, one ticker, multiple animations — much cheaper than 50 controllers.
// 4. Use staggered animations:
// Instead of 50 controllers, use 1 controller with staggered intervals.
// Each item reads a different segment of the same animation.
// 5. Reduce ticker rate for non-critical animations:
// There is no built-in way to reduce ticker rate per controller,
// but you can skip frames:
int _frameCount = 0; // field on the State
controller.addListener(() {
  if (_frameCount++ % 2 == 0) return; // Handle only every other tick
  setState(() {});
});
Memory concern: Each AnimationController is small (~few hundred bytes). 50 controllers = negligible memory. The concern is CPU, not memory.
Q15: Explain the rendering pipeline step by step -- what happens between setState() and pixels on screen?
What the interviewer is REALLY testing:
Whether you understand Flutter's rendering pipeline at a deep, framework-internals level.
Answer:
Here is the complete pipeline, step by step:
setState()
|
v
1. SCHEDULE FRAME
| Element is marked dirty (added to _dirtyElements list)
| SchedulerBinding.scheduleFrame() is called
| This requests a VSync callback from the engine
|
v
2. VSYNC SIGNAL ARRIVES (from OS — every 16.67ms at 60fps)
| Engine calls SchedulerBinding.handleBeginFrame()
|
v
3. TRANSIENT CALLBACKS (handleBeginFrame)
| Ticker callbacks (AnimationControllers) fire here
| They update animation values, which may mark more widgets dirty
|
v
4. PERSISTENT CALLBACKS (handleDrawFrame)
|
|--> 4a. BUILD PHASE (WidgetsBinding.drawFrame -> buildScope)
| | _dirtyElements is sorted by depth (parents before children)
| | For each dirty element:
| | element.rebuild() -> calls widget.build(context)
| | Returns a new widget tree
| | Reconciliation (diff) with old tree:
| | - Same type & key? -> Update existing Element
| | - Different? -> Create new Element, dispose old
| | Children are recursively processed
| |
| v
|--> 4b. LAYOUT PHASE (PipelineOwner.flushLayout)
| | Visits RenderObjects marked as "needs layout" (depth-first)
| | Each RenderObject.performLayout() is called:
| | - Receives constraints from parent
| | - Measures children (passes constraints down)
| | - Determines its own size
| | - Positions children (sets parentData offsets)
| | After layout, each RenderObject has a final size and position
| |
| v
|--> 4c. COMPOSITING BITS UPDATE (PipelineOwner.flushCompositingBits)
| | Walks the RenderObject tree
| | Determines which RenderObjects need their own compositing layer
| | (RepaintBoundary, Opacity, Transform, etc.)
| |
| v
|--> 4d. PAINT PHASE (PipelineOwner.flushPaint)
| | Visits RenderObjects marked as "needs paint"
| | Each RenderObject.paint(PaintingContext, Offset) is called
| | Paint commands are recorded into a Layer tree:
| | - OffsetLayer, OpacityLayer, TransformLayer, etc.
| | - PictureLayer contains actual draw commands (lines, rects, text)
| | No pixels are actually rendered yet — just commands recorded
| |
| v
|--> 4e. SEMANTICS PHASE (PipelineOwner.flushSemantics)
| | Updates the semantics tree for accessibility
| | (Screen readers, TalkBack, VoiceOver)
| |
| v
5. COMPOSITING (RenderView.compositeFrame)
| The Layer tree is compiled into a Scene object
| Scene is sent to the Flutter Engine via window.render(scene)
|
v
6. ENGINE / RASTER THREAD
| The engine receives the Scene on the RASTER THREAD (separate thread!)
| Skia/Impeller converts the display list to GPU commands
| Shader programs execute on the GPU
| Pixels are written to a framebuffer
|
v
7. DISPLAY
The OS swaps the framebuffer at the next VSync
Pixels appear on screen
Key timing insight:
- Steps 1-5 happen on the UI thread (your Dart code's thread).
- Step 6 happens on the Raster thread (separate native thread).
- Both must complete within 16.67ms total for a smooth 60fps frame.
- The two threads form a pipeline: while the raster thread rasterizes frame N, the UI thread is already building frame N+1. Each thread must individually stay under 16.67ms; their work overlaps rather than adding up.
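You can watch both budgets at runtime with the frame-timing API; a minimal sketch:

```dart
import 'dart:ui' show FrameTiming;
import 'package:flutter/foundation.dart';
import 'package:flutter/scheduler.dart';

void installFrameTimingLogger() {
  SchedulerBinding.instance.addTimingsCallback((List<FrameTiming> timings) {
    for (final t in timings) {
      // buildDuration: UI-thread time (build + layout + paint)
      // rasterDuration: raster-thread time for the same frame
      debugPrint('UI ${t.buildDuration.inMilliseconds}ms | '
          'raster ${t.rasterDuration.inMilliseconds}ms | '
          'span ${t.totalSpan.inMilliseconds}ms');
    }
  });
}
```

Call it once after runApp; frames that blow either budget show up immediately in the log.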
What setState specifically does:
void setState(VoidCallback fn) {
// 1. Calls fn() synchronously — updates your state variables
fn();
// 2. Marks this Element as dirty
_element!.markNeedsBuild();
// markNeedsBuild adds element to the dirty list
// and calls SchedulerBinding.scheduleFrame() if not already scheduled
}
Q16: What is the cost of using Opacity widget vs FadeTransition? Why does Flutter documentation warn about Opacity?
What the interviewer is REALLY testing:
Whether you understand compositing layers and the difference between "repaint every frame" vs "composite layer transform."
Answer:
Opacity paints its child into an offscreen buffer (a compositing layer) and composites it with the given alpha. FadeTransition pushes the same kind of OpacityLayer, but when the animation ticks it mutates the existing layer's opacity directly, with no widget rebuild and no repaint of the child.
The actual rendering cost:
// Opacity widget — when parent rebuilds, a new Opacity widget is created,
// which may trigger a repaint of the child subtree:
AnimatedBuilder(
animation: animation,
builder: (context, child) {
return Opacity(
opacity: animation.value, // New Opacity widget every frame
child: child, // Even with child param, Opacity triggers repaint
);
},
child: ExpensiveChild(),
)
// FadeTransition — directly updates the OpacityLayer without repainting:
FadeTransition(
opacity: animation,
child: ExpensiveChild(), // Painted ONCE, then just layer opacity changes
)
How FadeTransition is cheaper:
- ExpensiveChild is painted once into an OffsetLayer.
- On each frame, FadeTransition only updates the OpacityLayer.opacity property.
- No Dart build, no repaint, no re-recording of paint commands.
When Opacity IS fine:
// Static opacity — no animation, no frequent rebuilds:
Opacity(
opacity: isVisible ? 1.0 : 0.5,
child: Text('Hello'),
)
// The child is painted once and composited with opacity.
// Since it doesn't change, the cost is a one-time layer allocation.
The real danger -- Opacity on complex subtrees during animation:
// BAD: Complex child repainted every frame during animation
Opacity(
opacity: _animatingValue, // Changes 60x/sec via setState
child: Column(
children: [
CustomPaint(painter: ExpensivePainter()), // Repainted every frame!
Image.asset('large_image.png'), // Re-decoded? No, but re-composited
ListView(children: hundredsOfItems), // All repainted!
],
),
)
Alternative: Use color alpha for simple cases:
// Instead of Opacity on a colored container:
// BAD:
Opacity(opacity: 0.5, child: Container(color: Colors.blue))
// GOOD: No extra layer needed:
Container(color: Colors.blue.withOpacity(0.5))
Q17: How does Flutter's image loading and caching actually work internally?
What the interviewer is REALLY testing:
Whether you understand the multi-layer caching system and can diagnose image-related performance issues.
Answer:
Flutter has a three-layer image system:
Layer 1: ImageProvider — resolves the image source
|
Layer 2: ImageStream / ImageStreamCompleter — manages async loading
|
Layer 3: ImageCache — caches decoded image data in memory
Detailed flow for Image.network(url):
1. Image widget creates a NetworkImage (ImageProvider)
2. ImageProvider.resolve() is called:
a. Check ImageCache (in-memory) -> if found, return cached ImageStreamCompleter
b. If not cached, call ImageProvider.loadImage():
- Download bytes from network (or check HTTP cache / disk cache)
- Decode bytes to ui.Codec -> ui.FrameInfo -> ui.Image
- ui.Image is a GPU texture handle (lives in GPU memory)
3. Cache the ImageStreamCompleter in ImageCache
4. Image widget receives the ui.Image and passes it to RenderImage
5. RenderImage paints it using Canvas.drawImage
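Layers 1 and 2 can also be driven by hand, which is handy for debugging load timing; a sketch (the URL handling is a placeholder):

```dart
import 'package:flutter/widgets.dart';

void logImageLoad(String url) {
  final ImageStream stream =
      NetworkImage(url).resolve(ImageConfiguration.empty);
  late final ImageStreamListener listener;
  listener = ImageStreamListener(
    (ImageInfo info, bool synchronousCall) {
      // synchronousCall == true means the image came straight from ImageCache.
      debugPrint('Loaded ${info.image.width}x${info.image.height} '
          '(from cache: $synchronousCall)');
      stream.removeListener(listener);
    },
    onError: (Object error, StackTrace? stack) =>
        debugPrint('Image failed: $error'),
  );
  stream.addListener(listener);
}
```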
ImageCache details:
// Default limits:
imageCache.maximumSize = 1000; // Max number of images
imageCache.maximumSizeBytes = 100 << 20; // 100MB
// Cache key = ImageProvider instance (uses == and hashCode)
// NetworkImage key = url + scale (request headers are NOT part of the key)
// AssetImage key = asset name + bundle + devicePixelRatio
// Cache stores ImageStreamCompleter, NOT raw bytes.
// The ui.Image (GPU texture) is what consumes real memory.
Common image performance mistakes and fixes:
// MISTAKE 1: Assuming different headers create different cache entries.
// NetworkImage equality uses only url + scale; headers are IGNORED:
Image.network(url, headers: {'Auth': 'token_v1'}) // Cached under url + scale
Image.network(url, headers: {'Auth': 'token_v2'}) // SAME key! Reuses the first decoded image
// MISTAKE 2: Not using ResizeImage for memory control
// 4000x3000 image decoded fully = 48MB GPU memory
// Fix:
Image(
image: ResizeImage(
NetworkImage(url),
width: 300,
height: 225,
),
)
// Or use the shorthand:
Image.network(url, cacheWidth: 300)
// MISTAKE 3: Precaching images at full resolution
// precacheImage decodes the full image, filling the cache with oversized entries:
await precacheImage(NetworkImage(largeUrl), context);
// Better: precache at the display size:
await precacheImage(
ResizeImage(NetworkImage(largeUrl), width: 300),
context,
);
Q18: What's the actual cost of using keys in Flutter, and when do keys cause performance PROBLEMS?
What the interviewer is REALLY testing:
Whether you understand keys beyond the basic "use them in lists" advice, including the Global key cost and reconciliation implications.
Answer:
ValueKey / ObjectKey / UniqueKey -- minimal cost:
Keys participate in the reconciliation algorithm. When Flutter reconciles a list of children, it uses keys to match old elements with new widgets:
// WITHOUT keys — positional matching (O(n)):
// Old: [A, B, C] New: [B, C, A]
// Flutter sees: position 0 changed (A->B), position 1 changed (B->C), position 2 changed (C->A)
// Result: Updates all three elements. State is WRONG (B has A's state).
// WITH keys — key-based matching (O(n) with hash map):
// Old: [A(key:1), B(key:2), C(key:3)] New: [B(key:2), C(key:3), A(key:1)]
// Flutter sees: same keys, different order -> MOVES elements (preserves state)
// Result: Moves three elements. State is CORRECT.
Cost of ValueKey: one == comparison per child during reconciliation. For ValueKey<int>, this is trivially cheap.
When keys cause performance PROBLEMS:
1. GlobalKey -- expensive and dangerous:
// GlobalKey maintains a reference in a global registry:
final key = GlobalKey<_MyWidgetState>();
// Cost:
// - Registration/deregistration on every mount/unmount
// - Global lookup map maintained across the entire app
// - Prevents widget subtree from being garbage collected if key is held
// - Can cause "Duplicate GlobalKey detected" errors if misused
// ONLY use GlobalKey when you need:
// - Access to State: key.currentState?.doSomething()
// - Access to RenderObject: key.currentContext?.findRenderObject()
// - Moving a widget between different parents (reparenting)
2. Unnecessary keys prevent Element reuse:
// BAD: UniqueKey on every item -> forces new Element EVERY build:
ListView.builder(
itemBuilder: (context, index) {
return ListTile(
key: UniqueKey(), // New key every build -> old Element destroyed, new one created
title: Text(items[index].name),
);
},
)
// This is WORSE than no key — it prevents all Element recycling.
// GOOD: Stable keys based on data identity:
ListView.builder(
itemBuilder: (context, index) {
return ListTile(
key: ValueKey(items[index].id), // Stable — same id = same Element
title: Text(items[index].name),
);
},
)
3. Keys on widgets that don't need them:
// UNNECESSARY: Keys on static, non-reorderable lists
Column(
children: [
Text('Hello', key: ValueKey('hello')), // Pointless — Column children
Text('World', key: ValueKey('world')), // are matched by position anyway
],
)
// These keys add comparison overhead with zero benefit.
When you NEED keys:
- Lists where items can be reordered, inserted, or removed.
- AnimatedList / AnimatedSwitcher -- to identify which widget is entering/leaving.
- Forms with dynamic fields -- to preserve TextEditingController state.
- Forcing a rebuild from scratch: MyWidget(key: ValueKey(refreshCounter)) -- changing the key forces the Element (and its State) to be recreated.
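That last bullet -- forcing recreation by changing the key -- can be sketched as a minimal reset pattern. This is a hedged illustration: refreshCounter and ProfileForm are hypothetical names, not from any library:

```dart
class ResetDemo extends StatefulWidget {
  const ResetDemo({super.key});

  @override
  State<ResetDemo> createState() => _ResetDemoState();
}

class _ResetDemoState extends State<ResetDemo> {
  int refreshCounter = 0; // Bumping this gives ProfileForm a new key.

  @override
  Widget build(BuildContext context) {
    return Column(
      children: [
        // A different ValueKey means a different Element: the old State
        // (including any TextEditingControllers inside ProfileForm) is
        // disposed, and a fresh one is created via initState.
        ProfileForm(key: ValueKey(refreshCounter)),
        ElevatedButton(
          onPressed: () => setState(() => refreshCounter++),
          child: const Text('Reset form'),
        ),
      ],
    );
  }
}
```

The same trick works for restarting an animation or re-running a FutureBuilder: swap the key, and the subtree starts over.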
SECTION 3: NAVIGATION & ROUTING GOTCHAS
Q1: What happens if you call Navigator.push inside build()?
What the interviewer is REALLY testing:
Whether you understand that build() can be called multiple times and that side effects in build() lead to unpredictable behavior.
Answer:
It crashes or causes an infinite loop. Calling Navigator.push inside build() is a critical mistake for several reasons:
// BAD — DO NOT DO THIS:
@override
Widget build(BuildContext context) {
if (someCondition) {
Navigator.push(context, MaterialPageRoute(builder: (_) => NextScreen()));
// PROBLEMS:
// 1. push() modifies the navigator state during build
// 2. This triggers another build (the new route needs to be built)
// 3. Which may trigger this push again -> infinite loop
// 4. Flutter throws: "setState() or markNeedsBuild() called during build"
}
return Container();
}
The specific error:
FlutterError: setState() or markNeedsBuild() called during build.
This Overlay widget cannot be marked as needing to build because the
framework is already in the process of building widgets.
Why it happens: Navigator.push internally calls setState on the Navigator's Overlay widget. This happens synchronously during the current build phase, which is not allowed -- Flutter is already building the widget tree and you cannot trigger another build mid-build.
The correct approaches:
// APPROACH 1: Use addPostFrameCallback to defer navigation:
@override
Widget build(BuildContext context) {
if (shouldNavigate) {
WidgetsBinding.instance.addPostFrameCallback((_) {
if (mounted) {
Navigator.push(context, MaterialPageRoute(builder: (_) => NextScreen()));
}
});
}
return Container();
}
// APPROACH 2: Navigate in response to events (correct place):
ElevatedButton(
onPressed: () {
Navigator.push(context, MaterialPageRoute(builder: (_) => NextScreen()));
},
child: Text('Go'),
)
// APPROACH 3: Use a BlocListener / ref.listen for reactive navigation:
BlocListener<AuthBloc, AuthState>(
listener: (context, state) {
// listener is called OUTSIDE build — safe to navigate
if (state is AuthUnauthenticated) {
Navigator.pushReplacement(context, MaterialPageRoute(
builder: (_) => LoginScreen(),
));
}
},
child: HomeScreen(),
)
// APPROACH 4: Use a redirect in go_router:
GoRouter(
redirect: (context, state) {
final isLoggedIn = authNotifier.isLoggedIn;
if (!isLoggedIn && state.matchedLocation != '/login') {
return '/login'; // go_router handles this safely
}
return null;
},
routes: [...],
)
Q2: Why does Navigator.pop sometimes return null? How to safely handle it?
What the interviewer is REALLY testing:
Whether you understand route result types, the generic type system of Navigator, and how to write safe async navigation code.
Answer:
Navigator.pop returns null in several scenarios:
1. No result was passed to pop:
// Screen B pops without a result:
Navigator.pop(context); // No second argument -> result is null
// Screen A receives null:
final result = await Navigator.push<String>(
context,
MaterialPageRoute(builder: (_) => ScreenB()),
);
print(result); // null
2. User pressed the system back button:
// Android back button or iOS swipe-back gesture:
// Flutter automatically calls Navigator.pop(context) with no result
// -> result is null
3. Route was removed by pushReplacement or pushAndRemoveUntil:
// If a route below in the stack was waiting for a result,
// but a higher route replaces it, the result is lost.
Safe handling patterns:
// PATTERN 1: Null check on result
final result = await Navigator.push<String>(
context,
MaterialPageRoute(builder: (_) => ScreenB()),
);
if (result != null) {
setState(() => _selectedItem = result);
}
// PATTERN 2: Default value
final result = await Navigator.push<bool>(
context,
MaterialPageRoute(builder: (_) => ConfirmDialog()),
) ?? false; // Default to false if null (back button pressed)
if (result) {
deleteItem();
}
// PATTERN 3: Type-safe pop with generics
// Define a clear return type:
class _ScreenBState extends State<ScreenB> {
void _onConfirm() {
Navigator.pop<ConfirmResult>(context, ConfirmResult(
confirmed: true,
selectedOption: _currentOption,
));
}
void _onCancel() {
Navigator.pop<ConfirmResult>(context, ConfirmResult(
confirmed: false,
));
// Or just: Navigator.pop(context); -> null means cancelled
}
}
// PATTERN 4: PopScope to control the back button
PopScope(
canPop: false,
onPopInvokedWithResult: (didPop, result) async {
if (didPop) return;
// Show confirmation dialog
final shouldPop = await showConfirmDialog(context);
if (shouldPop && context.mounted) {
Navigator.pop(context, 'user confirmed exit');
}
},
child: Scaffold(...),
)
Gotcha -- mounted check after await:
void _openPicker() async {
final color = await Navigator.push<Color>(
context,
MaterialPageRoute(builder: (_) => ColorPicker()),
);
// DANGER: Widget might be disposed while we were awaiting!
if (!mounted) return; // Must check before using context or calling setState
setState(() => _selectedColor = color);
}
Q3: What happens to the previous route's state when you push a new route?
What the interviewer is REALLY testing:
Whether you understand that routes remain in the widget tree when obscured and how the route lifecycle works.
Answer:
The previous route's State is PRESERVED. It is NOT disposed. The widget tree for the previous route remains intact in memory, hidden behind the new route.
// Route stack after push:
// [HomeScreen(State alive)] <- hidden but NOT disposed
// [DetailScreen(State alive)] <- visible, on top
// HomeScreen's:
// - State object: still alive, initState NOT called again on pop
// - AnimationControllers: still ticking (unless you pause them, e.g. in RouteAware's didPushNext)
// - StreamSubscriptions: still active (still receiving events!)
// - Timers: still firing
// - TextEditingControllers: still hold their text
Route lifecycle methods:
class _HomeState extends State<HomeScreen> with RouteAware {
@override
void didChangeDependencies() {
super.didChangeDependencies();
routeObserver.subscribe(this, ModalRoute.of(context)!);
}
@override
void dispose() {
routeObserver.unsubscribe(this);
super.dispose();
}
// RouteAware callbacks:
@override
void didPush() {
// This route was pushed (first time visible)
print('HomeScreen pushed');
}
@override
void didPopNext() {
// The route on top was popped — we're visible again!
print('HomeScreen visible again — refresh data?');
_refreshData(); // Common pattern: reload data when returning
}
@override
void didPushNext() {
// A new route was pushed on top — we're now hidden
print('HomeScreen hidden — pause expensive operations?');
_pauseVideoPlayer();
}
@override
void didPop() {
// This route was popped
print('HomeScreen popped — about to be disposed');
}
}
// Set up the RouteObserver:
final routeObserver = RouteObserver<ModalRoute<void>>();
MaterialApp(
navigatorObservers: [routeObserver],
// ...
)
Performance implications:
// If HomeScreen has active animations/streams, they keep running
// even when hidden behind DetailScreen. This wastes CPU.
// Fix: Pause when hidden, resume when visible:
@override
void didPushNext() {
_animationController.stop();
_streamSubscription.pause();
}
@override
void didPopNext() {
_animationController.forward();
_streamSubscription.resume();
}
Exception: When route is REPLACED:
Navigator.pushReplacement(context, newRoute);
// NOW the previous route IS disposed — State.dispose() is called.
// This is the correct way to "replace" login with home.
Q4: WillPopScope/PopScope -- what's the difference and what happens if you return false?
What the interviewer is REALLY testing:
Whether you know the API migration from WillPopScope (deprecated) to PopScope (Flutter 3.16+) and the behavioral nuances.
Answer:
WillPopScope (deprecated in Flutter 3.16):
WillPopScope(
onWillPop: () async {
// Return true -> allow pop
// Return false -> prevent pop
final shouldPop = await showDialog<bool>(
context: context,
builder: (_) => AlertDialog(
title: Text('Discard changes?'),
actions: [
TextButton(onPressed: () => Navigator.pop(context, false), child: Text('No')),
TextButton(onPressed: () => Navigator.pop(context, true), child: Text('Yes')),
],
),
);
return shouldPop ?? false;
},
child: Scaffold(body: EditForm()),
)
PopScope (Flutter 3.16+ replacement):
PopScope(
canPop: false, // Prevents the default pop behavior
onPopInvokedWithResult: (bool didPop, dynamic result) {
if (didPop) {
// Pop already happened (canPop was true or system forced it)
return;
}
// Pop was intercepted. Show confirmation:
_showExitConfirmation(context);
},
child: Scaffold(body: EditForm()),
)
Key differences:
| | WillPopScope | PopScope |
|---|---|---|
| API style | Async callback returns bool | Sync: canPop bool + notification callback |
| Prevents pop | Returning false from onWillPop | Setting canPop: false |
| Predictive back | Broken -- async callback doesn't work with predictive back gesture | Designed for predictive back (Android 14+) |
| Can conditionally block | Yes (async logic in callback) | Yes (change canPop state dynamically) |
Why the migration happened -- Predictive Back:
Android 14 introduced "predictive back" -- the system shows a preview of the previous screen/app as the user starts the back gesture. This requires knowing synchronously, before the gesture completes, whether back will be handled. An async callback (onWillPop) cannot answer that synchronously.
// Dynamic canPop based on form state:
class _EditFormState extends State<EditForm> {
bool _hasUnsavedChanges = false;
@override
Widget build(BuildContext context) {
return PopScope(
canPop: !_hasUnsavedChanges, // Sync — system knows immediately
onPopInvokedWithResult: (didPop, result) {
if (!didPop) {
_showDiscardDialog();
}
},
child: TextField(
onChanged: (value) {
setState(() => _hasUnsavedChanges = value.isNotEmpty);
},
),
);
}
void _showDiscardDialog() async {
final discard = await showDialog<bool>(
context: context,
builder: (_) => AlertDialog(
title: Text('Discard changes?'),
actions: [
TextButton(
onPressed: () => Navigator.pop(context, false),
child: Text('Keep editing'),
),
TextButton(
onPressed: () => Navigator.pop(context, true),
child: Text('Discard'),
),
],
),
);
if (discard == true && mounted) {
Navigator.pop(context);
}
}
}
Nested PopScope:
// If multiple PopScopes are nested, the INNERMOST one with canPop:false wins.
PopScope(
canPop: true, // Outer — allows pop
child: PopScope(
canPop: false, // Inner — blocks pop
onPopInvokedWithResult: (didPop, result) { ... },
child: MyWidget(),
),
)
// Result: Pop is blocked (inner wins).
Q5: What happens if you have nested Navigators and call Navigator.pop?
What the interviewer is REALLY testing:
Whether you understand Navigator scoping and how Navigator.of(context) resolves to the nearest Navigator ancestor.
Answer:
Navigator.of(context) finds the NEAREST Navigator ancestor. In a nested navigator setup, pop affects the inner navigator, not the outer one.
// App structure:
// MaterialApp (has Navigator — "root navigator")
// +-- HomeScreen
// +-- Navigator (nested — "tab navigator")
// |-- TabA
// +-- TabB
// +-- DetailScreen
// Inside DetailScreen:
Navigator.pop(context);
// This pops from the INNER (tab) navigator, NOT the root navigator.
// DetailScreen is removed, TabB is shown.
// The root navigator is unaffected.
Common problem -- trying to push a full-screen route from inside a nested navigator:
// Inside a tab that has a nested navigator:
Navigator.push(context, MaterialPageRoute(
builder: (_) => FullScreenPage(),
));
// This pushes INSIDE the tab navigator — the bottom tab bar is still visible!
// Fix: Use the ROOT navigator:
Navigator.of(context, rootNavigator: true).push(
MaterialPageRoute(builder: (_) => FullScreenPage()),
);
// Now it pushes on the root navigator — truly full screen, tabs are hidden.
Practical nested navigator setup:
class HomeScreen extends StatelessWidget {
@override
Widget build(BuildContext context) {
return Scaffold(
body: Navigator(
key: navigatorKey,
onGenerateRoute: (settings) {
return MaterialPageRoute(
builder: (_) {
switch (settings.name) {
case '/': return TabHomeContent();
case '/detail': return DetailScreen();
default: return NotFoundScreen();
}
},
);
},
),
bottomNavigationBar: BottomNavigationBar(...),
);
}
}
// Inside TabHomeContent:
void _onItemTap(Item item) {
// Pushes within the nested navigator — tabs stay visible:
Navigator.pushNamed(context, '/detail', arguments: item);
}
// Inside DetailScreen:
void _onClose() {
Navigator.pop(context); // Pops from nested navigator — back to TabHomeContent
}
void _onOpenFullScreen() {
// Push on root navigator — tabs hidden:
Navigator.of(context, rootNavigator: true).push(...);
}
Back button behavior with nested navigators:
// Android back button sends pop to the ROOT navigator by default.
// If the nested navigator has routes to pop, you must intercept:
PopScope(
canPop: false,
onPopInvokedWithResult: (didPop, result) {
if (didPop) return;
// Check if nested navigator can pop:
if (nestedNavigatorKey.currentState?.canPop() ?? false) {
nestedNavigatorKey.currentState!.pop();
} else {
// No more routes in nested navigator — let system handle it
SystemNavigator.pop(); // Or exit the app
}
},
child: ...,
)
Q6: Why does go_router's context.go() vs context.push() behave differently with back button?
What the interviewer is REALLY testing:
Whether you understand declarative (URL-based) routing vs imperative (stack-based) routing and the implications for navigation history.
Answer:
| | context.go('/path') | context.push('/path') |
|---|---|---|
| Navigation model | Declarative -- replaces the entire route stack based on the URL | Imperative -- pushes a single route on top of the current stack |
| Back button | Goes to whatever the URL path hierarchy implies (or exits app) | Goes to the previous route in the stack (the one you came from) |
| Browser analogy | Typing a URL in the address bar | Clicking a link |
| URL changes | Yes | Yes |
| Stack behavior | Rebuilds stack to match URL hierarchy | Adds one route to existing stack |
// go_router configuration:
GoRouter(
routes: [
GoRoute(path: '/', builder: (_, __) => HomeScreen()),
GoRoute(
path: '/products',
builder: (_, __) => ProductsScreen(),
routes: [
GoRoute(
path: ':id', // Full path: /products/:id
builder: (_, state) => ProductDetailScreen(id: state.pathParameters['id']!),
),
],
),
GoRoute(path: '/settings', builder: (_, __) => SettingsScreen()),
],
)
Scenario: User is on HomeScreen (/)
// Using context.go('/products/42'):
// Stack becomes: [HomeScreen, ProductsScreen, ProductDetailScreen(42)]
// go_router builds the ENTIRE path hierarchy: / -> /products -> /products/42
// Back button: ProductDetailScreen -> ProductsScreen -> HomeScreen
// Because each ancestor in the URL path is a route in the stack.
// Using context.push('/products/42'):
// Stack becomes: [HomeScreen, ProductDetailScreen(42)]
// ProductsScreen is NOT in the stack — it was skipped.
// Back button: ProductDetailScreen -> HomeScreen
// Because push just adds one route on top.
Another scenario: User is on SettingsScreen (/settings)
// context.go('/products/42'):
// Stack becomes: [HomeScreen, ProductsScreen, ProductDetailScreen(42)]
// SettingsScreen is GONE — not in the stack at all.
// go() completely replaces the stack based on the new URL.
// context.push('/products/42'):
// Stack becomes: [HomeScreen, SettingsScreen, ProductDetailScreen(42)]
// SettingsScreen stays — push just adds on top.
// Back button: ProductDetailScreen -> SettingsScreen -> HomeScreen
When to use each:
// USE go() when:
// - Top-level navigation (tab switching, main sections)
// - Deep links (user opens a URL)
// - After login (replace entire stack)
// - When the back button should follow the URL hierarchy
context.go('/products'); // "Navigate to products section"
// USE push() when:
// - Pushing a detail screen from a list
// - Opening a modal/dialog route
// - When user should be able to "go back" to where they came from
context.push('/products/${product.id}'); // "Show this product, let me go back"
The ShellRoute complication:
// With ShellRoute (for persistent bottom nav):
ShellRoute(
builder: (_, __, child) => ScaffoldWithNav(child: child),
routes: [
GoRoute(path: '/home', builder: (_, __) => HomeTab()),
GoRoute(path: '/search', builder: (_, __) => SearchTab()),
],
)
// context.go('/search') — switches tab, back button exits app (no stack)
// context.push('/search') — pushes OVER the shell, bottom nav disappears
Q7: What happens if you deep link to a route that requires authentication?
What the interviewer is REALLY testing:
Whether you can design a route guard system that handles the redirect-after-login pattern correctly.
Answer:
Without proper handling, the user sees a crash, a blank screen, or a brief flash of the protected content before the redirect.
The correct approach uses a redirect mechanism:
// go_router approach — the gold standard:
final router = GoRouter(
refreshListenable: authNotifier, // Re-evaluates redirect when auth changes
redirect: (context, state) {
final isLoggedIn = authNotifier.isLoggedIn;
final isLoggingIn = state.matchedLocation == '/login';
// Not logged in and not on login page -> redirect to login
if (!isLoggedIn && !isLoggingIn) {
// Save the intended destination for after login:
return '/login?redirect=${state.matchedLocation}';
}
// Logged in but on login page -> redirect to intended destination or home
if (isLoggedIn && isLoggingIn) {
final redirect = state.uri.queryParameters['redirect'];
return redirect ?? '/';
}
// No redirect needed
return null;
},
routes: [
GoRoute(path: '/login', builder: (_, __) => LoginScreen()),
GoRoute(path: '/', builder: (_, __) => HomeScreen()),
GoRoute(
path: '/profile',
builder: (_, __) => ProfileScreen(), // Protected route
),
GoRoute(
path: '/orders/:id',
builder: (_, state) => OrderDetailScreen(
id: state.pathParameters['id']!,
),
),
],
);
// Login screen — redirect after successful login:
class _LoginScreenState extends State<LoginScreen> {
void _onLoginSuccess() {
final redirect = GoRouterState.of(context).uri.queryParameters['redirect'];
if (redirect != null) {
context.go(redirect); // Go to the original deep link destination
} else {
context.go('/'); // Default to home
}
}
}
The deep link flow:
1. User taps: myapp://example.com/orders/42
2. OS opens app with deep link path: /orders/42
3. go_router's redirect fires BEFORE building any route
4. redirect checks: isLoggedIn? -> NO
5. redirect returns: /login?redirect=/orders/42
6. LoginScreen is shown
7. User logs in -> authNotifier.isLoggedIn = true
8. refreshListenable fires -> redirect re-evaluates
9. User is now logged in AND on login page -> redirect to /orders/42
10. OrderDetailScreen(id: '42') is shown
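The authNotifier that drives this flow can be a plain ChangeNotifier. A minimal sketch, assuming nothing beyond what the flow above describes (class and method names are illustrative):

```dart
import 'package:flutter/foundation.dart';

class AuthNotifier extends ChangeNotifier {
  bool _isLoggedIn = false;
  bool get isLoggedIn => _isLoggedIn;

  void logIn() {
    _isLoggedIn = true;
    // refreshListenable hears this notification -> go_router re-runs
    // redirect (step 8 in the flow above) without any manual navigation.
    notifyListeners();
  }

  void logOut() {
    _isLoggedIn = false;
    notifyListeners();
  }
}

// Wired into the router:
final authNotifier = AuthNotifier();
final router = GoRouter(
  refreshListenable: authNotifier, // re-evaluates redirect on notifyListeners()
  redirect: (context, state) {
    if (!authNotifier.isLoggedIn && state.matchedLocation != '/login') {
      return '/login?redirect=${state.matchedLocation}';
    }
    return null;
  },
  routes: [/* ... */],
);
```

Any Listenable works here -- a ChangeNotifier, a ValueNotifier, or a stream bridged through a small adapter.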
Common mistakes:
// MISTAKE 1: Checking auth in build() — too late, shows protected content briefly
@override
Widget build(BuildContext context) {
if (!authService.isLoggedIn) {
// BAD: The widget already built! User may see a flash of content.
WidgetsBinding.instance.addPostFrameCallback((_) {
Navigator.pushReplacement(context, LoginRoute());
});
return SizedBox.shrink();
}
return ProtectedContent();
}
// MISTAKE 2: Not saving the redirect destination
// User deep links to /orders/42 -> redirected to /login -> logs in -> goes to / (home)
// The user never reaches /orders/42 — frustrating UX.
// MISTAKE 3: Redirect loop
redirect: (context, state) {
if (!isLoggedIn) return '/login'; // This fires on /login too!
// /login is not logged in -> redirect to /login -> still not logged in -> infinite loop
// Fix: Check if already on /login (as shown above)
}
Q8: How does Navigator 2.0 (Router) differ from Navigator 1.0 conceptually -- and why was it needed?
What the interviewer is REALLY testing:
Whether you understand the imperative vs declarative navigation paradigm shift and the problems Navigator 1.0 couldn't solve.
Answer:
Navigator 1.0 (imperative):
// You tell Flutter WHAT TO DO:
Navigator.push(context, MaterialPageRoute(builder: (_) => DetailScreen()));
Navigator.pop(context);
Navigator.pushReplacementNamed(context, '/home');
// The app COMMANDS individual navigation actions.
Navigator 2.0 / Router (declarative):
// You tell Flutter WHAT THE STATE IS:
// Flutter figures out what navigation changes are needed.
class MyRouterDelegate extends RouterDelegate<MyRoutePath> {
@override
Widget build(BuildContext context) {
return Navigator(
pages: [
MaterialPage(child: HomeScreen()),
if (_selectedProduct != null)
MaterialPage(child: ProductScreen(product: _selectedProduct!)),
if (_showCart)
MaterialPage(child: CartScreen()),
],
onPopPage: (route, result) {
// Update state when user presses back
if (_showCart) { _showCart = false; }
else { _selectedProduct = null; }
notifyListeners();
return route.didPop(result);
},
);
}
}
Why Navigator 1.0 was insufficient:
| Problem | Navigator 1.0 | Navigator 2.0 |
|---|---|---|
| Web URL sync | Cannot sync URL bar with route stack | RouteInformationParser keeps URL in sync |
| Deep linking | Manual, fragile -- must replay push/pop commands | Declarative -- just set the state, routes build automatically |
| Back button (web) | Breaks browser back/forward | Integrates with browser history |
| Complex flows | Auth redirects, conditional routes are hacky | redirect callback handles cleanly |
| Nested navigation | Possible but error-prone | First-class support with ShellRoute (go_router) |
| State-driven routing | App state and route state are separate -> out of sync | Routes ARE derived from app state |
Practical example -- the problem Navigator 1.0 cannot solve cleanly:
// Scenario: User opens app via deep link: /products/42/reviews
// Required stack: [Home, Products, ProductDetail(42), Reviews(42)]
// Navigator 1.0 — must manually build the stack:
void handleDeepLink(String path) {
Navigator.pushAndRemoveUntil(context, HomeRoute(), (r) => false);
Navigator.push(context, ProductsRoute());
Navigator.push(context, ProductDetailRoute(id: 42));
Navigator.push(context, ReviewsRoute(productId: 42));
// Fragile, error-prone, and the intermediate screens flash briefly.
}
// Navigator 2.0 / go_router — declare the route hierarchy:
GoRoute(
path: '/products',
builder: (_, __) => ProductsScreen(),
routes: [
GoRoute(
path: ':id',
builder: (_, state) => ProductDetailScreen(id: state.pathParameters['id']!),
routes: [
GoRoute(
path: 'reviews',
builder: (_, state) => ReviewsScreen(productId: state.pathParameters['id']!),
),
],
),
],
)
// Deep link to /products/42/reviews:
// go_router automatically builds: [Home, Products, ProductDetail(42), Reviews(42)]
// No manual stack manipulation. Back button works correctly through the hierarchy.
In practice, most developers use go_router (which implements Navigator 2.0) rather than writing raw RouterDelegate/RouteInformationParser. go_router provides the declarative benefits without the boilerplate.
Q9: What happens if you push the same route twice rapidly (double-tap)?
What the interviewer is REALLY testing:
Whether you are aware of this common UX bug and know how to prevent it.
Answer:
Two copies of the same screen are pushed onto the stack. The user sees the same screen, and pressing back reveals the duplicate underneath.
// User double-taps rapidly:
// Tap 1 -> Navigator.push(DetailScreen(id: 42))
// Tap 2 -> Navigator.push(DetailScreen(id: 42)) <- before first animation completes
// Stack: [Home, DetailScreen(42), DetailScreen(42)]
// Back button: DetailScreen(42) -> DetailScreen(42) -> Home
// User is confused: "Why did I have to press back twice?"
Prevention strategies:
// STRATEGY 1: Debounce at the tap level
bool _isNavigating = false;
void _onItemTap(Item item) {
if (_isNavigating) return;
_isNavigating = true;
Navigator.push(
context,
MaterialPageRoute(builder: (_) => DetailScreen(item: item)),
).then((_) {
_isNavigating = false; // Reset when we come back
});
}
// STRATEGY 2: Use a helper extension
extension NavigatorThrottle on NavigatorState {
static DateTime? _lastNavTime;
Future<T?> pushThrottled<T>(Route<T> route, {Duration cooldown = const Duration(milliseconds: 500)}) {
final now = DateTime.now();
if (_lastNavTime != null && now.difference(_lastNavTime!) < cooldown) {
return Future.value(null);
}
_lastNavTime = now;
return push(route);
}
}
// STRATEGY 3: Use AbsorbPointer during navigation animation
// Wrap the tappable area:
AbsorbPointer(
absorbing: _isNavigating,
child: ListTile(
onTap: () => _navigate(),
title: Text('Item'),
),
)
// STRATEGY 4: go_router handles this partially
// context.push('/detail/42') during an active transition
// go_router queues navigation, but duplicates can still occur.
// STRATEGY 5: Check if route is already on top (Navigator 1.0)
void _onItemTap(Item item) {
final currentRoute = ModalRoute.of(context)?.settings.name;
if (currentRoute == '/detail/${item.id}') return; // Already here
Navigator.pushNamed(context, '/detail/${item.id}');
}
Most robust approach -- a NavigatorObserver:
class ThrottleNavigatorObserver extends NavigatorObserver {
bool _isNavigating = false;
bool get isNavigating => _isNavigating;
@override
  void didPush(Route route, Route? previousRoute) {
    _isNavigating = true;
    // Reset after the transition animation completes.
    // Note: only TransitionRoute exposes an animation — the base Route
    // class does not, so we need a type check here:
    if (route is TransitionRoute) {
      route.animation?.addStatusListener((status) {
        if (status == AnimationStatus.completed) {
          _isNavigating = false;
        }
      });
    } else {
      _isNavigating = false; // No transition — unlock immediately
    }
  }
}
// Usage:
final throttleObserver = ThrottleNavigatorObserver();
MaterialApp(
navigatorObservers: [throttleObserver],
)
// Before navigating:
void _onTap() {
if (throttleObserver.isNavigating) return;
Navigator.push(context, ...);
}
Q10: What happens to Bloc/Provider state when you navigate away and come back?
What the interviewer is REALLY testing:
Whether you understand how state management lifetime interacts with navigation, and the difference between scoped vs global state.
Answer:
It depends entirely on WHERE the state is provided in the widget tree.
Scenario 1: State provided ABOVE the Navigator (global) -- state survives navigation:
// main.dart
MaterialApp(
home: BlocProvider(
create: (_) => CartBloc(), // Created above Navigator
child: Navigator(...),
),
)
// CartBloc survives ALL navigation.
// Push, pop, replace — doesn't matter. CartBloc is above the Navigator
// in the widget tree, so it's never disposed.
Scenario 2: State provided INSIDE a route (scoped) -- state dies when route is popped:
// ProductScreen:
class ProductScreen extends StatelessWidget {
@override
Widget build(BuildContext context) {
return BlocProvider(
create: (_) => ProductBloc()..add(LoadProduct(id)), // Created WITH this route
child: ProductView(),
);
}
}
// Push ProductScreen -> ProductBloc is created
// Pop ProductScreen -> ProductBloc.close() is called -> state is GONE
// Push ProductScreen again -> NEW ProductBloc is created -> starts from scratch
Scenario 3: State provided inside a route but route is only hidden (push on top):
// Route A: has BlocProvider<CounterBloc>
// Route B: pushed on top of Route A
// Route A is NOT disposed — it's just hidden in the stack.
// CounterBloc is still alive, still holds its state.
// Pop Route B -> Route A is visible again with its previous state intact.
Scenario 4: Riverpod -- state lifetime depends on the provider type:
// autoDispose: disposed when no widget is reading it
final productProvider = StateNotifierProvider.autoDispose<ProductNotifier, ProductState>(
(ref) => ProductNotifier(),
);
// Navigate away from the screen that watches this -> disposed
// Navigate back -> recreated from scratch
// Without autoDispose: lives forever (until ProviderScope is disposed)
final cartProvider = StateNotifierProvider<CartNotifier, CartState>(
(ref) => CartNotifier(),
);
// Navigate away -> still alive. Navigate back -> same state.
Common patterns:
// PATTERN 1: Keep state alive across navigation — provide above Navigator
MultiBlocProvider(
providers: [
BlocProvider(create: (_) => AuthBloc()),
BlocProvider(create: (_) => CartBloc()),
BlocProvider(create: (_) => ThemeBloc()),
],
child: MaterialApp.router(routerConfig: router),
)
// PATTERN 2: Page-scoped state — provide inside the page
class OrderDetailPage extends StatelessWidget {
final String orderId;
@override
Widget build(BuildContext context) {
return BlocProvider(
create: (_) => OrderDetailBloc(orderId: orderId)..add(LoadOrder()),
child: OrderDetailView(),
);
}
}
// PATTERN 3: Persist state across navigation without global scope
// Use a repository/cache layer:
class ProductRepository {
final _cache = <String, Product>{};
Future<Product> getProduct(String id) async {
if (_cache.containsKey(id)) return _cache[id]!;
final product = await api.fetchProduct(id);
_cache[id] = product;
return product;
}
}
// The BLoC/Notifier can be recreated, but data comes from cache -> fast.
The critical mental model:
Widget Tree:
MaterialApp
|-- BlocProvider<AuthBloc> <- lives as long as MaterialApp
| +-- Navigator
| |-- HomeScreen
| | +-- BlocProvider<HomeBloc> <- lives as long as HomeScreen is in stack
| +-- ProfileScreen (pushed on top)
| +-- BlocProvider<ProfileBloc> <- lives while ProfileScreen is in stack
|
// HomeBloc is ALIVE (HomeScreen is hidden but still in stack)
// ProfileBloc is ALIVE (ProfileScreen is visible)
// AuthBloc is ALIVE (above Navigator — always alive)
//
// Pop ProfileScreen -> ProfileBloc is DISPOSED
// -> HomeBloc is still alive
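A practical consequence of this mental model: a pushed route can read any Bloc provided above the Navigator, because the pushed route's Element is still a descendant of that provider -- but not Blocs scoped inside a sibling route. A hedged sketch (CheckoutScreen is a hypothetical widget, not from the original):

```dart
class CheckoutScreen extends StatelessWidget {
  const CheckoutScreen({super.key});

  @override
  Widget build(BuildContext context) {
    // AuthBloc was provided ABOVE the Navigator, so every pushed route
    // can see it — no need to pass it through constructors:
    final authState = context.watch<AuthBloc>().state;

    // HomeBloc, by contrast, was provided INSIDE HomeScreen's subtree.
    // context.read<HomeBloc>() here would throw ProviderNotFoundException:
    // CheckoutScreen is a sibling route, not a descendant of HomeScreen.
    return Text('Checking out as ${authState.userName}');
  }
}
```

This is why cross-screen state (auth, cart, theme) belongs above the Navigator, while screen-local state belongs inside the route that owns it.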
Q11: What happens when you use Navigator.pushNamedAndRemoveUntil -- and why is the predicate dangerous?
What the interviewer is REALLY testing:
Whether you understand route predicates and the risk of removing too many or too few routes.
Answer:
pushNamedAndRemoveUntil pushes a new route and then removes all routes below it until the predicate returns true.
// Push HomeScreen and remove everything below it:
Navigator.pushNamedAndRemoveUntil(
context,
'/home',
(Route<dynamic> route) => false, // predicate: remove ALL routes
);
// Stack: [HomeScreen] <- clean stack
// Push HomeScreen and keep the root route:
Navigator.pushNamedAndRemoveUntil(
context,
'/home',
ModalRoute.withName('/'), // Keep the '/' route and everything below it
);
// Stack: [Root, HomeScreen]
The danger -- predicate never matches:
// If the predicate never returns true, ALL routes are removed:
Navigator.pushNamedAndRemoveUntil(
context,
'/home',
ModalRoute.withName('/nonexistent'), // No route has this name!
);
// Stack: [HomeScreen] <- / route was removed too!
// If you later try to pop HomeScreen, the app exits (no route below it).
Another danger -- removing routes that hold important state:
// Stack: [Login, Setup1, Setup2, Setup3]
// You want to go to Home after setup completes:
Navigator.pushNamedAndRemoveUntil(context, '/home', (route) => false);
// This removes Login, Setup1, Setup2, Setup3 — disposing all their State.
// If any of those had unsaved data or active connections, they're gone.
// BlocProviders scoped to those routes -> Blocs closed.
Named routes vs settings.name:
// GOTCHA: ModalRoute.withName() checks route.settings.name
// If you used anonymous routes (MaterialPageRoute without settings), the name is null:
Navigator.push(context, MaterialPageRoute(
builder: (_) => HomeScreen(),
// No settings -> name is null!
));
// Later:
Navigator.pushNamedAndRemoveUntil(
context,
'/profile',
ModalRoute.withName('/home'), // Won't match! The home route has no name.
);
// Result: ALL routes removed, including home. Unexpected behavior.
// Fix: Always set route settings:
Navigator.push(context, MaterialPageRoute(
builder: (_) => HomeScreen(),
settings: RouteSettings(name: '/home'),
));
Q12: How do you handle navigation when the app is in the background and receives a push notification deep link?
What the interviewer is REALLY testing:
Whether you understand the app lifecycle and how navigation works when the app resumes from background.
Answer:
There are three scenarios based on app state when the notification arrives:
Scenario 1: App is in FOREGROUND (running)
// Push notification received while user is using the app.
// Use FirebaseMessaging.onMessage to handle:
FirebaseMessaging.onMessage.listen((RemoteMessage message) {
  // Show an in-app notification (SnackBar, overlay, local notification).
  // Let the USER decide to navigate -- auto-navigating is disruptive.
  showInAppNotification(
    title: message.notification?.title ?? '',
    onTap: () {
      _navigateToDeepLink(message.data['route']);
    },
  );
});
Scenario 2: App is in BACKGROUND (suspended but alive)
// User taps notification -> app resumes.
// The widget tree exists. Navigator exists. Just navigate.
FirebaseMessaging.onMessageOpenedApp.listen((RemoteMessage message) {
  final route = message.data['route']; // e.g., '/orders/42'
  // The NavigatorState is available via a GlobalKey or go_router:
  router.go(route); // go_router
  // OR
  navigatorKey.currentState?.pushNamed(route); // Navigator 1.0
});
Scenario 3: App is TERMINATED (cold start from notification)
// User taps notification -> app launches from scratch.
// You must handle this BEFORE building the widget tree.
void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  await Firebase.initializeApp();
  // Check whether the app was launched from a notification:
  final initialMessage = await FirebaseMessaging.instance.getInitialMessage();
  runApp(MyApp(initialDeepLink: initialMessage?.data['route']));
}
// In your router configuration:
GoRouter(
  initialLocation: initialDeepLink ?? '/',
  redirect: (context, state) {
    // Also handle the auth check here:
    if (!isLoggedIn && state.matchedLocation != '/login') {
      return '/login?redirect=${state.matchedLocation}';
    }
    return null;
  },
)
The tricky part -- navigating before the widget tree is ready:
// WRONG: Navigating immediately on cold start
void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  final message = await FirebaseMessaging.instance.getInitialMessage();
  runApp(MyApp());
  // BUG: Navigator doesn't exist yet -- the first frame hasn't built!
  navigatorKey.currentState?.pushNamed(message?.data['route'] ?? '/');
}
// CORRECT: Pass the deep link into the app and let the router handle it
runApp(MyApp(initialRoute: message?.data['route']));
// Or save it and process it in the first widget's initState with a post-frame callback.
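A minimal sketch of that post-frame approach, assuming a hypothetical `SplashScreen` that receives the pending route as a constructor parameter:

```dart
// Sketch only: SplashScreen and pendingRoute are illustrative names,
// not part of any framework API.
class SplashScreen extends StatefulWidget {
  const SplashScreen({super.key, this.pendingRoute});
  final String? pendingRoute;

  @override
  State<SplashScreen> createState() => _SplashScreenState();
}

class _SplashScreenState extends State<SplashScreen> {
  @override
  void initState() {
    super.initState();
    // Defer navigation until after the first frame, when the
    // Navigator is guaranteed to exist in the tree.
    WidgetsBinding.instance.addPostFrameCallback((_) {
      final route = widget.pendingRoute;
      if (route != null) {
        Navigator.of(context).pushNamed(route);
      }
    });
  }

  @override
  Widget build(BuildContext context) =>
      const Scaffold(body: Center(child: CircularProgressIndicator()));
}
```

The post-frame callback is the key: it runs after the frame that builds the Navigator, so `Navigator.of(context)` can no longer fail.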
Complete robust solution:
class DeepLinkService {
  static String? _pendingDeepLink;

  static void init() {
    // Cold start:
    FirebaseMessaging.instance.getInitialMessage().then((message) {
      if (message != null) _handleMessage(message);
    });
    // Background -> foreground:
    FirebaseMessaging.onMessageOpenedApp.listen(_handleMessage);
    // uni_links exposes top-level functions, not an object:
    getInitialLink().then((link) {
      if (link != null) _handleLink(link);
    });
    linkStream.listen(_handleLink);
  }

  static void _handleMessage(RemoteMessage message) {
    final route = message.data['route'];
    if (route != null) _navigate(route);
  }

  static void _handleLink(String? link) {
    if (link != null) _navigate(Uri.parse(link).path);
  }

  static void _navigate(String route) {
    // Readiness check: has the router's Navigator built its first frame?
    if (router.routerDelegate.navigatorKey.currentContext != null) {
      router.go(route); // Router is ready -- navigate now
    } else {
      _pendingDeepLink = route; // Router not ready -- save for later
    }
  }

  static void processPendingDeepLink() {
    if (_pendingDeepLink != null) {
      router.go(_pendingDeepLink!);
      _pendingDeepLink = null;
    }
  }
}
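Wiring this together, sketched under the assumption that `router` is a top-level `GoRouter` and the service above is used as-is:

```dart
// Sketch: call init() before runApp so no early message is missed.
void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  await Firebase.initializeApp();
  DeepLinkService.init(); // registers all listeners up front
  runApp(MyApp());
}

// In the first routed screen, flush any link that arrived before
// the router was ready:
@override
void initState() {
  super.initState();
  WidgetsBinding.instance.addPostFrameCallback((_) {
    DeepLinkService.processPendingDeepLink();
  });
}
```

Registering every listener before `runApp` and buffering early links is what makes the service robust: no matter which lifecycle state the notification hits, the route either navigates immediately or waits in `_pendingDeepLink` until the router can handle it.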