# When Your DFS Crashes in Production
Recursion seems elegant until your graph has 50,000 nodes and Python's default stack limit is 1,000. I've seen this exact scenario take down a recommendation engine that worked perfectly in dev (test graphs had ~100 nodes) but crashed with `RecursionError: maximum recursion depth exceeded` the moment real user data hit it.
The problem isn't that recursion is bad — it's that most interview prep uses tiny examples where stack depth never matters. Then you deploy the same DFS code to production and discover that real-world graphs don't fit in 1,000 stack frames.
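You can check that ceiling for yourself. A minimal sketch (the exact number depends on your Python build and anything that has called `sys.setrecursionlimit`):

```python
import sys

# CPython ships with a conservative default recursion limit,
# and every recursive call burns one slot of it.
print(sys.getrecursionlimit())  # usually 1000 on stock CPython
```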
Here's what actually breaks and how to fix it without rewriting everything.
## The Classic Recursive DFS That Fails
Let's start with the textbook solution everyone writes in interviews:
```python
def dfs_recursive(graph, node, visited=None):
    if visited is None:
        visited = set()
    visited.add(node)
    print(f"Visiting: {node}")
    for neighbor in graph.get(node, []):
        if neighbor not in visited:
            dfs_recursive(graph, neighbor, visited)
    return visited

# Test with a small graph
small_graph = {
    'A': ['B', 'C'],
    'B': ['D'],
    'C': ['E'],
    'D': [],
    'E': ['F'],
    'F': []
}

result = dfs_recursive(small_graph, 'A')
print(f"Visited nodes: {result}")
```
This works great. Clean, readable, passes all interview tests.
But what happens when your graph is a long chain?
```python
import sys
```
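A quick stress test makes the failure concrete. This is my sketch, not the article's code: a chain graph `0 -> 1 -> ... -> 4999` is roughly five times deeper than CPython's usual default recursion limit of 1,000, so the recursive walk dies partway down.

```python
# Hypothetical stress test: a chain deeper than the recursion limit.
chain_graph = {i: [i + 1] for i in range(4999)}
chain_graph[4999] = []

def dfs_chain(graph, node, visited=None):
    # Same shape as dfs_recursive above, with the print removed
    # so the traceback isn't buried under 1,000 lines of output.
    if visited is None:
        visited = set()
    visited.add(node)
    for neighbor in graph.get(node, []):
        if neighbor not in visited:
            dfs_chain(graph, neighbor, visited)
    return visited

try:
    dfs_chain(chain_graph, 0)
except RecursionError as exc:
    print(f"Crashed: {exc}")
```

Every node in the chain costs one stack frame, so the depth of the traversal, not the size of the graph, is what kills you.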
---
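For reference, the usual fix is to trade the call stack for an explicit one. A minimal sketch, assuming the same adjacency-dict graphs as above (note the visit order can differ from the recursive version, because neighbors are pushed rather than called):

```python
def dfs_iterative(graph, start):
    """DFS with an explicit stack instead of recursion.

    Depth is bounded only by memory, not by Python's
    recursion limit, so arbitrarily long chains are fine.
    """
    visited = set()
    stack = [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue  # may be pushed twice before being visited
        visited.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                stack.append(neighbor)
    return visited
```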
*Continue reading the full article on [TildAlice](https://tildalice.io/recursion-stack-overflow-dfs-iterative-fix/)*