Graph Search: Depth-First Traversal
Lecture Plan
1. DFS Visualizer
Step through the DFS algorithm on a small graph — trace the call stack, see how backtracking works, and understand why a single call to exploreDFS(v) is not enough for a disconnected graph. The third mode shows the outer loop in DepthFirstSearch(G) that handles disconnected components.
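The two-level structure the visualizer steps through can be sketched as follows. The function names `exploreDFS` and `DepthFirstSearch` come from the visualizer; the component count returned by the outer loop is an illustrative addition, not something the visualizer displays:

```cpp
#include <vector>
using namespace std;

// Recursive DFS from a single start vertex: marks everything reachable from v.
void exploreDFS(int v, const vector<vector<int>>& adj, vector<bool>& visited) {
    visited[v] = true;
    for (int u : adj[v])
        if (!visited[u])
            exploreDFS(u, adj, visited);  // returning from this call is the "backtrack"
}

// Outer loop: restart DFS from every still-unvisited vertex so disconnected
// components are covered. Returns how many components were found
// (an illustrative extra, not part of the visualizer).
int DepthFirstSearch(const vector<vector<int>>& adj) {
    vector<bool> visited(adj.size(), false);
    int components = 0;
    for (int v = 0; v < (int)adj.size(); ++v) {
        if (!visited[v]) {
            ++components;
            exploreDFS(v, adj, visited);
        }
    }
    return components;
}
```

A single `exploreDFS(0, …)` call on the graph `{{1},{0},{3},{2}}` would mark only vertices 0 and 1; the outer loop is what reaches the second component {2, 3}.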
2. LeetCode Problem: Keys and Rooms
Apply DFS to a new graph problem. Given the DFS pattern from the visualizer, identify what the graph is, what the visited set tracks, and how to detect whether all rooms are reachable.
Input: vector<vector<int>>& rooms
Discuss with your peer (5 mins):
- What are the nodes and what are the edges in this problem?
- What does the input (rooms) represent in graph terms?
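Once you have answered those questions, the DFS pattern drops in almost unchanged. A sketch (the const-reference parameter is a slight simplification of the LeetCode signature; rooms are nodes, and each key in `rooms[i]` is an edge from room `i` to the room it opens):

```cpp
#include <vector>
using namespace std;

// Mark every room reachable from `room` by following keys as edges.
void explore(int room, const vector<vector<int>>& rooms, vector<bool>& visited) {
    visited[room] = true;
    for (int key : rooms[room])
        if (!visited[key])
            explore(key, rooms, visited);
}

// All rooms are reachable iff a DFS from room 0 visits every room.
bool canVisitAllRooms(const vector<vector<int>>& rooms) {
    vector<bool> visited(rooms.size(), false);
    explore(0, rooms, visited);
    for (bool seen : visited)
        if (!seen) return false;
    return true;
}
```

Note that the `visited` set here answers the problem directly: a room left unvisited after the DFS is exactly a room you can never enter.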
3. DFS Applied to PA03: Backpropagation
The neural network is a directed graph where edges only go forward. To compute the nudge (delta) for each weight, you need values from nodes in the next layer — but there is no backward edge to follow. The recursive DFS naturally solves this: dive to the output first, compute the error there, then pass results back up the call stack. A contributions map (memoization) ensures each node is computed only once.
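A minimal sketch of that idea, under stated assumptions: integer node ids, a weighted outgoing adjacency list, and a single output node are all illustrative stand-ins for the actual PA03 structures, and the combine rule (weighted sum of downstream deltas) is a simplification. What the sketch does show faithfully is the shape of the recursion: dive forward to the output first, then combine results on the way back up the call stack, with the contributions map guaranteeing each node is computed only once:

```cpp
#include <unordered_map>
#include <vector>
using namespace std;

// Hypothetical edge in the forward-only network graph.
struct Edge { int to; double weight; };

// Error signal (delta) for `node`, computed by recursive DFS toward the output.
double contribution(int node,
                    const vector<vector<Edge>>& outgoing,
                    double outputError, int outputNode,
                    unordered_map<int, double>& contributions) {
    // Memoization: each node's delta is computed exactly once.
    auto it = contributions.find(node);
    if (it != contributions.end()) return it->second;

    double delta;
    if (node == outputNode) {
        // Base case: the error at the output seeds the backward pass.
        delta = outputError;
    } else {
        // Recursive case: follow the forward edges down to the output,
        // then combine the downstream deltas as the calls return.
        delta = 0.0;
        for (const Edge& e : outgoing[node])
            delta += e.weight * contribution(e.to, outgoing, outputError,
                                             outputNode, contributions);
    }
    contributions[node] = delta;
    return delta;
}
```

Even though every edge points forward, the recursion effectively delivers information backward: a node's delta is assembled from its successors' deltas as the stack unwinds, which is exactly why no backward edges are needed.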
Interactive Backpropagation Visualizer (PA03)
What this visualizer does not show — questions for the curious student:
- How is the error signal at the output node actually computed? What formula seeds the backward pass, and where does it come from?
- Why do these weight updates actually make predictions better? What guarantees that subtracting a delta moves the network in the right direction?
- Why does the learning rate matter? What goes wrong if it is too large or too small?
- Why average the deltas across a batch of training examples rather than updating after each one?
These videos answer those questions, visually and without heavy math:
- 3Blue1Brown — Gradient descent, how neural networks learn (20 min)
- 3Blue1Brown — Backpropagation, intuitively (20 min)
To see a real neural network train live in your browser — adjusting weights, watching the decision boundary shift, experimenting with architecture and learning rate: