Gradient Descent Visualizer

Play with learning rates and starting positions to build intuition for how optimization algorithms find the minimum of complex 3D loss functions.

Interactive 3D Descent

Things to Try

1. Play with the Learning Rate (α)

Start a run with a very small learning rate. Notice how slowly the path creeps toward the bottom.

Then, crank the learning rate up high. Watch what happens: the path starts bouncing wildly from side to side and may diverge completely instead of settling into the minimum!
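The same behavior is easy to reproduce in a few lines. Below is a minimal sketch (not the visualizer's own code) of gradient descent on the one-dimensional loss f(x) = x², whose gradient is 2x; the function name `descend` and the specific alpha values are illustrative choices:

```python
# Gradient descent on f(x) = x^2, gradient f'(x) = 2x.
# The learning rate alpha controls the step size.
def descend(alpha, x0=5.0, steps=50):
    x = x0
    for _ in range(steps):
        x -= alpha * 2 * x  # x_new = x - alpha * f'(x)
    return x

tiny = descend(alpha=0.01)  # crawls: still far from the minimum at 0
good = descend(alpha=0.1)   # converges very close to 0
wild = descend(alpha=1.1)   # overshoots: |x| grows every step (diverges)
```

Each update multiplies x by (1 − 2α), so α = 0.01 shrinks the error painfully slowly, α = 0.1 converges quickly, and α = 1.1 flips the sign and grows the error each step, which is exactly the wild bouncing you see in the visualizer.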

2. Escape Saddle Points

Change the loss surface to a more complex function that contains "Saddle Points" (regions where the gradient is nearly zero, so the surface looks flat, but which are not the true bottom). Try starting right over a saddle point.

Observe how standard gradient descent can stall there, which is why optimizers like SGD with momentum and Adam accumulate velocity across steps to carry the update through flat regions.
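A small sketch makes the stall concrete. The classic saddle surface f(x, y) = x² − y² has a saddle at the origin: it curves up along x and down along y. This toy code (assumed function name `descend`, illustrative step counts) shows that starting exactly on the saddle's axis traps plain gradient descent at the saddle itself:

```python
# Gradient descent on the saddle f(x, y) = x^2 - y^2,
# with gradient (2x, -2y). The origin is a saddle point, not a minimum.
def descend(x, y, alpha=0.1, steps=100):
    for _ in range(steps):
        x -= alpha * 2 * x        # descend along the upward-curving axis
        y -= alpha * (-2 * y)     # y = 0 stays exactly 0 forever
    return x, y

stuck = descend(1.0, 0.0)    # every step keeps y = 0; converges to the saddle
escaped = descend(1.0, 1e-8) # a tiny nudge in y grows and eventually escapes
```

With y exactly zero the gradient has no component pointing off the saddle, so the optimizer converges to (0, 0); even a tiny perturbation escapes only after many steps, which is the slow flat-spot behavior that momentum-style optimizers are designed to push through.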

Why is this important for Machine Learning?

When you train a Neural Network, the computer is looking for the optimal combination of thousands (or billions) of weights. We visualize this hunt for the lowest error as a ball rolling down a complex, multi-dimensional hill. This visualizer gives you an intuition for exactly how that "ball" decides which way is down, how fast it should roll, and what obstacles it faces.
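The "ball rolling downhill" picture translates directly into code: treat the weights as a vector, estimate which way is down, and take a step. This sketch uses a finite-difference gradient so it works for any loss function; it is a toy illustration of the idea, not a real neural-network trainer, and all names (`numerical_grad`, `train`) are made up for this example:

```python
# Gradient descent over a small weight vector, with the gradient
# estimated by finite differences (no autodiff needed for a toy demo).
def numerical_grad(loss, w, eps=1e-6):
    grad = []
    for i in range(len(w)):
        bumped = list(w)
        bumped[i] += eps
        grad.append((loss(bumped) - loss(w)) / eps)  # slope along weight i
    return grad

def train(loss, w, alpha=0.1, steps=200):
    for _ in range(steps):
        g = numerical_grad(loss, w)
        w = [wi - alpha * gi for wi, gi in zip(w, g)]  # roll downhill
    return w

# A toy "loss surface" with its minimum at w = (3, -1):
loss = lambda w: (w[0] - 3) ** 2 + (w[1] + 1) ** 2
weights = train(loss, [0.0, 0.0])  # ends up close to [3.0, -1.0]
```

Real training replaces the finite-difference loop with backpropagation and scales the same idea to millions of weights, but the core update rule, weight minus learning rate times gradient, is identical to what the visualizer animates.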