Calculus

Derivatives, integrals, and optimization for machine learning

Calculus in Machine Learning

Calculus is essential for understanding how machine learning algorithms learn and optimize their performance. It provides the mathematical foundation for gradient descent, backpropagation, and other optimization techniques.

Core Concepts

Derivatives

  • Rate of change
  • Gradient vectors
  • Chain rule
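
A quick, hedged sketch of these ideas (the functions and step size here are illustrative, not taken from the rest of this page): a central-difference approximation recovers the rate of change and agrees with the analytic derivative, and stacking partial derivatives gives a gradient vector.

import numpy as np

# f(x) = x**3 has analytic derivative 3*x**2
f = lambda x: x**3
df_exact = lambda x: 3 * x**2

# Central-difference approximation of the rate of change at x
def numerical_derivative(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

print(numerical_derivative(f, 2.0))   # ~12.0
print(df_exact(2.0))                  # 12.0

# Gradient vector of g(x, y) = x**2 + 3*y: stack the partial derivatives [2*x, 3]
def gradient_g(x, y):
    return np.array([2 * x, 3.0])

print(gradient_g(1.0, 4.0))           # [2. 3.]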

Integrals

  • Area under curve
  • Definite integrals
  • Multiple integrals
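
As a minimal sketch of the "area under curve" idea (the integrand and grid resolution are arbitrary choices), the trapezoidal rule approximates a definite integral by summing small trapezoids, and the result can be checked against the exact antiderivative.

import numpy as np

# Integrate f(x) = x**2 from 0 to 3; the exact value is 3**3 / 3 = 9
f = lambda x: x**2

x = np.linspace(0, 3, 1001)                    # fine grid over [0, 3]
y = f(x)
# Trapezoidal rule: sum the areas of the trapezoids between consecutive grid points
area = np.sum((y[:-1] + y[1:]) / 2 * np.diff(x))

print(f"Trapezoidal estimate: {area:.4f}")     # ~9.0000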

Optimization Techniques

Gradient Descent

Gradient descent finds a (local) minimum of a function by repeatedly stepping in the direction opposite to its gradient: x_new = x_old - learning_rate * f'(x_old).

Backpropagation

Backpropagation applies the chain rule to propagate the error signal backwards through a neural network, yielding the gradient of the loss with respect to every weight.
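
A minimal, hand-rolled sketch of that chain rule (a single sigmoid neuron with squared-error loss; not a framework API): each local derivative along the path from the loss back to a parameter is multiplied together to form the gradient.

import numpy as np

# One neuron: y_hat = sigmoid(w*x + b), squared-error loss L = (y_hat - y)**2
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, y = 2.0, 1.0          # one training example
w, b = 0.5, 0.1          # current parameters

# Forward pass
z = w * x + b
y_hat = sigmoid(z)
loss = (y_hat - y) ** 2

# Backward pass: dL/dw = dL/dy_hat * dy_hat/dz * dz/dw (chain rule)
dL_dyhat = 2 * (y_hat - y)
dyhat_dz = y_hat * (1 - y_hat)     # derivative of the sigmoid
dL_dw = dL_dyhat * dyhat_dz * x
dL_db = dL_dyhat * dyhat_dz * 1.0

print(f"loss = {loss:.4f}, dL/dw = {dL_dw:.4f}, dL/db = {dL_db:.4f}")

Deep learning frameworks automate exactly this bookkeeping (automatic differentiation), but the underlying operation is the same multiplication of local derivatives.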

Implementation Example

import numpy as np
from scipy import optimize

# Define a function and its derivative
def f(x):
    return x**2 + 2*x + 1

def df(x):
    return 2*x + 2

# Gradient descent implementation
def gradient_descent(f, df, x0, learning_rate=0.1, n_iter=100):
    x = x0
    history = [x]
    
    for _ in range(n_iter):
        gradient = df(x)
        x = x - learning_rate * gradient
        history.append(x)
    
    return x, history

# Find minimum using gradient descent
x_min, history = gradient_descent(f, df, x0=2.0)
print(f"Minimum found at x = {x_min:.4f}")

# Using SciPy's optimizer
result = optimize.minimize(f, x0=2.0, method='BFGS')
print(f"SciPy minimum: x = {result.x[0]:.4f}")

# Numerical integration example
from scipy import integrate

def g(x):
    return np.exp(-x**2)

# Compute the definite integral of the Gaussian; the exact value is sqrt(pi) ≈ 1.7725
result, error = integrate.quad(g, -np.inf, np.inf)
print(f"Integral result: {result:.4f} (estimated error {error:.2e})")

ML Applications

Neural Networks

  • Weight updates
  • Error propagation
  • Activation functions
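
As an illustrative sketch tying these three items together (the layer sizes, data, and learning rate are arbitrary choices): one training step of a tiny one-hidden-layer network, where the activation function's derivative appears in the error propagated back from the output, and the resulting gradients drive the weight updates.

import numpy as np

rng = np.random.default_rng(0)

# Tiny network: 2 inputs -> 3 hidden units (tanh) -> 1 output, mean squared error
X = rng.normal(size=(4, 2))            # 4 examples, 2 features
y = rng.normal(size=(4, 1))            # regression targets
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)
lr = 0.1

# Forward pass
h = np.tanh(X @ W1 + b1)               # hidden activations
y_hat = h @ W2 + b2                    # linear output layer
loss = np.mean((y_hat - y) ** 2)

# Backward pass (error propagation via the chain rule)
d_out = 2 * (y_hat - y) / len(X)       # dL/dy_hat
dW2 = h.T @ d_out
db2 = d_out.sum(axis=0)
d_h = (d_out @ W2.T) * (1 - h ** 2)    # tanh'(z) = 1 - tanh(z)**2
dW1 = X.T @ d_h
db1 = d_h.sum(axis=0)

# Weight updates: one gradient descent step
W1 -= lr * dW1; b1 -= lr * db1
W2 -= lr * dW2; b2 -= lr * db2

print(f"loss before update: {loss:.4f}")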

Loss Optimization

  • Cost functions
  • Learning rate tuning
  • Convergence analysis
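
As a hedged sketch of learning rate tuning and convergence (reusing the quadratic cost f(x) = x**2 + 2x + 1 from the implementation example above; the specific rates are illustrative): a rate that is too small creeps toward the minimum, a moderate rate converges quickly, and a rate that is too large overshoots and diverges.

# Quadratic cost f(x) = x**2 + 2x + 1 is minimized at x = -1
df = lambda x: 2 * x + 2

def run_gd(lr, x0=2.0, n_iter=50):
    x = x0
    for _ in range(n_iter):
        x -= lr * df(x)
    return x

for lr in (0.01, 0.1, 0.9, 1.1):
    print(f"learning rate {lr:>4}: x after 50 steps = {run_gd(lr):.4f}")

# Each step multiplies the distance to the minimum by |1 - 2*lr|,
# so rates above 1.0 diverge for this particular cost.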