Instantiation
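
The examples below assume the Optimizer enum introduced earlier in the book. As a reminder, it looks roughly like this (a sketch reconstructed from the instantiations below, not necessarily the exact original definition):

enum Optimizer {
    GradientDescent {
        learning_rate: f64,
    },
    Momentum {
        learning_rate: f64,
        momentum: f64,
        velocity: Vec<f64>,
    },
}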

To instantiate a GradientDescent optimizer, the user has to write:

let optimizer = Optimizer::GradientDescent {
    learning_rate: 0.1,
};

For the momentum-based variant, the instantiation becomes more cumbersome:

let optimizer = Optimizer::Momentum {
    learning_rate: 0.1,
    momentum: 0.9,
    velocity: vec![0.0; 3],
};

To make this more user-friendly, we can define convenience constructors such as:

let optimizer = Optimizer::gradient_descent(0.1);

and

let optimizer = Optimizer::momentum(0.1, 0.9, 3);

This can be achieved by adding two associated functions in an impl block:

impl Optimizer {
    /// Creates a new Gradient Descent optimizer.
    ///
    /// # Arguments
    /// - `learning_rate`: Step size for the parameter updates.
    pub fn gradient_descent(learning_rate: f64) -> Self {
        Self::GradientDescent { learning_rate }
    }

    /// Creates a new Momentum optimizer.
    ///
    /// # Arguments
    /// - `learning_rate`: Step size for the updates.
    /// - `momentum`: Momentum coefficient.
    /// - `dim`: Number of parameters (used to initialize the velocity vector).
    pub fn momentum(learning_rate: f64, momentum: f64, dim: usize) -> Self {
        Self::Momentum {
            learning_rate,
            momentum,
            velocity: vec![0.0; dim],
        }
    }
}

These constructors are optional, but they make creating optimizers noticeably easier.
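
As a quick sanity check, a constructor call should produce exactly the same value as the explicit variant syntax. Here is a minimal, self-contained sketch demonstrating this for the momentum constructor; it assumes Optimizer derives Debug and PartialEq, which the original definition does not necessarily include:

// Derives added here only so we can compare and print values.
#[derive(Debug, PartialEq)]
enum Optimizer {
    GradientDescent { learning_rate: f64 },
    Momentum { learning_rate: f64, momentum: f64, velocity: Vec<f64> },
}

impl Optimizer {
    fn momentum(learning_rate: f64, momentum: f64, dim: usize) -> Self {
        Self::Momentum { learning_rate, momentum, velocity: vec![0.0; dim] }
    }
}

fn main() {
    // The convenience constructor yields the same value as the explicit form.
    assert_eq!(
        Optimizer::momentum(0.1, 0.9, 3),
        Optimizer::Momentum {
            learning_rate: 0.1,
            momentum: 0.9,
            velocity: vec![0.0; 3],
        },
    );
    println!("constructor matches explicit instantiation");
}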