Predict and update
The Kalman filter works in two main phases:
- Prediction: we project the system state forward in time, based on the model.
- Update (or correction): we adjust this prediction using the latest observation.
Finally, the `step` function combines both into one routine, making it convenient to advance the filter by a single time step.
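Concretely, both phases assume the linear-Gaussian state-space model

$$ x_{k+1} = A\,x_k + w_k, \qquad z_k = H\,x_k + v_k, $$

with $w_k \sim \mathcal{N}(0, Q)$ and $v_k \sim \mathcal{N}(0, R)$. Here $A$, $H$, $Q$, and $R$ correspond to the `state_transition_matrix`, `observation_matrix`, `state_noise_covariance`, and `observation_noise_covariance` fields below.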
Click here to view the full implementation: algorithm.rs. We break it down in the rest of this section.
```rust
use nalgebra::{Cholesky, DMatrix, DVector};
use rand::thread_rng;
use rand_distr::{Distribution, StandardNormal};

pub struct KalmanFilter {
    _state: DVector<f64>,
    _covariance: DMatrix<f64>,
    _state_transition_matrix: DMatrix<f64>,
    _observation_matrix: DMatrix<f64>,
    _state_noise_covariance: DMatrix<f64>,
    _observation_noise_covariance: DMatrix<f64>,
}

#[derive(Debug, thiserror::Error)]
pub enum KalmanError {
    /// Innovation covariance was not symmetric and positive definite
    #[error("innovation covariance is not SPD")]
    InnovationNotSpd,
    /// Dimension mismatch error
    #[error("dimension mismatch: {0}")]
    Dim(String),
}

impl KalmanFilter {
    pub fn new(
        init_state: Option<DVector<f64>>,
        init_covariance: Option<DMatrix<f64>>,
        state_transition_matrix: DMatrix<f64>,
        observation_matrix: DMatrix<f64>,
        state_noise_covariance: DMatrix<f64>,
        observation_noise_covariance: DMatrix<f64>,
    ) -> Result<Self, KalmanError> {
        let n: usize = state_transition_matrix.ncols();
        let state = init_state.unwrap_or_else(|| {
            let mut rng = thread_rng();
            DVector::from_iterator(n, (0..n).map(|_| StandardNormal.sample(&mut rng)))
        });
        let covariance = init_covariance.unwrap_or_else(|| DMatrix::identity(n, n));

        // Check square matrices
        if state_transition_matrix.nrows() != n {
            return Err(KalmanError::Dim("A must be square".to_string()));
        }
        if state_noise_covariance.shape() != (n, n) {
            return Err(KalmanError::Dim("Q must be n × n".to_string()));
        }

        // Check observation dimensions
        let m: usize = observation_matrix.nrows();
        if observation_matrix.ncols() != n {
            return Err(KalmanError::Dim("H must be m × n".to_string()));
        }
        if observation_noise_covariance.shape() != (m, m) {
            return Err(KalmanError::Dim("R must be m × m".to_string()));
        }

        // Check state init
        if state.len() != n {
            return Err(KalmanError::Dim("x0 must have length n".to_string()));
        }
        if covariance.shape() != (n, n) {
            return Err(KalmanError::Dim("P0 must be n × n".to_string()));
        }

        Ok(Self {
            _state: state,
            _covariance: covariance,
            _state_transition_matrix: state_transition_matrix,
            _observation_matrix: observation_matrix,
            _state_noise_covariance: state_noise_covariance,
            _observation_noise_covariance: observation_noise_covariance,
        })
    }
}

impl KalmanFilter {
    pub fn state(&self) -> &DVector<f64> {
        &self._state
    }

    pub fn covariance(&self) -> &DMatrix<f64> {
        &self._covariance
    }
}

impl KalmanFilter {
    fn predict_step(&mut self) {
        self._state = &self._state_transition_matrix * &self._state;
        self._covariance = &self._state_transition_matrix
            * &self._covariance
            * &self._state_transition_matrix.transpose()
            + &self._state_noise_covariance;
    }

    fn update_step(&mut self, observation: DVector<f64>) -> Result<(), KalmanError> {
        let h_matrix = &self._observation_matrix;
        let r_matrix = &self._observation_noise_covariance;
        let innovation = observation - h_matrix * &self._state;

        // Innovation covariance: S = H P^- H^T + R (SPD)
        let hp = h_matrix * &self._covariance;
        let s = &hp * h_matrix.transpose() + r_matrix;

        // K^T = S^{-1} (H P^-)
        let Some(chol) = Cholesky::new(s) else {
            return Err(KalmanError::InnovationNotSpd);
        };
        let k_t = chol.solve(&hp);
        let kalman_gain = k_t.transpose();

        // State update: x = x^- + K y
        self._state = &self._state + &kalman_gain * innovation;

        // Joseph covariance update
        // P = (I - K H) P^- (I - K H)^T + K R K^T
        let n = self._covariance.nrows();
        let i = DMatrix::<f64>::identity(n, n);
        let ikh = &i - &kalman_gain * h_matrix;
        self._covariance = &ikh * &self._covariance * ikh.transpose()
            + &kalman_gain * r_matrix * kalman_gain.transpose();

        Ok(())
    }

    pub fn step(&mut self, observation: Option<DVector<f64>>) -> Result<(), KalmanError> {
        self.predict_step();
        if let Some(obs) = observation {
            self.update_step(obs)?;
        }
        Ok(())
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use nalgebra::DMatrix;

    #[test]
    fn test_kf_constructor() {
        let a_state = DMatrix::<f64>::identity(2, 2);
        let h_obs = DMatrix::<f64>::identity(2, 2);
        let state_noise = DMatrix::<f64>::identity(2, 2) * 1e-3;
        let obs_noise = DMatrix::<f64>::identity(2, 2) * 1e-2;

        // no initial state or covariance provided
        let kf_model = KalmanFilter::new(
            None, // init_state
            None, // init_covariance
            a_state,
            h_obs,
            state_noise,
            obs_noise,
        );

        // check kf_model is not an error
        assert!(kf_model.is_ok());
        let kf_model = kf_model.unwrap();

        // check dimensions
        assert_eq!(kf_model.state().len(), 2);
        assert_eq!(kf_model.covariance().shape(), (2, 2));
    }

    #[test]
    fn test_kf_constructor_misspecified() {
        let a_state = DMatrix::<f64>::identity(2, 2);
        let h_obs = DMatrix::<f64>::identity(2, 3);
        let state_noise = DMatrix::<f64>::identity(2, 2) * 1e-3;
        let obs_noise = DMatrix::<f64>::identity(2, 2) * 1e-2;

        // no initial state or covariance provided
        let kf_model = KalmanFilter::new(
            None, // init_state
            None, // init_covariance
            a_state,
            h_obs,
            state_noise,
            obs_noise,
        );

        // check kf_model is not an error
        assert!(kf_model.is_err());
    }
}
```
Predict step
The `predict_step` function propagates the state and its covariance forward in time according to the model.
The method takes `&mut self`: we borrow the filter mutably because the internal fields (`_state` and `_covariance`) are modified in place. A shared reference `&self` would not allow us to update them.

The code uses `&self._state_transition_matrix * &self._state`. In `nalgebra`, the multiplication operator (`*`) is overloaded for matrices and vectors, and we pass references to avoid unnecessary cloning of large matrices and vectors.
```rust
fn predict_step(&mut self) {
    self._state = &self._state_transition_matrix * &self._state;
    self._covariance = &self._state_transition_matrix
        * &self._covariance
        * &self._state_transition_matrix.transpose()
        + &self._state_noise_covariance;
}
```
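In matrix notation, this is the standard Kalman prediction:

$$ x^- = A\,x, \qquad P^- = A\,P\,A^\top + Q, $$

where $A$ is the state transition matrix and $Q$ the state noise covariance.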
Update/correction step
This method ingests a new observation and corrects the predicted state and covariance. For the sake of clarity, the code is hidden but you can view it below.
Our implementation has two notable features:
- `&mut self`: the method mutates the internal fields (`_state`, `_covariance`) of the filter in place.
- `Result<(), KalmanError>`: numerically, the innovation covariance `S` must be symmetric positive definite (SPD) to be factorized and inverted. If `S` is not SPD, the update is invalid, and we signal that with a specific error rather than panicking.
Regarding `nalgebra` details:

- Borrowing to avoid clones: expressions like `h_matrix * &self._covariance` or `&kalman_gain * r_matrix` pass references so large temporaries aren't cloned.
- Operator overloading: `*` performs matrix-matrix or matrix-vector multiplication, and `transpose()` produces the transposed matrix.
- Dimension expectations: if the state dimension is `n` and the measurement dimension is `m`, then `H: m×n`, `R: m×m`, `P: n×n`, `K: n×m`, `S: m×m`. Mismatches will cause compile-time errors or runtime panics depending on how they occur.
- SPD check via Cholesky: `Cholesky::new(s)` returns `None` when `s` is not SPD. That is why this method can fail and returns a `KalmanError`.
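As a quick, self-contained illustration of that last point (a small sketch separate from the filter code):

```rust
use nalgebra::{Cholesky, DMatrix};

fn main() {
    // SPD matrix: the Cholesky factorization succeeds.
    let spd = DMatrix::from_row_slice(2, 2, &[4.0, 1.0, 1.0, 3.0]);
    assert!(Cholesky::new(spd).is_some());

    // Indefinite matrix (eigenvalues 3 and -1): Cholesky::new
    // returns None instead of panicking.
    let not_spd = DMatrix::from_row_slice(2, 2, &[1.0, 2.0, 2.0, 1.0]);
    assert!(Cholesky::new(not_spd).is_none());
}
```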
Click here to view the update function
```rust
fn update_step(&mut self, observation: DVector<f64>) -> Result<(), KalmanError> {
    let h_matrix = &self._observation_matrix;
    let r_matrix = &self._observation_noise_covariance;

    // Innovation: y = z - H x^-
    let innovation = observation - h_matrix * &self._state;

    // Innovation covariance: S = H P^- H^T + R (SPD)
    let hp = h_matrix * &self._covariance;
    let s = &hp * h_matrix.transpose() + r_matrix;

    // K^T = S^{-1} (H P^-)
    let Some(chol) = Cholesky::new(s) else {
        return Err(KalmanError::InnovationNotSpd);
    };
    let k_t = chol.solve(&hp);
    let kalman_gain = k_t.transpose();

    // State update: x = x^- + K y
    self._state = &self._state + &kalman_gain * innovation;

    // Joseph covariance update
    // P = (I - K H) P^- (I - K H)^T + K R K^T
    let n = self._covariance.nrows();
    let i = DMatrix::<f64>::identity(n, n);
    let ikh = &i - &kalman_gain * h_matrix;
    self._covariance = &ikh * &self._covariance * ikh.transpose()
        + &kalman_gain * r_matrix * kalman_gain.transpose();

    Ok(())
}
```
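In matrix notation, the method computes the standard correction equations, with the Joseph form for the covariance:

$$ y = z - H\,x^-, \qquad S = H\,P^-\,H^\top + R, \qquad K = P^-\,H^\top\,S^{-1}, $$
$$ x = x^- + K\,y, \qquad P = (I - K\,H)\,P^-\,(I - K\,H)^\top + K\,R\,K^\top. $$

The Joseph form is slightly more expensive than the textbook $P = (I - K\,H)\,P^-$, but it keeps $P$ symmetric and positive semidefinite in the presence of rounding errors.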
Step function
```rust
pub fn step(&mut self, observation: Option<DVector<f64>>) -> Result<(), KalmanError> {
    self.predict_step();
    if let Some(obs) = observation {
        self.update_step(obs)?;
    }
    Ok(())
}
```
The `step` method combines prediction and update into a single call, representing one full iteration of the Kalman filter. It works as follows:

- First, it always performs a predict step, advancing the state forward in time.
- Then, if an observation is provided, it runs the update step with that measurement.
- Finally, it returns `Ok(())` if successful, or propagates any error from `update_step`.
Implementation details
- `Option<DVector<f64>>`: observations may not always be available (e.g., missing sensor data). By using `Option`, we can handle both cases: `Some(obs)` → we have a measurement to incorporate; `None` → we skip the update and rely on prediction only.
- Pattern matching with `if let Some(obs)`: this is a concise way to branch only when the observation is present.
- Error propagation (`?`): if `update_step` returns an error (e.g., covariance not SPD), the `?` operator bubbles it up immediately, stopping execution.
Summary
The `step` function is the public API entry point for advancing the filter by one timestep. It hides the internal details of calling `predict_step` and `update_step` separately, making the filter easier to use in simulations or real-time systems:
- Call `step(None)` when no measurement is available.
- Call `step(Some(obs))` when you do have a measurement.