Kalman Filter - Stochastic Process (random sensor noise, random process noise)

ko3365/Linear-Kalman-Filter-StateEstimation

1. Linear Kalman Filter Derivation

The Kalman Filter is an estimation method that uses a series of measurements observed over time. It implements a minimum mean square error (MMSE) criterion that minimizes the difference between the true state and the state estimate:

Z is the series of measurements. Thus, given all the measurements, we want to find the state estimate that minimizes the mean square error. Setting the derivative to zero, the optimal state estimate can be obtained:
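The equation images from the original page are not reproduced in this copy; a sketch of the MMSE criterion and its minimizer in standard notation:

```math
\hat{x}_{\text{MMSE}} = \arg\min_{\hat{x}}\; \mathbb{E}\!\left[(x-\hat{x})^\top (x-\hat{x}) \,\middle|\, Z\right] = \mathbb{E}[x \mid Z]
```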

Deriving the Optimal Estimate

First, define the prediction error and solve for the state estimation:

Since all densities are assumed to be Gaussian, we can find the mean of the conditional PDF of the estimation error given the measurement to obtain the expected value:
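For jointly Gaussian random vectors, the conditional mean has a standard closed form (a reconstruction of the missing equation image, in generic notation):

```math
\mathbb{E}[x \mid z] = \bar{x} + \Sigma_{xz}\,\Sigma_{z}^{-1}\,(z - \bar{z})
```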

The mean of the conditional PDF and the Kalman gain L are:

Prediction and Correction

Thus, we have the equations required to obtain the optimal estimate (prediction + correction):

Prediction

Correction
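The equation images are missing from this copy; the standard linear KF prediction and correction steps, consistent with the iteration code in the next section, are:

```math
\begin{aligned}
&\text{Prediction:} && \hat{x}^-_k = A\,\hat{x}^+_{k-1} + B\,u_{k-1}, \qquad
\Sigma^-_k = A\,\Sigma^+_{k-1} A^\top + \Sigma_w, \qquad
\hat{z}_k = C\,\hat{x}^-_k + D\,u_k \\
&\text{Correction:} && L_k = \Sigma^-_k C^\top \left(C\,\Sigma^-_k C^\top + \Sigma_v\right)^{-1}, \qquad
\hat{x}^+_k = \hat{x}^-_k + L_k\,(z_k - \hat{z}_k), \qquad
\Sigma^+_k = \Sigma^-_k - L_k C\,\Sigma^-_k
\end{aligned}
```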

2. Applying to an Arbitrary System (SISO and Infinite Horizon):

Now, let's apply the Kalman Filter to an arbitrary system with A=1, B=1, C=2, D=0 and process and measurement noise variances of 5.

Sig_w = 5; %Process Noise
Sig_v = 5; %Sensor Noise
A = 1; B = 1; C = 2; D = 0; % Simple State-Space Model

Also, arbitrarily define the input (known to the filter; a sinusoid plus a small random perturbation):

u = sin(k/(2*pi))+randn/3;

Iterate the prediction and correction steps to obtain the optimal estimate:

for k=1:iteration
    % Kalman Filter Prediction (u_prev is the input from step k-1)
    xhat = A*xhat+B*u_prev;
    SigX = A*SigX*A'+Sig_w;
    zhat = C*xhat+D*u;

    % Kalman Filter Correction (z is the measurement at step k)
    L = SigX*C'/(C*SigX*C'+Sig_v);
    xhat = xhat + L*(z-zhat);
    SigX = SigX-L*C*SigX;
end
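A self-contained Python (NumPy) sketch of the same scalar filter, including a simulated true state so the estimate can be compared against it; the input, initial conditions, and random seed are illustrative assumptions, while A, B, C, D and the noise variances are the values from the text:

```python
import numpy as np

# Scalar Kalman filter sketch mirroring the MATLAB loop above.
rng = np.random.default_rng(0)
A, B, C, D = 1.0, 1.0, 2.0, 0.0
Sig_w, Sig_v = 5.0, 5.0            # process / sensor noise variances

x = 0.0                            # true (hidden) state, assumed start
xhat, SigX = 0.0, 0.0              # assumed initial estimate and variance
u_prev = 0.0
errs = []
for k in range(1, 501):
    u = np.sin(k / (2 * np.pi))    # deterministic part of the input
    # simulate the true system and its noisy measurement
    x = A * x + B * u_prev + rng.normal(0.0, np.sqrt(Sig_w))
    z = C * x + D * u + rng.normal(0.0, np.sqrt(Sig_v))
    # prediction
    xhat = A * xhat + B * u_prev
    SigX = A * SigX * A + Sig_w
    zhat = C * xhat + D * u
    # correction
    L = SigX * C / (C * SigX * C + Sig_v)
    xhat = xhat + L * (z - zhat)
    SigX = SigX - L * C * SigX
    errs.append(x - xhat)
    u_prev = u

print(np.var(errs), SigX)
```

For this scalar system the error-covariance recursion converges quickly, so after a few hundred steps `SigX` sits at the steady-state value of roughly 1.04.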

The real state is unknown, and we estimate it using the Kalman Filter derived above. The figure below shows the comparison between the real state and the estimated state:

Steady State Kalman Filter (Infinite Horizon)

It is possible for the Kalman Filter recursion to converge to a unique steady-state solution when the system is time-invariant, {A, C} is observable, and {A, Σw^(1/2)} is controllable.

Hamiltonian Approach

If we define our state prediction variance to be:

We can derive the following relationship:

Perform the diagonal transformation of the Hamiltonian Matrix and define Y:

If the equation is propagated k steps after time zero, and assuming k is an arbitrarily large number,

To obtain the steady-state Kalman gain and state variance:

hamiltonian = [A^(-1)' A^(-1)'*C'*Sig_v^(-1)*C; Sig_w*A^(-1)' A+Sig_w*A^(-1)'*C'*Sig_v^(-1)*C];
[evector,evalue] = eig(hamiltonian);
% eigenvector associated with the unstable eigenvalue (second column here)
psi12 = evector(1,2);
psi22 = evector(2,2);
sigX_ss_minus = psi22*psi12^(-1);
L_ss = sigX_ss_minus*C'*(C*sigX_ss_minus*C'+Sig_v)^(-1);
sigX_ss_plus = sigX_ss_minus-L_ss*C*sigX_ss_minus;
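A Python (NumPy) sketch of the same Hamiltonian computation for the scalar system above. Note two assumptions made explicit here: both off-diagonal noise terms use the sensor-noise inverse Σv⁻¹, and the eigenvector is selected by the magnitude of its eigenvalue rather than by column position, since `eig` does not guarantee an ordering:

```python
import numpy as np

# Steady-state Kalman gain via the Hamiltonian matrix (scalar system from the text).
A, C = np.array([[1.0]]), np.array([[2.0]])
Sig_w, Sig_v = np.array([[5.0]]), np.array([[5.0]])

Ait = np.linalg.inv(A).T
Sv_inv = np.linalg.inv(Sig_v)
H = np.block([[Ait,          Ait @ C.T @ Sv_inv @ C],
              [Sig_w @ Ait,  A + Sig_w @ Ait @ C.T @ Sv_inv @ C]])
evalue, evector = np.linalg.eig(H)
i = np.argmax(np.abs(evalue))          # unstable eigenvalue (|lambda| > 1)
n = A.shape[0]
psi12 = evector[:n, [i]]
psi22 = evector[n:, [i]]
sigX_ss_minus = psi22 @ np.linalg.inv(psi12)
L_ss = sigX_ss_minus @ C.T @ np.linalg.inv(C @ sigX_ss_minus @ C.T + Sig_v)
sigX_ss_plus = sigX_ss_minus - L_ss @ C @ sigX_ss_minus
print(sigX_ss_minus, L_ss, sigX_ss_plus)
```

As a sanity check, the resulting covariances satisfy the steady-state prediction relation Σ⁻ = AΣ⁺Aᵀ + Σw, which for this scalar system means Σ⁻ = Σ⁺ + 5.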

Using the steady-state approach, the Kalman gain and the state variance are not functions of the step k. It is computationally simple, with a slight penalty in optimality. The comparison between the estimated state and the real state is shown in the figure below (compared with the linear KF approach above, the result is almost the same even though a constant Kalman gain was used):

3. Applying to an Arbitrary System (MIMO and Robust Model):

If we have two inputs, two outputs, and two states, the A, B, and C matrices are now 2x2:

SigmaW = [0.5 0;0 1]; %Process Noise
SigmaV = [.5 0;0 .5]; %Sensor Noise
A = [.1 .2; .5 .2]; B = [2 0;1 2]; C = [.2 0;-.1 .3]; D = 0; % Simple State-Space Model

The iteration steps of prediction and correction to obtain the optimal estimation are shown above, and the figure below shows the comparison between the two real states and estimated states:

Robust Kalman Filter (Joseph Form Covariance Update & Sequential Processing of Measurement)

During the covariance update, the covariance matrix can lose its positive definiteness due to rounding errors in the subtraction:

Joseph Form Covariance Update

We can convert this to the Joseph form so that the subtraction occurs inside a "squared" term, and thus the positive definiteness of the covariance matrix is guaranteed:
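The missing equation image can be reconstructed in standard notation; the Joseph form of the covariance update is:

```math
\Sigma^{+}_k = \left(I - L_k C\right)\Sigma^{-}_k\left(I - L_k C\right)^{\top} + L_k\,\Sigma_v\,L_k^{\top}
```

which is algebraically equal to Σ⁻ − LCΣ⁻ when L is the optimal gain, but only involves sums of symmetric positive semidefinite terms.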

Sequential Processing of Measurement

If there is a large number of sensors, the Kalman gain calculation can be computationally intensive due to the matrix inverse operation, which is O(m³), where m is the number of sensors. If we break the m measurements into single scalar measurements, the efficiency improves to O(m²). The measurements can be split as shown below:

And now we need to perform m steps of prediction correction:

for t=1:iteration
    % ... Prediction ... %

    % Kalman Filter Correction (using sequential measurement processing)
    for m=1:size(C,1)
        C_m = C(m,:)';
        zvar = C_m'*SigX2*C_m+SigmaV(m,m);
        L2 = SigX2*C_m/zvar;
        xhat2 = xhat2+L2*(z2(m)-C_m'*xhat2);
        % Joseph form covariance update
        SigX2 = (eye(length(xhat2))-L2*C_m')*SigX2*(eye(length(xhat2))-L2*C_m')'+L2*zvar*L2';
    end
end

The figure below is the same as for the regular MIMO KF, but the filter is now more robust (Joseph form covariance update) and more efficient (sequential measurement processing).

References

[1] Optimal State Estimation, D. Simon, 2006

[2] State Estimation: Applied Kalman Filter, G. Plett (Lecture Notes)
