[kalmf,L,P] = kalman(sys,Q,R,N) creates a Kalman filter given the plant model sys and the noise covariance data Q, R, and N. The function computes a Kalman filter for use in a Kalman estimator with the configuration described below.
You construct the model sys with known inputs u and white process noise inputs w, such that w consists of the last Nw inputs to sys. The "true" plant output yt consists of all outputs of sys. You also provide the noise covariance data Q, R, and N. The returned Kalman filter kalmf is a state-space model that takes the known inputs u and the noisy measurements y and produces an estimate y^ of the true plant output and an estimate x^ of the plant states. kalman also returns the Kalman gains L and the steady-state error covariance matrix P.
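For example, the following sketch builds a discrete-time plant whose single process-noise input w enters through the same channel as the known input u, and then designs a filter for it. The numeric matrices and noise variances are illustrative placeholders, not values taken from any particular system.

    % Illustrative plant: 3 states, one known input u, one process-noise input w
    A = [1.1269 -0.4940 0.1129; 1 0 0; 0 1 0];
    B = [-0.3832; 0.5919; 0.5191];
    C = [1 0 0];
    Ts = 1;                                  % sample time
    sys = ss(A, [B B], C, 0, Ts);            % inputs ordered [u w]; w is last

    Q = 2.3;                                 % process noise variance, E(w*w^T)
    R = 1;                                   % measurement noise variance, E(v*v^T)

    [kalmf, L, P] = kalman(sys, Q, R);       % omitting N assumes w and v are uncorrelated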
[kalmf,L,P,Mx,Z,My] = kalman(___) also returns the innovation gains Mx and My and the steady-state error covariances P and Z for a discrete-time sys. You can use this syntax with any of the previous input argument combinations.
The Kalman filter kalmf is a state-space model having two inputs and four outputs. kalmf takes as inputs the plant input signal u and the noisy plant output y = yt + v. The first output is the estimated true plant output y^. The remaining three outputs are the state estimates x^. Examine the input and output names of kalmf to see how kalman labels these channels.
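As a quick check (the exact label strings may vary by release), you can inspect the channel names and dimensions directly:

    kalmf.InputName      % known input u followed by the measured output y
    kalmf.OutputName     % estimated output y^ followed by the state estimates x^
    size(kalmf)          % 4 outputs, 2 inputs for this 3-state plant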
kalman uses the dimensions of Q to determine which inputs are known and which are the noise inputs. For scalar Q, kalman assumes one noise input and uses the last input, unless you specify otherwise (see Plant with Unmeasured Outputs).
Process noise covariance, specified as a scalar or Nw-by-Nw matrix, where Nw is the number of noise inputs to the plant. kalman uses the size of Q to determine which inputs of sys are noise inputs, taking the last Nw = size(Q,1) inputs to be the noise inputs unless you specify otherwise with the known input argument.
kalman assumes that the process noise w is Gaussian noise with covariance Q = E(ww^T). When the plant has only one process noise input, Q is a scalar equal to the variance of w. When the plant has multiple, uncorrelated noise inputs, Q is a diagonal matrix. In practice, you determine the appropriate values for Q by measuring or making educated guesses about the noise properties of your system.
Measurement noise covariance, specified as a scalar or Ny-by-Ny matrix, where Ny is the number of plant outputs. kalman assumes that the measurement noise v is white noise with covariance R = E(vv^T). When the plant has only one output channel, R is a scalar equal to the variance of v. When the plant has multiple output channels with uncorrelated measurement noise, R is a diagonal matrix. In practice, you determine the appropriate values for R by measuring or making educated guesses about the noise properties of your system.
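For instance, for a plant with two uncorrelated process-noise inputs and two output channels with independent sensor noise, you might specify diagonal covariances (the variances below are placeholders you would replace with estimates from your own system):

    Q = diag([0.5 0.1]);     % variances of the two process-noise inputs
    R = diag([0.04 0.09]);   % variances of the two measurement-noise channels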
Kalman estimator or Kalman filter, returned as a state-space (ss) model. The resulting estimator has inputs [u;y] and outputs [y^;x^]. In other words, kalmf takes as inputs the plant input u and the noisy plant output y, and produces as outputs the estimated noise-free plant output y^ and the estimated state values x^.
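Continuing the scalar-noise sketch above (Q = 2.3, R = 1), one way to exercise the filter is to simulate the plant and the estimator side by side with lsim, feeding the same known input u to both and routing the noisy measurement y into the filter's second input. The test signal and noise realizations below are arbitrary:

    t = (0:100)';
    u = sin(t/5);                        % known input (arbitrary test signal)
    rng(0);
    w = sqrt(Q)*randn(size(t));          % process noise with variance Q
    v = sqrt(R)*randn(size(t));          % measurement noise with variance R

    yt = lsim(sys, [u w], t);            % "true" plant output
    y  = yt + v;                         % noisy measurement

    est  = lsim(kalmf, [u y], t);        % run the Kalman filter
    yhat = est(:,1);                     % estimated plant output
    xhat = est(:,2:end);                 % estimated states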
The algorithm works in two phases. In the prediction phase, the Kalman filter produces estimates of the current state variables, along with their uncertainties. Once the outcome of the next measurement (necessarily corrupted with some error, including random noise) is observed, these estimates are updated using a weighted average, with more weight given to estimates with greater certainty. The algorithm is recursive: it can operate in real time using only the present input measurements, the previously calculated state, and its uncertainty matrix; no additional past information is required.
Optimality of Kalman filtering assumes that the errors have a normal (Gaussian) distribution. In the words of Rudolf E. Kálmán: "In summary, the following assumptions are made about random processes: Physical random phenomena may be thought of as due to primary random sources exciting dynamic systems. The primary sources are assumed to be independent gaussian random processes with zero mean; the dynamic systems will be linear."[13] Regardless of Gaussianity, however, if the process and measurement covariances are known, the Kalman filter is the best possible linear estimator in the minimum mean-square-error sense.[14]
Noisy sensor data, approximations in the equations that describe the system evolution, and external factors that are not accounted for all limit how well it is possible to determine the system's state. The Kalman filter deals effectively with the uncertainty due to noisy sensor data and, to some extent, with random external factors. The Kalman filter produces an estimate of the state of the system as a weighted average of the system's predicted state and the new measurement. The purpose of the weights is that values with better (i.e., smaller) estimated uncertainty are "trusted" more. The weights are calculated from the covariance, a measure of the estimated uncertainty of the prediction of the system's state. The result of the weighted average is a new state estimate that lies between the predicted and measured state, and has a better estimated uncertainty than either alone. This process is repeated at every time step, with the new estimate and its covariance informing the prediction used in the following iteration. This means that the Kalman filter works recursively and requires only the last "best guess", rather than the entire history, of a system's state to calculate a new state.
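A scalar illustration with made-up numbers shows how the weighting works: a prediction with large variance is pulled strongly toward a more precise measurement, and the combined estimate is more certain than either input on its own.

    x_pred = 10;  P_pred = 4;            % predicted state and its variance
    z      = 12;  Rv     = 1;            % measurement and its variance
    K      = P_pred/(P_pred + Rv);       % weight (gain) = 0.8
    x_new  = x_pred + K*(z - x_pred);    % 11.6, between prediction and measurement
    P_new  = (1 - K)*P_pred;             % 0.8, smaller than either 4 or 1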
As an example, consider a truck whose position is tracked by combining dead-reckoning estimates from its own motion with noisy GPS measurements. The Kalman filter can be thought of as operating in two distinct phases: predict and update. In the prediction phase, the truck's old position is modified according to the physical laws of motion (the dynamic or "state transition" model). Not only is a new position estimate calculated, but a new covariance is calculated as well. Perhaps the covariance is proportional to the speed of the truck because we are more uncertain about the accuracy of the dead-reckoning position estimate at high speeds but very certain about the position estimate at low speeds. Next, in the update phase, a measurement of the truck's position is taken from the GPS unit. Along with this measurement comes some amount of uncertainty, and its covariance relative to that of the prediction from the previous phase determines how much the new measurement will affect the updated prediction. Ideally, as the dead-reckoning estimates tend to drift away from the real position, the GPS measurement should pull the position estimate back toward the real position but not disturb it to the point of becoming noisy and rapidly jumping.
Kalman filtering is based on linear dynamic systems discretized in the time domain. These systems are modeled as a Markov chain built on linear operators perturbed by errors that may include Gaussian noise. The state of the target system refers to the ground-truth (yet hidden) system configuration of interest, which is represented as a vector of real numbers. At each discrete time increment, a linear operator is applied to the state to generate the new state, with some noise mixed in, and optionally some information from the controls on the system if they are known. Then, another linear operator mixed with more noise generates the measurable outputs (i.e., the observation) from the true ("hidden") state. The Kalman filter may be regarded as analogous to the hidden Markov model, with the difference that the hidden state variables take values in a continuous space, as opposed to the discrete state space of the hidden Markov model. There is a strong analogy between the equations of a Kalman filter and those of the hidden Markov model. A review of this and other models is given in Roweis and Ghahramani (1999)[24] and Hamilton (1994), Chapter 13.[25]
In order to use the Kalman filter to estimate the internal state of a process given only a sequence of noisy observations, one must model the process in accordance with the following framework. This means specifying, for each time step k, the state-transition model F_k, the observation model H_k, the covariance of the process noise Q_k, the covariance of the observation noise R_k, and, when the system has known control inputs, the control-input model B_k.
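In the standard formulation, these matrices define the model the filter assumes: the state evolves as

    x_k = F_k x_{k-1} + B_k u_k + w_k,    with process noise w_k ~ N(0, Q_k),

and the observation at time k is generated from the hidden state as

    z_k = H_k x_k + v_k,                  with measurement noise v_k ~ N(0, R_k).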
Many real-time dynamic systems do not exactly conform to this model. In fact, unmodeled dynamics can seriously degrade the filter performance, even though the filter is supposed to work with unknown stochastic signals as inputs. The reason is that the effect of unmodeled dynamics depends on the input and can therefore drive the estimation algorithm to instability (it diverges). Independent white noise signals, on the other hand, will not make the algorithm diverge. The problem of distinguishing between measurement noise and unmodeled dynamics is a difficult one and is treated as a problem of control theory using robust control.[26][27]
The algorithm structure of the Kalman filter resembles that of the alpha-beta filter. The Kalman filter can be written as a single equation; however, it is most often conceptualized as two distinct phases: "predict" and "update". The predict phase uses the state estimate from the previous timestep to produce an estimate of the state at the current timestep. This predicted state estimate is also known as the a priori state estimate because, although it is an estimate of the state at the current timestep, it does not include observation information from the current timestep. In the update phase, the innovation (the pre-fit residual), i.e., the difference between the current a priori prediction and the current observation, is multiplied by the optimal Kalman gain and combined with the previous state estimate to refine the state estimate. This improved estimate, based on the current observation, is termed the a posteriori state estimate.
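Written out in code, one cycle of this two-phase recursion looks roughly as follows; F, B, H, Q, and R are the model matrices named above, u is the control input, z is the new measurement, and x and P hold the estimate and covariance carried over from the previous step (a minimal sketch, not a numerically hardened implementation):

    % Predict (a priori estimate)
    x_pred = F*x + B*u;                      % propagate the state through the model
    P_pred = F*P*F' + Q;                     % propagate the uncertainty

    % Update (a posteriori estimate)
    innov  = z - H*x_pred;                   % innovation (pre-fit residual)
    S      = H*P_pred*H' + R;                % innovation covariance
    K      = P_pred*H'/S;                    % optimal Kalman gain
    x      = x_pred + K*innov;               % refined state estimate
    P      = (eye(size(P,1)) - K*H)*P_pred;  % refined covariance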
One problem with the Kalman filter is its numerical stability. If the process noise covariance Q_k is small, round-off error often causes a small positive eigenvalue to be computed as a negative number. This renders the numerical representation of the state covariance matrix P indefinite, while its true form is positive-definite.
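A common mitigation (one of several; shown here as a sketch using the same variable names as the step above) is the Joseph stabilized form of the covariance update. It is algebraically equivalent to P = (I - K*H)*P_pred but preserves symmetry and positive semidefiniteness far better in floating-point arithmetic:

    I = eye(size(P_pred,1));
    P = (I - K*H)*P_pred*(I - K*H)' + K*R*K';   % Joseph form covariance update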