ECE5550: Applied Kalman Filtering
THE LINEAR KALMAN FILTER
4.1: Introduction
■ The principal goal of this course is to learn how to estimate the present hidden state (vector) value of a dynamic system, using noisy measurements that are somehow related to that state (vector).
■ We assume a general, possibly nonlinear, model
$$x_k = f_{k-1}(x_{k-1}, u_{k-1}, w_{k-1})$$
$$z_k = h_k(x_k, u_k, v_k),$$
where $u_k$ is a known (deterministic/measured) input signal, $w_k$ is a process-noise random input, and $v_k$ is a sensor-noise random input.
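As a quick illustration (not from the course notes), the following Python sketch simulates one such model; the particular $f_{k-1}$ and $h_k$ used here are hypothetical stand-ins chosen only to show the structure:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, u, w):
    # Hypothetical nonlinear state equation (stand-in for f_{k-1})
    return 0.9 * x + np.sin(u) + w

def h(x, u, v):
    # Hypothetical nonlinear measurement equation (stand-in for h_k)
    return x**2 + v

x = 1.0                      # initial state
for k in range(5):
    u = 0.1 * k              # known (deterministic/measured) input
    w = rng.normal(0, 0.1)   # process noise
    v = rng.normal(0, 0.5)   # sensor noise
    x = f(x, u, w)           # state propagates through f
    z = h(x, u, v)           # noisy measurement through h
    print(f"k={k+1}: x={x:+.3f}, z={z:+.3f}")
```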
SEQUENTIAL PROBABILISTIC INFERENCE: Estimate the present state $x_k$ of a dynamic system using all measurements $\mathbb{Z}_k = \{z_0, z_1, \dots, z_k\}$.
■ This chapter of notes provides a unified theoretical framework to develop a family of estimators for this task: particle filters, Kalman filters, extended Kalman filters, sigma-point (unscented) Kalman filters, and so forth.
A smattering of estimation theory
■ There are various approaches to “optimal estimation” of some unknown quantity x.
■ One approach minimizes the expected magnitude (length) of the error vector between $x$ and the estimate $\hat{x}$.
■ The optimizing estimate turns out to be the median of the a posteriori pdf $f(x \mid \mathbb{Z})$; we denote it $\hat{x}_{\text{MME}}$.
■ A similar result, but one easier to derive analytically, minimizes the expected squared length of that error vector.
■ This is the minimum mean square error (MMSE) estimator.
■ We solve for $\hat{x}$ by differentiating the cost function $\mathbb{E}\!\left[\|x - \hat{x}\|^2 \mid \mathbb{Z}\right]$ and setting the result to zero (a short derivation is sketched after this list).
■ Another approach to estimation is to maximize a likelihood function $f(\mathbb{Z} \mid x)$, giving the maximum-likelihood (ML) estimate.
■ Yet a fourth is the maximum a posteriori (MAP) estimate, which maximizes $f(x \mid \mathbb{Z})$.
■ In general, $\hat{x}_{\text{MME}} \ne \hat{x}_{\text{MMSE}} \ne \hat{x}_{\text{ML}} \ne \hat{x}_{\text{MAP}}$, so which is “best”?
■ Answer: It probably depends on the application.
■ The text gives some metrics for comparison: bias, MSE, etc.
■ Here, we use $\hat{x}_{\text{MMSE}} = \mathbb{E}[x \mid \mathbb{Z}]$ because it “makes sense,” works well in a lot of applications, and is mathematically tractable.
■ Also, for the assumptions that we will make, $\hat{x}_{\text{MME}} = \hat{x}_{\text{MMSE}} = \hat{x}_{\text{MAP}}$ but $\hat{x}_{\text{MMSE}} \ne \hat{x}_{\text{ML}}$. $\hat{x}_{\text{MMSE}}$ is unbiased and has smaller MSE than $\hat{x}_{\text{ML}}$.
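For reference, the MMSE derivation referred to above goes as follows (a standard sketch, written here in this chapter's notation):

```latex
\begin{align*}
\hat{x}_{\text{MMSE}}
  &= \arg\min_{\hat{x}} \; \mathbb{E}\!\left[ \|x - \hat{x}\|^2 \mid \mathbb{Z} \right] \\
\frac{\partial}{\partial \hat{x}}\,
  \mathbb{E}\!\left[ (x - \hat{x})^{T}(x - \hat{x}) \mid \mathbb{Z} \right]
  &= \mathbb{E}\!\left[ -2\,(x - \hat{x}) \mid \mathbb{Z} \right] = 0 \\
\Rightarrow \quad \hat{x}_{\text{MMSE}} &= \mathbb{E}\left[\, x \mid \mathbb{Z} \,\right].
\end{align*}
```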
Some examples
■ In example 1, mean, median, and mode are identical. Any of these statistics would make a good estimator of x.
■ In example 2, mean, median, and mode are all different. Which to choose is not necessarily obvious.
■ In example 3, the distribution is multi-modal. None of the estimates is likely to be satisfactory!
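To make the example-2 situation concrete, here is a small numerical illustration using a hypothetical skewed pdf (a lognormal, chosen only so that mean, median, and mode all differ):

```python
import numpy as np

rng = np.random.default_rng(1)

# Samples from a skewed (lognormal) "posterior": mean, median, mode all differ
x = rng.lognormal(mean=0.0, sigma=0.75, size=200_000)

mean = x.mean()                         # MMSE estimate: posterior mean
median = np.median(x)                   # MME estimate: posterior median
hist, edges = np.histogram(x, bins=400)
mode = edges[np.argmax(hist)]           # MAP estimate: posterior mode (approx.)

print(f"mean≈{mean:.3f}, median≈{median:.3f}, mode≈{mode:.3f}")
# Analytically: mean = e^{0.28}≈1.325, median = 1.0, mode = e^{-0.5625}≈0.570
```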
4.2: Developing the framework
■ The Kalman filter applies the MMSE estimation criterion to a dynamic system. That is, our state estimate is the conditional mean
$$\hat{x}_k = \mathbb{E}[x_k \mid \mathbb{Z}_k] = \int_{\mathcal{R}^{x_k}} x_k\, f(x_k \mid \mathbb{Z}_k)\, dx_k,$$
where $\mathcal{R}^{x_k}$ is the set comprising the range of possible $x_k$.
■ To make progress toward implementing this estimator, we must break $f(x_k \mid \mathbb{Z}_k)$ into simpler pieces.
■ We first use Bayes’ rule to write:
$$f(x_k \mid \mathbb{Z}_k) = \frac{f(\mathbb{Z}_k \mid x_k)\, f(x_k)}{f(\mathbb{Z}_k)}.$$
■ We then break up $\mathbb{Z}_k$ into its smaller constituent parts $\mathbb{Z}_{k-1}$ and $z_k$ within the joint probabilities:
$$f(x_k \mid \mathbb{Z}_k) = \frac{f(z_k, \mathbb{Z}_{k-1} \mid x_k)\, f(x_k)}{f(z_k, \mathbb{Z}_{k-1})} = \frac{\left[ f(z_k \mid \mathbb{Z}_{k-1}, x_k)\, f(\mathbb{Z}_{k-1} \mid x_k) \right] f(x_k)}{\left[ f(z_k \mid \mathbb{Z}_{k-1})\, f(\mathbb{Z}_{k-1}) \right]}.$$
■ Next, we apply Bayes’ rule once again to the terms within the $[\,\cdot\,]$, using $f(\mathbb{Z}_{k-1} \mid x_k) = \dfrac{f(x_k \mid \mathbb{Z}_{k-1})\, f(\mathbb{Z}_{k-1})}{f(x_k)}$:
$$f(x_k \mid \mathbb{Z}_k) = \frac{f(z_k \mid \mathbb{Z}_{k-1}, x_k)\, f(x_k \mid \mathbb{Z}_{k-1})\, f(\mathbb{Z}_{k-1})\, f(x_k)}{f(z_k \mid \mathbb{Z}_{k-1})\, f(\mathbb{Z}_{k-1})\, f(x_k)}.$$
■ We now cancel $f(\mathbb{Z}_{k-1})$ and $f(x_k)$ from numerator and denominator:
$$f(x_k \mid \mathbb{Z}_k) = \frac{f(z_k \mid \mathbb{Z}_{k-1}, x_k)\, f(x_k \mid \mathbb{Z}_{k-1})}{f(z_k \mid \mathbb{Z}_{k-1})}.$$
■ Finally, recognize that $z_k$ is conditionally independent of $\mathbb{Z}_{k-1}$ given $x_k$, so $f(z_k \mid \mathbb{Z}_{k-1}, x_k) = f(z_k \mid x_k)$.
■ So, overall, we have shown that
$$f(x_k \mid \mathbb{Z}_k) = \frac{f(z_k \mid x_k)\, f(x_k \mid \mathbb{Z}_{k-1})}{f(z_k \mid \mathbb{Z}_{k-1})}.$$
KEY POINT #1: This shows that we can compute the desired density recursively with two steps per iteration:
■ The first step computes the probability density for predicting $x_k$ given all past observations, $f(x_k \mid \mathbb{Z}_{k-1})$.
■ The second step updates the prediction via
$$f(x_k \mid \mathbb{Z}_k) = \frac{f(z_k \mid x_k)\, f(x_k \mid \mathbb{Z}_{k-1})}{f(z_k \mid \mathbb{Z}_{k-1})}.$$
■ Therefore, the general sequential inference solution breaks naturally into a prediction/update scenario.
■ To proceed further using this approach, the relevant probability densities may be computed as
$$f(x_k \mid \mathbb{Z}_{k-1}) = \int_{\mathcal{R}^{x_{k-1}}} f(x_k \mid x_{k-1})\, f(x_{k-1} \mid \mathbb{Z}_{k-1})\, dx_{k-1}$$
$$f(z_k \mid \mathbb{Z}_{k-1}) = \int_{\mathcal{R}^{x_k}} f(z_k \mid x_k)\, f(x_k \mid \mathbb{Z}_{k-1})\, dx_k.$$
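To make the recursion concrete, here is a minimal grid-based (point-mass) sketch of the prediction/update equations for a hypothetical scalar linear-Gaussian system; the model parameters and measurements below are made up for illustration:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical model: x_k = 0.9 x_{k-1} + w, w~N(0, 0.1^2); z_k = x_k + v, v~N(0, 0.5^2)
xs = np.linspace(-5, 5, 1001)          # grid covering the range of possible x_k
dx = xs[1] - xs[0]
post = norm.pdf(xs, 0.0, 1.0)          # f(x_0 | Z_0): assumed Gaussian prior

def step(post, z):
    # Prediction: f(x_k | Z_{k-1}) = ∫ f(x_k | x_{k-1}) f(x_{k-1} | Z_{k-1}) dx_{k-1}
    trans = norm.pdf(xs[:, None], 0.9 * xs[None, :], 0.1)   # f(x_k | x_{k-1})
    pred = trans @ post * dx
    # Update: f(x_k | Z_k) ∝ f(z_k | x_k) f(x_k | Z_{k-1}); normalizer is f(z_k | Z_{k-1})
    lik = norm.pdf(z, xs, 0.5)
    post = lik * pred
    return post / (post.sum() * dx)

for z in [0.8, 1.1, 0.9]:              # made-up measurements
    post = step(post, z)
    print("E[x_k | Z_k] ≈", np.sum(xs * post) * dx)
```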
KEY POINT #2: Computing closed-form solutions to these multi-dimensional integrals is intractable for most real-world systems.
■ For applications that justify the computational expense, the integrals may be approximated using Monte Carlo methods (particle filters).
■ But, aside from applications that use particle filters, this approach appears to be a dead end.
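A minimal bootstrap particle filter sketch, assuming the same hypothetical scalar model as the grid example above, shows how the integrals are approximated with samples (this is an illustration, not the course's reference implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5000
x = rng.normal(0.0, 1.0, N)            # particles drawn from the prior f(x_0)

for z in [0.8, 1.1, 0.9]:              # made-up measurements
    # Prediction: propagate particles through the (hypothetical) process model
    x = 0.9 * x + rng.normal(0.0, 0.1, N)
    # Update: weight by the likelihood f(z_k | x_k), then resample
    w = np.exp(-0.5 * ((z - x) / 0.5) ** 2)
    w /= w.sum()
    x = rng.choice(x, size=N, p=w)     # bootstrap (multinomial) resampling
    print("E[x_k | Z_k] ≈", x.mean())
```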
KEY POINT #3: A simplified solution may be obtained if we are willing to make the assumption that all probability densities are Gaussian.
■ This is the basis of the original Kalman filter, the extended Kalman filter, and the sigma-point (unscented) Kalman filters to be discussed.
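Under those Gaussian assumptions, the recursion collapses to propagating just a mean and a covariance. A minimal sketch for the hypothetical scalar model used above (the general vector filter is developed in the sections that follow):

```python
# Minimal scalar Kalman filter for x_k = 0.9 x_{k-1} + w, z_k = x_k + v
A, Q, H, R = 0.9, 0.1**2, 1.0, 0.5**2  # hypothetical model parameters
xhat, P = 0.0, 1.0                      # Gaussian prior: mean and variance

for z in [0.8, 1.1, 0.9]:               # made-up measurements
    # Prediction step: f(x_k | Z_{k-1}) = N(xhat_minus, P_minus)
    xhat_minus = A * xhat
    P_minus = A * P * A + Q
    # Update step: fold in z_k via the Kalman gain
    K = P_minus * H / (H * P_minus * H + R)
    xhat = xhat_minus + K * (z - H * xhat_minus)
    P = (1 - K * H) * P_minus
    print(f"xhat = {xhat:+.3f}, P = {P:.4f}")
```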