Num | Date | Summary |
---|---|---|
08 | 18.Sept | First, we discussed metrics and looked at the iso-contours of various p-norms. Then, we discussed Best Uniform Approximation. Specifically, we looked at the problem of approximating known functions using polynomials of degree n (or less) so as to minimize the maximum error (i.e., minimize the worst-case error). A key tool is the Chebyshev Equioscillation Theorem. See page 3 of the notes on approximation. For a more detailed analysis along with proofs, see this article. We also looked at Chebyshev polynomials more generally, and showed how they are sometimes useful for constructing best uniform approximations (a small numerical sketch appears below the table). |
09 | 23.Sept | Least-Squares Approximation (of sampled functions): In lecture, we discussed the normal equations for approximating data with functions that are linear combinations of "basis" functions φ₁(x), ..., φₖ(x). (One chooses these functions in an application-dependent way. We considered an example with φ₁(x) = 1, φ₂(x) = x, and φ₃(x) = x²; a small numerical sketch appears below the table.) We discussed inner products and showed how to define an inner product on function spaces using an integral. Least-Squares Approximation (of known functions): We began our discussion of orthogonal approximation in function spaces, based on inner products. "Least squares" means we are trying to select a function p(x) from some class of functions (such as all polynomials of a certain maximal degree) to approximate a known function f(x) so as to minimize a cost of the form ∫ₐᵇ (e(x))² w(x) dx. Here e(x) = f(x) - p(x) is the error of the approximation and w(x) is some strictly positive weighting function on [a, b] (perhaps simply the constant 1). Such a cost is the inner product of e(x) with itself, where the inner product is defined as the integral <f,g> = ∫ₐᵇ f(x) g(x) w(x) dx. Given an inner product, one can construct an orthogonal set of functions {p₀(x), p₁(x), …} that spans a desired space of interest, for instance, all cubic polynomials. One then approximates f(x) by projecting orthogonally onto this subspace. Orthogonal projection ensures that one is minimizing distance, i.e., the least-squares error. The key step is expressing the projection as a linear combination of the pᵢ(x) by computing the inner product of f(x) with each pᵢ(x): p(x) = Σᵢ dᵢ pᵢ(x), with dᵢ = <f,pᵢ>/<pᵢ,pᵢ>. We will work through an example in the next lecture. |
10 | 25.Sept | Least-Squares Approximation (of known functions): We finished our discussion of Least-Squares Approximation, in the context of function spaces. As an example, we constructed the Legendre polynomials, then used those to approximate the exponential function (a sketch of this computation appears below the table). Least-Squares Approximation (of periodic functions): We briefly discussed Fourier series approximations to periodic functions, using exponential functions of the form e^(ik(2π/τ)x), with τ the period of the function being approximated, i = √(-1), and k varying over all integers. (Such complex exponentials effectively amount to cosines and sines). Once again we took the perspective that these functions constitute an orthonormal basis for a vector space, now extended to infinite sums. The Fourier coefficients {fₖ} of a given function f are the inner products of f with each of these basis functions: fₖ = <f, e^(ik(2π/τ)x)>. In other words, they are projections onto "perpendicular coordinate axes" in function space. (Each basis function e^(ik(2π/τ)x) gives one such coordinate axis.) Each Fourier coefficient fₖ is a number computed via an integral. Computing the integral numerically amounts to sampling the function, meaning one is really computing a discrete inner product, which leads to aliasing (a sketch illustrating this appears below the table). We mentioned the Nyquist Sampling Theorem. |
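
The sketches below expand on a few of the lectures above; they are illustrative only, not code from the course. First, for lecture 08: interpolating at Chebyshev nodes is a standard way to obtain a near-minimax (close to best uniform) polynomial approximation. The target function exp, the degree 3, and the interval [-1, 1] are assumptions made for this example; only NumPy is assumed.

```python
import numpy as np

def chebyshev_nodes(n, a=-1.0, b=1.0):
    """Roots of the degree-(n+1) Chebyshev polynomial, mapped to [a, b]."""
    k = np.arange(n + 1)
    x = np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))   # nodes on [-1, 1]
    return 0.5 * (a + b) + 0.5 * (b - a) * x

# Example (illustrative): near-minimax cubic approximation of exp(x) on [-1, 1].
f = np.exp
n = 3
nodes = chebyshev_nodes(n)
coeffs = np.polyfit(nodes, f(nodes), n)   # interpolate at the Chebyshev nodes

# The error of this interpolant comes close to equioscillating between its
# extremes, which is the signature the Equioscillation Theorem associates
# with the true best uniform approximation.
xs = np.linspace(-1, 1, 1001)
err = f(xs) - np.polyval(coeffs, xs)
print("max |error| ≈", np.abs(err).max())
```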
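
For lecture 09, a minimal sketch of the normal equations with the basis φ₁(x) = 1, φ₂(x) = x, φ₃(x) = x². The sample data are made up for illustration, and NumPy is assumed.

```python
import numpy as np

# Sampled data (x_i, y_i) -- illustrative values only.
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
y = np.array([1.1, 1.6, 2.4, 3.4, 4.9, 6.8])

# Design matrix whose columns are the basis functions evaluated at the samples:
# phi_1(x) = 1, phi_2(x) = x, phi_3(x) = x^2.
A = np.column_stack([np.ones_like(x), x, x**2])

# Normal equations: (A^T A) c = A^T y.
c = np.linalg.solve(A.T @ A, A.T @ y)
print("coefficients:", c)

# Equivalent result via a library least-squares solver.
c_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)
print("lstsq coefficients:", c_lstsq)
```

In practice one usually prefers a QR-based least-squares solver (as in the last two lines) over forming AᵀA explicitly, since the normal equations square the condition number of the problem.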
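
For lecture 10, a sketch of orthogonal projection in function space: approximating exp on [-1, 1] by a cubic built from Legendre polynomials, with coefficients dᵢ = <f,pᵢ>/<pᵢ,pᵢ> computed by numerical integration. The degree and the use of scipy.integrate.quad are assumptions for the example.

```python
import numpy as np
from scipy.integrate import quad

# Legendre polynomials P_0..P_3, orthogonal on [-1, 1] with w(x) = 1.
P = [np.polynomial.legendre.Legendre.basis(k) for k in range(4)]

f = np.exp

# d_i = <f, P_i> / <P_i, P_i>, with <g, h> = integral of g(x) h(x) over [-1, 1].
d = []
for p in P:
    num, _ = quad(lambda x: f(x) * p(x), -1, 1)
    den, _ = quad(lambda x: p(x) * p(x), -1, 1)   # equals 2 / (2k + 1)
    d.append(num / den)

def approx(x):
    """Orthogonal projection of f onto the span of P_0..P_3."""
    return sum(di * p(x) for di, p in zip(d, P))

xs = np.linspace(-1, 1, 201)
print("max |error| ≈", np.max(np.abs(f(xs) - approx(xs))))
```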
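
Also for lecture 10, a sketch of how evaluating a Fourier-coefficient integral numerically turns into a discrete inner product of samples, and how that produces aliasing. The test function, period, and sample counts are illustrative choices.

```python
import numpy as np

def fourier_coeff(f, k, tau, n):
    """Approximate f_k = (1/tau) * integral over [0, tau] of f(x) e^(-i k (2π/tau) x) dx
    by a Riemann sum over n equally spaced samples. For a periodic f this is
    exactly a discrete inner product, so frequencies that differ by a multiple
    of n become indistinguishable -- i.e., they alias onto one another."""
    x = np.arange(n) * tau / n
    return np.mean(f(x) * np.exp(-1j * k * (2 * np.pi / tau) * x))

tau = 2 * np.pi
f = lambda x: np.cos(3 * x) + 0.5 * np.sin(7 * x)

print(fourier_coeff(f, 1, tau, 64))  # ≈ 0: enough samples, no aliasing
print(fourier_coeff(f, 1, tau, 8))   # ≈ 0.25j: the sin(7x) term aliases onto k = 1
```

With only 8 samples the frequency-7 component is indistinguishable from a frequency -1 component, which is the kind of undersampling the Nyquist Sampling Theorem characterizes.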