Adaptive filters are systems that respond to variations in their environment by adapting their internal structure in order to meet certain performance specifications. Such systems are widely used in communications, biomedical applications, signal processing, and control. The performance of an adaptive filter is evaluated in terms of its transient behavior and its steady-state behavior: the former indicates how fast a filter learns, while the latter indicates how well it learns. Such performance analyses are usually challenging since adaptive filters are, by design, time-variant, nonlinear, and stochastic systems. For this reason, it has been common in the literature to study different adaptive schemes separately, owing to the differences in their update equations.
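To make the transient/steady-state distinction concrete, the following is a minimal sketch of one canonical adaptive scheme, the least-mean-squares (LMS) filter, applied to a system-identification task. All parameters here (filter length, step size, noise level) are illustrative assumptions, not values from the talk:

```python
import numpy as np

# Minimal LMS sketch (assumed illustration): estimate an "unknown"
# weight vector w_o from streaming noisy measurements.
rng = np.random.default_rng(0)

M = 4                                # filter length (assumed)
w_o = rng.standard_normal(M)         # unknown system to identify
w = np.zeros(M)                      # adaptive weight estimate
mu = 0.01                            # step size (assumed)

sq_err = []
for _ in range(5000):
    u = rng.standard_normal(M)                   # regressor vector
    d = u @ w_o + 0.01 * rng.standard_normal()   # desired signal + noise
    e = d - u @ w                                # a-priori estimation error
    w = w + mu * e * u                           # LMS update
    sq_err.append(e**2)

# Transient behavior: how fast the squared error decays.
# Steady-state behavior: the level it settles at.
print(np.mean(sq_err[:100]), np.mean(sq_err[-100:]))
```

The early portion of `sq_err` reflects the transient (learning) phase; its tail reflects steady-state performance.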
The purpose of this talk is to provide an overview of an energy conservation approach to the performance analysis of adaptive filters. The framework is based on studying the energy flow through successive iterations of an adaptive filter and on establishing a fundamental energy conservation relation; the relation bears resemblance to Snell's Law in optics and has far-reaching consequences for the study of adaptive schemes. In this way, many new and old results can be pursued uniformly across different classes of algorithms.
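One common form of this relation, written here in notation assumed for illustration (it balances weighted weight-error energies against a-priori and a-posteriori error energies at each iteration), is:

```latex
% Notation (assumed): \tilde{w}_i = w^o - w_i is the weight-error vector
% at iteration i, u_i is the regressor, and the a-priori and a-posteriori
% estimation errors are
%   e_a(i) = u_i \tilde{w}_{i-1}, \qquad e_p(i) = u_i \tilde{w}_i.
% The energy conservation relation then reads:
\|u_i\|^2 \, \|\tilde{w}_i\|^2 + |e_a(i)|^2
  \;=\;
\|u_i\|^2 \, \|\tilde{w}_{i-1}\|^2 + |e_p(i)|^2
```

Notably, the relation holds as an exact algebraic identity, without statistical assumptions on the data, which is what allows it to be applied uniformly across different algorithm classes.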
In particular, the talk will highlight some recently discovered phenomena pertaining to the learning ability of adaptive filters. It will be seen that adaptive filters generally learn at a rate faster than that predicted by least-mean-squares theory; that is, they are "smarter" than originally thought! It will also be seen that adaptive filters actually have two distinct rates of convergence: they learn at a slower rate initially and at a faster rate later, perhaps in a manner that mimics the human learning process.