Adaptive filters are inherently nonlinear, stochastic, and time-variant devices that adjust themselves to an ever-changing environment; the structure of an adaptive system changes in such a way that its performance improves through a continuing interaction with its surroundings.
The learning curve of an adaptive filter provides a reasonable measure of how fast and how well it reacts to its environment. This learning process has been extensively studied in the literature for slowly adapting systems, that is, for systems that employ infinitesimally small step-sizes.
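As a minimal illustrative sketch (not taken from the talk), a learning curve can be estimated by averaging the squared a priori error over an ensemble of independent runs of an LMS filter; the filter length, step-size, noise level, and white-input model below are all assumptions chosen for illustration.

import numpy as np

# Hypothetical setup: LMS identification of an unknown FIR system,
# with the learning curve E|e(i)|^2 approximated by ensemble averaging.
rng = np.random.default_rng(0)

M = 16          # filter length (assumed)
mu = 0.002      # small step-size: a "slowly adapting" system
N = 2000        # iterations per run
runs = 200      # ensemble size used to approximate E|e(i)|^2
w_o = rng.standard_normal(M) / np.sqrt(M)    # unknown system to identify

mse = np.zeros(N)
for _ in range(runs):
    w = np.zeros(M)                          # filter starts from rest
    u = rng.standard_normal(N + M)           # white Gaussian input
    for i in range(N):
        x = u[i:i + M][::-1]                 # regressor at time i
        d = w_o @ x + 0.01 * rng.standard_normal()  # noisy desired signal
        e = d - w @ x                        # a priori estimation error
        w += mu * e * x                      # LMS weight update
        mse[i] += e ** 2
mse /= runs                                  # learning-curve estimate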
In this talk, we shall discuss several interesting phenomena that characterize the learning capabilities of adaptive filters when larger step-sizes are used. These phenomena actually occur even for slowly adapting systems but are less pronounced, which explains why they may have gone unnoticed.
The phenomena, however, become more pronounced for larger step-sizes and lead to several interesting observations. In particular, we shall show that adaptive filters generally learn at a rate better than that predicted by least-mean-squares theory; that is, they seem to be “smarter” than we think! We shall also show that adaptive filters actually exhibit two distinct rates of convergence: they learn at a slower rate initially and at a faster rate later (perhaps in a manner that mimics the human learning process). We shall further argue that special care is needed in interpreting learning curves. A useful conclusion will be that mean-square theory alone may not be enough to capture the full potential that adaptive filters have to offer. Several examples will be provided.
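One hedged way to look for such step-size-dependent behavior, again under the illustrative system-identification setup assumed above rather than any setup from the talk itself, is to compare ensemble-averaged learning curves for a small and a larger step-size on a logarithmic scale, where a change of slope during the transient would be visible.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

def learning_curve(mu, M=16, N=3000, runs=300, noise=0.01):
    """Ensemble-averaged squared a priori error of LMS versus iteration.

    All parameters are illustrative assumptions, not values from the talk.
    """
    w_o = rng.standard_normal(M) / np.sqrt(M)  # unknown system
    mse = np.zeros(N)
    for _ in range(runs):
        w = np.zeros(M)
        u = rng.standard_normal(N + M)
        for i in range(N):
            x = u[i:i + M][::-1]
            d = w_o @ x + noise * rng.standard_normal()
            e = d - w @ x
            w += mu * e * x                    # LMS update
            mse[i] += e ** 2
    return mse / runs

# Small versus larger step-size; a log scale makes rate changes visible.
for mu in (0.002, 0.02):
    plt.semilogy(learning_curve(mu), label=f"mu = {mu}")
plt.xlabel("iteration i")
plt.ylabel("ensemble-averaged squared error")
plt.legend()
plt.show()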