
Abstract of PhD Dissertation
Electrical Engineering Department, UCLA, March 1999
Advisor: Prof. Ali H. Sayed

Stability and Performance of Adaptive Filters without Slow Adaptation Approximations

Vitor H. Nascimento, UCLA


The performance of an adaptive filter depends crucially on its rate of convergence, steady-state mean-square error, and stability properties, especially in finite-precision implementations. Exact performance analyses exist only for infinitesimally small step-sizes or under certain so-called independence assumptions. The literature contains practically no counterparts of these analyses for larger step-sizes. Such results are desirable because they would guide the design, and improve the understanding, of adaptive filters with faster convergence speeds. Progress in this direction is often hindered by the complexity of the (possibly time-variant and nonlinear) update relations that arise when slow adaptation approximations are not employed.
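For concreteness, the update relations in question are of the kind implemented in the minimal sketch below: the standard LMS recursion with step-size mu in a system-identification setting. The signal model, filter length, and parameter values here are illustrative assumptions and are not taken from the dissertation.

    import numpy as np

    rng = np.random.default_rng(0)

    M = 10                           # filter length (illustrative)
    mu = 0.01                        # step-size; how large mu may be chosen is the central question
    w_true = rng.standard_normal(M)  # unknown system to be identified (assumed model)
    w = np.zeros(M)                  # adaptive weight estimate

    for i in range(5000):
        u = rng.standard_normal(M)                     # regressor (input) vector
        d = u @ w_true + 0.01 * rng.standard_normal()  # noisy desired signal
        e = d - u @ w                                  # a priori estimation error
        w = w + mu * u * e                             # LMS update; the weight-error recursion
                                                       # involves the random matrix (I - mu*u*u^T)

Larger values of mu speed up convergence but take the recursion outside the regime covered by small-step-size analyses, which is precisely the regime addressed in this dissertation.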

This dissertation develops techniques for the stability and performance analyses of adaptive filters without resorting to slow adaptation approximations. The work extends the four main methods of analysis used so far in the literature, namely mean-square stability analysis, almost-sure stability analysis, Lyapunov stability analysis, and analysis by simulation or experimentation.

Among the original contributions of this work are the first computable lower bound on the largest step-size that guarantees mean-square stability in the absence of the independence assumptions (Ch. 4); a detailed study of the behavior and properties of ensemble-average learning curves, showing that special care is needed when using them to predict or evaluate the performance of an adaptive filter (Ch. 5); a proof that an adaptive filter can actually have two rates of convergence: one rate for the initial phase of operation and a faster rate at later time instants (Ch. 5); a new leakage-based algorithm that avoids both the drift and bias problems of existing adaptive methods (Ch. 6); and a Lyapunov stability analysis for floating-point implementations in worst-case scenarios (Ch. 6). Further contributions to the independence theory itself are provided in Chs. 2 and 3, especially an analysis of the normalized LMS algorithm. The introduction and the concluding remarks of each chapter indicate the specific contributions of that chapter and how they relate to available results in the literature.
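As an illustration of the quantity studied in Ch. 5, the sketch below estimates an ensemble-average learning curve by averaging the squared a priori error over many independent LMS runs. The signal model and parameter values are again illustrative assumptions rather than choices made in the dissertation.

    import numpy as np

    rng = np.random.default_rng(1)

    M, mu = 10, 0.02                 # filter length and step-size (illustrative)
    runs, N = 200, 2000              # number of independent runs and iterations per run
    w_true = rng.standard_normal(M)  # unknown system, held fixed across runs
    learning_curve = np.zeros(N)     # estimate of E|e(i)|^2 at each iteration

    for r in range(runs):            # the "ensemble": independent realizations of the data
        w = np.zeros(M)
        for i in range(N):
            u = rng.standard_normal(M)                     # regressor
            d = u @ w_true + 0.01 * rng.standard_normal()  # noisy desired signal
            e = d - u @ w                                  # a priori error
            learning_curve[i] += e ** 2 / runs             # accumulate the ensemble average
            w = w + mu * u * e                             # standard LMS update

Because each point of such a curve is an average over finitely many runs, its fluctuations can be misleading, which is why the use of ensemble-average learning curves receives the careful treatment given in Ch. 5.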

Acknowledgment. This work was supported in part by the National Science Foundation under grants MIP-9796147 and CCR-9732376. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.