A Leakage Quasi-Newton Adaptation Algorithm
Marcello Campos, Fabiano T. Castoldi

DOI: 10.14209/sbrt.2007.31326
Event: XXV Simpósio Brasileiro de Telecomunicações (SBrT 2007)
Keywords: Convex Optimization; Adaptation Algorithms; Adaptive Signal Processing
Abstract
This paper proposes an alternative view of adaptive-filtering algorithm development and analysis. More than merely rewriting the objective functions minimized by the algorithms, the alternative approach explored in this article provides extra tools for optimizing with respect to other parameters, for example the convergence factor µ. Adaptation algorithms are usually derived either from a stochastic approximation of the gradient vector and Hessian matrix, or from a deterministic minimization of quadratic a posteriori output errors. Gradient-descent algorithms, such as the LMS (Least Mean Squares) algorithm and the QN (Quasi-Newton) algorithm, are usually placed in the first group, whereas the RLS (Recursive Least Squares) algorithm is placed in the second. Of course, this is only how the algorithms are usually presented and analyzed: the RLS algorithm can also be seen as a stochastic approximation algorithm, and the LMS algorithm also minimizes a deterministic objective function. However, some descriptions of the deterministic functions minimized by certain algorithms, such as the LMS algorithm, offer very limited insight into their behavior. In this work we propose to shed new light on known adaptation algorithms by describing their deterministic objective function as a quadratic norm of the coefficients, optionally subject to equality constraints that are functions of the output error. We show how this approach can be used to derive some LS-based and QN-based adaptation algorithms, such as the Leakage QN Algorithm mentioned in the title.
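The constrained-minimization viewpoint described in the abstract can be illustrated with a standard textbook case that is not specific to this paper: the minimum-disturbance derivation of the normalized LMS update, in which the new coefficient vector minimizes a quadratic norm of the coefficient change subject to an equality constraint that zeroes the a posteriori output error. The sketch below (variable names and the unit step size are assumptions for illustration) checks the closed-form Lagrangian solution against a direct solve of the KKT system.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
w = rng.standard_normal(n)   # current filter coefficients
x = rng.standard_normal(n)   # input regressor vector
d = rng.standard_normal()    # desired output sample

# Minimum-disturbance problem:  argmin_v ||v - w||^2  s.t.  x^T v = d.
# Stationarity of the Lagrangian gives v = w + (lam/2) x; enforcing the
# constraint yields the NLMS-type update (unit convergence factor):
e = d - x @ w                      # a priori output error
w_new = w + x * e / (x @ x)        # closed-form constrained minimizer

# Cross-check by solving the KKT linear system directly:
# [ 2I   -x ] [ v   ]   [ 2w ]
# [ x^T   0 ] [ lam ] = [ d  ]
kkt = np.block([[2 * np.eye(n), -x[:, None]],
                [x[None, :],    np.zeros((1, 1))]])
rhs = np.concatenate([2 * w, [d]])
v = np.linalg.solve(kkt, rhs)[:n]

assert np.allclose(v, w_new)        # both routes give the same update
assert np.isclose(x @ w_new, d)     # a posteriori error is exactly zero
```

The same template, with a leakage-type quadratic norm and different constraints on the output error, is the kind of formulation the paper uses to obtain its LS- and QN-based algorithms.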
