KEYWORDS: Condition numbers, Matrices, Algorithm development, Mathematics, Algorithms, Wireless communications, Signal processing, Data processing, Current controlled current source, Ruthenium
Although the LLL algorithm1 was originally developed for lattice basis reduction, the method can also be
used2 to reduce the condition number of a matrix. In this paper, we propose a pivoted LLL algorithm that
further improves the conditioning. Our experimental results demonstrate that this pivoting scheme works well
in practice.
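To illustrate the connection between lattice basis reduction and conditioning, the following sketch applies a plain (unpivoted) textbook LLL reduction to the rows of a matrix and compares condition numbers before and after. The function lll_reduce and the test construction below are illustrative assumptions, not the pivoted algorithm proposed in the paper.

```python
import numpy as np

def lll_reduce(B, delta=0.75):
    """Textbook LLL reduction of the rows of B (any real basis, Lovasz parameter delta).
    The Gram-Schmidt data is recomputed from scratch at every step, which keeps the
    sketch short and obviously correct, but is far from efficient."""
    B = np.array(B, dtype=float)
    n = B.shape[0]

    def gso(B):
        # classical Gram-Schmidt: B[i] = Bstar[i] + sum_{j<i} mu[i, j] * Bstar[j]
        Bstar = np.zeros_like(B)
        mu = np.eye(n)
        for i in range(n):
            Bstar[i] = B[i]
            for j in range(i):
                mu[i, j] = B[i] @ Bstar[j] / (Bstar[j] @ Bstar[j])
                Bstar[i] -= mu[i, j] * Bstar[j]
        return Bstar, mu

    k = 1
    while k < n:
        # size reduction: make |mu[k, j]| <= 1/2 using integer row operations
        for j in range(k - 1, -1, -1):
            _, mu = gso(B)
            B[k] -= np.rint(mu[k, j]) * B[j]
        Bstar, mu = gso(B)
        # Lovasz condition: advance if it holds, otherwise swap rows k-1 and k
        if Bstar[k] @ Bstar[k] >= (delta - mu[k, k - 1] ** 2) * (Bstar[k - 1] @ Bstar[k - 1]):
            k += 1
        else:
            B[[k - 1, k]] = B[[k, k - 1]]
            k = max(k - 1, 1)
    return B

# hide a well-conditioned basis G behind random unimodular row operations,
# then check how much plain LLL reduction improves the conditioning
rng = np.random.default_rng(0)
n = 6
G = np.eye(n) + 0.1 * rng.standard_normal((n, n))
T = np.eye(n)
for _ in range(60):
    i, j = rng.choice(n, size=2, replace=False)
    T[i] += rng.integers(-3, 4) * T[j]
A = T @ G                                   # same lattice as G, badly conditioned basis
print("cond before LLL: %.3e" % np.linalg.cond(A))
print("cond after  LLL: %.3e" % np.linalg.cond(lll_reduce(A)))
```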
KEYWORDS: Matrices, Condition numbers, Algorithms, Control systems, Mathematics, Wireless communications, Cryptography, Global Positioning System, Signal processing, Standards development
The LLL algorithm is widely used to solve the integer least squares problems that arise in many engineering
applications. Because most practitioners did not understand how the LLL algorithm works, they avoided the issue
by referring to the method as an integer Gram-Schmidt approach (without explaining what they mean by this
term). Luk and Tracy1 were the first to describe the behavior of the LLL algorithm, and they presented a new
numerical implementation that should be more robust than the original LLL scheme. In this paper, we compare
the numerical properties of the two LLL implementations.
The classic Lanczos method is an effective method for tridiagonalizing real symmetric matrices. Its block version can significantly improve performance by exploiting memory hierarchies. In this paper, we present a block Lanczos method for tridiagonalizing complex symmetric matrices. We also propose a novel componentwise technique for detecting the loss of orthogonality, which stabilizes the block Lanczos algorithm. Our experiments show that the componentwise technique reduces the number of orthogonalizations.
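For readers unfamiliar with the complex symmetric case, the sketch below runs an unblocked Lanczos recurrence built on the unconjugated bilinear form x.T @ y, with full reorthogonalization used in place of the paper's componentwise test. The function name and the safeguard are illustrative assumptions, not the block algorithm itself.

```python
import numpy as np

def complex_symmetric_lanczos(A, m=None):
    """Unblocked Lanczos tridiagonalization of a complex *symmetric* (A == A.T,
    not Hermitian) matrix, using the unconjugated bilinear form x.T @ y.
    Returns Q with complex-orthogonal columns (Q.T @ Q ~ I) and tridiagonal T
    such that A @ Q ~ Q @ T when m = n.  Full reorthogonalization is used here
    for simplicity; the paper's componentwise test decides *when* to reorthogonalize."""
    n = A.shape[0]
    m = n if m is None else m
    Q = np.zeros((n, m), dtype=complex)
    alpha = np.zeros(m, dtype=complex)
    beta = np.zeros(m - 1, dtype=complex)
    rng = np.random.default_rng(0)
    q = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    q = q / np.sqrt(q @ q)                     # bilinear "norm" (may be complex)
    Q[:, 0] = q
    for k in range(m):
        w = A @ Q[:, k]
        alpha[k] = Q[:, k] @ w                 # unconjugated inner product
        w -= alpha[k] * Q[:, k]
        if k > 0:
            w -= beta[k - 1] * Q[:, k - 1]
        # full reorthogonalization against all previous Lanczos vectors
        w -= Q[:, :k + 1] @ (Q[:, :k + 1].T @ w)
        if k == m - 1:
            break
        beta[k] = np.sqrt(w @ w)               # breakdown if this is ~ 0
        Q[:, k + 1] = w / beta[k]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return Q, T

# quick check on a random complex symmetric matrix
rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
A = M + M.T                                    # complex symmetric, not Hermitian
Q, T = complex_symmetric_lanczos(A)
print(np.linalg.norm(A @ Q - Q @ T))           # should be small, barring breakdown
```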
In this paper, we present a novel approach to the problem of exponential decomposition. The method computes the knots in O(n²) floating-point operations and O(n) storage, where n is the length of the signal sequence.
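To fix the terminology, the following sketch recovers the knots of a noiseless exponential signal by classical Prony-style linear prediction (a dense least squares solve followed by polynomial rooting). It is not the O(n²)-operation, O(n)-storage method of the paper; all names and the test signal are illustrative.

```python
import numpy as np

# Classical Prony-type recovery of the knots z_k in s_j = sum_k c_k * z_k**j.
rng = np.random.default_rng(0)
knots_true = np.array([0.9 * np.exp(0.3j), 0.7 * np.exp(-1.1j), 0.95 * np.exp(2.0j)])
coeffs = np.array([1.0, 2.0, 0.5])
n, p = 64, len(knots_true)
s = np.array([coeffs @ knots_true**j for j in range(n)])      # signal samples

# Linear prediction: s_{j+p} + a_1 s_{j+p-1} + ... + a_p s_j = 0 for all j,
# so the knots are the roots of z^p + a_1 z^{p-1} + ... + a_p.
H = np.array([[s[j + p - 1 - i] for i in range(p)] for j in range(n - p)])
rhs = -s[p:]
a = np.linalg.lstsq(H, rhs, rcond=None)[0]
knots_est = np.roots(np.concatenate(([1.0], a)))
print(np.sort_complex(knots_est))      # should closely match the true knots below
print(np.sort_complex(knots_true))
```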
This paper analyzes the important steps of an O(n² log n) algorithm for finding the eigenvalues of a complex Hankel matrix. The three key steps are a Lanczos-type tridiagonalization algorithm, a fast FFT-based Hankel matrix-vector product procedure, and a QR eigenvalue method based on complex-orthogonal transformations. We present an error analysis of the three steps, as well as results from numerical experiments.
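The second step can be illustrated with the standard circulant-embedding trick: a Hankel matrix-vector product becomes a single FFT-based convolution. The helper below is a generic sketch of that trick, not the authors' routine.

```python
import numpy as np

def hankel_matvec_fft(h, x):
    """y = H @ x for the n-by-n Hankel matrix H[i, j] = h[i + j], computed with
    one FFT-based convolution in O(n log n) instead of the O(n^2) dense product."""
    n = len(x)
    assert len(h) == 2 * n - 1
    L = 1 << (3 * n - 3).bit_length()              # FFT length >= 3n - 2 (no wraparound)
    conv = np.fft.ifft(np.fft.fft(h, L) * np.fft.fft(x[::-1], L))
    return conv[n - 1:2 * n - 1]

# compare against the dense Hankel product on a random complex example
rng = np.random.default_rng(0)
n = 200
h = rng.standard_normal(2 * n - 1) + 1j * rng.standard_normal(2 * n - 1)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
H = np.array([[h[i + j] for j in range(n)] for i in range(n)])
print(np.linalg.norm(H @ x - hankel_matvec_fft(h, x)))   # should be tiny
```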
In this paper, we propose the use of complex-orthogonal transformations for finding the eigenvalues of a complex symmetric matrix. These special transformations can significantly reduce computational costs because they preserve the tridiagonal structure of a complex symmetric matrix.
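A minimal sketch of such a transformation, assuming the usual 2-by-2 Givens-like construction with c² + s² = 1 (unconjugated transpose), is given below; it also checks that a complex-orthogonal similarity preserves complex symmetry.

```python
import numpy as np

def complex_orthogonal_rotation(a, b):
    """2x2 complex-orthogonal 'Givens' rotation G with G.T @ G = I (unconjugated
    transpose) such that G @ [a, b] = [r, 0].  Breaks down when a**2 + b**2 ~ 0."""
    r = np.sqrt(a**2 + b**2)          # complex square root
    c, s = a / r, b / r               # c**2 + s**2 = 1, but |c|**2 + |s|**2 != 1 in general
    return np.array([[c, s], [-s, c]])

a, b = 1.0 + 2.0j, 3.0 - 1.0j
G = complex_orthogonal_rotation(a, b)
print(G @ np.array([a, b]))                      # second entry ~ 0
print(np.linalg.norm(G.T @ G - np.eye(2)))       # complex-orthogonal: ~ 0

# a complex-orthogonal similarity preserves complex symmetry (A == A.T)
rng = np.random.default_rng(0)
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
A = M + M.T
B = G.T @ A @ G
print(np.linalg.norm(B - B.T))                   # still symmetric, same eigenvalues
```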
In array processing, one technique for cancelling interference in the presence of colored noise is the ULLV decomposition of a pair of matrices. The factorization is stable and accurate, and it is easy to update when a row is added to either of the two matrices. In earlier work, we assumed that both matrices have at least as many rows as columns and that the matrix representing the interference has full column rank. In this paper, we relax the latter restriction and allow the interference matrix to be rank deficient. We present an algorithm for computing the decomposition and two techniques for updating it. The details of the updating are quite different from those in the full-rank case.
In signal and image processing, regularization often requires a rank-revealing decomposition of a symmetric Toeplitz matrix with a small rank deficiency. In this paper, we present an efficient factorization method that exploits symmetry as well as the rank and Toeplitz properties of the given matrix.
The ULLVD is an approximation of the generalized singular value decomposition; it is also an extension of the ULV decomposition. In this paper, we describe an implementation of a ULLVD algorithm and show how to compute the ULLVD accurately, stably, and efficiently. A MATLAB program is included.
Despite its important signal processing applications, the generalized singular value decomposition (GSVD) is under-utilized due to the high updating cost. In this paper, we consider the noise subspace problem and introduce a new approximate GSVD that is easily amenable to updating.
Fast recursive least squares (RLS) algorithms have wide applications in signal processing and control. They are computationally efficient; thus their numerical stability is of major concern. In this paper, we investigate the error propagation and stability of some typical fast RLS algorithms. Through a random example, we show that a typical conventional fast RLS algorithm is weakly unstable in computing both the residuals and the gain vectors, whereas a QR-based algorithm is expected to be weakly stable in computing the residuals but weakly unstable in computing the gain vectors. We propose an error correction scheme for computing the gain vectors.
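To fix notation for the quantities involved, the sketch below runs the standard O(p²)-per-step RLS recursion and marks the gain vector and the a priori residual. It is not one of the fast O(p) algorithms analyzed in the paper, nor the proposed error correction scheme; the system-identification setup is an illustrative assumption.

```python
import numpy as np

# Standard O(p^2) recursive least squares, shown only to identify the gain vector
# k_n and the a priori residual e_n whose numerical behavior is under discussion.
rng = np.random.default_rng(0)
p, N, lam = 4, 500, 0.99                     # filter order, samples, forgetting factor
w_true = rng.standard_normal(p)
x = rng.standard_normal(N)

w = np.zeros(p)                              # filter estimate
P = 1e3 * np.eye(p)                          # inverse correlation matrix estimate
for n in range(p - 1, N):
    u = x[n - p + 1:n + 1][::-1]             # regressor [x_n, ..., x_{n-p+1}]
    d = w_true @ u + 1e-3 * rng.standard_normal()
    e = d - w @ u                            # a priori residual
    k = P @ u / (lam + u @ P @ u)            # gain vector
    w = w + k * e                            # filter update
    P = (P - np.outer(k, u @ P)) / lam       # Riccati update of P
print(np.linalg.norm(w - w_true))            # small (on the order of the noise level)
```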
The problem of linearly constrained least squares has many applications in signal processing. In this paper, we present a perturbation analysis of a novel, linearly constrained least squares algorithm for adaptive beamforming. We derive perturbation bounds for the solution as well as for the latest residual element, and we propose an error estimation scheme for the residual element. This error estimation scheme can be incorporated into a systolic array implementation of the algorithm. The additional time required for error estimation is small; we show how processor idle time can be used to carry out almost all of the calculations.
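As background, the following sketch solves a linearly constrained least squares problem by the generic null-space method on a real-valued toy beamforming example. It is not the paper's algorithm, its perturbation bounds, or the systolic implementation; constrained_lsq and the toy data are assumptions for illustration.

```python
import numpy as np

def constrained_lsq(A, b, C, f):
    """Solve min_w ||A w - b||_2 subject to C.T @ w = f by the null-space method:
    a particular solution of the constraints plus a least squares correction taken
    from the null space of C.T (C is assumed to have full column rank)."""
    w0 = np.linalg.lstsq(C.T, f, rcond=None)[0]       # particular solution of C.T w = f
    Q, _ = np.linalg.qr(C, mode='complete')
    Z = Q[:, C.shape[1]:]                             # orthonormal basis of null(C.T)
    y = np.linalg.lstsq(A @ Z, b - A @ w0, rcond=None)[0]
    return w0 + Z @ y

# toy beamforming-style example: minimize the output power ||A w|| subject to a
# unit response in the look direction (one constraint; real data for simplicity)
rng = np.random.default_rng(0)
m, n = 200, 6
A = rng.standard_normal((m, n))                       # snapshot (data) matrix
c = np.ones(n)                                        # look-direction steering vector
w = constrained_lsq(A, np.zeros(m), c[:, None], np.array([1.0]))
print(c @ w)                                          # constraint satisfied: ~ 1
print(np.linalg.norm(A @ w))                          # minimized array output
```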
A new derivation of the QR-based fast recursive least squares algorithms for Toeplitz matrices is presented. Algorithms for computing Q and R in the QR decomposition of the data matrix are proposed. These algorithms can be efficiently incorporated into the fast recursive least squares algorithm and need to be executed only when Q or R is needed.
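For context, the toy sketch below forms a Toeplitz data matrix, solves the least squares problem from the triangular factor R alone via the semi-normal equations, and recovers Q only when it is explicitly needed. This is ordinary dense linear algebra, not the fast recursive algorithms derived in the paper.

```python
import numpy as np
from scipy.linalg import toeplitz, solve_triangular

# For X = Q R, the least squares solution needs only R (semi-normal equations
# R.T R w = X.T b); Q itself is formed only if it is explicitly required.
rng = np.random.default_rng(0)
m, n = 100, 5
c = rng.standard_normal(m)                     # first column of the Toeplitz data matrix
r = np.concatenate(([c[0]], rng.standard_normal(n - 1)))
X = toeplitz(c, r)
b = rng.standard_normal(m)

R = np.linalg.qr(X, mode='r')                  # triangular factor only
w = solve_triangular(R, solve_triangular(R.T, X.T @ b, lower=True))
print(np.linalg.norm(w - np.linalg.lstsq(X, b, rcond=None)[0]))   # agrees with lstsq

Q = X @ np.linalg.inv(R)                       # recover Q only when it is needed
print(np.linalg.norm(Q.T @ Q - np.eye(n)))     # ~ 0
```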