In this dissertation, we are concerned with both improving the convergence performance and achieving computational efficiency in block adaptive filtering algorithms for finite impulse response (FIR) adaptive digital filters (ADFs). Rapid convergence is achieved by employing the nested iteration technique and the preconditioning technique. Computational efficiency is achieved by using the fast convolution technique and a fast deconvolution technique based on an approximation of the autocorrelation matrix. Using these techniques, several block adaptive filtering algorithms are proposed.
First, to introduce a new updating procedure, called the nested iteration technique, that updates the filter tap weights several times for each data block, we define an estimate of the block mean-square error (BMSE) as an objective function. Based on the BMSE estimate, the block least mean-square (BLMS) and optimum block adaptive (OBA) algorithms are reformulated, and the frequency-domain BLMS (FBLMS) and frequency-domain OBA (FOBA) algorithms are reviewed as frequency-domain implementations of the BLMS and OBA algorithms, respectively. In deriving these algorithms, we assume that the direction vector is based on the steepest descent method and that the descent process is terminated after only one iteration for each block.
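The single-iteration BLMS update described here can be sketched as follows. This is a minimal pure-Python illustration, not the thesis's formulation: the function names, the fixed step size `mu`, and the zero-padding at the block boundary are assumptions made only for this sketch.

```python
def filter_output(w, x_hist):
    # FIR output y[n] = sum_k w[k] * x[n-k]
    return sum(wk * xk for wk, xk in zip(w, x_hist))

def blms_block_update(w, x, d, mu):
    """One steepest-descent step on the BMSE estimate
    (1/L) * sum_n e[n]^2 over the current block of L samples
    (hypothetical sketch; one weight update per block, as in BLMS)."""
    N, L = len(w), len(d)
    grad = [0.0] * N
    for n in range(L):
        # tap-input vector, zero-padded before the start of the block
        x_hist = [x[n - k] if n - k >= 0 else 0.0 for k in range(N)]
        e = d[n] - filter_output(w, x_hist)
        for k in range(N):
            grad[k] += -2.0 * e * x_hist[k] / L
    # move against the gradient of the BMSE estimate
    return [wk - mu * gk for wk, gk in zip(w, grad)]
```

A frequency-domain implementation (FBLMS) would evaluate the block convolution and gradient correlation with FFTs instead of the explicit loops above.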
Second, we propose the nested OBA (NOBA) algorithm as a fast version of the OBA algorithm by employing the nested iteration technique, with the steepest descent method as the descent method. In formulating the algorithm, we assume that the BMSE estimate is time-invariant since the processed data blocks are disjoint rather than overlapping, in contrast to the optimum block adaptive shifting (OBAS) algorithm, where the data block can be shifted by some samples. Thus, for each iteration, a descent direction is given by the negative gradient of the BMSE estimate, and a time-varying step size is determined to be optimal by minimizing the BMSE estimate along the descent direction.
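The nested iterations with an optimal time-varying step size can be sketched as follows. This is a hypothetical pure-Python illustration under assumed notation: `X` is taken as the L-by-N block data matrix, and all names are sketch-only assumptions.

```python
def noba_block(w, X, d, iters):
    """Several optimal-step steepest-descent updates on one disjoint
    data block (hedged sketch of the nested iteration technique)."""
    N, L = len(w), len(d)
    for _ in range(iters):
        # block error on the (time-invariant) BMSE estimate
        e = [d[n] - sum(X[n][k] * w[k] for k in range(N)) for n in range(L)]
        # negative gradient direction, up to a constant factor: g = X^T e
        g = [sum(X[n][k] * e[n] for n in range(L)) for k in range(N)]
        Xg = [sum(X[n][k] * g[k] for k in range(N)) for n in range(L)]
        denom = sum(v * v for v in Xg)
        if denom == 0.0:
            break  # gradient vanished: minimum of the BMSE estimate reached
        # exact line search along g: minimize ||e - mu * X g||^2
        mu = sum(gk * gk for gk in g) / denom
        w = [wk + mu * gk for wk, gk in zip(w, g)]
    return w
```

Because the same block is reused across the inner iterations, the step size can be chosen by an exact line search, unlike the single fixed-step update of BLMS.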
Third, in order to improve the convergence performance of the NOBA algorithm, we propose the block conjugate gradient (BCG) algorithm. In the NOBA algorithm, the BMSE estimate is iteratively minimized along the descent directions. But the descent process may be unable to explore the whole N-dimensional space in its search for the minimum of the BMSE estimate, since in general the descent directions generated by the steepest descent method are not linearly independent. This indicates that the NOBA algorithm may not converge to the minimum even after N iterations. This problem can be overcome by adopting the conjugate gradient method as the descent method, since that method is quadratically convergent. Thus, using the conjugate gradient method instead of the steepest descent method, we formulate the BCG algorithm as another fast version of the OBA algorithm.
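Under the same assumed notation as before (all names hypothetical), conjugate-gradient minimization of the block least-squares cost can be sketched as:

```python
def bcg_block(w, X, d, iters):
    """Conjugate-gradient minimization of the block cost ||d - X w||^2
    on one block (hedged sketch). In exact arithmetic it reaches the
    minimum in at most N steps for an N-tap filter, since successive
    directions are conjugate (R-orthogonal) rather than merely descent."""
    N, L = len(w), len(d)

    def matvec_r(v):
        # R v with R = X^T X (the block autocorrelation estimate)
        Xv = [sum(X[n][k] * v[k] for k in range(N)) for n in range(L)]
        return [sum(X[n][k] * Xv[n] for n in range(L)) for k in range(N)]

    e = [d[n] - sum(X[n][k] * w[k] for k in range(N)) for n in range(L)]
    r = [sum(X[n][k] * e[n] for n in range(L)) for k in range(N)]  # r = X^T e
    p = r[:]
    rr = sum(v * v for v in r)
    for _ in range(iters):
        if rr == 0.0:
            break
        rp = matvec_r(p)
        alpha = rr / sum(pk * q for pk, q in zip(p, rp))
        w = [wk + alpha * pk for wk, pk in zip(w, p)]
        r = [rk - alpha * q for rk, q in zip(r, rp)]
        rr_new = sum(v * v for v in r)
        # Fletcher-Reeves update makes the new direction conjugate to p
        p = [rk + (rr_new / rr) * pk for rk, pk in zip(r, p)]
        rr = rr_new
    return w
```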
Fourth, the preconditioning technique is applied to the OBA algorithm. In general, the descent direction of the OBA algorithm does not point toward the minimum of the BMSE estimate at every iteration, especially when the contours of the BMSE estimate are eccentric. As a result, the convergence rate of the OBA algorithm strongly depends on the eigenvalue spread of the input autocorrelation matrix. The gradient vector is therefore transformed by a preconditioner so that the resulting direction vector points toward the minimum as accurately as possible. Three preconditioners are used as estimates of the autocorrelation matrix: the first is the Toeplitz preconditioner, which is assumed to be a Toeplitz matrix; the second is the SSOR preconditioner, composed of triangular matrices; and the third is a circulant matrix called the circulant preconditioner. We propose three preconditioned OBA algorithms by employing these preconditioners.
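As one illustration of why a preconditioner built from triangular factors is cheap to apply: for an SSOR-type preconditioner M = (D + Lo) D^{-1} (D + Lo)^T, where D is the diagonal and Lo the strict lower part of the autocorrelation estimate R, the transformed direction M^{-1} g reduces to two triangular solves and a diagonal scaling. The following is a hedged pure-Python sketch with assumed names, not the thesis's implementation:

```python
def ssor_precondition(R, g):
    """Return u = M^{-1} g for the SSOR preconditioner
    M = (D + Lo) D^{-1} (D + Lo)^T built from R (hypothetical sketch)."""
    N = len(R)
    # forward substitution: (D + Lo) y = g
    y = [0.0] * N
    for i in range(N):
        s = g[i] - sum(R[i][j] * y[j] for j in range(i))
        y[i] = s / R[i][i]
    # diagonal scaling: z = D y
    z = [R[i][i] * y[i] for i in range(N)]
    # backward substitution: (D + Lo)^T u = z
    u = [0.0] * N
    for i in reversed(range(N)):
        s = z[i] - sum(R[j][i] * u[j] for j in range(i + 1, N))
        u[i] = s / R[i][i]
    return u
```

The Toeplitz and circulant preconditioners admit similarly cheap solves, via the Levinson recursion and the FFT, respectively.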
Finally, the preconditioning technique is combined with the BCG algorithm to improve its convergence properties. Although the BCG algorithm is less sensitive to changes in the eigenvalue spread of the autocorrelation matrix than the NOBA or OBA algorithm, its convergence rate slows down when the eigenvalue spread becomes large. This drawback results from the fact that the direction vector of the BCG algorithm, as in the NOBA algorithm, does not point toward the minimum of the BMSE estimate. Thus, to reduce the influence of the eigenvalue spread on the convergence performance, we propose three preconditioned BCG algorithms employing the preconditioners utilized in the preconditioned OBA algorithms.
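A preconditioned conjugate-gradient iteration for the block normal equations R w = b can be sketched as follows. For brevity, this hypothetical illustration substitutes a simple diagonal (Jacobi) preconditioner for the Toeplitz, SSOR, or circulant choices used in the thesis; all names are sketch-only assumptions.

```python
def pcg_solve(R, b, iters):
    """Preconditioned CG for R w = b (hedged sketch). M = diag(R)
    stands in for the Toeplitz/SSOR/circulant preconditioners."""
    N = len(b)
    w = [0.0] * N
    r = b[:]
    z = [r[i] / R[i][i] for i in range(N)]  # z = M^{-1} r
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(iters):
        if rz == 0.0:
            break
        rp = [sum(R[i][j] * p[j] for j in range(N)) for i in range(N)]
        alpha = rz / sum(pi * qi for pi, qi in zip(p, rp))
        w = [wi + alpha * pi for wi, pi in zip(w, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, rp)]
        z = [r[i] / R[i][i] for i in range(N)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return w
```

The better the preconditioner approximates R, the smaller the effective eigenvalue spread of M^{-1} R and the fewer iterations the method needs.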
In this dissertation, we study methods for improving the performance and efficiently implementing block adaptive filters, a class of finite impulse response adaptive digital filters in which all input and output signals are processed in block form.
First, compared with sample-based algorithms such as the normalized least mean-square (NLMS) algorithm, the conventional optimum block adaptive (OBA) algorithm has the advantage of efficient implementation, but suffers from degraded convergence and tracking characteristics. To resolve this problem of the OBA algorithm, we propose the nested optimum block adaptive (NOBA) algorithm based on the nested iteration technique.
Second, to further improve the convergence of the NOBA algorithm, we propose the block conjugate gradient (BCG) algorithm. Because the NOBA algorithm uses the steepest descent method, it may fail to reach the minimum point even after performing more iterations than the number of filter taps. We therefore replace the steepest descent method with the conjugate gradient method, which guarantees quadratic convergence, to overcome this limitation of NOBA.
Third, another drawback of the OBA algorithm is that its convergence is strongly affected by changes in the eigenvalue distribution of the input signal; that is, the more colored the input signal, the worse its convergence becomes. To remedy this drawback, we introduce the preconditioning technique. In addition, for efficient implementation of the preconditioning, we present three kinds of preconditioners and propose preconditioned OBA algorithms based on them.
Finally, applying the preconditioning technique to the BCG algorithm, we propose three preconditioned BCG algorithms. Like the OBA algorithm, the BCG algorithm's convergence also varies with the conditions of the input signal; preconditioning therefore yields a substantial improvement in its convergence.