Lattice vector quantization (LVQ) is simple and efficient for uniformly distributed sources at high bit rates. Its performance degrades, however, for non-uniformly distributed sources and at low bit rates. This dissertation presents methods to improve LVQ for non-uniform sources at low and medium bit rates.
First, LVQ is reviewed and its basic performance for non-uniform sources is investigated. The complexity and signal-to-quantization-noise ratio (SQNR) of LVQ are compared with those of the LBG vector quantizer developed by Linde, Buzo, and Gray. For memoryless Gaussian and Laplacian sources, the best lattices for quantization in various dimensions and the relation between the probability density function of the source and the boundary shape of the codebook are examined experimentally.
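As background to the lattice quantizers studied here, nearest-lattice-point search for a root lattice such as $D_n$ (integer vectors with even coordinate sum) can be sketched as follows. This is a minimal Python illustration, not code from the dissertation; it follows the well-known Conway–Sloane decoding rule for $D_n$:

```python
def quantize_Dn(x):
    """Nearest D_n lattice point to a real vector x: round every
    coordinate; if the coordinate sum comes out odd, re-round the
    coordinate with the largest rounding error in the other direction."""
    f = [round(v) for v in x]
    if sum(f) % 2 != 0:
        # index of the coordinate with the largest rounding error
        i = max(range(len(x)), key=lambda k: abs(x[k] - f[k]))
        f[i] += 1 if x[i] >= f[i] else -1
    return f
```

Because decoding reduces to coordinate-wise rounding plus one correction, its cost grows only linearly with dimension, in contrast to the exhaustive codebook search of an LBG quantizer.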
Secondly, quantization effects are examined for overload vectors, i.e., those lying outside the external shell of the chosen lattice codebook. Two methods of using scaling factors to map an overload vector to a lattice vector (codevector) are considered: the first uses the same scaling factor as for granular vectors, while the second uses a separate overload scaling factor for each overload vector. In the first method, the quantization distortion is reduced by choosing the codevector nearest to the scaled overload vector from the neighborhood of its projection onto the external shell of the lattice. In the second, an overload scaling factor for each overload vector is chosen by selecting the shell onto which the vector is projected; the factor is then adjusted, by the orthogonality principle, to minimize the reconstruction distortion. This algorithm is shown to improve the SQNR performance at low bit rates.
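The per-vector overload scaling can be pictured with the following sketch (illustrative Python, not the dissertation's algorithm: `lattice_quantizer` and `shell_radius` are hypothetical stand-ins for the actual lattice decoder and shell selection, and the gain adjustment uses the standard orthogonality-principle result that the distortion-minimizing scalar for a fixed codevector $c$ is $g = \langle x, c\rangle / \|c\|^2$):

```python
import math

def overload_quantize(x, lattice_quantizer, shell_radius):
    """Map an overload vector x (||x|| > shell_radius) to a codevector:
    project x onto the shell, quantize the projection, then pick the
    gain g so that the reconstruction g*c minimizes ||x - g*c||^2."""
    norm_x = math.sqrt(sum(v * v for v in x))
    s = shell_radius / norm_x                    # projection scaling
    c = lattice_quantizer([s * v for v in x])    # codevector near the shell
    # orthogonality principle: optimal gain g = <x, c> / <c, c>
    num = sum(xv * cv for xv, cv in zip(x, c))
    den = sum(cv * cv for cv in c)
    g = num / den if den else 0.0
    return c, g  # transmit the index of c and a quantized g
```

For example, with simple coordinate-wise rounding as the lattice quantizer, the overload vector `[3.0, 4.0]` projects to `[0.6, 0.8]`, quantizes to `[1, 1]`, and receives the gain `3.5`, so the reconstruction `[3.5, 3.5]` has its error orthogonal to the codevector direction.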
Finally, an efficient quantization of the shape vector is proposed for lattice-based gain-shape vector quantization (LGSVQ). It is based on the observation that the reconstruction distortion grows with the angle between the input shape vector and the shape codevector. The algorithm selects, among the candidate vectors in each coset, the shape codevector that minimizes this angle. This algorithm is shown to give better peak SQNR performance than conventional LVQ with the $E_8$ or $\Lambda_{16}$ lattice.
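The angle-minimizing selection can be sketched as follows (a minimal Python illustration under the assumption that the candidate codevectors are supplied per coset; minimizing the angle is equivalent to maximizing the normalized inner product, i.e., the cosine):

```python
import math

def select_shape_codevector(x, cosets):
    """Pick, over all candidate codevectors in all cosets, the one with
    the smallest angle to the input shape vector x, i.e., the largest
    cosine <x, c> / (||x|| ||c||)."""
    norm_x = math.sqrt(sum(v * v for v in x))

    def cosine(c):
        num = sum(xv * cv for xv, cv in zip(x, c))
        return num / (norm_x * math.sqrt(sum(cv * cv for cv in c)))

    best, best_cos = None, -2.0
    for coset in cosets:
        for c in coset:
            cc = cosine(c)
            if cc > best_cos:
                best_cos, best = cc, c
    return best
```

Because the gain is coded separately in gain-shape VQ, comparing cosines rather than Euclidean distances makes the shape search insensitive to the codevector norms, which differ from coset to coset.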