As the usage of mobile devices increases, the need for more convenient interfaces also grows. Many man-machine interfaces have been developed and used. Among them, the speech interface is preferable because it is an intuitive and familiar communication method. There has been much research on speech recognition, and among the various speech recognition models, the HMM gives the highest performance. However, when HMM-based speech recognition is applied to mobile devices, there are several limitations, such as limited memory capacity and processing power.
For memory reduction, a shared-codebook approach that ties HMM feature parameters and the subspace distribution clustering HMM (SDCHMM) have been developed. In particular, SDCHMM ties continuous-density HMMs (CHMMs) at the finer level of Gaussian distributions and is efficient in terms of both memory usage and accuracy. However, because SDCHMM ties each subspace Gaussian distribution as a whole, the quantization error increases and recognition accuracy degrades. If the mean and variance vectors of the Gaussian distributions are quantized separately, the quantization error can be reduced. We therefore built separate codebooks for the mean vectors and the variance vectors.
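To make the idea concrete, the following is a minimal sketch, not the authors' implementation, of quantizing mean and variance vectors with separate codebooks; the codebook sizes, the toy data, and the use of k-means clustering are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(vectors, codebook_size):
    """Cluster vectors with k-means; return (codebook, index per vector)."""
    km = KMeans(n_clusters=codebook_size, n_init=10, random_state=0).fit(vectors)
    return km.cluster_centers_, km.labels_

# Toy model parameters: 1000 Gaussians with 39-dimensional mean/variance vectors.
rng = np.random.default_rng(0)
means = rng.normal(size=(1000, 39))
variances = rng.uniform(0.5, 2.0, size=(1000, 39))

# Separate codebooks: quantization error is controlled independently for
# means and variances instead of tying each Gaussian as a whole.
mean_cb, mean_idx = build_codebook(means, codebook_size=64)
var_cb, var_idx = build_codebook(variances, codebook_size=64)

# Each Gaussian is now stored as two small indices instead of two full vectors.
print(mean_cb.shape, var_cb.shape)  # (64, 39) (64, 39)
```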
In general, Gaussian mixtures are very sensitive to errors in their means but much less so to errors in their variances, so the mean vectors should be quantized more accurately than the variance vectors. We propose a stream definition for the mean vectors and apply the same definition to the variance vectors. The proposed stream definition uses the distribution of the mean values in each dimension over all Gaussian mixtures. In the two-dimensional case, the vector quantization error becomes smaller as the vectors spread closer to a circle; consequently, each stream should consist of similarly distributed elements. To meet this requirement, we propose a distance measure between the variances of the mean values. In recognition experiments on the RM database, this approach yields a 24.52% relative reduction in word error rate while using about the same memory as SDCHMM.
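As an illustration of the stream definition, the sketch below reflects our simplified reading rather than the paper's exact algorithm: it sorts feature dimensions by the variance of their mean values (a stand-in for the proposed distance measure) and groups similarly spread dimensions into the same stream.

```python
import numpy as np

def define_streams(means, n_streams):
    """Group feature dimensions into streams of similarly distributed elements.

    For every dimension, compute the variance of the mean values over all
    Gaussian mixtures, then sort dimensions by that variance and split them
    into contiguous groups, so each stream holds similarly spread dimensions.
    """
    dim_spread = means.var(axis=0)            # spread of mean values per dimension
    order = np.argsort(dim_spread)            # dimensions from narrow to wide
    return np.array_split(order, n_streams)   # contiguous groups -> streams

# Toy mean vectors whose dimensions have widely different spreads.
rng = np.random.default_rng(0)
means = rng.normal(size=(1000, 39)) * rng.uniform(0.1, 3.0, size=39)

streams = define_streams(means, n_streams=13)  # e.g., 13 three-dimensional streams
for i, dims in enumerate(streams[:3]):
    print(f"stream {i}: dimensions {dims.tolist()}")
```

Each stream would then be vector-quantized independently, as in SDCHMM, but with the separate mean and variance codebooks described above.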