Trellis coded quantization: an efficient technique for data compression

Bibliographic Details
Main Author: Marcellin, Michael Wesley
Other Authors: Cantrell, P. E. (degree committee member), Dahm, P. F. (degree committee member), Georghiades, C. N. (degree committee member)
Format: Thesis Book
Language: English
Published: 1987.
Online Access: ProQuest, Abstract
Link to OAKTrust copy
Description
Abstract: The ideas of signal set expansion and set partitioning from trellis coded modulation (TCM) are used to develop a new source coding technique which we call trellis coded quantization (TCQ). TCQ is theoretically justified by alphabet constrained rate distortion theory, which is an exact dual of the channel capacity argument used to justify TCM. The resulting structure is computationally efficient and achieves excellent performance for the memoryless uniform, Gaussian, and Laplacian sources. The effects of channel errors are examined, and specific bounds are developed for the number of samples that can be affected by a single channel error.

For the memoryless uniform source, TCQ achieves a mean squared error (MSE) within 0.21 dB of the distortion rate function at all positive integer encoding rates. This performance is superior to that of the best lattice quantizers known in up to 24 dimensions. For encoding the memoryless Gaussian source at encoding rates of 0.5, 1, and 2 bits per sample, TCQ outperforms all source coding schemes we have seen in the literature (including stochastic trellis coders and entropy coded quantization).

The computational requirements of TCQ are quite modest. In the most important case, encoding a memoryless source at an encoding rate of R bits per sample using TCQ requires only 4 multiplies, 2N + 4 adds, N compares, and 4 rate-(R - 1) scalar quantizations per data sample, where N is the number of states in the encoding trellis.

Trellis coded quantization is used as the basis of a predictive source coding scheme for Gauss-Markov sources and sampled speech. The performance for Gauss-Markov sources is quite good: mean squared error within 1.3 dB of the distortion rate function is achieved at encoding rates of 1, 2, and 3 bits per sample for several Gauss-Markov sources used as models for sampled speech.
Systems with fixed prediction/fixed residual encoding, fixed prediction/adaptive residual encoding, and adaptive prediction/adaptive residual encoding are considered for encoding sampled speech. Segmental signal-to-noise ratios in excess of 20 dB are obtained for encoding sampled speech at an encoding rate of 2 bits per sample (16,000 bits per second). The encoded speech can be described as excellent communications quality.
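The encoding procedure summarized in the abstract — an expanded, set-partitioned scalar codebook searched over a small trellis with the Viterbi algorithm — can be sketched in Python. The uniform codebook on (-1, 1), the subset assignment, and the 4-state trellis labeling below are illustrative assumptions for a memoryless source, not the thesis's exact design:

```python
def tcq_encode(samples, rate=2):
    """Minimal trellis coded quantization (TCQ) encoder sketch.

    A doubled scalar codebook of 2**(rate+1) uniform levels is
    partitioned into four subsets D0..D3; a 4-state trellis whose
    branches select subsets is searched with the Viterbi algorithm
    for the minimum-MSE reconstruction path.
    """
    # Doubled codebook: 2**(rate+1) uniform levels on (-1, 1);
    # level i is assigned to subset i % 4 (set partitioning).
    n_levels = 2 ** (rate + 1)
    codebook = [-1.0 + (2 * i + 1) / n_levels for i in range(n_levels)]
    subsets = [[c for i, c in enumerate(codebook) if i % 4 == s]
               for s in range(4)]

    # Illustrative 4-state trellis: from state s, input bit b moves to
    # next_state and quantizes the sample with subset D[sub].
    trellis = {  # (state, bit) -> (next_state, subset index)
        (0, 0): (0, 0), (0, 1): (1, 2),
        (1, 0): (2, 1), (1, 1): (3, 3),
        (2, 0): (0, 2), (2, 1): (1, 0),
        (3, 0): (2, 3), (3, 1): (3, 1),
    }

    INF = float("inf")
    cost = [0.0, INF, INF, INF]   # Viterbi search starts in state 0
    path = [[], [], [], []]       # survivor reconstruction sequences
    for x in samples:
        new_cost = [INF] * 4
        new_path = [None] * 4
        for s in range(4):
            if cost[s] == INF:
                continue
            for b in (0, 1):
                ns, sub = trellis[(s, b)]
                # Branch metric: squared error of the best level in the
                # subset labeling this branch (a scalar quantization).
                level = min(subsets[sub], key=lambda c: (x - c) ** 2)
                total = cost[s] + (x - level) ** 2
                if total < new_cost[ns]:
                    new_cost[ns] = total
                    new_path[ns] = path[s] + [level]
        cost, path = new_cost, new_path
    best = min(range(4), key=lambda s: cost[s])
    return path[best], cost[best] / len(samples)  # reconstruction, MSE
```

Storing full survivor sequences per state keeps the sketch short; a practical coder would use traceback pointers, and the per-sample work matches the abstract's accounting (one scalar quantization per branch subset plus a few adds and compares per trellis state).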
Item Description: Typescript (photocopy).
Vita.
"Major subject: Electrical Engineering."
Physical Description: xiv, 116 leaves : illustrations ; 29 cm
Bibliography: Includes bibliographical references (leaves 92-95).