Article

An Efficient VQ Codebook Search Algorithm Applied to AMR-WB Speech Coding

Department of Electrical Engineering, National Chin-Yi University of Technology, 57, Section 2, Zhongshan Road, Taiping District, Taichung 41170, Taiwan
Symmetry 2017, 9(4), 54; https://doi.org/10.3390/sym9040054
Submission received: 6 March 2017 / Revised: 5 April 2017 / Accepted: 10 April 2017 / Published: 12 April 2017

Abstract
The adaptive multi-rate wideband (AMR-WB) speech codec is widely used in modern mobile communication systems to deliver high speech quality on handheld devices. Nonetheless, a major disadvantage is that vector quantization (VQ) of the immittance spectral frequency (ISF) coefficients imposes a considerable computational load in AMR-WB coding. Accordingly, a binary search space-structured VQ (BSS-VQ) algorithm is adopted to efficiently reduce the complexity of ISF quantization in AMR-WB. The search combines a fast locating technique with lookup tables, so that an input vector is efficiently assigned to a subspace where only a small number of codeword searches need to be executed. In terms of overall search performance, this work is experimentally validated as a superior search algorithm relative to the multiple triangular inequality elimination (MTIE), the TIE with dynamic and intersection mechanisms (DI-TIE), and the equal-average equal-variance equal-norm nearest neighbor search (EEENNS) approaches. With the full search algorithm as a benchmark for overall search load comparison, this work provides an 87% search load reduction at a threshold of quantization accuracy of 0.96, far beyond the 55% of MTIE, 76% of EEENNS, and 83% of DI-TIE.

1. Introduction

With a 16 kHz sampling rate, the adaptive multi-rate wideband (AMR-WB) speech codec [1,2,3,4] is one of the speech codecs applied to modern mobile communication systems, remarkably improving the speech quality on handheld devices. AMR-WB is built on the algebraic code-excited linear-prediction (ACELP) coding technique [4,5] and provides nine coding modes with bitrates of 23.85, 23.05, 19.85, 18.25, 15.85, 14.25, 12.65, 8.85, and 6.6 kbps. ACELP-based coding offers the double advantage of low bit rates and high speech quality, but the price paid is the high computational complexity of the AMR-WB codec. Consequently, the AMR-WB codec improves the speech quality of a smartphone at the cost of high battery power consumption.
In an AMR-WB encoder, vector quantization (VQ) of the immittance spectral frequency (ISF) coefficients [6,7,8,9,10] occupies a significant share of the computational load in all coding modes. The VQ structure in AMR-WB combines split VQ (SVQ) and multi-stage VQ (MSVQ) techniques, referred to as split-multistage VQ (S-MSVQ), to quantize the 16th-order ISF coefficients [1]. Conventionally, VQ uses a full search to find the codeword best matched to an arbitrary input vector, but the full search requires an enormous computational load. Therefore, many studies [11,12,13,14,15,16,17,18,19] have sought to simplify the search complexity of the encoding process. These approaches fall into two types according to how the complexity is reduced: triangular inequality elimination (TIE)-based approaches [11,12,13,14] and equal-average equal-variance equal-norm nearest neighbor search (EEENNS)-based algorithms [15,16,17,18,19].
In [11], a TIE algorithm was proposed to address the computational load issue in VQ-based image coding. Improved TIE approaches are presented in [12,13] to reduce the scope of the search space, giving rise to a further reduction in the computational load. In AMR-WB, the ISF coefficients of neighboring frames are highly correlated, that is, the ISF coefficients evolve smoothly over successive frames; this feature would benefit TIE-based VQ encoding and yield a considerable computational load reduction. However, a moving average (MA) filter is employed to smooth the data in advance of the VQ encoding of the ISF coefficients, which removes the high inter-frame correlation and leaves TIE-based approaches with poor load-reduction performance. Recently, a TIE algorithm equipped with dynamic and intersection mechanisms, named DI-TIE [14], was proposed to effectively simplify the search load, and it has been validated as the best candidate among the TIE-based approaches so far. On the other hand, the EEENNS algorithm was derived from the equal-average nearest neighbor search (ENNS) and equal-average equal-variance nearest neighbor search (EENNS) approaches [15,16,17,18,19]. In contrast to TIE-based approaches, the EEENNS algorithm uses three significant features of a vector, i.e., mean, variance, and norm, as a three-level elimination criterion to reject impossible codewords.
Furthermore, a binary search space-structured VQ (BSS-VQ) was presented in [20] as a simple and efficient way to quantize the line spectral frequency (LSF) coefficients in the ITU-T G.729 speech codec [5]. That work demonstrated a significant computational load reduction with well-maintained speech quality. In view of this, this paper applies the BSS-VQ search algorithm to ISF coefficient quantization in AMR-WB. This work aims to verify whether the performance superiority of the BSS-VQ algorithm carries over, since the VQ structure in AMR-WB differs from that in G.729. Another major motivation is to meet the energy-saving requirement of handheld devices, e.g., smartphones, for an extended operating time.
The rest of this paper is organized as follows. Section 2 describes ISF coefficient quantization in AMR-WB. The BSS-VQ algorithm for ISF quantization is presented in Section 3. Section 4 presents the experimental results and discussion. This work is summarized at the end of the paper.

2. ISF Coefficients Quantization in AMR-WB

In the quantization process of AMR-WB [1], a 20 ms speech frame is first analyzed to evaluate the linear predictive coefficients (LPCs), which are then converted into ISF coefficients. Subsequently, quantized ISF coefficients are obtained through a VQ encoding process, as detailed below.

2.1. Linear Prediction Analysis

In linear prediction analysis, the Levinson–Durbin algorithm is used to compute the 16th-order LPCs $a_i$ of the linear prediction filter [1], defined as:

\[ \frac{1}{A(z)} = \frac{1}{1 + \sum_{i=1}^{16} a_i z^{-i}}. \tag{1} \]
Subsequently, the LPC parameters are converted into the immittance spectral pair (ISP) coefficients for the purposes of parametric quantization and interpolation. The ISP coefficients are defined as the roots of the following two polynomials:
\[ F_1(z) = A(z) + z^{-16} A(z^{-1}) \tag{2} \]

\[ F_2(z) = A(z) - z^{-16} A(z^{-1}). \tag{3} \]
$F_1(z)$ and $F_2(z)$ are symmetric and antisymmetric polynomials, respectively. It can be proven that all the roots of these two polynomials lie on the unit circle in the z-domain and alternate with one another. Additionally, $F_2(z)$ has two roots at $z = 1$ ($\omega = 0$) and $z = -1$ ($\omega = \pi$). These two roots are eliminated by introducing the following polynomials, with eight and seven conjugate root pairs on the unit circle, respectively, expressed as:
\[ F_1'(z) = F_1(z) = (1 + a_{16}) \prod_{i=0,2,\ldots,14} \left( 1 - 2 q_i z^{-1} + z^{-2} \right) \tag{4} \]

\[ F_2'(z) = \frac{F_2(z)}{1 - z^{-2}} = (1 - a_{16}) \prod_{i=1,3,\ldots,13} \left( 1 - 2 q_i z^{-1} + z^{-2} \right) \tag{5} \]
where the coefficients $q_i$ are referred to as the ISPs in the cosine domain, and $a_{16}$ is the last predictor coefficient. Chebyshev polynomials are used to solve Equations (4) and (5). Finally, the 16th-order ISF coefficients $\omega_i$ are obtained through the transformation $\omega_i = \arccos(q_i)$.
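As an illustrative cross-check of Equations (2)–(5), the short Python sketch below converts an LPC vector to ISF angles by direct root-finding with NumPy. This is only an illustration: the codec itself evaluates Equations (4) and (5) with Chebyshev polynomials in fixed-point arithmetic, and the demo filter here is a synthetic stable filter rather than one derived from speech.

```python
import numpy as np

def lpc_to_isf(a):
    """Illustrative root-finding version of Equations (2)-(5).

    `a` holds the 16 LPC coefficients a_1..a_16 of A(z) = 1 + sum a_i z^-i.
    Returns the 15 unit-circle ISF angles (radians) in ascending order;
    the codec's 16th parameter is tied to a_16 (see Equations (4)-(5))
    and is not computed here.
    """
    A = np.concatenate(([1.0], np.asarray(a, dtype=float)))
    F1 = A + A[::-1]                          # Eq. (2): symmetric polynomial
    F2 = A - A[::-1]                          # Eq. (3): antisymmetric polynomial
    F2, _ = np.polydiv(F2, [1.0, 0.0, -1.0])  # drop the roots at z = 1 and z = -1
    w = np.concatenate((np.angle(np.roots(F1)), np.angle(np.roots(F2))))
    return np.sort(w[w > 0])                  # keep one angle per conjugate pair

# Demo with a synthetic stable 16th-order LP filter (poles inside the unit circle):
rng = np.random.default_rng(0)
poles = 0.9 * np.exp(1j * np.sort(rng.uniform(0.1, np.pi - 0.1, 8)))
A_full = np.poly(np.concatenate((poles, poles.conj()))).real
isf = lpc_to_isf(A_full[1:])                  # 15 interlaced angles in (0, pi)
```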

2.2. Quantization of ISF Coefficients

Before quantization, mean removal and first-order MA filtering are performed on the ISF coefficients to obtain the residual ISF vector [1], that is:

\[ r(n) = z(n) - p(n) \tag{6} \]

where $z(n)$ and $p(n)$ respectively denote the mean-removed ISF vector and the predicted ISF vector at frame $n$, the latter obtained by first-order MA prediction as:

\[ p(n) = \frac{1}{3}\, \hat{r}(n-1) \tag{7} \]

where $\hat{r}(n-1)$ is the quantized residual vector of the previous frame.
Subsequently, S-MSVQ is performed on $r(n)$. As presented in Table 1 and Table 2, the S-MSVQ takes one of two forms depending on the bit rate of the coding mode. In Stage 1, $r(n)$ is split into two subvectors for VQ encoding: a 9-dimensional subvector $r_1(n)$ associated with codebook CB1 and a 7-dimensional subvector $r_2(n)$ associated with codebook CB2. As a preliminary step of Stage 2, the quantization error vectors $r_i^{(2)} = r_i - \hat{r}_i$, $i = 1, 2$, are split into three subvectors in the 6.60 kbps mode or five subvectors in the modes with bitrates between 8.85 and 23.85 kbps. For instance, $r^{(2)}_{1,1\text{-}3}$ in Table 1 represents the subvector formed from the 1st to 3rd components of $r^{(2)}_1$, which is then VQ encoded over codebook CB11 in Stage 2. Likewise, $r^{(2)}_{2,4\text{-}7}$ stands for the subvector formed from the 4th to 7th components of $r^{(2)}_2$, which is VQ encoded over codebook CB22 in Stage 2. Finally, a squared-error ISF distortion measure, i.e., the Euclidean distance, is used in all quantization processes.
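For concreteness, the following minimal sketch walks one frame through Equations (6) and (7) and the Stage-1/Stage-2 splits of Table 1. The codebooks here are random stand-ins rather than the trained tables of [1], and a plain full search plays the role of the quantizer.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in codebooks with the bit allocations of Table 1 (8.85-23.85 kbps modes)
CB1  = rng.standard_normal((256, 9))   # Stage 1, orders 1-9   (8 bits)
CB2  = rng.standard_normal((256, 7))   # Stage 1, orders 10-16 (8 bits)
CB11 = rng.standard_normal((64, 3))    # Stage 2 splits of the Stage-1 error
CB12 = rng.standard_normal((128, 3))
CB13 = rng.standard_normal((128, 3))
CB21 = rng.standard_normal((32, 3))
CB22 = rng.standard_normal((32, 4))

def full_search(x, cb):
    """Best-matched codeword index under squared Euclidean distance."""
    return int(np.argmin(((cb - x) ** 2).sum(axis=1)))

def quantize_isf_residual(z, r_hat_prev):
    """S-MSVQ of one frame: Equations (6)-(7), then the splits of Table 1."""
    p = r_hat_prev / 3.0                     # Eq. (7): first-order MA prediction
    r = z - p                                # Eq. (6): residual ISF vector
    r1, r2 = r[:9], r[9:]                    # Stage-1 split (9 + 7 dimensions)
    i1, i2 = full_search(r1, CB1), full_search(r2, CB2)
    e1, e2 = r1 - CB1[i1], r2 - CB2[i2]      # Stage-1 quantization errors
    idx2 = [full_search(e1[0:3], CB11), full_search(e1[3:6], CB12),
            full_search(e1[6:9], CB13), full_search(e2[0:3], CB21),
            full_search(e2[3:7], CB22)]      # Stage-2 splits of Table 1
    return i1, i2, idx2
```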

3. BSS-VQ Algorithm for ISF Quantization

The basis of the BSS-VQ algorithm is that, prior to the VQ codebook search, an input vector is efficiently assigned, through a combination of a fast locating technique and lookup tables, to a subspace in which only a small number of codeword searches is carried out. In this manner, a significant computational load reduction is achieved.
At the start of the algorithm, each dimension is dichotomized into two subspaces, and an input vector is then assigned to the corresponding subspace according to its entries. This idea is illustrated in the following example. There are $2^9 = 512$ subspaces for the 9-dimensional subvector $r_1(n)$ associated with codebook CB1, and an input vector is assigned to one of the 512 subspaces by applying the dichotomy to each of its entries. Finally, VQ encoding is performed using a prebuilt lookup table containing statistical information on the sought codewords.
In this proposal, the lookup table of each subspace is prebuilt, which requires a large amount of training data. The training and encoding procedures of BSS-VQ are illustrated below using the example of the 9-dimensional, 256-entry codebook CB1 of AMR-WB.

3.1. BSS Generation with Dichotomy Splitting

As a preliminary step of the training procedure, each dimension is dichotomized into two subspaces, and the dichotomy position is defined as the mean over all the codewords contained in the codebook, formulated as:

\[ d_p(j) = \frac{1}{\mathrm{CSize}} \sum_{i=1}^{\mathrm{CSize}} c_i(j), \quad 0 \le j < \mathrm{Dim} \tag{8} \]
where $c_i(j)$ represents the $j$th component of the $i$th codeword $c_i$, and $d_p(j)$ is the mean value of all the $j$th components. Taking the codebook CB1 as an instance, $\mathrm{CSize} = 256$ and $\mathrm{Dim} = 9$. As listed in Table 3, all the $d_p(j)$ values are saved in tabular form.
Subsequently, for vector quantization of the $n$th input vector $x_n$, a quantity $v_n(j)$ is defined as:

\[ v_n(j) = \begin{cases} 2^j, & x_n(j) \ge d_p(j) \\ 0, & x_n(j) < d_p(j) \end{cases} \quad 0 \le j < \mathrm{Dim} \tag{9} \]

where $x_n(j)$ denotes the $j$th component of $x_n$. Then $x_n$ is assigned to subspace $k$ ($bss_k$), with $k$ given as the sum of $v_n(j)$ over all dimensions, formulated as:

\[ x_n \to bss_k \quad\text{with}\quad k = \sum_{j=0}^{\mathrm{Dim}-1} v_n(j). \tag{10} \]
In this study, $0 \le k < \mathrm{BSize}$, where $\mathrm{BSize} = 2^9 = 512$ is the total number of subspaces. Taking the input vector $x_n = \{20.0, 20.1, 20.2, 20.3, 20.4, 20.5, 20.6, 20.7, 20.8\}$ as an instance, Equation (9) gives $v_n(j) = \{2^0, 2^1, 2^2, 0, 0, 0, 0, 0, 2^8\}$ for $0 \le j \le 8$, and Equation (10) gives $k = 263$. Thus, the input vector $x_n$ is assigned to the subspace $bss_{263}$.
By means of Equations (9) and (10), the assignment requires only a small number of basic operations, namely comparisons, shifts, and additions, so an input vector is assigned to a subspace in a highly efficient manner.
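The following sketch implements Equations (8)–(10) and reproduces the worked example above; the CB1 means are copied from Table 3, while `dichotomy_positions` simply restates Equation (8) for an arbitrary codebook array.

```python
import numpy as np

def dichotomy_positions(codebook):
    """Eq. (8): per-dimension mean over all codewords (Table 3 for CB1)."""
    return codebook.mean(axis=0)

def assign_subspace(x, dp):
    """Eqs. (9)-(10): map input vector x to its subspace index k.

    Only one comparison, one shift, and one add per dimension --
    no distance computations are involved.
    """
    k = 0
    for j in range(len(dp)):
        if x[j] >= dp[j]:
            k += 1 << j                      # v_n(j) = 2^j, else 0
    return k

# Worked example from the text, with the CB1 means copied from Table 3:
dp_cb1 = np.array([15.3816, 19.0062, 15.4689, 21.3921, 26.8766,
                   28.1561, 28.0969, 21.6403, 16.3302])
x = np.array([20.0, 20.1, 20.2, 20.3, 20.4, 20.5, 20.6, 20.7, 20.8])
assert assign_subspace(x, dp_cb1) == 263     # 2^0 + 2^1 + 2^2 + 2^8 = 263
```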

3.2. Training Procedure of BSS-VQ

Following the determination of the dichotomy position for each dimension, a training procedure is performed to build a lookup table in each subspace. The lookup tables give the probability that each codeword serves as the best-matched codeword in each subspace, referred to as the hit probability of a codeword in a subspace for short.
The training procedure is stated below as Algorithm 1. A large speech database, occupying more than 1.56 GB of memory, lasting longer than 876 min, and containing a total of 2,630,045 speech frames with a diversity of contents and multiple speakers, is employed as the training data.
Algorithm 1: Training procedure of BSS-VQ
Step 1. 
Initial setting: assign each codeword to all the subspaces, and set the probability that codeword $c_i$ is the best-matched codeword in $bss_k$ to zero, i.e., $P_{hit}(c_i \mid bss_k) = 0$, $1 \le i \le \mathrm{CSize}$, $0 \le k < \mathrm{BSize}$.
Step 2. 
Referencing Table 3 and using Equations (9) and (10), efficiently assign an input vector to $bss_k$.
Step 3. 
Conduct a full search over all the codewords according to the Euclidean distance, given as:

\[ \mathrm{dist}(x_n, c_i) = \sum_{j=0}^{\mathrm{Dim}-1} \bigl( x_n(j) - c_i(j) \bigr)^2 \tag{11} \]

and the optimal codeword $c_{opt}$ satisfies:

\[ c_{opt} = \arg\min_{c_i} \{ \mathrm{dist}(x_n, c_i) \}, \quad 1 \le i \le \mathrm{CSize}. \tag{12} \]
Step 4. 
Update the statistics of the optimal codeword, that is, the count behind $P_{hit}(c_{opt} \mid bss_k)$.
Step 5. 
Repeat Steps 2–4, until the training is performed on all the input vectors.
A lookup table is built for each subspace upon completion of the training procedure; it gives the hit probability of each codeword in that subspace. For sorting purposes, a quantity $P_{hit}(m \mid bss_k)$, $1 \le m \le \mathrm{CSize}$, is defined as the $m$th-ranked probability that a codeword is the best-matched codeword in subspace $bss_k$. Taking $m = 1$ as an instance, $P_{hit}(m \mid bss_k)\big|_{m=1} = \max_{c_i} \{ P_{hit}(c_i \mid bss_k) \}$, namely the highest hit probability in $bss_k$. As a result, the lookup table of each subspace lists the hit probabilities in descending rank order together with the corresponding codewords.
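A compact sketch of Algorithm 1 follows, reusing `assign_subspace` from the previous sketch. The actual tables were trained on the 2,630,045-frame database described above; here, any array of 9-dimensional training vectors and any 256 × 9 stand-in codebook will do.

```python
import numpy as np

def train_bss_vq(training_vectors, codebook, dp):
    """Algorithm 1 sketch: build the ranked lookup table of every subspace.

    Returns, per subspace, the codeword indices sorted by hit probability
    (descending) and the corresponding probabilities P_hit(m | bss_k).
    """
    csize, dim = codebook.shape
    bsize = 1 << dim                               # 2^Dim subspaces
    hits = np.zeros((bsize, csize))                # Step 1: zero all hit counts
    for x in training_vectors:
        k = assign_subspace(x, dp)                 # Step 2: Eqs. (9)-(10)
        d = ((codebook - x) ** 2).sum(axis=1)      # Step 3: full search, Eq. (11)
        hits[k, np.argmin(d)] += 1                 # Step 4: count optimal codeword
    totals = np.maximum(hits.sum(axis=1, keepdims=True), 1.0)
    p_hit = hits / totals                          # P_hit(c_i | bss_k)
    order = np.argsort(-p_hit, axis=1)             # rank codewords per subspace
    return order, np.take_along_axis(p_hit, order, axis=1)
```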

3.3. Encoding Procedure of BSS-VQ

In the encoding procedure of BSS-VQ, the cumulative probability $P_{cum}(M \mid bss_k)$ is firstly defined as the sum of the top $M$ values of $P_{hit}(m \mid bss_k)$ in $bss_k$, that is:

\[ P_{cum}(M \mid bss_k) = \sum_{m=1}^{M} P_{hit}(m \mid bss_k), \quad 1 \le M \le \mathrm{CSize}. \tag{13} \]
Subsequently, given a threshold of quantization accuracy (TQA), the quantity $M_k(\mathrm{TQA})$ represents the minimum value of $M$ that satisfies the condition $P_{cum}(M \mid bss_k) \ge \mathrm{TQA}$ in $bss_k$, that is:

\[ M_k(\mathrm{TQA}) = \arg\min_{M} \{\, M : P_{cum}(M \mid bss_k) \ge \mathrm{TQA} \,\}, \quad 1 \le M \le \mathrm{CSize}, \; 0 \le k < \mathrm{BSize}. \tag{14} \]
For a given TQA, a total of 512 values $M_k(\mathrm{TQA})$ are evaluated by Equation (14), one per subspace, and their mean is then given as:

\[ \bar{M}(\mathrm{TQA}) = \frac{1}{\mathrm{BSize}} \sum_{k=0}^{\mathrm{BSize}-1} M_k(\mathrm{TQA}). \tag{15} \]
Figure 1 plots the average number of searches $\bar{M}(\mathrm{TQA})$ for TQA values ranging from 0.90 to 0.99. Taking TQA = 0.95 as an instance, a mere average of 14.58 codeword searches is required to reach a search accuracy as high as 95%. In simple terms, the search load can be reduced significantly at the cost of a small drop in search accuracy, and Figure 1 allows an instant trade-off between quantization accuracy and search load. The BSS-VQ encoding procedure is described below as Algorithm 2.
Algorithm 2: Encoding procedure of BSS-VQ
Step 1. 
Given a TQA, Mk(TQA) satisfying Equation (14) is found directly in the lookup table in bssk.
Step 2. 
Referencing Table 3 and by means of Equations (9) and (10), an input vector is assigned to a subspace bssk in an efficient manner.
Step 3. 
A full search for the best-matched codeword is performed on the top Mk(TQA) sorted codewords in bssk, and then the output is the index of the found codeword.
Step 4. 
Repeat Steps 2 and 3 until all the input vectors are encoded.
The BSS-VQ algorithm is briefly summarized as follows. Table 3 is the outcome of Equation (8) and is saved as the first lookup table. Subsequently, the second lookup table, holding $P_{hit}(m \mid bss_k)$ and the corresponding codeword for each subspace, is built by the training procedure. VQ encoding can then be performed using Algorithm 2.
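Completing the picture, here is a sketch of the table-driven encoder of Algorithm 2 under the same assumptions as the two previous sketches (`assign_subspace`, and `order`/`p_ranked` as returned by `train_bss_vq`):

```python
import numpy as np

def precompute_Mk(p_ranked, tqa):
    """Eqs. (13)-(14): smallest M per subspace with cumulative hit prob >= TQA.

    Subspaces never visited during training fall back to M = 1 here.
    """
    p_cum = np.cumsum(p_ranked, axis=1)
    return 1 + np.argmax(p_cum >= tqa, axis=1)     # first index reaching the threshold

def bss_vq_encode(x, codebook, dp, order, Mk):
    """Algorithm 2 sketch: search only the top M_k(TQA) ranked codewords."""
    k = assign_subspace(x, dp)                     # Step 2: locate the subspace
    cand = order[k, :Mk[k]]                        # top-ranked candidates in bss_k
    d = ((codebook[cand] - x) ** 2).sum(axis=1)    # Step 3: reduced full search
    return int(cand[np.argmin(d)])                 # index of best-matched codeword

# Typical wiring (with the stand-in CB1 and training vectors of earlier sketches):
#   dp = dichotomy_positions(CB1)
#   order, p_ranked = train_bss_vq(train_vecs, CB1, dp)
#   Mk = precompute_Mk(p_ranked, tqa=0.95)
#   idx = bss_vq_encode(x, CB1, dp, order, Mk)
```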

4. Experimental Results

Three experiments are conducted in this work. The first is a search load comparison among various search approaches. The second is a quantization accuracy (QA) comparison between the full search and the other approaches. The third is a speech quality comparison among the approaches in terms of ITU-T P.862 perceptual evaluation of speech quality (PESQ) [21], an objective measure of speech quality. A speech database completely different from the training data is employed for outside testing. Recorded from one male and one female speaker, it takes up more than 221 MB of memory, lasts more than 120 min, and covers 363,281 speech frames.
Firstly, Table 4 compares the average number of searches among the full search, multiple TIE (MTIE) [13], DI-TIE, and EEENNS approaches, while Table 5 gives the search load of the BSS-VQ algorithm for various TQA values. Moreover, with the search load of the full search algorithm as a benchmark, Table 6 and Table 7 present the load reduction (LR) corresponding to Table 4 and Table 5; a high LR value reflects a large search load reduction. Table 6 indicates that DI-TIE provides a higher LR than the MTIE and EEENNS approaches on all codebooks. Comparing Table 6 and Table 7 further shows that most LR values of BSS-VQ exceed those of DI-TIE; for example, the LR of BSS-VQ is higher than that of DI-TIE whenever TQA is equal to or smaller than 0.99, 0.98, 0.96, and 0.99 in codebooks CB1, CB2, CB21, and CB22, respectively. Accordingly, a remarkable search load reduction is achieved by the BSS-VQ search algorithm.
In the QA aspect, the MTIE, DI-TIE, and EEENNS algorithms achieve 100% QA relative to the full search approach, so the QA experiment is conducted only for BSS-VQ. The QA of the BSS-VQ algorithm for various TQA values is given in Table 8; it reveals that QA closely approximates TQA in both inside and outside testing. Moreover, the algorithm provides an LR between 77.78% and 93.98% at TQA = 0.90, and between 67.23% and 88.39% at TQA = 0.99, depending on the codebook. In other words, a trade-off can be made between quantization accuracy and search load.
Furthermore, an overall LR is evaluated to observe the total search load of the entire VQ encoding procedure for an input vector. The overall search load is defined as the sum, over all codebooks, of the average number of searches multiplied by the vector dimension of the codebook. Figure 2 presents the overall LR, with the full search as the benchmark, as a bar graph. As clearly indicated in Figure 2, the overall LR of BSS-VQ is higher than those of the MTIE, DI-TIE, and EEENNS approaches, while its QA remains as high as 0.98. Moreover, Table 9 gives a PESQ comparison, including the mean and the standard deviation (STD), among the various approaches. Since MTIE, DI-TIE, and EEENNS provide 100% QA, they all share the same PESQ as the full search, meaning that there is no deterioration in speech quality. A close observation reveals little difference between the PESQ obtained with the full search and with this search algorithm; that is, the speech quality is well maintained by BSS-VQ at TQA not less than 0.90. The BSS-VQ search algorithm is thus experimentally validated as a superior candidate relative to its counterparts.

5. Conclusions

This paper presents a BSS-VQ codebook search algorithm for ISF vector quantization in the AMR-WB speech codec. Using a combination of a fast locating technique and lookup tables, an input vector is efficiently assigned to a search subspace where only a small number of codeword searches is carried out, and a remarkable search load reduction is reached as a consequence. In particular, a trade-off can be made between quantization accuracy and search load to meet a user's needs when performing VQ encoding. Providing a considerable search load reduction with nearly lossless speech quality, the BSS-VQ search algorithm is experimentally validated as superior to the MTIE, DI-TIE, and EEENNS approaches. Furthermore, the improved AMR-WB speech codec can be adopted to upgrade VoIP performance on smartphones, where the computational load reduction meets the energy efficiency requirement for an extended operating time.

Acknowledgments

This research was financially supported by the Ministry of Science and Technology, Taiwan, under grant number MOST 105-2221-E-167-031. The author is also deeply indebted to the co-authors of two cited papers, Tzu-Hung Lin, Shaw-Hwa Hwang, Shun-Chieh Chang, and Bing-Jhih Yao, whose contributions form the basis of this work.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. 3rd Generation Partnership Project (3GPP). Adaptive Multi-Rate—Wideband (AMR-WB) Speech Codec; Transcoding Functions; TS 26.190; 3GPP: Valbonne, France, 2012. [Google Scholar]
  2. Ojala, P.; Lakaniemi, A.; Lepanaho, H.; Jokimies, M. The adaptive multirate wideband speech codec: System characteristics, quality advances, and deployment strategies. IEEE Commun. Mag. 2006, 44, 59–65. [Google Scholar] [CrossRef]
  3. Varga, I.; De Lacovo, R.D.; Usai, P. Standardization of the AMR wideband speech codec in 3GPP and ITU-T. IEEE Commun. Mag. 2006, 44, 66–73. [Google Scholar] [CrossRef]
  4. Bessette, B.; Salami, R.; Lefebvre, R.; Jelínek, M.; Rotola-Pukkila, J.; Vainio, J.; Mikkola, H.; Järvinen, K. The adaptive multirate wideband speech codec (AMR-WB). IEEE Trans. Speech Audio Process. 2002, 10, 620–636. [Google Scholar] [CrossRef]
  5. Salami, R.; Laflamme, C.; Adoul, J.P.; Kataoka, A.; Hayashi, S.; Moriya, T.; Lamblin, C.; Massaloux, D.; Proust, S.; Kroon, P.; et al. Design and description of CS-ACELP: A toll quality 8 kb/s speech coder. IEEE Trans. Speech Audio Process. 1998, 6, 116–130. [Google Scholar] [CrossRef]
  6. Linde, Y.; Buzo, A.; Gray, R.M. An algorithm for vector quantizer design. IEEE Trans. Commun. 1980, 28, 84–95. [Google Scholar] [CrossRef]
  7. Wang, L.; Chen, Z.; Yin, F. A novel hierarchical decomposition vector quantization method for high-order LPC parameters. IEEE Trans. Audio Speech Lang. Process. 2015, 23, 212–221. [Google Scholar] [CrossRef]
  8. Ramirez, M.A. Intra-predictive switched split vector quantization of speech spectra. IEEE Signal Process. Lett. 2013, 20, 791–794. [Google Scholar] [CrossRef]
  9. Han, W.J.; Kim, E.K.; Oh, Y.H. Multicodebook split vector quantization of LSF parameters. IEEE Signal Process. Lett. 2002, 9, 418–421. [Google Scholar]
  10. Chatterjee, S.; Sreenivas, T.V. Optimum switched split vector quantization of LSF parameters. Signal Process. 2008, 88, 1528–1538. [Google Scholar] [CrossRef]
  11. Hwang, S.H.; Chen, S.H. Fast encoding algorithm for VQ-based image coding. Electron. Lett. 1990, 26, 1618–1619. [Google Scholar]
  12. Choi, S.Y.; Chae, S.I. Incremental-search fast vector quantiser using triangular inequalities for multiple anchors. Electron. Lett. 1998, 34, 1192–1193. [Google Scholar] [CrossRef]
  13. Hsieh, C.H.; Liu, Y.J. Fast search algorithms for vector quantization of images using multiple triangle inequalities and wavelet transform. IEEE Trans. Image Process. 2000, 9, 321–328. [Google Scholar] [CrossRef] [PubMed]
  14. Yao, B.J.; Yeh, C.Y.; Hwang, S.H. A search complexity improvement of vector quantization to immittance spectral frequency coefficients in AMR-WB speech codec. Symmetry 2016, 8, 1–8. [Google Scholar] [CrossRef]
  15. Lu, Z.M.; Sun, S.H. Equal-average equal-variance equal-norm nearest neighbor search algorithm for vector quantization. IEICE Trans. Inf. Syst. 2003, 86, 660–663. [Google Scholar]
  16. Chu, S.C.; Lu, Z.M.; Pan, J.S. Hadamard transform based fast codeword search algorithm for high-dimensional VQ encoding. Inf. Sci. 2007, 177, 734–746. [Google Scholar] [CrossRef]
  17. Chen, S.X.; Li, F.W.; Zhu, W.L. Fast searching algorithm for vector quantisation based on features of vector and subvector. IET Image Process. 2008, 2, 275–285. [Google Scholar] [CrossRef]
  18. Chen, S.X.; Li, F.W. Fast encoding method for vector quantisation of images using subvector characteristics and Hadamard transform. IET Image Process. 2011, 5, 18–24. [Google Scholar] [CrossRef]
  19. Xia, S.; Xiong, Z.; Luo, Y.; Dong, L.; Zhang, G. Location difference of multiple distances based k-nearest neighbors algorithm. Knowl.-Based Syst. 2015, 90, 99–110. [Google Scholar] [CrossRef]
  20. Lin, T.H.; Yeh, C.Y.; Hwang, S.H.; Chang, S.C. Efficient binary search space-structured VQ encoder applied to a line spectral frequency quantisation in G.729 standard. IET Commun. 2016, 10, 1183–1188. [Google Scholar] [CrossRef]
  21. ITU-T Recommendation P.862. Perceptual Evaluation of Speech Quality (PESQ): An Objective Method for End-to-End Speech Quality Assessment of Narrow-Band Telephone Networks and Speech Codecs; ITU (International Telecommunication Union): Geneva, Switzerland, 2001. [Google Scholar]
Figure 1. A plot of the average number of searches versus TQA.
Figure 2. Comparison of overall search load reduction among various approaches.
Table 1. Structure of S-MSVQ in AMR-WB in the 8.85–23.85 kbps coding modes.

Stage 1: CB1: r1 (orders 1–9 of r, 8 bits) | CB2: r2 (orders 10–16 of r, 8 bits)
Stage 2: CB11: r(2)1,1–3 (6 bits) | CB12: r(2)1,4–6 (7 bits) | CB13: r(2)1,7–9 (7 bits) | CB21: r(2)2,1–3 (5 bits) | CB22: r(2)2,4–7 (5 bits)
Table 2. Structure of S-MSVQ in AMR-WB in the 6.60 kbps coding mode.

Stage 1: CB1: r1 (orders 1–9 of r, 8 bits) | CB2: r2 (orders 10–16 of r, 8 bits)
Stage 2: CB11: r(2)1,1–5 (7 bits) | CB12: r(2)1,6–9 (7 bits) | CB21: r(2)2,1–7 (6 bits)
Table 3. Dichotomy position for each dimension in the codebook CB1.

| jth order | Mean dp(j) |
|---|---|
| 0 | 15.3816 |
| 1 | 19.0062 |
| 2 | 15.4689 |
| 3 | 21.3921 |
| 4 | 26.8766 |
| 5 | 28.1561 |
| 6 | 28.0969 |
| 7 | 21.6403 |
| 8 | 16.3302 |
Table 4. Average number of searches among various algorithms in the 8.85–23.85 kbps modes.

| Codebooks | Full Search | MTIE | DI-TIE | EEENNS |
|---|---|---|---|---|
| Stage 1: CB1 | 256 | 91.57 | 42.46 | 58.82 |
| Stage 1: CB2 | 256 | 112.93 | 42.79 | 63.87 |
| Stage 2: CB11 | 64 | 38.94 | 12.31 | 14.17 |
| Stage 2: CB12 | 128 | 82.59 | 14.40 | 22.91 |
| Stage 2: CB13 | 128 | 79.33 | 13.50 | 21.01 |
| Stage 2: CB21 | 32 | 23.55 | 8.95 | 11.08 |
| Stage 2: CB22 | 32 | 27.55 | 12.42 | 17.44 |
Table 5. Search load (average number of searches) of the BSS-VQ algorithm versus TQA values in the 8.85–23.85 kbps modes.

| TQA | CB1 | CB2 | CB11 | CB12 | CB13 | CB21 | CB22 |
|---|---|---|---|---|---|---|---|
| 0.90 | 15.40 | 26.45 | 12.47 | 19.96 | 19.93 | 7.11 | 6.66 |
| 0.91 | 16.10 | 27.52 | 12.86 | 20.84 | 20.62 | 7.11 | 6.72 |
| 0.92 | 16.80 | 28.85 | 12.99 | 21.50 | 21.19 | 7.64 | 6.91 |
| 0.93 | 17.79 | 30.23 | 13.52 | 21.84 | 22.02 | 7.64 | 7.15 |
| 0.94 | 18.87 | 31.71 | 14.04 | 22.85 | 22.72 | 8.00 | 7.57 |
| 0.95 | 20.03 | 33.58 | 14.61 | 23.85 | 23.72 | 8.26 | 7.84 |
| 0.96 | 21.36 | 35.81 | 15.04 | 24.76 | 24.95 | 8.87 | 8.21 |
| 0.97 | 23.18 | 38.37 | 15.86 | 25.93 | 26.24 | 9.27 | 8.51 |
| 0.98 | 25.71 | 41.82 | 16.73 | 27.60 | 27.86 | 10.00 | 9.33 |
| 0.99 | 29.71 | 47.12 | 18.21 | 29.61 | 29.99 | 10.49 | 10.15 |
Table 6. LR percentage representation of Table 4.

| Codebooks | Full Search (Benchmark) | MTIE LR (%) | DI-TIE LR (%) | EEENNS LR (%) |
|---|---|---|---|---|
| Stage 1: CB1 | 0 | 64.23 | 83.42 | 77.03 |
| Stage 1: CB2 | 0 | 55.89 | 83.29 | 75.05 |
| Stage 2: CB11 | 0 | 39.16 | 80.77 | 77.86 |
| Stage 2: CB12 | 0 | 35.47 | 88.75 | 82.10 |
| Stage 2: CB13 | 0 | 38.02 | 89.46 | 83.58 |
| Stage 2: CB21 | 0 | 26.40 | 72.04 | 65.37 |
| Stage 2: CB22 | 0 | 13.89 | 61.18 | 45.50 |
Table 7. LR percentage representation of Table 5.

| TQA | CB1 | CB2 | CB11 | CB12 | CB13 | CB21 | CB22 |
|---|---|---|---|---|---|---|---|
| 0.90 | 93.98 | 89.67 | 80.51 | 84.41 | 84.43 | 77.78 | 79.20 |
| 0.91 | 93.71 | 89.25 | 79.90 | 83.72 | 83.89 | 77.78 | 79.00 |
| 0.92 | 93.44 | 88.73 | 79.71 | 83.20 | 83.45 | 76.11 | 78.41 |
| 0.93 | 93.05 | 88.19 | 78.88 | 82.94 | 82.80 | 76.11 | 77.67 |
| 0.94 | 92.63 | 87.61 | 78.07 | 82.15 | 82.25 | 75.01 | 76.34 |
| 0.95 | 92.18 | 86.88 | 77.18 | 81.37 | 81.47 | 74.19 | 75.50 |
| 0.96 | 91.66 | 86.01 | 76.50 | 80.65 | 80.51 | 72.27 | 74.34 |
| 0.97 | 90.95 | 85.01 | 75.22 | 79.74 | 79.50 | 71.02 | 73.40 |
| 0.98 | 89.96 | 83.66 | 73.86 | 78.44 | 78.23 | 68.76 | 70.85 |
| 0.99 | 88.39 | 81.60 | 71.55 | 76.87 | 76.57 | 67.23 | 68.29 |
Table 8. Comparison of QA percentage of the BSS-VQ algorithm versus TQA values in the 8.85–23.85 kbps modes among codebooks.

| TQA | CB1 | CB2 | CB11 | CB12 | CB13 | CB21 | CB22 |
|---|---|---|---|---|---|---|---|
| 0.90 | 90.42 | 90.63 | 90.84 | 90.89 | 91.06 | 93.16 | 93.10 |
| 0.91 | 91.52 | 91.46 | 91.96 | 92.18 | 92.12 | 93.16 | 93.27 |
| 0.92 | 92.32 | 92.58 | 92.44 | 93.18 | 93.05 | 94.86 | 93.92 |
| 0.93 | 93.25 | 93.60 | 93.54 | 93.67 | 94.15 | 94.86 | 94.59 |
| 0.94 | 94.20 | 94.39 | 94.68 | 94.65 | 95.00 | 95.66 | 95.93 |
| 0.95 | 95.24 | 95.35 | 95.88 | 95.71 | 95.82 | 96.40 | 96.46 |
| 0.96 | 96.03 | 96.30 | 96.35 | 96.43 | 96.85 | 97.33 | 97.11 |
| 0.97 | 97.01 | 97.16 | 97.34 | 97.36 | 97.47 | 97.90 | 97.57 |
| 0.98 | 97.91 | 98.08 | 98.17 | 98.34 | 98.32 | 99.00 | 98.75 |
| 0.99 | 98.82 | 99.01 | 99.23 | 99.17 | 99.09 | 99.39 | 99.33 |
Table 9. Comparison of mean opinion score (MOS) values using the PESQ algorithm among various methods.

| Method | Mean | STD |
|---|---|---|
| Full search | 3.931 | 0.329 |
| BSS-VQ, TQA = 0.90 | 3.930 | 0.328 |
| BSS-VQ, TQA = 0.91 | 3.931 | 0.328 |
| BSS-VQ, TQA = 0.92 | 3.931 | 0.330 |
| BSS-VQ, TQA = 0.93 | 3.931 | 0.328 |
| BSS-VQ, TQA = 0.94 | 3.930 | 0.329 |
| BSS-VQ, TQA = 0.95 | 3.933 | 0.329 |
| BSS-VQ, TQA = 0.96 | 3.931 | 0.330 |
| BSS-VQ, TQA = 0.97 | 3.930 | 0.330 |
| BSS-VQ, TQA = 0.98 | 3.932 | 0.328 |
| BSS-VQ, TQA = 0.99 | 3.933 | 0.328 |
