Article

A Novel Unsupervised Spectral Clustering for Pure-Tone Audiograms towards Hearing Aid Filter Bank Design and Initial Configurations

by Abeer Elkhouly 1,2,3, Allan Melvin Andrew 1,4, Hasliza A Rahim 1,2,*, Nidhal Abdulaziz 3, Mohamedfareq Abdulmalek 3, Mohd Najib Mohd Yasin 1, Muzammil Jusoh 1,2, Thennarasan Sabapathy 1,2 and Shafiquzzaman Siddique 5,*
1 Advanced Communication Engineering, Centre of Excellence (ACE), Universiti Malaysia Perlis (UniMAP), Kangar 01000, Perlis, Malaysia
2 Faculty of Electronic Engineering Technology, Universiti Malaysia Perlis (UniMAP), Arau 02600, Perlis, Malaysia
3 Faculty of Engineering and Information Sciences, University of Wollongong in Dubai, Dubai 20183, United Arab Emirates
4 Faculty of Electrical Engineering Technology, Universiti Malaysia Perlis (UniMAP), Arau 02600, Perlis, Malaysia
5 Biotechnology Research Institute, Universiti Malaysia Sabah (UMS), Jln UMS, Kota Kinabalu 88400, Sabah, Malaysia
* Authors to whom correspondence should be addressed.
Appl. Sci. 2022, 12(1), 298; https://doi.org/10.3390/app12010298
Submission received: 4 November 2021 / Revised: 27 November 2021 / Accepted: 3 December 2021 / Published: 29 December 2021

Featured Application

This work will contribute to simplifying the filter bank structure of hearing aids and reducing its complexity. In addition, it will facilitate the process of configuring hearing aids.

Abstract

The current practice of adjusting hearing aids (HA) is tiring and time-consuming for both patients and audiologists; 40–50% of hearing-impaired people are not satisfied with their HAs. In addition, good HA designs are often avoided because the process of fitting them is exhausting. To improve the fitting process, an unsupervised machine learning (ML) approach is proposed to cluster pure-tone audiograms (PTA). This work applies spectral clustering (SP) to group audiograms according to their similarity in shape. Different SP approaches are tested, and these approaches are evaluated by the Silhouette, Calinski-Harabasz, and Davies-Bouldin criteria values. The Kutools for Excel add-in is used to generate the audiogram population, which is annotated using the results from SP, and the same criteria are used to evaluate the population clusters. Finally, these clusters are mapped to a standard set of audiograms used in HA characterization. The results indicate that grouping the data into 8 or 10 clusters yields high evaluation criteria values. The evaluation of the population audiogram clusters shows good performance, with a Silhouette coefficient >0.5. This work introduces a new concept for classifying audiograms with an ML algorithm according to their similarity in shape.

1. Introduction and Motivation

The World Health Organization (WHO) estimates that by 2050 nearly 2.5 billion people will have some degree of hearing loss, at an annual global cost of US $980 billion [1]. Daniela Bagozzi, a WHO Senior Information Officer, called on the private sector to provide affordable hearing aids in developing countries, where their current cost ranges from US $200 to over US $500 [2]. In addition, the Healthline organization reported that a set of hearing aids might cost $5000 [3].
The main components of a digital hearing aid are shown in Figure 1: a microphone, an analogue-to-digital (A/D) converter, a filter bank, gain blocks, and a digital-to-analogue (D/A) converter. First, the analogue sound signal detected by the microphone is converted into digital form by the A/D converter. Next, this digital signal is applied to a filter bank; different digital signal processing techniques can be applied to divide the digitized sound spectrum into sub-bands with different bandwidths. Then, gain blocks are applied to the outputs of the filter bank to amplify the sound signal to the desired hearing level. In the last stage, the signal is converted back to analogue by the D/A converter [4,5]. A common goal is to design digital filters that can match multiple audiograms of patients who suffer from hearing loss. This approach lowers the cost of manufacturing hearing aids, as they can be produced on a large scale, and has been pursued by many research techniques [4,6,7,8,9]. On the other hand, it increases the complexity of the hearing aid design, requiring high operating power and a large chip area and leading to improperly fitted hearing aids [10].
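To make the sub-band gain stage concrete, the following minimal MATLAB sketch (ours, not a design from the cited works; the sampling rate, band edges, and gain values are illustrative assumptions) isolates each sub-band with a band-pass FIR filter, amplifies it by its prescribed gain, and recombines the bands:

    % Illustrative per-band gain stage of a digital hearing aid (toy values).
    fs = 24000;                         % sampling rate (assumed)
    x  = randn(1, fs);                  % 1 s of stand-in input audio
    edges = [125 500 2000 8000];        % toy 3-band split of 125 Hz-8 kHz
    gains_dB = [20 35 50];              % assumed per-band gain prescription
    y = zeros(size(x));
    for b = 1:numel(edges)-1
        h = fir1(128, edges(b:b+1)/(fs/2), 'bandpass');  % band-pass FIR
        y = y + 10^(gains_dB(b)/20) * filter(h, 1, x);   % amplify, recombine
    end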
This research was motivated by these considerations in hearing aid design, by the impact of hearing loss on national economies, and by patients' ability to afford hearing aids. The main idea is to use artificial intelligence and machine learning to facilitate the whole process for patients and hearing aid designers. The process of fitting hearing aids is tiring and time-consuming, as it depends on many trials that require the patient to be highly responsive; one study stated that only 50–60% of users are satisfied with their hearing aids [11]. These factors are compounded by a severe shortage of audiologists, who are especially scarce in rural areas [12,13]. All of this urges the use of new artificial intelligence technology to resolve these problems, especially since the fitting process depends on the skills and experience of audiologists [14].
In this work, the authors apply unsupervised learning to cluster audiograms using spectral clustering. These audiograms are taken from a database of 28,244 audiograms used by Bisgaard, Vlaming and Dahlquist [15] to produce a standard set of audiograms for the IEC (International Electrotechnical Commission) 60118-15 measurement procedure; they were reduced by vector quantization analysis to a codebook of size 60. Here, the researchers excluded five audiograms from the quantized results that represent normal hearing levels (0–20 dB) as defined by different health organizations [1,16,17]. These five audiograms are removed because the study aims to produce clusters representing audiograms of patients who experience hearing loss. Audiograms with the same shape but different levels can be realized with the same set of filters by adjusting the gains to match the required audiogram. Another reason to classify audiograms according to shape is that the hearing aid fitting process becomes easier: a supervised machine learning model can be built on top of this work to classify patients' audiograms and then program or adjust the hearing aid according to pre-set configurations linked to the clusters produced here, with only fine tuning needed at the end of the fitting process.
This introduction is kept simple so that it can be easily understood by both experts in hearing aid design (an engineering perspective) and audiologists (a medical perspective). The technical details can be found in the following sections of the paper.
This paper is organized as follows. Section 2 reviews recent audiogram classifiers and their limitations and highlights the main contribution of this work. Section 3 explains the data clustering algorithm, elaborating on its description and implementation, followed by a discussion of how the algorithm is evaluated and how the data sets are prepared. Section 4 presents and discusses the results: the optimum number of clusters is found, the clustering algorithm is evaluated on the audiogram population that produced the quantized data, and the generated clusters are mapped and compared to the standard set selected by Bisgaard in the last subsection. Finally, a summary of the results, the conclusions, and prospects for future work are presented in Section 5.

2. Related Work

In 2016, Rahne et al. built an Excel-based audiogram classifier with pre-set inputs that can be defined according to the inclusion criteria of a clinical trial; the tool provides an inclusion decision based on the predefined audiological criteria [18]. In 2018, Sanchez et al. classified hearing test data in two stages. The first stage used unsupervised learning to define trends and spot patterns in data obtained from different hearing tests. In the second stage, a supervised learning algorithm explored the outcomes of the different hearing tests: each subject was assigned a profile, and the data were analyzed again to find the best classification of the subjects into four auditory profiles. This classifier was based on the analysis of audiograms, which reflect loss of sensitivity, together with other hearing tests that reflect loss of clarity not captured by the audiogram [19]. In 2019, Belitz et al. also combined unsupervised and supervised machine learning methods to map audiograms to a small number of hearing aid configurations, to be used as starting points for hearing aid fitting. The method was applied in two steps. The first performed different unsupervised clustering algorithms to determine a limited number of pre-set hearing aid configurations; the cluster centroids were chosen to represent fitting targets that can be used as starting configurations for individual hearing aid adjustment. The second step assigned each audiogram a class based on the comfort-target clustering of the first stage, using various supervised machine learning techniques to map each audiogram to a pre-set configuration. The classifier accuracy of the second stage was low when a single configuration was selected and improved when two configurations were allowed per audiogram [20]. In 2018, a research team took the first steps toward a machine learning classifier by using unsupervised learning to cluster audiograms [21]. In that work, audiograms were clustered with the goal of making them maximally informative, and the clustered data were prepared to be a good training set for supervised machine learning classifiers; the approach yields a set of non-redundant, unannotated audiograms with minimal loss of information from a very large data set. In 2020, the same group used this data preparation procedure to produce a machine learning classifier, applying supervised ML to 270 audiograms annotated by three experts in the field. The resulting classifier annotates audiograms concisely in terms of shape, severity, and symmetry with good accuracy [12]. It can be integrated into a mobile application to help the user describe an audiogram concisely so that it can be interpreted by non-experts, who can then decide whether the patient needs to be checked by a specialist. It can partially resolve the shortage of specialists and can be the first step toward a more sophisticated algorithm to help experts in the audiology field.
Crowson et al. used a deep learning convolutional neural network architecture to classify audiograms into normal hearing, sensorineural, conductive, and mixed hearing loss. The audiograms were converted to JPEG image files, and image transformation techniques (rotation, warping, contrast, lighting, and zoom) were applied to the training set to increase the number of images available as training data. Their model achieved 97.5% accuracy in classifying hearing loss types based on features extracted from the audiograms [13]. That study aimed at classifying audiograms to detect the cause of hearing loss; it does not help with, and was not conducted for, configuring hearing aids [13].
Musiba [22] classified audiograms based on the UKHSE (United Kingdom Health and Safety Executive) categorization scheme. The sum of the pure-tone audiometry hearing levels at 1 kHz, 2 kHz, 3 kHz, 4 kHz, and 6 kHz was obtained, compared with the figures set by the UKHSE, and classified as one of the following: acceptable hearing ability, mild hearing impairment, poor hearing, or rapid hearing loss. The aim of this classification was to prompt proper actions to prevent noise-induced hearing loss. The annotation was carried out by experts in the field who applied the UKHSE standards.
Cruickshanks et al. [23] conducted a longitudinal study of how the shape of audiograms changes over time. Follow-up was carried out in four stages: 1993–1995, 1998–2000, 2003–2005, and 2009–2010. The audiograms were classified into eight levels, and the change in hearing ability over time was recorded based on these classes. Musiba and Cruickshanks [22,23] did not implement any intelligent solutions, as they relied on the experience of specialists in the field.
The classifier techniques found in the literature are summarized in Table 1, showing the limitations and shortcomings of each technique.
To the best of our knowledge, the classifiers built with the purpose of classifying audiograms are very few and not suitable as a reference for specialists in the field, such as audiologists, hearing aid specialists, and hearing aid designers. This is the first study to classify audiograms according to similarity in shape with the aim of reducing the complexity of the filter bank used to realize the audiogram shapes of patients. From a signal processing perspective, it is important to know the shape of the audiogram in order to apply different gains to the different filters that cover the entire hearing band (125 Hz–8 kHz). This classifier is built to capture different shapes of audiograms, not to classify the hearing loss type as the existing works do. Audiograms of similar shape at different levels can be realized by one group of filters by changing the gain coefficients of each filter or the overall gain of the cascaded filters. This classification will help hearing aid designers reduce the complexity of their filter designs and can be a good starting point for a future supervised learning algorithm that classifies audiograms according to the detected shapes. Applying novel methods such as sophisticated machine learning algorithms will facilitate the whole process for the experts and increase patients' satisfaction.

3. Data Clustering Algorithm

The study groups audiograms according to similarity in shape. For this purpose, spectral clustering is used to provide clusters that can be used in practice by experts in the field. This section starts with a general description of the algorithm, showing the main steps of how it was implemented and evaluated. The details of the implementation process are then discussed, and finally the evaluation criteria for the different numbers of clusters and for the selected clusters are explained.

3.1. Algorithm Description

The spectral clustering algorithm is a graph-based technique for finding k clusters in data [24,25]. It calculates a similarity matrix from a similarity graph of the data in order to determine the Laplacian matrix. A similarity graph models the local neighborhood relationships between data points; the matrix representation of this graph is the similarity matrix, which contains pairwise similarity values between connected nodes and can be represented by a Laplacian matrix. The algorithm represents the data in a lower-dimensional space in which they are then clustered. This dimensionality reduction is based on the eigenvectors of the Laplacian matrix: the columns are the eigenvectors corresponding to the k smallest eigenvalues, and they form a low-dimensional representation of the input data in a new space where the clusters are well separated [25]. The aim is to partition the data into clusters such that points in the same cluster are similar and points in different clusters are dissimilar [26]. The authors chose spectral clustering because it can produce accurate clustering results by solving for the eigenstructure of the Laplacian matrix. The method can be used for data of any shape and has the advantage of handling non-convex data distributions [27]. Since the data used in this research are mostly convex and sometimes non-convex, spectral clustering is a suitable unsupervised method for detecting different audiogram shapes.
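As a concrete illustration of these steps, the following MATLAB sketch (a minimal re-implementation under stated assumptions, not the exact routine used in this work) builds a Gaussian-kernel similarity matrix, forms the random-walk normalized Laplacian, and clusters the rows of the low-dimensional eigenvector embedding:

    % Minimal sketch of the spectral clustering steps described above.
    % X is an n-by-8 matrix of audiograms (dB HL at the test frequencies),
    % k the number of clusters, sigma a kernel scale (all assumed inputs).
    function idx = spectralSketch(X, k, sigma)
        n  = size(X, 1);
        W  = exp(-pdist2(X, X).^2 / (2*sigma^2));  % similarity matrix
        W(1:n+1:end) = 0;                          % remove self-loops
        Dg = diag(sum(W, 2));                      % degree matrix
        L  = eye(n) - Dg \ W;                      % random-walk Laplacian
        [V, E] = eig(L);
        [~, order] = sort(real(diag(E)));          % ascending eigenvalues
        U  = real(V(:, order(1:k)));               % k smallest eigenvectors
        idx = kmeans(U, k);                        % cluster the embedding
    end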
The authors started by clustering the data into seven clusters using spectral clustering. Two methods were used to construct the similarity matrix, namely the nearest-neighbors and radius-search methods. The Laplacian matrix was then generated and normalized with different methods, such as random-walk normalization and symmetric normalization. The produced seven clusters were checked by inspecting the eigenvalues and then visually, by plotting a scatter plot of the clusters. If all the eigenvalues were zero, or the plot did not indicate credible clusters, the method was discarded. The selected methods were assessed further by checking the eigenvalues once more; if they indicated a gap, the Silhouette coefficient was calculated for the seven clusters. This process was repeated to generate eight clusters and evaluate the model performance, and other numbers of clusters (9, 10, and 11) were also tested. The authors decided to start with seven clusters in order to detect as many audiogram shapes as possible for a future supervised machine learning model with good accuracy: the lower the number of audiogram clusters, the lower the expected prediction accuracy. On the other hand, the authors stopped at 11 clusters, as the Silhouette coefficient dropped significantly there. The algorithm steps are shown in Figure 2.
The authors then picked the two numbers of clusters with the highest Silhouette coefficients for further evaluation and compared them using the Silhouette coefficient, the Calinski-Harabasz criterion, and the Davies-Bouldin criterion. This was followed by generating an audiogram population, annotating it according to the produced clusters, and evaluating these clusters with the same three criteria, in order to test the clustering method on a large number of audiograms. Finally, the authors mapped the generated clusters to the Bisgaard selected levels to compare their clusters with the existing standards used in hearing aid measurements.
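A hedged MATLAB sketch of this selection loop (variable names are ours; X is assumed to be the matrix of audiogram thresholds, one audiogram per row):

    % Test 7-11 clusters (Figure 2) and keep the two cluster counts with
    % the highest mean silhouette coefficient.
    ks = 7:11;
    meanSil = zeros(size(ks));
    for i = 1:numel(ks)
        idx = spectralcluster(X, ks(i), 'SimilarityGraph', 'knn', ...
            'LaplacianNormalization', 'randomwalk', 'ClusterMethod', 'kmedoids');
        meanSil(i) = mean(silhouette(X, idx));
    end
    [~, order] = sort(meanSil, 'descend');
    bestTwo = ks(order(1:2));   % 8 and 10 for the data in this study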

3.2. Clustering Implementation

Spectral clustering is a well-established algorithm but can be carried out with many different input arguments. The authors tried many of them, and the trials were evaluated statistically. First, the similarity graphs were generated in two ways: from a fixed number of nearest neighbors, and from a search radius within which neighbors are connected. The similarity graphs were then represented by a Laplacian matrix, and the clustering results were evaluated for different forms of this matrix: without normalization, with random-walk normalization, and with symmetric normalization. Finally, two clustering methods (k-means and k-medoids) were tested to cluster the eigenvectors of the Laplacian matrix. In each case, the eigenvalues were checked and the silhouette coefficients were calculated for performance evaluation [28].
MATLAB was selected as the platform to perform spectral clustering. The similarity graph was generated using kernel nearest neighbors, which connects two points i and j when either i is a nearest neighbor of j or j is a nearest neighbor of i. The distances are calculated with the Euclidean formula and then transformed with a scaled kernel whose scale value is selected by a heuristic procedure. The method used to cluster the eigenvectors of the Laplacian matrix is k-medoids; a medoid is the most centrally located point, with minimum distance to the other points, and it is not influenced by outliers or extreme values [29,30]. Finally, the similarity graph is represented by the random-walk normalized Laplacian matrix.
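In MATLAB terms, this configuration corresponds to a spectralcluster call along the following lines (a sketch; reading the heuristic kernel-scale selection as the toolbox's 'auto' option is our assumption):

    % kNN similarity graph, heuristic kernel scale, random-walk normalized
    % Laplacian, and k-medoids on the eigenvector embedding. The optional
    % outputs V and D return the eigenvectors and eigenvalues used later
    % to look for an eigengap.
    [idx, V, D] = spectralcluster(X, k, ...
        'SimilarityGraph', 'knn', ...
        'KernelScale', 'auto', ...
        'LaplacianNormalization', 'randomwalk', ...
        'ClusterMethod', 'kmedoids');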

3.3. Clustering Performance Evaluation

Four criteria are calculated to find the best number of clusters and to evaluate the clustering method: the eigenvalues, the silhouette coefficients, the Calinski-Harabasz criterion, and the Davies-Bouldin criterion. For well-separated clusters, the first eigenvalues should ideally be zero or small. To determine the proper number of clusters, the number of clusters is increased gradually until a gap is observed in the eigenvalues [31]. If no such gap can be found, silhouette analysis is used to measure how well the data are clustered. This analysis produces a coefficient in the range [−1, 1]: values close to +1 indicate that a sample is far from the neighboring clusters, a value of 0 indicates that a sample is on or very close to the decision boundary between two neighboring clusters, and negative values indicate that a sample has been assigned to the wrong cluster. The silhouette index (SI) is the average of these coefficients; the closer to +1, the better the separation between clusters [32,33]. The Calinski-Harabasz index (CHI) is the ratio of the between-cluster sum of squared distances (from the cluster centers to the overall centroid) to the within-cluster sum of squared distances (from individual data points to their cluster center), scaled by the numbers of clusters and observations; the higher the value, the better the clustering model performs [34]. The Davies-Bouldin analysis calculates two quantities: the within-cluster variance and the distance between the centroids of different clusters. For each cluster, the nearest neighboring cluster is identified, and the sum of the two within-cluster variances is divided by the distance between the cluster centroids. The Davies-Bouldin index (DBI) is the average of these values; it ranges from zero to infinity, and the smaller the value, the better the separation between clusters [35]. The last three criteria are suitable for evaluating these clusters, as they give more accurate results for convex data [36]. The optimal number of clusters occurs at the highest Calinski-Harabasz and silhouette coefficients and the lowest Davies-Bouldin value [37].
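In MATLAB, all three summary criteria can be computed from the labels returned by the clustering step; a sketch, assuming X and idx as above:

    si  = mean(silhouette(X, idx));                  % higher is better
    chi = evalclusters(X, idx, 'CalinskiHarabasz');  % higher is better
    dbi = evalclusters(X, idx, 'DaviesBouldin');     % lower is better
    fprintf('SI = %.4f, CHI = %.4f, DBI = %.4f\n', ...
            si, chi.CriterionValues, dbi.CriterionValues);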

3.4. Data Sets Preprocessing

The authors used two sets of data. The first consists of 55 audiograms, and the second is generated using the Kutools for Excel add-in.

3.4.1. First Data Set

To apply spectral clustering to a large data set, two steps are needed [38]:
The first step is data reduction, mostly carried out with k-means clustering of the given data set. From each cluster, representative points, normally those near the cluster center, are picked, so that each cluster is represented by a reduced set [39,40]. Spectral clustering can then be applied to construct the similarity matrix and classify the reduced data into the final classes.
Bisgaard et al. [15] performed this data reduction on a database of 28,244 audiograms using vector quantization of size 60, and the authors of this paper applied spectral clustering to the resulting 60 audiograms, which can be found in Table A.1 of Bisgaard's work [15]. The set was further reduced by eliminating five audiograms that represent individuals with normal hearing; these levels are removed since the model is built for patients who experience hearing loss, to assist in configuring or designing hearing aids. The audiograms were measured in standard audiometry booths, with air conduction thresholds measured at 250 Hz, 500 Hz, 1000 Hz, 1500 Hz, 2000 Hz, 3000 Hz, 4000 Hz, 6000 Hz, and 8000 Hz.

3.4.2. Second Data Set

The authors generated a data set of the original size by repeating the 60 audiograms according to the frequencies of occurrence indicated in Table A.1 of Bisgaard's work. These percentages represent the share of the population audiograms that fall within a specified range around each of the 60 audiograms, where the range was decided by minimizing the Euclidean distance from each measured audiogram to its corresponding "typical" code-vector audiogram. Based on this training technique, the authors believe that repeating these audiograms gives a good representation and carries enough information about the original database. The tool used to generate this data set is the Kutools for Excel add-in.
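The generation itself reduces to repeating each code-vector audiogram in proportion to its population percentage; a MATLAB equivalent of what the Kutools utility produced might look like this (codebook and pct are assumed inputs):

    % codebook: matrix of retained code-vector audiograms (55-by-8 after
    % excluding the five normal-hearing ones); pct: their population
    % percentages from Table A.1 of [15].
    counts = round(pct(:) / 100 * 28244);       % copies per code vector
    population = repelem(codebook, counts, 1);  % 25,307-by-8 in this study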

4. Results and Discussion

The authors decided to consider a large number of clusters because of the nature of the data: the audiograms overlap heavily, which makes it difficult to detect the different shapes of patients' audiograms with a small number of clusters. In addition, the authors wanted to detect steeply sloping audiograms, since realizing and adjusting filters for them differs from a technical point of view. This section starts by finding the optimum number of clusters, then assesses the clustering method when applied to the audiogram population. Finally, the authors compare the generated clusters to the standard levels chosen by Bisgaard.

4.1. Finding the Optimum Number of Clusters

The Silhouette clustering evaluation criterion, whose values range from −1 to +1, was used to determine the best number of clusters. A positive value implies good clustering, and the best number of clusters is associated with the highest criterion values. The results, shown in Table 2, indicate that the best number of clusters is 8, followed by 10. The wrongly assigned audiograms were removed in two further consecutive stages, and the corresponding criteria values were recalculated, as shown in Table 2. The criteria values for 8 and 10 clusters are found to be close.
The following two subsections show and discuss the results of the different stages for 8 and 10 clusters. These stages are implemented to remove the wrongly assigned audiograms, i.e., those with a negative Silhouette coefficient; each stage can be expressed as dropping these audiograms and re-clustering, as in the sketch below.
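A hedged MATLAB sketch of one clean-up stage (our formulation of the procedure; X, idx, and k as above):

    % Remove wrongly assigned audiograms (negative silhouette coefficient)
    % and re-cluster the remaining set with the same configuration.
    s = silhouette(X, idx);
    X = X(s >= 0, :);
    idx = spectralcluster(X, k, 'SimilarityGraph', 'knn', ...
        'LaplacianNormalization', 'randomwalk', 'ClusterMethod', 'kmedoids');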

4.1.1. Eight Clusters Evaluation Criteria

The selected 55 audiograms are classified into 8 clusters using spectral clustering. The stage 1 silhouette plot indicates that seven audiograms are wrongly assigned, one of which lies on the boundary between two clusters (a very small negative Silhouette coefficient of −0.006072). In the second stage, the wrongly assigned audiograms are removed. The second-stage Silhouette plot shows one remaining wrongly assigned audiogram, which is removed in the third stage. The plots for the three stages are shown in Figure 3. To evaluate this number of clusters, the eigenvalues were generated, indicating no gap; the authors therefore calculated the Silhouette, Calinski-Harabasz, and Davies-Bouldin criterion values. The results are shown in Table 3: the first stage has 55 audiograms with SI = 0.3907, CHI = 36.7956, and DBI = 1.0427; stage 2 has 48 audiograms with SI = 0.4640, CHI = 38.5503, and DBI = 0.9670; and stage 3 has 47 audiograms with SI = 0.4814, CHI = 38.5476, and DBI = 0.9426. These results indicate that the best clustering performance is in stage 3, where SI is the highest and DBI is the lowest.

4.1.2. Ten Clusters Evaluation Criteria

Similarly, the selected 55 audiograms are classified into 10 clusters. The Silhouette plot indicates that four audiograms are wrongly assigned, and the first two stages remove them, as shown in Figure 4. In the second stage, the plot shows two audiograms on the border between two clusters (very small negative Silhouette coefficients of −0.004755 and −0.0007911). The third stage shows that all audiograms have positive Silhouette coefficients. The other clustering evaluation criteria values are listed in Table 4; the best clustering performance is in stage 3, where SI and CHI are the highest and DBI is the lowest.

4.2. Audiograms’ Population Clusters Evaluation

The clustering algorithm is applied to the data set generated as described in Section 3.4.2. The original data size was regenerated (25,307 audiograms) to represent the selected 55 audiograms with their associated percentages of the total population (28,244 audiograms). The authors then applied the spectral clustering algorithm, with the same input arguments used earlier, to this number of audiograms. Still, the analysis failed to generate a similarity matrix capable of clustering such large data, reflected by a strongly negative silhouette coefficient (−0.6012). This matches findings from the literature that the spectral clustering technique is not practical for large data sets [41,42,43,44]. Next, the authors annotated the generated 25,307 audiograms with the produced 8 and 10 clusters. The wrongly assigned audiograms, with negative Silhouette coefficients, were removed in stage 2, so that stage 3 contains only audiograms with positive Silhouette coefficients in their clusters. Hence, 20,956 audiograms were annotated using 8 clusters in stage 3, and 22,002 audiograms using 10 clusters. Figure 5 shows the Silhouette plots of stages 1 and 3 for 8 clusters, while Figure 6 shows stages 1 and 3 for 10 clusters. Table 5 summarizes the evaluation criteria values for both numbers of clusters. As can be seen, the SI values are higher than 0.5, which indicates good performance of the algorithm in both the 8-cluster and 10-cluster cases. At stage 3, the data are cleaned and the SI, CHI, and DBI have their best values; these criteria values are better for eight clusters.
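Because every population audiogram is a repetition of one of the 55 code vectors, the annotation reduces to inheriting the source audiogram's cluster label; a sketch under that assumption (idx, counts, and population as in the earlier sketches):

    % Each generated audiogram inherits the cluster label of the
    % code-vector audiogram it was repeated from.
    popLabels = repelem(idx, counts, 1);
    sPop = silhouette(population, popLabels);   % evaluate on the full set
    population = population(sPop >= 0, :);      % stage 2/3 clean-up
    popLabels  = popLabels(sPop >= 0);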

4.3. Mapping Bisgaard Standard Levels to the Implemented Clusters

The aim of this part is to compare the clustering results with the standard hearing levels chosen by Bisgaard [15], who selected seven flat and moderately sloping standard audiograms, named N1 to N7, and three steeply sloping ones, named S1 to S3 [15]. The eight clusters produced in this work map N1, S1, and N2 to the same cluster, N4 and N5 to the same cluster, and N6 and N7 to the same cluster, while N3, S2, and S3 fall in different clusters. For the 10 clusters, S1 and N2 map to the same cluster and N6 and N7 to the same cluster, while N1, N3, N4, N5, S2, and S3 are in different classes. The mapping results are shown in Figure 7, where the x-axis represents the 10 standards N1–N7 and S1–S3 and the y-axis is the cluster number in our work. These mapping results are also displayed in Table 6 for further clarification and comparison.

5. Results Summary and Conclusions

A comparison between the results for 8 and 10 clusters is summarized in Table 7. As shown there, the criteria values are slightly better for 8 clusters than for 10: the Silhouette coefficients and Calinski-Harabasz values are higher for eight clusters, while the Davies-Bouldin values are lower. For the population audiograms, the Silhouette and Davies-Bouldin values are better for eight clusters, but the Calinski-Harabasz value is better for 10 clusters. This can be explained by the Calinski-Harabasz criterion being the most sensitive to the number of observations used to calculate it [45]: the number of audiograms considered in stage 3 is 20,957 for 8 clusters and 22,002 for 10 clusters (as shown in Table 7). The eigenvalues are small, but no gap is indicated that would confirm the choice between 8 and 10 clusters. Since the number of population audiograms considered in the last stage is higher for 10 clusters than for 8, the 10-cluster solution can be preferred, as more patients' audiogram shapes are covered. The Silhouette coefficients of the audiogram population are higher than 0.5 for both numbers of clusters, which suggests good clustering.
The attempt by Belitz et al. [20] to classify audiograms for hearing aid adjustment has low accuracy: 68% when one configuration was assigned to each audiogram. We believe their accuracy might have increased had they considered a higher number of clusters, given the highly overlapping nature of the data. This matches the finding in this research that the data cluster best into 8 or 10 classes.
This work can be considered a first step toward changing the way hearing aid filter banks are designed. Existing filter bank designs use digital filters with different techniques to divide the entire frequency band (125 Hz–8 kHz) non-uniformly and then apply gain controls to configure the hearing aid to match the patient's audiogram. Current practice aims to design digital filters that can match multiple patients' audiograms, which leads to very complex designs, as in [6,7,8,9]. Such designs lower the cost of manufacturing hearing aids, since they can be produced on a large scale to accommodate multiple users. However, complex designs require high operating power and a large chip area, which leads to improperly fitted hearing aids. With reduced complexity, a hearing aid prototype can match a limited number of audiograms effectively; a low-complexity design also makes the device properly fitted, as it does not require a large implementation area thanks to its small number of filter coefficients [4,8]. The hardware complexity of filter bank structures is normally measured by the components needed to realize the filters (multipliers, adders, and shifters), although many studies consider only multipliers, as they are the most power-consuming elements in digital signal processing (DSP) hardware [46]. To summarize, current practice tries to cover the hearing frequency band with a large number of filters using complex techniques. Another consequence of these complex designs is that the process of adjusting the hearing aid becomes difficult for both the patients and the audiologists. Instead of attempting to match different types of hearing loss with one design that satisfies the needs of many patients in order to lower manufacturing costs, designs can be implemented according to the categories produced by our intelligent solution: the filter bank can be designed to match the shapes of a few of these clusters rather than all of them. This will result in designs that are less complex, with low delay, a small chip area, and reduced cost. In addition, these clusters will facilitate the process of programming or adjusting the hearing aid to match the user's needs by assigning each patient's audiogram a configuration related to the produced clusters.
Consequently, configuring a hearing aid will be easier and less exhausting for patients and audiologists, as it will require less response from the patients. The power of intelligent solutions does not depend on the skills, experience, and knowledge of a limited number of experienced audiologists. Because it requires less response from the patients, it will also be a great help to cohorts such as older people, individuals with dementia, and children who experience hearing loss. In addition, this method can be applied to any set of test frequencies, as it is not restricted to the set used in this study: the data can be pre-processed so that any missing frequency is interpolated. The needed input is the hearing levels tested at eight different frequencies, which can vary according to the protocol used in the hearing test. What is considered in this study is the air conduction thresholds at eight test frequencies, with or without masking in the non-test ear; bone conduction thresholds are not considered.
To conclude, the authors do not rely only on rigid statistical analysis; the results should be interpreted in light of the solution that needs to be introduced. The authors prefer the 10-cluster solution, since more shapes of patients' audiograms are included. In addition, the authors expect that grouping the standard levels S1, N1, N2 and N4, N5 in the same clusters might be a source of confusion for a future supervised machine learning algorithm. Given the highly overlapping nature of the data, 10 clusters might therefore produce a more accurate supervised model, as introducing more clusters can help resolve this problem.
For future work, the authors recommend using regression analysis to generate one polynomial that represents each cluster. These polynomials would be fitted by least-squares regression to minimize the difference between the audiograms in a cluster and the predicted polynomial (as carried out in [47,48]).

Author Contributions

Conceptualization, A.E. and A.M.A.; methodology, A.E., H.A.R. and A.M.A.; software, A.E.; validation, A.E. and H.A.R.; formal analysis, N.A. and M.A.; investigation, N.A. and M.A.; resources, M.N.M.Y.; data curation, A.E.; writing—original draft preparation, A.E. and A.M.A.; writing—review and editing, H.A.R. and A.M.A.; visualization, N.A.; supervision, H.A.R., M.J. and T.S.; project administration, M.N.M.Y. and S.S.; funding acquisition, H.A.R. and S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This project was partially funded by the Universiti Malaysia Sabah graduate students scheme (UMS GREAT), project number GUGO237-1/2018.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Approval to reuse the data from Table A1 from paper [15] is obtained from SAGE Publishing at no cost for the life of the research work. The permission is obtained on 2 September 2021 via email for request RP-6079.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. WHO. Deafness and Hearing Loss. Available online: https://www.who.int/news-room/fact-sheets/detail/deafness-and-hearing-loss (accessed on 1 April 2020).
  2. Bagozzi, D. WHO Calls on Private Sector to Provide Affordable Hearing Aids in Developing World. Available online: https://www.who.int/news/item/11-07-2001-who-calls-on-private-sector-to-provide-affordable-hearing-aids-in-developing-world (accessed on 15 June 2021).
  3. Whelan, C. What to Know about Hearing Aid Costs. Available online: https://www.healthline.com/health/cost-of-hearing-aids#a-quick-look-at-costs (accessed on 25 February 2020).
  4. Abdul, A.; Bindiya, T.S.; Elias, E. Design and implementation of reconfigurable filter bank structure for low complexity hearing aids using 2-level sound wave decomposition. Biomed. Signal Process. Control 2018, 43, 96–109. [Google Scholar]
  5. Wei, Y.; Wang, Y. Design of Low Complexity Adjustable Filter Bank for Personalized Hearing Aid Solutions. IEEE/ACM Trans. Audio Speech Lang. Process. 2015, 23, 923–931. [Google Scholar] [CrossRef]
  6. Huang, S.; Tian, L.; Ma, X.; Wei, Y. A Reconfigurable Sound Wave Decomposition Filterbank for Hearing Aids Based on Nonlinear Transformation. IEEE Trans. Biomed. Circuits Syst. 2016, 10, 487–496. [Google Scholar] [CrossRef] [PubMed]
  7. Haridas, N.; Elias, E. Design of reconfigurable low-complexity digital hearing aid using Farrow structure based variable bandwidth filters. J. Appl. Res. Technol. 2016, 14, 154–165. [Google Scholar] [CrossRef]
  8. Indrakanti, R.; Haridas, N.; Elias, E. High performance continuous variable bandwidth digital filter design for hearing aid application. AEU—Int. J. Electron. Commun. 2018, 92, 36–53. [Google Scholar] [CrossRef]
  9. Abdul, A.; Bindiya, T.S.; Elias, E. Low-complexity implementation of efficient reconfigurable structure for cost-effective hearing aids using fractional interpolation. Comput. Electr. Eng. 2019, 74, 391–412. [Google Scholar]
  10. Chong, K.S.; Gwee, B.H.; Chang, J.S. A 16-Channel Low-Power Nonuniform Spaced Filter Bank Core for Digital Hearing Aids. IEEE Trans. Circuits Syst. II Express Briefs 2006, 53, 853–857. [Google Scholar] [CrossRef]
  11. Girish, G.K.; Pinjare, S.L. Audiogram equalizer using fast fourier transform. In Proceedings of the 2016 International Conference on Signal Processing, Communication, Power and Embedded System (SCOPES), Paralakhemundi, India, 3–5 October 2016; pp. 1877–1881. [Google Scholar]
  12. Charih, F.; Bromwich, M.; Mark, A.E.; Lefrancois, R.; Green, J.R. Data-Driven Audiogram Classification for Mobile Audiometry. Sci. Rep. 2020, 10, 3962. [Google Scholar] [CrossRef] [Green Version]
  13. Crowson, M.G.; Lee, J.W.; Hamour, A.; Mahmood, R.; Babier, A.; Lin, V.; Tucci, D.L.; Chan, T.C.Y. AutoAudio: Deep Learning for Automatic Audiogram Interpretation. J. Med. Syst. 2020, 44, 163. [Google Scholar] [CrossRef]
  14. Liang, R.; Guo, R.; Xi, J.; Xie, Y.; Zhao, L. Self-Fitting Algorithm for Digital Hearing Aid Based on Interactive Evolutionary Computation and Expert System. Appl. Sci. 2017, 7, 272. [Google Scholar] [CrossRef] [Green Version]
  15. Bisgaard, N.; Vlaming, M.S.; Dahlquist, M. Standard audiograms for the IEC 60118-15 measurement procedure. Trends Amplif. 2010, 14, 113–120. [Google Scholar] [CrossRef] [PubMed]
  16. Clason, D. Understanding the Degrees of Hearing Loss. Available online: https://www.healthyhearing.com/report/41775-Degrees-of-hearing-loss (accessed on 4 April 2020).
  17. BSA FAQs. British Society of Audiology. Available online: https://www.thebsa.org.uk/public-engagement/faqs/ (accessed on 20 March 2020).
  18. Rahne, T.; Buthut, F.; Plossl, S.; Plontke, S.K. A software tool for puretone audiometry. Classification of audiograms for inclusion of patients in clinical trials. English version. HNO 2016, 64 (Suppl. S1), S1–S6. [Google Scholar] [CrossRef] [Green Version]
  19. Sanchez Lopez, R.; Bianchi, F.; Fereczkowski, M.; Santurette, S.; Dau, T. Data-Driven Approach for Auditory Profiling and Characterization of Individual Hearing Loss. Trends Hear. 2018, 22, 2331216518807400. [Google Scholar] [CrossRef]
  20. Belitz, C.; Ali, H.; Hansen, J.H. A Machine Learning Based Clustering Protocol for Determining Hearing Aid Initial Configurations from Pure-Tone Audiograms. Interspeech 2019, 2019, 2325–2329. [Google Scholar] [PubMed] [Green Version]
  21. Charih, F.; Bromwich, M.; Lefrancois, R.; Mark, A.E.; Green, J.R. Mining Audiograms to Improve the Interpretability of Automated Audiometry Measurements. In Proceedings of the 2018 IEEE International Symposium on Medical Measurements and Applications (MeMeA), Rome, Italy, 11–13 June 2018; pp. 1–6. [Google Scholar]
  22. Musiba, Z. Classification of audiograms in the prevention of noise-induced hearing loss: A clinical perspective. S. Afr. J. Commun. Disord. 2020, 67, e1–e5. [Google Scholar] [CrossRef] [PubMed]
  23. Cruickshanks, K.J.; Nondahl, D.M.; Fischer, M.E.; Schubert, C.R.; Tweed, T.S. A Novel Method for Classifying Hearing Impairment in Epidemiological Studies of Aging: The Wisconsin Age-Related Hearing Impairment Classification Scale. Am. J. Audiol. 2020, 29, 59–67. [Google Scholar] [CrossRef] [PubMed]
  24. Nascimento, M.C.V.; de Carvalho, A.C.P.L.F. Spectral methods for graph clustering—A survey. Eur. J. Oper. Res. 2011, 211, 221–231. [Google Scholar] [CrossRef]
  25. Matlab Spectralcluster. Available online: https://se.mathworks.com/help/stats/spectralcluster.html#mw_d30c2539-9b01-4ee2-a5f6-9018ca8021e0 (accessed on 1 April 2021).
  26. Chen, Y.; Li, X.; Liu, J.; Xu, G.; Ying, Z. Exploratory Item Classification Via Spectral Graph Clustering. Appl. Psychol. Meas. 2017, 41, 579–599. [Google Scholar] [CrossRef]
  27. Fu, L.L.; Liu, Y.L.; Hao, L.J. Research on Spectral Clustering. Appl. Mech. Mater. 2014, 687–691, 1350–1353. [Google Scholar] [CrossRef]
  28. Aggarwal, C.C.; Reddy, C.K. Data Clustering: Algorithms and Applications; CRC Press LLC: Philadelphia, PA, USA, 2013. [Google Scholar]
  29. Jin, X.; Han, J. K-Medoids Clustering. In Encyclopedia of Machine Learning; Sammut, C., Webb, G.I., Eds.; Springer: Boston, MA, USA, 2010; pp. 564–565. [Google Scholar]
  30. Shiledarbaxi, N. Comprehensive Guide to K-Medoids Clustering Algorithm. Analytics India Magazine. Available online: https://analyticsindiamag.com/comprehensive-guide-to-k-medoids-clustering-algorithm/ (accessed on 25 April 2021).
  31. von Luxburg, U. A tutorial on spectral clustering. Stat. Comput. 2007, 17, 395–416. [Google Scholar] [CrossRef]
  32. Nidheesh, N.; Nazeer, K.A.A.; Ameer, P.M. A Hierarchical Clustering algorithm based on Silhouette Index for cancer subtype discovery from genomic data. Neural Comput. Appl. 2019, 32, 11459–11476. [Google Scholar] [CrossRef]
  33. Shutaywi, M.; Kachouie, N.N. Silhouette Analysis for Performance Evaluation in Machine Learning with Applications to Clustering. Entropy 2021, 23, 756. [Google Scholar] [CrossRef]
  34. Liu, E. Calinski-Harabasz Index and Boostrap Evaluation with Clustering Methods. Available online: https://ethen8181.github.io/machine-learning/clustering_old/clustering/clustering.html (accessed on 15 July 2021).
  35. Rhys, H. Machine Learning with R, the Tidyverse, and mlr. Manning: Shelter Island, NY, USA, 2020; p. 536. [Google Scholar]
  36. Wei, H. How to Measure Clustering Performances When There Are No Ground Truth? Available online: https://medium.com/@haataa/how-to-measure-clustering-performances-when-there-are-no-ground-truth-db027e9a871c (accessed on 1 June 2021).
  37. Piao Tan, M.; Floudas, C.A. Determining the Optimal Number of Clusters. In Encyclopedia of Optimization; Floudas, C.A., Pardalos, P.M., Eds.; Springer: Boston, MA, USA, 2009; pp. 687–694. [Google Scholar]
  38. Dudek, A. Evaluation of Two-Step Spectral Clustering Algorithm for Large Untypical Data Sets. In Data Analysis and Classification; Springer: Cham, Switzerland, 2021; pp. 3–9. [Google Scholar]
  39. Li, T.; Zhang, Y.; Liu, H.; Xue, G.; Liu, L. Fast Compressive Spectral Clustering for Large-Scale Sparse Graph. IEEE Trans. Big Data 2019, 1. [Google Scholar] [CrossRef]
  40. Shinnou, H.; Sasaki, M. Spectral Clustering for a Large Data Set by Reducing the Similarity Matrix Size. In The International Conference on Language Resources and Evaluation; European Language Resources Association (ELRA): Marrakech, Morocco, 2008. [Google Scholar]
  41. Taşdemir, K. Vector quantization based approximate spectral clustering of large datasets. Pattern Recognit. 2012, 45, 3034–3044. [Google Scholar] [CrossRef]
  42. Langone, R.; Suykens, J.A.K. Fast kernel spectral clustering. Neurocomputing 2017, 268, 27–33. [Google Scholar] [CrossRef]
  43. He, L.; Ray, N.; Guan, Y.; Zhang, H. Fast Large-Scale Spectral Clustering via Explicit Feature Mapping. IEEE Trans. Cybern. 2019, 49, 1058–1071. [Google Scholar] [CrossRef] [PubMed]
  44. Chen, G. A general framework for scalable spectral clustering based on document models. Pattern Recognit. Lett. 2019, 125, 488–493. [Google Scholar] [CrossRef]
  45. Hubert, L.; Arabie, P. Comparing partitions. J. Classif. 1985, 2, 193–218. [Google Scholar] [CrossRef]
  46. Bindima, T.; Elias, E. A novel design and implementation technique for low complexity variable digital filters using multi-objective artificial bee colony optimization and a minimal spanning tree approach. Eng. Appl. Artif. Intell. 2017, 59, 133–147. [Google Scholar] [CrossRef]
  47. Elkhouly, A.; Rahim, H.A.; Abdulaziz, N.; Abd Malek, M.F. Modelling Audiograms for People with Dementia Who Experience Hearing Loss Using Multiple Linear Regression Method. In Proceedings of the 2020 International Conference on Communications, Computing, Cybersecurity, and Informatics (CCCI), Sharjah, United Arab Emirates, 3–5 November 2020; pp. 1–4. [Google Scholar]
  48. Ahmad, M.I.; Husin, Z.; Ahmad, R.B.; Rahim, H.A.; Hassan, M.S.A.; Md Isa, M.N. FPGA based control IC for multilevel inverter. In Proceedings of the 2008 International Conference on Computer and Communication Engineering, Kuala Lumpur, Malaysia, 13–15 May 2008; pp. 319–322. [Google Scholar] [CrossRef]
Figure 1. Hearing aid main stages.
Figure 2. Architecture of the algorithm used to determine the proper number of clusters.
Figure 3. Silhouette plots for eight clusters.
Figure 4. Silhouette plots for 10 clusters.
Figure 5. Silhouette plots for 8 clusters: 25,307 audiograms in stage 1 and 20,956 audiograms in stage 3.
Figure 6. Silhouette plots for 10 clusters: 25,307 audiograms in stage 1 and 22,002 audiograms in stage 3.
Figure 7. Standard set chosen by Bisgaard [15] mapped to the generated clusters.
Table 1. Summary of audiogram classifiers.

Reference: Sanchez Lopez et al. [19]
Classification technique:
  • A two-stage classifier: unsupervised machine learning in the first stage, followed by supervised learning.
  • Uses different hearing tests to classify hearing loss into four types related to sensitivity and clarity loss.
Limitations:
  • It used different types of hearing tests, not only audiograms, to classify the data and detect the type of hearing loss.

Reference: Belitz [20]
Classification technique:
  • A two-step audiogram classifier; the first step is unsupervised learning to cluster audiograms into four pre-set hearing aid configurations.
  • Second, audiograms are mapped to these four configurations with supervised learning.
Limitations:
  • The supervised learning algorithm gives low accuracy when one configuration is assigned to each audiogram; the accuracy improves significantly when two configuration possibilities are allowed per audiogram.
  • The data are clustered into four classes, which are not enough to describe the different shapes of patients' audiograms.

Reference: F. Charih et al. [12]
Classification technique:
  • Applied supervised learning to 270 audiograms annotated by three experts in the field.
  • Audiograms are classified concisely in terms of shape, severity, and symmetry.
Limitations:
  • A limited number of audiograms is used as the training data set.
  • The classifier outputs only a concise description of the audiograms.

Reference: Musiba [22]
Classification technique:
  • Used the sum of the hearing levels at frequencies 1–6 kHz to classify the data.
  • Data are classified into four groups to assess hearing ability: acceptable hearing ability, mild hearing impairment, poor hearing, and rapid hearing loss.
Limitations:
  • The output of the classification process is the hearing ability, and experts in the field perform the classification, so the output depends on the experience and skills of the annotator.

Reference: Cruickshanks et al. [23]
Classification technique:
  • A longitudinal study observing the change of audiogram shape over time.
  • The audiograms are classified into eight levels, and the change in hearing ability over time was recorded.
Limitations:
  • The findings relate to the change in patients' audiograms during the follow-up period.
  • Experts in the field did the classification, so the classes depend on their knowledge.

Reference: Crowson et al. [13]
Classification technique:
  • Used a deep learning convolutional neural network to classify audiograms formatted as JPEG images.
  • The audiograms are classified to categorize hearing loss into four classes: normal hearing, sensorineural, conductive, and mixed hearing loss.
Limitations:
  • The outputs of this classifier are hearing loss types that identify the cause of hearing loss, so these classes cannot be used to help with hearing aid design or configuration.
Table 2. Silhouette criteria clustering values, stage 1 to stage 3, for different numbers of clusters (testing 7–11 clusters).

No. of Clusters | Criterion Values Stage 1 | Criterion Values Stage 2 | Criterion Values Stage 3
7  | 0.35804 | 0.3985 | 0.4100
8  | 0.39068 | 0.4319 | 0.4598
9  | 0.35318 | 0.3924 | 0.4150
10 | 0.38141 | 0.4239 | 0.4461
11 | 0.31811 | 0.3650 | 0.3350
Table 3. Different clustering evaluation criteria for eight clusters.

Criterion | Stage 1 | Stage 2 | Stage 3
No. of audiograms | 55 | 48 | 47
Silhouette Criterion Values | 0.3907 | 0.4640 | 0.4814
Calinski-Harabasz Criterion Values | 36.7956 | 38.5503 | 38.5476
Davies-Bouldin Criterion Values | 1.0427 | 0.9670 | 0.9426
Eigenvalues | 0 | 0 | 0
 | 0.0502 | 0.0364 | 0.0357
 | 0.1313 | 0.0842 | 0.0795
 | 0.2077 | 0.1636 | 0.1548
 | 0.2749 | 0.2667 | 0.2727
 | 0.3700 | 0.2978 | 0.3163
 | 0.4163 | 0.3735 | 0.3914
 | 0.5062 | 0.4122 | 0.4137
Table 4. Different clustering evaluation criteria for 10 clusters.

Criterion | Stage 1 | Stage 2 | Stage 3
No. of audiograms | 55 | 51 | 49
Silhouette Criterion Values | 0.3814 | 0.4239 | 0.4461
Calinski-Harabasz Criterion Values | 34.612 | 37.3905 | 37.3974
Davies-Bouldin Criterion Values | 1.0424 | 1.0329 | 1.0234
Eigenvalues | 0 | 0 | 0
 | 0.0503 | 0.0284 | 0.0305
 | 0.1312 | 0.1010 | 0.0991
 | 0.2078 | 0.1735 | 0.1403
 | 0.2750 | 0.2665 | 0.2790
 | 0.3693 | 0.3155 | 0.3239
 | 0.4153 | 0.3589 | 0.3620
 | 0.5060 | 0.4045 | 0.4607
 | 0.5686 | 0.5086 | 0.5045
 | 0.6261 | 0.5624 | 0.5625
Table 5. Different clustering evaluation criteria for the audiogram population.

Criterion | 8 Clusters, Stage 1 | 8 Clusters, Stage 3 | 10 Clusters, Stage 1 | 10 Clusters, Stage 3
Silhouette coefficient | 0.4506 | 0.5616 | 0.4750 | 0.5507
Calinski-Harabasz Value | 1.8447 × 10^4 | 1.9315 × 10^4 | 1.8026 × 10^4 | 1.9870 × 10^4
Davies-Bouldin Value | 1.0751 | 0.9189 | 1.0673 | 1.0404
Table 6. Standard levels chosen by Bisgaard mapped to the generated clusters.

Bisgaard Standard Level | Cluster Number in the Generated 8 Clusters | Cluster Number in the Generated 10 Clusters
N1 | 1 | 8
N2 | 1 | 4
N3 | 5 | 1
N4 | 4 | 5
N5 | 4 | 6
N6 | 6 | 3
N7 | 6 | 3
S1 | 1 | 4
S2 | 2 | 10
S3 | 8 | 2
(no standard level) | 3 | 7
(no standard level) | 7 | 9
Table 7. Summary of different clustering evaluation criteria.

Criterion | 8 Clusters: Stage 1 | Stage 2 | Stage 3 | 10 Clusters: Stage 1 | Stage 2 | Stage 3
No. of audiograms | 55 | 48 | 47 | 55 | 51 | 49
Silhouette Criterion Values 1 | 0.3907 | 0.4640 | 0.4814 | 0.3814 | 0.4239 | 0.4461
Calinski-Harabasz Criterion Values 1 | 36.7956 | 38.5503 | 38.5476 | 34.6120 | 37.3905 | 37.3974
Davies-Bouldin Criterion Values 2 | 1.0427 | 0.9670 | 0.9426 | 1.0424 | 1.0329 | 1.0234
No. of classified population audiograms | 25,307 | 21,663 | 20,957 | 25,307 | 23,273 | 22,002
Silhouette Criterion Values for population | 0.4506 | – | 0.5616 | 0.4750 | – | 0.5507
Calinski-Harabasz Criterion Values for population | 1.8447 × 10^4 | – | 1.9315 × 10^4 | 1.8026 × 10^4 | – | 1.9870 × 10^4
Davies-Bouldin Criterion Values for population | 1.0751 | – | 0.9189 | 1.0673 | – | 1.0404
Eigenvalues | 0 | 0 | 0 | 0 | 0 | 0
 | 0.0502 | 0.0364 | 0.0357 | 0.0503 | 0.0284 | 0.0305
 | 0.1313 | 0.0842 | 0.0795 | 0.1312 | 0.1010 | 0.0991
 | 0.2077 | 0.1636 | 0.1548 | 0.2078 | 0.1735 | 0.1403
 | 0.2749 | 0.2667 | 0.2727 | 0.2750 | 0.2665 | 0.2790
 | 0.3700 | 0.2978 | 0.3163 | 0.3693 | 0.3155 | 0.3239
 | 0.4163 | 0.3735 | 0.3914 | 0.4153 | 0.3589 | 0.3620
 | 0.5062 | 0.4122 | 0.4137 | 0.5060 | 0.4045 | 0.4607
 | – | – | – | 0.5686 | 0.5086 | 0.5045
 | – | – | – | 0.6261 | 0.5624 | 0.5625
1 Max for an optimal number of clusters. 2 Min for an optimal number of clusters.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
