Wheezing Sound Separation Based on Informed Inter-Segment Non-Negative Matrix Partial Co-Factorization
Abstract
1. Introduction
2. Background
2.1. Non-Negative Matrix Factorization
2.2. Non-Negative Matrix Partial Co-Factorization
3. Proposed Method
3.1. Time-Frequency Signal Representation
3.2. Wheezing Sound Separation Using Informed Inter-Segment NMPCF
- (i) RS are often characterized by similar spectral patterns that represent a wideband noise spectrum showing time and frequency smoothness [32]. In this way, a respiratory training signal can be useful to replicate these similar RS spectro-temporal behaviours observed in most subjects.
- (ii) In addition, RS can be considered repetitive events in human breathing, so RS can be modelled by means of common spectral patterns found throughout all breathing stages (segments); that is, some basis vectors can be shared during the inter-segment analysis due to the repeatability of RS. If we divide the input mixture spectrogram $\mathbf{X}$ into segments $\mathbf{X}_1, \mathbf{X}_2, \dots, \mathbf{X}_L$, we obtain $L$ segments from the given mixture that share common spectral patterns. For this purpose, we have used AMIE_SEG [53], which automatically segments the mixture spectrogram into inspiratory and expiratory stages.
- (iii) However, WS can be present or absent in the respiratory stages due to the pulmonary disorder. Therefore, we can define a per-segment indicator to distinguish between non-wheezing and wheezing segments, where $l$ refers to the segment identifier of the mixture spectrogram $\mathbf{X}$. In the case of wheezing segments, the spectral patterns of both RS and WS are present. For this reason, we propose to weight the importance of wheezing and non-wheezing segments in the conventional NMPCF decomposition to improve the wheezing sound separation performance. The classification between non-wheezing and wheezing segments is provided by a wheezing detection algorithm previously developed by the authors [54].
- (a) According to the estimated basis matrix, $\mathbf{W}_W$ (wheezing) or $\mathbf{W}_R$ (respiratory), the weighting factor can be classified as $\alpha$ or $\beta$, respectively. As mentioned above, WS are always overlapped with RS, so we assume that no segment models the behaviour of WS better than any other. However, RS can be found isolated in some segments of human breathing due to the unpredictable nature of the pulmonary disorder. In this case, the segments in which WS are not contained are more relevant for modelling the behaviour of RS. In this manner, $\alpha$ takes the same value for all segments, while $\beta$ varies depending on the type of segment, wheezing or non-wheezing, being analysed. In addition, the values assigned to the weighting factors must satisfy $\beta > \alpha$ (see Section 4.4), since RS are always present in all segments of the input mixture while WS may not be.
- (b) Focusing on the type of segment, the weighting factor $\beta$ can be classified as $\beta_{nw}$ or $\beta_{w}$: $\beta_{nw}$ is associated with the non-wheezing segments and $\beta_{w}$ with the wheezing segments. This allows greater importance to be given to the non-wheezing segments for the modelling of the respiratory bases $\mathbf{W}_R$. As a consequence, the values assigned to the weighting factors must satisfy $\beta_{nw} > \beta_{w}$ (see Section 4.4). A small helper mapping detector labels to these weights is sketched right after this list.
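As an illustration only, the helper below maps the output of the wheezing detector [54] to per-segment weights; the function name and default values are assumptions, not the paper's optimal settings (those are determined in Section 4.4).

```python
def segment_weights(is_wheezing, beta_nw=1.0, beta_w=0.5):
    """Per-segment weights for modelling the respiratory bases W_R.

    Non-wheezing segments get the larger weight (beta_nw > beta_w), since
    they contain RS uncontaminated by WS. Default values are placeholders.
    """
    return [beta_w if flag else beta_nw for flag in is_wheezing]

# Example: a detector labels segments 3 and 6 out of L = 6 as wheezing.
weights = segment_weights([False, False, True, False, False, True])
```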
Algorithm 1 Wheezing sound separation using IIS-NMPCF.
Require: the segmented mixture spectrograms $\mathbf{X}_1, \dots, \mathbf{X}_L$, the respiratory training spectrogram $\mathbf{Y}$, the segment labels provided by the wheezing detector [54], the numbers of wheezing and respiratory bases, the weighting factors $\alpha$, $\beta_{nw}$ and $\beta_{w}$, and the number of iterations M.
return the estimated wheezing and respiratory magnitude spectrograms.
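To make the structure of Algorithm 1 concrete, here is a minimal numerical sketch of a segment-weighted partial co-factorization of this kind, assuming a Frobenius-norm cost $\sum_l \beta_l \lVert \mathbf{X}_l - \mathbf{W}_W\mathbf{H}_{W,l} - \mathbf{W}_R\mathbf{H}_{R,l} \rVert_F^2 + \lVert \mathbf{Y} - \mathbf{W}_R\mathbf{H}_Y \rVert_F^2$ with multiplicative updates; the function name, rank defaults and the cost choice are assumptions, not the authors' exact formulation.

```python
import numpy as np

def iis_nmpcf_sketch(segments, beta, Y=None, kw=32, kr=64, n_iter=200,
                     eps=1e-9, seed=0):
    """Segment-weighted NMPCF: W_W and W_R are shared across all segments,
    H_{W,l} and H_{R,l} are segment-specific, and the optional training
    spectrogram Y co-factorizes with the shared respiratory bases W_R."""
    rng = np.random.default_rng(seed)
    L, F = len(segments), segments[0].shape[0]
    Ww = rng.random((F, kw)) + eps
    Wr = rng.random((F, kr)) + eps
    Hw = [rng.random((kw, X.shape[1])) + eps for X in segments]
    Hr = [rng.random((kr, X.shape[1])) + eps for X in segments]
    Hy = rng.random((kr, Y.shape[1])) + eps if Y is not None else None

    for _ in range(n_iter):
        V = [Ww @ Hw[l] + Wr @ Hr[l] + eps for l in range(L)]
        for l, X in enumerate(segments):
            # Activation updates: the segment weight beta_l cancels out here.
            Hw[l] *= (Ww.T @ X) / (Ww.T @ V[l] + eps)
            Hr[l] *= (Wr.T @ X) / (Wr.T @ V[l] + eps)
        V = [Ww @ Hw[l] + Wr @ Hr[l] + eps for l in range(L)]
        # Basis updates: non-wheezing segments (larger beta_l) dominate.
        num_w = sum(beta[l] * (segments[l] @ Hw[l].T) for l in range(L))
        den_w = sum(beta[l] * (V[l] @ Hw[l].T) for l in range(L)) + eps
        Ww *= num_w / den_w
        num_r = sum(beta[l] * (segments[l] @ Hr[l].T) for l in range(L))
        den_r = sum(beta[l] * (V[l] @ Hr[l].T) for l in range(L))
        if Y is not None:
            # Training term: Y shares the respiratory dictionary W_R.
            Hy *= (Wr.T @ Y) / (Wr.T @ (Wr @ Hy) + eps)
            num_r += Y @ Hy.T
            den_r += (Wr @ Hy) @ Hy.T
        Wr *= num_r / (den_r + eps)
    return Ww, Wr, Hw, Hr
```

From the factorized parts, the WS and RS spectrograms of each segment can then be recovered, e.g., with Wiener-style masks applied to $\mathbf{X}_l$, as is common in NMF-based separation.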
4. Experimental Results
4.1. Dataset and Metric
4.2. Experiments Setup
4.3. Comparison Methods
- A training signal $y[n]$, created to simulate the behaviour of RS, is used in the baseline methods SNMF, SSNMF, 1S-NMPCF, 2S-NMPCF and ST-NMPCF, and in the proposed method IIS-NMPCF. The training signal has been created by randomly concatenating a set of normal respiratory stages, composed only of RS, obtained from the previously mentioned Internet pulmonary repositories [56,57,58,59,60,61,62,63,64,65,66,67,68]. Specifically, the signal has a temporal duration of 128 s and contains 54 respiratory stages (inspirations or expirations). Note that the normal respiratory stages used to construct $y[n]$ do not correspond to any of the respiratory stages used in the databases P1 or T1.
- SNMF and 2S-NMPCF must also use a training signal that simulates the behaviour of wheezing sounds. Taking into account that WS can be defined as continuous adventitious sounds that show a pitched character (see Section 1), this signal has been created by concatenating a set of single pitches located along the frequency band 100 Hz–1000 Hz, in which WS are typically present. Each pitch is represented by a sinusoidal signal multiplied by a Hamming window of N samples, and the spacing between consecutive pitch frequencies equals the spectral resolution of the model. Considering that all evaluated methods use the same parameters previously mentioned in Section 4.2, this spectral spacing equals 4 Hz (a sketch of such a signal generator is shown after this list).
- T-NMPCF and ST-NMPCF, as well as IIS-NMPCF, have been implemented using AMIE_SEG [53] to divide the input spectrogram into the segments $\mathbf{X}_1, \mathbf{X}_2, \dots, \mathbf{X}_L$.
- CNMF has been evaluated using its optimal parameters found in [32].
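A minimal sketch of how such a pitched wheezing training signal can be synthesized is shown below; the sampling rate and window length are assumed values, since the text only fixes the 100 Hz–1000 Hz band and the 4 Hz spacing.

```python
import numpy as np

def wheeze_training_signal(fs=8000, n_win=2048, f_lo=100.0, f_hi=1000.0,
                           spacing=4.0):
    """Concatenate Hamming-windowed sinusoids, one per candidate WS pitch.

    fs and n_win are assumptions; spacing follows the 4 Hz spectral
    resolution mentioned in Section 4.3."""
    window = np.hamming(n_win)
    t = np.arange(n_win) / fs
    freqs = np.arange(f_lo, f_hi + spacing, spacing)  # 100, 104, ..., 1000 Hz
    return np.concatenate([window * np.sin(2 * np.pi * f * t) for f in freqs])
```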
4.4. Optimization
4.5. Results and Discussion
- The decrease in SNR significantly affects the SDR and SIR results for both WS and RS. Focusing on Figure 7, in which SNR = 5 dB, results tend to be higher for the reconstructed WS than for the reconstructed RS: WS are louder than RS, so the separation favours the audio quality of the reconstructed WS. Focusing on Figure 8, in which SNR = 0 dB, results for both WS and RS tend to remain stable because both are similarly audible, so the separation performs comparably for WS and RS. However, in Figure 9, in which SNR = −5 dB, results tend to be better for the reconstructed RS since RS are louder than WS. This decrease in SNR implies that SDR and SIR results are worse in T1L than in T1H. The reason is that RS are louder than WS when SNR < 0 dB (T1L); as a consequence, WS may be barely audible in this acoustic scenario, so reducing the SNR implies a greater time-frequency overlap from RS onto WS than the opposite. A simplified sketch of this SNR scaling and of the SDR metric is given after this list.
- The standard NMF ranks at the bottom, obtaining the worst separation performance: it achieves signal reconstruction but not a factorization composed of audio events with physical meaning. Unlike the other methods, the standard NMF cannot associate the factorized bases with the sound source that generated them because it does not incorporate any information into the factorization process to model the spectro-temporal characteristics shown by WS and RS.
- Semi-supervised approaches (SSNMF and 1S-NMPCF) obtain better performance than supervised approaches (SNMF and 2S-NMPCF). Regardless of the approach, NMF or NMPCF, using only the RS training signal is more effective than using both the RS and WS training signals. This indicates that using both training signals provides redundant information that causes spectro-temporal ambiguity in the factorization of both the WS and RS dictionaries.
- NMPCF-based methods (1S-NMPCF) obtain better separation performance than NMF-based methods (SSNMF). This seems to be because SSNMF uses a fixed dictionary of previously trained respiratory bases, whereas 1S-NMPCF needs no prior training stage: it jointly factorizes the input mixture and the respiratory training signal to obtain a dynamic dictionary of respiratory bases shared between both signals, yielding a different dictionary of bases for each input mixture.
- Comparing NMPCF-based methods, T-NMPCF improves the separation performance with respect to 1S-NMPCF. Results suggest that the dictionary of respiratory bases is modelled more efficiently when the input mixture is divided into segments in order to exploit the repetitive patterns of RS.
- ST-NMPCF, the combination of the 1S-NMPCF and T-NMPCF approaches, obtains a significant improvement in wheezing separation performance: specifically, SDR = 5.96 dB and SIR = 9.73 dB when evaluating T1H (Figure 7). This indicates that a more reliable modelling of RS can be achieved by jointly using the respiratory spectral patterns shared along the segments and prior knowledge of the respiratory spectral content provided by the respiratory training signal.
- CNMF [32] obtains competitive SDR, SIR and SAR results compared to the methods above, ranking fourth. In some cases, WS and RS are modelled efficiently by applying its proposed constraints, but in other cases, in which WS and RS show uncommon behaviour, CNMF does not properly model the spectro-temporal behaviour of the target sounds.
- A significant improvement in separation performance over the conventional T-NMPCF and ST-NMPCF is achieved by assigning greater importance to the non-wheezing segments in the co-factorization process. The SDR improvement of IIS-NMPCF over T-NMPCF is about 8.31 dB (T1H), 5.18 dB (T1M) and 4.85 dB (T1L); the SIR improvement is about 11.09 dB (T1H), 10.18 dB (T1M) and 8.33 dB (T1L). The SDR improvement of IIS-NMPCF over ST-NMPCF is about 2.67 dB (T1H), 3.03 dB (T1M) and 1.69 dB (T1L); the SIR improvement is about 1.98 dB (T1H), 2.25 dB (T1M) and 1.87 dB (T1L). Results suggest that including inter-segment information in the co-factorization process for modelling repetitive RS significantly improves the separation performance because it prevents the respiratory spectral patterns obtained from the factorization from being contaminated in the wheezing segments.
- Adding prior knowledge of RS to IIS-NMPCF significantly improves the sound separation performance. The SDR improvement of the training-informed IIS-NMPCF over the baseline IIS-NMPCF is about 3.07 dB (T1H), 2.89 dB (T1M) and 4.12 dB (T1L); the SIR improvement is about 4.96 dB (T1H), 3.23 dB (T1M) and 3.02 dB (T1L). However, the dispersion between SDR and SIR results increases when the respiratory training signal is incorporated into the co-factorization process.
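For reference, the SNR scaling behind T1H/T1M/T1L and a plain energy-ratio SDR can be sketched as below. The paper uses the projection-based BSS_EVAL definitions of SDR, SIR and SAR (Vincent et al.; Févotte et al., see the reference list), so this helper is only a simplified stand-in.

```python
import numpy as np

def mix_at_snr(wheeze, resp, snr_db):
    """Scale RS so the WS-to-RS power ratio equals snr_db (5, 0 or -5 dB
    for T1H, T1M and T1L, respectively), then mix."""
    gain = np.sqrt(np.mean(wheeze**2) / (np.mean(resp**2) * 10**(snr_db / 10)))
    return wheeze + gain * resp

def sdr_db(reference, estimate):
    """Plain energy-ratio SDR in dB (BSS_EVAL uses a projection-based variant)."""
    err = reference - estimate
    return 10 * np.log10(np.sum(reference**2) / (np.sum(err**2) + 1e-12))
```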
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- World Health Organization. Chronic Respiratory Diseases. Available online: https://www.who.int/health-topics/chronic-respiratory-diseases#tab=tab_1 (accessed on 6 February 2020).
- Fenton, T.R.; Pasterkamp, H.; Tal, A.; Chernick, V. Automated spectral characterization of wheezing in asthmatic children. IEEE Trans. Biomed. Eng. 1985, 32, 50–55.
- Pramono, R.X.A.; Imtiaz, S.A.; Rodriguez-Villegas, E. Evaluation of features for classification of wheezes and normal respiratory sounds. PLoS ONE 2019, 14, e0213659.
- Pasterkamp, H.; Kraman, S.S.; Wodicka, G.R. Respiratory sounds: Advances beyond the stethoscope. Am. J. Respir. Crit. Care Med. 1997, 156, 974–987.
- Sovijarvi, A.; Dalmasso, F.; Vanderschoot, J.; Malmberg, L.; Righini, G.; Stoneman, S. Definition of terms for applications of respiratory sounds. Eur. Respir. Rev. 2000, 10, 597–610.
- Salazar, A.J.; Alvarado, C.; Lozano, F.E. System of heart and lung sounds separation for store-and-forward telemedicine applications. Rev. Fac. Ing. Univ. Antioq. 2012, 64, 175–181.
- Forkheim, K.E.; Scuse, D.; Pasterkamp, H. A comparison of neural network models for wheeze detection. In Proceedings of the IEEE WESCANEX 95 Communications, Power, and Computing, Winnipeg, MB, Canada, 15–16 May 1995; Volume 1, pp. 214–219.
- Wiederhold, B.K.; Cipresso, P.; Pizzioli, D.; Wiederhold, M.; Riva, G. Intervention for physician burnout: A systematic review. Open Med. 2018, 13, 253–263.
- Iskander, M. Burnout, cognitive overload, and metacognition in medicine. Med. Sci. Educ. 2019, 29, 325–328.
- Zhou, Q.; Feng, Z.; Benetos, E. Adaptive Noise Reduction for Sound Event Detection Using Subband-Weighted NMF. Sensors 2019, 19, 3206.
- Emmanouilidou, D.; McCollum, E.D.; Park, D.E.; Elhilali, M. Adaptive noise suppression of pediatric lung auscultations with real applications to noisy clinical settings in developing countries. IEEE Trans. Biomed. Eng. 2015, 62, 2279–2288.
- Homs-Corbera, A.; Fiz, J.A.; Morera, J.; Jané, R. Time-frequency detection and analysis of wheezes during forced exhalation. IEEE Trans. Biomed. Eng. 2004, 51, 182–186.
- Alic, A.; Lackovic, I.; Bilas, V.; Sersic, D.; Magjarevic, R. A novel approach to wheeze detection. In World Congress on Medical Physics and Biomedical Engineering; Springer: Berlin/Heidelberg, Germany, 2007; pp. 963–966.
- Taplidou, S.A.; Hadjileontiadis, L.J. Wheeze detection based on time-frequency analysis of breath sounds. Comput. Biol. Med. 2007, 37, 1073–1083.
- Emrani, S.; Gentimis, T.; Krim, H. Persistent homology of delay embeddings and its application to wheeze detection. IEEE Signal Process. Lett. 2014, 21, 459–463.
- Mendes, L.; Vogiatzis, I.; Perantoni, E.; Kaimakamis, E.; Chouvarda, I.; Maglaveras, N.; Tsara, V.; Teixeira, C.; Carvalho, P.; Henriques, J.; et al. Detection of wheezes using their signature in the spectrogram space and musical features. In Proceedings of the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 5581–5584.
- Bokov, P.; Mahut, B.; Flaud, P.; Delclaux, C. Wheezing recognition algorithm using recordings of respiratory sounds at the mouth in a pediatric population. Comput. Biol. Med. 2016, 70, 40–50.
- Lozano-García, M.; Fiz, J.A.; Martínez-Rivera, C.; Torrents, A.; Ruiz-Manzano, J.; Jané, R. Novel approach to continuous adventitious respiratory sound analysis for the assessment of bronchodilator response. PLoS ONE 2017, 12, e0171455.
- Nabi, F.G.; Sundaraj, K.; Lam, C.K. Identification of asthma severity levels through wheeze sound characterization and classification using integrated power features. Biomed. Signal Process. Control 2019, 52, 302–311.
- Wisniewski, M.; Zielinski, T.P. Joint application of audio spectral envelope and tonality index in an e-asthma monitoring system. IEEE J. Biomed. Health Inform. 2015, 19, 1009–1018.
- Lozano, M.; Fiz, J.A.; Jané, R. Automatic differentiation of normal and continuous adventitious respiratory sounds using ensemble empirical mode decomposition and instantaneous frequency. IEEE J. Biomed. Health Inform. 2015, 20, 486–497.
- Shaharum, S.M.; Sundaraj, K.; Aniza, S.; Palaniappan, R.; Helmy, K. Classification of asthma severity levels by wheeze sound analysis. In Proceedings of the IEEE Conference on Systems, Process and Control (ICSPC), Bandar Hilir, Malaysia, 16–18 December 2016; pp. 172–176.
- Pramono, R.X.A.; Imtiaz, S.A.; Rodriguez-Villegas, E. Evaluation of Mel-Frequency Cepstrum for Wheeze Analysis. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 4686–4689.
- Mayorga, P.; Druzgalski, C.; Morelos, R.; Gonzalez, O.; Vidales, J. Acoustics based assessment of respiratory diseases using GMM classification. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology, Buenos Aires, Argentina, 31 August–4 September 2010; pp. 6312–6316.
- Le Cam, S.; Belghith, A.; Collet, C.; Salzenstein, F. Wheezing sounds detection using multivariate generalized Gaussian distributions. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Taipei, Taiwan, 19–24 April 2009; pp. 541–544.
- Ulukaya, S.; Serbes, G.; Kahya, Y.P. Wheeze type classification using non-dyadic wavelet transform based optimal energy ratio technique. Comput. Biol. Med. 2019, 104, 175–182.
- Lin, B.S.; Wu, H.D.; Chen, S.J. Automatic wheezing detection based on signal processing of spectrogram and back-propagation neural network. J. Healthc. Eng. 2015, 6, 649–672.
- Kochetov, K.; Putin, E.; Azizov, S.; Skorobogatov, I.; Filchenkov, A. Wheeze detection using convolutional neural networks. In EPIA Conference on Artificial Intelligence; Springer: Cham, Switzerland, 2017; pp. 162–173.
- Jin, F.; Krishnan, S.; Sattar, F. Adventitious sounds identification and extraction using temporal-spectral dominance-based features. IEEE Trans. Biomed. Eng. 2011, 58, 3078–3087.
- Riella, R.; Nohama, P.; Maia, J. Method for automatic detection of wheezing in lung sounds. Braz. J. Med. Biol. Res. 2009, 42, 674–684.
- Torre-Cruz, J.; Canadas-Quesada, F.; Vera-Candeas, P.; Montiel-Zafra, V.; Ruiz-Reyes, N. Wheezing Sound Separation Based on Constrained Non-Negative Matrix Factorization. In Proceedings of the 10th International Conference on Bioinformatics and Biomedical Technology (ICBBT), Amsterdam, The Netherlands, 16–18 May 2018; pp. 18–24.
- Torre-Cruz, J.; Canadas-Quesada, F.; Carabias-Orti, J.; Vera-Candeas, P.; Ruiz-Reyes, N. A novel wheezing detection approach based on constrained non-negative matrix factorization. Appl. Acoust. 2019, 148, 276–288.
- Lee, D.D.; Seung, H.S. Learning the parts of objects by non-negative matrix factorization. Nature 1999, 401, 788–791.
- Lee, D.D.; Seung, H.S. Algorithms for non-negative matrix factorization. In Proceedings of the Advances in Neural Information Processing Systems, Denver, CO, USA, 3–8 December 2001; pp. 556–562.
- Zafeiriou, S.; Tefas, A.; Buciu, I.; Pitas, I. Exploiting discriminant information in nonnegative matrix factorization with application to frontal face verification. IEEE Trans. Neural Netw. 2006, 17, 683–695.
- Benetos, E.; Kotropoulos, C. Non-negative tensor factorization applied to music genre classification. IEEE Trans. Audio Speech Lang. Process. 2010, 18, 1955–1967.
- Févotte, C.; Bertin, N.; Durrieu, J.L. Nonnegative matrix factorization with the Itakura-Saito divergence: With application to music analysis. Neural Comput. 2009, 21, 793–830.
- Canadas-Quesada, F.; Ruiz-Reyes, N.; Carabias-Orti, J.; Vera-Candeas, P.; Fuertes-Garcia, J. A non-negative matrix factorization approach based on spectro-temporal clustering to extract heart sounds. Appl. Acoust. 2017, 125, 7–19.
- Laroche, C.; Kowalski, M.; Papadopoulos, H.; Richard, G. A structured nonnegative matrix factorization for source separation. In Proceedings of the 23rd European Signal Processing Conference (EUSIPCO), Nice, France, 31 August–4 September 2015; pp. 2033–2037.
- Kitamura, D.; Ono, N.; Saruwatari, H.; Takahashi, Y.; Kondo, K. Discriminative and reconstructive basis training for audio source separation with semi-supervised nonnegative matrix factorization. In Proceedings of the 2016 IEEE International Workshop on Acoustic Signal Enhancement (IWAENC), Xi'an, China, 13–16 September 2016; pp. 1–5.
- Wang, Z.; Sha, F. Discriminative non-negative matrix factorization for single-channel speech separation. In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014; pp. 3749–3753.
- Chung, H.; Plourde, E.; Champagne, B. Discriminative training of NMF model based on class probabilities for speech enhancement. IEEE Signal Process. Lett. 2016, 23, 502–506.
- Smaragdis, P.; Raj, B.; Shashanka, M. Supervised and semi-supervised separation of sounds from single-channel mixtures. In International Conference on Independent Component Analysis and Signal Separation; Springer: Berlin, Germany, 2007; pp. 414–421.
- Lee, H.; Yoo, J.; Choi, S. Semi-supervised nonnegative matrix factorization. IEEE Signal Process. Lett. 2009, 17, 4–7.
- Lu, N.; Li, T.; Pan, J.; Ren, X.; Feng, Z.; Miao, H. Structure constrained semi-nonnegative matrix factorization for EEG-based motor imagery classification. Comput. Biol. Med. 2015, 60, 32–39.
- Cañadas-Quesada, F.J.; Vera-Candeas, P.; Martinez-Munoz, D.; Ruiz-Reyes, N.; Carabias-Orti, J.J.; Cabanas-Molero, P. Constrained non-negative matrix factorization for score-informed piano music restoration. Digit. Signal Process. 2016, 50, 240–257.
- Carabias-Orti, J.; Canadas-Quesada, F.; Vera-Candeas, P.; Ruiz-Reyes, N. Non-Negative Matrix Factorization (NMF) Applied to Monaural Audio Signal Processing. In Independent Component Analysis (ICA): Algorithms, Applications and Ambiguities; Salazar, A., Vergara, L., Eds.; Nova Science Publishers: Hauppauge, NY, USA, 2018; Chapter 7.
- Yoo, J.; Kim, M.; Kang, K.; Choi, S. Nonnegative matrix partial co-factorization for drum source separation. In Proceedings of the 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, Dallas, TX, USA, 14–19 March 2010; pp. 1942–1945.
- Kim, M.; Yoo, J.; Kang, K.; Choi, S. Blind rhythmic source separation: Nonnegativity and repeatability. In Proceedings of the 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, Dallas, TX, USA, 14–19 March 2010; pp. 2006–2009.
- Kim, M.; Yoo, J.; Kang, K.; Choi, S. Nonnegative matrix partial co-factorization for spectral and temporal drum source separation. IEEE J. Sel. Top. Signal Process. 2011, 5, 1192–1204.
- Hu, Y.; Liu, G. Separation of singing voice using nonnegative matrix partial co-factorization for singer identification. IEEE/ACM Trans. Audio Speech Lang. Process. 2015, 23, 643–653.
- Seichepine, N.; Essid, S.; Févotte, C.; Cappé, O. Soft nonnegative matrix co-factorization. IEEE Trans. Signal Process. 2014, 62, 5940–5949.
- Chen, H.; Yuan, X.; Li, J.; Pei, Z.; Zheng, X. Automatic Multi-Level In-Exhale Segmentation and Enhanced Generalized S-Transform for wheezing detection. Comput. Methods Programs Biomed. 2019, 178, 163–173.
- Torre-Cruz, J.; Canadas-Quesada, F.; García-Galán, S.; Ruiz-Reyes, N.; Vera-Candeas, P.; Carabias-Orti, J. A constrained tonal semi-supervised non-negative matrix factorization to classify presence/absence of wheezing in respiratory sounds. Appl. Acoust. 2020, 161, 107188.
- Grais, E.M.; Erdogan, H. Single channel speech music separation using nonnegative matrix factorization and spectral masks. In Proceedings of the 2011 17th International Conference on Digital Signal Processing (DSP), Corfu, Greece, 6–8 July 2011; pp. 1–6.
- The R.A.L.E. Repository. Available online: http://www.rale.ca (accessed on 6 February 2020).
- Stethographics Lung Sound Samples. Available online: http://www.stethographics.com (accessed on 6 February 2020).
- 3M Littmann Stethoscopes. Available online: https://www.3m.com (accessed on 6 February 2020).
- East Tennessee State University Pulmonary Breath Sounds. Available online: http://faculty.etsu.edu (accessed on 6 February 2020).
- ICBHI 2017 Challenge. Available online: https://bhichallenge.med.auth.gr (accessed on 6 February 2020).
- Lippincott NursingCenter. Available online: https://www.nursingcenter.com (accessed on 6 February 2020).
- Thinklabs Digital Stethoscope. Available online: https://www.thinklabs.com (accessed on 6 February 2020).
- Thinklabs YouTube. Available online: https://www.youtube.com/channel/UCzEbKuIze4AI1523_AWiK4w (accessed on 6 February 2020).
- Emedicine/Medscape. Available online: https://emedicine.medscape.com/article/1894146-overview#a3 (accessed on 6 February 2020).
- E-learning Resources. Available online: https://www.ers-education.org/e-learning/reference-database-of-respiratory-sounds.aspx (accessed on 6 February 2020).
- Respiratory Wiki. Available online: http://respwiki.com/Breath_sounds (accessed on 6 February 2020).
- Easy Auscultation. Available online: https://www.easyauscultation.com/lung-sounds-reference-guide (accessed on 6 February 2020).
- Colorado State University. Available online: http://www.cvmbs.colostate.edu/clinsci/callan/breath_sounds.htm (accessed on 6 February 2020).
- Vincent, E.; Gribonval, R.; Févotte, C. Performance measurement in blind audio source separation. IEEE Trans. Audio Speech Lang. Process. 2006, 14, 1462–1469.
- Févotte, C.; Gribonval, R.; Vincent, E. BSS_EVAL Toolbox User Guide—Revision 2.0. 2005; p. 19. Available online: https://hal.inria.fr/inria-00564760 (accessed on 1 May 2020).
Dataset | ID2 | ID3 | ID4 | SNR (dB) | ID6 | ID7 | ID8 | ID9
---|---|---|---|---|---|---|---|---
P1 | 48 | 5–24 | 721 | [0–9] | [4–16] | 496 | [1–8] | 92 |
T1H | 16 | 7–22 | 251 | 5 | [6–14] | 126 | [1–5] | 41 |
T1M | 16 | 7–22 | 251 | 0 | [6–14] | 126 | [1–5] | 41 |
T1L | 16 | 7–22 | 251 | −5 | [6–14] | 126 | [1–5] | 41 |
IIS-NMPCF Approach Parameters | | | | | |
---|---|---|---|---|---|---
Optimal values | 64 | 32 | 10 | 1 | 0.1 | 0.01
Method | Approach | Modelling Associated to WS and RS |
---|---|---|
NMF | NMF | |
SSNMF | NMF | |
SNMF | NMF | and |
CNMF | NMF | Sparseness and Smoothness constraints |
1S-NMPCF | NMPCF | |
2S-NMPCF | NMPCF | and |
T-NMPCF | NMPCF | L-segments |
ST-NMPCF | NMPCF | L-segments and |
IIS-NMPCF | NMPCF | L-segments and |
IIS-NMPCF | NMPCF | L-segments, and |
Method | SDR (WS) | SIR (WS) | SAR (WS) | SDR (RS) | SIR (RS) | SAR (RS)
---|---|---|---|---|---|---
NMF | ||||||
SSNMF | ||||||
SNMF | ||||||
2S-NMPCF | ||||||
1S-NMPCF | ||||||
T-NMPCF | ||||||
CNMF | ||||||
ST-NMPCF | ||||||
IIS-NMPCF |
Method | SDR (WS) | SIR (WS) | SAR (WS) | SDR (RS) | SIR (RS) | SAR (RS)
---|---|---|---|---|---|---
NMF | ||||||
SNMF | ||||||
SSNMF | ||||||
2S-NMPCF | ||||||
1S-NMPCF | ||||||
T-NMPCF | ||||||
CNMF | ||||||
ST-NMPCF | ||||||
IIS-NMPCF |
Method | SDR (WS) | SIR (WS) | SAR (WS) | SDR (RS) | SIR (RS) | SAR (RS)
---|---|---|---|---|---|---
NMF | ||||||
SNMF | ||||||
SSNMF | ||||||
2S-NMPCF | ||||||
1S-NMPCF | ||||||
T-NMPCF | ||||||
CNMF | ||||||
ST-NMPCF | ||||||
IIS-NMPCF |