1. Introduction
The boom in Internet information technology has made digital audio, such as telephone recordings, voice messages, and music files, readily available in our daily lives [1,2]. Because existing audio editing software is both easy to use and powerful, digital audio tampering can be accomplished by an average user without any expertise in audio processing [3,4]. In addition, millisecond-scale tampered fragments are often difficult to identify [1,5,6], and unscrupulous individuals may use digital audio tampering to evade legal sanctions or even to harm society. As a result, digital audio forensic methods are increasingly in demand in areas such as judicial forensics, scientific discovery, and commercial applications to reduce the impact of such incidents [7,8,9,10,11,12].
Forensic techniques for digital audio are mainly divided into two types: active forensics and passive forensics [1]. Active forensics determines the authenticity or integrity of audio by detecting whether a pre-embedded digital signature or digital watermark has been corrupted. In practical applications, however, most audio signals carry no pre-embedded watermark or signature at recording time, so active forensic techniques have limited applicability. Passive detection of digital audio tampering requires no added information: the authenticity and integrity of digital audio are judged solely from the characteristics of the audio itself. Passive detection is therefore more practical for audio forensics in complex environments, which is why the method proposed in this paper focuses on it.
In recent years, research on passive detection of digital audio tampering has focused on the selection of audio features, such as difference information of background noise [13,14,15,16,17], spectrograms of audio content [18,19], pitch [20,21,22], formants [22], ENF difference information [23], ENF harmonic signals [24], and ENF phase and frequency information [6,25,26]. The ENF is automatically embedded in audio at recording time and is characterized by random fluctuations around a nominal frequency (50 Hz or 60 Hz) with a degree of stability and uniqueness [27]. Therefore, the ENF is widely used for tampering detection of digital audio.
Most ENF-based digital audio tampering detection methods extract ENF feature information and perform tampering detection with classification algorithms. However, when selecting features, most researchers use only the spatial or the temporal information of the ENF, so part of the tampering information is lost, which weakens feature representation and lowers classification accuracy.
To address weak feature representation and low classification accuracy, and inspired by the success of deep representation learning in speaker recognition [28,29], computer vision [30,31,32,33,34,35,36,37,38], and big data [39,40], this paper proposes a digital audio tampering detection method based on ENF deep temporal–spatial features. The method first extracts the ENF phase sequence using first-order DFT analysis. It then frames the phase sequence according to the temporal variation of the ENF to obtain a time-series matrix of ENF phases, which represents the shallow temporal features, and frames the unequal-length phase sequences with an adaptive frame shift to obtain matrices of equal size, which represent the shallow spatial features. The parallel RDTCN-CNN network model consists of four parts: a deep temporal feature extraction module, a deep spatial feature extraction module, a temporal–spatial feature fusion module, and a classification module. In the deep temporal feature extraction module, we extract deep temporal features based on the causal convolution principle of the RDTCN. In the deep spatial feature extraction module, we extract deep spatial features by exploiting the strong spatial representation ability of the CNN. In the temporal–spatial feature fusion module, we use a branch attention mechanism to adaptively assign weights to the deep temporal and spatial features and obtain fused temporal–spatial features. In the classification module, a multilayer perceptron (MLP) determines whether the audio has been tampered with.
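As background on the shallow feature extraction step described above, the following minimal sketch (our own illustration, not the authors' released code; the function name extract_enf_phase, the 1000 Hz downsampling rate, and the 0.6 Hz passband half-width are assumptions) shows one common way to obtain a per-frame ENF phase sequence with DFT analysis: downsample, bandpass-filter around the nominal frequency to isolate the ENF component (ENFC), frame the narrowband signal, and take the DFT phase at the bin nearest the nominal frequency.

```python
import numpy as np
from scipy import signal

def extract_enf_phase(audio, fs, nominal=50.0, fs_down=1000,
                      frame_len=1000, frame_shift=500):
    """Estimate a per-frame ENF phase sequence (illustrative sketch only)."""
    # 1) Downsample so the narrow ENF band is cheap to analyze.
    x = signal.resample_poly(audio, fs_down, fs)

    # 2) Narrow bandpass around the nominal frequency to isolate the ENF component.
    sos = signal.butter(4, [nominal - 0.6, nominal + 0.6],
                        btype="bandpass", fs=fs_down, output="sos")
    enfc = signal.sosfiltfilt(sos, x)

    # 3) Frame the ENFC and take the DFT phase at the bin nearest the nominal frequency.
    phases = []
    for start in range(0, len(enfc) - frame_len + 1, frame_shift):
        frame = enfc[start:start + frame_len] * np.hanning(frame_len)
        spectrum = np.fft.rfft(frame)
        k = int(round(nominal * frame_len / fs_down))  # DFT bin of the nominal frequency
        phases.append(np.angle(spectrum[k]))

    # 4) Unwrap so that genuine discontinuities (candidate tampering points) stand out.
    return np.unwrap(np.array(phases))
```

An abrupt jump in this unwrapped phase sequence is the kind of discontinuity that the framing strategy and the deep network described above are designed to capture.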
The main contributions made in this paper are as follows:
Based on the extraction of high-precision ENF phase sequences, we frame the ENF according to its temporal fluctuation to represent the temporal features of the ENF, and we frame it with an adaptive frame shift to obtain phase feature matrices of equal size that represent the spatial features of the ENF. Feature representation capability is enhanced by deeply mining tampering information from different dimensions of the ENF.
We exploit the strong temporal modeling ability of the RDTCN and the spatial representation ability of the CNN to extract deep temporal and spatial features, and we use a branch attention mechanism to adaptively assign weights to the temporal and spatial information to achieve the fusion of temporal–spatial features (an illustrative sketch of this fusion step follows the contributions). The fused temporal–spatial features, with complementary advantages, help improve detection accuracy. The implementation code for this study is available at https://github.com/CCNUZFW/DTSF-ENF (accessed on 21 February 2023).
The proposed framework achieves state-of-the-art performance on the Carioca, New Spanish, and ENF_Audio datasets compared with four baseline methods. Relative to the baseline models, accuracy improves by 0.80% to 7.51% and the F1-score by 0.86% to 7.53%.
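To make the branch-attention fusion in the second contribution more concrete, the following sketch (a hypothetical PyTorch illustration under our own assumptions; the class name BranchAttentionFusion, the 128-dimensional feature size, and the single-layer scoring network are not taken from the released code) shows how two feature branches can be adaptively weighted before an MLP outputs tampered/untampered logits.

```python
import torch
import torch.nn as nn

class BranchAttentionFusion(nn.Module):
    """Adaptively weight two feature branches, then classify (illustrative sketch)."""

    def __init__(self, feat_dim=128, hidden_dim=64):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)        # scores a single branch
        self.classifier = nn.Sequential(           # MLP head: tampered vs. untampered
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),
        )

    def forward(self, temporal_feat, spatial_feat):
        # temporal_feat, spatial_feat: (batch, feat_dim) deep features from the two branches.
        scores = torch.cat([self.score(temporal_feat),
                            self.score(spatial_feat)], dim=1)   # (batch, 2)
        weights = torch.softmax(scores, dim=1)                  # branch attention weights
        fused = (weights[:, 0:1] * temporal_feat
                 + weights[:, 1:2] * spatial_feat)              # weighted sum of branches
        return self.classifier(fused)                           # class logits

# Usage with random stand-in features:
model = BranchAttentionFusion(feat_dim=128)
logits = model(torch.randn(4, 128), torch.randn(4, 128))
print(logits.shape)  # torch.Size([4, 2])
```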
The rest of this paper is organized as follows. Section 2 describes the existing related work. In Section 3, we provide the problem definition for this study and summarize the important symbols that appear in this paper. Section 4 describes the framework proposed in this paper. Section 5 presents the datasets used to evaluate the performance of the framework, details of the specific experimental setup, and the comparison experiments. Finally, the paper concludes in Section 6, which also lists directions for future work.
Author Contributions
Conceptualization, Z.W. and C.Z.; methodology, Z.W. and C.Z.; software, S.K.; validation, Z.W., K.L., Y.Z. and C.Z.; formal analysis, Z.W. and C.Z.; investigation, Z.W. and C.Z.; resources, Z.W. and C.Z.; data curation, Z.W. and C.Z.; writing—original draft preparation, Z.W. and C.Z.; writing—review and editing, Z.W. and C.Z.; visualization, Z.W., K.L., Y.Z. and C.Z.; supervision, Z.W.; project administration, Z.W.; funding acquisition, Z.W. and C.Z. All authors have read and agreed to the published version of the manuscript.
Funding
The research work in this paper was supported by the National Natural Science Foundation of China (No. 62177022, 61901165), AI and Faculty Empowerment Pilot Project (No. CCNUAI&FE2022-03-01), Collaborative Innovation Center for Informatization and Balanced Development of K-12 Education by MOE and Hubei Province (No. xtzd2021-005), National Natural Science Foundation of China (No. 61501199), and Natural Science Foundation of Hubei Province (No. 2022CFA007).
Informed Consent Statement
This study did not involve humans.
Data Availability Statement
Data will be made available on reasonable request.
Conflicts of Interest
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
ENF | Electrical Network Frequency |
RDTCN | Residual Dense Temporal Convolutional Networks |
CNN | Convolutional Neural Networks |
MLP | Multilayer Perceptron |
MFCC | Mel Frequency Cepstral Coefficient |
SVM | Support Vector Machine |
ENFC | ENF Component |
RFA | Robust Filtering Algorithm |
PSTN | Public Switched Telephone Network |
RNN | Recurrent Neural Networks |
LSTM | Long Short-Term Memory |
References
- Liu, Z.; Lu, W. Fast Copy-Move Detection of Digital Audio. In Proceedings of the 2017 IEEE Second International Conference on Data Science in Cyberspace (DSC), Shenzhen, China, 26–29 June 2017; pp. 625–629. [Google Scholar] [CrossRef]
- Zeng, C.; Zhu, D.; Wang, Z.; Wang, Z.; Zhao, N.; He, L. An End-to-End Deep Source Recording Device Identification System for Web Media Forensics. Int. J. Web Inf. Syst. 2020, 16, 413–425. [Google Scholar] [CrossRef]
- Yan, Q.; Yang, R.; Huang, J. Detection of Speech Smoothing on Very Short Clip. IEEE Trans. Inf. Forensics Secur. 2019, 9, 2441–2453. [Google Scholar] [CrossRef]
- Wang, Z.; Yang, Y.; Zeng, C.; Kong, S.; Feng, S.; Zhao, N. Shallow and Deep Feature Fusion for Digital Audio Tampering Detection. EURASIP J. Adv. Signal Process. 2022, 2022, 1–20. [Google Scholar] [CrossRef]
- Zeng, C.; Yang, Y.; Wang, Z.; Kong, S.; Feng, S. Audio Tampering Forensics Based on Representation Learning of ENF Phase Sequence. Int. J. Digit. Crime Forensics 2022, 14, 1–19. [Google Scholar] [CrossRef]
- Wang, Z.F.; Wang, J.; Zeng, C.Y.; Min, Q.S.; Tian, Y.; Zuo, M.Z. Digital Audio Tampering Detection Based on ENF Consistency. In Proceedings of the 2018 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR) IEEE, Chengdu, China, 15–18 July 2018; pp. 209–214. [Google Scholar] [CrossRef]
- Hua, G.; Liao, H.; Wang, Q. Detection of Electric Network Frequency in Audio Recordings–From Theory to Practical Detectors; IEEE Press: Piscataway, NJ, USA, 2021; Volume 1, pp. 1556–6013. [Google Scholar] [CrossRef]
- Hajj-Ahmad, A.; Garg, R.; Wu, M. Instantaneous frequency estimation and localization for ENF signals. In Proceedings of the 2012 Asia Pacific Signal and Information Processing Association Annual Summit and Conference IEEE, Hollywood, CA, USA, 3–6 December 2012; pp. 1–10. [Google Scholar]
- Bykhovsky, D. Recording Device Identification by ENF Harmonics Power Analysis. Forensic Sci. Int. 2020, 307, 110100. [Google Scholar] [CrossRef] [PubMed]
- Zeng, C.; Zhu, D.; Wang, Z.; Wu, M.; Xiong, W.; Zhao, N. Spatial and Temporal Learning Representation for End-to-End Recording Device Identification. EURASIP J. Adv. Signal Process. 2021, 2021, 41. [Google Scholar] [CrossRef]
- Lin, X.; Zhu, J.; Chen, D. Subband Aware CNN for Cell-Phone Recognition. IEEE Signal Process. Lett. 2020, 27, 5. [Google Scholar] [CrossRef]
- Verma, V.; Khanna, N. Speaker-Independent Source Cell-Phone Identification for Re-Compressed and Noisy Audio Recordings. Multimed. Tools Appl. 2021, 80, 23581–23603. [Google Scholar] [CrossRef]
- Meng, X.; Li, C.; Tian, L. Detecting Audio Splicing Forgery Algorithm Based on Local Noise Level Estimation. In Proceedings of the 2018 5th International Conference on Systems and Informatics (ICSAI), Nanjing, China, 10–12 November 2018; pp. 861–865. [Google Scholar] [CrossRef]
- Lin, X.; Kang, X. Exposing speech tampering via spectral phase analysis. Digit. Signal Process. 2017, 1, 63–74. [Google Scholar] [CrossRef]
- Yan, D.; Dong, M.; Gao, J. Exposing Speech Transsplicing Forgery with Noise Level Inconsistency. Secur. Commun. Netw. 2021, 1, 6. [Google Scholar] [CrossRef]
- Narkhede, M.; Patole, R. Acoustic scene identification for audio authentication. Soft Comput. Signal Process. 2021, 1, 593–602. [Google Scholar]
- Capoferri, D.; Borrelli, C. Speech Audio Splicing Detection and Localization Exploiting Reverberation Cues. In Proceedings of the 2020 IEEE International Workshop on Information Forensics and Security (WIFS), New York, NY, USA, 6–11 December 2020; pp. 1–6. [Google Scholar] [CrossRef]
- Jadhav, S.; Patole, R.; Rege, P. Audio Splicing Detection using Convolutional Neural Network. In Proceedings of the 2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Kanpur, India, 6–8 July 2019; pp. 1–5. [Google Scholar] [CrossRef]
- Saleem, S.; Dilawari, A.; Khan, U. Spoofed Voice Detection using Dense Features of STFT and MDCT Spectrograms. In Proceedings of the 2021 International Conference on Artificial Intelligence (ICAI), Islamabad, Pakistan, 5–7 April 2021; pp. 56–61. [Google Scholar] [CrossRef]
- Li, C.; Sun, Y.; Meng, X. Homologous Audio Copy-move Tampering Detection Method Based on Pitch. In Proceedings of the 2019 IEEE 19th International Conference on Communication Technology (ICCT), Xi’an, China, 16–19 October 2019; pp. 530–534. [Google Scholar] [CrossRef]
- Yan, Q.; Yang, R.; Huang, J. Robust Copy–Move Detection of Speech Recording Using Similarities of Pitch and Formant. IEEE Trans. Inf. Forensics Secur. 2019, 9, 2331–2341. [Google Scholar] [CrossRef]
- Xie, X.; Lu, W.; Liu, X. Copy-move detection of digital audio based on multi-feature decision. J. Inf. Secur. Appl. 2018, 10, 37–46. [Google Scholar] [CrossRef]
- Lin, X.; Kang, X. Supervised audio tampering detection using an autoregressive model. In Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017; pp. 2142–2146. [Google Scholar] [CrossRef]
- Hua, G.; Liao, H.; Zhang, H. Robust ENF Estimation Based on Harmonic Enhancement and Maximum Weight Clique. IEEE Trans. Inf. Forensics Secur. 2021, 7, 3874–3887. [Google Scholar] [CrossRef]
- Nicolalde, D.; Apolinario, J. Audio Authenticity: Detecting ENF Discontinuity With High Precision Phase Analysis. IEEE Trans. Inf. Forensics Secur. 2010, 9, 534–543. [Google Scholar] [CrossRef]
- Reis, P.; Lustosa, J.; Miranda, R. ESPRIT-Hilbert-Based Audio Tampering Detection With SVM Classifier for Forensic Analysis via Electrical Network Frequency. IEEE Trans. Inf. Forensics Secur. 2017, 4, 853–864. [Google Scholar] [CrossRef]
- Zakariah, M.; Khan, M.; Malik, H. Digital multimedia audio forensics: Past, present and future. Multimed. Tools Appl. 2017, 1, 1009–1040. [Google Scholar] [CrossRef]
- Bai, Z.; Zhang, X.L. Speaker Recognition Based on Deep Learning: An Overview. Neural Netw. 2021, 140, 65–99. [Google Scholar] [CrossRef]
- Mohd Hanifa, R.; Isa, K.; Mohamad, S. A Review on Speaker Recognition: Technology and Challenges. Comput. Electr. Eng. 2021, 90, 107005. [Google Scholar] [CrossRef]
- Wang, Z.; Wang, Z.; Zeng, C.; Yu, Y.; Wan, X. High-Quality Image Compressed Sensing and Reconstruction with Multi-Scale Dilated Convolutional Neural Network. Circuits Syst. Signal Process. 2022, 42, 1–24. [Google Scholar] [CrossRef]
- Abdu, S.A.; Yousef, A.H.; Salem, A. Multimodal Video Sentiment Analysis Using Deep Learning Approaches, a Survey. Inf. Fusion 2021, 76, 204–226. [Google Scholar] [CrossRef]
- Bayoudh, K.; Knani, R.; Hamdaoui, F.; Mtibaa, A. A Survey on Deep Multimodal Learning for Computer Vision: Advances, Trends, Applications, and Datasets. Vis. Comput. 2022, 38, 2939–2970. [Google Scholar] [CrossRef]
- Chango, W.; Lara, J.A.; Cerezo, R.; Romero, C. A Review on Data Fusion in Multimodal Learning Analytics and Educational Data Mining. WIREs Data Min. Knowl. Discov. 2022, 12, e1458. [Google Scholar] [CrossRef]
- Dimitri, G.M. A Short Survey on Deep Learning for Multimodal Integration: Applications, Future Perspectives and Challenges. Computers 2022, 11, 163. [Google Scholar] [CrossRef]
- Gandhi, A.; Adhvaryu, K.; Poria, S.; Cambria, E.; Hussain, A. Multimodal Sentiment Analysis: A Systematic Review of History, Datasets, Multimodal Fusion Methods, Applications, Challenges and Future Directions. Inf. Fusion 2023, 91, 424–444. [Google Scholar] [CrossRef]
- Han, X.; Wang, Y.T.; Feng, J.L.; Deng, C.; Chen, Z.H.; Huang, Y.A.; Su, H.; Hu, L.; Hu, P.W. A Survey of Transformer-Based Multimodal Pre-Trained Modals. Neurocomputing 2023, 515, 89–106. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; Volume 25. [Google Scholar]
- Wang, Z.; Yan, W.; Zeng, C.; Tian, Y.; Dong, S. A Unified Interpretable Intelligent Learning Diagnosis Framework for Learning Performance Prediction in Intelligent Tutoring Systems. Int. J. Intell. Syst. 2023, 2023, 1–20. [Google Scholar] [CrossRef]
- Wu, T.; Ling, Q. Self-Supervised Heterogeneous Hypergraph Network for Knowledge Tracing. Inf. Sci. 2023, 624, 200–216. [Google Scholar] [CrossRef]
- Pan, X.; Zhang, X. Detecting splicing in digital audios using local noise level estimation. In Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, 25–30 March 2012; pp. 1841–1844. [Google Scholar] [CrossRef]
- Malik, H. Acoustic environment identification and its applications to audio forensics. IEEE Trans. Inf. Forensics Secur. 2013, 8, 1827–1837. [Google Scholar] [CrossRef]
- Mascia, M.; Canclini, A.; Antonacci, F. Forensic and anti-forensic analysis of indoor/outdoor classifiers based on acoustic clues. In Proceedings of the 2015 23rd European Signal Processing Conference (EUSIPCO), Nice, France, 31 August–4 September 2015; pp. 2072–2076. [Google Scholar] [CrossRef]
- Ikram, S.; Malik, H. Digital audio forensics using background noise. In Proceedings of the 2010 IEEE International Conference on Multimedia and Expo, Singapore, 19–23 July 2010; pp. 106–110. [Google Scholar] [CrossRef]
- Chen, J.; Xiang, S.; Huang, H. Detecting and locating digital audio forgeries based on singularity analysis with wavelet packet. Multimed. Tools Appl. 2016, 2, 2303–2325. [Google Scholar] [CrossRef]
- Imran, M.; Xiang, S.; Huang, H. Blind detection of copy-move forgery in digital audio forensics. IEEE Access 2017, 6, 12843–12855. [Google Scholar] [CrossRef]
- Esquef, P.A.A.; Apolinário, J.A.; Biscainho, L.W.P. Edit Detection in Speech Recordings via Instantaneous Electric Network Frequency Variations. IEEE Trans. Inf. Forensics Secur. 2014, 10, 2314–2326. [Google Scholar] [CrossRef]
- Mao, M.; Xiao, Z.; Kang, X.; Li, X. Electric Network Frequency Based Audio Forensics Using Convolutional Neural Networks. IFIP Adv. Inf. Commun. Technol. 2020, 8, 253–270. [Google Scholar] [CrossRef]
- Sarkar, M.; Chowdhury, D.; Shahnaz, C.; Fattah, S.A. Application of Electrical Network Frequency of Digital Recordings for Location-Stamp Verification. Appl. Sci. 2019, 9, 3135. [Google Scholar] [CrossRef]
- Karantaidis, G.; Kotropoulos, C. Blackman–Tukey spectral estimation and electric network frequency matching from power mains and speech recordings. IET Signal Process. 2021, 6, 396–409. [Google Scholar] [CrossRef]
- Hua, G.; Zhang, H. ENF Signal Enhancement in Audio Recordings. IEEE Trans. Inf. Forensics Secur. 2020, 11, 1868–1878. [Google Scholar] [CrossRef]
- Ortega-Garcia, J.; Gonzalez-Rodriguez, J. Audio Speech variability in automatic speaker recognition systems for commercial and forensic purposes. IEEE Aerosp. Electron. Syst. Mag. 2000, 11, 27–32. [Google Scholar] [CrossRef]
Figure 1.
Digital audio tampering detection task flowchart, where the two network outputs denote the probabilities of the tampered and untampered categories and the predicted label corresponds to the larger of the two probabilities.
Figure 2.
A framework diagram of digital audio tampering detection based on ENF deep temporal–spatial features. The model is divided into two steps: (1) shallow temporal and spatial feature extraction and (2) parallel RDTCN-CNN network model construction.
Figure 3.
Phase curves of the original and tampered audio. (a) Waveform of the original audio; (b) waveform of the tampered audio; (c) phase of the original audio: when the audio is not tampered with, the phase curve is relatively smooth; (d) phase of the tampered audio: when tampering occurs, the phase curve changes abruptly at the tampering point, here a deletion at around 9 s.
Figure 4.
RDTCN network structure (l: activation values in the l-th layer; d: dilation rate; +: concatenation operation; ⊕: addition operation).
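As a generic illustration of the dilated causal convolution and residual connection named in this caption, the sketch below shows a standard TCN-style block (our own simplification, not the paper's exact RDTCN layer; the channel count, kernel size, and dilation rate are assumed values).

```python
import torch
import torch.nn as nn

class DilatedCausalBlock(nn.Module):
    """One TCN-style residual block with dilated causal convolution (illustrative only)."""

    def __init__(self, channels=64, kernel_size=3, dilation=2):
        super().__init__()
        # Left padding keeps the convolution causal: no future time steps are used.
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):
        # x: (batch, channels, time)
        out = nn.functional.pad(x, (self.pad, 0))  # pad only on the left
        out = self.relu(self.conv(out))
        return x + out                             # residual (add) connection

block = DilatedCausalBlock(channels=64, kernel_size=3, dilation=2)
print(block(torch.randn(1, 64, 45)).shape)  # torch.Size([1, 64, 45])
```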
Figure 5.
The CNN-based deep spatial feature extraction module: the network input is the shallow spatial feature matrix (with m = 45), and the output is the deep spatial features extracted by the CNN.
Figure 6.
Branch attention mechanism, where ⊕ is the addition operation and ⊗ is the dot product.
Figure 7.
Comparison between our method and the four baseline methods on different datasets and evaluation metrics. DFT1-SVM, ES-SVM, PF-SVM, and X-BiLSTM are the baseline methods; RDTCN-CNN is the method of this paper; ACC and F1-score are the evaluation metrics; and Carioca, New Spanish, and ENF_Audio are the three audio tampering detection datasets.
Figure 8.
Comparison between our method and the four baseline methods on different datasets and evaluation metrics.
Table 1.
Mathematical notations and descriptions.
Notations | Descriptions |
---|---|
, , , n | Digital audio signal, downsampled signal, ENFC signal; n is the sample index |
| Downsampling frequency |
, | 1- order , k- order |
, | Signal after of and |
k, | Indexing of signal and signal peak points per frame |
, | 0th order phase sequence, 1st order phase sequence |
| Frequency value of the first-order ENF signal |
| Shallow temporal feature of ENF |
| Shallow spatial feature of ENF |
, | Rounding down, rounding up |
| Calculation formula of the frameshift |
| Dilated convolution formula |
| Binary cross-entropy loss function |
, | Prediction accuracy, F1-score |
Table 2.
Dataset information.
The Dataset | Carioca | New Spanish | ENF_Audio |
---|---|---|---|
Edited audio | 250 | 502 | 752 |
Original audio | 250 | 251 | 501 |
Total audio | 500 | 753 | 1253 |
Audio duration | 9∼35 s | 16∼35 s | 9∼35 s |
The training set | 350 | 527 | 877 |
The validation set | 50 | 75 | 125 |
The test set | 100 | 151 | 251 |
Table 3.
Comparison with baseline.
Method | Carioca ACC (%) | Carioca F1-Score (%) | New Spanish ACC (%) | New Spanish F1-Score (%) | ENF_Audio ACC (%) | ENF_Audio F1-Score (%) |
---|---|---|---|---|---|---|
DFT1-SVM [25] | 89.90 | 90.22 | 88.86 | 86.84 | 90.51 | 90.55 |
ES-SVM [26] | 90.88 | 90.62 | 90.62 | 88.26 | 93.52 | 93.44 |
PF-SVM [6] | 93.05 | 92.86 | 90.22 | 87.56 | 92.60 | 92.82 |
X-BiLSTM [5] | 97.03 | 97.22 | 92.14 | 90.62 | 97.22 | 97.02 |
RDTCN-CNN | 97.96 | 97.54 | 95.60 | 94.50 | 98.02 | 97.88 |
Table 4.
Comparison of RDTCN and ordinary TCN.
Method | Carioca ACC (%) | Carioca F1-Score (%) | New Spanish ACC (%) | New Spanish F1-Score (%) | ENF_Audio ACC (%) | ENF_Audio F1-Score (%) |
---|---|---|---|---|---|---|
Ordinary TCN | 96.58 | 96.22 | 93.56 | 91.88 | 97.42 | 97.46 |
RDTCN | 97.96 | 97.54 | 95.60 | 94.50 | 98.02 | 97.88 |
Table 5.
A comparative experiment of branch attention mechanisms.
Method | Carioca ACC (%) | Carioca F1-Score (%) | New Spanish ACC (%) | New Spanish F1-Score (%) | ENF_Audio ACC (%) | ENF_Audio F1-Score (%) |
---|---|---|---|---|---|---|
Splice Fusion | 96.02 | 96.42 | 94.42 | 92.82 | 97.20 | 97.22 |
Branch | 97.96 | 97.54 | 95.60 | 94.50 | 98.02 | 97.88 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).