Article

Gas Detection and Classification Using Multimodal Data Based on Federated Learning

1 Business School, Henan University of Science and Technology, Luoyang 471300, China
2 Department of Informatics, School of Computer Science, University of Petroleum and Energy Studies, Dehradun 248007, Uttarakhand, India
3 Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, Punjab, India
4 Department of CSE, Graphic Era Hill University, Dehradun 248007, Uttarakhand, India
* Authors to whom correspondence should be addressed.
Sensors 2024, 24(18), 5904; https://doi.org/10.3390/s24185904
Submission received: 4 June 2024 / Revised: 25 June 2024 / Accepted: 28 June 2024 / Published: 11 September 2024
(This article belongs to the Special Issue Advanced Sensor Fusion in Industry 4.0)

Abstract

The identification of gas leakages is a significant factor to be taken into consideration in various industries, such as coal mines and chemical plants, as well as in residential applications. In order to reduce damage to the environment as well as to human lives, early detection and gas type identification are necessary. The main focus of this paper is multimodal gas data obtained simultaneously from multiple gas sensors and a thermal imaging camera. Because low-cost sensors offer limited reliability and sensitivity, they are not suitable for gas detection over long distances. To overcome the drawbacks of relying solely on sensors to identify gases, a thermal camera capable of detecting temperature changes is also used in the collection of the current multimodal dataset. The multimodal dataset comprises 6400 samples, covering smoke, perfume, a combination of both, and neutral environments. In this paper, convolutional neural networks (CNNs) are trained on the thermal image data, while variants such as bidirectional long short-term memory (Bi-LSTM) and dense LSTM, as well as a fusion of both datasets, are used to classify the comma-separated value (CSV) data from the gas sensors. The dataset can serve as a valuable resource for researchers and system developers seeking to improve their artificial intelligence (AI) models for gas leakage detection. Furthermore, to ensure the privacy of clients’ data, this paper explores the implementation of federated learning for privacy-protected gas leakage classification, demonstrating accuracy comparable to traditional deep learning approaches.

1. Introduction

Engineering design innovations are helping humanity solve economic and societal issues. Chemical firms are using technology to address numerous problems; however, industrial hazards can harm the surrounding ecology. A frequent issue in the chemical industries is gas leakage. Industrial disasters frequently occur because of explosions, leaks, waste emissions, fires, etc. The improper disposal of household waste also causes leaks of toxic gases and odors. Wood burning is another significant contributor to air pollution. Gases spilled during mining operations have also caused workers’ deaths. Even though machines are tested for gas leaks before installation, instances of gas leakage have still been documented [1]. Gas-detecting sensors are frequently placed close to regions where leaks are likely to occur. However, each sensor can sense only one particular gas, and such sensors are unable to detect leaks involving a combination of gases. Due to the hazardous nature of the chemicals, manual gas identification using chemical apparatus is never a workable solution; for instance, smoke leaks impair visibility. For these reasons, automatic gas identification is crucial in gas leak scenarios, saving both human lives and equipment.
Utilizing sensor data fusion techniques to aggregate and compress the amount of data analyzed is one such solution. Data fusion is the process of combining information and data from various sources [2]. The objective is to blend data from several sources to create a cohesive picture of the application or job at hand. The application of data fusion techniques has a number of benefits, including increased data availability and authenticity, a reduction in duplicate data exchange, and a reduction in the amount of energy used to transmit the data [3]. As a result, given the basic design of gas leak detection systems, they represent a promising solution.
Federated learning (FL) is another paradigm worth utilizing. Under the FL machine learning paradigm, a high-quality centralized model is trained using data that are scattered across several locations [4,5,6]. The term originated with Google, which, in 2016, announced a method in which various sources independently compute an update to the current machine learning (ML) model [4]. A central service then receives these updates, aggregates them into a new hybrid model, and distributes it back to the different sites. Therefore, rather than “bringing the data to the code,” this paradigm supports “bringing the code to the data” [5]. As a result, issues with data ownership, privacy, and location are resolved by the FL paradigm. FL appears to be a promising method for deriving relevant information from the acquired data while retaining their privacy and localization. This is because gas leak monitoring systems are distributed and sensors gather data at a number of different places.
The following are the key contributions of this paper:
  • Using a CSV dataset, the identification of gas leaks based on fundamental deep learning models is performed.
  • Using a thermal image dataset, the identification of gas leakage based on fundamental deep learning models is performed.
  • Identification in a multimodal gas leakage scenario using data from both the CSV and image datasets is implemented.
  • A gas identification method utilizing FL is implemented to ensure the confidentiality of consumers’ personal data.
Additionally, the structure of this paper is as follows: Section 2 reviews the existing sensor-based gas datasets and related detection approaches. Section 3 provides a thorough explanation of the proposed multimodal gas dataset, how it was collected, and the proposed methodology; it also includes a link for downloading the dataset. Section 4 presents the experimental results and their analysis, and Section 5 concludes the paper with a discussion.

2. Literature Review

Gas detection and classification play crucial roles in ensuring safety and security across various industrial and environmental contexts, and several approaches to gas leakage detection have been reported in the literature. A variety of inexpensive sensors have been developed in recent years to enable Internet of Things (IoT)-enabled systems that can detect gas leakages [7,8,9,10]. However, because low-sensitivity sensors are used, these systems have limited capabilities. Detecting gas in a mixed environment is a difficult task that calls for dedicated technology. Popular chemical methods for identifying specific gas amounts in a mixed environment are colorimetric tape and gas chromatography [11,12,13]. Khalaf [14] suggests a least-squares-based approach for categorizing gases and calculating gas concentrations. Machine learning techniques are used for gas detection in [15]. Deep-neural-network-based approaches for precise gas detection are also used in [16,17,18,19]. All of these approaches, however, rely on input data from various gas-detecting sensors; in these experiments, a variety of sensors that are sensitive to different gases are created and taken into consideration. These techniques are elaborated in Table 1, as given below:
There are certain issues with recognition and detection processes that rely solely on gas sensors. In general, gas concentrations in the air can be low, and sometimes they cannot even be detected by a typical set of gas sensors. As a result, the framework’s detection accuracy suffers and detection becomes ambiguous. Additionally, less expensive sensors are typically less accurate and may not provide precise estimates. The temperature in the immediate area rises when there is a leak, and a thermal camera [20] can detect this temperature shift; its use thus offers the benefit of locating a gas leak from a greater distance. The demand for accurate training datasets is growing as a result of the increased usefulness of data analytics and artificial intelligence techniques. The dataset’s accessibility will aid in system training as well as serve as a foundation and platform for the creation of new datasets. Sensor arrays are the primary means used to collect the existing gas leakage datasets.
The identification of gas leaks has also been carried out using thermal imaging [21,22], although only a little research has pursued this approach. Machine learning models have been used to analyze infrared (IR) thermal images for the detection of gas leakages, for example, in [23]. Tensor-based leakage detection (TBLD), a method for finding gas leakage in remote regions using thermal cameras, is suggested in [24]. In the leakage classification stage, various classification techniques are examined; to precisely identify gas leaks, a residual network with 50 layers (ResNet50) was used. The study [25] proposed a novel bimetal-doped biochar adsorbent that demonstrated high Hg0 removal capability. The adsorbent injection method using electrostatic precipitators and fabric filters shows promise in reducing mercury emissions; the Hg0 removal capacity of the modified biochar is 13 times that of unmodified biochar.
However, only a small number of research studies have considered the multimodal fusion of an E-nose’s numerous gas sensors with thermal images. The authors in [26] found that the multimodal fusion of gas sensor data and thermal images achieved 96% accuracy, compared to 93% and 82% for the individual gas sensor and thermal image modalities, respectively. The study [27] also used the multimodal fusion of sensor data and thermal imaging to find gas leaks; its authors contrasted intermediate fusion and multitasking techniques. The findings showed that multitask fusion is more dependable and accurate than intermediate fusion. The accuracy of the fusion model is higher than that of the separate models since it uses data from both modalities.
Based on the above literature study, the following gaps are identified:
(a)
Traditional gas detection systems, predominantly based on microcontrollers and IoT technologies, offer reliable means of detecting gas leakages but often face limitations in scalability and real-time monitoring capabilities [7,8,9].
(b)
Techniques such as gas chromatography and electronic nose systems provide accurate analysis, but they may struggle with detecting multiple gases simultaneously or handling complex environmental conditions [11,12,14].
(c)
Moreover, advancements in machine learning, particularly deep learning approaches, show promise in enhancing detection accuracy and robustness [16,17,18]. However, existing studies often focus on single sensor modalities and fail to fully leverage the potential of multimodal data integration for improved detection performance [19,26].
(d)
Additionally, concerns regarding data privacy and security in centralized machine learning approaches hinder the widespread adoption of these techniques in sensitive industrial settings [20]. Therefore, there is a need for innovative methodologies that can address these gaps and provide scalable, privacy-preserving solutions for accurate and robust gas detection and classification.
The proposed methodology integrates federated learning to address the challenges of scalability and data privacy. The various contributions of the proposed work are described as follows:
  • Novel Multimodal Dataset: The authors propose a novel, first-of-its-kind multimodal dataset for gas detection, comprising both the numerical data collected from the gas sensors and the thermal images. The dataset consists of 6400 samples across four classes: perfume, smoke, a mixture of smoke and perfume, and a neutral environment.
  • Multimodal Deep Learning Approach: The research incorporates gas sensors and thermal cameras in a multimodal deep learning framework developed to optimally detect and distinguish gas leaks. The first set of models includes CNNs, LSTM, and bidirectional LSTM, which are trained and tested on the multimodal dataset.
  • Federated Learning Implementation: In response to the data privacy and scalability issues, the authors use federated learning methodologies. FL allows models to be trained cooperatively on data held by distributed devices without compromising data privacy, as clients share only model updates rather than raw data, while the global model still benefits from the collective knowledge of all participants.
  • Comprehensive Evaluation: The proposed multimodal and federated models are thoroughly assessed using metrics such as accuracy, precision, recall, and loss. The results show that the proposed methods achieve higher accuracy than single-modality approaches and other studies using the same dataset.
  • Competitive Performance: The presented approach is more effective than contemporary research that examined the same dataset for detecting gas leakage: the federated learning models reached accuracies of 0.985 and 0.992, higher than the accuracies reported in prior research findings.
  • Industry 5.0 Relevance: The study suggests that advances in machine learning, such as multimodal data analysis and federated learning, can benefit Industry 5.0 applications, reflecting the growing demand for data-oriented control and security systems. The proposed approach provides the much-needed means for efficient, accurate, privacy-conscious, and scalable solutions for the detection and identification of gas leakage in industries.
In sum, this work introduces a new multimodal dataset, a deep-learning-based system utilizing multiple modalities, and a federated learning deployment for gas leakage detection and classification, which together can help develop data-driven smart systems for the industrial safety and security domains.

3. Proposed Work

The research outlined in this study has four primary stages for identifying gas leakage: (1) pre-processing of the data from gas sensors and thermal cameras, (2) classification of the data using a multimodal system of deep learning techniques, (3) training and analyzing results using CNN model architectures, and (4) implementing federated learning (FL) by establishing a central server and client sites for processing the multimodal input. Figure 1 illustrates the data classification scheme of the multimodal system. The FL design depicted in Figure 2 showcases two architectural configurations, with the left side representing its implementation within a single facility and the right side representing its implementation across multiple facilities.

3.1. Dataset

For the current research, a novel multimodal dataset that was gathered utilizing thermal imaging technology and gas sensors is utilized. The following is a list of the major features of the current dataset:
  • The dataset is recorded using two modalities: numerical values collected from gas sensors and images captured by the thermal camera.
  • Four classifications, namely, perfume, smoke, mixes of smoke and perfume, and neutral environment, are derived from a dataset that is collected using two gas sources, which are smoke and perfume.
  • This dataset is believed to be the first of its kind in gas detection and is offered for free usage.
The total number of samples in the multimodal gas data dataset is 6400. Four classes comprising 6400 samples are equally distributed. The dataset includes 1600 samples for the perfume class, 1600 samples for the smoke class, and 1600 samples for a combination of the two classes, as shown in Table 2 and Table 3. For the neutral environment (No gas) class, the remaining 1600 samples were gathered. The statistical analysis is also conducted to highlight the variation in the created dataset, and the results are given as a box plot in Figure 3, along with the statistical properties for the data acquired from gas sensors.

3.2. Multimodal Gas Detection Dataset

The dataset that was gathered utilizing numerous gas detectors and a thermal camera is the subject of the current study. In order to create a multimodal dataset, the gas sensors and thermal camera are utilized in tandem to gather information on the presence of a gas.

3.2.1. Gas Sensors

Seven metal-oxide gas-detecting sensors and a thermal camera make up the apparatus used for collecting the dataset. The data collection framework is depicted in Figure 4. Alongside the thermal camera, several sensors are used: Sensor2, Sensor3, Sensor5, Sensor6, Sensor7, Sensor8, and Sensor135. These sensors are sensitive to a number of gases, including carbon monoxide, methane, butane, LPG, alcohol, smoke, and others, as depicted in Table 2.

3.2.2. Thermal Camera

A thermal camera is a tool that uses infrared radiation to measure temperature fluctuations. Every pixel on the camera’s image sensor functions as an infrared temperature sensor, simultaneously measuring the temperature at every point. The images are produced in a temperature-based format and are displayed as red, green, and blue (RGB). In contrast to conventional cameras, thermal cameras can operate in any environment, regardless of shape or texture, and are not limited by dark environments [28]. The thermal camera employed in this study has a 36-degree field of view, a measurement range of −40 °C to 330 °C, a frame rate of 9 Hz, and 32,136 thermal pixels to enable easy viewing of a thermal image; it has 206,156 thermal sensors. The data for training and testing the created fusion model are collected simultaneously using the thermal camera and gas sensors. The gathering of data and their preprocessing are covered in detail in the following section of the paper.

3.3. Preprocessing of Multimodal Data

The gas measurements from the seven metal-oxide (MOX) sensors are initially converted into heatmap images. Put simply, the numerical gas measurements taken every 2 s are converted into a heatmap RGB image: each numerical value is assigned a color intensity on the RGB scale, the measurements are plotted against a colormap pattern, and the resulting RGB image is saved with the file extension .jpg. Subsequently, the heatmap images and IR thermal images are resized to match the dimensions of the input layers of the six distinct CNN variants. The data are then partitioned into training and testing sets, with a ratio of 70% for training and 30% for testing or validation. The augmentation phase is vital for improving the training performance of CNNs: augmenting the training data by increasing the number of images enhances training efficacy and mitigates overfitting.
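As a concrete illustration of the heatmap step, the sketch below maps one row of MOX sensor readings to an RGB image using a simple linear blue-to-red colormap. This is a minimal sketch only; the paper's pipeline uses a standard colormap and saves .jpg files, and the value range, image size, and sensor readings here are assumptions.

```python
import numpy as np

def sensor_row_to_heatmap(values, vmin, vmax, size=(64, 64)):
    """Map one row of MOX sensor readings to an RGB heatmap image.

    Each reading is normalized to [0, 1] and mapped to a simple
    blue-to-red colormap; the columns are then tiled to the target
    size so the image can feed a CNN input layer.
    """
    v = np.clip((np.asarray(values, float) - vmin) / (vmax - vmin), 0.0, 1.0)
    # Linear colormap: blue at 0.0, red at 1.0.
    rgb = np.stack([v, np.zeros_like(v), 1.0 - v], axis=-1)      # (n_sensors, 3)
    cols = np.repeat(rgb, size[1] // len(values) + 1, axis=0)[: size[1]]
    img = np.broadcast_to(cols, (size[0], size[1], 3))           # vertical stripes
    return (img * 255).astype(np.uint8)

# One hypothetical reading from the seven MOX sensors, taken every 2 s.
img = sensor_row_to_heatmap([120, 300, 80, 510, 95, 230, 400], vmin=0, vmax=1023)
print(img.shape)  # (64, 64, 3)
```

In the real pipeline, one such image would be written to disk per 2 s measurement window and later resized to each CNN's expected input dimensions.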

3.4. Data Classification for Multimodal System for DL Models

During this stage, data regarding the presence of a gas must be collected in order to generate a multimodal dataset. For this purpose, the thermal camera and gas sensors are merged. We retrieve the relevant data from each data type: quantitative data from the gas sensors and visual images from the thermal camera. Subsequently, the data are prepared through feature extraction, data normalization, and data cleaning. We construct a composite input representation for the deep learning models by amalgamating these features. LSTM, BiLSTM, and CNN are among the deep learning approaches trained on the numerical data collected from the gas sensors; standard rectified linear unit (ReLU) and sigmoid activation functions were employed in the classification layers. The thermal camera provides image data that are used to train convolutional neural network (CNN), DenseNet, and visual geometry group (VGG16) models. Figure 5 describes the setups for both datasets. To evaluate the effectiveness of the trained models in detecting gas leaks, we assess their performance on the test dataset using measures such as accuracy, precision, recall, and loss. To assess the effectiveness of the multimodal data, both slow and fast learning rates are employed. With a slow learning rate, each modality produces a distinct set of feature representations that capture information specific to that modality; these features are obtained by processing each modality separately using specialized neural network architectures. With a fast learning rate, the architecture mixes or fuses data from the several modalities at an early layer, even though they are initially processed independently.
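The feature-level fusion described above can be sketched with plain NumPy, using single dense layers as stand-ins for the LSTM/CNN branches. All shapes, sizes, and random weights here are illustrative, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def branch(x, w):
    """One modality branch: a dense layer with ReLU, standing in for the
    LSTM (sensor) or CNN (thermal) feature extractors."""
    return np.maximum(x @ w, 0.0)

# Toy inputs: 7 gas-sensor readings and a flattened 8x8 thermal patch.
x_sensor  = rng.normal(size=(1, 7))
x_thermal = rng.normal(size=(1, 64))

f_sensor  = branch(x_sensor,  rng.normal(size=(7, 16)))
f_thermal = branch(x_thermal, rng.normal(size=(64, 16)))

# Feature-level fusion: concatenate the per-modality representations,
# then classify into the four gas classes with a softmax head.
fused  = np.concatenate([f_sensor, f_thermal], axis=1)   # (1, 32)
logits = fused @ rng.normal(size=(32, 4))
probs  = np.exp(logits - logits.max()) / np.exp(logits - logits.max()).sum()
print(probs.shape)  # (1, 4)
```

The "slow" configuration in the text keeps the two branches separate for longer before fusing, while the "fast" configuration fuses at an earlier layer; both end in a shared classification head like the softmax above.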

3.5. Multimodal Data Classification in Federated Eco-System

FL, or federated learning, is an advanced method of machine learning that is becoming increasingly popular in both academic and business settings. The following text explains the fundamental mathematical ideas and techniques that form the basis of the FL paradigm and explores its possibilities for addressing the problem of gas leak detection.
The initial presentation provides an overview of the overall structure, followed by a detailed explanation of the FL paradigm. In most circumstances, the FL architecture consists of a centralized FL server that can communicate with a group of devices that are ready to do the required FL task. The workflow consists of six primary steps [4,5]:
  • The collection of devices sends out a message signaling their availability, meaning they are ready to complete a FL task.
  • The FL server distributes the ML model to a subset of these accessible devices at time ti.
  • Each device then uses the local data to develop a new local machine learning model through a training process.
  • Every device transmits the updated parameters of its machine learning model, which are derived from the previously mentioned training process.
  • The FL server then combines the local models to compute the updated global ML model for time ti.
  • The FL server updates the global ML model and sends it to all devices.
This process is performed every round, with the FL server choosing how often to update it.
Mathematically, the FL paradigm aims to learn the parameters of the global ML model, representable as a matrix W. To do this, the FL server sends the current model W_{t_{i−1}} to a subset D_{t_i} of the total set of devices D_tot. Every device j ∈ D_{t_i} runs a local training procedure to obtain an updated local model W_{t_i}^j and transmits its update H_{t_i}^j = W_{t_i}^j − W_{t_{i−1}} to the FL server. The FL server then combines these local updates to form the next global model [4,5]:

W_{t_i} = W_{t_{i−1}} + α_{t_i} H_{t_i}

where α_{t_i} is the learning rate chosen by the FL server, and H_{t_i} is the average of the device-shared updates, given by

H_{t_i} = (1 / |D_{t_i}|) Σ_{j ∈ D_{t_i}} H_{t_i}^j

It is worth noting that in specific implementations, H_{t_i} can be calculated as a weighted sum of the device-shared updates rather than the average [5].
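The round-based workflow and the averaging rule above can be simulated in a few lines. This is a toy sketch with linear-regression clients on synthetic data; the client count, learning rates, and round counts are illustrative and not the system's actual settings.

```python
import numpy as np

rng = np.random.default_rng(42)
w_true = np.array([1.0, -2.0, 0.5])          # ground-truth model shared by all clients

def make_client(n=50, noise=0.01):
    """Synthetic local dataset for one device."""
    x = rng.normal(size=(n, 3))
    y = x @ w_true + noise * rng.normal(size=n)
    return x, y

clients = [make_client() for _ in range(5)]   # step 1: devices signal availability

def local_delta(w_global, x, y, lr=0.05, epochs=20):
    """Steps 3-4: train locally starting from W_{t-1}, return H_j = W_j - W_{t-1}."""
    w = w_global.copy()
    for _ in range(epochs):
        w -= lr * 2 * x.T @ (x @ w - y) / len(y)   # full-batch gradient step
    return w - w_global

w = np.zeros(3)                               # initial global model W_0
alpha = 1.0                                   # server learning rate alpha_{t_i}
for t in range(10):                           # one communication round per iteration
    deltas = [local_delta(w, x, y) for x, y in clients]  # step 2: send W; steps 3-4
    h = np.mean(deltas, axis=0)               # step 5: H_t = average of client updates
    w = w + alpha * h                         # step 6: W_t = W_{t-1} + alpha * H_t
print(np.round(w, 3))                         # converges close to w_true
```

Only the deltas (model updates) leave each client; the raw (x, y) data never do, which is the privacy property the FL paradigm provides.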
This paradigm also lends itself to deployment across numerous regions, which is appropriate for businesses with manufacturing sites spread over various locations. The FL paradigm enables a business to learn from a gas leak in one facility and apply this knowledge to other production sites, again making use of the fact that it is unusual to have many simultaneous leaks at separate locations. Each facility would, therefore, function as an FL device in this scenario by deploying a group of servers that carry out the local training. A centralized cloud server (such as the Amazon cloud service), acting as the FL server, would connect the various facilities, aggregating the local models before sending back the updated global ML model. As previously noted, ML detection models such as SVMs and artificial neural networks can be trained at the facility level, with the centralized FL server delivering the global ML model after aggregation (as they have proven to be efficient leak detection methods). It should be noted that with such an architecture, the data are expected to be heterogeneous or not independently and identically distributed (non-iid), owing to the facilities' varied hardware capabilities and capacities. Several strategies can address this. One strategy is to group together similar facilities and designate one of them to deliver updates on the group's behalf [29]; such a strategy would address both the variety of the data and the computational capability at each of the sites. In a second approach, a subset of the data from each facility is shared globally, so that the local models being trained at each location can also observe data from other sites. For instance, Zhao et al. [30] demonstrated that sharing just 5% of the local data globally can considerably raise the global model's quality.
Therefore, a similar strategy can be used for the multi-facility design to guarantee a higher-quality global model at the centralized FL server.

4. Experimental Results and Analysis

The following section displays the outcomes of our implementation and provides details regarding the gas detection capabilities of our models. The recommended configuration for the system is shown in Table 4. Table 5 displays the recall, accuracy, precision, F1 score, and loss values for every model tested. We utilized accuracy as a single summary measure of overall system performance, while precision and recall were used to validate the classification. Training accuracy, also known as categorical accuracy, shows how well the models are categorizing the training data [31]. The model loss function is an essential part of any deep neural network [32]; it shows how far the predictions are from the actual results, and the model loss values indicate how poorly the CNN models predict on a dataset [33]. We employ the test accuracy to assess the effectiveness of the models [34]. Following training, the most effective convolutional neural network (CNN) architectures were chosen based on these performance indicators. We set out to improve test accuracy while reducing the model loss function [35,36,37,38]. All models were optimized with Adam over ten epochs at a learning rate of 0.0001. The multimodal system makes data from multiple instances accessible at the same time. In addition, the data come from several sources: gas sensors (which supply CSV files) and thermal imaging (which supplies pictures). To train models on both types of data at the same time, multimodal deep learning models are employed; the models then generate results using a shared classification head.
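For reference, the evaluation metrics used throughout this section can be computed directly from predictions. The sketch below shows accuracy, precision, and recall for a toy binary "leak vs. no leak" labeling; the labels are made up purely for illustration.

```python
import numpy as np

def metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, and recall, as reported in Tables 5 and 6
    (here for a binary leak/no-leak view)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))  # true positives
    fp = np.sum((y_pred == positive) & (y_true != positive))  # false positives
    fn = np.sum((y_pred != positive) & (y_true == positive))  # false negatives
    acc = float(np.mean(y_true == y_pred))
    prec = float(tp / (tp + fp)) if tp + fp else 0.0
    rec = float(tp / (tp + fn)) if tp + fn else 0.0
    return acc, prec, rec

# Hypothetical labels: 1 = leak, 0 = no leak.
acc, prec, rec = metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
print(acc, prec, rec)
```

Loss, by contrast, is computed on the model's raw probability outputs (e.g., categorical cross-entropy) rather than on the hard predictions above.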

4.1. Analysis of Image and Sensor Data

Data collection and analysis from individual sensors are crucial for gas leakage detection systems to function; the accurate and timely identification of potential threats relies on these data. To detect gas leaks, we have collected the gas sensor data into a CSV file alongside the thermal camera images.

4.1.1. Result Analysis of Image Data

To identify gas in images, we used a baseline CNN model and six pre-trained CNN models: DenseNet201, InceptionResNet, MobileNetV2, VGG16, VGG19, and Xception. Table 5 lists these models together with their accuracy, precision, recall, and loss. The outcomes were excellent across all models. A comparison of the accuracy values shows that the baseline CNN does not outperform any of the six pre-trained models. Every model obtained a precision value higher than 0.99, indicating that the models were most often correct when predicting the no-gas-leakage class. Compared to the competing approaches, DenseNet201, VGG19, and Xception all produce better loss values.
Figure 6 compares the models’ performance in terms of training accuracy, validation accuracy, training precision, validation precision, training recall, validation recall, training loss, and validation loss. The training accuracy graph compares the seven architectures: as can be seen, the DenseNet201 and Xception models learned from the training data at a respectable pace, as their upward-trending lines demonstrate. At 83.000%, the CNN has the lowest training accuracy (blue line); still, every model improved substantially from its small initial value. The loss functions decreased as the models were trained for longer durations. The second figure compares the seven architectures using test accuracy as the metric. The fact that all model curves rise in each epoch indicates that our proposed models performed extraordinarily well in detecting gas leaks. These results show that the gas dataset was suitable for training our models: on the test dataset, the models accurately identified gas without over-fitting or under-fitting, which explains the models’ outstanding performance in the confusion matrix and classification report.

4.1.2. Result Analysis of Sensor Data

Three deep neural networks—LSTM_Dense, BiLSTM_Dense, and Dense—were selected to recognize gas from the CSV data. Figure 7 shows a comparison of training accuracy, test accuracy, validation loss, precision, recall, validation recall, and validation precision for these models.
Table 6 displays the recall, accuracy, precision, and loss results for these three DL models. All of the models produced very good results. When comparing the three models’ precision values, we found that BiLSTM_Dense outperformed the others. With precision values above 93.39%, each of these models was able to correctly predict the highest possible number of no-gas-leakage cases. With a loss value of 0.15, BiLSTM_Dense is the most effective of the three methods. Due to its exceptional accuracy, BiLSTM_Dense is the best option for detecting gas leaks.
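To make the BiLSTM_Dense idea concrete, the sketch below runs a single NumPy LSTM cell forward and then backward over a toy seven-sensor time series and concatenates the two final hidden states — the representation a Dense head would then classify. Weights, sizes, and the input sequence are all illustrative, not the trained model's parameters.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. A Bi-LSTM runs one such pass forward and one
    backward over the sequence, then concatenates the final states."""
    z = x @ W + h @ U + b                    # all four gates computed at once
    i, f, o, g = np.split(z, 4, axis=-1)     # input, forget, output, candidate
    sig = lambda a: 1 / (1 + np.exp(-a))
    c_new = sig(f) * c + sig(i) * np.tanh(g)
    h_new = sig(o) * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid, T = 7, 8, 10                    # 7 MOX sensors, 10 time steps
W = rng.normal(0, 0.1, size=(n_in, 4 * n_hid))
U = rng.normal(0, 0.1, size=(n_hid, 4 * n_hid))
b = np.zeros(4 * n_hid)
seq = rng.normal(size=(T, n_in))             # toy sensor time series

h = c = np.zeros(n_hid)
for x in seq:                                # forward pass
    h, c = lstm_step(x, h, c, W, U, b)
hb = cb = np.zeros(n_hid)
for x in seq[::-1]:                          # backward pass
    hb, cb = lstm_step(x, hb, cb, W, U, b)
features = np.concatenate([h, hb])           # input to the Dense classification head
print(features.shape)  # (16,)
```

Reading the sequence in both directions lets the model use context from both earlier and later sensor readings, which is one plausible reason BiLSTM_Dense edges out the unidirectional variants here.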

4.2. Result Analysis of Multimodal Data Using Deep Learning

A precise decision was reached after combining the gas sensor results with the features extracted from the thermal images. Using data from multiple modalities enhances the classifier’s accuracy compared to using data from a single modality alone. By utilizing multimodal representations, it is possible to train a classifier on labelled data from one modality and then apply it to data from another modality while maintaining an acceptable accuracy score. Figure 8 contrasts several metrics (training accuracy, test accuracy, recall, validation precision, validation loss, model loss function, and validation recall) to illustrate the benefits of multimodal data. We trained on the multimodal data with both slow and fast learning rates to validate our findings. As shown in Table 7, the multimodal model with learning rate 0.0001 (slow) performs better than the multimodal model with learning rate 1 (fast) across all evaluation criteria. Additionally, compared to the single-modality results, the accuracy and loss values for the multimodal data are superior.

4.3. Result Analysis of Multimodal Data Using Federated Learning

Federated learning enables a collection of decentralized devices (clients) to train a model locally while retaining all training data on-device, as opposed to transmitting the data to a central server. The clients train local machine learning models using their own data. After every client has finished its local training cycle, the central server aggregates all of the submitted model updates. While protecting the privacy of individual data, this revised global model incorporates the knowledge of all participating clients. The main benefit of federated learning for gas leak detection is that it harnesses the collective intelligence of all client devices to enhance the model's accuracy without disclosing private or sensitive information.
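The server-side aggregation described above is commonly FedAvg: a data-size-weighted average of the clients' model weights. A minimal numpy sketch, assuming each client submits its weights as a list of arrays (one per layer):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model weights (FedAvg aggregation).

    client_weights: list of per-client weight lists (one ndarray per layer).
    client_sizes:   number of local training samples held by each client.
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    new_global = []
    for layer in range(n_layers):
        # Sum each client's layer weights, scaled by its share of the data.
        agg = sum(w[layer] * (n / total)
                  for w, n in zip(client_weights, client_sizes))
        new_global.append(agg)
    return new_global

# Two clients with a one-layer "model": the average is pulled toward
# the client holding more local data.
clients = [[np.array([0.0, 0.0])], [np.array([1.0, 1.0])]]
global_w = fedavg(clients, client_sizes=[1, 3])
```

With sizes 1 and 3, the second client contributes 75% of the average, so `global_w[0]` is `[0.75, 0.75]`; the raw data never leave the clients.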

4.3.1. Implementation Details and Communication Rounds

The server is configured for a maximum of six communication cycles; in each round, it selects 10% of the clients for local training. In this scenario, client-side local training was conducted for 100 epochs on each selected client, with five communication rounds completed in total between the federated clients and the federated server. The learning rate for training is 0.001.
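The per-round client selection can be sketched as below. Only the 10% sampling fraction, the round and epoch counts, and the learning rate come from the text; the 20-client registry and the seed are illustrative assumptions.

```python
import random

def select_clients(all_clients, fraction=0.10, seed=None):
    """Sample a fraction of the registered clients for one communication round."""
    rng = random.Random(seed)
    k = max(1, int(len(all_clients) * fraction))
    return rng.sample(all_clients, k)

# Hypothetical registry of 20 client devices; 5 communication rounds and
# 100 local epochs per selected client, as in the setup described above.
all_clients = [f"client_{i}" for i in range(20)]
ROUNDS, LOCAL_EPOCHS, LEARNING_RATE = 5, 100, 0.001

# One sampled client subset per communication round.
schedule = [select_clients(all_clients, seed=r) for r in range(ROUNDS)]
```

Each round's selected clients would run `LOCAL_EPOCHS` epochs of local training at `LEARNING_RATE` before sending their updated weights back for aggregation.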

4.3.2. Performance Evaluation of Federated Learning Approach

As illustrated in Figure 9, the aggregated validation accuracy for the multimodal data was 99.7%, and the aggregated validation loss was exceptionally low. The results make it evident that, in terms of validation accuracy and validation loss, the proposed federated multimodal system generated noticeably superior results with no additional resource requirements. The suggested federated-learning-based gas leakage classification system proved to be more effective, more secure, and more cost-efficient than conventional frameworks.

5. Discussion

A federated-learning-based multi-step approach is described for gas detection and classification using multimodal data, with the goals of improving the accuracy, privacy, and scalability of gas leakage detection systems. Collection and preparation of data from gas sensors and thermal cameras is the first step. Next, the data are scaled to match the models' input sizes, and the gas sensor values are transformed into heatmap images. The preprocessed data are then classified using deep learning models such as CNNs, LSTM, and bidirectional LSTM. At this point, the data are cleaned, normalized, and feature-extracted so that they can serve as input for the deep learning models. Recall, loss, accuracy, and precision are the evaluation metrics used to gauge the classification models' efficacy. To train models across distributed devices without compromising data privacy, the suggested solution additionally utilizes federated learning, a distributed machine learning paradigm. This ensures the security of sensitive device data while using collective intelligence to enhance the central model's accuracy. Experiment parameters, including learning rates, communication rounds, and local training epochs, are fine-tuned to maximize the model's performance. As the outcomes of the federated learning technique demonstrate, the suggested methodology is effective, with high validation accuracy and low validation loss. Combining federated learning with multimodal data fusion and deep learning classification makes gas leakage detection systems more accurate, private, and scalable, which would improve safety protocols in many industrial and domestic settings.
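The preprocessing steps above (scaling the sensor values and rendering them as heatmap images) might look like the following sketch. The 2x4 grid layout for the seven sensor channels, with one zero-padded cell, is an assumption; the paper does not specify the heatmap geometry.

```python
import numpy as np

def minmax_scale(x, eps=1e-8):
    """Scale each sensor column to [0, 1] to match the models' input range."""
    lo, hi = x.min(axis=0), x.max(axis=0)
    return (x - lo) / (hi - lo + eps)

def to_heatmap(row, shape=(2, 4)):
    """Render one scaled 7-sensor reading as a small 2x4 heatmap image.

    The eighth cell is zero padding; the grid shape is a hypothetical choice.
    """
    cells = np.zeros(shape[0] * shape[1])
    cells[:len(row)] = row
    return cells.reshape(shape)

# Two illustrative raw readings from the seven MQ-series sensors.
readings = np.array([[200., 310., 150., 400., 120.,  90., 500.],
                     [220., 300., 170., 380., 130.,  95., 520.]])
scaled = minmax_scale(readings)
img = to_heatmap(scaled[0])   # 2x4 array, ready to render as a heatmap
```

The resulting array can be colorized and resized to whatever input resolution the image-branch CNN expects.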

Comparison with Existing Techniques

To illustrate the competitiveness of the proposed methodology, it is compared with other contemporary investigations that employed the identical dataset for gas leakage detection. The results of this comparison, presented in Table 8, provide evidence that the proposed approach is superior. In contrast to similar investigations, the proposed methodology attains federated learning accuracies of 0.985 and 0.992. Significantly, its performance surpasses the 0.945 and 0.969 accuracies reported in [26,39] using intermediate and multitask fusion, respectively, as well as the 0.96 accuracy derived from early fusion in [26]. This superiority can be attributed to several factors. The proposed methodology applies federated learning to multimodal data, in contrast to previous investigations that relied on centralized training. In the final stage of the proposed pipeline, bidirectional long short-term memory (Bi-LSTM) is employed for detection; this model is known to outperform the standard LSTM models frequently used in prior studies, yielding even more favorable outcomes.

6. Conclusions

The requirement for transporting gases or fluids (such as water or oil) from production sites to end user locations is driving a rapid increase in the design and deployment of pipelines. Many governmental and industrial stakeholders are quite concerned about finding gas leaks in these pipelines. This is because of the harms and expenses involved. Gas leaks not only result in financial and economic expenses but they also pose a safety risk, particularly in manufacturing and industrial settings.
In the Industry 5.0 environment, this study presented a methodology for assessing the accuracy of intelligent multimodal data in detecting and identifying gas leaks. Since gas detector readings and infrared thermal imaging are different types of data, we compared the two to see which was more effective for gas detection and identification. We also examined the results of slow and fast training on multimodal data. While CNN, DenseNet201, and VGG16 are used to train on the thermal image data, LSTM and BiLSTM variants are used to train on the gas sensor data. The multimodal data result from combining the two datasets. We used slow and fast learning rates on the multimodal data to confirm our results. According to the results, the classifier's accuracy was enhanced when data from multiple modalities were used instead of just one. Given the distributed nature of gas leak monitoring systems, which use sensors to gather data from different locations, FL offers a practical alternative for extracting useful insights from the collected data while also protecting its privacy and localization.
In the Industry 5.0 revolution, data-oriented control and security systems are increasingly being developed. In the case of gas identification, multimodal data ensure the correct identification of gas leakage. Therefore, such implementations in real-time distributed and federated ecosystems could help address gas spillage concerns and improve identification.

Author Contributions

Conceptualization, R.K.; Methodology, I.K.; Software, V.K.; Validation, P.A.; Formal Analysis, I.K.; Resources, A.S.; Writing-original draft, G.C.; Writing-review & editing, R.P.; Visualization, P.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Narkhede, P.; Walambe, R.; Chandel, P.; Mandaokar, S.; Kotecha, K. MultimodalGasData: Multimodal Dataset for Gas Detection and Classification. Data 2022, 7, 112. [Google Scholar] [CrossRef]
  2. Castanedo, F. A review of data fusion techniques. Sci. World J. 2013, 2013, 704504. [Google Scholar] [CrossRef] [PubMed]
  3. Khaleghi, B.; Khamis, A.; Karray, F.O.; Razavi, S.N. Multisensor data fusion: A review of the state-of-the-art. Inf. Fusion 2013, 14, 28–44. [Google Scholar] [CrossRef]
  4. Konečny, J.; McMahan, H.B.; Yu, F.X.; Richtárik, P.; Suresh, A.T.; Bacon, D. Federated learning: Strategies for improving communication efficiency. arXiv 2016, arXiv:1610.05492. [Google Scholar]
  5. Bonawitz, K.; Eichner, H.; Grieskamp, W.; Huba, D.; Ingerman, A.; Ivanov, V.; Kiddon, C.; Konečný, J.; Mazzocchi, S.; McMahan, H.B.; et al. Towards federated learning at scale: System design. arXiv 2019, arXiv:1902.01046. [Google Scholar]
  6. Yang, Q.; Liu, Y.; Chen, T.; Tong, Y. Federated machine learning: Concept and applications. ACM Trans. Intell. Syst. Technol. 2019, 10, 1–19. [Google Scholar] [CrossRef]
  7. Adekitan, A.I.; Matthews, V.O.; Olasunkanmi, O. A microcontroller based gas leakage detection and evacuation system. IOP Conf. Ser. Mater. Sci. Eng. 2018, 413, 012008. [Google Scholar] [CrossRef]
  8. Kodali, R.K.; Greeshma, R.; Nimmanapalli, K.P.; Borra, Y.K.Y. IOT based industrial plant safety gas leakage detection system. In Proceedings of the 2018 4th International Conference on Computing Communication and Automation (ICCCA), Greater Noida, India, 14–15 December 2018; pp. 1–5. [Google Scholar]
  9. Suma, V.; Shekar, R.R.; Akshay, K.A. Gas leakage detection based on IOT. In Proceedings of the 2019 3rd International conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India, 12–14 June 2019; pp. 1312–1315. [Google Scholar]
  10. Evalina, N.; Azis, H. Implementation and design gas leakage detection system using ATMega8 microcontroller. IOP Conf. Ser. Mater. Sci. Eng. 2020, 821, 012049. [Google Scholar] [CrossRef]
  11. Hui, Z.; Lu, A. A deep learning method combined with an electronic nose for gas information identification of soybean from different origins. Chemom. Intell. Lab. Syst. 2023, 240, 104906. [Google Scholar] [CrossRef]
  12. Wang, B.; Zhang, J.; Wang, T.; Li, W.; Lu, Q.; Sun, H.; Huang, L.; Liang, X.; Liu, F.; Liu, F.; et al. Machine learning-assisted volatile organic compound gas classification based on polarized mixed-potential gas sensors. ACS Appl. Mater. Interfaces 2023, 15, 6047–6057. [Google Scholar] [CrossRef]
  13. Wang, T.; Wang, X.; Hong, M. Gas leak location detection based on data fusion with time difference of arrival and energy decay using an ultrasonic sensor array. Sensors 2018, 18, 2985. [Google Scholar] [CrossRef] [PubMed]
  14. Srivastava, S.; Chaudhri, S.N.; Rajput, N.S.; Alsamhi, S.H.; Shvetsov, A.V. Spatial upscaling-based algorithm for detection and estimation of hazardous gases. IEEE Access 2023, 11, 17731–17738. [Google Scholar] [CrossRef]
  15. Zhai, S.; Li, Z.; Zhang, H.; Wang, L.; Duan, S.; Yan, J. A multilevel interleaved group attention-based convolutional network for gas detection via an electronic nose system. Eng. Appl. Artif. Intell. 2024, 133, 108038. [Google Scholar] [CrossRef]
  16. Se, H.; Song, K.; Sun, C.; Jiang, J.; Liu, H.; Wang, B.; Wang, X.; Zhang, W.; Liu, J. Online drift compensation framework based on active learning for gas classification and concentration prediction. Sens. Actuators B Chem. 2024, 398, 134716. [Google Scholar] [CrossRef]
  17. Peng, P.; Zhao, X.; Pan, X.; Ye, W. Gas classification using deep convolutional neural networks. Sensors 2018, 18, 157. [Google Scholar] [CrossRef]
  18. Pan, X.; Zhang, H.; Ye, W.; Bermak, A.; Zhao, X. A fast and robust gas recognition algorithm based on hybrid convolutional and recurrent neural network. IEEE Access 2019, 7, 100954–100963. [Google Scholar] [CrossRef]
  19. Bilgera, C.; Yamamoto, A.; Sawano, M.; Matsukura, H.; Ishida, H. Application of convolutional long short-term memory neural networks to signals collected from a sensor network for autonomous gas source localization in outdoor environments. Sensors 2018, 18, 4484. [Google Scholar] [CrossRef] [PubMed]
  20. Hamilton, S.; Charalambous, B. Leak Detection: Technology and Implementation; IWA Publishing: London, UK, 2013. [Google Scholar]
  21. Adefila, K.; Yan, Y.; Wang, T. Leakage Detection of Gaseous CO2 through Thermal Imaging. In Proceedings of the 2015 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Pisa, Italy, 11–14 May 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 261–265. [Google Scholar]
  22. Marathe, S. Leveraging Drone Based Imaging Technology for Pipeline and RoU Monitoring Survey. In Proceedings of the SPE Symposium: Asia Pacific Health, Safety, Security, Environment and Social Responsibility, Kuala Lumpur, Malaysia, 23–24 April 2019. [Google Scholar]
  23. Jadin, M.S.; Ghazali, K.H. Gas Leakage Detection Using Thermal Imaging Technique. In Proceedings of the 2014 UKSim-AMSS 16th International Conference on Computer Modelling and Simulation, Cambridge, UK, 26–28 March 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 302–306. [Google Scholar]
  24. Bin, J.; Rahman, C.A.; Rogers, S.; Liu, Z. Tensor-Based Approach for Liquefied Natural Gas Leakage Detection from Surveillance Thermal Cameras: A Feasibility Study in Rural Areas. IEEE Trans. Ind. Inform. 2021, 17, 8122–8130. [Google Scholar] [CrossRef]
  25. Jia, L.; Cheng, P.; Yu, Y.; Chen, S.H.; Wang, C.X.; He, L.; Nie, H.T.; Wang, J.C.; Zhang, J.C.; Fan, B.G.; et al. Regeneration mechanism of a novel high-performance biochar mercury adsorbent directionally modified by multimetal multilayer loading. J. Environ. Manag. 2023, 326, 116790. [Google Scholar] [CrossRef]
  26. Narkhede, P.; Walambe, R.; Mandaokar, S.; Chandel, P.; Kotecha, K.; Ghinea, G. Gas Detection and Identification Using Multimodal Artificial Intelligence Based Sensor Fusion. Appl. Syst. Innov. 2021, 4, 3. [Google Scholar] [CrossRef]
  27. Sharma, A.; Kumar, R.; Kansal, I.; Popli, R.; Khullar, V.; Verma, J.; Kumar, S. Fire Detection in Urban Areas Using Multimodal Data and Federated Learning. Fire 2024, 7, 104. [Google Scholar] [CrossRef]
  28. Havens, K.J.; Sharp, E.J. Thermal Imaging Techniques to Survey and Monitor Animals in The Wild: A Methodology; Academic Press: Cambridge, MA, USA, 2015. [Google Scholar]
  29. Ghosh, A.; Hong, J.; Yin, D.; Ramchandran, K. Robust federated learning in a heterogeneous environment. arXiv 2019, arXiv:1906.06629. [Google Scholar]
  30. Zhao, Y.; Li, M.; Lai, L.; Suda, N.; Civin, D.; Chandra, V. Federated learning with non-IID data. arXiv 2018, arXiv:1806.00582. [Google Scholar] [CrossRef]
  31. Gupta, V.K.; Lalwani, S.K.; Bhati, G.S.; Prakash, S. Bayesian Optimization Based Neural Architecture Search for Classification of Gases/Odors Mixtures. IEEE Sens. J. 2024, 24, 7119–7125. [Google Scholar] [CrossRef]
  32. Ku, W.; Lee, G.; Lee, J.Y.; Kim, D.H.; Park, K.H.; Lim, J.; Cho, D.; Ha, S.C.; Jung, B.G.; Hwang, H.; et al. Rational design of hybrid sensor arrays combined synergistically with machine learning for rapid response to a hazardous gas leak environment in chemical plants. J. Hazard. Mater. 2024, 466, 133649. [Google Scholar] [CrossRef] [PubMed]
  33. Yan, J.; Zhang, H.; Ge, X.; Yang, W.; Peng, X.; Liu, T. A novel bionic olfactory network combined with an electronic nose for identification of industrial exhaust. Microchem. J. 2024, 200, 110287. [Google Scholar] [CrossRef]
  34. Bhatt, P.; Verma, A.; Gangola, S.; Bhandari, G.; Chen, S. Microbial Glycoconjugates in Organic Pollutant Bioremediation: Recent Advances and Applications. Microb. Cell Factories 2021, 20, 72. [Google Scholar] [CrossRef] [PubMed]
  35. Pandey, A.K.; Upreti, H.; Joshi, N.; Uddin, Z. Effect of Natural Convection on 3D MHD Flow of MoS2–GO/H2O via Porous Surface Due to Multiple Slip Mechanisms. J. Taibah Univ. Sci. 2022, 16, 749–762. [Google Scholar] [CrossRef]
  36. Chaudhary, P.; Ahamad, L.; Chaudhary, A.; Kumar, G.; Chen, W.-J.; Chen, S. Nanoparticle-Mediated Bioremediation as a Powerful Weapon in the Removal of Environmental Pollutants. J. Environ. Chem. Eng. 2023, 11, 109591. [Google Scholar] [CrossRef]
  37. Rani, L.; Sahoo, A.K.; Sarangi, P.K.; Yadav, C.S.; Rath, B.P. Feature Extraction and Dimensionality Reduction Models for Printed Numerals Recognition. In Proceedings of the 2022 9th International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, 23 March 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 798–801. [Google Scholar]
  38. Puri, M.; Garg, A.; Rani, L. Cryptocurrency Trading Using Machine Learning. In Proceedings of the 2023 3rd International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE), Greater Noida, India, 12 May 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 2620–2624. [Google Scholar]
  39. Rahate, A.; Mandaokar, S.; Chandel, P.; Walambe, R.; Ramanna, S.; Kotecha, K. Employing Multimodal Co-Learning to Evaluate the Robustness of Sensor Fusion for Industry 5.0 Tasks. Soft Comput. 2022, 27, 4139–4155. [Google Scholar] [CrossRef]
  40. Attallah, O. Multitask Deep Learning-Based Pipeline for Gas Leakage Detection via E-Nose and Thermal Imaging Multimodal Fusion. Chemosensors 2023, 11, 364. [Google Scholar] [CrossRef]
Figure 1. Data classification of multimodal system.
Figure 2. Proposed multimodal federated system.
Figure 3. Statistical properties of the measurements from gas sensors.
Figure 4. Block level connections for dataset collection setup [27].
Figure 5. Configuration parameters of multimodal system.
Figure 6. Comparing different approaches to image data validation.
Figure 7. Comparing different approaches to numerical data validation.
Figure 8. Comparing different approaches to multimodal data validation.
Figure 9. Federated multimodal aggregated results.
Table 1. Literature survey of various techniques.
Reference No. | Proposed Technique | Conclusion | Results
[7] | Microcontroller-based gas leakage detection | Proposed system offers a reliable method for gas leakage detection and evacuation. | Detection accuracy: 95%
[8] | IoT-based industrial plant safety system | System effectively detects gas leakages in industrial settings, enhancing safety measures. | Real-time monitoring capability
[9] | Gas leakage detection based on IoT | Utilizing IoT for gas detection provides a scalable solution with remote monitoring capabilities. | Remote monitoring via smartphone app
[10] | Gas leakage detection system using microcontroller | Microcontroller-based system offers a cost-effective solution for gas detection in various environments. | Low-cost implementation
[11] | Deep learning combined with an electronic nose | Effective in identifying gas information from soybean of different origins. | Gas identification accuracy over 95%
[12] | Machine learning and polarized mixed-potential gas sensors | Enhanced classification of volatile organic compounds (VOCs); improved efficiency and reliability in sensor-based VOC detection. | Classification accuracy of 92%, with false positive rates reduced by 15%
[13] | Gas leak location detection using ultrasonic sensor array | Ultrasonic sensor array enables precise localization of gas leaks, facilitating rapid response measures. | Localization accuracy: 90%
[14] | Spatial-upscaling-based algorithm | Efficient in detecting and estimating concentrations of hazardous gases; high potential for industrial and environmental applications. | Detection accuracy of 90%, estimation error within 5% for hazardous gas concentrations
[15] | Multilevel interleaved group attention-based convolutional network with an electronic nose | Superior gas detection performance with enhanced accuracy and robustness; effectively handles complex gas mixtures and varying environmental conditions. | Detection accuracy of 96%, high robustness against sensor drift and noise
[16] | Online drift compensation framework using active learning | Effectively compensates for sensor drift in real-time applications; ensures accurate gas classification and concentration prediction over extended periods. | Classification accuracy above 90% in long-term operation; drift compensation reduced error rates by 20%
[17] | Gas classification using deep convolutional neural networks | CNNs enable accurate classification of different gas types, improving detection specificity. | Classification accuracy: 92%
[18] | Gas recognition using hybrid CNN and RNN | Hybrid CNN-RNN model offers a fast and robust approach for gas recognition, suitable for real-time applications. | Real-time processing capability
[19] | Convolutional LSTM neural networks for localizing gas sources | Convolutional LSTM networks facilitate autonomous gas source localization in outdoor environments, enhancing monitoring capabilities. | Source localization accuracy: 85%
Table 2. Sensors and corresponding sensitive gases.
Used Sensor | Gases Sensitive to Sensor
Sensor1/MQ2
  • Liquified petroleum gas
  • Methane Gas
  • Butane Gas
  • Smoke
Sensor2/MQ3
  • Smoke
  • Ethanol
  • Alcohol
Sensor5/MQ5
  • Liquified petroleum gas
  • Natural Gas
Sensor6/MQ6
  • Liquified petroleum gas
  • Butane Gas
Sensor7/MQ7
  • Carbon Monoxide Gas
Sensor8/MQ8
  • Hydrogen gas
Sensor135/MQ135
  • Air Quality
Table 3. Number of samples in each class.
Class | No. of Samples
Perfume Class | 1600
Smoke Class | 1600
Both Perfume and Smoke Class | 1600
Neutral Environment | 1600
Total Samples | 6400
Table 4. System setup configurations.
S. No. | Component | Configurations
1 | 1 computer: server | Core i7 processor, NVIDIA 3070 8 GB graphics memory, 32 GB RAM
2 | 5 computers: clients | Core i5 processor, NVIDIA 1650 4 GB graphics memory, 16 GB RAM
3 | Python | Version 3.7
4 | Keras | Version 3.0
5 | TensorFlow | Version 2.14
6 | TensorFlow Federated | Version 1.0
7 | Camera | 5 MP HD
8 | Gas sensors | As listed in Table 2/Figure 4
Table 5. Comparison of different approaches to image data.
Metric | CNN | DenseNet201 | InceptionResNetV2 | MobileNetV2 | VGG16 | VGG19 | Xception
Accuracy | 83.00 | 99.92 | 99.55 | 99.76 | 99.68 | 99.78 | 99.94
Validation Accuracy | 83.43 | 97.96 | 97.68 | 97.81 | 97.34 | 97.03 | 97.96
Precision | 84.76 | 99.94 | 99.60 | 99.78 | 99.72 | 99.78 | 99.96
Validation Precision | 86.59 | 97.96 | 97.96 | 97.96 | 97.34 | 97.03 | 97.96
Recall | 80.93 | 99.90 | 99.45 | 99.58 | 99.64 | 99.76 | 99.82
Validation Recall | 81.09 | 97.96 | 97.96 | 97.81 | 97.34 | 97.03 | 97.96
Loss | 0.40 | 0.07 | 0.11 | 0.10 | 0.08 | 0.07 | 0.07
Validation Loss | 0.38 | 0.14 | 0.16 | 0.17 | 0.17 | 0.18 | 0.15
Table 6. Comparison of different approaches to numerical data.
Metric | BiLSTM_Dense | Dense | LSTM_Dense
Accuracy | 93.39 | 86.10 | 92.38
Validation Accuracy | 94.18 | 86.43 | 94.33
Precision | 93.39 | 86.16 | 92.41
Validation Precision | 94.18 | 86.52 | 94.33
Recall | 93.39 | 86.04 | 92.36
Validation Recall | 94.18 | 86.37 | 94.33
Loss | 0.15 | 0.29 | 0.17
Validation Loss | 0.15 | 0.30 | 0.14
Table 7. Comparison of different approaches to multimodal data.
Model | Accuracy | Loss | Precision | Recall | Val_Accuracy | Val_Loss | Val_Precision | Val_Recall
Multimodal LR 0.0001 | 0.999375 | 0.025256 | 0.934398 | 1 | 0.9875 | 0.057837 | 0.929029 | 0.998125
Multimodal LR 1 | 0.9975 | 0.015876 | 0.940899 | 0.998333 | 0.973125 | 0.15937 | 0.917302 | 0.9775
Table 8. Performance comparison of existing methods with proposed method.
Authors [Ref] | Method | Accuracy | Precision | Recall | Privacy Preserved
[26] | LSTM and CNN | 0.945 | - | - | No
[39] | LSTM and CNN | 0.969 | - | - | No
[40] | Inception, DWT, and Bi-LSTM | 0.985 | 0.985 | 0.985 | No
[30] | Federated deep learning | 95.61 | - | - | Yes
[17] | CNN | 96.67 | 0.96 | 0.96 | No
[18] | PCA, XGBoost | 94.17 | 0.92 | 0.94 | No
Proposed | Multimodal federated learning | 99.7 | 0.94 | 0.99 | Yes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
