Proceeding Paper

Recent Developments in Machine Learning Predictive Analytics for Disaster Resource Allocation †

Sunita Pachar, Deepak Dudeja, Neha Batra, Vinam Tomar, John Philip Bhimavarapu and Avadh Kishor Singh
1 Department of IBM, GLA University, Mathura 281406, India
2 Department of Computer Science and Engineering, Maharishi Markandeshwar Deemed to be University, Mullana, Ambala 133203, India
3 Department of Computer Science and Engineering, Manav Rachna International Institute of Research and Studies, Faridabad 121003, India
4 Department of Computer Science and Engineering, Sri Ramaswami Memorial University, Delhi NCR Campus, Sonepat 201204, India
5 Department of Electronics and Communication Engineering, Koneru Lakshmaiah Education Foundation, Guntur 522502, India
6 Department of Computer Science and Engineering, United College of Engineering & Research, Prayagraj 211010, India
* Author to whom correspondence should be addressed.
Presented at the International Conference on Recent Advances on Science and Engineering, Dubai, United Arab Emirates, 4–5 October 2023.
Eng. Proc. 2023, 59(1), 19; https://doi.org/10.3390/engproc2023059019
Published: 11 December 2023
(This article belongs to the Proceedings of Eng. Proc., 2023, RAiSE-2023)

Abstract

To be effective, evidence-driven disaster risk management (DRM) relies on a wide variety of data types, information sources, and models. Weather modeling, earthquake fault rupture simulation, and the creation of dynamic urban exposure metrics all require extensive data collection from many sources in addition to complex science. Various methodologies use artificial intelligence to identify needs and resource availability from sources such as Twitter; however, the most widely accepted and accurate approaches remain unclear. In the event of a catastrophe, machine learning tools for allocating resources are needed to assist those in need without delay. This survey indicates that further work is required to reach agreement on recommended methods for algorithm and model selection, benchmarking datasets, crisis dictionaries, word-embedding techniques, and evaluation procedures. As disasters of all kinds become more common, these tools have the potential to improve real-time emergency management across all phases of a catastrophe. This study aims to provide readers, including data scientists, with a clear and uncomplicated reference on how disaster risk management systems can benefit from machine learning. Information on this set of technologies is abundant, complicated, and constantly changing. The volume of sensor data that can be analyzed has increased exponentially because of enormous increases in computational speed and capacity over the past few decades.

1. Introduction

Understanding that ML algorithms, such as deep learning, often produce results without human intervention is critical because they are not bias-free. It is therefore essential to use varied training datasets and to consider how the available information relates underrepresented and vulnerable populations to development objectives. Because disasters disproportionately affect vulnerable groups, any bias in information regarding these groups' characteristics can have a significant impact [1]. In a similar vein, once the underlying principles of ML models are understood, it should come as no surprise that publishing an algorithm's output may unintentionally expose more private underlying factors. Machine learning is a branch of artificial intelligence. ML algorithms, through computer vision, were pioneers in the statistical analysis of satellite remote sensing data [2]. They now power a wide range of digital services, including search engines and online shopping. From the image, sound, and voice recognition capabilities of our phones to product recommendations during online shopping, from mail sorting to the ranking of search engine results, the terms "AI" and "machine learning" have become commonplace [3]. The same technology is being used to address larger questions about society, such as sustainable development, humanitarian aid, and disaster risk management. Figure 1 illustrates the prediction system for such data. When several ML algorithms work together, for example when they are fed by a large number of physical sensors, a computer system or robot can interact with the physical world in a way that appears intelligent [4]. Self-driving vehicles, robots that mimic and outperform human abilities, and supercomputers that can now carry out some tasks better than people are examples. The same expectations should apply to ML when it comes to improving our ability to answer pressing societal questions accurately, efficiently, and effectively. Case studies in this direction include mapping the informal settlements that house the most vulnerable urban populations and identifying hurricane- and tornado-prone buildings [5].
By exploiting a substantial body of work on image recognition and classification, ML is mainly applied to the segmentation or classification of remotely sensed satellite, aerial, drone, and even street-level imagery to characterize disaster risk. Applications also encompass other kinds of data, from social media posts to seismic sensor networks and building survey records [6]. From making the best use of our land to preparing for and recovering from emergencies, each advance in the application of ML can be, and is being, used to tackle the larger problems that people face [2]. Every year, millions of people around the world are affected by disasters caused by natural events or human activity [6]. These events regularly result in the loss of human life, and disasters also substantially affect infrastructure and property. Before, during, and after a disaster, management strategies are carried out with the goals of avoiding casualties, protecting people and infrastructure, limiting economic damage, and restoring normality [7]. The criticality and complexity of disaster operations, as well as the complexity of the disasters themselves, demand effective decision making, which is made easier by information technology and artificial intelligence in particular. For instance, in a hurricane or sandstorm, an autonomous driving system's vision may be impaired by dust particles, and it must remain safe in hazy conditions [5]. An overall analysis of the machine learning system is given in Figure 2. The disruption of communication during a disaster can be another obstacle. Further challenging tasks include increasing the number of people who are protected during a disaster or pandemic, evacuating people intelligently, identifying vulnerable regions in the spread of a pandemic, reaching the most affected people and providing them with adequate resources, estimating the loss to the economy, and many more [8]. Several ML and DL techniques are used within AI to support disaster management throughout its phases. It is important to note that the successful application of machine learning in disaster resource allocation requires access to relevant data, robust models, and close collaboration between data scientists, domain experts, and disaster response agencies. Additionally, ethical considerations, fairness, and transparency in the allocation process are essential to ensure that resources are distributed equitably to those in need during critical times.

2. State of the Art

There have been productive efforts, including cloud-based platforms, to meet the needs of advanced production setups. In [9,10], we designed a testbed to investigate remote, coordinated access to front-line production resources using the GENI infrastructure. Other cloud platforms have been used as development environments for building software applications. Several cloud-based marketplaces also exist for different domains, and our work develops similar ideas for front-line production application collaboration to satisfy client needs. Task-based access control is static, which may not be appropriate in a rapidly changing setting [11,12]. In attribute-based access control, the access decision is based on the characteristics of the client; this strategy does not scale well when many clients share cloud resources. In the activity-based scheme, the client is assigned a task and is assessed on its suitability under a particular circumstance. Relationship-based access control uses relationship assessments, device policies, and rules to organize what can be accessed; once a relationship graph and hierarchy have been created, the problem becomes difficult to manage. Situation mapping is used to decompose the circumstance and grant access, but an intruder can mimic the circumstances and thereby gain access [13].
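To make the access-control discussion concrete, the following is a minimal sketch of an attribute-based access control (ABAC) check in Python. The policy structure, attribute names, and the is_permitted helper are illustrative assumptions of ours and are not drawn from the systems cited above.

```python
# Minimal attribute-based access control (ABAC) sketch.
# The policy format and attribute names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Request:
    subject: dict    # attributes of the client, e.g. {"role": "responder"}
    resource: dict   # attributes of the cloud resource, e.g. {"zone": "flood-a"}
    action: str      # requested action, e.g. "allocate"

# A policy is a list of rules; every listed attribute must match for the rule to apply.
POLICIES = [
    {"subject": {"role": "responder"}, "resource": {"zone": "flood-a"}, "actions": {"allocate", "read"}},
    {"subject": {"role": "analyst"},   "resource": {},                  "actions": {"read"}},
]

def matches(required: dict, actual: dict) -> bool:
    """Every required attribute must be present with the same value."""
    return all(actual.get(k) == v for k, v in required.items())

def is_permitted(req: Request) -> bool:
    """Grant access if any rule matches the subject, resource, and action."""
    return any(
        matches(rule["subject"], req.subject)
        and matches(rule["resource"], req.resource)
        and req.action in rule["actions"]
        for rule in POLICIES
    )

if __name__ == "__main__":
    req = Request(subject={"role": "responder"},
                  resource={"type": "vm", "zone": "flood-a"},
                  action="allocate")
    print(is_permitted(req))  # True
```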

3. Disaster Management Model of Machine Learning

A disaster management system's challenges must be addressed by the systems developed to assist with disaster prediction [14]. In this section, we provide a fundamental overview of the various ML algorithms that can be utilized for the management of pandemics and disasters. Disaster resource allocation is a critical aspect of disaster management and response, and machine learning can play a significant role in optimizing resource allocation during disasters by helping decision makers make more informed and efficient choices. Machine learning can be applied to disaster resource allocation in several ways. Predictive models can estimate the likelihood and severity of disasters such as hurricanes, earthquakes, or wildfires; by analyzing historical data and current environmental conditions, they can feed early warning systems so that resources are allocated proactively [15]. Models trained on data from past disasters can forecast the demand for resources such as medical supplies, food, water, and emergency personnel, helping authorities pre-position resources in high-risk areas. Machine learning algorithms can optimize evacuation routes by considering factors such as traffic flow, road conditions, and population density, helping to allocate resources for a smooth evacuation process. Data from remote sensing, social media, and IoT devices can be processed to provide real-time situational awareness during a disaster, which decision makers need to allocate resources effectively. Resource allocation models can account for population density, infrastructure, vulnerability, and accessibility in order to direct resources to the areas that need them most [16]. After a disaster, machine learning can analyze satellite images and aerial surveys to assess the extent of damage, which helps prioritize allocation to the areas with the most significant impacts. Finally, machine learning can optimize the supply chain that delivers resources to disaster-affected areas, as shown in Figure 3, considering factors such as transportation routes, vehicle availability, and real-time demand to ensure efficient delivery.
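As an illustration of the demand-forecasting use case described above, the sketch below trains a gradient-boosting regressor on synthetic historical records to estimate relief-supply demand for a district. The feature names, the synthetic data, and the model choice are assumptions made for illustration only; they are not taken from the cited works.

```python
# Hypothetical demand-forecasting sketch for relief supplies (illustrative only).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)

# Synthetic historical records: [population_density, storm_intensity, distance_to_coast_km]
X = rng.uniform(low=[50, 1, 0], high=[5000, 5, 200], size=(500, 3))
# Assumed relationship: demand grows with density and intensity, falls with distance.
y = 0.02 * X[:, 0] * X[:, 1] - 0.1 * X[:, 2] + rng.normal(0, 5, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)
print("MAE on held-out data:", mean_absolute_error(y_test, model.predict(X_test)))

# Predict demand (e.g., water pallets) for a new high-density coastal district.
new_district = np.array([[3200, 4.2, 12.0]])
print("Forecast demand:", model.predict(new_district)[0])
```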
This disaster management model emphasizes the use of machine learning across the various phases of disaster management to enhance decision making, improve resource allocation, and support efficient response and recovery efforts. Collaboration between data scientists, domain experts, and disaster management agencies is essential for the successful implementation of machine learning in disaster management. Data and machine learning algorithms must be fair if safe and responsible AI systems are to be built from the ground up. To ensure that issues such as AI bias are addressed in a meaningful way, stakeholders on both the technical and business sides of AI are constantly looking for fairness [17]; fairness provides a practical means of assessing these systems. Machine learning models can play a crucial role in disaster management by assisting in the various phases of preparedness, response, recovery, and mitigation. A disaster management model that incorporates machine learning begins with data collection and preprocessing: data are gathered from various sources, including weather sensors, remote sensing satellites, social media, government agencies, and historical disaster records, and are then cleaned and preprocessed, for example through data normalization, feature engineering, and the handling of missing values [18]. Machine learning models are used for predictive analytics to forecast the likelihood and severity of disasters based on historical and real-time data, and anomaly detection algorithms are deployed to identify unusual patterns in the data that may indicate an impending disaster, as sketched below. Models are also developed to forecast the demand for resources such as medical supplies, food, and emergency personnel, with historical data and environmental conditions used to optimize the allocation of resources to high-risk areas [19].
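The sketch below illustrates the preprocessing and anomaly detection steps just described: missing sensor readings are imputed, features are normalized, and an isolation forest flags unusual patterns. The column names, contamination setting, and toy values are assumptions for illustration only.

```python
# Illustrative preprocessing + anomaly-detection sketch for sensor feeds (assumed column names).
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import IsolationForest

# Hypothetical raw weather-sensor frame with missing readings.
raw = pd.DataFrame({
    "rainfall_mm":   [12.0, np.nan, 240.0, 8.5, 300.0, 10.2],
    "river_level_m": [1.2, 1.3, 4.8, 1.1, 5.1, np.nan],
    "wind_kmh":      [20, 18, 95, 22, 110, 19],
})

# Handle missing values, then normalize the features (as described in the text).
X = SimpleImputer(strategy="median").fit_transform(raw)
X = StandardScaler().fit_transform(X)

# Flag unusual patterns that may indicate an impending disaster.
detector = IsolationForest(contamination=0.3, random_state=0)
labels = detector.fit_predict(X)          # -1 = anomaly, 1 = normal
print(raw.assign(anomaly=(labels == -1)))
```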

4. Parameters Used for Implementation in Machine Learning

To evaluate an ML algorithm's performance, it is essential to divide the data into training, validation, and testing sets. The training set is used to train the model to recognize the classes we wish to predict. Several model parameters must be set for each ML algorithm; the various parameter settings are assessed, and the best one for the problem is selected by checking the accuracy of the trained model on the validation set. The third group is the testing set [20]. This set is only used at the end to verify the accuracy of the model output and should not be touched during model development. In some situations, such as when a model that has already been developed is applied to a new region, the testing dataset is a brand-new dataset. Once the objective has been established, the initial step is a brief data inventory. ML has greatly aided managers during all phases of a catastrophe, removing irrelevant information and accelerating the processing and analysis of disaster event data [20]. Nonetheless, conventional ML methods cannot directly learn a representation of a complex system from raw data. DL is a subclass of ML that can learn the representation of a complex system automatically for the purposes of prediction, detection, or classification [20]. To learn invariant features and highly complex functions, DL methods build representations with multiple levels of abstraction, obtained by simple, non-linear modules that transform the representation at each level into a higher, more abstract one [12]. Advances in DL enable new approaches in the field of disaster management. Satellite and aerial imaging systems are fundamental to disaster response and damage assessment, and CNNs dominate the associated computer vision tasks [8]. ANNs are widely used as a powerful tool for big data analysis. Text-based networks, such as long short-term memory (LSTM) networks and, more recently, transformers, use their architecture to carry out natural language processing tasks [8]. No specific rule states how the reference data should be divided into training, validation, and testing datasets. One guideline is to randomly assign 50%, 25%, and the remaining 25% of the data to each set, respectively, though different ratios may be used; a sketch of this split is given below. When benchmarks are used to compare algorithms, users are frequently required to submit their model results for a set of data for which they are not given the reference labels.
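As a minimal illustration of the 50%/25%/25% split guideline mentioned above, the following sketch applies scikit-learn's train_test_split twice; the placeholder arrays stand in for real reference data.

```python
# 50% / 25% / 25% train / validation / test split, as suggested in the text.
import numpy as np
from sklearn.model_selection import train_test_split

X, y = np.random.rand(1000, 8), np.random.randint(0, 2, 1000)  # placeholder data

# First carve off 50% for training, then split the remainder evenly.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, train_size=0.50, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.50, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # 500 250 250
```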
For ML algorithms, both the quantity and variety of the training samples are pivotal. Nonetheless, there is a point at which too much data heterogeneity can produce erratic results. Likewise, a feature in one area may resemble something else entirely in another area, requiring distinct models for different areas even when the same outputs are sought. Flood damage prediction models have been found to be more accurate when trained with flood events of varying sizes and locations, whereas deep learning models designed to assess tornado damage to structures had substantially lower accuracy when applied to images from a different geographical region. The number of samples should be comparable across the different classes, and negative examples should also be included. When training a classifier to recognize roofs, for example, it may be necessary to collect a second dataset that contains "everything except roofs" so that the algorithm can learn to distinguish roofs from the rest of the imagery with greater accuracy [12].
Various quality metrics can be used to describe an ML algorithm's accuracy. For classification problems, a confusion (or error) matrix can be used to relate the number of samples per class in the reference data to how they are categorized in the output data. The overall accuracy is determined by dividing the number of correctly classified samples by the total number of samples. The number of true positives divided by the total number of true positives and false positives in each class is the algorithm's precision, also known as correctness or user's accuracy; it indicates the probability that a classified pixel belongs to the correct class. The recall of an algorithm, also known as its completeness or producer's accuracy, is calculated by dividing the total number of true positives by the sum of true positives and false negatives; it indicates the probability that a reference pixel is correctly classified. Both are critical because together they reveal whether a class is being overpredicted or underpredicted [15].
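The classification metrics described above can be computed directly with scikit-learn, as in the toy example below; the reference and predicted labels are invented for illustration.

```python
# Confusion matrix, precision (user's accuracy), and recall (producer's accuracy)
# for a toy two-class example; labels are placeholders.
from sklearn.metrics import confusion_matrix, precision_score, recall_score, accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # reference labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # classifier output

print(confusion_matrix(y_true, y_pred))   # rows: reference class, columns: predicted class
print("overall accuracy:", accuracy_score(y_true, y_pred))
print("precision (TP / (TP + FP)):", precision_score(y_true, y_pred))
print("recall    (TP / (TP + FN)):", recall_score(y_true, y_pred))
```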
The mean absolute error or root mean square error are frequently used as accuracy metrics in regression problems. In some cases, it may not be possible to obtain a quantitative error metric for the model; for instance, with unsupervised clustering, a "true" value may not exist, and visual interpretation of the output can be used to decide whether a clustering algorithm produces meaningful groups. The resources required to handle a task vary, depending on how much data is available, the size of the area of interest, the kind of data available, and the algorithm. Some algorithms can be run on a desktop or laptop with a good graphics processing unit, while others need a server for storage and processing power. The deployment of large, pay-per-use cloud computing services benefits others. By using the power of the cloud, services such as Google Earth Engine have pioneered and broadly reshaped the usual workflow for processing Earth imagery. They have achieved this by removing various tedious and expensive steps in data downloading, archiving, preprocessing, and processing, as well as by maintaining imagery archives that are refreshed for recurring tasks [11].
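For the regression metrics mentioned above, a minimal example is shown below; the observed and predicted values are placeholders.

```python
# Mean absolute error and root mean square error for a regression output (toy values).
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

y_true = np.array([2.0, 3.5, 4.0, 5.5, 7.0])   # observed damage estimates
y_pred = np.array([2.4, 3.1, 4.6, 5.0, 7.8])   # model predictions

mae = mean_absolute_error(y_true, y_pred)
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
print(f"MAE = {mae:.3f}, RMSE = {rmse:.3f}")
```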

5. Challenges, Open Issues, and Future Research Directions

Implementing machine learning models in disaster management is a complex and evolving field with several research issues and challenges that need to be addressed. Access to high-quality and timely data is crucial for machine learning models in disaster management. Research is needed to improve data collection methods, ensure data accuracy, and address data gaps in real-time disaster scenarios. Integrating data from diverse sources, such as weather sensors, social media, government agencies, and remote sensing, can be challenging. Research is required to develop data integration standards and interoperable systems for seamless data sharing.
Machine learning models trained on historical data may not always generalize well to new and unseen disaster scenarios. Researchers need to develop models that adapt to evolving disaster patterns and incorporate transfer learning techniques. Enabling machine learning models to provide real-time decision support to disaster response teams is a significant challenge. Research is needed to improve the model speed, accuracy, and robustness in dynamic and time-critical situations. Disaster management models must account for uncertainties in the data and predictions. Research is required to develop methods for quantifying and managing uncertainty in machine learning models. Scalability is a crucial issue, especially when handling large-scale disasters. Researchers need to develop scalable machine learning algorithms and infrastructure to process massive volumes of data efficiently.
Ensuring that resources are allocated fairly and equitably to all affected populations, including vulnerable and marginalized groups, is essential. Research is needed to develop machine learning models that incorporate fairness and ethical considerations in resource allocation. Addressing these research issues is essential for enhancing the effectiveness and efficiency of machine learning models in disaster management and ensuring a more coordinated and data-driven response to disasters.

6. Proposed Algorithm

The proposed hybrid prediction algorithm (Algorithm 1) searches for the closest match, in the sense of approximation theory, among the disaster-related messages in the social media system. In this study, we processed a large volume of data collected from the AWS cloud system.
Algorithm 1: The proposed hybrid algorithm provides an optimized prediction system
Input: The parameters of the datasets are taken as the input to the algorithm.
Output: The optimized predictions of the messages are returned to the end users.
1:  procedure Predict(Datasets)
2:    if Datasets = Ø then
3:      perform no detection
4:    else
5:      check which class Datasets belongs to
6:      if Datasets = Upper Approximation then
7:        apply the fuzzy optimization system to the datasets
8:        Step 1: divide all the classes into functional and non-functional properties of classes
9:      else if Datasets = Lower Approximation then
10:       apply the fuzzy optimization system to the datasets
11:       Step 2: formulate the different clusters of the lower datasets as rejected
12:     end if
13:   end if
14: end procedure
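Algorithm 1 is given as pseudocode; the sketch below is one possible Python reading of it, assuming that a precomputed relevance score decides whether a message falls into the upper or lower approximation, that a simple piecewise-linear membership function stands in for the fuzzy optimization system, and that an "actionable" flag stands in for the functional/non-functional split. These choices are our illustrative assumptions, not the authors' implementation.

```python
# One possible Python reading of Algorithm 1 (illustrative assumptions throughout).

def fuzzy_membership(score: float, low: float = 0.3, high: float = 0.7) -> float:
    """Piecewise-linear membership in the 'disaster-relevant' fuzzy set (assumed form)."""
    if score <= low:
        return 0.0
    if score >= high:
        return 1.0
    return (score - low) / (high - low)

def hybrid_predict(messages: list[dict], threshold: float = 0.5) -> dict:
    """Classify social media messages following the structure of Algorithm 1."""
    if not messages:                                  # Datasets = Ø -> perform no detection
        return {"functional": [], "nonfunctional": [], "rejected": []}

    functional, nonfunctional, rejected = [], [], []
    for msg in messages:
        membership = fuzzy_membership(msg["score"])   # fuzzy optimization step (assumed)
        if msg["score"] >= threshold:                 # treated as the upper approximation
            # Step 1: split upper-approximation messages into functional / non-functional classes.
            bucket = functional if msg.get("actionable", False) else nonfunctional
            bucket.append({**msg, "membership": membership})
        else:                                         # treated as the lower approximation
            # Step 2: clusters of lower-approximation messages are rejected.
            rejected.append(msg)
    return {"functional": functional, "nonfunctional": nonfunctional, "rejected": rejected}

if __name__ == "__main__":
    demo = [
        {"text": "need water at shelter 4", "score": 0.9, "actionable": True},
        {"text": "roads flooded near the bridge", "score": 0.6, "actionable": False},
        {"text": "great weather today", "score": 0.1},
    ]
    print(hybrid_predict(demo))
```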

7. Evaluation of the Prediction System in Machine Learning

The data points contained within this convex hull are removed from our point collection, and this is shown at the admin end as well. To depict the disaster-affected region accurately, these data can also be plotted on a graph. Figure 4 shows the efficient prediction of message sending and receiving, using machine learning, in a social media system.
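As a sketch of the convex hull step described above, the example below computes the hull of a synthetic point set with SciPy, keeps the boundary points that outline the affected region, and removes the interior points; the coordinates are placeholders.

```python
# Outline a region with a convex hull and keep only the boundary points,
# dropping interior points as described in the text (synthetic coordinates).
import numpy as np
from scipy.spatial import ConvexHull

points = np.random.default_rng(0).uniform(size=(60, 2))          # (lon, lat)-like pairs
hull = ConvexHull(points)

interior = np.setdiff1d(np.arange(len(points)), hull.vertices)   # indices of points inside the hull
remaining = points[hull.vertices]                                 # boundary points that outline the region

print(f"{len(interior)} interior points removed, {len(remaining)} boundary points kept")
```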
Figure 5 shows the most suitable message cost predicted by our proposed hybrid algorithm.

8. Conclusions and Future Work

Natural disasters and pandemics have become significantly more destructive and frequent in recent years. Because of this rise in pandemics and disasters, emergency services are under pressure, which requires the effective operation of ML algorithms and the efficient use of available resources. The use of AI in disaster and pandemic management is analyzed in depth in this article. The ML algorithms were first comprehensively assessed. The subsequent discussion focused on the different phases of pandemics and disasters during which ML algorithms can be used: predicting a disaster, detecting a pandemic, delivering early warnings, completing evacuation plans, limiting the risk of future disasters, social isolation, and other tasks. We also examined significant open problems and how ML methods can address them through several advances. Finally, we described a number of limitations, open issues, and potential research directions. In addition to the signs and symptoms of a disease, ML algorithms should consider the environment and the human immune system to predict a pandemic precisely. Both of these features are missing from the models referred to in our study; although rainfall patterns are used as a feature to predict the spread of cholera, the human immune system is still not taken into account.

Author Contributions

Conceptualization, D.D. and S.P.; methodology, N.B., V.T., J.P.B. and A.K.S.; validation, D.D. and S.P.; formal analysis, N.B., V.T., J.P.B. and A.K.S.; investigation, D.D. and S.P.; resources, D.D. and S.P.; writing—original draft preparation, N.B., V.T., J.P.B. and A.K.S.; writing—review and editing, N.B., V.T., J.P.B. and A.K.S.; visualization, D.D. and S.P.; supervision, N.B., V.T., J.P.B. and A.K.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sun, W.; Bocchini, P.; Davison, B.D. Applications of Artificial Intelligence for Disaster Management. Nat. Haz. 2020, 103, 2631–2689. [Google Scholar] [CrossRef]
  2. Drakaki, M.; Tzionas, P. Investigating the impact of site management on distress in refugee sites using Fuzzy Cognitive Maps. Int. J. Disaster Risk Reduct. 2021, 60, 102282. [Google Scholar] [CrossRef]
  3. Kumar, S. Reviewing Software Testing Models and Optimization Techniques: An Analysis of Efficiency and Advancement Needs. J. Comput. Mech. Manag. 2023, 2, 43–55. [Google Scholar] [CrossRef]
  4. Van Wassenhove, L.N. Blackett memorial lecture humanitarian aid logistics: Supply chain management in high gear. J. Oper. Res. Soc. 2006, 57, 475–489. [Google Scholar] [CrossRef]
  5. Kumar, S.; Gupta, U.; Singh, A.K.; Singh, A.K. Artificial Intelligence: Revolutionizing Cyber Security in the Digital Era. J. Comput. Mech. Manag. 2023, 2, 31–42. [Google Scholar] [CrossRef]
  6. Kumar, S.; Kumari, B.; Chawla, H. Security challenges and application for underwater wireless sensor network. In Proceedings of the International Conference on Emerging Trends in Expert Applications & Security, Jaipur, India, 17–18 February 2018; Volume 2, pp. 15–21. [Google Scholar]
  7. Mehra, A.; Mandal, M.; Narang, P.; Chamola, V. ReviewNet: A fast and resource optimized network for enabling safe autonomous driving in hazy weather conditions. IEEE Trans. Intell. Transp. Syst. 2020, 22, 4256–4266. [Google Scholar] [CrossRef]
  8. Yaqoob, T.; Abbas, H.; Atiquzzaman, M. Security vulnerabilities, attacks, countermeasures, and regulations of networked medical devices—A review. IEEE Commun. Surv. Tutor. 2019, 21, 3723–3768. [Google Scholar] [CrossRef]
  9. Kumar Sharma, A.; Tiwari, A.; Bohra, B.; Khan, S. A Vision towards Optimization of Ontological Datacenters Computing World. Int. J. Inf. Syst. Manag. Sci. 2018, 1–6. [Google Scholar]
  10. Tiwari, A.; Sharma, R.M. Rendering Form Ontology Methodology for IoT Services in Cloud Computing. Int. J. Adv. Stud. Sci. Res. 2018, 3, 273–278. [Google Scholar]
  11. Tiwari, A.; Garg, R. Eagle Techniques In Cloud Computational Formulation. Int. J. Innov. Technol. Explor. Eng. 2019, 1, 422–429. [Google Scholar]
  12. He, L.; Yan, Z.; Atiquzzaman, M. LTE/LTE-A network security data collection and analysis for security measurement: A survey. IEEE Access 2018, 6, 4220–4242. [Google Scholar] [CrossRef]
  13. Mohammadi, M.; Al-Fuqaha, A.; Sorour, S.; Guizani, M. Deep learning for IoT big data and streaming analytics: A survey. IEEE Commun. Surv. Tutor. 2018, 20, 2923–2960. [Google Scholar] [CrossRef]
  14. Tiwari, A.; Garg, R. ACCOS: A Hybrid Anomaly-Aware Cloud Computing Formulation-Based Ontology Services in Clouds. In Proceedings of the ISIC’21: International Semantic Intelligence Conference, Online, 25–27 February 2021; pp. 341–346. [Google Scholar]
  15. Koppaiyan, R.S.; Pallivalappil, A.S.; Singh, P.; Tabassum, H.; Tewari, P.; Sweeti, M.; Kumar, S. High-Availability Encryption-Based Cloud Resource Provisioning System. In Proceedings of the 4th International Conference on Information Management & Machine Intelligence, Jaipur, India, 23–24 December 2022; pp. 1–6. [Google Scholar]
  16. Tiwari, A.; Garg, R. Reservation System for Cloud Computing Resources (RSCC): Immediate Reservation of the Computing Mechanism. Int. J. Cloud Appl. Comput. (IJCAC) 2022, 12, 1–22. [Google Scholar] [CrossRef]
  17. Bansal, G.; Chamola, V.; Narang, P.; Kumar, S.; Raman, S. Deep3DScan: Deep residual network and morphological descriptor based framework for lung cancer classification and 3D segmentation. IET Image Process. 2020, 14, 1240–1247. [Google Scholar] [CrossRef]
  18. Dora Pravina, C.T.; Buradkar, M.U.; Jamal, M.K.; Tiwari, A.; Mamodiya, U.; Goyal, D. A Sustainable and Secure Cloud resource provisioning system in Industrial Internet of Things (IIoT) based on Image Encryption. In Proceedings of the 4th International Conference on Information Management & Machine Intelligence, Jaipur, India, 23–24 December 2022; pp. 1–5. [Google Scholar]
  19. Manikandan, R.; Maurya, R.K.; Rasheed, T.; Bose, S.C.; Arias-Gonzáles, J.L.; Mamodiya, U.; Tiwari, A. Adaptive cloud orchestration resource selection using rough set theory. J. Interdiscip. Math. 2023, 26, 311–320. [Google Scholar] [CrossRef]
  20. Srivastava, P.K.; Kumar, S.; Tiwari, A.; Goyal, D.; Mamodiya, U. Internet of thing uses in materialistic ameliorate farming through AI. AIP Conf. Proc. 2023, 2782, 020133. [Google Scholar]
Figure 1. Basic structure of prediction system in machine learning.
Figure 2. The classification of machine learning.
Figure 3. Steps involved in prediction system of machine learning [6].
Figure 4. The efficient prediction system for the dataset.
Figure 5. Cost prediction analysis for the given dataset.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
