Proceeding Paper

Resource Management Techniques for the Internet of Things, Edge, and Fog Computing Environments †

by Koushik Chakraborty 1, Manmohan Sharma 2, Krishnaveni Kommuri 3, Voore Subrahmanyam 4, Pratap Patil 5 and Manmohan Singh Yadav 6,*

1 Office of the Registrar, Adamas University, Kolkata 700126, India
2 Department of Computer Science and Engineering, Manipal University Jaipur, Rajasthan 303007, India
3 Wipro Technologies India Pvt Ltd., Hyderabad 500075, India
4 Department of Computer Science and Engineering, Anurag University, Hyderabad 500088, India
5 Department of IT and Engineering, Amity University in Tashkent, Tashkent 100028, Uzbekistan
6 Department of Computer Science and Engineering (AI), IIMT Group of Colleges, Greater Noida 201310, India
* Author to whom correspondence should be addressed.
Presented at the International Conference on Recent Advances on Science and Engineering, Dubai, United Arab Emirates, 4–5 October 2023.
Eng. Proc. 2023, 59(1), 12; https://doi.org/10.3390/engproc2023059012
Published: 11 July 2023
(This article belongs to the Proceedings of Eng. Proc., 2023, RAiSE-2023)

Abstract: A prospective model for cloud computing federations is the "cloud of clouds", which joins multiple partitioned clouds into a single fluid pool for on-demand operations. Put simply, an inter-cloud would allow a cloud to use resources beyond its own limits through existing agreements with other cloud service providers. Edge computing is a growing computing paradigm that brings computation and devices to or near the user. It means processing data closer to where they are created, handling greater rates and volumes, and enabling more action-led outcomes in real time. Edge nodes continuously process the data they receive with millisecond response times and periodically send summary analytics to the cloud. An example of an edge device is a smartphone connected to a cloud system. Fog computing acts more like a "gateway" for insight and processing control: a fog node connects to multiple edge devices simultaneously, yielding a specialized set of devices for more efficient data handling and storage. Any single cloud has limits on its physical resources and geographic reach; a cloud cannot serve its customers once all its computational and storage capacity is consumed. An inter-cloud system addresses situations in which one cloud gains access to other clouds' infrastructure for computation, storage, or other resources.

1. Introduction

Cloud users frequently require a variety of resources, and their requirements are often varied and unpredictable. This creates challenging problems in resource provisioning and application service delivery. The difficulty lies in unifying cloud frameworks that integrate such resources [1]. The system must be able to anticipate user demand and service behavior; until it can predict these, it cannot make rational decisions about dynamic scaling up and down [2]. Developing prediction and forecasting models is therefore important. Building models that accurately learn and fit statistical functions for different types of behavior is a difficult undertaking, and correlating the various behaviors of a service can be harder still. Because of high operational costs and energy demands, it is critical to improve efficiency, cost-effectiveness, and utilization. The need to determine appropriate software and hardware combinations makes it difficult to match services to cloud resources. Throughout the mapping of services, QoS goals must be met while achieving the highest possible system utilization and efficiency [3].
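The prediction-driven scaling described above can be sketched with a toy forecaster. This is a minimal illustration using simple exponential smoothing, not the paper's method; the workload numbers, function names, and per-VM capacity are all invented for the example.

```python
# A toy sketch of demand forecasting for dynamic scaling, using simple
# exponential smoothing; workload numbers and capacity are hypothetical.

def forecast_next(history, alpha=0.5):
    """Exponentially smoothed one-step-ahead forecast of request load."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def scale_decision(history, per_vm_capacity=100):
    """Choose how many VMs to provision for the forecast load."""
    expected = forecast_next(history)
    return max(1, -(-int(expected) // per_vm_capacity))  # ceiling division

loads = [120, 180, 240, 310]   # requests/sec observed per interval
print(scale_decision(loads))   # VMs to provision for the next interval
```

A real provisioner would also smooth the scale-down path and add hysteresis so transient dips do not trigger churn.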
A market-driven approach that searches for the best combinations of services and deployment strategies is known as combinatorial optimization. It is important to develop optimization models that address both resource-centric and user-centric QoS goals. SMEs may be unable to move to the cloud because they hold a significant number of on-premises IT assets, such as business applications [4]. Because of safety and privacy concerns, sensitive data in an organization may not be moved to the cloud. For on-premises assets and cloud services to cooperate, integration and interoperability are required, and solutions are needed for identity management, data management, and business processes within an organization. System management and monitoring are often carried out with centralized procedures even though the system's components are distributed. Scalability, performance, and reliability problems arise when managing multiple service queues and many service requests, rendering centralized approaches ineffective. Instead, service monitoring and management can benefit from architectures based on decentralized messaging and indexing models [5].
Figure 1 shows that resource management handles the efficient and effective allocation of computing resources. It covers both hardware resources, such as processors and memory, and software resources, such as operating system services and application programs. For any computer system to run smoothly, good resource management is essential [1,2]. In a multiuser system, for instance, poor resource management can let one user hoard resources and slow the system down for everyone else. In a real-time system such as a factory or power plant controller, poor resource management can cause the system to miss its goals and deadlines. Resource management is an essential feature of any operating system: the OS allocates system resources to the various users and applications that are running, with the objective of using the system's resources efficiently and effectively. Several techniques can be used to manage resources. One common strategy is a resource scheduler, a program that decides when and how resources are used. Another is a resource manager, a program that enforces rules about how resources may be used [6]. Resource management is a complicated subject and the focus of much research, but a few fundamental principles are important to understand. First, it is essential to have a clear picture of which resources are available and how they can be used. Second, resources must be used efficiently and effectively. Lastly, the system must be able to respond to new demands and adapt to changing circumstances.
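A rule-enforcing resource manager of the kind just described can be sketched in a few lines. This is a minimal illustration, assuming a per-user quota on abstract resource units; the class name, unit counts, and cap are all invented for the example.

```python
# Minimal sketch of a resource manager that enforces a per-user cap,
# preventing one user from hoarding capacity in a multiuser system.

class QuotaManager:
    """Grants resource units to users, subject to a per-user cap."""

    def __init__(self, total_units, per_user_cap):
        self.free = total_units
        self.cap = per_user_cap
        self.held = {}  # user -> units currently allocated

    def request(self, user, units):
        used = self.held.get(user, 0)
        if units > self.free or used + units > self.cap:
            return False  # rule violated or capacity exhausted
        self.held[user] = used + units
        self.free -= units
        return True

    def release(self, user, units):
        freed = min(units, self.held.get(user, 0))
        self.held[user] = self.held.get(user, 0) - freed
        self.free += freed

mgr = QuotaManager(total_units=16, per_user_cap=8)
print(mgr.request("alice", 8))   # True: within alice's cap
print(mgr.request("alice", 1))   # False: would exceed alice's cap
print(mgr.request("bob", 8))     # True: remaining capacity goes to bob
```

The cap implements exactly the rule the text motivates: no single client can monopolize the system's resources.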

2. State of the Art

When distributing CPU time, the shortest-job-next scheduling algorithm gives the processor to the task with the shortest expected run time; a round-based algorithm, by contrast, is preemptive and cycles through tasks in fixed time slices [7]. Device and graphics management are, in essence, foundations of operating systems; put another way, computer graphics determine the pixel layout on the screen. The cache mechanism allows a server to store data directly so that future requests can be served as quickly as possible. Various other system components require management and may be less well-known. An operating system ensures cooperation among an assortment of devices [8], typically through device drivers. Mobile devices, such as smartphones and tablets, run operating systems tailored to their needs. It is important to note that hardware complexity is hidden by the operating system. It enables access to data and programs that must be moved from secondary storage to primary storage, keeps track of where programs and files are stored, and runs backups. As a system resource manager, the operating system allocates resources to programs and users whenever necessary; the operating system of a computer is, in this way, a resource manager overseeing the machine's internal resources. It allows multiple programs to run concurrently while keeping them in memory. Resource allocation or sharing follows two strategies: multiplexing resources in time and in space. Depending on how a program is time-multiplexed, different programs may take turns on a processor. Scheduling comprises many distinct tasks, including accounting, planning, terminating, and locking processes.
A process is a program in the course of being executed, which is what all modern operating systems manage. It is the OS's duty to allocate resources to processes and to facilitate data sharing and exchange [8].
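The shortest-job-next policy discussed at the start of this section can be sketched directly. This is a non-preemptive toy version that assumes all jobs arrive at time zero; the job names and burst times are invented for the example.

```python
# Minimal sketch of non-preemptive shortest-job-next (SJN) scheduling
# over hypothetical (name, burst_time) jobs, reporting waiting times.

def shortest_job_next(jobs):
    """Run jobs in ascending order of burst time (all assumed to arrive
    at t=0) and return {name: waiting_time}."""
    order = sorted(jobs, key=lambda j: j[1])
    waits, clock = {}, 0
    for name, burst in order:
        waits[name] = clock   # time spent waiting before this job starts
        clock += burst
    return waits

jobs = [("A", 6), ("B", 2), ("C", 4)]
print(shortest_job_next(jobs))  # {'B': 0, 'C': 2, 'A': 6}
```

Sorting by burst time minimizes average waiting time for this workload, which is the classic argument for SJN; round-robin would instead preempt each job after a fixed quantum.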

3. Computing Technology

Fog computing is a decentralized computing infrastructure in which data, compute, storage, and applications are located somewhere between the data source and the cloud [9]. Like edge computing, fog computing brings the advantages and power of cloud computing closer to where data are created and used. Many people use the terms fog computing and edge computing interchangeably, because both involve bringing intelligence and processing closer to where the data are created. This is frequently done to improve efficiency, though it may also be done for security and compliance reasons. The fog metaphor comes from the meteorological term for a cloud close to the ground, just as fog concentrates at the network's edge. The term is often associated with Cisco; the company's product line manager, Ginny Nichols, is reported to have coined it. "Cisco Fog Computing" is a registered name; fog computing itself is open to the whole community. Cloud computing is complemented, not replaced, by fog networking. While the cloud performs resource-intensive, longer-term analytics, fogging makes it possible to perform short-term analytics at the edge [10]. Although edge devices and sensors are where data are created and gathered, they sometimes lack the compute and storage resources to perform heavy analytics and machine-learning tasks. Cloud servers can do this, but they are frequently too far away to process the data promptly [11].
Additionally, when dealing with sensitive data subject to regulations in different countries, having all endpoints connect to and send raw data to the cloud over the internet may have implications for privacy, security, and legality. Well-known fog computing applications include smart grids, smart cities, smart buildings, vehicle networks, and software-defined networks.
According to the OpenFog Consortium started by Cisco, shown in Figure 2, the key difference between edge and fog computing is where the intelligence and compute power are placed. In a strictly foggy environment, intelligence sits at the local area network: data are transmitted from endpoints to a fog gateway, where they are then forwarded to sources for processing and return transmission. In edge computing, intelligence and control can sit either at the endpoint or at a gateway. Supporters of edge computing praise its reduced points of failure, since each device operates independently and decides which data to store locally and which to send to a gateway or the cloud for further analysis. Advocates of fog computing over edge computing argue that it is more scalable and gives a better big-picture view of the network as data from many points are fed into it. It should be stressed, however, that some network engineers consider fog computing to be simply a Cisco branding of one approach to edge computing [12].
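The endpoint-to-fog-gateway-to-cloud flow described above can be sketched as a small aggregation step. This is an illustration only: the function name, batch size, and readings are invented, and the "upload" is stood in for by returning summaries.

```python
# Minimal sketch of a fog gateway: aggregate raw endpoint readings
# locally and forward only compact summaries upstream to the cloud.

def fog_gateway(readings, batch=4):
    """Collect raw sensor readings and emit one summary per batch,
    standing in for what would be uploaded to the cloud."""
    summaries = []
    for i in range(0, len(readings), batch):
        chunk = readings[i:i + batch]
        summaries.append({
            "count": len(chunk),
            "min": min(chunk),
            "max": max(chunk),
            "mean": sum(chunk) / len(chunk),
        })
    return summaries

raw = [21.0, 21.5, 22.0, 21.5, 30.0, 29.5, 29.0, 30.5]
for s in fog_gateway(raw):
    print(s)  # two summaries instead of eight raw readings
```

This captures the bandwidth argument made later in the paper: the cloud receives summaries, not the raw stream.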
There are many potential use cases for fog computing. One increasingly common use case is traffic control. Since sensors, such as those used to detect traffic, are often connected to cellular networks, cities often locate computing assets near cell towers. These computing capabilities enable real-time analytics of traffic data, so traffic signals can respond in real time to changing conditions. This basic concept is also being extended to autonomous vehicles. Autonomous vehicles essentially operate as edge devices because of their substantial onboard computing power. These vehicles must be able to ingest data from a huge number of sensors, perform real-time analytics, and then react appropriately [13].
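The traffic-control use case can be illustrated with a toy decision loop: a roadside fog node extends a signal's green phase when recent vehicle counts cross a threshold. All thresholds, window sizes, and counts here are hypothetical.

```python
# Illustrative real-time decision for the traffic-control use case:
# extend the green phase based on recent vehicle counts from a sensor.

def green_extension(vehicle_counts, threshold=10, window=3):
    """Return extra green seconds based on the mean vehicle count over
    the last `window` samples from a traffic sensor."""
    recent = vehicle_counts[-window:]
    mean_flow = sum(recent) / len(recent)
    if mean_flow > threshold:
        return 15   # heavy flow: extend the green phase
    if mean_flow > threshold / 2:
        return 5    # moderate flow: small extension
    return 0        # light flow: keep the default timing

print(green_extension([4, 12, 14, 16]))  # heavy recent flow -> 15
print(green_extension([2, 3, 2, 4]))     # light flow -> 0
```

Running this at the fog node rather than in the cloud is what makes the millisecond-scale reaction the text describes feasible.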
Since an autonomous vehicle is designed to operate without requiring a cloud connection, it is tempting to think of autonomous vehicles as unconnected devices. In fact, although an autonomous vehicle must be able to drive safely without a cloud connection, it can still use a network when one is available. Some cities are investigating how an autonomous vehicle might work with the same computing assets used to control traffic lights. Such a vehicle might, for instance, act as an edge device and use its onboard computing capability to relay real-time data to the system that ingests traffic data from other sources. The underlying computing environment can then use this data to operate traffic signals more effectively. Like all other technologies, fog computing has its advantages and disadvantages. One advantage is bandwidth conservation: fog computing reduces the volume of data sent to the cloud, thereby lowering bandwidth use and related costs. Another advantage is improved response time: since initial data processing happens close to the data source, latency is reduced and overall responsiveness improves. The objective is millisecond-level responsiveness, enabling data to be processed in near-real time. A further advantage is that it is network-agnostic. Although fog computing mostly places compute resources at the LAN level, as opposed to the device level in edge computing, the network may be considered part of the fog computing architecture; at the same time, fog computing is network-agnostic in the sense that the network can be wired, Wi-Fi, or even 5G. Since fog computing is tied to a physical location, it undermines some of the "anytime/anywhere" benefits associated with cloud computing.
Under unfavorable circumstances, fog computing can be subject to security issues, such as Internet Protocol (IP) address spoofing or man-in-the-middle (MitM) attacks. Fog computing is an arrangement that uses both edge and cloud resources, which means that there are associated hardware costs. Finally, although fog computing has existed for a long time, there is still some ambiguity around its definition, with various vendors defining fog computing in different ways [14].

4. Resources of Computing

This basic concept is also being extended to autonomous vehicles, which essentially operate as edge devices because of their substantial onboard computing capability [15]. These vehicles must ingest data from a huge number of sensors, perform real-time analytics, and then react appropriately. Since an autonomous vehicle is designed to operate without requiring a cloud connection, it is tempting to think of autonomous vehicles as unconnected devices; in fact, although such a vehicle must be able to drive safely without a cloud connection, it can still use a network when one is available. Some cities are considering how an autonomous vehicle might work with the same computing assets used to control traffic lights [16]. Such a vehicle might, for instance, act as an edge device and use its onboard computing capability to relay real-time data to the system that ingests traffic data from other sources; the underlying computing platform can then use this data to operate traffic signals more effectively. Like all other technologies, fog computing has its advantages and disadvantages. Fog computing reduces the volume of data sent to the cloud, thereby reducing bandwidth consumption [17]. It also offers fast response times: since initial data processing happens close to the data source, latency is reduced and responsiveness largely improves, with the objective of millisecond-level responsiveness and near-real-time processing. Although fog computing mostly places compute resources at the LAN level, as opposed to the device level in edge computing, the network may well be considered part of the fog computing architecture [18].
At the same time, fog computing is network-agnostic in the sense that the network can be wired, Wi-Fi, or 5G.
In these locations, critical systems that must function safely and reliably are powered, and the most sensitive data are processed; such locations require low-latency solutions that do not depend on a network connection. Since fog computing is tied to a physical location, it undermines some of the "anytime/anywhere" benefits associated with cloud computing. Under unfavorable circumstances, fog computing can be subject to security issues, such as IP address spoofing or man-in-the-middle attacks. Fog computing is an arrangement that uses both edge and cloud resources, which implies associated hardware costs; and although it has existed for a long time, there is still some uncertainty around its definition, with different vendors characterizing fog computing in different ways. The throughput of the system characterizes the number of client service requests completed in a normal time span [19]. The resource management is shown in Figure 3. Edge computing is a growing computing paradigm in which systems and devices run at or close to the user [20]. Edge computing means processing data closer to where they are created, enabling processing at greater speeds and volumes and yielding more action-led outcomes in near-real time. Most of today's computing already takes place at the edge, in places like hospitals, factories, and retail stores, as shown in Figure 4.
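The throughput metric mentioned above reduces to a simple ratio. This sketch is purely illustrative; the request count and window are made-up values.

```python
# Small sketch of the throughput metric: completed service requests
# divided by the observation window.

def throughput(completed_requests, window_seconds):
    """Services completed per second over an observation window."""
    return completed_requests / window_seconds

# e.g. 1800 requests completed during a 60-second window
print(throughput(1800, 60))  # 30.0 requests/second
```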

5. Conclusion and Future Work

Edge computing's potential to transform businesses in every industry and function, from customer engagement and marketing to production and back-office operations, is what makes it so exciting. In all cases, edge computing helps make businesses more proactive and adaptive, frequently and continuously prompting new, streamlined experiences for individuals. Edge computing enables organizations to bring the digital world into the physical: integrating online data and algorithms into physical stores to enhance customer experiences, creating environments where workers can learn from machines and systems that workers can train, and building intelligent environments that prioritize comfort and safety. Common to all these examples is that edge computing enables businesses to run applications with the most crucial reliability, real-time, and data requirements directly on-site. Ultimately, this allows businesses to innovate faster, bring new products and services to market more quickly, and create new revenue streams.

Author Contributions

Conceptualization, K.C. and V.S.; methodology, K.K.; formal analysis, M.S.; writing—original draft preparation, M.S.Y.; writing—review and editing, P.P. and M.S.Y. All authors contributed equally, and all authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

Author Krishnaveni Kommuri was employed by the company Wipro Technologies India Pvt Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Dora, C.T.; Buradkar, M.U.; Jamal, M.K.; Tiwari, A.; Mamodiya, U.; Goyal, D. A Sustainable and Secure Cloud resource provisioning system in Industrial Internet of Things (IIoT) based on Image Encryption. In Proceedings of the 4th International Conference on Information Management & Machine Intelligence, Jaipur, India, 23–24 December 2022; pp. 1–5. [Google Scholar]
  2. Manikandan, R.; Maurya, R.K.; Rasheed, T.; Bose, S.C.; Arias-Gonzáles, J.L.; Mamodiya, U.; Tiwari, A. Adaptive cloud orchestration resource selection using rough set theory. J. Interdiscip. Math. 2023, 26, 311–320. [Google Scholar] [CrossRef]
  3. Kumar, S.; Gupta, U.; Singh, A.K.; Singh, A.K. Artificial Intelligence: Revolutionizing Cyber Security in the Digital Era. J. Comput. Mech. Manag. 2023, 2, 31–42. [Google Scholar] [CrossRef]
  4. Ravula, A.K.; Ahmad, S.S.; Singh, A.K.; Sweeti, S.; Kaur, A.; Kumar, S. Multi-level collaborative framework decryption-based computing systems. AIP Conf. Proc. 2023, 2782, 020131. [Google Scholar]
  5. Singh, S.; Singh, P.; Tanwar, S. Energy aware resource allocation via MS-SLnO in cloud data center. Multimed. Tools Appl. 2023, 82, 45541–45563. [Google Scholar] [CrossRef]
  6. Singh, P. Energy Management in Cloud Through Green Cloud Technologies. J. Manag. Serv. Sci. (JMSS) 2022, 2, 1–11. [Google Scholar] [CrossRef]
  7. Rawat, A.; Singh, P. A Comprehensive Analysis of Cloud Computing Services. J. Inform. Electr. Electron. Eng. (JIEEE) 2021, 2, 1–9. [Google Scholar] [CrossRef]
  8. Khan, H.; Singh, P. Issues and Challenges of Internet of Things: A Survey. J. Inform. Electr. Electron. Eng. (JIEEE) 2021, 2, 1–8. [Google Scholar] [CrossRef]
  9. Singh, P.; Hailu, N.; Chandran, V. Databases for Cloud Computing: Comparative Study and Review. Eur. J. Acad. Essays 2014, 1, 12–17. [Google Scholar]
  10. Rohinidevi, V.V.; Srivastava, P.K.; Dubey, N.; Tiwari, S.; Tiwari, A. A Taxonomy towards fog computing Resource Allocation. In Proceedings of the 2022 2nd International Conference on Innovative Sustainable Computational Technologies (CISCT), Dehradun, India, 23–24 December 2022; pp. 1–5. [Google Scholar]
  11. Singh, N.K.; Jain, A.; Arya, S.; Gonzales, W.E.G.; Flores, J.E.A.; Tiwari, A. Attack Detection Taxonomy System in cloud services. In Proceedings of the 2022 2nd International Conference on Innovative Sustainable Computational Technologies (CISCT), Dehradun, India, 23–24 December 2022; pp. 1–5. [Google Scholar]
  12. Rangaiah, Y.V.; Sharma, A.K.; Bhargavi, T.; Chopra, M.; Mahapatra, C.; Tiwari, A. A Taxonomy towards Blockchain based Multimedia content Security. In Proceedings of the 2022 2nd International Conference on Innovative Sustainable Computational Technologies (CISCT), Dehradun, India, 23–24 December 2022; pp. 1–4. [Google Scholar]
  13. Kamble, S.; Saini, D.K.J.; Kumar, V.; Gautam, A.K.; Verma, S.; Tiwari, A.; Goyal, D. Detection and tracking of moving cloud services from video using saliency map model. J. Discret. Math. Sci. Cryptogr. 2022, 25, 1083–1092. [Google Scholar] [CrossRef]
  14. Tiwari, A.; Garg, R. Adaptive Ontology-Based IoT Resource Provisioning in Computing Systems. Int. J. Semant. Internet Inf. Syst. (IJSWIS) 2022, 18, 1–18. [Google Scholar] [CrossRef]
  15. Tiwari, A.; Garg, R. Orrs Orchestration of a Resource Reservation System Using Fuzzy Theory in High-Performance Computing: Lifeline of the Computing World. Int. J. Softw. Innov. (IJSI) 2022, 10, 1–28. [Google Scholar] [CrossRef]
  16. Kumar, S.; Kumar, S.; Ranjan, N.; Tiwari, S.; Kumar, T.R.; Goyal, D.; Rafsanjani, M.K. Digital watermarking-based cryptosystem for cloud resource provisioning. Int. J. Cloud Appl. Comput. (IJCAC) 2022, 12, 1–20. [Google Scholar] [CrossRef]
  17. Kumar, S.; Kumari, B.; Chawla, H. Security challenges and application for underwater wireless sensor network. In Proceedings of the International Conference on Emerg, Jaipur, India, 17–18 February 2018; Volume 2, pp. 15–21. [Google Scholar]
  18. Kumar Sharma, A.; Tiwari, A.; Bohra, B.; Khan, S. A Vision towards Optimization of Ontological Datacenters Computing World. Int. J. Inf. Syst. Manag. Sci. 2018, 1, 1–6. [Google Scholar]
  19. Tiwari, A.; Sharma, R.M. Rendering Form Ontology Methodology for IoT Services in Cloud Computing. Int. J. Adv. Stud. Sci. Res. 2018, 3, 273–278. [Google Scholar]
  20. Tiwari, A.; Garg, R. Eagle Techniques In Cloud Computational Formulation. Int. J. Innov. Technol. Explor. Eng. 2019, 1, 422–429. [Google Scholar]
Figure 1. Cloud management structure.
Figure 2. Fog computing system.
Figure 3. Resource management system.
Figure 4. Edge computing system.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
