Tele-Trafficking of Virtual Data Storage Obtained from Smart Grid by Replicated Gluster in Syntose Environment
Abstract
1. Introduction
1.1. Replicated Gluster
1.2. Problem and Objective
- (A) The storage of bulk amounts of data (electrical parameters) collected through smart metering is essential for load-flow analysis in SynerGee 4.0 Electric software.
- (B) The availability of enough space to store these results for future analysis is equally important and is therefore studied and implemented here; without it, the concept of a smart grid is not workable.
- (C) The primary study and implementation objective of this paper is the comprehensive characterization and testing of replicated Gluster in a genuinely distributed environment, examining its performance under diverse conditions.
1.3. Contribution of the Work
- Progress in Smart Grids: The work makes progress in the field of smart grids by utilizing tele-trafficking and storage of the data in replicated Gluster. It looks at cutting-edge methods for system optimization and data management, both of which are crucial for updating energy infrastructure.
- Virtual Data Management: One important contribution is the introduction of a simulation model. This method enables utilities to test and improve their systems in a controlled environment by exploring different scenarios and strategies without impacting the real grid.
- Gluster Replication for Normal and Heavy Flow: Scalability and dependability are two requirements that are met by the data replication using Gluster technology in smart grid systems. This improves grid resilience by guaranteeing that the system can manage both typical operations and scenarios of peak demand.
- Integration of GIS and GPRS Data: This effort builds a complete dataset that offers insights on grid performance and operations by merging data from GIS- and GPRS-enabled meters. Utilities can now make data-driven decisions for effective grid management and monitor important indicators in real-time thanks to this integration.
- Real-Time Gathering of Data: It is imperative to prioritize real-time data collection, especially with GPRS-enabled meters, in order to improve grid monitoring and control functions. Utilities are able to quickly detect patterns, abnormalities, and possible problems by recording minute data of voltage levels, energy consumption, and other characteristics.
- Empowerment of Utility: In the end, the work equips utilities with the know-how and resources required to manage the grid efficiently. Utilities may enhance grid operations, increase dependability, and react to shifting demand and environmental circumstances more quickly by giving users access to large datasets and simulation tools.
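The GIS/GPRS integration described above amounts to joining location records with meter readings on a common meter identifier. A toy illustration (the meter IDs, field names, and values below are ours, not data from the paper):

```python
# Hypothetical GIS records: meter ID -> installed location
gis = {
    "MTR-001": {"lat": 30.1575, "lon": 71.5249},
    "MTR-002": {"lat": 30.1981, "lon": 71.4697},
}

# Hypothetical GPRS meter readings arriving from the field
readings = [
    {"meter": "MTR-001", "kwh": 12.4, "voltage": 228.7},
    {"meter": "MTR-002", "kwh": 9.1,  "voltage": 231.2},
]

# Join each reading with its GIS location to build the combined dataset
merged = [
    {**r, **gis[r["meter"]]}
    for r in readings
    if r["meter"] in gis
]

assert merged[0]["lat"] == 30.1575 and merged[0]["kwh"] == 12.4
```

Readings whose meter ID has no GIS record are dropped here; a production pipeline would more likely flag them for review.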
2. Literature Review
2.1. Techniques for Electric Data Extraction from Smart Grid
2.2. Techniques for Virtual Data Management and Its Performance in Terms of UDP and TCP Flow
2.2.1. Distributed File Systems (DFSs)
2.2.2. File System User Space
2.2.3. The Reliable Array of Independent Nodes
- Giving nodes more than one network interface.
- Using network monitoring to avoid single points of failure.
- Implementing group membership protocols for cluster monitoring.
- Incorporating redundancy in the storage nodes with error-correcting codes, as in RAID.
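The last point, RAID-style redundancy via error-correcting codes, can be sketched with simple XOR parity: given N data blocks and one parity block, any single lost block is recoverable. This is our minimal illustration, not code from the RAIN system itself:

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three data blocks stored on three nodes, plus one parity block
data = [b"node-A01", b"node-B02", b"node-C03"]
parity = xor_blocks(data)

# Node B fails: XOR of the parity with the surviving blocks
# reproduces the lost block, since A ^ C ^ (A ^ B ^ C) = B.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```

Real deployments use stronger codes (e.g. Reed-Solomon) to survive multiple simultaneous failures, but the principle is the same.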
2.2.4. Ceph
2.2.5. EOS
2.2.6. Gluster-FS
2.2.7. Comparison Table Regarding Traditional Virtual Data Storing Techniques with Replicated Gluster
3. Research Methodology
3.1. Method Adopted for Electrical Data Analysis from User-Designed Smart Grid
3.2. Method Adopted for Virtual Data Storage and Tele-Trafficking Detail
3.2.1. Tele-Trafficking Procedural Steps
Tele-trafficking is performed under two traffic conditions:
- Normal flow;
- Loaded flow.
- (a) Under normal conditions, 48 h of data from the proposed smart grid are moved to the DFS through the edge-routing application.
- (b) In the DFS, the server manages the data by allocating a separate ID to each file sent from the different clients.
- (c) The DFS server sends these files to Gluster FS and Gluster RS.
- (d) The iPerf utility is run at the server and client levels, and traffic flow is monitored using the tele-trafficking technique.
- (a) Under loaded conditions, 15 days of data from the proposed smart grid are moved to the DFS through the edge-routing application.
- (b) In the DFS, the server manages the data by allocating a separate ID to each file sent from the different clients.
- (c) The DFS server sends these files to Gluster FS and Gluster RS.
- (d) The iPerf utility is run at the server and client levels, and traffic flow is monitored using the tele-trafficking technique.
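The QoS figures monitored in step (d), throughput, jitter, and packet loss, can be computed from raw counts and packet timestamps. A minimal sketch (function names are ours; jitter is computed here as the mean absolute difference of inter-arrival gaps, whereas iPerf's actual estimator is the smoothed form from RFC 3550):

```python
def throughput_mbps(total_bytes, seconds):
    """Throughput in Mbits/s from bytes transferred over an interval."""
    return (total_bytes * 8) / (seconds * 1e6)

def mean_jitter_ms(arrival_times_s):
    """Mean absolute difference of consecutive inter-arrival gaps, in ms."""
    gaps = [t2 - t1 for t1, t2 in zip(arrival_times_s, arrival_times_s[1:])]
    diffs = [abs(g2 - g1) for g1, g2 in zip(gaps, gaps[1:])]
    return 1000 * sum(diffs) / len(diffs)

# 1.5 GB moved in a 15 s test window
print(throughput_mbps(1.5e9, 15))  # -> 800.0

# Packet arrival times in seconds; gaps of 10, 11, 9 ms give 1.5 ms jitter
arrivals = [0.0, 0.010, 0.021, 0.030]
assert abs(mean_jitter_ms(arrivals) - 1.5) < 1e-6
```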
3.2.2. Lab Setup and Performance in Syntose-Based Environment
- (a) Five Core i5 (10th-generation) PCs with 32 GB RAM each.
- (b) Three VMs (virtual machines) on the main PC: two running Gluster and its replicated version, and one running the distributed file server (DFS).
- (c) The other four PCs act as four separate enterprises. Each enterprise has four remote clients, and their data are forwarded to the DFS server using edge routing.
- (d) Wireshark for network graphs.
3.2.3. Algorithm of Proposed Virtual Storage Method Replicated Gluster (RG)
Algorithm 1: Virtual Data Management by Replicated Gluster

```python
# Import Python's library functions
import hashlib
import subprocess


class GlusterManager:
    def __init__(self, volume_name, mount_path):
        self.volume_name = volume_name
        self.mount_path = mount_path

    def store_data(self, data, filename):
        # Generate unique identifier for data
        data_id = hashlib.sha256(data.encode()).hexdigest()
        # Calculate shard location
        shard_location = self.calculate_shard_location(data_id)
        # Write data to GlusterFS volume
        subprocess.run(['gluster', 'volume', 'write',
                        self.volume_name, shard_location, data])
        # Store mapping of the filename to shard location
        self.store_mapping(filename, shard_location)
        return True

    def retrieve_data(self, filename):
        # Retrieve shard location for given filename
        shard_location = self.retrieve_mapping(filename)
        # Read data from GlusterFS volume
        result = subprocess.run(['gluster', 'volume', 'read',
                                 self.volume_name, shard_location],
                                capture_output=True)
        if result.returncode == 0:
            return result.stdout.decode()
        else:
            return None

    def calculate_shard_location(self, data_id):
        # Simplified hash-based shard location calculation
        return data_id[:10]  # Use first 10 characters of hash for simplicity

    def store_mapping(self, filename, shard_location):
        # For simplicity, store mapping in a text file
        with open(f'{self.mount_path}/mapping.txt', 'a') as f:
            f.write(f'{filename}:{shard_location}\n')

    def retrieve_mapping(self, filename):
        # Retrieve shard location from mapping file
        with open(f'{self.mount_path}/mapping.txt', 'r') as f:
            for line in f:
                parts = line.strip().split(':')
                if parts[0] == filename:
                    return parts[1]
        return None

    def replicate_data(self):
        # Trigger replication process
        subprocess.run(['gluster', 'volume', 'replicate',
                        self.volume_name, 'force'])
```
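Because the `gluster volume write`/`read` calls in Algorithm 1 require a live Gluster deployment, the hash-based placement and filename-to-shard mapping logic can be exercised on its own. The following in-memory stand-in is our sketch, not part of the paper, and keeps the same shard rule (the first 10 hex characters of the SHA-256 digest):

```python
import hashlib


class LocalShardStore:
    """In-memory stand-in for the GlusterFS volume in Algorithm 1,
    exercising the same shard placement and mapping logic."""

    def __init__(self):
        self.shards = {}   # shard_location -> data
        self.mapping = {}  # filename -> shard_location

    def calculate_shard_location(self, data_id):
        # Same simplified rule as Algorithm 1
        return data_id[:10]

    def store_data(self, data, filename):
        data_id = hashlib.sha256(data.encode()).hexdigest()
        loc = self.calculate_shard_location(data_id)
        self.shards[loc] = data
        self.mapping[filename] = loc
        return True

    def retrieve_data(self, filename):
        loc = self.mapping.get(filename)
        return self.shards.get(loc) if loc else None


store = LocalShardStore()
store.store_data("V=230.1,I=4.2", "feeder_07.csv")
assert store.retrieve_data("feeder_07.csv") == "V=230.1,I=4.2"
assert store.retrieve_data("missing.csv") is None
```

The filename "feeder_07.csv" and its contents are illustrative; any client file forwarded by the DFS would follow the same path.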
4. On-Site Data Collection and System Verification
4.1. Electrical Data Analysis in the Proposed Smart Grid
4.2. Tele-Trafficking in Virtual Data Storage through Replicated Gluster in Syntose Environment
- Fetching QoS parameters in normal-flow mode for uplink and downlink TCP and UDP throughput.
- Fetching QoS parameters in loaded-flow mode for uplink and downlink TCP and UDP throughput.
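The fetched QoS parameters can be read straight out of iPerf's machine-readable report. A sketch assuming iperf3's `--json` output format (the field names follow iperf3's JSON schema; the values are illustrative, not measurements from this study):

```python
import json

# Trimmed, hypothetical iperf3 --json UDP summary
report = """{
  "end": {
    "sum": {
      "bits_per_second": 519000000.0,
      "jitter_ms": 0.103,
      "lost_packets": 0,
      "packets": 8549
    }
  }
}"""

summary = json.loads(report)["end"]["sum"]
loss_pct = 100.0 * summary["lost_packets"] / summary["packets"]
print(f'{summary["bits_per_second"] / 1e6:.0f} Mbps, '
      f'jitter {summary["jitter_ms"]} ms, '
      f'loss {loss_pct:.2f}%')  # -> 519 Mbps, jitter 0.103 ms, loss 0.00%
```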
4.2.1. QoS Parameters (TCP and UDP Throughput) in Uplink
4.2.2. QoS Parameters (TCP and UDP Throughput) in Downlink
4.3. Case 2: Fetching QoS Parameters in Heavy-Flow Mode for Both Uplink and Downlink TCP and UDP Throughput
4.3.1. QoS Parameters (TCP and UDP Throughput) in Uplink with Heavy Flow
4.3.2. QoS Parameters (TCP and UDP Throughput) in Downlink with Heavy Flow
4.3.3. Concluding the Simulation Results Obtained in Cases 1 and 2
4.4. Contrasting the Tele-Traffic Results of the UDP and TCP Tests for the Proposed Replicated Gluster (RG) with the Legacy Data Storage Method, as Illustrated in Table 1
5. Conclusions and Discussions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Liu, Y.; Qiu, B.; Fan, X.; Zhu, H.; Han, B. Review of Smart Home Energy Management Systems. Energy Procedia 2016, 104, 504–508.
- Ulises, J.; Rodríguez-Urrego, L. Technological Developments in Control Models Using Petri Nets for Smart Grids: A Review. Energies 2023, 16, 3541.
- Zhou, G. Analysis of the Development of Smart Grid and its key technology. In Proceedings of the 2015 International Conference on Industrial Technology and Management Science, Tianjin, China, 27–28 March 2015; pp. 43–44.
- Kong, D. Application of Smart Grid Technology in Power Dispatch Automation. Power Equip. Manag. 2020, 8, 41–44.
- Gerwen, R.; Jaarsma, S.; Wilhite, R. Smart Metering. 2006. Available online: https://idc-online.com/technical_references/pdfs/electrical_engineering/Smart_Metering.pdf (accessed on 9 May 2024).
- Bouchard, P.; Heilig, L.; Shi, X. A Case Study on Smart Grid Technologies with Renewable Energy for Central Parts of Hamburg. Sustainability 2023, 15, 15834.
- Zheng, J.; Okamura, H.; Pang, T.; Dohi, T. Availability importance measures of components in smart electric power grid systems. Reliab. Eng. Syst. Saf. 2021, 205, 107164.
- Zhu, K.; Li, Y.; Mao, W.; Li, F.; Yan, J. LSTM enhanced by dual-attention-based encoder-decoder for daily peak load forecasting. Electr. Power Syst. Res. 2022, 208, 107860.
- Wazid, M.; Das, A.K.; Chamola, V. Uniting Cyber Security and Machine Learning: Advantages, Challenges and Future Research. ICT Express 2022, 8, 313–321.
- Raza, M.A.; Khatri, K.L.; Haque, M.I.U.; Shahid, M.; Rafique, K.; Waseer, T.A. Holistic and scientific approach to the development of sustainable energy policy framework for energy security in Pakistan. Energy Rep. 2022, 8, 4282–4302.
- Raza, M.A.; Khatri, K.L.; Hussain, A. Transition from fossilized to defossilized energy system in Pakistan. Renew. Energy 2022, 190, 19–29.
- Kaloi, G.S.; Baloch, M.H. Smart grid implementation and development in Pakistan: A point of view. Sci. Int. 2016, 4, 3707–3712.
- Rubasinghe, O.; Zhang, X.; Chau, T.; Chow, Y.; Fernando, T.; Ho-Ching Iu, H. A novel sequence to sequence data modelling based CNN-LSTM algorithm for three years ahead monthly peak load forecasting. IEEE Trans. Power Syst. 2024, 39, 1932–1947.
- Bagdadee, A.H.; Zhang, L. Renewable energy based self-healing scheme in smart grid. Energy Rep. 2020, 6, 166–172.
- Jiang, X.; Wu, L. A Residential Load Scheduling Based on Cost Efficiency and Consumer's Preference for Demand Response in Smart Grid. Electr. Power Syst. Res. 2020, 186, 106410.
- Lee, P.K.; Lai, L.L. A practical approach of smart metering in remote monitoring of renewable energy applications. In Proceedings of the 2009 IEEE Power & Energy Society General Meeting, Calgary, AB, Canada, 26–30 July 2009; pp. 1–4+7.
- Bao, C. Innovative Research on Smart Grid Technology in Electrical Engineering. Eng. Technol. Trends 2024, 2.
- Bauer, M.; Plappert, W.; Dostert, K. Packet-oriented communication protocols for smart grid services over low-speed PLC. In Proceedings of the 2009 IEEE International Symposium on Power Line Communications and Its Applications, Dresden, Germany, 29 March–1 April 2009; pp. 89–94.
- Reinsel, D.; Gantz, J.; Rydning, J. The Digitization of the World from Edge to Core. Available online: https://www.seagate.com/files/wwwcontent/our-story/trends/files/idc-seagate-dataage-whitepaper.pdf (accessed on 4 January 2021).
- CERN. Storage|CERN. Available online: https://home.cern/science/computing/storage (accessed on 4 January 2021).
- Mascetti, L.; Rios, M.A.; Bocchi, E.; Vicente, J.C.; Cheong, B.C.K.; Castro, D.; Collet, J.; Contescu, C.; Labrador, H.G.; Iven, J.; et al. CERN Disk Storage Services: Report from last data taking, evolution, and future outlook towards Exabyte-scale storage. EPJ Web Conf. 2020, 245, 04038.
- Low, Y.; Bickson, D.; Gonzalez, J.; Hellerstein, J.M. Distributed GraphLab: A framework for machine learning and data mining in the cloud. Proc. VLDB Endow. 2012, 5, 716–727.
- Zhao, D.; Raicu, I. Distributed File Systems for Exascale Computing. 2012; pp. 1–2. Available online: http://216.47.155.57/publications/2012_SC12_paper_FusionFS.pdf (accessed on 9 May 2024).
- Singh, A.; Ngan, T.W.; Wallach, D.S. Eclipse attacks on overlay networks: Threats and Defenses. In Proceedings of the 25th IEEE INFOCOM, Barcelona, Spain, 23–29 April 2006; pp. 1–12.
- Xiao, D.; Zhang, C.; Li, X. The Performance Analysis of GlusterFS in Virtual Storage. In Proceedings of the International Conference on Advances in Mechanical Engineering and Industrial Informatics, Zhengzhou, China, 11–12 April 2015; pp. 199–203.
- Kumar, M. Characterizing the GlusterFS Distributed File System for Software-Defined Networks Research. Ph.D. Thesis, Rutgers, The State University of New Jersey, School of Graduate Studies, New Brunswick, NJ, USA, 2015; pp. 1–43.
- Levy, E.; Silberschatz, A. Distributed file systems: Concepts and examples. ACM Comput. Surv. 1990, 22, 321–374.
- Hsiao, H.; Chang, H. Load Rebalancing for Distributed File Systems in Clouds. IEEE Trans. Parallel Distrib. Syst. 2012, 25, 951–962.
- Shao, B.; Wang, H.; Li, Y. Trinity: A distributed graph engine on a memory cloud. In Proceedings of the ACM SIGMOD International Conference on Management of Data, New York, NY, USA, 22–27 June 2013; pp. 505–516.
- Wang, L.; Tao, J.; Ranjan, R.; Chen, D. G-Hadoop: MapReduce across distributed data centers for data-intensive computing. Future Gener. Comput. Syst. 2013, 29, 739–750.
- Vrable, M.; Savage, S.; Voelker, G.M. BlueSky: A cloud-backed file system for the enterprise. In Proceedings of the 10th USENIX Conference on File and Storage Technologies, San Jose, CA, USA, 15–17 February 2012; p. 19.
- Zhang, J.; Wu, G.; Wu, X. A Distributed Cache for Hadoop Distributed File System in Real-Time Cloud Services. In Proceedings of the ACM/IEEE 13th International Conference on Grid Computing, Beijing, China, 20–23 September 2012; pp. 12–21.
- Liao, J.; Zhu, L. Dynamic Stripe Management Mechanism in Distributed File Systems. In Proceedings of the 11th IFIP WG 10.3 International Conference NPC, Ilan, Taiwan, 18–20 September 2014; Volume 8707, pp. 497–509.
- Bohossian, V.; Fan, C.C.; LeMahieu, P.S.; Riedel, M.D.; Xu, L.; Bruck, J. Computing in the RAIN: A reliable array of independent nodes. IEEE Trans. Parallel Distrib. Syst. 2001, 12, 99–114.
- Szeredi, M. Libfuse: Libfuse API Documentation. Available online: http://libfuse.github.io/doxygen/ (accessed on 4 January 2021).
- Tarasov, V.; Gupta, A.; Sourav, K.; Trehan, S.; Zadok, E. Terra Incognita: On the Practicality of User-Space File Systems. In Proceedings of the 7th USENIX Workshop on Hot Topics in Storage and File Systems (HotStorage 15), Santa Clara, CA, USA, 6–7 July 2015.
- Ceph Foundation. Architecture—Ceph Documentation. Available online: https://docs.ceph.com/en/latest/architecture/ (accessed on 4 January 2021).
- Weil, S.A.; Brandt, S.A.; Miller, E.L.; Maltzahn, C. CRUSH: Controlled, Scalable, Decentralized Placement of Replicated Data. In Proceedings of the 2006 ACM/IEEE Conference on Supercomputing (SC '06), Tampa, FL, USA, 11–17 November 2006; Association for Computing Machinery: New York, NY, USA, 2006; p. 122-es.
- CERN. Introduction—EOS CITRINE Documentation. Available online: https://eos-docs.web.cern.ch/intro.html (accessed on 4 January 2021).
- CERN. RAIN—EOS CITRINE Documentation. Available online: https://eos-docs.web.cern.ch/using/rain.html (accessed on 4 January 2021).
- Juve, G.; Deelman, E.; Vahi, K.; Mehta, G. Data Sharing Options for Scientific Workflows on Amazon EC2. In Proceedings of the ACM/IEEE International Conference for High-Performance Computing, Networking, Storage and Analysis, Washington, DC, USA, 13–19 November 2010; pp. 1–9.
- Deelman, E.; Berriman, G.B.; Berman, B.P.; Maechling, B.P. An Evaluation of the Cost and Performance of Scientific Workflows on Amazon EC2. J. Grid Comput. 2012, 10, 5–21.
- Davies, A.; Orsaria, A. Scale-out with GlusterFS. Linux J. 2013, 2013, 1.
- Zhang, Q.; Cheng, L.; Boutaba, R. Cloud computing: State of the art and research challenges. J. Internet Serv. Appl. 2010, 1, 7–18.
- Louati, W.; Jouaber, B.; Zeghlache, D. Configurable software-based edge router architecture. Comput. Commun. 2005, 28, 1692–1699.
- Gudu, D.; Hardt, M.; Streit, A. Evaluating the performance and scalability of the Ceph distributed storage system. In Proceedings of the 2014 IEEE International Conference on Big Data (Big Data), Washington, DC, USA, 27–30 October 2014; pp. 177–182.
- Zhang, X.; Gaddam, S.; Chronopoulos, A.T. Ceph Distributed File System Benchmarks on an OpenStack Cloud. In Proceedings of the 2015 IEEE International Conference on Cloud Computing in Emerging Markets (CCEM), Bangalore, India, 25–27 November 2015; pp. 113–120.
- Lim, B.; Arık, S.Ö.; Loeff, N.; Pfister, T. Temporal Fusion Transformers for interpretable multi-horizon time series forecasting. Int. J. Forecast. 2021, 37, 1748–1764.
- Acquaviva, L.; Bellavista, P.; Corradi, A.; Foschini, L.; Gioia, L.; Picone, P.C.M. Cloud Distributed File Systems: A Benchmark of HDFS, Ceph, GlusterFS, and XtremeFS. In Proceedings of the 2018 IEEE Global Communications Conference (GLOBECOM), Abu Dhabi, United Arab Emirates, 9–13 December 2018; pp. 1–6.
- Li, X.; Li, Z.; Zhang, X.; Wang, L. LZpack: A Cluster File System Benchmark. In Proceedings of the 2010 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery, Huangshan, China, 10–12 October 2010; IEEE Computer Society: Washington, DC, USA, 2010; pp. 444–447.
- Lee, J.; Song, C.; Kang, K. Benchmarking Large-Scale Object Storage Servers. In Proceedings of the 2016 IEEE 40th Annual Computer Software and Applications Conference (COMPSAC), Atlanta, GA, USA, 10–14 June 2016; Volume 2, pp. 594–595.
- Cooper, B.F.; Silberstein, A.; Tam, E.; Ramakrishnan, R.; Sears, R. Benchmarking Cloud Serving Systems with YCSB. In Proceedings of the 1st ACM Symposium on Cloud Computing, Indianapolis, IN, USA, 10–11 June 2010; Association for Computing Machinery: New York, NY, USA, 2010; pp. 143–154.
- Red Hat. Chapter 9. Benchmarking Performance. Red Hat Ceph Storage 1.3 | Red Hat Customer Portal. Available online: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/1.3/html/administration_guide/benchmarking_performance (accessed on 4 January 2021).
- Li, J.; Wang, Q.; Jayasinghe, D.; Park, J.; Zhu, T.; Pu, C. Performance Overhead among Three Hypervisors: An Experimental Study Using Hadoop Benchmarks. In Proceedings of the 2013 IEEE International Congress on Big Data, Santa Clara, CA, USA, 27 June–2 July 2013; IEEE Computer Society: Washington, DC, USA, 2013; pp. 9–16.
- IEEE Std 1003.1-2017 (Revision of IEEE Std 1003.1-2008); IEEE Standard for Information Technology–Portable Operating System Interface (POSIX™) Base Specifications, Issue 7. 2018; pp. 2641–2649. Available online: https://ieeexplore.ieee.org/document/8277153/ (accessed on 4 January 2021).
- Coker, R. Bonnie++ Documentation. Available online: https://doc.coker.com.au/projects/bonnie/ (accessed on 4 January 2021).
Features | GlusterFS | EOS | Ceph | Replicated Gluster (RG)
---|---|---|---|---
Design/Architecture | Scale-out Network File System | File-Distributed System | Completely File-Distributed System | Completely Distributed
Node/Volume Failure | May Need Corrections/Maintenance | System Memory/Bricks Failure | No System Failure | No Failure
Placement Technique | All Manual, No Auto | Automatic | Automatic | Automatic
Fault Tolerance/Detection | Not Detected | Detectable/Fully Connected | Detectable/Fully Connected | Automatic
Storage Replication | Semi-Replication | Original File Saved | Original File Saved | Replication
Large/Small Storage Files | Support is Not Efficient | Fully Supported | Fully Suitable | Supported
Checkpointing | Not Favorable | Not Favorable | Not Favorable | Favorable
Network Security | IP/Port-type Control | Better/Advanced CHAP | PAP/Object Replication | Best/Even Replica is Established
Process Time | Slow | Good | Better | Best in its Domain
Normal Uplink Flow | Time | Bandwidth | Transfer
---|---|---|---
TCP Throughput | 15 s | 476 Mbps | 855 MBytes

Normal Uplink Flow | Time | Lost/Total Datagrams | Jitter
---|---|---|---
UDP Throughput | 10 s | 11/8550 (0.13%) | 0.042 ms

Normal Uplink Flow | Time | Bandwidth | Transfer
---|---|---|---
TCP Throughput | 15 s | 578 Mbps | 1.35 GBytes

Normal Uplink Flow | Time | Lost/Total Datagrams | Jitter
---|---|---|---
UDP Throughput | 10 s | 0/8555 (0%) | 0.112 ms

Normal Uplink Flow | Time | Bandwidth | Transfer
---|---|---|---
TCP Throughput | 15 s | 519 Mbps | 932 MBytes

Normal Uplink Flow | Time | Lost/Total Datagrams | Jitter
---|---|---|---
UDP Throughput | 10 s | 0/8549 (0.0%) | 0.103 ms

Normal Uplink Flow | Time | Bandwidth | Transfer
---|---|---|---
TCP Throughput | 15 s | 538 Mbps | 1.26 GBytes

Normal Uplink Flow | Time | Lost/Total Datagrams | Jitter
---|---|---|---
UDP Throughput | 10 s | 0/8555 (0.0%) | 0.112 ms
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Hashmi, W.; Atiq, S.; Hussain, M.M.; Javed, K. Tele-Trafficking of Virtual Data Storage Obtained from Smart Grid by Replicated Gluster in Syntose Environment. Energies 2024, 17, 2344. https://doi.org/10.3390/en17102344