Article

Tele-Trafficking of Virtual Data Storage Obtained from Smart Grid by Replicated Gluster in Syntose Environment

1 Department of Electrical Engineering, Khwaja Fareed University of Engineering & Technology (KFUEIT), Rahim Yar Khan 64200, Pakistan
2 School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh EH14 4AS, UK
3 Department of Electrical Engineering, Institute of Space Technology (IST), Islamabad 44000, Pakistan
* Authors to whom correspondence should be addressed.
Energies 2024, 17(10), 2344; https://doi.org/10.3390/en17102344
Submission received: 3 February 2024 / Revised: 9 May 2024 / Accepted: 10 May 2024 / Published: 13 May 2024

Abstract: One of the most important developments in the energy industry is the evolution of smart grids, which record minute details of voltage levels, energy usage, and other critical electrical variables through General Packet Radio Service (GPRS)-enabled meters. This phenomenon creates an extensive dataset for the optimization of the grid system. However, the minute-by-minute energy details recorded by GPRS meters are challenging to store and manage in physical storage resources (old techniques lead to a memory shortage). This study investigates using the distributed file system, replicated Gluster, as a reliable storage option for handling and protecting the enormous volumes of data produced by smart grid components. This study performs two essential tasks. (1) The storage of virtual data received from GPRS meters and of load flow analyses from SynerGee Electric 4.0 software for the smart grid (electrical data were extracted from 16 outgoing feeders, i.e., distribution lines, for this manuscript). (2) Tele-trafficking is performed to check the performance of replicated Gluster (RG) for virtual data (electrical data received from the smart grid) storage in terms of User Datagram Protocol (UDP), Transmission Control Protocol (TCP), data flow, and jitter delays. Compared to traditional storage methods, this storage technique provides more opportunity to analyze the data and to apply smart techniques efficiently for future requirements analysis and load estimation in smart grids.

1. Introduction

A smart grid is a sophisticated power distribution system that uses digital technology to raise the grid’s sustainability, dependability, and efficiency. Smart grids enhance overall system resilience through modern metering infrastructure (GPRS meters), communication networks (cellular communication), data collection (GPRS devices to data center), load flow analysis (SynerGee Electric), and data storage (replicated Gluster). This phenomenon is represented in Figure 1.
These meters can also remotely disconnect or reconnect an electricity supply to any client and set maximum limitations on the amount of electricity consumed. In research articles [1,2], the authors review the control systems used in smart and microgrids. A smart meter system uses command signals, data transfer devices, and communication for parameter identification and control. Smart meters are expected to be essential in future power distribution networks to track the efficiency and energy characteristics of the grid load. By routinely gathering information on energy usage from all consumers, utility companies are better equipped to control the electricity demand and guide consumers on using appliances economically. As a result, smart meters have control over grid appliances such as current transformers (CT), potential transformers (PT), and capacitor banks [3]. Furthermore, utility firms can detect cases of electricity theft and unauthorized usage with the help of smart meter integration, which improves power quality and distribution efficiency [4]. Future electrical markets will be built to provide customers with extremely dependable, adaptable, easily available, and reasonably priced energy services [5].
The aim of this study is better management of the grid, made possible by real-time monitoring and control of electrical flow through the integration of communication and information technology, together with grid data storage in replicated Gluster.

1.1. Replicated Gluster

Gluster-FS is an open-source distributed file system that can support thousands of concurrent connections and petabytes of data, and it is built for linear scale-out. Its modular, stackable user-space architecture forms a tree in which each functional module is referred to as a translator. Bricks are the actual storage units assigned to the DFS in this system. Communication between data nodes makes copy shifting, high data replication, and rebalancing of the data distribution possible. Figure 2 shows a particular configuration within Gluster-FS called replicated Gluster, or a replicated volume.
It entails duplicating data across several storage nodes to create a replicated volume. Data redundancy is achieved by replicating data among several bricks or storage nodes, and a replica allows access to the data even during a brick failure. The system can still function if one or more nodes fail, since the data are duplicated. Replication can also enhance read performance, since the system may retrieve data from the closest replica. To sum up, replicated Gluster is a configuration or volume type within Gluster-FS where data are replicated over several nodes for fault tolerance and redundancy, while Gluster-FS is the overall distributed file system. Gluster-FS offers a wide variety of volume types, of which replicated volumes are just one; the selection of a particular volume type is contingent upon specific use cases such as performance, data safety, and fault tolerance.
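As a concrete illustration, the minimal sketch below shows how a three-way replicated volume is typically created, started, and mounted with the standard gluster command-line interface, invoked from Python; the hostnames and brick paths are placeholders rather than this study’s lab configuration.

# Minimal sketch: create, start, and mount a 3-way replicated GlusterFS volume.
# Hostnames (node1..node3) and brick paths are illustrative placeholders.
import subprocess

bricks = [f"node{i}:/data/brick1/gv0" for i in (1, 2, 3)]

# Every file written to the volume is copied onto all three bricks.
subprocess.run(["gluster", "volume", "create", "gv0", "replica", "3", *bricks], check=True)
subprocess.run(["gluster", "volume", "start", "gv0"], check=True)

# Clients then access the replicated volume through an ordinary mount point.
subprocess.run(["mount", "-t", "glusterfs", "node1:/gv0", "/mnt/gv0"], check=True)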

1.2. Problem and Objective

The essence of the problem addressed in this research lies in the following:
(A) The storage of bulk amounts of data (electrical parameters) collected through smart metering, which is essential for load flow analysis with SynerGee Electric 4.0 software.
(B) The availability of enough space to store these results for future analysis, which is studied and implemented here; without it, the concept of a smart grid is not feasible.
(C) The primary study and implementation objective of this paper: the comprehensive characterization and testing of replicated Gluster in a genuinely distributed environment, delving into its performance nuances under diverse conditions.

1.3. Contribution of the Work

  • Progress in Smart Grids: The work advances the field of smart grids by utilizing tele-trafficking and storing the data in replicated Gluster. It examines cutting-edge methods for system optimization and data management, both of which are crucial for updating energy infrastructure.
  • Virtual Data Management: One important contribution is the introduction of a simulation model. This method enables utilities to test and improve their systems in a controlled environment by exploring different scenarios and strategies without impacting the real grid.
  • Gluster Replication for Normal and Heavy Flow: Scalability and dependability are two requirements met by data replication with Gluster technology in smart grid systems. This improves grid resilience by guaranteeing that the system can manage both typical operations and peak-demand scenarios.
  • Integration of GIS and GPRS Data: This effort builds a complete dataset that offers insights into grid performance and operations by merging data from GIS- and GPRS-enabled meters. Thanks to this integration, utilities can make data-driven decisions for effective grid management and monitor important indicators in real time.
  • Real-Time Gathering of Data: It is imperative to prioritize real-time data collection, especially with GPRS-enabled meters, in order to improve grid monitoring and control functions. Utilities are able to quickly detect patterns, abnormalities, and possible problems by recording minute-by-minute data on voltage levels, energy consumption, and other characteristics.
  • Empowerment of Utilities: Ultimately, the work equips utilities with the know-how and resources required to manage the grid efficiently. With access to large datasets and simulation tools, utilities can enhance grid operations, increase dependability, and react more quickly to shifting demand and environmental circumstances.
The remainder of the paper is organized as follows: Section 2 provides an in-depth exploration of the existing literature. Section 3 details the methodology of the proposed technique, including information on the lab setup and procedural steps. Section 4 discusses the simulation results, while Section 5 briefly summarizes these findings and outlines potential future directions.

2. Literature Review

This section is divided into two main categories to identify the research gap, as shown in Figure 3. It summarizes the theoretical results of different research techniques for analyzing electrical data from smart grids and for handling those data with different virtual data storage techniques.

2.1. Techniques for Electric Data Extraction from Smart Grid

Among the most important factors to consider when designing such a system are the careful selection of communication technologies and the deliberate design of communication devices. These components have several needs to fulfill, particularly in light of the large amount of data transferred into the smart meter system; the relevant data are sent to Gluster for storage. We evaluate the applicability of energy sources in a smart grid with variable energy requirements and ascertain the impact of virtual buffers, peak shaving, and storage options on unpredictable energy supply [6].
Cryptographic techniques are used to safeguard the unique identification assigned to a customer’s smart meter or equipment for load estimation [7]. In addition to enabling distribution automation, the selected communication network must ensure that the smart meter system continues functioning even during a power outage. The chosen network and its constituent parts must also be economical and capable of supporting “traffic prioritization”, which prioritizes data delivery according to its time and direction sequence [8]. A sizable segment of the populace still has inconsistent access to power, mainly because of widespread load shedding and blackouts brought on by insufficient and ineffective electrical infrastructure [9,10,11,12]. A smart grid system features two distinct modes of power supply. Researchers examine the importance of individual components in the system, seeking to discover its weak sections, and consequently improve the system design and load forecasting for given time periods [13]. Reliable loading of microgrids can be achieved thanks to a smart grid-related strategy; a necessary trade-off has been suggested between financial expertise and cost-saving measures so that customers can take advantage of expediency [14]. Devices in the grid sector must guarantee that generated energy is transmitted correctly and that device data are stored and processed safely. In the distribution industry, control systems actively track and manage malfunctions. Communication devices include smart meters, data collectors, and storage networks. The collection of smart grid data leads to the storage of data through different techniques, and a comparison of these techniques is made below.

2.2. Techniques for Virtual Data Management and Its Performance in Terms of UDP and TCP Flow

Different data storage methods with differences in processing time and data security are discussed as follows:

2.2.1. Distributed File Systems (DFSs)

By combining several distinct file systems into a single folder, or DFS root, a DFS enables the sharing of data and storage resources between computers spread out geographically [15]. This makes data more accessible in case of an outage or spike in demand by logically combining shares from multiple locations. Data stability is guaranteed by duplicating data across several servers, facilitating effective file management and client–server communication for file downloads and uploads. The needs of a DFS in a network include fault tolerance, consistency, transparency, replication, heterogeneity, security, and efficiency. A DFS installation typically requires a minimum of two servers and one client, with workstations and mainframes connected by a LAN. A centralized file system is advantageous because it provides consistent access to and control over data stored on several server nodes. Through the integration of information technology, communication technology, control technology, and other methods, smart grid technology accomplishes intelligent management of power systems [16,17]. Studies show that different DFS implementations in routers, clouds, and networks have improved overall system stability [18]. A DFS ensures high availability and network transparency in a networked, linked server system. To achieve improved concurrency and granularity during data stripe access, it dynamically resizes and redistributes stripes on storage servers, supporting varying stripe widths [19]. For systems with complex access patterns, stripe management is critical in improving response times and increasing data throughput. Fully distributed and client–server models are the primary architectural subcategories of distributed file systems (DFSs). In the fully distributed approach, files are dispersed over all locations, presenting performance and implementation complexity difficulties. In contrast, file servers and clients are the two main players in the client–server paradigm: file servers function as specialized sites meant to hold and retrieve massive amounts of data, and clients on other sites use the servers to access files. The DFS architecture (Figure 4) emphasizes that the connection framework’s quality and load-bearing capacity often set the upper bound. Cloud computing applications, where nodes handle computation and storage, depend on DFSs.
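To make the striping idea concrete, the toy sketch below (our illustration, not any particular DFS’s API) assigns fixed-size chunks of a file to N storage servers in round-robin fashion, which is the basic mechanism behind the stripe widths discussed above.

# Toy illustration of round-robin striping across storage servers.
CHUNK_SIZE = 4 * 1024 * 1024  # 4 MB stripe unit (illustrative)

def stripe_plan(file_size: int, num_servers: int):
    """Map each chunk index of a file to the server that stores it."""
    num_chunks = (file_size + CHUNK_SIZE - 1) // CHUNK_SIZE
    return {chunk: chunk % num_servers for chunk in range(num_chunks)}

# A 10 MB file striped over 3 servers: chunks 0, 1, 2 land on servers 0, 1, 2.
print(stripe_plan(10 * 1024 * 1024, 3))  # {0: 0, 1: 1, 2: 2}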

2.2.2. File System User Space

FUSE, which stands for File System in User Space, is an interface that connects distributed file systems to the Linux kernel from a user-space program, and it is typically present in most Linux distributions [20]. The FUSE library circumvents the fundamental challenge of building a distributed file system into the Linux kernel, enabling the creation of a file system without requiring direct kernel modifications. FUSE has low-level and high-level APIs and supports several operating systems. Due to its flexibility, the FUSE library has been extensively utilized in developing many file systems.
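As a brief illustration of how little code a user-space file system requires, the sketch below uses the third-party fusepy bindings (an assumed choice; the text does not prescribe a specific FUSE library) to expose a single read-only file through a mount point, entirely from user space.

# Minimal user-space file system via fusepy (pip install fusepy).
import errno
import stat
import sys
from fuse import FUSE, FuseOSError, Operations

class HelloFS(Operations):
    DATA = b"hello from user space\n"

    def getattr(self, path, fh=None):
        # Report a root directory and one read-only file.
        if path == "/":
            return {"st_mode": stat.S_IFDIR | 0o755, "st_nlink": 2}
        if path == "/hello.txt":
            return {"st_mode": stat.S_IFREG | 0o444, "st_nlink": 1,
                    "st_size": len(self.DATA)}
        raise FuseOSError(errno.ENOENT)

    def readdir(self, path, fh):
        return [".", "..", "hello.txt"]

    def read(self, path, size, offset, fh):
        return self.DATA[offset:offset + size]

if __name__ == "__main__":
    # Usage: python hellofs.py /mnt/hellofs
    FUSE(HelloFS(), sys.argv[1], foreground=True, ro=True)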

2.2.3. The Reliable Array of Independent Nodes

The Reliable Array of Independent Nodes (RAIN) is a collaborative project of the California Institute of Technology and NASA’s Jet Propulsion Laboratory [21,22,23] on distributed computing and data storage systems for future spaceborne missions. The purpose of this work was to develop a scalable, efficient data computing cluster built from a large number of interconnected hardware and storage components. Using diverse clusters to provide redundancy, the RAIN project prioritizes an efficient working mechanism with scalability, static and dynamic network reconfiguration, cost-effectiveness, and high network resource availability. The project uses four essential strategies to address failures:
  • Giving nodes more than one network interface.
  • Making use of network surveillance to avoid single points of failure.
  • Grouping is put into practice for cluster monitoring.
  • Using error-correcting coding, as in RAID, to incorporate redundancy in storage nodes (a toy sketch follows this list).
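The last strategy can be illustrated with a toy XOR parity scheme (a simplified stand-in for the RAID-style coding RAIN employs): one parity block, stored on a separate node, allows any single lost data block to be rebuilt.

# Toy RAID-style redundancy: XOR parity over equal-sized blocks.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"node", b"fail", b"safe"]   # three 4-byte data blocks on three nodes
parity = xor_blocks(data)            # parity block stored on a fourth node

# Suppose the second block is lost: XOR of the survivors and parity rebuilds it.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == b"fail"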

2.2.4. Ceph

Ceph is an open-source distributed file management system created at the University of California and now run by the Ceph Foundation [24,25]. This system makes unified object, block, and file storage possible. As shown in Figure 5, Ceph offers dependable, high-performance storage capacities ranging from petabytes to exabytes via the Reliable Autonomous Distributed Object Store (RADOS). A monitor (MON) is used to keep the master copy of the cluster map, copies of which are then distributed across the cluster. Figure 5 explains the structure of Ceph.
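As a toy illustration only (a drastic simplification of Ceph’s actual CRUSH placement, not its real API), the sketch below conveys the key idea that any client can compute an object’s location from the cluster map, with no central lookup service.

# Toy stand-in for CRUSH: hash an object name to a placement group,
# then map the placement group to an ordered set of OSDs (storage daemons).
import hashlib

PG_NUM = 128                       # number of placement groups (illustrative)
OSDS = [f"osd.{i}" for i in range(6)]
REPLICAS = 3

def locate(obj_name: str):
    pg = int(hashlib.md5(obj_name.encode()).hexdigest(), 16) % PG_NUM
    # Deterministically pick REPLICAS distinct OSDs for this placement group.
    start = pg % len(OSDS)
    return pg, [OSDS[(start + r) % len(OSDS)] for r in range(REPLICAS)]

print(locate("feeder-16/2024-02-03.csv"))  # same answer on every client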

2.2.5. EOS

Transmission losses in France in 2017 amounted to 11,133 GWh, or 2.36% of the injected energy. The existing energy grid is vulnerable to domino-effect failures, which have resulted in multiple significant blackouts over the years due to its hierarchical asset architecture [26,27,28,29]. X-Root-D, HTTP, WebDAV, and Grid-FTP are among the protocols natively supported by EOS, which consists of three primary components: MGM, FST, and MQ. EOS imposes its own “layout” on the file system, dictating how data are stored. This architecture reduces the possibility of data loss by enabling duplication or erasure coding. Figure 6 illustrates the structure of data flow with EOS.

2.2.6. Gluster-FS

GlusterFS provides two server-side daemons, glusterd and glusterfsd, that offer a command-line interface and help with data archiving; data storage capacity is enhanced while the security of the data is improved [30,31,32,33,34,35]. Because the system operator has little control over loads, generation capacity has been installed to meet peak demand. Peaking power plants, so named for their excess capacity, are only used for a relatively tiny portion of the year, frequently less than 1% in some nations [36,37,38,39,40]. Moreover, transmission losses become a major concern when generating stations are geographically far from loads, requiring electricity transmission across large distances. Transmission losses in France vary from 2% to 3% of the injected energy, as [41] shows. Several research works have assessed distributed file system performance. Ceph was deployed on commodity servers by Gudu et al. [42,43,44], who then examined its scalability, multi-use capability, high availability, and performance efficiency for various applications. They tested the effects of increasing the object storage server count, client count, and object size on file system performance. GlusterFS, an open-source distributed file system, is distinguished by its remarkable scalability, ability to extend to 72 brontobytes, and adept handling of many clients. By harnessing InfiniBand RDMA or TCP/IP interconnect technologies, GlusterFS consolidates storage building blocks, unifying disk and memory resources within an integrated global namespace. Its stackable user-space design positions it as a high-performance solution adaptable to diverse workloads. Figure 7 explains the GlusterFS working mechanism; related work also covers load forecasting [45,46,47,48,49].
GlusterFS seamlessly accommodates standard clients executing applications over conventional IP networks. As depicted in Figure 7, the system exemplifies the accessibility of application data and files within a global namespace, employing standard protocols like NFS and SMB/CIFS [50,51,52,53,54,55,56]. Notably, the system’s architecture is optimized to leverage commodity software, ensuring compatibility with almost any operating system without necessitating kernel fine-tuning.

2.2.7. Comparison Table Regarding Traditional Virtual Data Storing Techniques with Replicated Gluster

Table 1 compares the traditional virtual data storage techniques with the replicated Gluster used in this research article.
Among the available and applicable data storage techniques compared in Table 1, replicated Gluster appears to be the best and most reliable, and it is implemented and tested in this research article. The findings of Table 1 are further tested in Section 4.3.

3. Research Methodology

The research methodology is based on three tasks. Task 1 concerns electrical data extraction from the smart grid, Task 2 concerns the virtual storage of the electrical data received from SynerGee Electric 4.0 software and GPRS meters, and Task 3 involves the tele-trafficking of the virtual data storage obtained from the smart grid in replicated Gluster in a Syntose environment.

3.1. Method Adopted for Electrical Data Analysis from User-Designed Smart Grid

A combination of hardware and software tools is used to take points at various locations on Earth and trace them on a map using GIS. Utilizing the GIS program involves creating maps, conducting spatial analysis, and making defensible judgments in light of the field data. The process is summarized in Figure 8, Figure 9, Figure 10 and Figure 11.
Figure 8 shows the GIS software: a GIS program supporting real-time data collection. Several popular options are Fulcrum, QGIS, and ArcGIS Collector. For the analysis in this paper, ArcGIS is used to make maps, gather spatial data, and sync them with a central GIS database. The GIS program configures the map to comply with our specifications for gathering field data.
Figure 9 shows the geographic coordinates (latitude, longitude, and occasionally elevation) acquired using a GPS receiver. The GPS device records a point at each place where data points are to be gathered. When the GIS application is launched on the laptop, the software automatically records the geographic coordinates and gives each point further details or properties, like a category, description, or other pertinent data. The collected data sync with the main GIS database.
Figure 10 shows the platforms for advanced analytics: these systems analyze gathered data for valuable insights. Machine learning techniques can be used to identify algorithms for machine learning to identify trends, anticipate malfunctions, and enhance grid performance. SynerGee Electric software is utilized for load flow analysis.
Multiple GPRS meters are installed on each feeder of a grid station to collect a large amount of data, as shown in Figure 12. Almost 10 to 15 distribution feeders are installed on each grid station, with the same number of GPRS meters, each sending information every 5 to 10 min, which is displayed on the control room screen, as shown in Figure 13. This creates a bulk of data that must be stored and made available for load flow analysis, security, and analytical processes.
Figure 12 shows the smart meters: installed at customer locations, smart meters monitor energy use continuously or periodically. These meters often come with integrated communication features and automatically transfer data to utility companies; every 10 s, data are sent to the data collection center through the GPRS cellular service (Mtec GPRS meters). Communication networks: to transfer data between the various grid components, smart grids utilize communication networks, which can comprise specialized communication networks and cellular networks (GPRS).
Figure 13 shows the centralized data centers: after being extracted, the data are frequently sent to these facilities for processing, storage, and analysis. These hubs are essential for controlling and enhancing the grid’s overall performance. The storage of these data is important and is performed here through tele-trafficking and storage in replicated Gluster.

3.2. Method Adopted for Virtual Data Storage and Tele-Trafficking Detail

3.2.1. Tele-Trafficking Procedural Steps

The procedural steps are adapted for two conditions, namely:
  • Normal flow;
  • Loaded condition.
Normal Flow:
(a) Under normal conditions, we moved 48 h of data from the proposed smart grid to the DFS through the edge routing application.
(b) In the DFS, the server manages the data by allocating separate IDs to all the files sent from different clients.
(c) The DFS server sends these files to Gluster FS and replicated Gluster (RG).
(d) The I-perf utility is run at the server and client levels, and traffic flow is monitored using the tele-trafficking technique.
Loaded Condition:
(a) Under loaded conditions, we moved 15 days of data from the proposed smart grid to the DFS through the edge routing application.
(b) In the DFS, the server manages the data by allocating separate IDs to all the files sent from different clients.
(c) The DFS server sends these files to Gluster FS and replicated Gluster (RG).
(d) The I-perf utility is run at the server and client levels, and traffic flow is monitored using the tele-trafficking technique (example invocations are sketched below).
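For reference, the sketch below shows typical I-perf (iperf2) invocations for 15 s TCP and UDP tests of this kind, wrapped in Python for consistency with the code used elsewhere in this paper; the server address is a placeholder.

# Illustrative iperf2 invocations for the TCP and UDP tests (15 s runs).
import subprocess

SERVER = "192.0.2.10"  # placeholder address of the Gluster-side iperf server

# On the server side: iperf -s (TCP) or iperf -s -u (UDP).
# On each client/enterprise:
subprocess.run(["iperf", "-c", SERVER, "-t", "15", "-i", "1"])        # TCP throughput
subprocess.run(["iperf", "-c", SERVER, "-u", "-b", "100M",           # UDP at 100 Mbit/s
                "-t", "15", "-i", "1"])                               # reports jitter/loss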

3.2.2. Lab Setup and Performance in Syntose-Based Environment

The requirements for Syntose environment creation are as follows:
(a) Five Core i5 10th-generation PCs with 32 GB RAM each.
(b) Three VMs (virtual machines) on the main PC: two running Gluster and its replicated version and one running the distributed file server (DFS).
(c) The other four PCs act as four separate enterprises; each enterprise has four remote clients, and their data are forwarded to the DFS server using edge routing.
(d) Wireshark for network graphs.

3.2.3. Algorithm of Proposed Virtual Storage Method Replicated Gluster (RG)

The algorithm of virtual data storage in replicated Gluster is shown in Algorithm 1, as follows:
Algorithm 1: Virtual Data Management by Replicated Gluster
# Import Python's library functions
import hashlib
import subprocess

class GlusterManager:
    def __init__(self, volume_name, mount_path):
        self.volume_name = volume_name
        self.mount_path = mount_path

    def store_data(self, data, filename):
        # Generate a unique identifier for the data
        data_id = hashlib.sha256(data.encode()).hexdigest()
        # Calculate the shard location
        shard_location = self.calculate_shard_location(data_id)
        # Write data to the GlusterFS volume
        # (NOTE: 'gluster volume write' is an illustrative pseudo-command; the real
        # gluster CLI manages volumes, and file I/O normally goes through the mount path.)
        subprocess.run(['gluster', 'volume', 'write', self.volume_name, shard_location, data])
        # Store the mapping of the filename to the shard location
        self.store_mapping(filename, shard_location)
        return True

    def retrieve_data(self, filename):
        # Retrieve the shard location for the given filename
        shard_location = self.retrieve_mapping(filename)
        # Read data from the GlusterFS volume (illustrative pseudo-command, as above)
        result = subprocess.run(['gluster', 'volume', 'read', self.volume_name, shard_location],
                                capture_output=True)
        if result.returncode == 0:
            return result.stdout.decode()
        else:
            return None

    def calculate_shard_location(self, data_id):
        # Simplified hash-based shard location calculation
        return data_id[:10]  # Use the first 10 characters of the hash for simplicity

    def store_mapping(self, filename, shard_location):
        # For simplicity, store the mapping in a text file on the mounted volume
        with open(f'{self.mount_path}/mapping.txt', 'a') as f:
            f.write(f'{filename}:{shard_location}\n')

    def retrieve_mapping(self, filename):
        # Retrieve the shard location from the mapping file
        with open(f'{self.mount_path}/mapping.txt', 'r') as f:
            for line in f:
                parts = line.strip().split(':')
                if parts[0] == filename:
                    return parts[1]
        return None

    def replicate_data(self):
        # Trigger the replication process (illustrative pseudo-command)
        subprocess.run(['gluster', 'volume', 'replicate', self.volume_name, 'force'])
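A short, hypothetical usage sketch of Algorithm 1 follows; the volume name, mount path, and file names are placeholders, and, as noted in the comments above, the 'gluster volume write/read' calls are illustrative pseudo-commands rather than real gluster CLI subcommands.

# Hypothetical usage of the GlusterManager class from Algorithm 1.
manager = GlusterManager(volume_name="smartgrid_vol", mount_path="/mnt/gluster")

# Store one 10 s feeder reading and mirror it across the replicas.
manager.store_data("feeder=Walton;V=11.2kV;I=240A;pf=0.93", "walton_2024-02-03T10-00.rec")
manager.replicate_data()

# Later, retrieve the reading by filename.
print(manager.retrieve_data("walton_2024-02-03T10-00.rec"))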

4. On-Site Data Collection and System Verification

4.1. Electrical Data Analysis in the Proposed Smart Grid

A complete design mechanism of this paper's technique is shown in Figure 14, which explains that electrical energy is generated through different resources and collected in an intelligent arrangement called a grid system, which has now been transformed into the smart grid system. The different resources show that a bulk amount of energy can be efficiently transferred to the end users through a smart grid system, which is why modern techniques and methods are needed for efficient operation of the grid system. Figure 14 shows that all the grid data are further remitted through tele-trafficking to replicated Gluster for virtual data storage; the two Glusters show the replicated arrangement of the data storage technique, from generation to the smart grid station and the flow of energy data to the end. Replicated Gluster is used for the storage and processing of the electrical analysis. The electrical data obtained from various feeders in the proposed smart grid are illustrated in Figure 15, Figure 16, Figure 17, Figure 18, Figure 19, Figure 20, Figure 21 and Figure 22.
Figure 15 illustrates the two feeders starting from the smart grid, called Walton Colony and Must Iqbal Road. The real-time data show the values of voltage, current, and power in kW; kWh shows the total units consumed, and the efficient use of active energy (power factor, pf) is also important. The storage of all these data is important for running the load flow analysis used to calculate present and future energy demands; the smart grid concept provides a comprehensive approach to meeting these goals. The above data are collected through two resources: one is the GPRS meters in the data center, and the other is personal field visits, and all the values mentioned in the diagrams are cross-verified to avoid any risk of miscalculation or false analysis. Verified data give confidence in the scientific results, and the results can be verified in later sections or used for future load estimation.
Figure 16 illustrates the real-time data of New Mandi, PECO 2, and Gulistan Colony. The data received and stored provide great help in running load flow analysis. Data are collected every 10 s. The data from the above-mentioned distribution lines are collected through two resources: GPRS meters and personal visits to the grid station. The mentioned distribution lines start from the same grid station but have different GPRS meters and different numbers of users. Verified data for each feeder of the same grid also provide the opportunity to transfer or share load with a nearby feeder or distribution line. Verification through on-site personal visits provides deeper insight alongside smart and modern techniques. The same results are available from the data center, but visiting the site, taking the voltage and current values, and then calculating the energy used is the best way of verification.
Figure 17 illustrates the data collection of the PEL, Ittefaq Hospital, and IT Park feeders. The data show the meter communication, breaker availability, meter details, and all technical data for load flow analysis. Multiple instruments can be added, like power transformers, and the frequency can be used to understand the synchronization of one smart grid area with another. Only the smart grid and the smart applications used in this research can provide real-time data analysis. On-site data verification provides an opportunity to see the current condition of the installed GPRS meters and their working environment, which is possible when working engineers visit the site, since there is a possibility of meter damage or of some values not being communicated.
Figure 18 illustrates the feeders starting from the smart grid called Amer Sadhu, IC-T2, and Liaqat Abad. The real-time data show the values of voltage, current, and power in kW; kWh shows the total units consumed, and the efficient use of active energy (pf) is also important. A site visit to the above-mentioned feeders further provides a power factor measurement opportunity. A lower power factor can depend on many factors; the measured power factor is 0.93, due to which power loss is evident and losses to the end customers are excessive. Grid station instrumentation maintenance is also an important factor through which overall system efficiency can be increased.
Figure 19 shows the on-site visit to the Defense Colony, Model Town, and Township feeders. It shows that, in the existing situation, only the Model Town feeder is working while the other two are facing load shedding. At present, the Model Town feeder also runs at a lower power factor, which leads to a higher loss of energy. Lowering feeder losses with advanced techniques is also required to make better use of the available resources and other factors, mainly through power factor improvement.
Figure 20 illustrates the parameters of Bahar Colony, Model Colony, and Ferozpur Road. The different feeders in Figure 15, Figure 16, Figure 17, Figure 18, Figure 19, Figure 20, Figure 21 and Figure 22 show the variation of kWh and kVAR, which reflects the variation in load and number of users. This is linked to the losses and load burden on the system. Overloaded and high-loss feeders reduce overall system efficiency, leading to power loss and sudden system tripping with load shedding and loss of resources. These feeders show that no electricity is being distributed at the current time.
Figure 21 shows the on-site visit to the IC-T3, PAF Colony, and Madina Colony feeders. On-site data collection shows that all these feeders are facing load shedding: no kW or kVAR values are reflected, and no current is being distributed. The power factor information is not shown due to the unavailability of electricity to the end users. The distribution transformer rating is available, and the transformer is in working condition. At this time, cross-verification of these feeders is not available, so we have to rely on data received earlier while the feeders were in working condition. Here, the importance of the smart meter can never be ignored.
Figure 22 illustrates the parameters of Tariq, R A Bazar, and Alnoor Town. As in Figure 15, Figure 16, Figure 17, Figure 18, Figure 19, Figure 20, Figure 21 and Figure 22, the feeders show variation of kWh and kVAR, reflecting the variation in load and number of users; this is linked to the losses and load burden on the system. Overloaded and high-loss feeders reduce overall system efficiency, leading to power loss and sudden system tripping with load shedding and loss of resources. These feeders are not shown in working condition, so the power factor is also not shown. Data collected in the data center through the smart meters at earlier times remain accurate. Field visits show that some of the feeders are defective and have been disconnected due to pending payment issues or power factor improvement issues with some industries. Nevertheless, site visits and data collection generate a great deal of knowledge.

4.2. Tele-Trafficking in Virtual Data Storage through Replicated Gluster in Syntose Environment

Tele-trafficking in virtual data storage through replicated Gluster in a Syntose Environment is shown in Figure 14.
  • Fetching QoS parameters in normal flow mode for uplink and downlink in TCP and UDP throughput.
  • Fetching QoS parameters in loaded flow mode for uplink and downlink TCP and UDP throughput.

4.2.1. QoS Parameters (TCP and UDP Throughput) in Uplink

The bandwidth and transfer rate for TCP and UDP throughput are calculated over 15 s of simulation time when a 100 GB media file is sent from the enterprises to both Gluster-FS and replicated Gluster. Replicated Gluster is tested for smaller data files (a few MB) and heavy files (GB). The results for both file types and the corresponding graphs are shown in Figure 23.
To better interpret the data in Figure 23a, Table 2 summarizes the QoS parameters in uplink for TCP throughput, where Bd is the bandwidth and T is the transfer rate.
To better interpret the data shown in Figure 23b, Table 3 summarizes the QoS parameters in uplink for UDP throughput, where Bd represents the bandwidth and J represents the jitter.
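For readers unfamiliar with the jitter figure J reported by I-perf, UDP jitter is computed as a smoothed interarrival jitter in the manner of RFC 3550; a minimal sketch of that estimator (our illustration, not I-perf's source code) follows.

# RFC 3550-style smoothed interarrival jitter, as reported by iperf for UDP.
def update_jitter(jitter, transit_prev, transit_now):
    """transit = arrival_time - send_time for consecutive packets (seconds)."""
    d = abs(transit_now - transit_prev)
    return jitter + (d - jitter) / 16.0  # exponential smoothing with gain 1/16

jitter, prev = 0.0, 0.020
for transit in (0.021, 0.019, 0.025):
    jitter = update_jitter(jitter, prev, transit)
    prev = transit
print(f"jitter = {jitter * 1000:.3f} ms")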
Figure 24a,b illustrate the TCP and UDP graphs (uplink normal flow) obtained from the GNU plot utility.

4.2.2. QoS Parameters (TCP and UDP Throughput) in Downlink

The bandwidth and transfer rate for both TCP and UDP throughput are calculated over 15 s of simulation time when a 100 GB media file is sent from the enterprises to both GlusterFS and replicated Gluster. The Gluster statistical data fetched by the I-Perf utility after 15 s are shown in Figure 25.
To better interpret the data shown in Figure 25a, Table 4 summarizes the QoS parameters in downlink for TCP throughput, where Bd represents bandwidth and T the transfer rate.
To better interpret the data shown in Figure 25b, Table 5 summarizes the QoS parameters in downlink for UDP throughput, where Bd represents the bandwidth and J represents the jitter.
Figure 26a,b illustrate the TCP and UDP graphs (downlink normal flow) obtained from the GNU plot utility.

4.3. Case 2: Fetching QoS Parameters in Heavy Flow Mode for Both Uplink and Downlink TCP and UDP Throughput

4.3.1. QoS Parameters (TCP and UDP Throughput) in Uplink with Heavy Flow

The bandwidth and transfer rate for TCP and UDP throughput are calculated over 15 s of simulation time when a 1000 GB media file is sent from the enterprises to both Gluster-FS and replicated Gluster. The statistical data fetched by the utility over 15 s are shown in Figure 27.
To better interpret the data shown in Figure 27a, Table 6 summarizes the QoS parameters in uplink for TCP throughput, where Bd is the bandwidth and T is the transfer rate.
To better interpret the data shown in Figure 27b, Table 7 summarizes the QoS parameters in uplink for UDP throughput, where Bd represents the bandwidth and J represents the Jitter.
Figure 28a,b illustrate the TCP and UDP graphs (uplink heavy flow) obtained from the GNU plot utility.

4.3.2. QoS Parameters (TCP and UDP Throughput) in Downlink with Heavy Flow

The bandwidth and transfer rate for TCP and UDP throughput are calculated over 15 s of simulation time when a 1000 GB media file is sent from the enterprises to both Gluster-FS and replicated Gluster. The statistical data fetched from the I-Perf utility after 15 s are shown in Figure 29.
To better interpret the data shown in Figure 29a, Table 8 summarizes the QoS parameters in downlink for TCP throughput, where Bd is bandwidth and T is the transfer rate.
To better interpret the data shown in Figure 29b, Table 9 summarizes the QoS parameters in downlink for UDP throughput, where Bd represents the bandwidth and J represents the jitter.
Figure 30a,b illustrate the TCP and UDP graphs (downlink heavy flow) obtained from the GNU plot utility.

4.3.3. Concluding the Simulation Results Obtained in Cases I and II

Concerning the QoS parameters with TCP and UDP throughputs in both the uplink and downlink cases of heavy and normal flow in Section 4.1, Section 4.2 and Section 4.3, the performance of the proposed Gluster method remains unaffected.

4.4. Contrasting the Tele-Traffic Results in Terms of UDP and TCP Test with Proposed Replicated Gluster (RG) with Legacy Data Storage Method as Illustrated in Table 1

A comparison of different methods for virtual data storage discussed in Table 1 is carried out with the proposed virtual data storage method RG (replicated Gluster) in terms of UDP and TCP test results. The TCP results comparison is shown in Figure 31, and the UDP results comparison is shown in Figure 32. The results demonstrate that replicated Gluster performance is far superior to traditional data storage systems.

5. Conclusions and Discussions

Discussion: This study presents the data received from GPRS meters and the GIS coordinates of distribution lines, and the saving of all data in replicated Gluster for better management of huge amounts of electrical data and load flow analyses. The collected data are further tested, and load flow analyses are conducted to see the working behavior and improvement requirements of the distribution lines. These results and important electrical parameters are stored and processed in replicated Gluster as light and heavy files. Different data storage types are also examined, and the finding is that replicated Gluster is an excellent fit for the processing, analysis, and storage of large amounts of data.
Real-time data have been made available by integrating GPRS smart meters into the analysis framework, enabling a thorough investigation of power consumption trends and system behavior. Using sophisticated software tools for load flow analysis, these data have made it possible to identify bottlenecks, voltage fluctuations, places of load imbalance, and load forecasting, and present energy demands within the electrical grid.
Tele-trafficking was conducted on a user-defined simulation model for virtual data management through replicated Gluster in both normal and heavy flow. The QoS parameters in both cases mentioned above, for TCP and UDP throughput, show that whether a heavy file is uploaded or downloaded, the performance of replicated Gluster is unaffected, and it performs efficiently under all test cases (normal flow and loaded flow, uplink and downlink versions). Replicated Gluster has been shown to be a strong option for data storage, guaranteeing the safety, dependability, and availability of the enormous amount of data produced throughout the analysis process. The replicated architecture improves data resilience by lowering the chance of data loss and guaranteeing the availability of vital data for upcoming analyses and decision-making.
In conclusion, a thorough framework for evaluating, displaying, and storing electric grid data has been made possible by the cooperation of GIS, GPRS smart meters, SynerGee Electric software, and replicated Gluster. This integrated strategy improves grid management effectiveness, makes informed decisions easier, and strengthens the power distribution system’s overall resilience and dependability. Throughout, the data storage bandwidth, jitter, and throughput of replicated Gluster remain unaltered, which shows its reliability and efficiency. For future work, the proposed data storage method could be applied to an advanced SDN controller with network load management to enhance the QoS parameters. Load flow analysis could be extended into a more in-depth analytical and technical study to improve user satisfaction and the maintenance of long-range (many kilometers) distribution lines. The collected GIS and GPRS data are a valuable asset for field workers, providing a platform for system maintenance. In the progressing world, these valuable tools are important assets in the electrical, communication, and power sectors.

Author Contributions

The authors’ contributions to this paper are as follows: study conception and design: W.H. and S.A.; data collection: W.H., S.A. and K.J.; analysis and interpretation of results: W.H. and S.A.; draft manuscript preparation: W.H., S.A. and M.M.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available with the corresponding author and can be provided upon appropriate request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liu, Y.; Qiu, B.; Fan, X.; Zhu, H.; Han, B. Review of Smart Home Energy Management Systems. Energy Procedia 2016, 104, 504–508. [Google Scholar] [CrossRef]
  2. Ulises, J.; Rodríguez-Urrego, L. Technological Developments in Control Models Using Petri Nets for Smart Grids: A Review. Energies 2023, 16, 3541. [Google Scholar] [CrossRef]
  3. Zhou, G. Analysis of the Development of Smart Grid and its key technology. In Proceedings of the 2015 International Conference on Industrial Technology and Management Science, Tianjin, China, 27–28 March 2015; pp. 43–44. [Google Scholar] [CrossRef]
  4. Kong, D. Application of Smart Grid Technology in Power Dispatch Automation. Power Equip. Manag. 2020, 8, 41–44. [Google Scholar]
  5. Gerwen, R.; Jaarsma, S.; Wilhite, R. Smart Metering. 2006. Available online: https://idc-online.com/technical_references/pdfs/electrical_engineering/Smart_Metering.pdf (accessed on 9 May 2024).
  6. Bouchard, P.; Heilig, L.; Shi, X. A Case Study on Smart Grid Technologies with Renewable Energy for Central Parts of Hamburg. Sustainability 2023, 15, 15834. [Google Scholar] [CrossRef]
  7. Zheng, J.; Okamura, H.; Pang, T.; Dohi, T. Availability importance measures of components in smart electric power grid systems. Reliab. Eng. Syst. Saf. 2021, 205, 107164. [Google Scholar] [CrossRef]
  8. Zhu, K.; Li, Y.; Mao, W.; Li, F.; Yan, J. LSTM enhanced by dual-attention-based encoder-decoder for daily peak load forecasting. Elec. Power Syst. Res. 2022, 208, 107860. [Google Scholar] [CrossRef]
  9. Wazid, M.; Das, A.K.; Chamola, V. Uniting Cyber Security and Machine Learning: Advantages, Challenges and Future Research. ICT Express 2022, 8, 313–321. [Google Scholar] [CrossRef]
  10. Raza, M.A.; Khatri, K.L.; Haque, M.I.U.; Shahid, M.; Rafique, K.; Waseer, T.A. Holistic and scientific approach to the development of sustainable energy policy framework for energy security in Pakistan. Energy Rep. 2022, 8, 4282–4302. [Google Scholar] [CrossRef]
  11. Raza, M.A.; Khatri, K.L.; Hussain, A. Transition from fossilized to defossilized energy system in Pakistan. Renew. Energy 2022, 190, 19–29. [Google Scholar] [CrossRef]
  12. Kaloi, G.S.; Baloch, M.H. Smart grid implementation and development in Pakistan: A point of view. Sci. Int. 2016, 4, 3707–3712. [Google Scholar]
  13. Rubasinghe, O.; Zhang, X.; Chau, T.; Chow, Y.; Fernando, T.; Ho-Ching Iu, H. A novel sequence to sequence data modelling based CNN-LSTM algorithm for three years ahead monthly peak load forecasting. IEEE Trans. Power Syst. 2024, 39, 1932–1947. [Google Scholar] [CrossRef]
  14. Bagdadee, A.H.; Zhang, L. Renewable energy based selfhealing scheme in smart grid. Energy Rep. 2020, 6, 166–172. [Google Scholar] [CrossRef]
  15. Jiang, X.; Wu, L. A Residential Load Scheduling Based on Cost Efficiency and Consumer’s Preference for Demand Response in Smart Grid. Electr. Power Syst. Res. 2020, 186, 106410. [Google Scholar] [CrossRef]
  16. Lee, P.K.; Lai, L.L. A practical approach of smart metering in remote monitoring of renewable energy applications. In Proceedings of the 2009 IEEE Power & Energy Society General Meeting, Calgary, AB, Canada, 26–30 July 2009; pp. 1–4+7. [Google Scholar]
  17. Bao, C. Innovative Research on Smart Grid Technology in Electrical Engineering. Eng. Technol. Trends 2024, 2. [Google Scholar] [CrossRef]
  18. Bauer, M.; Plappert, W.; Dostert, K. Packet-oriented communication protocols for smart grid services over low-speed PLC. In Proceedings of the 2009 IEEE International Symposium on Power Line Communications and Its Applications, Dresden, Germany, 29 March–1 April 2009; pp. 89–94. [Google Scholar]
  19. Rydning, D.R.J.G.J. The Digitization of the World from Edge to Core. Available online: https://www.seagate.com/files/wwwcontent/our-story/trends/files/idc-seagate-dataage-whitepaper.pdf (accessed on 4 January 2021).
  20. CERN. Storage|CERN. Available online: https://home.cern/science/computing/storage (accessed on 4 January 2021).
  21. Mascetti, L.; Rios, M.A.; Bocchi, E.; Vicente, J.C.; Cheong, B.C.K.; Castro, D.; Collet, J.; Contescu, C.; Labrador, H.G.; Iven, J.; et al. CERN Disk Storage Services: Report from last data taking, evolution, and future outlook towards Exabyte-scale storage. EPJ Web Conf. EDP Sci. 2020, 245, 04038. [Google Scholar] [CrossRef]
  22. Low, Y.; Bickson, D.; Gonzalez, J.; Hellerstein, J.M. Distributed Graphlah: A framework for machine learning and data mining in the cloud. Proc. VLDB Endow. 2012, 5, 716–727. [Google Scholar] [CrossRef]
  23. Zhao, D.; Raicu, I. Distributed File Systems for Exascale Computing. 2012; pp. 1–2. Available online: http://216.47.155.57/publications/2012_SC12_paper_FusionFS.pdf (accessed on 9 May 2024).
  24. Singh, A.; Ngan, T.W.; Wallach, D.S. Eclipse attacks on overlay networks: Threats and Defenses. In Proceedings of the 25th IEEE INFOCOM, Barcelona, Spain, 23–29 April 2006; pp. 1–12. [Google Scholar]
  25. Xiao, D.; Zhang, C.; Li, X. The Performance Analysis of GlusterFS in Virtual Storage. In Proceedings of the International Conference on Advances in Mechanical Engineering and Industrial Informatics, Zhengzhou, China, 11–12 April 2015; pp. 199–203. [Google Scholar]
  26. Kumar, M. Characterizing the GlusterFS Distributed File System for Software-Defined Networks Research. Ph.D. Thesis, Rutgers The State University of New Jersey, School of Graduate Studies, New Brunswick, NJ, USA, 2015; pp. 1–43. [Google Scholar]
  27. Levy, E.; Silberschatz, A. Distributed file systems: Concepts and examples. ACM Comput. Surv. 1990, 22, 321–374. [Google Scholar] [CrossRef]
  28. Hsiao, H.; Chang, H. Load Rebalancing for Distributed File Systems in Clouds. IEEE Trans. Parallel Distrib. Syst. 2012, 25, 951–962. [Google Scholar] [CrossRef]
  29. Shao, B.; Wang, H.; Li, Y. Trinity: A distributed graph engine on a memory cloud. In Proceedings of the ACM SIGMOD International Conference on Management of Data, New York, NY, USA, 22–27 June 2013; pp. 505–516. [Google Scholar]
  30. Wang, L.; Tao, J.; Ranjan, R.; Chen, D. G-Hadoop: MapReduce across distributed data centers for data-intensive computing. Future Gener. Comput. Syst. 2013, 29, 739–750. [Google Scholar] [CrossRef]
  31. Vrable, M.; Savage, S.; Voelker, G.M. BlueSky: A cloud-backed file system for the enterprise. In Proceedings of the 10th USENIX conference on File and Storage Technologies, San Jose, CA, USA, 15–17 February 2012; p. 19. [Google Scholar]
  32. Zhang, J.; Wu, G.; Wu, X. A Distributed Cache for Hadoop Distributed File System in Real-Time Cloud Services. In Proceedings of the ACM/IEEE 13th International Conference on Grid Computing, Beijing, China, 20–23 September 2012; pp. 12–21. [Google Scholar]
  33. Liao, J.; Zhu, L. Dynamic Stripe Management Mechanism in Distributed File Systems. In Proceedings of the 11th IFIP WG 10.3 International Conference NPC, Ilan, Taiwan, 18–20 September 2014; Volume 8707, pp. 497–509. [Google Scholar]
  34. Bohossian, V.; Fan, C.C.; LeMahieu, P.S.; Riedel, M.D.; Xu, L.; Bruck, J. Computing in the RAIN: A reliable array of independent nodes. IEEE Trans. Parallel Distrib. Syst. 2001, 12, 99–114. [Google Scholar] [CrossRef]
  35. Szeredi, M. Libfuse: Libfuse API Documentation. Available online: http://libfuse.github.io/doxygen/ (accessed on 4 January 2021).
  36. Tarasov, V.; Gupta, A.; Sourav, K.; Trehan, S.; Zadok, E. Terra Incognita: On the Practicality of User-Space File Systems. In Proceedings of the 7th USENIX Workshop on Hot Topics in Storage and File Systems (HotStorage 15), Santa Clara, CA, USA, 6–7 July 2015. [Google Scholar]
  37. Ceph Foundation. Architecture—Ceph Documentation. Available online: https://docs.ceph.com/en/latest/architecture/ (accessed on 4 January 2021).
  38. Weil, S.A.; Brandt, S.A.; Miller, E.L.; Maltzahn, C. CRUSH: Controlled, Scalable, Decentralized Placement of Replicated Data. In Proceedings of the 2006 ACM/IEEE Conference on Supercomputing, Tampa, FL, USA, 11–17 November 2006; Association for Computing Machinery: New York, NY, USA, 2006. SC ’06. p. 122-es. [Google Scholar]
  39. CERN. Introduction—EOS CITRINE Documentation. Available online: https://eos-docs.web.cern.ch/intro.html (accessed on 4 January 2021).
  40. CERN. RAIN—EOS CITRINE Documentation. Available online: https://eos-docs.web.cern.ch/using/rain.html (accessed on 4 January 2021).
  41. Juve, G.; Deelman, E.; Vahi, K.; Mehta, G. Data Sharing Options for Scientific Workflows on Amazon EC2. In Proceedings of the ACM/IEEE International Conference for High-Performance Computing, Networking, Storage and Analysis, Washington, DC, USA, 13–19 November 2010; pp. 1–9. [Google Scholar]
  42. Deelman, E.; Berriman, G.B.; Berman, B.P.; Maechling, B.P. An Evaluation of the Cost and Performance of Scientific Workflows on Amazon EC2. J. Grid Comput. 2012, 10, 5–21. [Google Scholar]
  43. Davies, A.; Orsaria, A. Scale-out with GlusterFS. Linux J. 2013, 2013, 1. [Google Scholar]
  44. Zhang, Q.; Cheng, L.; Boutaba, R. Cloud computing: State of the art and research challenges. Braz. Comput. Soc. J. Internet Serv. Appl. 2010, 1, 7–18. [Google Scholar] [CrossRef]
  45. Louati, W.; Jouaber, B.; Zeghlache, D. Configurable software-based edge router architechture. Comput. Commun. 2005, 28, 1692–1699. [Google Scholar] [CrossRef]
  46. Gudu, D.; Hardt, M.; Streit, A. Evaluating the performance and scalability of the Ceph distributed storage system. In Proceedings of the 2014 IEEE International Conference on Big Data (Big Data), Washington, DC, USA, 27–30 October 2014; pp. 177–182. [Google Scholar]
  47. Zhang, X.; Gaddam, S.; Chronopoulos, A.T. Ceph Distributed File System Benchmarks on an OpenStack Cloud. In Proceedings of the 2015 IEEE International Conference on Cloud Computing in Emerging Markets (CCEM), Bangalore, India, 25–27 November 2015; pp. 113–120. [Google Scholar]
  48. Lim, B.; Arık, S.Ö.; Loeff, N.; Pfister, T. Temporal Fusion Transformers for interpretable multi-horizon time series forecasting. Int. J. Forecast. 2021, 37, 1748–1764. [Google Scholar] [CrossRef]
  49. Acquaviva, L.; Bellavista, P.; Corradi, A.; Foschini, L.; Gioia, L.; Picone, P.C.M. Cloud Distributed File Systems: A Benchmark of HDFS, Ceph, GlusterFS, and XtremeFS. In Proceedings of the 2018 IEEE Global Communications Conference (GLOBECOM), Abu Dhabi, United Arab Emirates, 9–13 December 2018; pp. 1–6. [Google Scholar]
  50. Li, X.; Li, Z.; Zhang, X.; Wang, L. LZpack: A Cluster File System Benchmark. In Proceedings of the 2010 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery, Huangshan, China, 10–12 October 2010; IEEE Computer Society: Washington, DC, USA, 2010; pp. 444–447. [Google Scholar]
  51. Lee, J.; Song, C.; Kang, K. Benchmarking Large-Scale Object Storage Servers. In Proceedings of the 2016 IEEE 40th Annual Computer Software and Applications Conference (COMPSAC), Atlanta, GA, USA, 10–14 June 2016; Volume 2, pp. 594–595. [Google Scholar]
  52. Cooper, B.F.; Silberstein, A.; Tam, E.; Ramakrishnan, R.; Sears, R. Benchmarking Cloud Serving Systems with YCSB. In Proceedings of the 1st ACM Symposium on Cloud Computing, Indianapolis, IN, USA, 10–11 June 2010; Association for Computing Machinery: New York, NY, USA, 2010; pp. 143–154. [Google Scholar]
  53. Red Hat. Chapter 9. Benchmarking Performance Red Hat Ceph Storage 1.3|Red Hat Customer Portal. Available online: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/1.3/html/administration_guide/benchmarking_performance (accessed on 4 January 2021).
  54. Li, J.; Wang, Q.; Jayasinghe, D.; Park, J.; Zhu, T.; Pu, C. Performance Overhead among Three Hypervisors: An Experimental Study Using Hadoop Benchmarks. In Proceedings of the 2013 IEEE International Congress on Big Data, Santa Clara, CA, USA, 27 June–2 July 2013; IEEE Computer Society: Washington, DC, USA, 2013; pp. 9–16. [Google Scholar]
  55. IEEE Std 1003.1-2017 (Revision of IEEE Std 1003.1-2008); IEEE Standard for Information Technology–Portable Operating System Interface (POSIX(TM)) Base Specifications, Issue 7. 2018; pp. 2641–2649. Available online: https://ieeexplore.ieee.org/document/8277153/ (accessed on 4 January 2021).
  56. Coker, R. Bonnie++: Russell Coker’s Documents. Available online: https://doc.coker.com.au/projects/bonnie/ (accessed on 4 January 2021).
Figure 1. Smart metering structure in smart grid.
Figure 2. Replicated Gluster internal architecture with brick formation for maximum data storage capacity.
Figure 3. The structural arrangement of the literature review.
Figure 4. Design and modelling of DFS (distributed file server).
Figure 5. Memory slots of Ceph.
Figure 6. Memory slots and data flow of EOS.
Figure 7. GlusterFS working mechanism.
Figure 8. GIS import data in load flow analysis software SynerGEE Electric 4.0.
Figure 9. Load flow analysis software SynerGEE Electric 4.0 results achieved on single feeder analysis (software-generated figure).
Figure 10. Load flow analysis with software SynerGEE Electric.
Figure 11. Load flow analysis software SynerGEE Electric detail feeder results from each phase (3-phase electric system installed in Pakistan), with total load and loss calculation.
Figure 11. Load flow analysis software SynerGEE Electric detail feeder results from each phase (3-phase electric system installed in Pakistan), with total load and loss calculation.
Energies 17 02344 g011
Figure 12. Grid station survey and viewpoint of feeder (distribution lines) panels.
Figure 12. Grid station survey and viewpoint of feeder (distribution lines) panels.
Energies 17 02344 g012
Figure 13. Data collected by smart energy GPRS meters in the control room located next to NEPRA (National Electric Power Regulatory Authority).
Figure 13. Data collected by smart energy GPRS meters in the control room located next to NEPRA (National Electric Power Regulatory Authority).
Energies 17 02344 g013
Figure 14. Generation of electrical energy to smart grid (data center) and data fetching to Gluster network.
Figure 14. Generation of electrical energy to smart grid (data center) and data fetching to Gluster network.
Energies 17 02344 g014
Figure 15. Electric parameters for feeders (Walton Colony and Must Iqbal Road). (a) On-field data verification of electrical parameters of feeders through GPRS meters.* shows the incoming and outgoing of power (b) On-field data verification of energy parameters distributed to end users.
Figure 15. Electric parameters for feeders (Walton Colony and Must Iqbal Road). (a) On-field data verification of electrical parameters of feeders through GPRS meters.* shows the incoming and outgoing of power (b) On-field data verification of energy parameters distributed to end users.
Energies 17 02344 g015
Figure 16. Electric feeder parameters (New Mandi, PECO 2, and Gulistan Colony). (a) On-field data verification of electrical parameters of feeders through GPRS meters. (b) On-field data verification of energy parameters distributed to end users.
Figure 16. Electric feeder parameters (New Mandi, PECO 2, and Gulistan Colony). (a) On-field data verification of electrical parameters of feeders through GPRS meters. (b) On-field data verification of energy parameters distributed to end users.
Energies 17 02344 g016
Figure 17. Electric parameters for feeders (PEL, Ittefaq Hospital, and IT Park). (a) On-field data verification of electrical parameters of feeders through GPRS meters. (b) On-field data verification of energy parameters distributed to end users.
Figure 17. Electric parameters for feeders (PEL, Ittefaq Hospital, and IT Park). (a) On-field data verification of electrical parameters of feeders through GPRS meters. (b) On-field data verification of energy parameters distributed to end users.
Energies 17 02344 g017
Figure 18. Electric parameters for feeders (Amer Sadhu and Liaqat Abad). (a) On-field data verification of electrical parameters of feeders through GPRS meters. (b) On-field data verification of energy parameters distributed to end users.
Figure 18. Electric parameters for feeders (Amer Sadhu and Liaqat Abad). (a) On-field data verification of electrical parameters of feeders through GPRS meters. (b) On-field data verification of energy parameters distributed to end users.
Energies 17 02344 g018
Figure 19. Electric parameters for feeders (Defense Colony, Model Town, and Township). (a) On-field data verification of electrical parameters of feeders through GPRS meters. (b) On-field data verification of energy parameters distributed to end users.
Figure 19. Electric parameters for feeders (Defense Colony, Model Town, and Township). (a) On-field data verification of electrical parameters of feeders through GPRS meters. (b) On-field data verification of energy parameters distributed to end users.
Energies 17 02344 g019
Figure 20. Electric feeder parameters (Bahar Colony, Model Colony, and Ferozpur Road). (a) On-field data verification of electrical parameters of feeders through GPRS meters. (b) On-field data verification of energy parameters distributed to end users.
Figure 20. Electric feeder parameters (Bahar Colony, Model Colony, and Ferozpur Road). (a) On-field data verification of electrical parameters of feeders through GPRS meters. (b) On-field data verification of energy parameters distributed to end users.
Energies 17 02344 g020
Figure 21. Electric parameters for feeders (PAF Walton and Madina Colony). (a) On-field data verification of electrical parameters of feeders through GPRS meters. (b) On-field data verification of energy parameters distributed to end users.
Figure 21. Electric parameters for feeders (PAF Walton and Madina Colony). (a) On-field data verification of electrical parameters of feeders through GPRS meters. (b) On-field data verification of energy parameters distributed to end users.
Energies 17 02344 g021
Figure 22. Electric feeder parameters (Tariq, R A Bazar, and Alnoor Town). (a) On-field data verification of electrical parameters of feeders through GPRS meters. (b) On-field data verification of energy parameters distributed to end users.
Figure 22. Electric feeder parameters (Tariq, R A Bazar, and Alnoor Town). (a) On-field data verification of electrical parameters of feeders through GPRS meters. (b) On-field data verification of energy parameters distributed to end users.
Energies 17 02344 g022
Figure 23. (a) Parameters for uplink in normal flow with TCP throughput. (b) Parameters for uplink in normal flow with UDP throughput.
Figure 23. (a) Parameters for uplink in normal flow with TCP throughput. (b) Parameters for uplink in normal flow with UDP throughput.
Energies 17 02344 g023
Figure 24. (a) QoS parameters for uplink with TCP throughput in normal flow. (b) QoS parameters for uplink with UDP throughput in normal flow.
Figure 24. (a) QoS parameters for uplink with TCP throughput in normal flow. (b) QoS parameters for uplink with UDP throughput in normal flow.
Energies 17 02344 g024aEnergies 17 02344 g024b
Figure 25. (a) Memory parameters for downlink in normal flow with TCP throughput. (b) QoS parameters for downlink in normal flow with UDP throughput.
Figure 25. (a) Memory parameters for downlink in normal flow with TCP throughput. (b) QoS parameters for downlink in normal flow with UDP throughput.
Energies 17 02344 g025aEnergies 17 02344 g025b
Figure 26. (a) QoS parameters for downlink with TCP throughput in normal flow. (b) QoS parameters for downlink with UDP throughput in normal flow.
Figure 26. (a) QoS parameters for downlink with TCP throughput in normal flow. (b) QoS parameters for downlink with UDP throughput in normal flow.
Energies 17 02344 g026
Figure 27. (a) Parameters for uplink in heavy flow with TCP throughput. (b) Parameters for uplink in heavy flow with UDP throughput.
Figure 27. (a) Parameters for uplink in heavy flow with TCP throughput. (b) Parameters for uplink in heavy flow with UDP throughput.
Energies 17 02344 g027
Figure 28. (a) These are the parameters for uplink with TCP throughput in heavy flow. (b) Parameters for uplink with UDP throughput in heavy flow.
Figure 28. (a) These are the parameters for uplink with TCP throughput in heavy flow. (b) Parameters for uplink with UDP throughput in heavy flow.
Energies 17 02344 g028aEnergies 17 02344 g028b
Figure 29. (a) QoS parameters for downlink with TCP throughput in heavy flow. (b) QoS parameters for downlink with UDP throughput in heavy flow.
Figure 29. (a) QoS parameters for downlink with TCP throughput in heavy flow. (b) QoS parameters for downlink with UDP throughput in heavy flow.
Energies 17 02344 g029aEnergies 17 02344 g029b
Figure 30. (a) QoS parameters for downlink with TCP throughput in heavy flow. (b) Memory parameters for downlink with UDP throughput in heavy flow.
Figure 30. (a) QoS parameters for downlink with TCP throughput in heavy flow. (b) Memory parameters for downlink with UDP throughput in heavy flow.
Energies 17 02344 g030
Figure 31. Comparison of proposed virtual data storage system with legacy data management system in terms of TCP.
Figure 31. Comparison of proposed virtual data storage system with legacy data management system in terms of TCP.
Energies 17 02344 g031
Figure 32. Comparison of proposed virtual data storage system with legacy data management system regarding UDP.
Figure 32. Comparison of proposed virtual data storage system with legacy data management system regarding UDP.
Energies 17 02344 g032
Table 1. Comparison between the traditional virtual data storage methods and the proposed replicated Gluster (RG) virtual data storage technique.
Features | GlusterFS | EOS | Ceph | Replicated Gluster (RG)
Design/Architecture | Network file system used as scale-out | File-distributed system | Completely file-distributed system | Completely distributed
Node/Volume Failure | May need corrections/maintenance | System memory/brick failure | No system failure | No failure
System Placement Techniques | No auto, all manual | Automatic | Automatic | Auto
System Fault Tolerance/Detection | Not detected | Detectable/fully connected | Detectable/fully connected | Auto
Storage Replication | Semi-replication | Original file saved | Original file saved | Replication
Large/Small Storage Files | System support is not efficient | Fully supported | Fully suitable | Supported
System Working Checkpointing | Not favorable | Not favorable | Not favorable | Favorable
Network Security | IP/port-type control system | Better/advanced CHAP | PAP/object replication | Best/even replica is established
Process Time | Slow | Good | Better | Best in its domain
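To make the replication rows of Table 1 concrete, the following is a minimal sketch (not the deployment used in this study) of how a replicated Gluster volume is typically created with the standard gluster CLI, wrapped in Python; the hostnames server1/server2, the volume name gv0, and the brick paths are hypothetical placeholders.

```python
import subprocess

# Hypothetical node names and brick paths; substitute the real cluster layout.
BRICKS = ["server1:/data/brick1/gv0", "server2:/data/brick1/gv0"]

def gluster(*args: str) -> None:
    """Invoke the gluster CLI and raise if the command fails."""
    subprocess.run(["gluster", *args], check=True)

# "replica 2" mirrors every file onto both bricks, which underlies the
# "no failure" behaviour of RG in Table 1: one node can drop out without data loss.
gluster("volume", "create", "gv0", "replica", "2", *BRICKS)
gluster("volume", "start", "gv0")

# Clients then mount the volume over FUSE, e.g.:
#   mount -t glusterfs server1:/gv0 /mnt/gluster
```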
Table 2. Memory parameters obtained from the I-Perf utility for TCP throughput in the uplink case for normal flow.
Normal Uplink Flow | Time | Bd | T (in 15 s)
TCP Throughput | 15 s | 476 Mbps | 855 MBytes
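As a rough consistency check (our arithmetic, not a value reported by the tool), the transferred volume should follow T ≈ Bd × t: here, 476 Mbit/s × 15 s ÷ 8 ≈ 892 MB, i.e., roughly 851 MiB, which agrees with the reported 855 MBytes.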
Table 3. QoS parameters obtained from the I-Perf utility for UDP throughput in the uplink case for normal flow.
Normal Uplink Flow | Time | Lost/Total Datagrams | J (in ms)
UDP Throughput | 10 s | 11/8550 (0.13%) | 0.042 ms
Table 4. TCP throughput in the downlink case for normal flow.
Normal Downlink Flow | Time | Bd | T (in 15 s)
TCP Throughput | 15 s | 578 Mbps | 1.35 GBytes
Table 5. UDP throughput in the downlink case for normal flow.
Normal Downlink Flow | Time | Lost/Total Datagrams | J (in ms)
UDP Throughput | 10 s | 0/8555 (0%) | 0.112 ms
Table 6. TCP throughput in the uplink case for heavy flow.
Heavy Uplink Flow | Time | Bd | T (in 15 s)
TCP Throughput | 15 s | 519 Mbps | 932 MBytes
Table 7. UDP throughput in the uplink case for heavy flow.
Heavy Uplink Flow | Time | Lost/Total Datagrams | J (in ms)
UDP Throughput | 10 s | 0/8549 (0.0%) | 0.103 ms
Table 8. QoS parameters obtained from the I-Perf utility for TCP throughput in the downlink case for heavy flow.
Heavy Downlink Flow | Time | Bd | T (in 15 s)
TCP Throughput | 15 s | 538 Mbps | 1.26 GBytes
Table 9. QoS parameters obtained from the I-Perf utility for UDP throughput in the downlink case for heavy flow.
Heavy Downlink Flow | Time | Lost/Total Datagrams | J (in ms)
UDP Throughput | 10 s | 0/8555 (0.0%) | 0.112 ms
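Tables 2–9 report the kind of statistics the I-Perf utility prints for TCP and UDP runs. As an illustrative sketch only (the exact invocation used in this work is not reproduced here), such measurements could be scripted against an iperf3 server running on the storage node; the hostname gluster-node1 and the 500 Mbit/s UDP offered load are assumed placeholders.

```python
import json
import subprocess

SERVER = "gluster-node1"  # hypothetical storage-node hostname

def iperf3(extra_args: list[str]) -> dict:
    """Run an iperf3 client against SERVER and return the parsed JSON report."""
    cmd = ["iperf3", "-c", SERVER, "-J", *extra_args]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    return json.loads(out)

# TCP run of 15 s, as in Tables 2, 4, 6, and 8; add "-R" to reverse the
# direction for the downlink cases.
tcp = iperf3(["-t", "15"])
print("TCP throughput (Mbps):", tcp["end"]["sum_sent"]["bits_per_second"] / 1e6)

# UDP run of 10 s; the report includes jitter and datagram loss, matching
# the J and Lost/Total Datagrams columns of Tables 3, 5, 7, and 9.
udp = iperf3(["-u", "-b", "500M", "-t", "10"])
print("jitter (ms):", udp["end"]["sum"]["jitter_ms"])
print("lost/total:", udp["end"]["sum"]["lost_packets"], "/", udp["end"]["sum"]["packets"])
```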
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
