Article

Edge–Cloud Collaboration-Based Plug and Play and Topology Identification for Microgrids: The Case of Jingshan Microgrid Project in Hubei, China

1 Electric Power Research Institute, State Grid Hubei Electric Power Co., Ltd., Wuhan 430077, China
2 College of New Energy, Harbin Institute of Technology at Weihai, Weihai 264200, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(17), 3699; https://doi.org/10.3390/electronics12173699
Submission received: 7 August 2023 / Revised: 24 August 2023 / Accepted: 29 August 2023 / Published: 1 September 2023

Abstract

The rapid advancement of renewable energy technologies necessitates innovative solutions for the efficient deployment and management of microgrid systems. This paper presents a detailed study on the implementation of edge–cloud collaboration-based plug and play (PnP) and topology identification for microgrids, focusing on the Jingshan AC/DC Microgrid Cluster System (JS-MCS) in Hubei, China. Firstly, the paper elucidates the structure of JS-MCS’s physical system and its accompanying information system, comprising the cloud platform, communication network, edge unit, and terminal unit. The following sections introduce the methods of PnP, describing the overall procedure and the specific implementations between cloud and edge and between cloud and terminal. A novel approach to topology identification based on correlation analysis is then explored. Finally, an example analysis of the studied system, JS-MCS, provides practical insights into the application of the PnP and topology identification methods.

1. Introduction

With the rapid advancements in renewable energy technologies and the ongoing global energy transition, microgrids have emerged as a promising small-scale, autonomous electricity system paradigm, attracting escalating attention in both academic and industrial circles [1,2]. The salient feature of microgrids lies in their capability to seamlessly integrate diverse energy resources such as solar, wind, and energy storage systems, ensuring a reliable, efficient, and eco-friendly power supply within localized regions [3]. Nevertheless, the deployment and operation of microgrids are confronted with various challenges. Primarily, as an emerging technology, the installation and configuration of microgrids often require intricate technical expertise and specialized equipment, making them less accessible to the general populace [4]. Additionally, the topology structure of microgrids may vary across different scenarios, emphasizing the importance of swiftly and accurately discerning the underlying topology for effective system functioning [5].
In the realm of microgrids, the concept of plug and play (PnP) primarily refers to the automated process of identification, connection, and configuration of diverse energy resources and energy management components. The application of PnP technology in microgrids is designed to streamline the system’s deployment and operations, making it more user-friendly for installation and maintenance [6].
In accordance with the IEC 61850 communication model and protocol mapping, reference [7] formulates a novel distributed power generation PnP information interaction mechanism. This mechanism is built on a foundation of modeling and self-descriptive configuration for terminal-layer devices, effectively mitigating heterogeneity issues between the master station and terminal models through the extension of IEC 61850 and IEC 61970/IEC 61968 topology models. Accomplished through a module for heterogeneous model mapping and data conversion, the interaction mechanism facilitates seamless data exchange between terminal IEC 61850 information and master station IEC 61968 messages, thus establishing a robust information exchange conduit between the master station layer and terminal layer.
Reference [8] presents a comprehensive PnP implementation approach for substation equipment based on the IEC 61850 standard. It includes a novel model design methodology, also grounded in the IEC 61850 standard, which enables standardized access to terminal devices and the configuration of communication point tables. Moreover, by employing this design method, a PnP system and control approach featuring multiple-instance dynamic tracking is devised.
Reference [9] proposes an innovative PnP and topology recognition approach for substation intelligent terminals. Embracing the hierarchical system of distribution in the Internet of Things (IoT), namely the “cloud–communication–edge–terminal” framework, the architectural scenario for substation intelligent terminals and intelligent electric meters, among other terminal devices, is meticulously constructed.
Based on the IEC 61968/61970 and IEC 61850 standards, reference [10] analyzes the specific differences and fusion modeling methodologies of information models. This analysis culminates in the design of a module for heterogeneous information model mapping and data conversion in distribution networks, consequently realizing an innovative implementation architecture for PnP information flow technology related to distribution equipment.
Reference [11] advances a cutting-edge three-layer architecture consisting of cloud, edge, and end to conceptualize PnP stochastic power sources. Within this structure, cloud operations facilitate PnP for operational and maintenance functions, while edge-level control enables ordered control for PnP. In turn, the end interconnections allow for seamless PnP access to the distribution network, aligning with diverse application scenarios and engineering power matching requisites.
Microgrid topology identification refers to the analysis and detection of the network structure within a microgrid system to ascertain the interconnection methods and relationships among its various components. The primary aim of topology identification is to comprehend the interconnected topology of internal components within the microgrid. This includes the physical connections between power generation units, energy storage devices, loads, and the corresponding electrical connections like line connections and switch states [12]. Topology identification is vital for the operation and management of microgrids. Accurate discernment of the microgrid’s topology enables maintenance personnel to grasp the system’s operational status and power flow, facilitating better system optimization and fault diagnosis. Moreover, topology identification contributes to the stability and security of the microgrid by helping to avoid electrical connectivity errors and potential circuit failures [13].
In reference [14], a topology identification method for low-voltage distribution networks based on data association analysis is introduced. The method utilizes the Tanimoto similarity coefficient to calculate the correlation and non-correlation between distribution transformers, branch boxes, meter boxes, and smart energy meters within each group. This approach achieves topology identification for the low-voltage distribution network, with the identified topology being verified using rules that consider factors such as power outage and energized status, outage duration, geographical location, and supply radius within the same distribution transformer area.
Reference [15] estimates the distribution network circuit topology using historical voltage and power measurement data recorded by smart meters and distributed energy sensors. The method classifies the load’s connection relationship as either parallel or radial based on voltage magnitude relationships.
In reference [16], a dynamic solution for online identification and monitoring of smart grid topology is proposed. This method combines the concepts of compressive sensing and graph theory, modeling the smart grid as a large interconnected graph. It then reformulates topology identification as a sparse recovery problem using the direct current power flow model under probabilistic optimal power flow. An improved sparse recovery algorithm reconstructs the topology, requiring only a minimal number of observations from bus parameters without prior knowledge of the network topology.
Reference [17] presents a method to estimate the distribution network topology using energy metering data, including active power, reactive power, and voltage. The voltage sensitivity matrix of monitored nodes is estimated and utilized to calculate the distance matrix between nodes, with graph theory then applied to estimate the network topology.
Reference [18] delves into the intersection of edge computing and IoT for smart grid applications. This work aligns well with our exploration of edge–cloud collaboration and highlights the potential of this combination in the context of the future energy landscape. Reference [19] introduces a novel optimization model using edge computing specifically tailored for microgrids.
These studies have presented a valuable collection of methods and theories in the fields of plug and play and microgrid topology identification. These methods include standards-based plug and play implementations, multi-tiered topology identification approaches, information model mapping, and data conversion, among others. These innovative methods offer new perspectives for addressing the complexities within microgrids, thus contributing to the further advancement of microgrid technology. The comprehensive nature of these studies is also commendable, as they encompass various aspects of microgrid research, spanning from hardware to software, communication networks to data analytics, providing a holistic view for tackling challenges within microgrids.
Although favorable results have been achieved using the above methods, some limitations still exist.
Firstly, in recent years, the rapid expansion of microgrid deployments has highlighted the need for robust and efficient plug and play methods to streamline the integration of diverse energy resources and enhance grid flexibility. While the IEC 61850 standard has served as a valuable framework for communication and interoperability in power systems, it has become evident that this standard faces certain challenges when applied to the complex and dynamic environment of microgrids.
Secondly, the PnP techniques in microgrid systems are primarily aligned with the IEC 61850 standard. This adherence to a specific international standard presents a limitation, as few studies are available that explore or propose PnP methods based on alternative standards, such as the “Technical Guidelines for Power Distribution IoT”, published by the State Grid Corporation of China. This limitation confines the scope of our approach and may restrict the applicability of the proposed solutions within different regulatory and technological frameworks, especially in the context of China’s domestic energy infrastructure.
Thirdly, the existing topology identification methods in microgrid systems often suffer from efficiency problems due to complex computational processes. To illustrate this limitation, consider an area comprising 200 distribution transformers, with each transformer connected to 3 branch boxes, 9 meter boxes, and 90 smart energy meters. In this scenario, the totals amount to 600 branch boxes, 1800 meter boxes, and 18,000 smart energy meters. Analyzing the connections among these devices using big data requires substantial computational resources, rendering the implementation of the algorithm both challenging and inefficient.
Thus, this paper proposes edge–cloud collaboration-based plug and play and topology identification methods for microgrids. Specifically, we take the Jingshan AC/DC Microgrid Cluster System (JS-MCS) in Hubei, China, as an example for explanation. This paper aims to address these challenges by proposing an innovative edge–cloud collaboration-based PnP approach that not only extends the capabilities of the IEC 61850 standard but also overcomes specific limitations that arise in microgrid scenarios. Through a detailed analysis of these challenges and a comprehensive exploration of our proposed methodology, we aim to provide a clearer understanding of the necessity for alternative PnP solutions in the context of microgrid projects, offering significant advantages in terms of scalability, adaptability, and real-time topology identification.
The remainder of this paper is organized as follows: Section 2 delves into the overall structure of the system, detailing both the physical architecture of JS-MCS and the information system encompassing the cloud platform, communication network, edge unit, and terminal unit. Section 3 focuses on the methods of PnP, providing a comprehensive description, insights into the logical node of JS-MCS, and the implementation of PnP between cloud, edge, and terminal. In Section 4, the paper explores the topology identification of microgrids based on correlation analysis. Section 5 presents an example analysis of a studied system. Finally, Section 6 concludes the paper. This structure provides a systematic exploration of edge–cloud collaboration, PnP methodologies, and topology identification within the specific context of JS-MCS.

2. Structure of System

2.1. Structure of JS-MCS Physical System

The physical structure of JS-MCS is shown in Figure 1. JS-MCS is a highly complex and meticulously designed energy network, whose characteristics and configurations are poised to make innovative impacts in the field of power systems. This system is composed of four AC microgrids, which are interconnected to a single AC bus through step-up transformers and further connected to the distribution network.
In this AC/DC microgrid cluster system, flexible low-voltage DC interconnection devices are installed on the low-voltage side of the four AC microgrids. This implies that each AC microgrid is accompanied by a DC microgrid, and these four DC microgrids are interconnected through a DC bus. This design enables the interconnection of microgrids on both the AC and DC sides, thereby enhancing system stability and flexibility.
In addition, the JS-MCS also includes an independent DC microgrid, equipped with a power electronic transformer to replace the existing AC transformer. Moreover, this DC microgrid is integrated with the other four AC microgrids onto the same AC bus, and its DC side also converges with the four DC microgrids onto the same DC bus.
Furthermore, each microgrid internally possesses photovoltaics, energy storage, V2G, AC loads, and DC loads. Each microgrid is equipped with an independent microgrid controller, enabling each microgrid to coordinate with the cloud master station for optimized power management and dispatching. The system is also equipped with section switches on the AC bus, aimed at preventing the impact of a single microgrid failure on other microgrids. Circuit breakers are installed on both the AC and DC sides within the microgrids to similarly prevent the impact of single microgrid failures on others.
In summary, the JS-MCS is a complex and precise model of power systems. Its unique design and high level of interconnectivity enable it to effectively address various issues in microgrid systems, including stability, flexibility, and fault isolation. It serves as an important reference model for the development of modern power systems [20,21,22].
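The cluster layout just described can be summarized in a small declarative structure. The sketch below is purely illustrative: the MicrogridSpec class, its field names, and the microgrid labels are our own shorthand for the description above, not part of the project’s actual control software.

```python
# Illustrative summary of the JS-MCS cluster layout described in Section 2.1.
# The MicrogridSpec class, field names, and microgrid labels are hypothetical shorthand.
from dataclasses import dataclass, field
from typing import List

@dataclass
class MicrogridSpec:
    name: str
    kind: str                       # "AC" or "DC"
    resources: List[str] = field(default_factory=lambda: [
        "photovoltaics", "energy_storage", "V2G", "AC_loads", "DC_loads"])
    has_controller: bool = True     # each microgrid has an independent controller

# Four AC microgrids on a common AC bus, each paired with a DC microgrid through a
# flexible low-voltage DC interconnection device, plus one independent DC microgrid
# fed through a power electronic transformer.
js_mcs = {
    "ac_bus": [MicrogridSpec(f"MG{i}", "AC") for i in range(1, 5)],
    "dc_bus": [MicrogridSpec(f"MG{i}-DC", "DC") for i in range(1, 5)]
              + [MicrogridSpec("MG5-independent", "DC")],
    "protection": ["AC bus section switches", "AC/DC circuit breakers in each microgrid"],
}

print(len(js_mcs["ac_bus"]), "AC microgrids;", len(js_mcs["dc_bus"]), "DC microgrids")
```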

2.2. Information System of JS-MCS

The information system of JS-MCS is shown in Figure 2. This information system consists of four levels: cloud, network, edge, and end. It adopts a unified information model to achieve data exchange and information sharing, and it is built on a network security protection system to ensure security at every stage and throughout the entire process:
(a)
The cloud platform consists of an IoT management platform, an enterprise middle platform, and application services. It enables unified access to terminals, rapid response to demands, elastic expansion of applications, and dynamic allocation of resources;
(b)
The communication network includes a remote communication network and a local communication network, enabling information communication among the cloud, edge, and end;
(c)
Edge computing expands the scope and capabilities of the cloud platform to collect and manage data. Its functionality is carried by edge devices with edge computing capabilities, including smart integrated terminals, station terminals, and feeder terminals in distribution areas;
(d)
End devices include device monitoring terminals, environmental monitoring terminals, operation collection terminals, and so on. They are responsible for providing basic data such as power grid operating status, device status, environmental status, and other auxiliary information to the edge or cloud. These end devices serve as terminals for executing decision commands or on-site control.
In the cloud platform, business applications correspond to the application layer of the power IoT. The IoT management platform, technical middle platform, and enterprise middle platform correspond to the platform layer. The remote communication network corresponds to the network layer. The edge devices, local communication network, and end devices correspond to the perception layer.
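The four-level hierarchy and its mapping onto the power IoT layers can be summarized compactly. The following is a minimal sketch in which the dictionary keys and component names simply restate the mapping above and carry no normative meaning.

```python
# Minimal restatement of the cloud/network/edge/end hierarchy and its mapping onto
# the power IoT layers described above; keys and entries are illustrative only.
POWER_IOT_LAYERS = {
    "application layer": ["business applications"],
    "platform layer": ["IoT management platform", "technical middle platform",
                       "enterprise middle platform"],
    "network layer": ["remote communication network"],
    "perception layer": ["edge devices", "local communication network", "end devices"],
}

for layer, components in POWER_IOT_LAYERS.items():
    print(f"{layer}: {', '.join(components)}")
```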

2.2.1. Cloud Platform

The cloud platform consists of the application layer and the platform layer, as shown in Figure 3. The platform layer should include cloud platform basic services, an IoT management platform, and an enterprise middle platform. The enterprise middle platforms related to the power IoT should include a technical middle platform, data middle platform, power grid resource business middle platform, and so on. The application layer should be built upon the data and basic component services provided by the platform layer, enabling support for internal and external business. The cloud platform should adhere to the principles of openness and sharing. It should unify access to data from edge and end devices through the communication network and utilize cloud–edge collaboration mechanisms to achieve unified management of large-scale IoT infrastructure and efficient processing of massive sensory information. The cloud platform should have functions such as massive terminal coupling, PnP devices, diverse data integration, flexible system deployment, elastic resource telescoping, application data decoupling, rapid application deployment, and cloud–edge collaboration. These functions should meet the requirements of rapid response to distribution network business needs, elastic application scalability, dynamic resource allocation, intensive system operation and maintenance, and so on.
The basic services of the cloud platform should provide IT basic support for the IoT management platform and enterprise middle platform. The basic services of the cloud platform include, but are not limited to, big data services, artificial intelligence services, cloud middleware services, microservice engine, and application integration services:
(a)
Big data services should provide unified data storage and computing resources to meet the requirements of full lifecycle management of massive data in the power distribution network, supporting online expansion of storage capacity;
(b)
AI services should support an open algorithm repository, enabling centralized management, independent maintenance, and upgrades of algorithms;
(c)
Cloud middleware services should support distributed messaging and caching to meet the high-reliability message transmission and exchange requirements between applications in the power distribution network;
(d)
The microservice engine should support microservice development, deployment, governance, operation, and maintenance;
(e)
Application integration services should support service publication approval, lifecycle management, and other related functionalities.

2.2.2. Communication Network

The communication network of the power distribution IoT consists of a remote communication network and a local communication network. The remote communication network should support direct information interaction between the edge and the cloud or satellite, while the local communication network should support edge-to-edge, edge-to-end, and end-to-end information exchange. The communication network should meet the following requirements:
(a)
It should adopt advanced, mature, and suitable communication technologies to meet the business requirements of the power distribution IoT while maintaining a moderate level of advancement;
(b)
It should adhere to a unified technical system and interface standards to reduce signal conversion and improve the operational efficiency of the communication network;
(c)
It should implement the principle of differentiation, based on the actual needs and the distribution of users. It should fully consider the development level of the power distribution network and the differences in business requirements in different regions. It should also make effective use of existing communication network resources, take into account the needs of both short-term and long-term development, and make rational choices in communication network technology;
(d)
The device configuration should be based on the short-term requirements while considering the needs for future business development. The selected devices should have good compatibility, scalability, and the ability for online upgrades to improve device utilization efficiency;
(e)
The communication devices used should have inspection reports from relevant power system inspection institutions. If there are special requirements, they should also possess a telecommunications equipment network access license. Devices that do not meet the above requirements must not be used in the project.

2.2.3. Edge Unit

Edge devices are the carriers for implementing edge computing functions in the power distribution IoT. With “cloud–edge collaboration” and “edge intelligence” as their core features, they serve as open platforms for data aggregation, computation, and application integration. Edge devices collaborate with end devices to exchange data. They have the capability for local data analysis and on-site processing, enabling comprehensive data collection and perception in power distribution applications. Edge devices collaborate with cloud platforms to exchange data, reducing the communication network and cloud platform’s burden in terms of information transmission and processing. They should also support edge-to-edge interactions. The design concept for edge devices should be based on “hardware platformization” and “software APP-ization”, enabling virtual deployment and mutual isolation of functionalities. Please refer to Figure 4 for the functional architecture of edge devices.

2.2.4. Terminal Unit

End devices should be deployed to cover distribution network lines, distribution substations, end users, and so on. They monitor the operating status of the power distribution network, equipment status, environmental status, and other auxiliary information. They are capable of sensing, judging, executing, and uploading data. End devices should have unique device serial numbers and support self-registration, self-description, and PnP services. They should utilize standardized interfaces and structures, require minimal maintenance, and be easy to install. Ideally, they should also have the capability for self-powering and support installation while being energized.
The communication unit of the end device should be designed as an independent module, and the communication module should support hot-swapping. The performance of the communication module in the end device should meet the following technical requirements (an illustrative check of these limits in code is given after the list):
(a)
Power supply voltage: module-level DC 12 V ± 5%; device-level rated voltage 220 V ± 20%, with a rated frequency of 50 Hz and an allowable deviation of −6% to +5%;
(b)
Dynamic power: Active power less than 3 W, apparent power less than 5 VA;
(c)
Static power: Active power less than 2 W, apparent power less than 4 VA;
(d)
The device-level power supply should be capable of powering the communication unit, with a minimum output of 6 W. In the event of a power failure, the backup power supply should support the end device in completing at least three successful communications and maintaining operational capability for at least 1 min.
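As an illustrative check of the limits in items (a)–(d), the snippet below encodes them as a simple validation routine. The check_comm_module function and the sample record format are hypothetical conveniences; only the numeric limits come from the requirements above.

```python
# Illustrative check of the communication-module requirements in items (a)-(d).
# The function and record format are hypothetical; limits follow the text above.
def check_comm_module(sample: dict) -> list:
    """Return a list of violated requirements for one measured sample."""
    issues = []
    if not 11.4 <= sample["module_dc_voltage"] <= 12.6:            # 12 V ± 5%
        issues.append("module supply voltage outside 12 V ± 5%")
    if not 176.0 <= sample["device_ac_voltage"] <= 264.0:          # 220 V ± 20%
        issues.append("device supply voltage outside 220 V ± 20%")
    if not 47.0 <= sample["frequency_hz"] <= 52.5:                 # 50 Hz, -6% to +5%
        issues.append("frequency outside -6%/+5% of 50 Hz")
    if sample["dynamic_active_w"] >= 3 or sample["dynamic_apparent_va"] >= 5:
        issues.append("dynamic power exceeds 3 W / 5 VA")
    if sample["static_active_w"] >= 2 or sample["static_apparent_va"] >= 4:
        issues.append("static power exceeds 2 W / 4 VA")
    if sample["supply_capacity_w"] < 6:
        issues.append("device-level supply below 6 W")
    return issues

print(check_comm_module({
    "module_dc_voltage": 12.1, "device_ac_voltage": 225.0, "frequency_hz": 50.0,
    "dynamic_active_w": 2.4, "dynamic_apparent_va": 4.1,
    "static_active_w": 1.5, "static_apparent_va": 3.2, "supply_capacity_w": 8.0,
}))   # expected: [] (all limits satisfied)
```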

3. Method of PnP

3.1. Overall Description of PnP

In a microgrid system, PnP is a crucial feature that encompasses various aspects, including device installation, configuration, and operation, particularly at the device and system levels. The PnP functionality is divided into two key components: PnP from edge devices to the cloud platform, and PnP from end devices to the edge devices. Both of these components play a vital role in enabling the intelligence and efficient operation of the microgrid, collectively establishing the foundation for its smart and effective functioning.
Firstly, PnP from edge devices to the cloud platform primarily involves the integration of edge computing and cloud computing. In this link, edge devices can connect to the cloud platform without the need for complex configurations or settings, with simple PnP enabling the edge devices to establish a seamless connection with the cloud platform. This allows microgrid devices to come online with zero configuration, significantly reducing the configuration workload at the construction site while improving data collection accuracy and efficiency.
Secondly, PnP from end devices to edge devices primarily involves device-level interconnectivity. Through simple device interface operations, end devices can quickly establish connections with edge devices and enable real-time data exchange. Similarly, this reduces the workload of on-site configuration, improves device operational efficiency, and ensures data accuracy.
The aforementioned PnP links are based on the prerequisites of communication, protocols, and model standardization. These three elements form the fundamental requirements for enabling device PnP functionality. Together, they ensure compatibility and stability in device interaction and communication processes, providing support for the overall operation of the microgrid. The architectural diagram for the PnP application in microgrids is shown in Figure 5.

3.2. Logical Node of JS-MCS

Distributed energy resources (DERs) may be put into operation or shut down frequently during microgrid operation. Hence, their information interfaces need a PnP function to realize fast identification and configuration by the cloud platform when the DER integrates and to improve the DER integration efficiency. The information model is the important foundation for PnP integration of microgrid equipment. The equipment’s logical nodes (LNs) predefine the groupings of IEC 61850 data objects (DOs) that serve specific equipment functions such as automatic control, monitoring, protection, and measurement.
A particular physical device of a DER could be extracted to the logical devices (LDs) consisting of the relevant LNs to provide specific functions. The LNs of the DERs in JS-MCS typically include:
(1)
Photovoltaic
The LN in IEC 61850 specifically for photovoltaic (solar panel) systems is DPV. This logical node is used to model the functions and behavior of solar photovoltaic power generation systems. The DPV logical node would likely contain data objects representing the status and control of the solar photovoltaic system, measurements of power and energy, and possibly information about conditions like solar irradiance and temperature that might affect the system’s performance. The following general logical nodes in IEC 61850-7-4 might be used with solar photovoltaic systems:
MMXU: Measurement unit. This logical node might include measurements of current, voltage, power, and energy for the photovoltaic system.
GGIO: General purpose input/output. This logical node might be used for a variety of status and control information about the photovoltaic system.
CSWI: Controlled switch. This logical node could be used to model the controls for disconnect switches or other switchgear associated with the photovoltaic system.
(2)
Battery energy storage
IEC 61850-7-420 is an extension of the IEC 61850 standard, specifically providing logical nodes to cater to the battery energy storage system. This standard defines a set of logical nodes to facilitate the modeling, control, measurement, and management of battery systems in the context of DERs. Here are the logical nodes in IEC 61850-7-420 specifically associated with the battery energy storage system:
BAT: This logical node represents a battery system. It models the functionality of battery storage systems, including data attributes related to charge and discharge power, state of charge (SOC), state of health (SOH), temperature, and other operational parameters. It also includes control commands for starting and stopping the charging or discharging process.
BCR: The battery charger/rectifier logical node models the functionality of the battery charger or rectifier, including data related to active and reactive power, current, voltage, and operational status. Control commands include starting and stopping the charging process.
BTR: This logical node represents a battery trip unit. It is used to model the functionality that trips the battery connection based on conditions such as over-temperature, under-voltage, over-voltage, and over-current.
BSW: BSW represents a battery switch. It models the functionality of the switch that connects and disconnects the battery from the DC bus, including its operational status and control commands.
INV: INV represents an inverter. This logical node models the functionality of an inverter, which is commonly used in battery storage systems to convert DC power to AC power for feeding into the grid. This includes data related to active and reactive power, current, voltage, frequency, and operational status. Control commands include starting and stopping the inverter.
(3)
Transformer
Transformers can be represented by a variety of logical nodes in the IEC 61850 standards. In IEC 61850-7-4, which covers general power system equipment, several logical nodes are typically associated with transformers:
XCBR: Circuit breaker. This logical node is often used with transformers to control the circuit breaker that connects the transformer to the rest of the power system.
XSWI: Disconnect switch. This logical node may be used with transformers to control disconnect switches that isolate the transformer for maintenance or during a fault condition.
TCTR: Transformer current transformer. This logical node is used to represent the current transformer that is used to measure the current in the transformer.
TVTR: Transformer voltage transformer. This logical node is used to represent the voltage transformer that is used to measure the voltage of the transformer.
MMXU: Measurement unit. This logical node contains measured data from the transformer, including voltage, current, power, and energy measurements.
ZCAP: Capacitor bank. In case the transformer is equipped with shunt capacitors for voltage regulation, this logical node can represent them.
PTOC: Transformer over-current protection. This logical node provides the over-current protection function for the transformer.
(4)
AC and DC loads
In IEC 61850-7-4 and IEC 61850-7-420, loads (both AC and DC) are not typically represented as separate logical nodes. Instead, they would likely be represented through other logical nodes, primarily measurement and control related, depending on the functionality of the load in the power system. Here are some logical nodes that are associated with AC and DC loads:
MMXU: Measurement unit. This logical node contains measured data related to the load, such as voltage, current, power, and energy measurements.
GGIO: General-purpose input/output. This logical node can be used for a variety of status and control information about the AC and DC loads.
CSWI: Controlled switch. This logical node could be used to control a switch that can connect or disconnect the AC and DC loads from the power system.
CILO: Interlocking function. If the AC and DC loads have interlocks to ensure safe operation (for example, to prevent them from being switched on when certain conditions are not met), these might be represented with this logical node.
(5)
Rectifier or inverter
In the IEC 61850-7-420 and IEC 61850-7-4 standards, there is no specific logical node for a rectifier or inverter. However, these devices’ functionalities could be modeled using a combination of different logical nodes, depending on their specific use and the systems they are connected to.
For instance, in a solar power generation system, the inverter converting DC power to AC might be represented as part of the DPV logical node in IEC 61850-7-420. If the inverter is part of a battery energy storage system, it might be represented as part of the BAT logical node. In a more generic context in IEC 61850-7-4, the control, status, and measurement functions associated with an inverter or rectifier might be represented by a combination of logical nodes like MMXU, GGIO, and CSWI, which are similar to those of the AC and DC loads.
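The logical-device groupings discussed in this subsection can be sketched as simple data structures. The classes below are an illustrative, non-normative arrangement of LN names from IEC 61850-7-4/-7-420; they are not an existing IEC 61850 library API, and the example data objects are placeholders.

```python
# Illustrative, non-normative grouping of IEC 61850 logical nodes (LNs) into
# logical devices (LDs) for the DERs listed above. Class and attribute names
# are our own; the example data objects are placeholders.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class LogicalNode:
    ln_class: str                           # e.g., "DPV", "MMXU", "BAT"
    data_objects: Dict[str, object] = field(default_factory=dict)

@dataclass
class LogicalDevice:
    name: str
    nodes: List[LogicalNode] = field(default_factory=list)

# Example groupings following the listing in this subsection
pv_ld = LogicalDevice("PV1", [
    LogicalNode("DPV"),
    LogicalNode("MMXU", {"TotW": 0.0, "TotVAr": 0.0}),
    LogicalNode("GGIO"),
    LogicalNode("CSWI", {"Pos": "open"}),
])
bess_ld = LogicalDevice("BESS1", [
    LogicalNode("BAT", {"SOC": 0.5, "SOH": 0.98}),
    LogicalNode("BCR"),
    LogicalNode("BTR"),
    LogicalNode("BSW", {"Pos": "closed"}),
    LogicalNode("INV", {"TotW": 0.0, "Hz": 50.0}),
])

for ld in (pv_ld, bess_ld):
    print(ld.name, "->", [ln.ln_class for ln in ld.nodes])
```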

3.3. PnP between Cloud and Edge

The process of data exchange facilitating the PnP between cloud and edge platforms, as depicted in Figure 6, involves intricate and integral interactions that are crucial for optimal functionality.
Initially, the edge unit is responsible for procuring the operational measurements and status information from the cloud platform. These acquired values are vital for maintaining a real-time understanding of the cloud platform’s status and can provide valuable insights into its functionality.
Once the edge unit has obtained these values, it proceeds to update the corresponding data objects in its information model. These updates provide an accurate and up-to-date representation of the cloud platform’s operations. The accuracy of the data is contingent upon the dynamic updating mechanism, a crucial feature enabling the system to adapt to changes in the cloud platform’s operational parameters.
Subsequently, the edge unit triggers a report based on the specifications delineated in the report–control–block options. This report, containing critical status and operation data from the cloud platform, is then transmitted to the cloud platform itself. The cyclical nature of this transmission process ensures a consistent exchange of information, contributing to the robustness of the entire system.
Simultaneously, the edge unit is tasked with receiving control commands and settings alterations from the cloud platform. This dual-directional communication establishes a firm groundwork for cooperative and coordinated operations between the two platforms.
Upon receipt of these controls and settings, the edge unit is required to make reasonable decisions to act upon. The rationale and efficacy of these decisions are paramount to the system’s effective operation. By leveraging advanced analytics and decision-making algorithms, the edge unit is capable of making determinations that align with the operational objectives of the cloud platform, ensuring smooth, efficient, and harmonious PnP operations.
The basic process of cloud-to-edge PnP involves a series of meticulously coordinated steps. The following provides a detailed explanation of the entire process (a simplified code sketch of this handshake is given after the list):
(1)
Cloud platform preset configuration process: On the cloud platform side, the initial information model and required authentication information for the edge devices are preset. This includes predefining the initial information model and encryption certificates for the edge devices. This preset phase ensures standardization and security of the edge devices, providing a foundation for their interaction with the cloud platform in the subsequent processes.
(2)
Product type creation process: The cloud platform creates product types based on the physical model file of the edge devices. This step involves predefining the device’s model, allowing the cloud platform to recognize and handle various types of edge devices, thereby enabling seamless management of the devices.
(3)
Secondary model documentation process for edge devices: The cloud platform first completes the documentation of the secondary model of the edge devices manually and establishes the association between the primary and secondary devices. When not restricted by access permissions for edge devices, automatic documentation based on the information uploaded by the edge devices is supported. Documentation of the secondary models by scanning QR codes, or batch documentation of the secondary models of edge devices by importing files, is also supported.
(4)
Identity information upload process after edge device startup: After the edge device powers on and starts up, it uploads its identity information to the cloud platform through an IoT APP. This step represents the identity information upload phase of the edge device, playing a crucial role in the cloud platform’s permission management and device status tracking.
(5)
Cloud platform identity information receiving and verification process: Upon receiving the identity information from the edge device, the cloud platform performs identity comparison and matching based on the predocumented information. Once a successful match is found, access permissions are granted to the edge device. This is the identity information verification phase, where the crucial aspect is ensuring system security by only allowing verified devices to access the cloud platform.
(6)
The process of establishing a connection between the edge device and the cloud platform: Once the edge device establishes a successful connection with the cloud platform, it should send an online message to the platform. This step involves establishing a communication connection between the edge device and the cloud platform, and sending the online message indicates that the device has successfully connected to the cloud platform.
(7)
The process of the cloud platform receiving the device’s online message and refreshing its status: Upon receiving the online message from the edge device, the cloud platform should promptly refresh the operational status of the device. This is the phase of device status update, where the cloud platform can reflect the real-time operational status of the device for subsequent management and control purposes.
(8)
The data parsing process of the cloud platform: The cloud platform will parse the data based on the model defined in the product type. This is the data parsing phase, where the cloud platform can understand and process the data uploaded by the device, providing further control and management of the device.
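The following sketch condenses steps (4)–(8) above into a minimal handshake. The preset registry, certificate field, message layout, and function names are illustrative assumptions rather than the actual cloud platform interfaces.

```python
# Simplified sketch of the cloud-edge PnP handshake in steps (4)-(8) above.
# The preset registry, certificate field, and message layout are illustrative only.
PRESET_EDGE_DEVICES = {"EDGE-001": {"cert": "abc123", "product_type": "smart-fusion-terminal"}}
device_status = {}

def edge_startup(device_id: str, cert: str) -> dict:
    """Steps (4)-(5): the edge device uploads its identity; the cloud verifies it against presets."""
    preset = PRESET_EDGE_DEVICES.get(device_id)
    if preset is None or preset["cert"] != cert:
        return {"granted": False}
    return {"granted": True, "product_type": preset["product_type"]}

def edge_online(device_id: str) -> None:
    """Steps (6)-(7): the edge device sends an online message; the cloud refreshes its status."""
    device_status[device_id] = "online"

def cloud_parse(device_id: str, payload: dict) -> dict:
    """Step (8): the cloud parses data according to the product-type model (pass-through here)."""
    return {"device": device_id, "parsed": payload}

grant = edge_startup("EDGE-001", "abc123")
if grant["granted"]:
    edge_online("EDGE-001")
    print(device_status, cloud_parse("EDGE-001", {"MMXU.TotW": 6.5}))
```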
In Figure 6, “CAN” stands for “controller area network”, a vital communication protocol within microgrids. This protocol facilitates real-time communication and control coordination in power devices. Within microgrids, “CAN” serves these key functions:
(a)
Efficient communication: “CAN” enables seamless communication among microgrid control units, promoting real-time information exchange for coordinated operation;
(b)
Distributed resource management: “CAN” aids in managing distributed energy resources like solar and wind generation [23], enhancing efficiency and stability;
(c)
Intelligent energy management: with data communication capabilities, “CAN” supports intelligent decisions for energy allocation, scheduling, and optimization in varying scenarios.

3.4. PnP between Cloud and Terminal

The mechanism underlying data exchange between cloud and terminal in a microgrid context, as outlined in Figure 7, underscores a complex interplay of various parameters that are integral to a microgrid’s operation. The terminal unit within the microgrid system encompasses numerous information equipment models, each playing a significant role in the microgrid’s data exchange process. These models enable the constant and efficient flow of information within the microgrid, thereby facilitating seamless operations and communications.
An illustrative example of such an information equipment model within a terminal unit is the battery energy storage. $P_{BES}$ and $Q_{BES}$ symbolize the active and reactive power, respectively. These parameters are modeled by the measured values MMXU.TotW and MMXU.TotVAr, correspondingly. These models serve as crucial markers for the system’s operational status, allowing the tracking of power contribution and responsiveness in real time, thereby ensuring enhanced system performance and stability. The breaker position, a critical determinant of power transmission and distribution within the grid, is modeled via the status information DO-XCBR.Pos. This model aids in the accurate monitoring of the breaker’s position, hence ensuring secure and efficient power flow within the system. The parameters $V_{ref}$, $P_{BES}^{ref}$, and $Q_{BES}^{ref}$ represent the voltage reference, active power reference, and reactive power reference of the battery energy storage, respectively. These parameters are modeled by the control values DRCC.OutVSet, DRCC.OutWSet, and DRCC.OutVarSet, correspondingly. These models ensure that the voltage, active, and reactive power of the battery energy storage stay within the desired range, thereby promoting overall system stability and operational efficiency.
In a word, these intricate data exchange processes between the cloud and the terminal contribute significantly to the PnP functionality within microgrids, underlining the paramount importance of accurate modeling and control in the successful operation of these complex systems.
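The pairings between the battery energy storage quantities and their data objects can be written down directly. The dictionary below restates the mapping given above, while the to_report helper and its layout are illustrative only.

```python
# Illustrative mapping of the battery energy storage quantities discussed above
# to their data objects; the dictionary layout and the to_report helper are our own.
BES_DATA_OBJECT_MAP = {
    # measured values
    "P_BES": "MMXU.TotW",             # active power
    "Q_BES": "MMXU.TotVAr",           # reactive power
    # status information
    "breaker_position": "XCBR.Pos",
    # control references
    "V_ref": "DRCC.OutVSet",
    "P_BES_ref": "DRCC.OutWSet",
    "Q_BES_ref": "DRCC.OutVarSet",
}

def to_report(measurements: dict) -> dict:
    """Translate engineering names into data-object keys for a cloud-bound report."""
    return {BES_DATA_OBJECT_MAP[k]: v for k, v in measurements.items()}

print(to_report({"P_BES": 120.0, "Q_BES": 15.0, "breaker_position": "closed"}))
```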
The basic process of cloud-to-terminal PnP includes the following steps (a brief code sketch of the registration paths in step (4) is given after the list):
(1)
The cloud platform should initially preconfigure a device model associated with the end device. This is the initial stage of cloud-based PnP, where, by predefining the device model, it becomes convenient for the device to automatically register and connect after power-on, thereby improving the convenience and automation of device integration.
(2)
Subsequently, the cloud platform needs to install the end device collection APP on the edge device. This application serves as a communication bridge between the end device and the edge device, responsible for data collection and transmission.
(3)
Once the end device is powered on and starts up, it should immediately upload its identity information to the edge device. This step is crucial to ensure the security and identification of the end device, while also providing essential information for the subsequent device registration.
(4)
The device collection APP on the edge device receives the end device’s active registration identity information or completes the registration process between the end device and the edge device after the edge device self-discovers the end device. The active registration process is applicable to network-type end devices, such as Ethernet and IP-based communication carriers. On the other hand, the self-discovery registration process is applicable to serial port-based end devices, such as broadband carriers and RS-485 communication end devices.
(5)
Once the end device is successfully registered, the edge device will send the end device’s identity registration message to the cloud platform through the IoT business APP. This process synchronizes the existence and status of the end device to the cloud platform, further enhancing the integrity and real-time nature of the system.
(6)
Upon receiving the identity registration message from the end device, the cloud platform will complete the documentation of the end device and the association with primary and secondary devices based on the uploaded end device identity information. This process should support manual completion of secondary model documentation for the end device and the association of primary and secondary devices. Additionally, it should also support the completion of secondary model documentation for the end device through scanning QR codes or importing files.
(7)
Upon receiving the online or offline message from the end device, the cloud platform should promptly update the operational status of the end device. This step ensures that the cloud platform can reflect and manage the operational status of the end device in real time, contributing to the stable operation of the system.
(8)
Finally, when the cloud platform receives business messages from the end device, it will perform in-depth parsing and processing of the messages. This will facilitate a further understanding of the device’s operational condition, enable the discovery of potential issues, and facilitate effective problem resolution.
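A minimal sketch of the two registration paths in step (4) is given below; the transport labels, function names, and in-memory registry are assumptions made for illustration.

```python
# Sketch of the two end-device registration paths in step (4): active registration
# for network-type devices and self-discovery for serial-port devices. The transport
# labels, function names, and in-memory registry are illustrative assumptions.
registered_end_devices = {}

def active_register(device_id: str, identity: dict) -> None:
    """A network-type end device (Ethernet/IP) pushes its identity to the edge collection APP."""
    registered_end_devices[device_id] = {"mode": "active", **identity}

def self_discover(serial_bus_scan: list) -> None:
    """The edge collection APP polls a serial bus (e.g., RS-485) and registers discovered devices."""
    for device_id, identity in serial_bus_scan:
        registered_end_devices[device_id] = {"mode": "self-discovery", **identity}

active_register("METER-17", {"type": "smart energy meter"})
self_discover([("SENSOR-03", {"type": "environmental monitoring terminal"})])
print(registered_end_devices)
```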
One of the key metrics we employed to measure the speed of the integration process is the average time taken to establish connections between various microgrid components. In a comparative study, we executed integration tasks using both traditional methods and our proposed edge–cloud collaboration approach. The results clearly demonstrate a significant reduction in the time required for component connection, with our method outperforming the traditional methods by an average of 30%. This reduction in integration time showcases the effectiveness of our PnP method in streamlining the setup process.
Latency reduction is another critical aspect that directly impacts the smoothness of the integration process. We conducted latency tests during data exchange between edge devices and the cloud platform using our approach. Our findings indicate an average latency reduction of 25% when compared to conventional integration methods. This reduction not only ensures faster data flow but also contributes to the overall stability and responsiveness of the microgrid system.

3.5. Data Exchange and Communication Mechanism

In this study, we focused on the practical application of plug and play (PnP) and topology identification methods based on edge–cloud collaboration in the Jingshan Microgrid Project. To achieve this goal, we designed an efficient data exchange mechanism to ensure the real-time collection of power data from key nodes within the microgrid and their transmission to the cloud platform for analysis and monitoring. The following provides a detailed description of the data exchange process and communication mechanism.

3.5.1. Data Transmission from Edge Devices to the Cloud

In the Jingshan Microgrid Project, we installed power sensors at multiple critical nodes to collect real-time power data, including current and voltage readings. These sensors transmit data to a gateway device located at the center of the microgrid, utilizing wireless communication technologies. The process is as follows (a simplified code sketch of this pipeline is given after the list):
(1)
Data collection: Each sensor measures power data based on a predefined sampling frequency and stores them in a local cache, forming a data packet.
(2)
Data transmission: We selected LoRaWAN, a low-power wide-area network protocol, for wireless data transmission. This protocol features low power consumption and long transmission distances, making it suitable for the dispersed sensor nodes within the microgrid. Sensors transmit their data packets to the gateway device over LoRaWAN.
(3)
Data preprocessing: Upon receiving data packets from the sensors, the gateway device performs initial data preprocessing. This includes checking the integrity and accuracy of the data to ensure their reliability.
(4)
Data transmission to the cloud: The gateway device transfers preprocessed data to the cloud platform through Ethernet or wireless networks. The cloud platform receives and stores these real-time power data, preparing them for subsequent data analysis and monitoring.
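The uplink pipeline in steps (1)–(4) can be sketched as follows. The snippet does not implement LoRaWAN framing; the packet fields, plausibility thresholds, and derived power calculation are illustrative assumptions.

```python
# Sketch of the sensor-to-gateway-to-cloud pipeline in steps (1)-(4) above.
# LoRaWAN framing is not reproduced; packet fields and thresholds are illustrative.
import json
import time
from typing import Optional

def build_packet(sensor_id: str, voltage: float, current: float) -> dict:
    """Step (1): a sensor samples and caches one data packet."""
    return {"sensor": sensor_id, "ts": time.time(), "V": voltage, "I": current}

def gateway_preprocess(packet: dict) -> Optional[dict]:
    """Step (3): basic integrity and plausibility checks at the gateway."""
    if not all(k in packet for k in ("sensor", "ts", "V", "I")):
        return None                               # drop incomplete packets
    if packet["V"] <= 0 or abs(packet["I"]) > 1e4:
        return None                               # drop implausible readings
    packet["P"] = packet["V"] * packet["I"]       # derived power (unity power factor assumed)
    return packet

def push_to_cloud(packet: dict) -> str:
    """Step (4): forward the cleaned packet to the cloud platform (stubbed as JSON)."""
    return json.dumps(packet)

pkt = gateway_preprocess(build_packet("NODE-7", 231.2, 4.8))
if pkt is not None:
    print(push_to_cloud(pkt))
```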

3.5.2. Data Transmission from the Cloud to Edge Devices

At the cloud platform, real-time power data are analyzed to calculate operational parameters of various power components within the microgrid, such as output power and load information. The cloud platform can also generate control commands, such as adjusting the output power of photovoltaic cells. These commands need to be transmitted back to edge devices for remote control and microgrid performance optimization. The following outlines the specific data transmission process (a brief code sketch of the downlink path is given after the list):
(1)
Data analysis: The cloud platform analyzes the received real-time power data, calculates the operating status and performance parameters of various power components within the microgrid, and generates detailed reports.
(2)
Control commands: Based on the results of data analysis, the cloud platform may generate a series of control commands used to adjust the working status of power components to optimize performance or meet real-time demands.
(3)
Data feedback: The cloud platform transmits the generated control commands back to the gateway device located at the center of the microgrid, preparing for the next step of edge device response.
(4)
Edge device response: Upon receiving control commands from the cloud platform, the gateway device passes the commands to the corresponding power components. For instance, the gateway can adjust the working status of an inverter in response to control commands from the cloud platform.
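A brief sketch of the downlink path in steps (2)–(4) follows; the command fields and the dispatch table are illustrative, and the inverter handler is a stand-in for the actual component controller.

```python
# Minimal sketch of the downlink path in steps (2)-(4): the cloud emits a control
# command and the gateway dispatches it to the target component. Command fields
# and the dispatch table are illustrative.
def make_command(target: str, setpoint_kw: float) -> dict:
    """Step (2): the cloud generates a command from its analysis results."""
    return {"target": target, "set_active_power_kw": setpoint_kw}

def gateway_dispatch(command: dict, components: dict) -> None:
    """Step (4): the gateway forwards the command to the matching component controller."""
    handler = components.get(command["target"])
    if handler is not None:
        handler(command["set_active_power_kw"])

components = {"PV-inverter-2": lambda p: print(f"PV-inverter-2 setpoint -> {p} kW")}
gateway_dispatch(make_command("PV-inverter-2", 6.5), components)
```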

4. Topology Identification of Microgrid Based on Correlation Analysis

4.1. Overall Description of Topology Identification

In order to resolve the contradiction between the high frequency of anomalies in low-voltage distribution network devices and slow diagram updates, we have adopted PnP and automatic registration maintenance technology, combined with the cloud platform’s topological information for automatic verification. Through this solution, proactive discovery and automatic maintenance of information such as substation transformer–user relationships and abnormal power supply phases can be achieved. In this study, we deploy smart fusion terminals in microgrids, which are characterized by their light weight, low power consumption, and cost-effectiveness. These smart fusion terminals can sense the status and information of each node in the microgrid in real time and transmit these data to the cloud platform for further processing and analysis. By adopting PnP and automatic registration maintenance technology, we make the installation and configuration process of smart fusion terminals more convenient and automated. Additionally, we utilize the topological information on the cloud platform for automatic verification to ensure the correctness and consistency of the network topology.
Through the deployment of smart fusion terminals and the automatic verification function, we can proactively discover the relationships between substation transformers and users, as well as abnormalities in power supply phases. The smart fusion terminals will monitor the node connections and power transmission in the power grid in real time and, through communication with the cloud platform, achieve dynamic updating and maintenance of node topology information. Figure 8 illustrates the proposed topology identification application architecture, where intelligent fusion terminals are deployed in the microgrid to facilitate data transmission and topology information updates through communication with the cloud platform. The cloud platform processes and analyzes the data from the smart fusion terminals, providing automatic verification functionality and a topology visualization management interface to support maintenance personnel in fault troubleshooting and network optimization.

4.2. Topology Identification Method

The Pearson correlation coefficient analysis method, also known as the Pearson product–moment correlation coefficient, is sensitive to changes in the degree of correlation and has minimal errors in both linear and non-linear cases. It is defined as shown in Equation (1).
P = \frac{\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_{i=1}^{n}(X_i - \bar{X})^2} \sqrt{\sum_{i=1}^{n}(Y_i - \bar{Y})^2}}
where $\bar{X}$ and $\bar{Y}$ represent the means of X and Y, respectively. The value of P ranges between −1 and 1. The larger the absolute value of P, the higher the correlation between X and Y; conversely, the smaller the absolute value of P, the lower the correlation between X and Y.
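Equation (1) translates directly into a few lines of code; the sketch below computes the coefficient explicitly with NumPy on a toy pair of series (the data are illustrative). NumPy’s np.corrcoef would give the same value; the explicit form is shown to mirror Equation (1).

```python
# Direct implementation of the Pearson correlation coefficient in Equation (1).
import numpy as np

def pearson(x, y) -> float:
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    dx, dy = x - x.mean(), y - y.mean()
    return float(np.sum(dx * dy) / np.sqrt(np.sum(dx ** 2) * np.sum(dy ** 2)))

# Toy example: two strongly correlated energy series
x = np.array([1.0, 2.1, 3.0, 4.2, 5.1])
y = 2.0 * x + np.array([0.05, -0.02, 0.01, 0.03, -0.04])
print(round(pearson(x, y), 4))   # close to 1.0
```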
The energy of the power grid is measured through a smart fusion terminal, and the measurement values are arranged as $Z = [Z_i(j)]_{n \times N}$.
The measured energy value is denoted as $Z_i(j)$, where j represents the parent node and i represents the end node.
According to the principle of energy conservation, the energy flowing into a node is equal to the energy flowing out of the node. Thus, we can obtain
Z_k(j) = \sum_{i \in L_k} Z_i(j), \quad k \in K
where K represents the set of all parent nodes in the graph, and $L_k$ is the set of all child nodes of parent node k.
Before aggregating data at the smart fusion terminals, it is necessary to calculate the correlation between each individual smart fusion terminal and the branch. Correlation analysis is employed to preidentify the topology structure. The smart fusion terminals are sorted in descending order of their correlation coefficients; the terminal with the highest correlation coefficient is identified and its index stored, and the process is repeated for the second and third highest coefficients. To calculate the correlation coefficient between Xi and Yj (i.e., the smart fusion terminal and the branch), the Xk with the highest correlation coefficient is selected and added to each remaining Xi individually, after which the correlation coefficients are recalculated. This step is repeated, each time selecting the Xk with the highest correlation coefficient and adding it to each Xi individually, and the process continues iteratively.
Assuming Xi represents a smart fusion terminal and Yj represents a branch, when the two variables Xi and Yj are correlated, their correlation coefficient tends to be large; if Xi and Yj are identical, their correlation coefficient is 1. Therefore, this method first calculates the correlation coefficient between Xi and Yj. The Xk with the highest correlation coefficient is selected and added to each Xi individually. The correlation coefficient is then calculated again, and the Xk with the highest correlation coefficient is again selected and added to each Xi individually. This process continues until the correlation coefficient approaches 1, for example, exceeding 0.99 (0.99 is an indicative threshold rather than a fixed value). However, since simply adding up the individual fusion terminals with the highest correlation coefficients may not necessarily yield the combination most correlated with the branch, it is necessary to iterate through the data of each layer of meters, adding them to the next layer and finding their most correlated combination, which is then considered a submetering layer.
Based on this, we can derive a calculation method using iterative thinking to calculate their correlation coefficient. The calculation flowchart is shown in Figure 9.
The specific calculation steps are as follows (a simplified code sketch of the core grouping loop is given after the list):
(1)
Enter the main loop and input the number of all fusion terminals (except the last layer) plus branches.
(2)
Input the data of all branches and fusion terminals.
(3)
Extract the data of the first branch and fusion terminals.
(4)
Calculate the correlation coefficient r(j,i) between a branch (mother table) and a fusion terminal (subtable), as shown in Equation (4):
r(j,i) = \frac{\sum_{k=1}^{n}(X_k - \bar{X})(Y_k - \bar{Y})}{\sqrt{\sum_{k=1}^{n}(X_k - \bar{X})^2} \sqrt{\sum_{k=1}^{n}(Y_k - \bar{Y})^2}}
where $X_k$ and $Y_k$ denote the k-th energy samples of the fusion terminal and the branch, respectively.
  • (4.1)
    Calculate the electric energy ratio N(j,i) of the subtable to the mother table, as shown in Equation (5):
N(j,i) = \frac{M(j,i)}{M(j,1)}
  • (4.2)
    Calculate the mean correlation coefficient Rmean of the same branch, as shown in Equation (6):
R_{mean} = \frac{\sum_{i=1}^{n} r(1,i)}{n}
  • (4.3)
    Calculate the average value Nmean(j,i) of the ratio over a period of time, as shown in Equation (7):
N_{mean}(j,i) = \frac{\sum_{j=c,\, i=1}^{n} N(j,i)}{n}
(5)
Sort the correlation coefficients in descending order and store the correspondingly ordered terminals in matrix RM.
(6)
Enter the subloop.
(7)
Add the energy value Xk of the fusion terminal with the highest correlation coefficient to the energy value Xi of other fusion terminals.
(8)
Recalculate the correlation coefficient r(j,i), the ratio of fusion terminal to branch N(j,i), and the average value Nmean(j,i).
(9)
Determine if the following three conditions are met:
  • (a)
    r(j,i) > 0.9;
    (b)
    0.85 < Nmean(j,i) < 1.05;
    (c)
    Nmean(j,i) ≥ 1.05.
(10)
When conditions (a) and (b) are met, it indicates compliance. The required data will be processed and stored in matrix named as MatJ. Each iteration will find the fusion terminal with the highest correlation coefficient. (Note: The first column of MatJ represents the number of subfusion terminals corresponding to the mother table, the second column represents the correlation coefficient, the third column represents the ratio, and starting from the fourth column, the indices the subfusion terminals are recorded.)
(11)
When conditions (a) and (c) are met, it indicates that additional data have been added. It is necessary to return to the previous player to continue to search for the true value.
(12)
When condition (c) is not met, and either condition (a) or (b) is not met, it indicates that only the first fusion terminal within this branch has been found, and the number of fusion terminals found is insufficient to meet the conditions.
(13)
If step (9) is completed, proceed to step (14). If steps (10) and (11) are completed, return to step (6), in order to achieve the purpose of iterative looping.
(14)
Reorder the rows of matrix MatJ based on the correlation coefficients in the second column and store the result in a matrix named as MatRes.
(15)
Store the data of the i-th branch in the i-th row of MatTP, where MatTP is the topological matrix that we have derived, where the first column represents the topological hierarchy, the second column represents the branch number, the third column represents the number of fusion terminals in the branch, the fourth column represents the correlation coefficients, the fifth column represents the ratio, and starting from the sixth column, the subfusion terminal indices are recorded, which are the same as the first row of MatRes.
(16)
Replace branch.
(17)
Go back to Step (2).
(18)
After completing the iteration for this topological layer, automatically proceed to the next topological layer.
(19)
When all the topological layers have been traversed, exit the loop and output the matrix MatTP.
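To make the flow of steps (6)–(13) concrete, the following is a minimal greedy sketch, assuming the branch and terminal energy series are NumPy arrays. It collapses the windowed ratio Nmean(j,i) into a single cumulative ratio and omits the backtracking to the previous layer described in step (11); names such as greedy_branch_match are illustrative only and do not come from the JS-MCS implementation.

```python
import numpy as np

def greedy_branch_match(branch, terminals, r_min=0.9, n_lo=0.85, n_hi=1.05):
    """Greedy sketch of steps (6)-(13): accumulate the most-correlated fusion
    terminals under one branch until the correlation and energy-ratio checks pass.

    branch:    (n_samples,) branch (mother table) energy readings
    terminals: (n_terminals, n_samples) fusion terminal (subtable) readings
    Returns the selected terminal indices, the final correlation, and the final ratio.
    """
    def corr(x, y):
        x, y = x - x.mean(), y - y.mean()
        return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))

    remaining = list(range(len(terminals)))
    selected, agg = [], np.zeros_like(branch, dtype=float)
    while remaining:
        # pick the remaining terminal whose addition maximizes the correlation with the branch
        best = max(remaining, key=lambda k: corr(agg + terminals[k], branch))
        selected.append(best)
        remaining.remove(best)
        agg = agg + terminals[best]
        r = corr(agg, branch)
        n_ratio = float(agg.sum() / (branch.sum() + 1e-12))  # cumulative stand-in for Nmean(j,i)
        if r > r_min and n_lo < n_ratio < n_hi:
            return selected, r, n_ratio                       # conditions (a) and (b) met
        if n_ratio >= n_hi:
            break  # condition (c): overshoot; the full method backtracks to the previous layer here
    return selected, corr(agg, branch), float(agg.sum() / (branch.sum() + 1e-12))
```

Repeating this search for every branch and layer, and storing each accepted combination, yields rows of the topological matrix MatTP described in step (15).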

5. Example Analysis

5.1. Analysis of PnP

During the operation of the microgrid, distributed energy resources may be put into operation or shut down frequently, as described in Section 3.2. The active power monitoring curves before and after PV integration into the microgrid test platform are presented in Figure 10. The "microgrid test platform" is a simulation environment designed specifically for this research; it does not directly correspond to the physical implementation of the Jingshan Microgrid Project described in Section 2.1 but serves as a tool for testing our algorithms and strategies. Within this simulation framework, we replicate the characteristics and behaviors of a microgrid, including its components, load profiles, intermittent renewable energy sources, communication infrastructure, and other relevant factors. This allows us to evaluate the integration of new components, the adaptability of our plug and play mechanisms, and the accuracy of our topology identification approach [24,25].
Based on the data presented in the figure, it is evident that prior to 08:30 the PV system was not integrated into the microgrid and its power output was approximately 0 kW. During the period from 08:30 to 08:32, the PV system was connected to the microgrid and its output power was configured to the default value of 3 kW, thereby initiating the PnP data exchange between cloud and edge illustrated in Figure 6: the edge unit procures the operational measurements and status information, including the measured values DO, status information DO, controls DO, and settings DO, from the cloud platform, which are then used to update the microgrid configuration information. After 08:32, the PnP process is finalized and the PV system operates within the microgrid control system with its power output updated to 6.5 kW. As shown in Figure 10, the microgrid swiftly achieves the PnP functionality, contributing to greater efficiency in system integration and operational management.
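For illustration only, the following sketch mimics the cloud–edge configuration update described above using plain Python objects. The EdgeUnit class, the dictionary-based payload, and all example values are hypothetical simplifications introduced here; they do not reproduce the actual JS-MCS interfaces or the IEC 61850 data object model.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class EdgeUnit:
    """Hypothetical edge unit that merges cloud-provided DO groups into its local configuration."""
    config: Dict[str, Dict] = field(default_factory=dict)

    def on_device_plugged(self, device_id: str, cloud_payload: Dict[str, Dict]) -> None:
        # The payload is assumed to carry the four DO groups named in the text.
        expected = {"measured_values", "status_information", "controls", "settings"}
        missing = expected - cloud_payload.keys()
        if missing:
            raise ValueError(f"incomplete PnP payload, missing: {missing}")
        self.config[device_id] = cloud_payload  # update the microgrid configuration information

edge = EdgeUnit()
edge.on_device_plugged("PV_A3", {
    "measured_values": {"active_power_kW": 3.0},   # default output at connection time
    "status_information": {"connected": True},
    "controls": {"power_setpoint_kW": 6.5},
    "settings": {"rated_power_kW": 10.0},          # illustrative value only
})
```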

5.2. Analysis of Topology Identification

Measurements of electrical signals generally contain noise, which introduces measurement errors to some extent. It is therefore important to verify that the proposed model can achieve topology identification in the presence of noise. In this paper, we corrupt the test samples of the models with white Gaussian noise at different signal-to-noise ratios (SNRs). Ten scenarios with different SNRs are designed, as listed in Table 1.
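The noise injection used to build these scenarios can be reproduced with a short helper of the following form; this is a minimal sketch assuming the measurement series is a NumPy array and that the SNR is defined with respect to the average signal power, and the function name add_awgn is illustrative.

```python
import numpy as np

def add_awgn(signal: np.ndarray, snr_db: float, rng=None) -> np.ndarray:
    """Corrupt a measurement series with white Gaussian noise at a target SNR.

    snr_db: desired signal-to-noise ratio in dB (e.g., 40 for Scenario I).
    """
    rng = np.random.default_rng() if rng is None else rng
    p_signal = np.mean(signal.astype(float) ** 2)        # average signal power
    p_noise = p_signal / (10 ** (snr_db / 10.0))         # noise power for the target SNR
    noise = rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)
    return signal + noise
```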
Table 2 presents the topological identification results between the microgrid controller (MGC) layer and the DG layer (with the mark "A") under Scenario I. The correlation coefficient ranges from 0.9961 to 0.9975, which indicates a strong relationship between the identified topology and the actual topology. Additionally, the serial numbers of the DGs connected with the MGC correspond to those depicted in Figure 1, showing that the proposed model achieves accurate topological identification under an SNR of 40 dB. Furthermore, the quantity of DGs also influences the correlation coefficient outcomes.
Table 3 presents the topology identification result between the DG layer (with the mark "A") and the unit layer (with the mark "B") under Scenario I. The correlation coefficients consistently remain at a high level, ranging from 0.9949 to 0.9998, which indicates a significant association between the identified topology and the actual topology and implies that the model can perform accurate topological identification under an SNR of 40 dB. Moreover, the quantity of distributed generation units connected to the controllers varies, resulting in some fluctuation in the correlation coefficients; despite this variability, the model maintains consistently high correlation coefficients across different topological configurations. Furthermore, the identified serial numbers of the units connected with the DGs align with those represented in Figure 1, highlighting the model's capability to accurately recognize the connections between the DG layer and the unit layer under Scenario I.
Table 4 shows the topology identification result between the MGC layer and the DG layer (marked as “A”) under Scenario II. The correlation coefficients range from 0.9907 to 0.9992, indicating a strong relationship between the identified topology and the actual topology. The model achieves improved accuracy under an SNR of 39 dB, and the quantity of DGs also influences the correlation coefficient outcomes. The serial numbers of DGs connected with the MGC align with those in Figure 1.
Table 5 presents the topology identification result between the DG layer (with the mark “A”) and the unit layer (with the mark “B”) under Scenario II. The table data show consistently high correlation coefficients (0.9949 to 0.9998), indicating a strong association between the identified topology and the actual topology. The model accurately performs topological identification even under an SNR of 39 dB. Despite variations in the quantity of distributed generation units connected to the controllers, the model maintains consistent correlation coefficients. The identified serial numbers of units connected with the DG align with those in Figure 1, highlighting the model’s accurate recognition of connections in Scenario II.
Table 6 shows the topology identification result between the MGC layer and the DG layer (marked as "A") under Scenario III. The correlation coefficients range from 0.9897 to 0.9953, indicating that the identified topology is strongly related to the actual topology. Under an SNR of 38 dB, the quantity of DGs still affects the correlation coefficients, and the serial numbers of the DGs connected with the MGC align with those in Figure 1.
Table 7 presents the topology identification result between the DG layer (with the mark "A") and the unit layer (with the mark "B") under Scenario III. The correlation coefficients, ranging from 0.9923 to 0.9997, show a strong correlation between the identified topology and the actual topology, and the model demonstrates precise topological identification under an SNR of 38 dB. Despite differences in the number of distributed generation units connected to the controllers, the model maintains consistently high correlation coefficients. Furthermore, the model accurately recognizes the connections between the distributed generation units and the DG layer, as evidenced by the alignment of the identified serial numbers with those depicted in Figure 1 under Scenario III.
Figure 11 and Figure 12 illustrate the correlation coefficients between the MGC layer and the DG layer and between the DG layer and the unit layer for Scenarios I to X. In the figures, larger colored squares correspond to larger correlation coefficients, and the values remain around 0.99. This indicates that both sets of correlation coefficients stay consistently high across different noise levels, validating the effectiveness of the topological identification model.
In real topology identification applications, the method must be able to handle variations in grid conditions, such as load changes or intermittent renewable energy sources. In this case, we remove PV B53 from the microgrid and test the accuracy of the topology identification method. Table 8 shows the topology identification result between the MGC layer and the DG layer (marked as "A") in this case; based on the correlation coefficients, the identified topology is strongly related to the actual topology. Table 9 presents the topology identification result between the DG layer (with the mark "A") and the unit layer (with the mark "B") in this case, and again there is a strong correlation between the identified topology and the actual topology. This demonstrates that precise topology identification is achieved under varying grid conditions.
We compared the correlation analysis-based method with two common traditional topology identification methods: traditional method A, which is based on power flow calculations, and traditional method B, which is based on state estimation. Method A deduces the structure of an electrical power system from power flow calculations, which analyze the current–voltage relationships between nodes in the power network along with the power exchange between loads and generators; this analysis allows the topological connections within the network to be determined. Method B relies on state estimation, which uses measured values to estimate variables such as voltage magnitudes and phase angles at the nodes of the electrical system; by comparing the measured data with the power system model, state estimation infers the interconnections between nodes and thereby reveals the topology of the system. Through extensive simulation experiments, we found that our method exhibited a significant improvement in computational efficiency compared with these traditional approaches. The specific computation times are summarized in Table 10.
From Table 10, it is evident that our method reduced the computation time by 80% and 75% compared with traditional methods A and B, respectively, indicating a substantial increase in computational efficiency. This enhancement is particularly significant for real-time applications and large-scale scenarios in microgrids and is expected to bring higher efficiency and performance to practical applications.
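The timing comparison itself can be reproduced with a simple harness such as the sketch below; the identification callables and the dataset object are assumed placeholders for whichever of the three methods is being measured, not actual interfaces from our implementation.

```python
import time
import statistics

def benchmark(method, dataset, repeats: int = 20) -> float:
    """Return the average wall-clock time (in milliseconds) of a topology
    identification callable over `repeats` runs on the same dataset."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        method(dataset)  # e.g., the correlation-based identification routine
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.mean(samples)

# Usage with hypothetical callables and data:
# t_ours = benchmark(correlation_based_identify, measurements)
# t_a    = benchmark(power_flow_identify, measurements)
# print(f"reduction vs. method A: {100 * (1 - t_ours / t_a):.0f}%")
```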

6. Conclusions

This paper presents an in-depth study of edge–cloud collaboration-based plug and play (PnP) and topology identification for microgrids, focusing on the JS-MCS in Hubei, China. The exploration of PnP highlights the pivotal role of cloud–edge data exchange in rapidly updating the microgrid configuration and ensuring seamless integration, underscoring the significance of edge–cloud collaboration in enhancing microgrid responsiveness and adaptability and in optimizing power output and resource utilization. Using correlation analysis under diverse signal-to-noise ratios (SNRs), the study demonstrates a robust relationship between the identified and actual topologies across scenarios. Despite the introduced noise, the proposed model consistently maintains high correlation coefficients and accurately recognizes grid connections, highlighting the model's resilience and precision and signifying a substantial advancement in microgrid topology identification.
In conclusion, this study’s findings reinforce the viability and promise of edge–cloud collaboration in modern microgrid systems.
Future research should extend the validation of our methods to diverse microgrid scenarios beyond the Jingshan Microgrid case study, assessing adaptability across various grid conditions, energy sources, and geographical factors and thereby enhancing the methods' versatility. Moreover, ensuring the scalability of our techniques for larger microgrids remains pivotal: optimizing the computational efficiency of the PnP and topology identification methods will be imperative to accommodate the complexity of larger systems while upholding accuracy. It is also worth noting that, although our study focuses on the Jingshan Microgrid, the challenges it addresses are tied to standards and conditions that extend beyond this single project; this perspective adds an international dimension to the research, underscoring its relevance for global microgrid projects and offering insights for future research and practical applications.

Author Contributions

Data curation, Q.W. and Y.D.; Formal analysis, J.H. and F.Y.; Funding acquisition, Y.L.; Investigation, J.H., W.H. and H.M.; Methodology, Z.Y. and J.H.; Resources, F.Y. and Y.L.; Software, Z.Y., K.Z. and W.H.; Validation, K.Z. and H.M.; Writing—original draft, Q.W. and Y.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the State Grid Hubei Electric Power Company Science and Technology Project (No. 521532220008).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, W.; Guo, B.; Li, R.S. Research on Technical Specification of Multi-port Photovoltaic Energy Storage Co-generation Device. Electr. Power Supply Util. 2018, 35, 63–68. [Google Scholar]
  2. Sitharthan, R.; Vimal, S.; Verma, A.; Karthikeyan, M.; Dhanabalan, S.S.; Prabaharan, N.; Rajesh, M.; Eswaran, T. Smart microgrid with the internet of things for adequate energy management and analysis. Comput. Electr. Eng. 2023, 106, 108556. [Google Scholar] [CrossRef]
  3. Haase, P. Intelligrid: A smart network of power. EPRI J. 2005, 30, 17–25. [Google Scholar]
  4. Gonzalez, I.; Calderon, A.J.; Folgado, F.J. IoT real time system for monitoring lithium-ion battery long-term operation in microgrids. J. Energy Storage 2022, 51, 104596. [Google Scholar] [CrossRef]
  5. Lu, Y.M.; Liu, D.; Liu, J.; Huang, Y.; Ling, W.; Gu, J. Analysis of Information Integration Requirements and Models for Intelligent Distribution Networks. Power Syst. Autom. 2010, 34, 1–4. [Google Scholar]
  6. Wu, H.C.; Wu, Y.; Zhu, H.; Tan, Z.; Liu, M. Research on Plug and Play System for Distribution Automation Terminal Based on IEC 61850 Standard. Electr. Power Supply Util. 2015, 1, 60–63. [Google Scholar]
  7. Xie, J.; Ye, Q.; Lu, Y.; Zhang, J. Research on Plug and Play Information Interaction Mechanism for Distributed Power Generation. Electr. Power Supply Util. 2019, 36, 52–60. [Google Scholar]
  8. Wu, H.; Teng, X.; Zhou, C. Implementation Method of Plug and Play for Substation Equipment Based on IEC 61850. South. Power Syst. Technol. 2021, 15, 72–78. [Google Scholar]
  9. Zhong, J.Y.; Xiong, X.F.; He, Y.C.; Shan, R.; Jiang, H.; Li, S. Plug and Play and Topology Recognition Method for Substation Intelligent Terminals. Power Syst. Autom. 2021, 45, 166–173. [Google Scholar]
  10. Xie, J. Research on Heterogeneous Mapping and Plug and Play Mechanism of Distribution Network Information Model. Master’s Thesis, Shanghai Jiao Tong University, Shanghai, China, 2018. [Google Scholar]
  11. Li, R.S. Concept of Plug and Play for Stochastic Power Sources in Cloud-Layer-Edge Three-layer Architecture. Power Syst. Prot. Control. 2016, 44, 47–54. [Google Scholar]
  12. Xu, Y.; Song, Y.Z.; Zhang, Y.G.; Wang, Z.; Song, R. Improved Power Grid Topology Identification Based on Wide-Area Measurement System. Power Syst. Technol. 2010, 34, 88–93. [Google Scholar]
  13. Lourenco, E.M.; Coelho, E.P.R.; Pal, B.C. Topology Error and Bad Data Processing in Generalized State Estimation. IEEE Trans. Power Syst. 2014, 30, 3190–3200. [Google Scholar] [CrossRef]
  14. Yang, Z.; Shen, Y.; Yang, F.; Le, J.; Su, L.; Lei, Y. Topology Identification Method for Low Voltage Distribution Network Based on Data Association Analysis. Electr. Meas. Instrum. 2020, 57, 5–11+35. [Google Scholar]
  15. Peppanen, J.; Grijalva, S.; Reno, M.J.; Broderick, R.J. Distribution system low-voltage circuit topology estimation using smart metering data. In Proceedings of the 2016 IEEE/PES Transmission and Distribution Conference and Exposition (T&D), Dallas, TX, USA, 3–5 May 2016; pp. 1–5. [Google Scholar]
  16. Babakmehr, M.; Simões, M.G.; Wakin, M.B.; Harirchi, F. Compressive Sensing-Based Topology Identification for Smart Grids. IEEE Trans. Ind. Inform. 2016, 12, 532–543. [Google Scholar] [CrossRef]
  17. Soumalas, K.; Messinis, G.; Hatziargyriou, N. A data driven approach to distribution network topology identification. In Proceedings of the 2017 IEEE Manchester Power Tech, Manchester, UK, 18–22 June 2017; pp. 1–6. [Google Scholar]
  18. Minh, Q.N.; Nguyen, V.H.; Quy, V.K.; Ngoc, L.A.; Chehri, A.; Jeon, G. Edge Computing for IoT-Enabled Smart Grid: The Future of Energy. Energies 2022, 15, 6140. [Google Scholar] [CrossRef]
  19. Chen, Y.; Hayawi, K.; Fan, M.; Chang, S.Y.; Tang, J.; Yang, L.; Zhao, R.; Mao, Z.; Wen, H. A Bilevel Optimization Model Based on Edge Computing for Microgrid. Sensors 2022, 22, 7710. [Google Scholar] [CrossRef]
  20. Qiao, Z.; Du, X.; Xue, Q. Review of large scale wind power participating in system frequency regulation. Electr. Meas. Instrum. 2023, 60, 1–12. [Google Scholar]
  21. Geng, Z. Analysis of wind-induced response and wind-resistant performance of high-voltage transmission tower—Line coupling system on Ansys. Electr. Meas. Instrum. 2023, 60, 84–90. [Google Scholar]
  22. Liu, G.; Wang, X.; Li, H.; Zhao, C.; Ling, W.; Ji, X. Interval power flow for power distribution networks considering uncertain wind power injection. Electr. Meas. Instrum. 2022, 59, 126–132. [Google Scholar]
  23. Han, J.; Lyu, W.; Song, H.; Qu, Y.; Wu, Z.; Zhang, X.; Li, Y.; Chen, Z. Optimization of Communication Network for Distributed Control of Wind Farm Equipped with Energy Storages. IEEE Trans. Sustain. Energy 2023, 90, 1–15. [Google Scholar] [CrossRef]
  24. Lin, Z.; Luo, B.; Song, Z. Fault Rate Assessment of Voltage Sag Sensitive Equipment Based on Nonparametric Assessment. Electr. Meas. Instrum. 2023, 60, 86–95. [Google Scholar]
  25. Hu, S.; Wang, B.; Yin, J.; Luo, Y.; Li, R.; Xiao, X. Risk assessment of key components for smart meters based on FMECA. Electr. Meas. Instrum. 2023, 60, 174–179. [Google Scholar]
Figure 1. Structure of physical system of JS-MCS.
Figure 2. Information system of JS-MCS.
Figure 3. Overall architecture of cloud platform.
Figure 4. Edge device functional architecture.
Figure 5. Basic architecture of PnP application.
Figure 6. Data exchange between cloud and edge.
Figure 7. Data exchange between cloud and terminal.
Figure 8. Topology identification application architecture.
Figure 9. The calculation flowchart.
Figure 10. Power-monitoring curves of PV (A3 in JS-MCS).
Figure 11. Correlation coefficient between the MGC layer and the DG layer under Scenarios I to X.
Figure 12. Correlation coefficient between the DG layer and the unit layer under Scenarios I to X.
Table 1. Ten scenarios with different SNRs.
Serial Number of Scenario | I | II | III | IV | V | VI | VII | VIII | IX | X
SNR (dB) | 40 | 39 | 38 | 37 | 36 | 35 | 34 | 33 | 32 | 31
Table 2. Topology identification results between the MGC layer and the DG layer (with the mark "A") under Scenario I.
Serial Number of MGC | Number of DGs (A) | Correlation Coefficient | Serial Numbers of the DGs (A) Connected with the MGC
1 | 4 | 0.9966 | 3, 2, 4, 1, 0
2 | 5 | 0.9975 | 6, 8, 9, 5, 7
3 | 5 | 0.9965 | 14, 13, 12, 11, 10
4 | 3 | 0.9961 | 15, 17, 16, 0, 0
5 | 3 | 0.9975 | 19, 18, 20, 0, 0
Table 3. Topology identification result between the DG layer (with the mark "A") and the unit layer (with the mark "B") under Scenario I.
Serial Number of DG (A) | Number of Units (B) | Correlation Coefficient | Serial Numbers of the Units (B) Connected with the DG (A)
1 | 3 | 0.9957 | 3, 1, 2, 0, 0
2 | 4 | 0.9992 | 5, 6, 7, 4, 0
3 | 3 | 0.9983 | 9, 10, 8, 0, 0
4 | 3 | 0.9976 | 11, 13, 12, 0, 0
5 | 3 | 0.9975 | 15, 14, 16, 0, 0
6 | 4 | 0.9975 | 20, 18, 19, 17, 0
7 | 3 | 0.9949 | 21, 23, 22, 0, 0
8 | 2 | 0.9990 | 24, 25, 0, 0, 0
9 | 2 | 0.9979 | 26, 27, 0, 0, 0
10 | 2 | 0.9974 | 28, 29, 0, 0, 0
11 | 2 | 0.9980 | 31, 30, 0, 0, 0
12 | 2 | 0.9975 | 32, 33, 0, 0, 0
13 | 2 | 0.9989 | 35, 34, 0, 0, 0
14 | 2 | 0.9998 | 36, 37, 0, 0, 0
15 | 2 | 0.9982 | 38, 39, 0, 0, 0
16 | 3 | 0.9961 | 42, 40, 41, 0, 0
17 | 4 | 0.9976 | 45, 43, 46, 44, 0
18 | 3 | 0.9987 | 47, 49, 48, 0, 0
19 | 2 | 0.9982 | 50, 51, 0, 0, 0
20 | 2 | 0.9980 | 53, 52, 0, 0, 0
Table 4. Topology identification result between the MGC layer and the DG layer (with the mark "A") under Scenario II.
Serial Number of MGC | Number of DGs (A) | Correlation Coefficient | Serial Numbers of the DGs (A) Connected with the MGC
1 | 4 | 0.9978 | 4, 1, 2, 3, 0
2 | 5 | 0.9927 | 6, 7, 5, 9, 8
3 | 5 | 0.9965 | 10, 13, 12, 14, 11
4 | 3 | 0.9907 | 15, 17, 16, 0, 0
5 | 3 | 0.9992 | 19, 18, 20, 0, 0
Table 5. Topology identification result between the DG layer (with the mark "A") and the unit layer (with the mark "B") under Scenario II.
Serial Number of DG (A) | Number of Units (B) | Correlation Coefficient | Serial Numbers of the Units (B) Connected with the DG (A)
1 | 3 | 0.9972 | 2, 3, 1, 0, 0
2 | 4 | 0.9980 | 4, 6, 7, 5, 0
3 | 3 | 0.9990 | 9, 8, 10, 0, 0
4 | 3 | 0.9990 | 12, 13, 11, 0, 0
5 | 3 | 0.9984 | 16, 15, 14, 0, 0
6 | 4 | 0.9985 | 19, 17, 20, 18, 0
7 | 3 | 0.9986 | 23, 22, 21, 0, 0
8 | 2 | 0.9982 | 24, 25, 0, 0, 0
9 | 2 | 0.9982 | 26, 27, 0, 0, 0
10 | 2 | 0.9988 | 29, 28, 0, 0, 0
11 | 2 | 0.9959 | 31, 30, 0, 0, 0
12 | 2 | 0.9970 | 32, 33, 0, 0, 0
13 | 2 | 0.9995 | 35, 34, 0, 0, 0
14 | 2 | 0.9994 | 36, 37, 0, 0, 0
15 | 2 | 0.9996 | 39, 38, 0, 0, 0
16 | 3 | 0.9949 | 40, 42, 41, 0, 0
17 | 4 | 0.9972 | 44, 45, 46, 43, 0
18 | 3 | 0.9974 | 47, 48, 49, 0, 0
19 | 2 | 0.9997 | 50, 51, 0, 0, 0
20 | 2 | 0.9998 | 52, 53, 0, 0, 0
Table 6. Topology identification result between the MGC layer and the DG layer (with the mark "A") under Scenario III.
Serial Number of MGC | Number of DGs (A) | Correlation Coefficient | Serial Numbers of the DGs (A) Connected with the MGC
1 | 4 | 0.9922 | 4, 3, 2, 1, 0
2 | 5 | 0.9934 | 6, 7, 5, 8, 9
3 | 5 | 0.9897 | 13, 14, 12, 10, 11
4 | 3 | 0.9917 | 16, 17, 15, 0, 0
5 | 3 | 0.9953 | 19, 18, 20, 0, 0
Table 7. Topology identification result between the DG layer (with the mark "A") and the unit layer (with the mark "B") under Scenario III.
Serial Number of DG (A) | Number of Units (B) | Correlation Coefficient | Serial Numbers of the Units (B) Connected with the DG (A)
1 | 3 | 0.9994 | 2, 1, 3, 0, 0
2 | 4 | 0.9975 | 4, 5, 7, 6, 0
3 | 3 | 0.9990 | 10, 9, 8, 0, 0
4 | 3 | 0.9996 | 11, 12, 13, 0, 0
5 | 3 | 0.9984 | 16, 14, 15, 0, 0
6 | 4 | 0.9981 | 18, 17, 19, 20, 0
7 | 3 | 0.9992 | 22, 21, 23, 0, 0
8 | 2 | 0.9977 | 25, 24, 0, 0, 0
9 | 2 | 0.9923 | 26, 27, 0, 0, 0
10 | 2 | 0.9995 | 28, 29, 0, 0, 0
11 | 2 | 0.9979 | 31, 30, 0, 0, 0
12 | 2 | 0.9990 | 32, 33, 0, 0, 0
13 | 2 | 0.9988 | 35, 34, 0, 0, 0
14 | 2 | 0.9990 | 36, 37, 0, 0, 0
15 | 2 | 0.9990 | 39, 38, 0, 0, 0
16 | 3 | 0.9994 | 40, 42, 41, 0, 0
17 | 4 | 0.9989 | 43, 45, 46, 44, 0
18 | 3 | 0.9995 | 47, 49, 48, 0, 0
19 | 2 | 0.9997 | 50, 51, 0, 0, 0
20 | 2 | 0.9986 | 52, 53, 0, 0, 0
Table 8. Topology identification result between the MGC layer and the DG layer (with the mark "A") under the load changes scenario.
Serial Number of MGC | Number of DGs (A) | Correlation Coefficient | Serial Numbers of the DGs (A) Connected with the MGC
1 | 4 | 0.9837 | 4, 3, 2, 1, 0
2 | 5 | 0.9906 | 6, 7, 5, 8, 9
3 | 5 | 0.9948 | 13, 14, 12, 10, 11
4 | 3 | 0.9931 | 16, 17, 15, 0, 0
5 | 3 | 0.9968 | 19, 18, 20, 0, 0
Table 9. Topology identification result between the DG layer (with the mark "A") and the unit layer (with the mark "B") under the load changes scenario.
Serial Number of DG (A) | Number of Units (B) | Correlation Coefficient | Serial Numbers of the Units (B) Connected with the DG (A)
1 | 3 | 0.9964 | 2, 1, 3, 0, 0
2 | 4 | 0.9923 | 4, 5, 7, 6, 0
3 | 3 | 0.9984 | 10, 9, 8, 0, 0
4 | 3 | 0.9989 | 11, 12, 13, 0, 0
5 | 3 | 0.9985 | 16, 14, 15, 0, 0
6 | 4 | 0.9992 | 18, 17, 19, 20, 0
7 | 3 | 0.9979 | 22, 21, 23, 0, 0
8 | 2 | 0.9964 | 25, 24, 0, 0, 0
9 | 2 | 0.9983 | 26, 27, 0, 0, 0
10 | 2 | 0.9991 | 28, 29, 0, 0, 0
11 | 2 | 0.9987 | 31, 30, 0, 0, 0
12 | 2 | 0.9976 | 32, 33, 0, 0, 0
13 | 2 | 0.9985 | 35, 34, 0, 0, 0
14 | 2 | 0.9993 | 36, 37, 0, 0, 0
15 | 2 | 0.9977 | 39, 38, 0, 0, 0
16 | 3 | 0.9983 | 40, 42, 41, 0, 0
17 | 4 | 0.9969 | 43, 45, 46, 44, 0
18 | 3 | 0.9980 | 47, 49, 48, 0, 0
19 | 2 | 0.9969 | 50, 51, 0, 0, 0
20 | 1 | 0.9982 | 52, 0, 0, 0, 0
Table 10. Performance Comparison of Different Topology Identification Methods.
Method | Average Computation Time (ms) | Computational Efficiency Improvement (%)
Traditional Method A | 1000 | -
Traditional Method B | 800 | -
Our Method | 200 | 75