Article

Communication Requirements for a Hybrid VSC Based HVDC/AC Transmission Networks State Estimation

1 Efacec Automation, Grid Management Division, 4471-907 Porto, Portugal
2 Faculty of Engineering (FEUP), University of Porto, 4200-465 Porto, Portugal
* Author to whom correspondence should be addressed. Current address: Via de Francisco Sá Carneiro Apartado 3078, 4471-907 Moreira da Maia, Porto, Portugal.
Energies 2021, 14(4), 1087; https://doi.org/10.3390/en14041087
Submission received: 20 January 2021 / Revised: 8 February 2021 / Accepted: 11 February 2021 / Published: 19 February 2021

Abstract

The communication infrastructure of the modern Supervisory, Control and Data Acquisition (SCADA) system continues to expand as hybrid High Voltage Direct Current (HVDC)/Alternating Current (AC) networks emerge. A centralized SCADA faces challenges in meeting the time requirements of two different power network topologies, as well as in employing the SCADA toolboxes for both grids. This paper presents the modern communication infrastructure and the time requirements of a centralized SCADA for a hybrid HVDC/AC network. In addition, a case study of a complete cycle of a unified Weighted Least Squares (WLS) state estimation is tested on a hybrid HVDC/AC transmission network based on Voltage Source Converters (VSC). The cycle covers the elapsed time from the sensors up to the SCADA side, including the data acquisition and the WLS processing times. The case study is carried out on the Cigre B4 DC test case network, with 43 virtual Remote Terminal Units (RTUs) and 10 data concentrators installed, all connected through a fiber-based communication network. It is concluded that the time requirements can be fulfilled for a hybrid HVDC/AC network.

1. Introduction

A supervisory and control system, known as SCADA, is still required as power networks evolve. The SCADA system is a complete structure of hardware components, communication equipment and software toolboxes [1,2,3,4]. The software components reside in the SCADA operator room, which is connected to substations and end-side devices through a Wide Area Network (WAN).
Research on enhanced SCADA systems has become essential, mainly due to the integration of HVDC and low-carbon technologies such as wind and solar generation. Hybrid HVDC/AC networks require faster, more robust and more complex SCADA systems [5]. Recent research has discussed SCADA hardware and software upgrades and modifications. New communication infrastructures have been proposed to cover AC and DC requirements [5,6,7], and new DC-side RTUs are currently being characterised and researched [8,9,10]. A complete modern SCADA communication infrastructure can be split into four layers, as shown in Figure 1a [4,7,11]:
  • Power System Layer: the lowest layer. It covers all the electrical units: generators, transformers, converters, feeders, and collector buses.
  • Data Acquisition and Monitoring Layer: contains sensors and actuators represented by RTUs. This layer reports measurements in different forms, data rates, and frequencies to the communication layer. It also contains circuit breaker controllers and several power protection devices.
  • Communication Network Layer: the third layer and the backbone of the SCADA system. It connects the three main levels of the system, as described below and shown in Figure 1b:
    • Controller area network: the smallest communication network, located inside the RTU. It is responsible for connecting the microcontrollers with their slaves (sensors and actuators).
    • Power area network: a narrow area network, positioned above the controller area network, that provides a stable connection between the main Master Terminal Unit (MTU) and the connected RTUs. Multiple communication protocols are used, depending on the characteristics of the controller type.
    • Station area network: a station-level WAN that provides the communication link between the power area network and the SCADA command center.
  • Human Machine Interface (HMI) Layer: consists of several software packages and graphical user interface applications that support the system operators. Usually, the interface has different menus, options, and screens for each layer of the SCADA.
The main challenge appears in the timescale of the modern SCADA communication network, due to the intensive integration of low-carbon technologies. New time requirements are needed for a stable and robust SCADA system. Figure 2 shows the overall picture of the timescale requirements in a modern hybrid HVDC/AC network SCADA [12,13,14,15]. The different timescale requirements of traditional AC networks and HVDC demand an update to the communication system: it has to be faster and able to handle more control units and data bandwidth [16,17,18]. In addition, the traditional messaging protocols (such as IEEE C37.118 [19] and IEEE 1646 [18]) have to be revisited to include HVDC message characteristics.
Furthermore, a heavy load on the communication infrastructure has emerged due to the large number of sensors per RTU, especially in wind farm generation [20,21,22]. However, a combination of staged data concentration and a back-haul communication mechanism can improve the performance of centralized approaches [22]. Besides, the distributed control approach for large AC/DC power systems is still under research, and one of the biggest challenges in this field is the global variables [5,13]. The question remains how to share these global data between different SCADAs, such as the frequency in the AC system and the voltage on the DC side. Several distributed computing solutions have been proposed, such as Epidemic Propagation [23], but they have not been tested on a real SCADA for power systems yet.
This paper aims to estimate the time required by a SCADA system to perform a complete cycle of unified state estimation for a hybrid VSC-HVDC/AC network. The process starts from the RTU sensing time, and extends through the data transmission over the communication network up to the estimation processing time. The paper assesses whether the total time consumed is within the accepted standards by simulating a case study based on the Cigre B4 network.
The structure of the paper is as follows: Section 2 goes through a detailed review of SCADA components and structure. Section 3 explores the time requirements of modern RTUs and the upgrades required for HVDC integration. Section 4 presents a simulation of a case study to illustrate the time elapsed to complete a unified WLS state estimation cycle. Section 5 concludes the paper.

2. SCADA Components and Structure

In the following subsections, each component of the SCADA is reviewed.

2.1. Remote and Master Terminal Units

2.1.1. Remote Terminal Units

RTUs are considered the slaves of the master units. RTUs are more advanced than Programmable Logic Controllers (PLCs) and are considered an extended version of them. They contain sensors, actuators, and a control unit. RTUs can transmit synchronized data from the network to the master terminal in real time, periodically or on demand [24,25].
RTUs are geographically distributed, usually with Global Positioning System (GPS) modules installed to time-label the measurements (time synchronisation). Several technologies can be employed as the communication medium, such as radio, fiber optics, and microwaves [26]; the network can be a Local Area Network (LAN) or a WAN. A generic RTU includes:
  • Controllers or PLCs: embedded systems with multiple digital and analog Input/Output (I/O) ports, memory, a communication interface, Analog-to-Digital (A/D) and Digital-to-Analog (D/A) converters, instrumentation amplifiers and signal filters;
  • Sensors: come with a driver to convert and process the signal, for example, a Current Transformer (CT) or Voltage Transformer (VT) with a 4–20 mA transducer;
  • Actuators: come with isolated drivers, used to trigger circuit breakers or relays;
  • HMI (not mandatory): a small-scale interface, screen or display.
Usually, functional block programming (International Electro Technical Commission (IEC) 61131-3) is used to build PLC and RTU programs. Windows Embedded 5.0/6.0 or Unix can be used as the operating system.
RTUs are present in the power network under different titles: Phasor Measurement Units (PMUs) and Intelligent Electronic Devices (IEDs). In industry, Phasor Data Concentrators (PDCs) (IEEE C37.244) are referred to as superior RTUs, where data are collected and aggregated from RTUs in a synchronized multi-staged structure. The standard IEC 61850 defines the automation and communication structure between substations, and within substations and RTUs [27]. The standard provides constraints for data acquisition, protection and data exchange between different types of RTUs [17]. The IED is an emerging technology that is reconfigurable and extendable, since it is based on Field-Programmable Gate Array (FPGA) technology. There are three different types of IEDs: circuit breaker (CB) only, merging unit (MU), and protection and control (P&C) [9,28]. MU-IEDs are used for data acquisition from current and voltage transformers (sensors), while CB-IEDs are used to monitor and control the circuit breaker states [28]. P&C-IEDs combine the protection and data acquisition features with extended I/O ports [11,17]. With the integration of HVDC technology, the IED became the generic name for the measuring and control devices in AC and DC systems [29].

2.1.2. Master Terminal Units

MTUs are considered the second control unit after the command center and are usually installed in substations. MTUs have characteristics similar to the main station, such as an HMI and servers, and they are connected to other MTUs through a communication network, for example, LAN/WAN. The HMI allows the operators to control and monitor the connected RTUs through predefined procedures and visual presentations of geographical and textual information and shapes [16,25,26].

2.2. Communication Infrastructure

The communication infrastructure is the backbone of the SCADA system, and it is a function of the coverage area and the selected medium and protocols. Different technologies have already been used in the SCADA field, such as radio waves, serial modems, LAN/WAN and Transmission Control Protocol (TCP)/Internet Protocol (IP) [2,3,4]. The common SCADA communication topologies are Point to Point (PTP), Point to Multipoint (PTM), and Multipoint to Multipoint (MTM) [30,31,32]. SCADA communication networks have gone through several generations, presented below in chronological order:
  • Monolithic (1st generation): this generation was missing networking and queuing systems, such as LAN/WAN protocols. SCADA systems operated independently and individually on minicomputers. This generation was known for having multiple redundancy systems [30,33].
  • Distributed (2nd generation): the SCADA architecture was a low-traffic, semi-real-time LAN network. It connected master stations and remote terminals with the command room. As a result, reliability improved, and the processing time and failure rate were reduced. However, no standards were designed for the LAN protocols, and the security of the protocol was not guaranteed [30,33].
  • Networked (3rd generation): an extension of the 2nd generation, and widely used today. It was introduced to overcome the protocol limitations. It consists of a centralized command center with multiple remote stations connected in one network, with a single application interface to access all the stations. This generation's breakthrough is the integration of the IP and TCP protocols. This improvement increased network security and extended the accessibility of the I/O devices. The system's reliability has also been enhanced, due to duplicated/redundant components and strong WAN technology. Despite these improvements, there is a rising risk to system cyber-security due to the internet protocols [4,30,33].
  • Internet of Things (IoT) (4th generation): a generation still under research; it came with the so-called Industry 4.0 revolution. Researchers claim that it will be suitable for low-voltage distribution networks and microgrids [34]. It has the advantage of dealing with big data using simpler communication equipment; it uses cloud infrastructure and Wi-Fi technology. It also proposes cheaper and simpler controllable devices to replace the complicated IEDs and measuring units. Although this generation claims a more stable and robust system, others stand against it due to privacy and security concerns. IoT is expected to add data aggregation, predictive/prescriptive analytics and a unified data structure for the different SCADA applications [33,34,35].

2.2.1. Communication Mediums

Some of the common communication mediums at the transmission level are described below, along with their advantages and disadvantages [25,30,31,32].
  • Fiber optic cables: considered the best solid medium for long-distance communication (140+ km), and in continuous enhancement since the 1970s. Commercial fiber has reached a signal attenuation of less than 0.3 dB/km, and optical detectors and data modulators have achieved higher accuracy [36].
    A simplified fiber optic cable is built from three layers: a micrometer-diameter glass core, a glass cladding, and a protective sheath (plastic jacket). The cables are manufactured to handle one of the following light propagation technologies: multi-mode graded-index, multi-mode step-index, and single-mode step-index. Single-mode fiber optic can handle distances of 60+ km with speeds up to 10 Gbps, while multi-mode covers lower distances with data rates up to 40 Gbps [37,38].
    Fiber optic cables are widely used in power system SCADA; three major modified fiber optic cables are used [36]:
    (a) Optical Power Ground Wire: a cable used between transmission lines, either underground or overhead.
    (b) All-Dielectric Self-Supporting: a completely dielectric cable designed to work alongside the conductor part of HV transmission lines/towers.
    (c) Wrapped Optical Cable: used in power transmission and distribution lines; usually, it is wrapped around the grounding wire or the phase conductor.
    Fiber optic technology is considered a secure and straightforward medium, but with high installation and maintenance costs. Underground or offshore installations require complicated permissions compared to other communication mediums. The two most common fiber optic standard technologies in power systems are Synchronous Optical Networking (SONET) and Synchronous Digital Hierarchy (SDH) [36,39].
  • Power Line Carrier Communication (PLCC) (analog/digital): a power-dependent communication technology that uses the power transmission lines as the communication medium. The analog version is still used today; it can handle two communication channels, and transmit voice, data, and SCADA commands. The medium is reliable and secure, and uses frequency signals between 30 kHz and 500 kHz, with a baud rate of 9600. PLCC has been installed and tested on 220/230, 110/115 and 66 kV power lines [40,41,42]. This technology can cover distances of 200 m with data rates >1 Mbps, or distances >3 km with data rates between 10 and 500 kbps [37,43]. Digital PLCC is still under research. It provides higher reliability and at least 4 channels with a higher data rate/speed. It is expected to be less affected by electrical noise, and more secure than analog PLCC [44].
  • Satellites: considered the preferred communication technology for geographically separated receiver and transmitter terminals; widely used in remote access systems [30,32]. The signal is sent from one earth-stationary terminal through a special antenna (dish) pointing towards the satellite. The satellite receives, amplifies and retransmits the signal towards another earth-stationary terminal. The antenna has a special low-noise amplifier and works on C-band and Ku-band frequencies, with uplinks around 6 GHz (C-band) and 14 GHz (Ku-band), and downlinks around 4 GHz (C-band) and 12 GHz (Ku-band) [30].
    The state of the art in satellite communications is the Very Small Aperture Terminal (VSAT) technology. It provides a smaller and cheaper antenna under the Ku-band, but the communication can be affected by natural phenomena, for example, equinox sun outages [25,30].
  • Microwave radio: part of the ultra-high-frequency technologies that operate on frequencies higher than 1 GHz, or as a multichannel medium at lower frequencies. Microwave technology was invented as an analog communication medium with high data rates and secure multichannel capabilities [25,30]. Despite improvements to analog microwave, such as complexity reduction, digital microwave leads with cost reduction and high communication flexibility. It also provides a higher data rate, new communication protocols, and standards. WiMAX technology is one of the ultra-high-frequency microwave technologies, and it can cover distances up to 50 km with speeds up to 75 Mbps [30,37].
    Microwave technology has two main network topologies: PTP and PTM. PTP is a directional long-distance communication link, while PTM is an omnidirectional communication that can be structured as star or tree networks [25,30]. PTM is more suitable for on-demand channels. Both topologies are power-independent, but the line of sight between the communication nodes must always be guaranteed.
  • Omnidirectional wireless/cellular: part of the radio-based technologies that operate in specific frequency bandwidths. Table 1 shows the most common wireless/cellular communication technologies, along with their specifications and data speeds [39,45,46].
    The advantage of ZigBee over traditional Wi-Fi is the ability to connect a higher number of cell nodes (approx. 60k instead of 2k). Cellular networks can provide high-speed, low-latency, and stable communication networks for power systems. However, the security side remains questionable [39,45]. Further visions for the 6th generation (6G) are available in [47].

2.2.2. SCADA Communication Protocols

Communication protocols are a list of rules, structures, and limitations that control the transfer speed, security, and reliability of the information. The communication in a SCADA system starts in two steps. First, the MTUs initialize the connection, identify all the connected RTUs and perform a ping test. Second, all remote access stations initialize the connection with the MTUs and allocate themselves.
SCADA systems use different types of communication protocols, which are maintained by several standards such as Modbus, IEC 60870-5-101/104, IEC 61850 and the Distributed Network Protocol (DNP3) [6,26]. Some of these protocols can deal with TCP/IP, which provides higher security and on-demand requests. However, it is always recommended that a SCADA system should not be connected to the internet, to avoid DoS or cyber attacks [48]. The most common communication protocols for SCADA are listed and described below:
  • Modbus: one of the most used protocols in SCADA systems; it provides real-time communication using the Open Systems Interconnection (OSI) seven-layer model [25,49]. Modbus has four operating modes between the main station and the remote station. It converts the transmitted requests into a protocol data unit (PDU: function code and data request). The data link layer of the OSI converts the PDU into an application data unit (ADU), which the receiver side can understand (a sketch of this framing is given after this list). Modbus is considered a serial communication tool and usually uses RS-232 and RS-485 modems [31]. It can also be extended to deal with TCP/IP protocols through a new layer that generates the PDU "encapsulation" [49].
  • DNP3: a protocol developed from early IEC work, based on a simplified version of the OSI model called the Enhanced Performance Architecture (EPA), as shown in Figure 3. DNP3 is widely used in SCADA systems to establish a connection between multiple MTUs and RTUs. DNP3 can handle low-bandwidth serial and IP communication modems, or TCP/IP over an internet connection (WAN) [7,50].
  • IEC 60870-5: a protocol based on the EPA model, developed by the IEC. An additional layer, called the User Application layer, is added; it is a front-end layer that can be used to set up the functions of the telecontrol system to interact with the SCADA hardware devices [30]. There are several versions of the protocol, 60870-5-101 to -104, each with different data objects, function codes, and specifications. The most generic one is -101, which has basic definitions of the data objects, the geographical areas, and the WAN technology. The -104 version has two additional layers from the OSI model to deal with TCP/IP protocols (Figure 4) [25,30]. This protocol is used in power transmission and distribution SCADAs, with a data transfer speed of 64 kbit/s, depending on the interface of the IEC 60870-5 [30].
  • IEC 60870-6: known as the Inter-Control Center Communications Protocol (ICCP), globally used for SCADA telecontrol. The TASE.2 version of the standard uses layers 5–7 of the OSI model. It is used to connect SCADA command centres together and to exchange measurements, time-tagged data and events [51].
  • IEC 61850: considered the most widely adopted protocol in substation automation [27], especially for connecting multiple RTUs/IEDs. The protocol is based on the OSI model, and it overcomes DNP3 by providing higher-bandwidth communication and real-time protection and control [27]. The communication architecture of IEC 61850 contains 3 hierarchical levels [7]:
    (a) Station Level: contains the main station HMI and computers;
    (b) Bay Level: represented by the different RTUs, such as IEDs and PMUs. This level is connected to the upper level through the Station Bus;
    (c) Process Level: the RTU's terminals, such as sensors (CT/VT) and actuators (circuit breakers). This level uses the Process Bus to communicate.
    IEC 61850 provides five Ethernet/IP communication services, as shown in Figure 5, which makes it superior to IEC 60870-5; these services are (1) Abstract Communication Service Interface (ACSI), (2) Generic Object Oriented Substation Event (GOOSE), (3) Generic Substation Status Event (GSSE), (4) Sampled Measured Value multicast (SMV), and (5) Time Synchronization (TS). GOOSE and GSSE are the two main services for data and event exchange. GOOSE provides the master/slave multicast messaging and allows the transfer of binary, integer, and analog values from the slave nodes, whereas GSSE is restricted to binary status data. The TS service ensures that all IEDs work with a synchronized timestamp; this can be implemented through GPS or the IEEE 1588 Precision Time Protocol [27,52]. ACSI is responsible for the system reports and status logging, while the SMV service transfers sampled analog data and binary status to the IED's Bay level via the Process Bus [27].
  • Profibus, or the "Process Field Bus": an OSI-model protocol used in the industrial monitoring environment. It has a bus control unit that establishes the connection between the different hardware equipment. Usually, it has a D-type connector, 127 data points, speeds up to 12 Mbps, and a message size up to 244 bytes per node [31]. There are three standard versions of Profibus: Fieldbus Message Specification, Distributed Peripheral, and Process Automation. The Distributed Peripheral version's data speed can vary between 93.75 kbps and 12 Mbps, for distances of 1.2 km and 100 m, respectively [31].
  • Highway Addressable Remote Transducer (HART): a protocol developed by Rosemount for smart communication with sensors and actuators. It provides smart digital techniques to collect data and send commands. HART is a hybrid protocol, since the physical layer can deal with the 4–20 mA scheme and frequency-shift keying, for analog and digital communication, respectively. Therefore, it is commonly used between PLCs and RTUs [31,53]. The protocol has provided higher speed and efficiency, and has simplified the maintenance and diagnostics processes. The HART protocol can be operated in two topologies: PTP and PTM [31].
  • Modbus+ (Plus): proposed to overcome the limitations of the Modbus protocol. Modbus+ was developed to establish a LAN connection between master stations. It also allows the "token" approach, with unique addresses for up to 64 stations in the network. An additional special cable is used to transmit data, but this comes at a price: Modbus+ is not suitable for real-time communication. Modbus+ transceivers are polarity-independent, and usually twisted-pair cables are used, with an additional shield wire [31].
  • Data Highway (DH)-485 and DH+: standard Allen-Bradley protocols. Their general specifications are Peer to Peer (P2P), half-duplex LAN, up to 64 nodes, and 57.6 kbaud. DH+ supports the "token" approach (floating master), and it is based on 3 layers of the OSI model: the application, data link, and physical layers. DH-485 is an RS-485 serial master/slave protocol [31].
  • Foundation Fieldbus: a protocol based on the OSI model with an extra user application layer, which provides a convenient interface for users. The Fieldbus protocol has lower wiring costs, higher data integrity, and compatibility with several industrial vendors. In addition, it uses the HART protocol, making it compatible with analog and digital signals. The protocol can handle up to 32 self-powered or 12 bus-powered devices through the communication link, with a 31.25 kb/s transfer speed [31].
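As an illustration of the PDU/ADU framing described in the Modbus item above, the following Julia sketch builds a serial-line (RTU-mode) read request. The slave address, register address, and register count are hypothetical values chosen for the example; the CRC routine is the standard Modbus CRC-16.

```julia
# Standard Modbus CRC-16 (polynomial 0xA001, initial value 0xFFFF).
function modbus_crc16(frame::Vector{UInt8})
    crc = 0xFFFF
    for b in frame
        crc ⊻= UInt16(b)
        for _ in 1:8
            crc = isodd(crc) ? (crc >> 1) ⊻ 0xA001 : crc >> 1
        end
    end
    return crc
end

# PDU = function code + data request: read 2 holding registers from address 0x0000.
pdu = UInt8[0x03, 0x00, 0x00, 0x00, 0x02]
# ADU = slave address + PDU + CRC (low byte first), ready for an RS-485 link.
adu = vcat(UInt8[0x01], pdu)
crc = modbus_crc16(adu)
push!(adu, UInt8(crc & 0xFF), UInt8(crc >> 8))
println(join(string.(adu, base = 16, pad = 2), " "))
```

In Modbus TCP, the same PDU would instead be "encapsulated" behind an MBAP header, with no CRC, since TCP provides its own integrity checks.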

2.3. Operator Room

The operator room of the SCADA system has a user interface unit known as the HMI, which provides a visual presentation of the hardware and software components of the SCADA. The system operator can control and monitor the whole system from this room through telecommunication and computerized services. The HMI is connected to a powerful computing architecture that can handle complicated operations and big databases, and can perform tests and simulations before executing commands [1,3,4]. The HMI can be either vendor-restricted or compatible with other vendors' hardware and software [26].

3. RTUs and Communication Networks Requirements for HVDC/AC

The modern SCADA system for the AC/DC network is presented in Figure 6.

3.1. The HVDC/AC RTUs

The demand for advanced digital RTUs for HVDC opens the door for researchers to investigate the characteristics of HVDC-side RTUs. The challenge, in this context, is to ensure that new RTUs (e.g., IEDs) comply with both AC and DC time requirements. In most HVDC systems, the internal control procedures do not require communication with a higher-level control system. However, hierarchical control of the converter's voltages, currents and powers is needed [54]. Besides, the control of multi-type-converter DC grids could require different RTU specifications [14].
On the AC side, there are several RTUs available; most of them have already been standardized. The synchrophasor is one of the major ones; it provides real-time measurements such as phase and frequency. A synchrophasor block can contain PMUs, multi-stage PDCs, a communication medium and a synchronized clock (by GPS) [15,27].
PDCs are used to collect time-synchronized data from multiple PMUs. Centralized PDCs are commonly used in multi-stage data concentration (hierarchical networks) [15]. The PMU is one of the main RTUs in AC transmission networks; PMUs aim to improve the availability of the system states and maintain system stability [55]. They act 100 times faster than the SCADA by analysing the measured data in real time and activating built-in/pre-programmed protection functions. The decision-making process (including the reading of data from sensors) inside the PMUs takes between 10 and 100 ms, while PDCs can take between 100 ms and 1 s [56].
On the DC side, researchers from industry and academia are modifying the AC-side RTUs to meet the DC-side requirements. Intensive research has been carried out on the AC/DC RTU requirements. Table 2 shows a comparison between the characteristics and the collected measurements of the AC and DC sides.
At the sensor level, the HVDC RTUs require a fast sampling rate (bandwidth) for protection reasons, which some of the traditional AC-side sensors cannot meet. Table 3 shows most of the known conventional and non-conventional sensors [28,57,59,64]. The non-conventional sensors are digital instruments that support fast response and high data bandwidth (from tens of kHz to a few MHz) [59]. Novel HVDC contactless magnetic-field CT and VT sensors are proposed in [60,61].
Researchers in [10] proposed an HVDC measuring unit based on PMUs and the IEC 61850 protocol, named SynDC. They modified the AC measurement message standard (IEEE C37.118) to add new parameters to differentiate between AC and DC measurements. The paper also recommends that the time-synchronization error between the HVDC measurement units should not exceed 20 microseconds, and that the system status update rate with the local command center should be within 1 s. A recent publication proposes a low-cost open-source IED [29] with the following specifications:
  • Friendly user interface, flexible and expandable to meet future requirements.
  • State-of-the-art HVDC protection with a swift reaction. The paper refers to speed requirements for the DC-side circuit breakers to act within 2 ms, with a sampling rate >50 kHz.
  • I/O ports: 16 analog and 26 digital inputs, and 28 digital outputs.

3.2. The HVDC/AC Communication Network

3.2.1. Inter-Control Center Communications Protocol ICCP

Modern power networks are controlled by different electric utilities (SCADAs), as shown in Figure 7. Therefore, an ICCP data exchange agreement (IEC 60870-6/TASE.2) needs to be defined. It will enable data exchange between the different AC and DC network operators. However, the lack of a standardized version of the protocol has limited and restricted the communication between the network operators [54]. ICCP can establish a unified SCADA interface for a hybrid network with several unified toolboxes, for example, a hybrid AC/DC state estimator. In addition, it will increase the power availability and reliability during abnormal conditions and blackouts [65,66].
Despite that, the ICCP is still undefined (not standardized) for communication between AC and DC grids. Transpower has proposed its own agreement for AC and DC RTU data exchange; its main ICCP characteristics are [68]:
  • DNP3 protocol is used, with the maximum data rate (speed);
  • The communication network is serial optical fiber with an RS-232 master/slave architecture;
  • The data exchange scheme is defined as shown in Table 4.

3.2.2. Time Requirements

In HVDC systems, the communication network timescale requirements are still not fully defined in a standard. However, recent publications have estimated these time ranges based on similar standards or experiments. For instance, as shown in Table 5, the IEC 61850 standard can be adapted to control the timing constraints of HVDC communications [7].
However, the delivery timing from the HVDC RTUs to the SCADA side is not fully clarified by IEC 61850. Therefore, several publications have recommended using the IEEE 1646 standard instead. In Table 6, the AC substation values are obtained from [18], with some amendments made by researchers, such as in [28], while the DC substation values are gathered from several recent publications, as shown in the table. Some data types have the same delivery time in AC and DC systems.

4. Case Study: HVDC/AC State Estimation Time Requirements

The case study aims to find the total elapsed time of a state estimation cycle (from the sensors to the SCADA) for an HVDC/AC network. The Cigre B4 network, the RTUs, the data concentrators, and the SCADA are shown in Figure 8. In total, 43 virtual RTUs are installed at different locations, as shown in Figure 8b. These RTUs collect data from power lines, converters, and generators, and transmit them to 10 data concentrators (Figure 8c). The centralized SCADA is connected to 9 data concentrators (Figure 8d). The communication medium of the entire network is fiber optic. The objective of this work is to estimate the following components:
  • $t_{RTU}$: the measuring time from the sensor to the buffer of the RTU;
  • $t_{DataCon}$ and $t_{Comm}$: the time elapsed in data acquisition, from the RTUs to the SCADA;
  • $t_{SE}$: the state estimation processing time of the unified WLS from the moment of receiving the data.
The total cycle execution time can be estimated as follows:
$$t_{total} = t_{RTU} + t_{DataCon} + t_{Comm} + t_{SE}. \quad (1)$$
The latency of the P2P communication is found by aggregating the delays of data processing, queuing, transmission, and propagation [72]. In this work, the network is structured in a P2P architecture, and these delays are calculated/estimated as follows:
  • Transmission delay ($T_{P2P}$): calculated theoretically using Equation (2). In addition, simulations are carried out using Optiwave OptiSystems and OptiSPICE for the fiber optic communication [73].
    $$T_{P2P} = \frac{\text{Data Packets (in bits)}}{\text{Link Speed (in Mbit/s)}}. \quad (2)$$
  • Propagation delay ($P_{P2P}$): theoretically, it can be calculated using Equation (3); for example, a 1 km fiber optic link has a 3.33 μs $P_{P2P}$ delay. Further simulations are carried out for comparison.
    $$P_{P2P} = \frac{\text{Distance (in km)}}{\text{Medium Speed (in km/s)}}. \quad (3)$$
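As a minimal numerical illustration of Equations (2) and (3), the short Julia sketch below evaluates both delays. The 210 kbit packet and 1 Gbps link are illustrative values (they reappear in the simulations of Section 4.2), and the medium speed here is the vacuum speed of light used in the 1 km example above.

```julia
# Transmission delay (Equation 2): packet size over link speed, result in μs.
transmission_delay_us(bits, link_Mbps) = bits / link_Mbps

# Propagation delay (Equation 3): distance over medium speed, result in μs.
propagation_delay_us(km, medium_km_s) = km / medium_km_s * 1e6

println(propagation_delay_us(1.0, 3.0e5))        # 1 km link: ≈ 3.33 μs
println(transmission_delay_us(210_000, 1000.0))  # 210 kbit at 1 Gbps: 210 μs
```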

4.1. The 1st Layer: From Sensors to RTUs

In this layer, the communication network is between the sensors and the dedicated RTU. The RTUs (IEDs/PMUs) are placed, on average, 100 m away from the sensors. Each RTU has multiple sensors, based on the number and type of measurements [21]. This work focuses only on 3 main measurements: voltage, current, and power.
The total data packets from $S$ sensors can be expressed by Equation (4), where $N_i^{ch}$ is the number of channels, $SS_b^i$ is the sample size, and $Fs_i$ is the sampling rate.
$$\text{TotalDataPackets} = \sum_{i=1}^{S} \text{DataSensor}_i = \sum_{i=1}^{S} N_i^{ch} \times SS_b^i \times Fs_i. \quad (4)$$
Modern sensors release data in float formats (single/double precision: 32/64 bits). The single-float structure (Figure 9) is commonly used to interface power measurements.
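To make Equation (4) concrete, the Julia sketch below sums the per-sensor data rates for a hypothetical RTU; the three-channel, 32-bit, 10 kHz sensor parameters are assumptions for illustration, not values taken from the paper.

```julia
# One measurement stream (a summand of Equation 4).
struct Sensor
    channels::Int      # N_ch: number of channels
    sample_bits::Int   # SS_b: sample size in bits (32 for single float)
    fs_hz::Float64     # Fs: sampling rate in Hz
end

# Per-sensor and total data rates in bit/s.
data_rate_bps(s::Sensor) = s.channels * s.sample_bits * s.fs_hz
total_data_bps(sensors)  = sum(data_rate_bps, sensors)

# Hypothetical RTU with voltage, current and power sensors (3-phase, 32-bit, 10 kHz).
sensors = [Sensor(3, 32, 10_000.0) for _ in 1:3]
println(total_data_bps(sensors) / 1e6, " Mbit/s")  # 2.88 Mbit/s
```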
A simulation test case is implemented in OptiSPICE. An electrical-optical sensor is implemented that converts the sensor's 32 bits, generated at 100 Mbit/s, into laser beams. The electro-laser converter transmits the beams at 193.4 THz, with a wavelength of 1550 nm. The fiber optic cable is 100 m long, with a 0.2 dB/km attenuation rate. The receiver stands at the RTU side and converts the laser beams back into an electrical signal. These signals are amplified to the proper value and read by the RTU input port. Figure 10 shows the OptiSPICE schematic of the transmitter, fiber cable, and receiver.
Theoretically, the speed of light in vacuum is 299.792 m/μs. However, this speed is reduced in the fiber material by a refractive index related to the wavelength and the optical fiber brand (A and B, G.652 and G.655). It varies between 1.4677 and 1.47 for wavelengths from 1310 to 1625 nm. Commonly, the 1550 nm wavelength is used, due to its minimum attenuation ratio in the fiber material, with a refractive index of 1.468. Therefore, the theoretical beam speed is close to 204.22 m/μs. Table 7 shows the results of the theoretical estimation of the sensor time delays and the simulation estimations.
The simulation results are shown in Figure 11: in (a), the generated 32 bits are shown vs. the received data after amplification; in (b), the propagation delay from the transmitter to the receiver is presented. The data arrived at the receiver side after 818.611 ns.
The sensing time delay of $N$ sensors ($T_s$) can be estimated as follows [21]:
$$t_{RTU} = T_s + N \times T_g, \quad (5)$$
where $T_g$ is a constant guard time delay introduced by the optical fiber transponders; it is used to save the buffer data and establish a new channel. In ultra-low-latency transponders, this delay can vary between 2 ns and 30 ns [74]. In this work, 20 ns is used; the sensing time delay $t_{RTU}$ for 4 sensors is then $818.611 + 4 \times 20 = 898.611$ ns ≈ 0.9 μs.
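The layer-1 numbers above can be cross-checked in a few lines of Julia. The sketch reproduces the theoretical beam speed, compares the resulting theoretical link delay with the simulated 818.611 ns (the small difference presumably comes from the simulated transceiver electronics), and then evaluates Equation (5) for the four-sensor case.

```julia
c_vacuum = 299.792                 # speed of light in vacuum, m/μs
n_fiber  = 1.468                   # refractive index at 1550 nm
v_fiber  = c_vacuum / n_fiber      # ≈ 204.22 m/μs, as quoted above

# Theoretical delay of the 100 m sensor link: 32 bits sent at 100 Mbit/s
# plus propagation over 100 m of fiber.
t_transmit  = 32 / 100e6 * 1e9     # 320 ns
t_propagate = 100 / v_fiber * 1e3  # ≈ 489.7 ns
println(t_transmit + t_propagate)  # ≈ 809.7 ns vs. 818.611 ns simulated

# Equation (5): sensing delay for N sensors with a 20 ns guard time per transponder.
t_rtu_ns(Ts, N; Tg = 20.0) = Ts + N * Tg
println(t_rtu_ns(818.611, 4))      # 898.611 ns ≈ 0.9 μs
```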

4.2. The 2nd Layer: RTU to Data Concentrator

In this layer, the data from the sensors to the RTU are aggregated and transferred to the concentrators through fiber cables (50 μm core radius and 10 μm cladding thickness).
Table 8 shows the RTU (IED) data transfer rates for an AC substation. On the HVDC side, a very high sampling rate is used, resulting in higher-data-rate messages. The DC rate can be estimated by changing the AC-side sampling rate to 100 kHz and the number of channels to 2 (± lines). Hence, the DC-side RTU data rate for measurements is between 5312.5 and 12,890.6 kbps; the sketch below back-calculates the per-sample sizes this range implies.
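Since Table 8 itself is not reproduced here, the following Julia sketch simply inverts the rate relation (channels × sampling rate × bits per sample) to show what per-sample message sizes the quoted DC range implies; these sizes are back-calculated, not values stated in the paper.

```julia
# DC-side RTU measurement data rate in kbps.
dc_rate_kbps(channels, fs_hz, bits_per_sample) =
    channels * fs_hz * bits_per_sample / 1e3

# Implied per-sample sizes for the quoted 5312.5–12890.6 kbps range,
# with 2 channels (± pole lines) sampled at 100 kHz.
for rate_kbps in (5312.5, 12890.6)
    bits = rate_kbps * 1e3 / (2 * 100e3)
    println(rate_kbps, " kbps  =>  ≈ ", round(bits, digits = 1), " bits per sample")
end
# ≈ 26.6 and ≈ 64.5 bits per sample, i.e. on the order of single- to
# double-precision samples plus framing overhead.
```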
From the previous analysis and [7,11], the generated data for the different RTUs can be estimated as shown in Table 9. The distances (medium lengths) between the RTUs and the data concentrators are shown in Table A1 in Appendix A.1. Figure 12 shows the simulated transmission and propagation delays in OptiSystems for 210 kbits of data transmitted over a 100 km fiber optic cable at 1 Gbps. Similar simulations are carried out for other data sizes and distances, as shown in Table 9 and Table 10. The fiber cable uses a 1550 nm wavelength.
Figure 13 shows a schematic of a 100 km fiber optic P2P communication link in OptiSystems. It shows the transmitter (IED), the receiver (data concentrator), and multiple optical blocks; each block represents an optical amplifier and 25 km of fiber cable.
The IEC 61850 protocol delays are estimated from the studies in Table 11. The maximum value is used in this work (37.4 μ s per RTU). Table 12 shows the communication delays for a P2P architecture from the RTUs to the data concentrator.
The total delay of this layer can be approximated as the maximum, over the concentrators, of the sum of the propagation, transmission, and protocol delays. The maximum delay occurs at D10, due to the two-stage data concentrators. As a result, the estimated $t_{DataCon}$ delay is $4.45602 + 0.11161 + 0.2992 = 4.86683$ ms.
However, these delays can be reduced by muxponders. The overall delay is then the maximum of all link delays plus the muxponder delay per connected peer (RTU). Using OptiSystems, the muxponder requires 3.371875 μs per fiber cable. In order to find the delays in this case, the D9 and D10 delays have to be calculated separately; D10 has the maximum delay per peer, which comes from the D9-to-D10 link.
  • D9: the maximum transmission and propagation delays in D9 are 0.759 ms and 0.01320355 ms, respectively. D9 has 6 links, leading to overall muxponder and protocol delays of 0.02023 ms and 0.2244 ms. Therefore, the D9 delay is 1.01683 ms.
  • D10: the maximum transmission and propagation delays in D10 are 0.7345087 ms and 0.053371 ms, respectively. D10 has 8 links, leading to overall muxponder and protocol delays of 0.026975 ms and 0.2992 ms. The D10 delay is then 1.1140547 ms.
The final $t_{DataCon}$ delay is the summation of the D9 and D10 delays: 2.130885 ms.
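The arithmetic of this subsection can be verified with the script below, using only the figures quoted above: the worst-case transmission and propagation delays per concentrator, the 37.4 μs protocol delay, and the 3.371875 μs muxponder delay per link.

```julia
protocol_ms  = 0.0374        # IEC 61850 protocol delay per RTU/link (37.4 μs)
muxponder_ms = 0.003371875   # muxponder delay per link (3.371875 μs)

# P2P architecture: worst path (D10) = max transmission + max propagation
# + protocol delays over its 8 links.
t_datacon_p2p = 4.45602 + 0.11161 + 8 * protocol_ms
println(t_datacon_p2p)       # 4.86683 ms

# Muxponder architecture: per-concentrator worst link plus per-link overheads.
stage_delay(tx_ms, prop_ms, links) =
    tx_ms + prop_ms + links * (muxponder_ms + protocol_ms)

d9  = stage_delay(0.759,     0.01320355, 6)  # ≈ 1.01683 ms
d10 = stage_delay(0.7345087, 0.053371,   8)  # ≈ 1.11405 ms
println(d9 + d10)            # ≈ 2.13089 ms, the final t_DataCon
```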

4.3. The 3rd Layer: DC to SCADA

Similar to the previous analysis, layer 3 has the same characteristics. The main changes are in the transmission delays, due to the increase in the data transfer blocks. The propagation delay is a function of the fiber cable lengths in Table A2; however, the Table 10 results are still considered valid. The protocol delays are estimated to be 0.3366 ms.
Based on Table 13 and Table 14, in a P2P system the total delay is the sum of the propagation, protocol, and transmission delays. As a result, the estimated delay is approximately $t_{Comm} = 12.83535$ ms. However, using a server-based network (muxponders), as shown in Figure 14, the overall $t_{Comm}$ delay is reduced to 10.11 ms.

4.4. The 4th Layer: State Estimation Processing Time

The processing time of a unified state estimation for AC and DC systems is affected by the estimation algorithm and by the measurement count, types, and errors.
Let $t_{SE}^{AC}$ and $t_{SE}^{DC}$ represent the state estimation processing times of the AC and DC networks. A centralized decoupled AC/DC state estimator then takes a time equal to $\max(t_{SE}^{AC}, t_{SE}^{DC})$. However, in a coupled centralized approach, further measurements are added, such as the voltage and power coupling constraints (available in [67]). This can be presented as shown in the equation below:
$$t_{SE} = t_{unified} \geq t_{P\text{-coupling}} > t_{V\text{-coupling}} > \max(t_{SE}^{AC}, t_{SE}^{DC}). \quad (6)$$
In this work, only the unified state estimation is of interest. Therefore, the simulations are carried out for $t_{SE} = t_{unified}$ and for two sets of measurements: power injections only, and the complete set. Table 15 shows the time elapsed in the WLS state estimation of the Cigre B4 network. The simulations were run on an Intel Core i7-8750H at 2.20 GHz, with a GTX 1070 8 GB and 16 GB of RAM. The processing (estimation) time of the unified WLS is estimated based on the algorithm of this work [67], implemented in the Julia optimization programming language [77]. The time performance is calculated over 100 simulations on the Cigre B4 test case and is shown in Figure 15 and Table 15.
In the WLS algorithm, bad data detection requires one complete estimation process per bad data point. Thus, for $N$ bad data measurements, the minimum estimation time is:
$$t_{SE\text{-BadData}} \geq t_{SE} \times (1 + N). \quad (7)$$
For example, a single bad data measurement can lead to a processing delay of more than 52.65 ms. Other estimation algorithms with internal bad data rejection, such as the Least Absolute Value, can be slower than WLS; however, their processing time per bad data point is not proportional [78].
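A one-line check of Equation (7) and the 52.65 ms figure, using the unified WLS processing time that Section 4.5 uses in Equation (1) ($t_{SE} = 26.3246$ ms):

```julia
t_se_ms = 26.3246                 # unified WLS estimation time (complete set)

# Equation (7): each detected bad data point triggers one further complete
# WLS run, so N bad data points require at least (1 + N) estimations.
t_se_baddata_ms(N) = t_se_ms * (1 + N)
println(t_se_baddata_ms(1))       # 52.6492 ms, matching the ≈52.65 ms above
```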

4.5. Outcomes and Results

The case study presented the expected elapsed time of a complete state estimation cycle based on a fiber optic communication network. The total delay of a state estimation cycle, from the sensors up to the SCADA side, is estimated by Equation (1). In a P2P architecture, the total delay is:
$$t_{total} = 0.0009 + 4.8669 + 12.8354 + 26.3246 = 44.0278 \text{ ms}.$$
If muxponders are used, the total delay can be reduced to:
$$t_{total} = 0.0009 + 2.1309 + 10.1100 + 26.3246 = 38.5664 \text{ ms}.$$
The noticeable outcomes can be summarized in the following points:
  • The total state estimation cycle time is less than 100 ms, which is below the time requirements for hybrid HVDC/AC systems.
  • Changing the communication medium to Wi-Fi or microwave has an impact on the propagation delay.
  • The static (snapshot) state estimation can be carried out at a higher frequency (in the seconds range) instead of the traditional 5–15 min. Furthermore, the dynamic state estimation can be implemented at the local level, since the accumulated delays are within a few milliseconds.

5. Main Conclusions

This paper reviewed the modern SCADA system and highlighted the new challenges of integrating HVDC networks. The SCADA components and structure have been listed and discussed. In addition, a comprehensive review of the communication mediums and protocols was presented. The state of the art in RTU modifications for HVDC integration has been described, along with the corresponding time constraints. Furthermore, this work established a case study simulation of the time requirements of a unified hybrid AC/DC state estimation using WLS. It estimates the measurement sensing time, the communication data acquisition time, and the state estimator processing time. A Cigre B4 case was studied, with 43 RTUs and 10 data concentrators. The communication links are fiber optic based and were simulated using the OptiSPICE and OptiSystems software. The unified WLS state estimation was implemented in Julia and tested on two different measurement sets. The simulations concluded that the estimated delay to complete a unified WLS state estimation, from the sensors to the SCADA side, is around 44.028 ms in a P2P system and 38.566 ms in a server-based system.

Author Contributions

Formal analysis, M.A., and E.M.; Methodology, M.A.; Software, M.A.; Supervision, H.L., and H.M.; Validation, M.A.; Writing—original draft, M.A.; Writing—review & editing, H.L. and H.M. All authors have read and agreed to the published version of the manuscript.

Funding

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement InnoDC No 765585.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

A/D     Analog-to-Digital
AC      Alternating Current
CT      Current Transformer
D/A     Digital-to-Analog
DNP3    Distributed Network Protocol
EPA     Enhanced Performance Architecture
FPGA    Field-Programmable Gate Array
GPS     Global Positioning System
HART    Highway Addressable Remote Transducer
HMI     Human Machine Interface
HVDC    High Voltage Direct Current
I/O     Input/Output
ICCP    Inter-Control Center Communications Protocol
IEC     International Electro Technical Commission
IED     Intelligent Electronic Device
IoT     Internet of Things
IP      Internet Protocol
LAN     Local Area Network
MTM     Multipoint to Multipoint
MTU     Master Terminal Unit
OSI     Open Systems Interconnection
P2P     Peer to Peer
PDC     Phasor Data Concentrator
PLC     Programmable Logic Controller
PLCC    Power Line Carrier Communication
PMU     Phasor Measurement Unit
PTM     Point to Multipoint
PTP     Point to Point
RTU     Remote Terminal Unit
SCADA   Supervisory, Control and Data Acquisition
TCP     Transmission Control Protocol
VSC     Voltage Source Converter
VT      Voltage Transformer
WAN     Wide Area Network
WLS     Weighted Least Squares

Appendix A

Appendix A.1. Communication Medium Lengths for Cigre B4 Network

The tables below contain the communication medium (fiber cable) lengths between the #Y data concentrator and the #X RTU (where G is generator, L is line and C is converter), and from the data concentrators to the SCADA.
Table A1. RTUs ⇄ Data concentrators (distances in km).
Data conc. #1: RTU #1G (100), #2G (100), #3C (100), #4C (100), #5L (100)
Data conc. #2: RTU #1G (100), #2G (100), #3C (100), #4C (100), #5L (100), #6L (160)
Data conc. #3: RTU #1G (0.5–1), #2C (0.5–1)
Data conc. #4: RTU #1G (0.5–1), #2C (0.5–1)
Data conc. #5: RTU #1G (0.5–1), #2C (0.5–1)
Data conc. #6: RTU #1L (100), #2L (100), #3L (100), #4L (315), #5L (320)
Data conc. #7: RTU #1L (100), #2L (100), #3L (100), #4L (100)
Data conc. #8: RTU #1L (150), #2L (100), #3L (100), #4L (150)
Data conc. #9: RTU #1L (100), #2L (100), #3G (155), #4G (155), #5C (155), #6C (155)
Data conc. #10: RTU #1G (150), #2G (50), #3L (50), #4L (100), #5L (100), #6C (155), #7C (155), #8 DC (150)
Table A2. SCADA ⇄ data concentrators.
Data concentrator | Distance to SCADA (km)
#1  | 250
#2  | 300
#3  | 220
#4  | 200
#5  | 400
#6  | 200
#7  | 300
#8  | 350
#10 | 250

References

  1. Kim, T.H. Securing Communication of SCADA Components in Smart Grid Environment. Int. J. Syst. Appl. Eng. Dev. 2011, 5, 135–142. [Google Scholar]
  2. Wood, A.J.; Wollenberg, B.F.; Sheblé, G.B. Power Generation, Operation, and Control; Wiley: Hoboken, NJ, USA, 2012; p. 632. [Google Scholar]
  3. Roy, R.B. Controlling of Electrical Power System Network by using SCADA. Int. J. Sci. Eng. Res. 2012, 3, 1–6. [Google Scholar]
  4. Miceli, R. Energy Management and Smart Grids. Energies 2013, 6, 2262–2290. [Google Scholar] [CrossRef] [Green Version]
  5. Pan, X.; Zhang, L.; Xiao, J.; Choo, F.H.; Rathore, A.K.; Wang, P. Design and implementation of a communication network and operating system for an adaptive integrated hybrid AC/DC microgrid module. CSEE J. Power Energy Syst. 2018, 4, 19–28. [Google Scholar] [CrossRef]
  6. Northcote-Green, J.; Wilson, R. Control and Automation of Electrical Power Distribution Systems; CRC Press: Boca Raton, FL, USA, 2017; pp. 1–464. [Google Scholar] [CrossRef]
  7. Khan, R.H.; Khan, J.Y. A comprehensive review of the application characteristics and traffic requirements of a smart grid communications network. Comput. Netw. 2013, 57, 825–845. [Google Scholar] [CrossRef]
  8. Laursen, O.; Björklund, H.; Stein, G. Modern Man-Machine Interface for HVDC Systems; Technical Report; ABB Power Systems: Ludvika, Sweden, 2002. [Google Scholar]
  9. Castello, P.; Ferrari, P.; Flammini, A.; Muscas, C.; Rinaldi, S. A New IED With PMU Functionalities for Electrical Substations. IEEE Trans. Instrum. Meas. 2013, 62, 3209–3217. [Google Scholar] [CrossRef]
  10. Wenge, C.; Pelzer, A.; Naumann, A.; Komarnicki, P.; Rabe, S.; Richter, M. Wide area synchronized HVDC measurement using IEC 61850 communication. In Proceedings of the 2014 IEEE PES General Meeting | Conference Exposition, National Harbor, MD, USA, 27–31 July 2014; pp. 1–5. [Google Scholar]
  11. Ahmed, M.A.; Kim, C.H. Communication Architecture for Grid Integration of Cyber Physical Wind Energy Systems. Appl. Sci. 2017, 7, 1034. [Google Scholar] [CrossRef] [Green Version]
  12. Tielens, P.; Hertem, D.V. The relevance of inertia in power systems. Renew. Sustain. Energy Rev. 2016, 55, 999–1009. [Google Scholar] [CrossRef]
  13. Babazadeh, D.; Hertem, D.V.; Rabbat, M.; Nordstrom, L. Coordination of Power Injection in HVDC Grids with Multi-TSOs and Large Wind Penetration. In Proceedings of the 11th IET International Conference on AC and DC Power Transmission, Birmingham, UK, 10–12 February 2015; pp. 1–7. [Google Scholar] [CrossRef]
  14. Egea-Alvarez, A.; Beerten, J.; Van Hertem, D.; Bellmunt, O. Hierarchical power control of multiterminal HVDC grids. Electr. Power Syst. Res. 2015, 121, 207–215. [Google Scholar] [CrossRef] [Green Version]
  15. IEEE PC37.247/D2.48. IEEE Approved Draft Standard for Phasor Data Concentrators for Power Systems; IEEE: New York, NY, USA, 2019. [Google Scholar]
  16. IEEE Std C37.1-2007. IEEE Standard for SCADA and Automation Systems; Revision of IEEE Std C37.1-1994; IEEE: New York, NY, USA, 2008; pp. 1–143. [Google Scholar] [CrossRef]
  17. Ali, I.; Hussain, S.S. Control and management of distribution system with integrated DERs via IEC 61850 based communication. Eng. Sci. Technol. Int. J. 2017, 20, 956–964. [Google Scholar] [CrossRef]
  18. IEEE Std 1646-2004. IEEE Standard Communication Delivery Time Performance Requirements for Electric Power Substation Automation; IEEE: New York, NY, USA, 2005; pp. 1–36. [Google Scholar] [CrossRef]
  19. IEC/IEEE 60255-118-1:2018. IEEE/IEC International Standard—Measuring Relays and Protection Equipment—Part 118-1: Synchrophasor for Power Systems-Measurements; IEEE: New York, NY, USA, 2018; pp. 1–78. [Google Scholar] [CrossRef]
  20. Boyd, M. High-Speed Monitoring of Multiple Grid-Connected Photovoltaic Array Configurations and Supplementary Weather Station. J. Sol. Energy Eng. 2017, 139. [Google Scholar] [CrossRef]
  21. Ahmed, M.A.; Kim, Y.C. Communication Network Architectures for Smart-Wind Power Farms. Energies 2014, 7, 3900–3921. [Google Scholar] [CrossRef] [Green Version]
  22. Pettener, A.L. SCADA and communication networks for large scale offshore wind power systems. In Proceedings of the IET Conference on Renewable Power Generation (RPG 2011), Edinburgh, UK, 6–8 September 2011; pp. 1–6. [Google Scholar] [CrossRef]
  23. Ayiad, M.M.; Katti, A.; Fatta, G. Agreement in Epidemic Information Dissemination. In International Conference on Internet and Distributed Computing Systems, Proceedings of the IDCS 2016: Internet and Distributed Computing Systems, Wuhan, China, 28–30 September 2016; Springer: Cham, Switzerland, 2016; Volume 9864, pp. 95–106. [Google Scholar] [CrossRef]
  24. Herrera, J.; Mingarro, M.; Barba, S.; Dolezilek, D.; Calero, F.; Kalra, A.; Waldron, B. New Deterministic, High-Speed, Wide-Area Analog Synchronized Data Acquisition-Creating Opportunities for Previously Unachievable Control Strategies. In Proceedings of the Power and Energy Automation Conference, Spokane, WA, USA, 10 March 2016. [Google Scholar]
  25. Thomas, M.; McDonald, J. Power System SCADA and Smart Grids; CRC Press: Boca Raton, FL, USA, 2017; pp. 1–329. [Google Scholar] [CrossRef]
  26. Boyer, S.A. Scada: Supervisory Control And Data Acquisition, 4th ed.; International Society of Automation: Pittsburgh, PA, USA, 2009. [Google Scholar]
  27. Communication Networks and Systems for Power Utility Automation Part 1–2: Guidelines on Extending IEC 61850; Technical Report; IEC: London, UK, 2020.
  28. Das, R.; Kanabar, M.; Adamiak, M.; Antonova, G.; Apostolov, A.; Brahma, S.; Dadashzadeh, M.; Hunt, R.; Jester, J.; Kezunovic, M.; et al. Centralized Substation Protection and Control; Technical Report; IEEE PES Power System Relaying Committee: Bethlehem, PA, USA, 2015. [Google Scholar] [CrossRef]
  29. Jahn, I.; Hohn, F.; Chaffey, G.; Norrga, S. An Open-Source Protection IED for Research and Education in Multiterminal HVDC Grids. IEEE Trans. Power Syst. 2020, 35, 2949–2958. [Google Scholar] [CrossRef]
  30. Marihart, D.J. Communications technology guidelines for EMS/SCADA systems. IEEE Trans. Power Deliv. 2001, 16, 181–188. [Google Scholar] [CrossRef]
  31. Reynders, D.; Mackay, S.; Wright, E.; Mackay, S. Practical Industrial Data Communications. In Practical Industrial Data Communications; Butterworth-Heinemann: Oxford, UK, 2004. [Google Scholar] [CrossRef]
  32. Morreale, P.A.; Terplan, K. (Eds.) The CRC Handbook of Modern Telecommunications, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2010. [Google Scholar]
  33. Koushik, A.; Bs, R. 4th Generation SCADA Implementation for Automation. Int. J. Adv. Res. Comput. Commun. Eng. 2016, 5, 629. [Google Scholar]
  34. Mlakić, D.; Baghaee, H.; Nikolovski, S.; Vukobratović, M.; Balkić, Z. Conceptual Design of IoT-based AMR Systems based on IEC 61850 Microgrid Communication Configuration using Open-Source Hardware/Software IED. Energies 2019, 12, 4281. [Google Scholar] [CrossRef] [Green Version]
  35. Sajid, A.; Abbas, H.; Saleem, K. Cloud-Assisted IoT-Based SCADA Systems Security: A Review of the State of the Art and Future Challenges. IEEE Access 2016, 4, 1375–1384. [Google Scholar] [CrossRef]
  36. Bonaventura, G.; Hanson, T.; Tomita, S.; Shiraki, K.; Cottino, E.; Teichmann, B.; Stassar, S.C.P.; Anslow, P.; Murakami, M.; Shiraki, K.; Solina, P.; Araki, N. Optical Fibres, Cables and Systems; IUT International Telecommunication Union: Geneva, Switzerland, 2010; p. 319. [Google Scholar]
  37. Khan, A.R.; Mahmood, A.; Safdar, A.; Khan, Z.; Khan, N. Load forecasting, dynamic pricing and DSM in smart grid: A review. Renew. Sustain. Energy Rev. 2015. [Google Scholar] [CrossRef]
  38. The Future Is 40 Gigabit Ethernet. Technical Report, Cisco Systems, 2016, Ref.: C11-737238-00. Available online: https://www.cisco.com/c/dam/en/us/products/collateral/switches/catalyst-6500-series-switches/white-paper-c11-737238.pdf (accessed on 19 February 2021).
  39. Lo, C.; Ansari, N. The Progressive Smart Grid System from Both Power and Communications Aspects. IEEE Commun. Surv. Tutorials 2012, 14, 799–821. [Google Scholar] [CrossRef] [Green Version]
  40. IEEE Std 643-2004. IEEE Guide for Power-Line Carrier Applications; Revision of IEEE Std 643-1980; IEEE: New York, NY, USA, 2005; pp. 1–134. [Google Scholar] [CrossRef]
  41. Ahmed, M.; Soo, W.L. Power line carrier (PLC) based communication system for distribution automation system. In Proceedings of the 2008 IEEE 2nd International Power and Energy Conference, Johor Bahru, Malaysia, 1–3 December 2008; pp. 1638–1643. [Google Scholar] [CrossRef]
  42. Greer, R.; Allen, W.; Schnegg, J.; Dulmage, A. Distribution automation systems with advanced features. In Proceedings of the 2011 Rural Electric Power Conference, Chattanooga, TN, USA, 10–13 April 2011. [Google Scholar]
  43. Yousuf, M.S.; El-Shafei, M. Power Line Communications: An Overview-Part I. In Proceedings of the 2007 Innovations in Information Technologies (IIT), Dubai, United Arab Emirates, 18–20 November 2007; pp. 218–222. [Google Scholar] [CrossRef]
  44. Merkulov, A.G.; Adelseck, R.; Buerger, J. Wideband digital power line carrier with packet switching for high voltage digital substations. In Proceedings of the 2018 IEEE International Symposium on Power Line Communications and its Applications (ISPLC), Manchester, UK, 8–11 April 2018; pp. 1–5. [Google Scholar] [CrossRef]
  45. Eluwole, O.; Udoh, N.; Ojo, M.; Okoro, C.; Akinyoade, A. From 1G to 5G, what next? IAENG Int. J. Comput. Sci. 2018, 45, 413–434. [Google Scholar]
  46. Baronti, P.; Pillai, P.; Chook, V.W.C.; Chessa, S.; Gotta, A.; Hu, Y.F. Wireless Sensor Networks: A Survey on the State of the Art and the 802.15.4 and ZigBee Standards. Comput. Commun. 2007, 30, 1655–1695. [Google Scholar] [CrossRef]
  47. David, K.; Berndt, H. 6G Vision and Requirements: Is There Any Need for Beyond 5G? IEEE Veh. Technol. Mag. 2018, 13, 72–80. [Google Scholar] [CrossRef]
  48. Gao, J.; Liu, J.; Rajan, B.; Nori, R.; Fu, B.; Xiao, Y.; Liang, W.; Chen, C. SCADA communication and security issues. Secur. Commun. Netw. 2014, 7. [Google Scholar] [CrossRef]
  49. MODBUS Messaging on TCP/IP Implementation Guide V1.0b; Modbus-IDA: Hopkinton, MA, USA, 24 October 2006; Available online: https://modbus.org/docs/Modbus_Messaging_Implementation_Guide_V1_0b.pdf (accessed on 19 February 2021).
50. East, S.; Butts, J.; Papa, M.; Shenoi, S. A Taxonomy of Attacks on the DNP3 Protocol; Springer: Berlin, Germany, 2014. [Google Scholar]
  51. Schwarz, K. Telecontrol Standard IEC 60870-6 TASE.2 Globally Adopted. In Fieldbus Technology; Dietrich, D., Schweinzer, H., Neumann, P., Eds.; Springer: Vienna, Austria, 1999; pp. 38–45. [Google Scholar]
  52. IEEE C37.237. IEEE Standard for Requirements for Time Tags Created by Intelligent Electronic Devices; IEEE: New York, NY, USA, 5 December 2018. [Google Scholar]
53. Park, J.; Mackay, S.; Wright, E. Practical Data Communications for Instrumentation and Control; Newnes: Oxford, UK, 2003; pp. xi–xiii. [Google Scholar] [CrossRef]
  54. Roberson, D.; Kim, H.C.; Chen, B.; Page, C.; Nuqui, R.; Valdes, A.; Macwan, R.; Johnson, B.K. Improving Grid Resilience Using High-Voltage dc: Strengthening the Security of Power System Stability. IEEE Power Energy Mag. 2019, 17, 38–47. [Google Scholar] [CrossRef]
  55. IEEE Std C37.244-2013. IEEE Guide for Phasor Data Concentrator Requirements for Power System Protection, Control, and Monitoring; IEEE: New York, NY, USA, 2013; pp. 1–65. [Google Scholar] [CrossRef]
  56. Adamiak, M.; Premerlani, W.; Kasztenny, B. Synchrophasors: Definition, Measurement, and Application; Technical Report; GE Grid Solutions: Atlanta, GA, USA, 2015. [Google Scholar]
57. Wang, M.; Abedrabbo, M.; Leterme, W.; Van Hertem, D.; Spallarossa, C.; Oukaili, S.; Grammatikos, I.; Kuroda, K. A Review on AC and DC Protection Equipment and Technologies: Towards Multivendor Solution. In Cigré Winnipeg 2017 Colloquium; Cigré: Winnipeg, MB, Canada, 2017; pp. 1–11. [Google Scholar]
  58. TPU D500 IED Datasheet; Technical Report; EFACEC: Maia, Portugal, 2016.
  59. Schmid, J.; Kunde, K. Application of non conventional voltage and currents sensors in high voltage transmission and distribution systems. In Proceedings of the 2011 IEEE International Conference on Smart Measurements of Future Grids (SMFG) Proceedings, Bologna, Italy, 14–16 November 2011; pp. 64–68. [Google Scholar] [CrossRef]
  60. Xiang, Y.; Chen, K.; Xu, Q.; Jiang, Z.; Hong, Z. A Novel Contactless Current Sensor for HVDC Overhead Transmission Lines. IEEE Sens. J. 2018, 18, 4725–4732. [Google Scholar] [CrossRef]
  61. Zhu, K.; Lee, W.K.; Pong, P.W.T. Non-Contact Voltage Monitoring of HVDC Transmission Lines Based on Electromagnetic Fields. IEEE Sens. J. 2019, 19, 3121–3129. [Google Scholar] [CrossRef]
  62. Leterme, W.; Beerten, J.; Van Hertem, D. Nonunit Protection of HVDC Grids With Inductive DC Cable Termination. IEEE Trans. Power Deliv. 2016, 31, 820–828. [Google Scholar] [CrossRef] [Green Version]
  63. Pirooz Azad, S.; Van Hertem, D. A Fast Local Bus Current-Based Primary Relaying Algorithm for HVDC Grids. IEEE Trans. Power Deliv. 2017, 32, 193–202. [Google Scholar] [CrossRef] [Green Version]
  64. Zhang, Y.; Ma, Y.; Xing, F. A prototype optical fibre direct current sensor for HVDC system. Trans. Inst. Meas. Control 2016, 38, 55–61. [Google Scholar] [CrossRef]
  65. Michalski, J.; Lanzone, A.J.; Trent, J.; Smith, S. Secure ICCP Integration Considerations and Recommendations; Sandia Report; Sandia National Laboratories: Albuquerque, NM, USA, 2007. [Google Scholar]
  66. PJM Manual 01: Control Center and Data Exchange Requirements; Technical Report; PJM System Operations Division: Columbia, SC, USA, 2020.
  67. Ayiad, M.; Leite, H.; Martins, H. State Estimation for Hybrid VSC Based HVDC/AC Transmission Networks. Energies 2020, 13, 4932. [Google Scholar] [CrossRef]
68. Customer ICCP Interconnection Policy; Technical Report 4; Transpower: Wellington, New Zealand, 2019.
  69. Abedrabbo, M.; Van Hertem, D. A Primary and Backup Protection Algorithm based on Voltage and Current Measurements for HVDC Grids. In Proceedings of the International High Voltage Direct Current Conference, Shanghai, China, 25–27 October 2016. [Google Scholar]
  70. Commission Regulation (EU) 2016/1447 of 26 August 2016 Establishing a Network Code on Requirements for Grid Connection of High Voltage Direct Current Systems and Direct Current-Connected Power Park Modules; Technical Report; Official Journal of the European Union: Brussels, Belgium, 2016.
  71. Requirements for Grid Connection of High Voltage Direct Current Systems and Direct current-Connected Power Park Modules (HVDC), Articles 11-54; Technical Report; Energinet: Fredericia, Denmark, 2019.
72. Coffey, J. Latency in Optical Fiber Systems; Technical Report; CommScope: Hickory, NC, USA, 2017. [Google Scholar]
  73. OptiSystems and OptiSPICE; Technical Report; Optiwave Systems Inc.: Ottawa, ON, Canada, 2021.
  74. Bobrovs, V.; Spolitis, S.; Ivanovs, G. Latency causes and reduction in optical metro networks. In Optical Metro Networks and Short-Haul Systems VI; Weiershausen, W., Dingel, B.B., Dutta, A.K., Srivastava, A.K., Eds.; International Society for Optics and Photonics, SPIE: San Francisco, CA, USA, 2014; Volume 9008, pp. 91–101. [Google Scholar] [CrossRef]
  75. Leon, H.; Montez, C.; Stemmer, M.; Vasques, F. Simulation models for IEC 61850 communication in electrical substations using GOOSE and SMV time-critical messages. In Proceedings of the IEEE World Conference on Factory Communication Systems (WFCS), Aveiro, Portugal, 3–6 May 2016; pp. 1–8. [Google Scholar] [CrossRef]
  76. Juárez, J.; Rodríguez-Morcillo, C.; Mondejar, J. Simulation of IEC 61850-based substations under OMNeT++. In Proceedings of the 5th International ICST Conference on Simulation Tools and Techniques, Sirmione-Desenzano, Italy, 12–23 March 2012; pp. 319–326. [Google Scholar] [CrossRef]
  77. Bezanson, J.; Edelman, A.; Karpinski, S.; Shah, V.B. Julia: A Fresh Approach to Numerical Computing. SIAM Rev. 2017, 59, 65–98. [Google Scholar] [CrossRef] [Green Version]
  78. Mouco, A.; Abur, A. A robust state estimator for power systems with HVDC components. In Proceedings of the North American Power Symposium (NAPS), Morgantown, WV, USA, 17–19 September 2017; pp. 1–5. [Google Scholar]
Figure 1. The layers of transmission systems SCADA. (a) The four layers of a power system SCADA. (b) SCADA in HV Transmission Network.
Figure 2. Different operation/control timescales in an AC/DC power system.
Figure 3. OSI, EPA and DNP3 Models.
Figure 4. IEC 101 and 104 layers.
Figure 5. IEC 61850 communication stack model.
Figure 6. HV AC/DC SCADA: Structure from top to bottom.
Figure 7. AC/DC SCADA: Multi-terminal network with multistage control stations [67].
Figure 8. Multi-HVDC/AC transmission systems based on Cigre B4 network test case. (a) The Original Network. (b) The RTUs distribution. (c) The RTUs and data concentrators distribution. (d) The SCADA and data concentrators distribution.
Figure 9. The structure of 4 bytes float data.
Figure 10. OptiSpice schematic for an optical sensor.
Figure 11. The total time elapsed in sending 32 bits to the input port of the RTU. (a) 32 generated bits vs the received amplified data. (b) Propagation delay in the fiber cable.
Figure 12. OptiSystems: Sending/Receiving 210 Kbits in a 100 km fiber optic cable @ 1 Gbps.
Figure 13. OptiSystems: 100 km P2P optical communication link.
Figure 14. OptiSystems: schematic of multiple data concentrators connected to the SCADA side.
Figure 15. The time performance of two sets of measurements in a unified state estimation.
Table 1. Wireless/Cellular technologies.

Technology | Channel Bandwidth | Latency    | Data Rate       | Cell Size
ZigBee     | 0.3–2 MHz         | <100 ms    | 250 kbps        | <100 m
WiFi       | 22 MHz            | <100 ms    | 54 Mbps         | <100 m
2G         | 0.2–1.25 MHz      | 300–750 ms | 64 kbps–2 Mbps  | <20 km
3G         | 1.25–20 MHz       | 40–400 ms  | 2.4–300 Mbps    | <10 km
4G         | <100 MHz          | 40–50 ms   | <3 Gbps         | <10 km
5G         | <100 GHz          | ≤1 ms      | >3 Gbps         | <1 km
Table 2. AC and DC RTUs characteristics and measurements.

AC RTU characteristics:
- I/O: 8 digital inputs and 8 to 32 analog inputs with 16–24 bit A/D resolution and a sampling frequency of 4 or 8 kHz for a 50 Hz system [57]; 16 to 264 digital outputs for a large-scale protection RTU [58].
- IEC 61131-3-based or FPGA processor.
- Transfer rate: 64 kbps to 2 Mbps.
- Time synchronization based on a global navigation satellite system such as GPS; the standard acceptable error is within ±500 ns [19].
- Reporting rates based on IEC/IEEE 60255-118-1 are {10, 25, 50, 100} frames per second (fps) for a 50 Hz system and {10, 12, 15, 20, 30, 60, 120} fps for 60 Hz [19].
AC RTU measurements: voltages; currents; active and reactive power/energy; power factor; frequency, rate of change of frequency (ROCOF), total vector error (TVE) and frequency error (FE) [19]; digital inputs (e.g., CB).

DC RTU characteristics:
- I/O: 32 analog inputs with 24 bit A/D resolution; 16 to 264 digital outputs for a large-scale protection RTU (ABB [8], open source [29]).
- Sensors with very high sampling frequency (in the range of 100 kHz) [57,59,60,61,62,63] due to DC voltage fluctuations (approx. 1.6% per minute [13]).
- Sensing time interval within 30 ms.
- Transmitting time interval between RTU and MTU within 10 ms [24].
DC RTU measurements: voltages; currents; real power/energy; digital inputs (e.g., CB).
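As a quick arithmetic check on the AC characteristics above, a 4 kHz A/D in a 50 Hz system captures 80 samples per cycle, and each IEC/IEEE 60255-118-1 reporting rate fixes the interval between reported frames. The following minimal Python sketch is illustrative only; the constants are the ones listed in Table 2.

```python
# Illustrative check of the AC RTU sampling/reporting figures in Table 2.
# Constants are taken from the table; the script itself is only a sketch.

F_NOMINAL = 50.0       # Hz, nominal system frequency
F_SAMPLING = 4_000.0   # Hz, lower A/D sampling frequency quoted in Table 2

samples_per_cycle = F_SAMPLING / F_NOMINAL
print(f"samples per 50 Hz cycle: {samples_per_cycle:.0f}")  # 80

# IEC/IEEE 60255-118-1 reporting rates for a 50 Hz system (Table 2).
for fps in (10, 25, 50, 100):
    interval_ms = 1_000.0 / fps          # time between reported frames
    cycles_per_frame = F_NOMINAL / fps   # grid cycles covered by one frame
    print(f"{fps:>3} fps -> one frame every {interval_ms:5.1f} ms "
          f"({cycles_per_frame:4.1f} cycles)")
```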
Table 3. Conventional and non-conventional sensors [57].

Category                     | Type                                          | Bandwidth      | Suitable for
Current transformers/sensors | Electromagnetic (iron-core)                   | in kHz         | AC
                             | Rogowski coil integrated with optical signal  | in MHz         | AC/DC
                             | Fibre optic CT                                | in MHz         | AC/DC
                             | Zero-flux (direct current or Hall-effect CT)  | in hundred kHz | DC
Voltage transformers/sensors | Inductive/Capacitive VT                       | in kHz         | AC
                             | Compensated RC-divider                        | in MHz         | AC/DC
                             | Fibre optic VT                                | in MHz         | AC/DC
Table 4. ICCP data exchange scheme for Transpower.

I/O Type | Data Slots | Description
Analog   | 5          | Voltages, currents, active & reactive power and fault location
Flags    | 10–17      | Protection and trips
Control  | 2          | Circuit breakers control
Status   | 10–11      | Circuit breakers status
Table 5. Communication timing for wide area power network under IEC 61850 standard.

Protocol | Message Application     | Delay Tolerance (ms)
GOOSE    | Fast trips              | 3–10
         | Fast commands/messages  | 20–100
         | Measurements/Parameters | 100–500
SMV      | Raw data                | 3–10
TS       | Station bus             | 1
         | Process bus             | 0.004–0.025
(Reset)  | File transfer           | ≥1000
         | Low–medium speed        | 100–500
Table 6. Data exchange timescale and size requirements (AC based on IEEE P1646 vs. DC).

Data Type / Information    | AC: Within S_i (ms)          | AC: Between S_i and S_j (ms) | DC: Within S_i (ms) | DC: Between S_i and S_j (ms) | Data Size Range [7]
Error time synchronization | <0.1 [52], <2 [19]           | -                            | <0.020 [10]         | <1000 [10]                   | Bytes
Protection                 | 5/4 for 50/60 Hz (1/4 cycle) | 8–12, 5–10 [28]              | 0.1–0.5 [62,69]     | 3–4 [62,69]                  | Tens of bytes
Monitoring and control     | 16                           | 1000                         | 10 [70]             | 250–500 [70,71]              | Tens of bytes
Operation and maintenance  | 1000                         | 10k                          | 1000                | 10k                          | Hundreds of bytes
Text data                  | 2000                         | 10k                          | 2000                | 10k                          | KB to MB
Files                      | 10k–60k                      | 30k–600k                     | 10k–60k             | 30k–600k                     | KB to MB
Data streams               | 1000                         | 1000                         | 1000                | 1000                         | KB to MB
Table 7. Time delays from 32-bit sensors to RTU.

Time Delay Type | Theoretical (ns) | Simulation (ns)
Propagation     | 489.66           | 491.959
Transmission    | 306              | 326.6521
Amplification   | -                | 0.0258
Total           | 795.66           | 818.6111
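The theoretical entries of Table 7 can be reconstructed from first principles. In the sketch below, the ~100 m sensor-to-RTU fiber length and the group index n ≈ 1.468 are assumptions inferred because they reproduce the tabulated propagation delay (they are not stated in this section), and the serialization rate for the 32-bit word is likewise back-computed from the table.

```python
# Minimal sketch reproducing the theoretical delays of Table 7 under
# assumed link parameters (fiber length and group index are inferred,
# not taken from the paper).

C = 2.99792458e8       # m/s, speed of light in vacuum
N_GROUP = 1.468        # fiber group index (assumed, typical of SMF)
LINK_LENGTH_M = 100.0  # sensor-to-RTU fiber length (assumed)
WORD_BITS = 32         # one 4-byte float sample (see Figure 9)

prop_ns = LINK_LENGTH_M * N_GROUP / C * 1e9
print(f"propagation: {prop_ns:.2f} ns")  # ~489.7 ns (table: 489.66 ns)

# Effective line rate implied by the 306 ns transmission figure.
implied_rate_mbps = WORD_BITS / 306e-9 / 1e6
print(f"implied serialization rate: {implied_rate_mbps:.1f} Mbps")  # ~104.6
```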
Table 8. Data rate per message application from AC substations based on [7].

Message Application | Data Packets  | Sampling Rate (Hz) | Data Rate
Protection signals  | 50 Bytes      | 1                  | 400 bps
Measurements data   | 102–198 Bytes | 1440 @ non-sync AC | 1147.5–2227.5 Kbps
                    |               | 4096 @ 50 Hz AC    | 3264–6336 Kbps
Interlocks          | 150 Bytes     | 250                | 293 Kbps
Control signals     | 200 Bytes     | 10                 | 15.26 Kbps
File transfer       | 1 Mb          | 1/3600             | 248 bps
Total traffic       |               | @ 1440 Hz          | 1.422–2.477 Mbps
                    |               | @ 4096 Hz          | 3.489–6.489 Mbps
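The data-rate column of Table 8 follows from rate = packet size × 8 × sampling rate, expressed in binary kilobits (i.e., divided by 1024). A minimal sketch under that assumption reproduces the interlock and measurement rows; the control-signal row appears to use a slightly different rounding.

```python
# Sketch of the per-application data-rate arithmetic behind Table 8.
# Assumption (ours): the table's Kbps are binary kilobits, 1 Kbps = 1024 bps.

def rate_kbps(packet_bytes: float, sampling_hz: float) -> float:
    """Data rate in binary kilobits per second."""
    return packet_bytes * 8 * sampling_hz / 1024

print(rate_kbps(150, 250))    # interlocks: 292.97 -> ~293 Kbps
print(rate_kbps(102, 1440))   # measurements, low end:  1147.5 Kbps
print(rate_kbps(198, 1440))   # measurements, high end: 2227.5 Kbps
print(rate_kbps(198, 4096))   # measurements @ 4096 Hz: 6336.0 Kbps
```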
Table 9. Transmission delays per RTU type.

RTU Location/Type | Generated Data | Theoretical (ms) | Simulated (ms)
Line              | 0.623 MB       | 0.004867         | 0.005226
Converter         | 1.574 MB       | 0.012296         | 0.01320355
Generator (50 Hz) | 1.027 MB       | 0.008023         | 0.008615
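The theoretical columns of Tables 9 and 13 are mutually consistent: every entry equals the listed payload serialized at the same effective rate of about 1.024 Tbit/s (for instance, the payload split across 1024 frames on a ~1 Gbps link). That interpretation is inferred from the numbers rather than stated here, so the sketch below should be read as a consistency check, not as the authors' model.

```python
# Consistency check for the "Theoretical" columns of Tables 9 and 13.
# EFF_RATE_BPS is inferred from the tabulated numbers (payload_bits/delay);
# the paper does not state this figure explicitly in this section.

EFF_RATE_BPS = 1.024e12  # effective serialization rate (inferred)

def tx_delay_ms(payload_mb: float) -> float:
    """Transmission delay in ms for a payload in MB (1 MB = 1e6 bytes)."""
    return payload_mb * 1e6 * 8 / EFF_RATE_BPS * 1e3

print(f"{tx_delay_ms(0.623):.6f}")   # line RTU (Table 9):         0.004867
print(f"{tx_delay_ms(1.574):.6f}")   # converter RTU (Table 9):    0.012297
print(f"{tx_delay_ms(5.825):.6f}")   # concentrator #1 (Table 13): 0.045508
```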
Table 10. Propagation delays based on fiber cable length.

Distance (km) | Theoretical (ms) | Simulated (ms)
50            | 0.244834         | 0.24483625
100           | 0.489668         | 0.4896725
150           | 0.734502         | 0.73450875
200           | 0.979336         | 0.979345
250           | 1.224170         | 1.2241813
300           | 1.469004         | 1.4690175
350           | 1.713838         | 1.7138538
400           | 1.958672         | 1.958690
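Table 10 is the standard fiber-latency relation t = L·n/c. A minimal sketch with a group index of n = 1.468 (inferred from the 50 km entry; typical of standard single-mode fiber around 1550 nm) matches the theoretical column to within tens of nanoseconds.

```python
# Fiber propagation delay t = L * n / c, as tabulated in Table 10.
# N_GROUP is inferred from the table's 50 km entry, not stated in the text.

C = 2.99792458e8  # m/s, speed of light in vacuum
N_GROUP = 1.468   # fiber group index (inferred)

def prop_delay_ms(length_km: float) -> float:
    """One-way propagation delay in ms for length_km of fiber."""
    return length_km * 1e3 * N_GROUP / C * 1e3

for km in (50, 100, 250, 400):
    print(f"{km:>3} km -> {prop_delay_ms(km):.6f} ms")
# 50 km -> 0.244836 ms and 400 km -> 1.958688 ms, i.e. within a few tens
# of nanoseconds of the tabulated theoretical values.
```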
Table 11. IEC 61850 protocol delays.

Reference | Msg. Type | Delay Range (μs)
2016 [75] | GOOSE     | 24–37.4
          | SMV       | 24
2012 [76] | GOOSE     | 14–35
          | SMV       | 23–26
Table 12. P2P total delays: data concentrators and their corresponding RTUs (in ms).

Type  | D1      | D2      | D3       | D4       | D5       | D6      | D7      | D8      | D9      | D10
Prop. | 2.44836 | 3.23183 | 0.004896 | 0.004896 | 0.004896 | 4.57843 | 1.95869 | 2.44836 | 4.01531 | 4.45602
Tran. | 0.0485  | 0.05337 | 0.02181  | 0.02181  | 0.02181  | 0.02433 | 0.01946 | 0.01946 | 0.05337 | 0.11161
Prot. | 0.187   | 0.2244  | 0.0748   | 0.0748   | 0.0748   | 0.187   | 0.1496  | 0.1496  | 0.2244  | 0.2992
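Summing the three rows of Table 12 gives the end-to-end RTU-to-concentrator delay per data concentrator; the propagation term dominates everywhere except D3–D5, whose RTUs sit close to their concentrator. A short sketch of that summation:

```python
# Total P2P delay per data concentrator from Table 12 (all values in ms):
# propagation + transmission + protocol-stack delay.

prop = [2.44836, 3.23183, 0.004896, 0.004896, 0.004896,
        4.57843, 1.95869, 2.44836, 4.01531, 4.45602]
tran = [0.0485, 0.05337, 0.02181, 0.02181, 0.02181,
        0.02433, 0.01946, 0.01946, 0.05337, 0.11161]
prot = [0.187, 0.2244, 0.0748, 0.0748, 0.0748,
        0.187, 0.1496, 0.1496, 0.2244, 0.2992]

for i, (p, t, s) in enumerate(zip(prop, tran, prot), start=1):
    print(f"D{i}: {p + t + s:.4f} ms")
# e.g. D1: 2.6839 ms, D3: 0.1015 ms, D6: 4.7898 ms
```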
Table 13. Transmission delays per data concentrator.

Data Conc. # | Generated Data | Theoretical (ms) | Simulated (ms)
#1           | 5.825 MB       | 0.045507         | 0.048863
#2           | 6.448 MB       | 0.050375         | 0.054089
#3           | 2.601 MB       | 0.02032          | 0.021818
#4           | 2.601 MB       | 0.02032          | 0.021818
#5           | 2.601 MB       | 0.02032          | 0.021818
#6           | 3.115 MB       | 0.024336         | 0.0261302
#7           | 2.492 MB       | 0.019468         | 0.020904
#8           | 2.492 MB       | 0.019468         | 0.020904
#10          | 13.519 MB      | 0.105617         | 0.1134045
Table 14. P2P total delays: data concentrators to the SCADA (in ms).

Type  | D1      | D2      | D3      | D4      | D5      | D6      | D7      | D8      | D10
Prop. | 1.22418 | 1.46901 | 1.07728 | 0.97934 | 1.95869 | 0.97934 | 1.46901 | 1.71385 | 1.22418
Table 15. Time delays (t_SE) of unified WLS state estimation.

Data Set                | Measurements Count | Elapsed Time (ms)
Power-injection only    | 89                 | 23.3709
Powerflow and injection | 140                | 26.3246
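At the core of the WLS estimator timed in Table 15 is the normal-equations solve x = (H^T W H)^{-1} H^T W z, iterated on a linearized measurement model in practice. The following is a generic, minimal sketch of that linear-algebra step on a toy model, not the paper's Julia implementation [77]; H, W and z below are random placeholders sized like the 140-measurement set.

```python
# Minimal sketch of a weighted-least-squares (WLS) state-estimation solve,
# x_hat = (H^T W H)^{-1} H^T W z, on a toy linear measurement model.
# Generic illustration only; H, W, z are random placeholders, not the
# paper's hybrid HVDC/AC model or its Julia implementation [77].
import numpy as np

rng = np.random.default_rng(0)
n_states, n_meas = 10, 140                    # cf. the 140-measurement set
H = rng.standard_normal((n_meas, n_states))   # measurement Jacobian (toy)
weights = 1.0 / rng.uniform(0.5, 2.0, n_meas) ** 2
W = np.diag(weights)                          # inverse measurement variances
x_true = rng.standard_normal(n_states)
z = H @ x_true + rng.normal(0.0, 0.01, n_meas)  # noisy measurements

G = H.T @ W @ H                          # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)  # WLS estimate
print("max estimation error:", np.abs(x_hat - x_true).max())
```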
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Ayiad, M.; Maggioli, E.; Leite, H.; Martins, H. Communication Requirements for a Hybrid VSC Based HVDC/AC Transmission Networks State Estimation. Energies 2021, 14, 1087. https://doi.org/10.3390/en14041087

AMA Style

Ayiad M, Maggioli E, Leite H, Martins H. Communication Requirements for a Hybrid VSC Based HVDC/AC Transmission Networks State Estimation. Energies. 2021; 14(4):1087. https://doi.org/10.3390/en14041087

Chicago/Turabian Style

Ayiad, Motaz, Emily Maggioli, Helder Leite, and Hugo Martins. 2021. "Communication Requirements for a Hybrid VSC Based HVDC/AC Transmission Networks State Estimation" Energies 14, no. 4: 1087. https://doi.org/10.3390/en14041087

APA Style

Ayiad, M., Maggioli, E., Leite, H., & Martins, H. (2021). Communication Requirements for a Hybrid VSC Based HVDC/AC Transmission Networks State Estimation. Energies, 14(4), 1087. https://doi.org/10.3390/en14041087

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop