1. Introduction
Traditionally, for economic and safety reasons, electricity was generated by large centralized fossil-fueled generators and transported to consumers via one-way transmission networks, load centers, and distribution networks [1,2,3]. However, the liberalization of energy markets, combined with environmental, sustainability, and reliability concerns, has forced a rethink of the way that electricity is generated and distributed to consumers [4]. Specifically, recent times have seen the emergence of small- and medium-scale distributed energy resources (DERs) embedded within the transmission and distribution networks, typically in the form of renewable/sustainable energy generation sources such as biomass-fueled combined heat and power (CHP) plants, wind turbines (WTs), and photovoltaics (PVs). Although such technology has the potential to improve the reliability and flexibility of the network, without adequate supervisory control, monitoring, and reactive supply/demand-side management, problems related to grid frequency/voltage control combined with loss of stability and activation of protection systems may arise [4,5].
This has led to the development of the concept of the smart grid, i.e., an energy distribution network that not only allows for the physical transfer of energy but also supports information and communication technology (ICT) interfaces enabling real-time information exchange related to the scheduling, monitoring, control, and protection of the interconnected DERs [4,5,6]. A closely related concept to the smart grid is the microgrid. A microgrid is a modern, small-scale instantiation of the larger centralized electricity generation, transmission, and distribution system [7,8,9]. A microgrid generates, distributes, and regulates the flow of electricity from generators to consumers on a small (local) scale. A microgrid has an electrical point of common connection (PCC) with the larger grid, and may operate in “grid-connected” mode or “islanded” mode, dynamically switching between the two depending upon internal and external operating conditions (the latter operating mode being the most common). A microgrid has local targets and constraints (such as reliability, emissions reductions, DER integration, and economic cost minimization), but also interacts with the wider smart grid to help achieve larger-scale targets. When employed correctly in co-ordination with the wider smart grid, microgrids may be used as “good citizens” which can provide adjustable and predictable power sourcing/sinking. This can aid load balancing of the wider network and increase network reliability and security [6,7]. Microgrids can also simplify problems related to the complexity of wider network control and management, as system of systems (SoS) concepts can be applied [9]. A high-level view of a typical microgrid and its constituent components is shown in Figure 1 below.
As can be seen in Figure 1, a number of dispatchable and non-dispatchable generation units (CHP, WT, PV, etc.) are electrically connected to load units (domestic, commercial, small-scale industrial, etc.) through a private-wire electrical network. Other units may be present in this network, such as electric vehicle charge points (EVCPs) and battery energy storage systems (BESSs). The microgrid connects to the wider grid via the PCC. Each unit is typically connected to the electrical network through a disconnection (no-load) switch and a protective circuit breaker. Each unit is controlled by a local intelligent electronic device (IED), and each IED is interconnected via a communication network in which control, status, and monitoring information pertinent to microgrid operation is exchanged. In a centralized microgrid control approach (the most commonly used), a control center (CC) is also present and connected to each IED via the communications network. The CC is responsible for coordinating the microgrid as a whole: it typically plans, optimizes, and manages the medium-term and short-term operating strategies of the network and also helps to facilitate real-time control, monitoring, and protection. The medium-term strategy is often determined by solving an economic dispatch optimization problem over a future horizon of approximately 24 h (e.g., see [8,9]), typically using an hourly timescale. The short-term strategy typically puts this hourly dispatch plan into practical action and operates on a typical timescale of 1–5 min [9]. Supervisory control and data acquisition (SCADA) concepts are often used at this timescale, for example to read a BESS state of charge and set its rate of charge/discharge. At the very lowest timescales, the CC helps to facilitate real-time control, monitoring, and protection functions in the microgrid [8,9]. Timescales at this level are typically in the sub-second range, and distributed control system (DCS) concepts are often used alongside SCADA, for example to schedule dispatchable DERs to regulate frequency or microgrid power exchange over the PCC.
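The medium-term economic dispatch layer described above can be illustrated with a deliberately simplified sketch. The following is a minimal greedy merit-order fill for a single hour of the horizon, not the optimization formulations of [8,9]; the unit names, capacities, and marginal costs are hypothetical.

```python
def merit_order_dispatch(demand_kw, units):
    """Fill demand from the cheapest units first (greedy merit order).

    units: list of (name, capacity_kw, marginal_cost_per_kwh).
    Returns (plan, unmet): kW assigned to each unit, plus any shortfall
    that would have to be imported over the PCC.
    """
    plan, remaining = {}, demand_kw
    for name, capacity_kw, _cost in sorted(units, key=lambda u: u[2]):
        take = min(capacity_kw, remaining)
        plan[name] = take
        remaining -= take
    return plan, remaining

# One hour of a 24 h horizon; all figures are illustrative only.
hourly_units = [("PV", 50.0, 0.00), ("WT", 30.0, 0.00),
                ("CHP", 100.0, 0.08), ("BESS", 40.0, 0.12)]
plan, unmet = merit_order_dispatch(150.0, hourly_units)
```

In a real CC, a problem of this shape would be solved once per hour over the whole horizon, with the short-term (1–5 min) layer then trimming the plan against measured conditions.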
In this paper, the focus is upon communication infrastructures and technologies for command, control, monitoring, and protection in microgrids. The paper considers microgrids and small smart grids operating at the “Neighborhood” level, in which it is implied that communications between DERs take place over a distance of not more than approximately 20 km. Within such a framework, the IEC 61850 standard for communications within substations—and its extension to DER management—is seeing increasing use [7]. Along with the standard mapping of IEC 61850 to Ethernet and the manufacturing message specification (MMS) using the transmission control protocol and internet protocol (TCP/IP), this is becoming the preferred means to interconnect and operate a diverse range of microprocessor-based electrical control and protection equipment. It is argued in this paper, however, that some low-level aspects of IEC 61850 are not especially well suited for use with microgrids. In microgrids, there are typically requirements for node discovery and regular re-configuration of operations (e.g., enforced by DER reliance on weather conditions), coupled with the possibility of widely spread geographical areas and the need for command and control traffic to co-exist and share resources with regular internet traffic. Such features are not typically present in substations, which have comparatively smaller fixed boundaries and fixed equipment, plus isolated/firewalled internal communication infrastructures. In particular, the mapping of IEC 61850 horizontal communications to raw Ethernet does not seem well suited for reliable and timely communication between microgrid DERs. In this paper, it is proposed that the integration of IEEE time-sensitive networking (TSN) concepts (currently implemented as audio video bridging (AVB) technologies) within an IEC 61850 and MMS/TCP/IP framework provides a flexible and reconfigurable platform capable of overcoming such issues. By mapping horizontal protection and control messages between DERs into AVB transport protocol (AVB-TP) frames, the required time-critical information flows for protection and control may be carried over a much larger geographical area than a typical substation using low-latency AVB streams. AVB streams can be flexibly configured from one or more central locations, and bandwidth can be reserved for their data, ensuring predictability of delivery even when mixed with regular internet traffic.
A prototype test platform is described in this paper and it is verified that IEC 61850 event and sampled data may be reliably transported within the proposed reconfigurable framework, even in the presence of congestion.
The remainder of the paper is organized as follows.
Section 2 presents a brief background on smart grid and substation communications and describes the IEC 61850 standards.
Section 3 discusses IEC 61850 DER extensions, and outlines potential difficulties with regard to microgrids.
Section 4 describes the AVB standards and presents a methodology whereby critical control and protection traffic may be tunneled through a current-generation AVB network.
Section 5 details a prototype implementation and initial experimental tests which validate the concepts.
Section 6 concludes the paper and suggests areas for future work.
3. IEC 61850 Microgrid Extensions
In the context of microgrids, the IEC 61850-7-420 communications standard for DERs extends the substation information and communication model to consider distributed resources. This extension of the basic standard is intended to define the information models and LNs required at the process level for all DER devices, including the electrical connection points, controllers, PV generators, energy converters, DC converters (rectifiers/inverters), and auxiliary systems (measurement/protection devices). IEC 61850-90-7 defines object models of inverters for PV and energy storage systems, and IEC 61850-90-9 considers BESSs. IEC 61400-25 is concerned with the application of the IEC 61850 methodology and object modelling to WTs. With respect to Figure 2 and the IEC 61850 main standards, in microgrid applications each DER interface is considered to be an S-Node (IED) residing at the bay level [7]. The microgrid CC acts as the station controller, and interconnection between the S-Nodes and the main controller is through the station LAN.
A principal difference between the microgrid viewpoint and the substation viewpoint under IEC 61850 lies in the level of distribution of the resources and the degree to which traffic may be isolated between IEDs at the station level. Although much of the required DER interface traffic in a microgrid may be carried using MMS mapped to TCP/IP, support for handling horizontal time-critical GSE and SV traffic—which could be needed for optimized power grid control and protection—could be problematic. Specifically, the need for node discovery and regular system re-configuration in microgrids has been previously identified, and methodologies have been developed to allow such behaviors within the IEC 61850/MMS framework [7]. Although one could also attempt to manually configure and re-configure VLANs when nodes come on- and off-line within this framework, this could be problematic as it is time-consuming and error-prone. In addition, the possibility of geographical areas spread much wider than a typical substation, and the need for microgrid control traffic to co-exist with other traffic—including regular internet traffic such as email, file transfers, and web browsing—is also problematic. For instance, it becomes in effect impossible to follow previous recommendations [13] to ensure that critical IEDs are interconnected through only a single switch, and testing to explore expected message latencies would be difficult to perform on-line.
In this context, we are motivated by aspects of the proposed utility intranet, such as the newly proposed IEC 61850-90-5 publish-subscribe profile, which could potentially help to alleviate some of these issues [20]. The 90-5 addendum outlines a proposed methodology whereby GSE and SV traffic may be mapped into multicast UDP/IP frames and DIFFSERV techniques employed within an all-IP infrastructure to bring this class of traffic beyond the substation. However, this is very much a new development aimed at wide area networks, and there remain many outstanding issues (such as the required procedures for setting up and starting/stopping a session) to be solved before it can be relied upon for mainstream use in a large IP-based network or microgrid [20]. In the next Section, however, it is proposed that the integration of IEEE TSN concepts (currently implemented as AVB technologies) within an IEC 61850 and MMS/TCP/IP framework provides a flexible and reconfigurable platform capable of overcoming such issues on a smaller microgrid scale (i.e., a single IP subnet or network of LANs bridged at Layer 2).
4. Tunneling IEC 61850 Frames through Current Generation AVB Networks
4.1. Audio Video Bridging (AVB)
AVB is a set of technical standards developed by the IEEE AVB Task Group connected to the IEEE 802.1 standards committee, the goal of which is to provide the specifications that allow time-synchronized, low-latency streaming services through IEEE 802 networks [21,22]. The AVB standards define a node that wishes to send audio/video information as an AVB talker, and a node that wishes to receive audio/video information from an AVB talker as an AVB listener. The AVB standards consist of four principal elements:
IEEE 802.1AS: Timing and Synchronization for Time-Sensitive Applications. When employed with suitably accurate clock sources (e.g., GPS) and supporting network hardware, the 802.1AS protocol enables fault-tolerant clock synchronization over a switched Ethernet network at sub-microsecond levels.
IEEE 802.1Qat: Stream Reservation Protocol (SRP). SRP enables a potential AVB talker to advertise its content (one or more streams) to potential AVB listeners over an AVB-supported network of bridged LANs. SRP also allows the AVB listeners to register to receive the advertised content, and to propagate this registration back through the LAN to the AVB talker. If sufficient bandwidth capacity exists on the pathway to guarantee the latency requirements between a potential talker and listener, the stream is set up and VLANs along the route are configured appropriately. If insufficient bandwidth capacity exists, the stream is not set up and both talker and listener are informed.
IEEE 802.1Qav: Forwarding and Queuing for Time-Sensitive Streams (FQTSS). Once AVB streams are confirmed using SRP, FQTSS defines the mechanisms by which the actual time-sensitive traffic is routed and queued. FQTSS specifies a credit-based shaper (CBS) for online traffic shaping, and clock synchronization through 802.1AS ensures that all bridges, talkers, and listeners use the same time reference to implement the traffic shapers.
IEEE 802.1BA: Audio Video Bridging Systems. The success of the AVB scheme depends upon end-to-end support for the above three protocols on the path between any talker and listener (including the network links in between). As such, it is beneficial to identify, for any particular bridged LAN, which devices and pathways can support AVB streaming before attempting to connect any AVB streams. The 802.1BA protocol provides the required procedures to identify participating AVB endpoints and internetworking devices, to identify and flag non-participating devices, and to make this information available in an efficient manner.
Together, these protocols define common QoS procedures and application interfaces for time-sensitive streaming over heterogeneous Layer 2 technologies, effectively defining an API for QoS-related services that extends well beyond the transfer of audio and video [21]. Two classes of service are offered to endpoints by the AVB API: the low-latency “Class A” stream provides a guaranteed end-to-end latency of 2 ms (±125 μs) over seven hops, and the medium-latency “Class B” stream provides a guaranteed end-to-end latency of 50 ms (±125 μs) over seven hops. For applications to take advantage of AVB functionality, AVB transport protocols have been defined in addition to the core AVB standards described above. IEEE 1722 defines the Layer 2 transport protocol for time-sensitive streams using AVB (AVB-TP), and supports transmission of raw audio and video streams encoded using IEC 61883 and other related formats. The 802.1AS clock is sampled before adding the worst-case transport delay (2 ms for Class A streams) to the sample time to obtain an accurate presentation time to be inserted into the 1722 packets. IEEE 1722.1 is a closely related standard which provides audio video channel discovery, enumeration, connection management, and control (AVDECC) for devices using IEEE 1722 and AVB. The AVDECC protocol allows a remote configuration entity to discover and manage AVB audio/video endpoints and stream advertisement, bandwidth reservation, and relinquishment. IEEE 1733 extends AVB functionality to IP Layer 3, allowing real-time protocol (RTP) streaming of audio and video over AVB-supported networks, albeit limited to a single IP subnetwork.
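The talker-side timestamping just described can be sketched in a few lines. This is an illustrative model only: the latency constants come from the Class A/B figures above, and the 32-bit wraparound of the timestamp field is an assumption about the IEEE 1722 packet format, not something established in this paper.

```python
# Worst-case transport delays for the two AVB service classes (per the text):
CLASS_A_MAX_LATENCY_NS = 2_000_000    # 2 ms over seven hops
CLASS_B_MAX_LATENCY_NS = 50_000_000   # 50 ms over seven hops

def avtp_presentation_time(gptp_now_ns: int, stream_class: str = "A") -> int:
    """Talker-side calculation: sample the 802.1AS (gPTP) clock and add
    the worst-case transport delay for the stream's class to obtain the
    presentation time inserted into each IEEE 1722 packet.  The result
    is reduced modulo 2**32 ns (assumed width of the timestamp field)."""
    delay = (CLASS_A_MAX_LATENCY_NS if stream_class == "A"
             else CLASS_B_MAX_LATENCY_NS)
    return (gptp_now_ns + delay) % (1 << 32)
```

Listeners buffer each received packet until their (synchronized) local clock reaches the carried presentation time, which is what makes the end-to-end latency deterministic rather than merely bounded.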
4.2. Integrating AVB into an Existing IEC 61850 MMS-TCP/IP Based Infrastructure
It seems clear that with appropriate enhancements, AVB technology using automatic VLAN configuration, bandwidth reservation, and real-time streaming can provide the underlying reconfigurable communication facilities required in a microgrid for transporting horizontal IEC 61850 traffic between DERs. The use of MMS mapped into best-effort TCP/IP traffic could run unaltered within such a scheme.
Figure 4 gives an overview of a communications architecture for reconfigurable microgrid command and control which integrates AVB streaming into an existing IEC 61850 MMS-TCP/IP based infrastructure [7]. In the Figure, the S-Node interface separates the ACSI services to run over MMS and TCP/IP in a similar fashion to the Client-Server architecture described in [7]. However, the time-critical services (GSE/SV) are now separated and mapped to AVB streams. Both a stream configuration unit and a message filtering/forwarding unit are virtualized within the MMS framework (see, e.g., [21] for a discussion of such an approach). This allows the microgrid CC, once a node and its services have been discovered (using a protocol such as that suggested in [7], for example), to configure which specific GSE/SV messages are to be mapped to AVB and how the streams are configured, advertised, and connected/disconnected. Although in Figure 4 only one incoming stream from the CC is shown, multiple incoming streams (from both the CC and other DERs) can be realized.
However, although working groups are considering extensions of TSN/AVB functionality into industrial domains such as automotive and process control, standards have not progressed beyond the draft stage and concrete protocols and technology implementations are some way off [21,22]. At the current time, professional products for AVB are almost exclusively restricted to the transport of audio and video. In the next Section, a technique by which these limitations may be overcome is outlined.
4.3. Tunneling IEC 61850 Frames through AVB Streams
A tunneling protocol allows two participating and interconnected nodes to implement a network protocol which is not directly supported by the underlying network that is facilitating their exchange of information. Tunneling can be used to allow a “foreign” or non-supported protocol to run over a network that does not, by default, support that particular protocol. Tunneling involves the encoding of the raw traffic payload data into a different form by the sender, and decoding into the original form by the receiver(s). Encoding and decoding normally take place at the application layer, and although this usually violates the OSI layering, it is an effective and widely used means to enable a service not normally provided by an underlying network. In the context of the current work, we seek to tunnel the horizontal traffic generated by DERs through streaming audio links. As discussed previously, GSE, SV, and MMS messages are encoded using ASN.1 rules, and the entire application protocol data units (APDUs) are carried in the payload of an Ethernet frame. What is required, therefore, is a means to buffer, frame, and handle the multicast/unicast transmission and reception of a demarked sequence of bytes representing a GSE, SV, or MMS APDU over a current-generation AVB network.
On a current-generation AVB network, there are many possible stream and audio channel configurations, some of which are more efficient than others in terms of audio (application) data throughput vs. the protocol overheads and stream encoding [23]. The number of channels per stream can be varied, but is upper-bounded by the AVB-Ethernet packet size (over 60 is possible for encoding 24-bit pulse code modulation (PCM) audio). The number of streams may also be varied, but is likewise upper-bounded by the capacity of the AVB switch technology (many hundreds are available in practice on a 1 Gbps network). To enable the tunneling of a sequential block of bytes (a data frame) interspersed by one or more “no data” symbols, a stereo AM824 PCM 48 kHz channel conforming to IEC 61883 is employed. Such a channel can transfer 48 bits of audio data (24 bits on the left channel and 24 bits on the right) per audio sample, at a constant rate of 48,000 samples per second. One or more such channels can be packaged into an AVB stream and transported using the 1722 AVB-TP, enabling the transfer of raw audio data between a sender and multiple receivers; moreover, these channels and streams are configurable remotely using the 1722.1 AVDECC protocol. Although intended to carry PCM audio data, the 48 bits per sample can in fact encode any serialized bit or byte stream and can hence be used for tunneling. Given a sequence of bytes to transmit by the sender, each successive block of six data bytes is transferred in successive audio samples; the first three bytes are transferred as the left audio data and the next three as the right. The byte value 00h is reserved to signal a “no data” symbol, and at least one such symbol is inserted by the transmitter to signal an interframe gap and allow receivers to demark a sequential block of bytes in the serialized byte stream as a frame. Since the byte value 00h should not then appear as useable data in any frame, some encoding and decoding is required to prevent this symbol from appearing in each frame.
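The byte-to-sample mapping just described can be sketched as follows. This is a minimal illustration, assuming the frame has already been encoded so that it contains no 00h bytes (as described in the next subsection); function names are ours, not from any AVB API.

```python
def frame_to_samples(frame: bytes):
    """Serialize one encoded frame (assumed free of 0x00 bytes) into
    stereo 24-bit samples: six payload bytes per sample, three on the
    left channel and three on the right.  A trailing 0x00 "no data"
    symbol marks the interframe gap, and the final sample is padded
    with further no-data symbols."""
    stream = bytearray(frame) + b"\x00"       # interframe gap
    stream += b"\x00" * (-len(stream) % 6)    # pad to a whole sample
    return [(int.from_bytes(stream[i:i + 3], "big"),    # left channel
             int.from_bytes(stream[i + 3:i + 6], "big"))  # right channel
            for i in range(0, len(stream), 6)]

def samples_to_frames(samples):
    """Receiver side: flatten the samples back into a byte stream and
    demark frames at the reserved no-data symbol."""
    stream = b"".join(l.to_bytes(3, "big") + r.to_bytes(3, "big")
                      for l, r in samples)
    return [bytes(f) for f in stream.split(b"\x00") if f]
```

At 48,000 samples per second, each iteration of this mapping moves six payload bytes per sample period, which is the origin of the raw 48 × 48,000 bps channel rate used below.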
To achieve this encoding, the consistent overhead byte stuffing (COBS) algorithm is employed [24]. This procedure encodes frames so as to eliminate the symbol 00h, allowing its use to be reserved for framing purposes only. Receivers can then decode each received frame to recover the original data. Both encoding and decoding have linear time complexity, and the process of encoding a frame of length x bytes introduces not more than ⌈x/254⌉ stuff bytes into the processed frame. In other words, even in the worst case only one additional stuff byte is inserted for every 254 bytes of user data [22]. This results in a very small loss of useable bandwidth: the effective bandwidth is reduced to 254/255 of the raw bandwidth. Nevertheless, an effective bandwidth of 48 × 48,000 × 254/255 = 2,294,965 bps is achieved for each stereo channel using this tunneling approach. Since an SRP bandwidth reservation of 8,384,000 bps must be made for such a stream [22], eight streams may be reserved on any single 100 Mbps link (SRP ensures that not more than 75% of bandwidth is reserved for AVB) and the overall efficiency of the stream is 27.37%. Although this figure seems low, it is principally due to the combined overheads of Ethernet, IEC 61883, and IEEE 1722, and is comparable to the efficiency of the raw audio bit stream (27.48%) [23]. The effective bit rate of nearly 2.3 Mbps allows a 30-byte frame to be transmitted in 0.1046 ms. Adding the (fixed) 2 ms delay due to the use of synchronized presentation time, plus a worst-case additional latency of 125 μs due to AVB clock synchronization artifacts, the proposed tunneling approach should guarantee that such a message is reliably delivered from the publisher to each subscriber not more than 2.2296 ms after transmission has commenced.
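For concreteness, a minimal COBS encoder/decoder pair in the spirit of [24] is sketched below. It is an illustrative implementation, not the code running on the prototype device; the key property is that the encoded output contains no 00h bytes, so that symbol remains free for framing.

```python
def cobs_encode(data: bytes) -> bytes:
    """COBS-encode `data` so that the output contains no 0x00 bytes.
    Output is a sequence of groups: a code byte (offset to the next
    zero, at most 0xFF) followed by up to 254 non-zero data bytes."""
    out = bytearray([0])          # placeholder for the first code byte
    code_idx, code = 0, 1
    for b in data:
        if b:
            out.append(b)
            code += 1
        if b == 0 or code == 0xFF:
            out[code_idx] = code  # close the current group
            code_idx = len(out)
            out.append(0)         # placeholder for the next code byte
            code = 1
    out[code_idx] = code
    return bytes(out)

def cobs_decode(enc: bytes) -> bytes:
    """Invert cobs_encode, restoring the stripped zero bytes."""
    out, i = bytearray(), 0
    while i < len(enc):
        code = enc[i]
        out += enc[i + 1:i + code]
        i += code
        if code < 0xFF and i < len(enc):
            out.append(0)         # an implicit zero ended this group
    return bytes(out)
```

Because every code byte is at least 1 and all data bytes within a group are non-zero, the encoded stream is zero-free by construction, and the overhead matches the 254/255 bandwidth figure quoted above.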
4.4. Prototype Bit-W Device for S-Node Interfacing
In order to manage the streaming and regular data and provide a suitable application layer interface, a prototype “bump-in-the-wire” (Bit-W) device to interface between S-Nodes and a 100 Mbps AVB network switch has been developed. This Bit-W device provides an application interface to send data to a single AVB stream as a talker and also to receive data from a single AVB stream as a listener. The device consists of an XS1-L16128QFN multicore microprocessor from XMOS©, which hosts an IEEE 1722 compliant endpoint and several configurable hardware I2S PCM interfaces, along with a 50 MHz PIC32MX250F128C microcontroller co-processor from Microchip© which has I2S PCM and USB interfaces. The PIC microcontroller provides the application interface to the S-Node. For streaming data, the device carries out frame buffering using 8 kb transmit and receive FIFOs, performs COBS encoding and decoding, serialization/deserialization of encoded frames to/from a byte stream, and clocking of the bytes to/from the on-chip hardware PCM interface to/from the XMOS device PCM interface. The overall tunneling concept that is proposed is illustrated in Figure 5. Note that although the device could also manage regular IP/Ethernet transmissions with very little additional configuration, this was not considered for our prototype implementation; regular Ethernet traffic is brought directly from the S-Node to the switch. Stream configuration of the Bit-W devices is managed remotely by the microgrid CC using the IEEE 1722.1 AVDECC protocol. Free-to-use commercial-grade software such as UNOS Vision© and AVDECC drivers and libraries are now available to perform such remote discovery and connection control tasks for current-generation AVB networks.
Finally, it must be observed that since the rate at which GSE and SV messages could be delivered to the Bit-W device is potentially faster than the rate at which they are sent over the AVB tunnel, there is a risk of small queuing delays in the device due to time spent waiting in the buffer. Worst-case delays in this situation are, however, relatively simple to analyze using standard techniques [25]. The situation may be further simplified by observing that since delivery of messages tunneled through AVB is extremely reliable due to the use of reserved bandwidth (no loss due to congestion), the stringent exponentially increasing re-transmission requirements for GSE messages in the main IEC 61850 standards (see [11]) can be replaced with pseudo-periodic messaging.
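A crude back-of-envelope bound for such queuing delays can be sketched as follows. This is our own simplified model, not the formal analysis of [25]: it assumes equally sized frames, one COBS stuff byte and one interframe symbol of overhead per frame, and the fixed 2.125 ms presentation/synchronization latency from Section 4.3.

```python
EFFECTIVE_BPS = 48 * 48_000 * 254 / 255   # tunnel bandwidth (Section 4.3)
FIXED_LATENCY_MS = 2.125                  # presentation time + sync artifacts

def worst_case_delivery_ms(frame_bytes: int, backlog: int) -> float:
    """Crude worst-case delivery bound for a frame that arrives at the
    Bit-W device behind `backlog` equally sized frames already queued
    in the transmit FIFO.  Overhead model (assumption, not from the
    paper): +1 COBS stuff byte and +1 interframe symbol per frame."""
    wire_bytes = (frame_bytes + 2) * (backlog + 1)
    return wire_bytes * 8 / EFFECTIVE_BPS * 1000 + FIXED_LATENCY_MS
```

With an empty queue and a 30-byte frame this reproduces a figure close to the 2.23 ms bound derived earlier; each additional queued frame of the same size adds roughly 0.11 ms.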