1. Introduction
With the large-scale deployment of intelligent applications, distributed real-time systems, such as industrial control, avionics and in-vehicle control systems, need more bandwidth resources than the current bus-based communication technologies can provide [
1,
2]. Ethernet is an attractive alternative because of its high compatibility and bandwidth. However, traditional Ethernet only offers best-effort services that cannot meet the strict requirements of real-time systems for end-to-end delay, jitter and packet loss. Therefore, Deterministic Ethernet (DetEth) is proposed to enhance standard Ethernet [
3], which integrates mechanisms such as high-precision clock synchronization, time-sensitive scheduling, and frame duplication and elimination. It should be noted that Deterministic Networking (DetNet), as defined in IETF RFC 8655, is a cross-layer framework that encompasses deterministic implementations at both Layer 2 (link layer) and Layer 3 (network layer). The clock synchronization investigated in this paper specifically targets DetEth implementations, which constitute the Layer 2 deterministic networking component within this architectural framework. Time-Sensitive Networking (TSN) [
3] and Time-Triggered Ethernet (TTE) [
4] are two mainstream DetEth technologies. Researchers have also developed FlexTTE [
5], which is a low-cost deterministic Ethernet system based on commercial off-the-shelf (COTS) devices.
A clock synchronization protocol (CSP) establishes a global reference time among devices [
6,
7], which is essential for the determinism of communication in DetEth. For critical traffic with deterministic delay requirements, the time points of sending and receiving at each device along the forwarding path need to be planned in advance. The quality of synchronization is an important constraint input to the time-sensitive traffic planning algorithm and directly affects the scale of traffic the network can plan [
1]. Moreover, without a consensus on time, devices cannot judge whether received frames were sent at the pre-planned time by other devices and would discard critical frames, resulting in transmission errors [
8].
Different scenarios with DetEth have different demands on clock synchronization protocols in terms of synchronization accuracy, fault tolerance, availability, etc. In addition, there may be some limitations on the occupancy of computational and storage resources for protocol implementation. One protocol can hardly fulfill all of the requirements and constraints.
Therefore, numerous CSPs are designed to adapt to different scenarios. For example, AS6802 is a fault-tolerant protocol and typically applied in safety-critical TTE scenarios [
7]; 802.1AS is a precise protocol specified in TSN [
6]. Additionally, the same protocol may have different versions for various scenarios. Taking 802.1AS as an example, this protocol is modified and supplemented when TSN domain specifications are formulated for different applications, such as P802.1DG in the automotive domain, IEC/IEEE 60802 in the industrial automation domain, and P802.1DP in the aerospace domain. At the same time, the protocol is under continuous discussion and optimization; the current relevant versions include the published 802.1AS-2020, the submitted 802.1AS-Rev, and the 802.1ASdm draft under discussion.
Since clock synchronization is not one-size-fits-all [
9], protocol implementation needs to consider architectural flexibility and generality to facilitate protocol customization and modification. Consequently, current implementations of CSPs typically adopt a hardware–software co-design approach, such as LinuxPTP and ExcelForce-gPTP, both implementations of 802.1AS. However, the flexibility of these architectures is limited for several reasons: (i) the software is platform-dependent; LinuxPTP, for example, relies on the time subsystem of the Linux kernel; (ii) the hardware is non-programmable and can perform only fixed operations on fixed types of synchronization frames; (iii) the tightly coupled hardware–software architecture requires each distributed synchronization node to possess sufficient computational and storage capability. Instead, the synchronization computation should be customized and dynamic, defined by platform-independent software that operators write according to the synchronization requirements, while the hardware is limited to timestamping fixed fields in fixed types of packets. To enable the deployment of CSPs in resource-constrained systems, we should design a simple, efficient and programmable data plane that is easy to implement with commodity DetEth hardware components (e.g., switches and network interface cards), leaving the customized and complex synchronization computation to the software in the control plane.
Inspired by OpenFlow [
10], which enables a simple and efficient control of switches by separating control and data functions, we design and implement a new software-defined clock synchronization architecture called OpenSync. It provides a generic and efficient clock synchronization solution by separating the synchronization control plane and data plane (
Figure 1). Like OpenFlow, OpenSync keeps the data plane simple to implement and flexible to configure: it is composed of a time-information injector that can be programmed by operators and a local clock that supports both frequency correction and phase correction, which enables flexible and easy programming for different CSPs. OpenSync makes synchronization programming easier at the control plane by freeing operators from understanding the complex switching logic and parameter tuning of diverse synchronization solutions. In the control plane, OpenSync builds a synchronization library and exposes a programming interface that allows control applications to acquire time data and correct the clock. We also design a hardware clock estimation algorithm and an accurate delay compensation algorithm, enabling the control plane to read the data plane’s clock more accurately.
Although the clock synchronization protocol is not one-size-fits-all, we expect OpenSync to enable ‘one-hardware-supports-all-CSPs’ in DetEth and reduce the complexity and cost of deploying clock synchronization across different application scenarios.
In summary, this paper makes the following contributions.
We propose a software-defined clock synchronization architecture, OpenSync, to implement CSPs in DetEth efficiently and effectively, including a data plane which is programmable and protocol-independent and a control plane which is capable of accurately perceiving the data plane’s clock.
We implement the OpenSync prototype based on FPGA and three different clock synchronization methods as example implementations on top of OpenSync.
We build a fully functional testbed with a commodity tester and conduct extensive experiments to test the synchronization accuracy and protocol consistency of the OpenSync prototype.
The rest of the paper is organized as follows:
Section 2 analyzes the basic principles of clock synchronization and reveals the motivation and challenges;
Section 3 describes the design of OpenSync architecture;
Section 4 details the implementation of the OpenSync prototype and the practical application of OpenSync; evaluation results are presented and discussed in
Section 5;
Section 6 concludes this paper.
2. Background on Clock Synchronization
In this section, we first introduce some basic terminologies and analyze the fundamental principles of clock synchronization. We then elaborate on the requirements of clock synchronization in different scenarios. Finally, we discuss the challenges behind OpenSync in decoupling the synchronization control plane and data plane.
2.1. Terminologies
In this paper, we use the term clock to mean a time counter which is driven by the pulse signal from a physical oscillator and incremented at some frequency. We assume that each node in systems, either a switch or an end system, refers to a local clock to trigger time-related operations such as time-aware scheduling.
We use C_q(t) to denote the clock time of node q at a given real time t. Each oscillator is given a nominal frequency when produced, but oscillators with the same nominal frequency may run at different rates because of frequency drift, which is influenced by physical factors such as environmental temperature, voltage fluctuations and material aging. As a result, synchronized clocks should be resynchronized and corrected periodically; otherwise, the difference between them, called the offset, will grow over time.
There are two methods of adjusting the clock: phase correction and frequency correction. For phase correction, the calculated offset is added to or subtracted from the clock time. For frequency correction, the rate of the clock is accelerated or decelerated indirectly by adjusting the length of the tick generated by the oscillator, because the oscillator frequency cannot be changed manually. In some cases, the two approaches can be combined to optimize the accuracy of clock synchronization [
11].
Clock synchronization can be classified into internal and external synchronization based on whether the system is synchronized with an external clock source such as GNSS (e.g., GPS). In this paper, we focus on achieving internal synchronization in wired deterministic Ethernet, utilizing clock synchronization protocols based on Layer 2 frames.
2.2. Principles
During the clock synchronization process, nodes exchange frames containing local clock time data. Then, synchronization convergence algorithms utilize these data to calculate the offset between the local clock and the clocks of other nodes in the system. Finally, the local clock is adjusted to achieve synchronization.
We take the toy example shown in
Figure 2 to aid with the discussion about the principles. The system consists of a linear topology with three nodes, where messages between node p and node q need to be relayed by node m. Assume that the time of node p is selected as the global reference time, requiring other nodes to adjust their own local clocks according to the clock of node p.
Node p sends a frame containing the local timestamp t_p. After node q receives this frame at its local time t_q, it calculates the correcting offset using the method described in Equation (1), offset = t_q − t_p − D, where D represents the frame transmission delay.
The frame transmission delay consists of two parts: link delay D_link and residence delay D_res. Link delay refers to the delay incurred during the transmission of clock synchronization frames over a communication link, measured from when a frame is transmitted by the source node until it is received by the destination node. D_link can be calculated as half of the round-trip time using Equation (2), which is based on the assumption that the link delay is symmetric. Residence delay refers to the delay incurred by a synchronization frame during internal processing within a node, from reception through parsing, computation, and queuing until retransmission to the next node. The residence delay must be recorded by the relay node and included in the clock synchronization frame. In order to record the residence delay generated in a relaying device like node m, a dedicated field is always set in the CSP message. It is called CorrectionField in 802.1AS and TransparentClock in AS6802. In this paper, we use DelayRecord uniformly to represent this field for convenience, regardless of the protocol. The delay compensation algorithm we propose in
Section 3 also takes advantage of the DelayRecord field. The residence delay can be computed by Equation (3) as the difference between the frame’s retransmission and reception times at the relay node, and this value is added to the DelayRecord field of the relayed message.
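To make the arithmetic concrete, the following sketch implements the offset and delay computations of Equations (1)–(3) as described above; the timestamp names (t_send, t_recv, t1–t4) are illustrative conventions, not drawn from any specific protocol.

```c
#include <stdint.h>

/* Equation (1): offset between the local clock and the reference clock,
 * given the sender's timestamp, the receiver's timestamp and the
 * frame transmission delay D. All times are in nanoseconds. */
static inline int64_t calc_offset(int64_t t_send, int64_t t_recv, int64_t D)
{
    return t_recv - t_send - D;
}

/* Equation (2): link delay as half of the round-trip time, assuming a
 * symmetric link. t1/t4 are taken on the requester, t2/t3 on the responder. */
static inline int64_t calc_link_delay(int64_t t1, int64_t t2,
                                      int64_t t3, int64_t t4)
{
    return ((t4 - t1) - (t3 - t2)) / 2;
}

/* Equation (3): residence delay inside a relay node, accumulated into
 * the DelayRecord field before retransmission. */
static inline int64_t calc_residence_delay(int64_t t_in, int64_t t_out)
{
    return t_out - t_in;
}
```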
From the above analysis of the principles, it is evident that the frames used in CSPs need to carry two types of time information: precise timestamps and transmission delays. This necessitates that the synchronization data plane has the capability to inject time information.
2.3. Diverse Needs for Clock Synchronization
Different scenarios have varying requirements for clock synchronization. We describe common clock synchronization requirements and analyze them in the context of specific scenarios.
Accuracy. It is the maximum clock offset among all synchronized nodes in the system during operation. In DetEth, the higher the clock synchronization accuracy, the greater the scale of time-sensitive traffic the network can support. If the precision does not meet the scenario’s requirements, it may lead to issues such as frame loss and increased delay jitter for critical traffic. Currently, DetEth applications typically require the network to ensure synchronization precision at the microsecond or sub-microsecond level.
Complexity. In cost-sensitive scenarios, such as industrial control networks, there are many low-cost nodes. These synchronized nodes may lack computational power or have limited local storage resources, making them unable to perform complex clock synchronization protocol calculations and state storage. For instance, the 802.1AS protocol involves 64-bit floating-point operations and division operations when calculating different clock frequency ratios.
Fault-tolerance. In safety-critical scenarios such as avionics, the operating environment is harsh and filled with radiation and high-energy particle streams. Time synchronization protocol frames or the local clocks of nodes may be affected by the environment, leading to internal errors. Fault-tolerant clock synchronization protocols can ensure that erroneous time information from affected nodes does not propagate to other nodes and that synchronization among unaffected nodes is maintained.
2.4. Related Work
This paper focuses on the issue of implementing and deploying clock synchronization in a flexible and effective way. In this section, we discuss and compare the state-of-the-art works that are related to this paper. A software implementation offers better flexibility but lacks timing performance [
12], whereas custom hardware delivers better performance but has poor flexibility and scalability [
13,
14,
15]. As a result, it has gradually become a trend to implement clock synchronization by means of hardware/software co-design [
16,
17,
18,
19].
Sundial [
18] implements the most essential functions of exchanging synchronization messages and detecting failures in hardware such that it can synchronize frequently and quickly detect failures. Sundial relies on software components to take action once a failure is detected by invoking a failure handler in software which re-configures the hardware to transition to the backup plan. Huygens [
20] uses hardware to record the time information of packets and then processes the purified data with Support Vector Machine, a widely used and powerful classifier, to accurately estimate one-way propagation times in the software to achieve high-precision clock synchronization with a default sync interval of 2 s. Graham [
21] characterizes the local clock using commodity hardware sensors present in nearly every server and leverages these data to further improve clock accuracy using a software daemon which builds the model and compensates the temperature errors.
However, these studies only focus on the implementation of the specific clock synchronization solutions they propose, using hardware/software co-design so that the solutions can be conveniently optimized later. They cannot help implement and deploy a novel synchronization solution for different scenarios. To address this issue, software-defined networking (SDN) has been exploited in clock synchronization [
22,
23,
24]. SDTS [
22] provides customized time services based on IEEE1588 with different synchronization configurations and synchronization state control for network elements (NEs). It defines a programmable transition matrix of the state of synchronization and an output matrix in the controller to manipulate the behavior of NEs. NFV-TS (NFV Enhanced Time Synchronization) [
23] also builds time synchronization as a VNF and designs a synchronization controller to manage the synchronization processing and precision compensation together with the network controller and the proposed VNF. It can work on the network with full or no IEEE1588 support. VNSS (Virtualized Network Synchronization Service) [
24] deploys clock synchronization as a virtual network function using a centralized controller to collect the synchronization network information and properly sets up the domain to make sure that the synchronization path from the master clock to the slave clock traverses nodes with on-path IEEE1588 support.
Current research on software-defined clock synchronization mainly focuses on using SDN principles to manage the synchronization network, while protocol control and calculation are still implemented in the data plane, so the data plane hardware remains tied to a specific synchronization solution. Some of these works also assume that the master–slave synchronization model defined in IEEE 1588 is adopted in the networking system, which might lead to a bottleneck in fault tolerance.
Motivated by these studies, after analyzing the generic principles of clock synchronization in DetEth, we design a protocol-independent data plane abstraction in OpenSync to support programming various CSPs on a unified data plane.
2.5. Challenges
In this paper, we propose software-defined clock synchronization in DetEth. However, there are many challenges in decoupling the synchronization control plane from the data plane in terms of mechanism and strategy. In this research, we are particularly motivated by the following challenges in designing OpenSync.
Challenge I: providing a protocol-independent synchronization abstraction to support different CSPs. Different clock synchronization methods primarily differ in the following four aspects: (a) the format of the frames used by the protocol, which means devices need to modify different fields of the frame; (b) the types of time data used for synchronization calculations, such as timestamps and transmission delays; (c) the algorithms used to calculate correction values, such as master–slave offset calculation, fault-tolerant median algorithms, and fault-tolerant mean algorithms based on sliding windows; and (d) the methods for adjusting the local clock, such as phase correction or frequency correction. To decouple the control plane from the data plane, we need to provide a protocol-independent, flexible data plane synchronization abstraction. This will enable operators to implement different clock synchronization methods through software programming.
Challenge II: executing time-triggered operations between the unsynchronized control plane and data plane. The time-triggered operation is a type of operation that needs to be performed at a specified point in time, which is common and important in clock synchronization [
8]. For example, a frame is required to be dispatched when the clock is at 0. However, under the condition that the frame is constructed in the control plane, the actual dispatch point in time cannot be 0 precisely unless the software in the control plane shares the same notion of time with the data plane. The correct execution of such operations depends on the synchronization between the control plane and data plane. The reason for this challenge is that the control plane and data plane use different reference clocks: the former uses the clock in the software operating system, and the latter uses the local clock in the hardware. Moreover, the software cannot read the hardware clock accurately because this procedure is complicated and uncertain, which can be influenced by many factors such as the overhead of system calls, buffering in the kernel, network stack jitter and direct memory access transmission [
25,
26]. To address this challenge, we designed the ShadowClock mechanism and a delay compensation algorithm in the control plane (see
Section 3.3). By leveraging time data from the data plane, a clock mapping is constructed in the control plane. Before executing time-triggered operations in the data plane, the delay between the actual execution time and the scheduled time is calculated, and this delay is precisely recorded in the DelayRecord field of synchronization frames, thereby eliminating the impact of software-induced delay jitter on the synchronization protocol.
3. Opensync Design
In this section, we first give an overview of OpenSync and then describe the architecture in detail, with a focus on how to abstract clock synchronization in the data plane and how to synchronize the control plane with the data plane.
3.1. Overview
Existing implementations of clock synchronization in DetEth realize all protocol functionalities within the distributed devices in the data plane, and thus it is difficult and time consuming for them to be modified for new application scenarios with diverse requirements, as discussed in
Section 2.3. To address this issue, we propose OpenSync to enable software-defined clock synchronization. We consider the following principles when designing OpenSync.
Generality. OpenSync should support different synchronization frames, time data operations and clock-correcting methods used by existing protocols or potentially by new protocols in the future.
Efficiency. Considering the clock synchronization requirements in resource-constrained scenarios, OpenSync should provide a simple and efficient data plane design that neither consumes excessive computational and storage resources nor is difficult to implement on commercial DetEth hardware.
Precision. The control plane of OpenSync should be capable of precisely perceiving the local clocks of data plane devices. Otherwise, the time-triggered operations within CSPs may not execute correctly, thereby affecting the synchronization accuracy of the protocol and failing to meet the microsecond-level synchronization accuracy requirements.
The overview of OpenSync architecture is shown as
Figure 3. To meet the above design principles, OpenSync first separates the synchronization control plane from the data plane. OpenSync provides a programmable time-data injector (PTI) and a fine-grained calibration timer (FCT) in the data plane.The data plane functions defined by OpenSync (gray areas in in
Figure 3) are implemented locally on switches or end-system NICs. Each switch or end system only needs to operate on synchronization frames according to configuration without the need for packet parsing and protocol state maintenance or other protocol-related operations.
In the control plane, a clock synchronization development library is provided for operators to implement protocol control process of different clock synchronization methods by software programming. This library includes the functions that enable the control plane to accurately estimate the clock time in the data plane. It is important to note that the protocol control process is logically associated with each device in the data plane on a one-to-one basis. The control plane functions defined by OpenSync (blue areas in
Figure 3) can run either on the local CPU of each device or on the CPU of a remote controller, which will be illustrated more intuitively in
Section 4.2 with examples of OpenSync.
3.2. OpenSync Data Plane
OpenSync allows for more customized ways of injecting time data and more fine-grained methods to correct clocks in the data plane, so it can support a wide variety of clock synchronization methods.
3.2.1. Programmable Time-Data Injector
The PTI is used to inject time data into synchronization frames during the reception or transmission of frames by devices in the data plane.
Due to the differences in the types and positions of time data carried by frames in various CSPs, the PTI describes them using tuples in the form of <matchfield, type, position>, which are configured by the synchronization control program according to the protocol. (a) The matchfield is used to match the synchronization frames that need to be operated on; it supports wildcard matching and filtering because some CSPs use different subtypes of frames, which may carry different types of time data or none at all. (b) The type indicates the kind of time data to be injected: a precise timestamp or a transmission delay. (c) The position represents the offset of the time data within the frame, relative to the Ethernet frame header and measured in bytes. With the help of the PTI, OpenSync’s data plane devices do not need to store the format of synchronization frames or parse them. This makes the data plane protocol-independent while also reducing hardware design complexity and resource consumption.
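A minimal software model of a PTI entry and its match/inject steps might look as follows; the struct layout, field names, and 64-bit big-endian encoding are assumptions chosen for illustration, not OpenSync’s actual configuration format.

```c
#include <stdbool.h>
#include <stdint.h>

/* Kind of time data a PTI entry injects. */
enum pti_data_type { PTI_TIMESTAMP, PTI_TRANSMISSION_DELAY };

/* One <matchfield, type, position> tuple. */
struct pti_entry {
    uint8_t  match_value[2];  /* e.g., EtherType of the sync frame        */
    uint8_t  match_mask[2];   /* wildcard bits: 0 = don't care            */
    enum pti_data_type type;  /* which kind of time data to inject        */
    uint16_t position;        /* byte offset from the Ethernet frame header */
};

/* Wildcard match on the EtherType, which sits at bytes 12-13 of an
 * untagged Ethernet frame. */
static bool pti_match(const struct pti_entry *e, const uint8_t *frame)
{
    for (int i = 0; i < 2; i++)
        if ((frame[12 + i] & e->match_mask[i]) !=
            (e->match_value[i] & e->match_mask[i]))
            return false;
    return true;
}

/* Inject a 64-bit time value at the configured position (big-endian). */
static void pti_inject(const struct pti_entry *e, uint8_t *frame, uint64_t t)
{
    for (int i = 0; i < 8; i++)
        frame[e->position + (7 - i)] = (uint8_t)(t >> (8 * i));
}
```

With such a table, the hardware only compares masked bytes and overwrites a fixed-width field; it never needs to understand the protocol the frame belongs to.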
3.2.2. Fine-Grained Calibrated Timer
The FCT provides the synchronized local time for other local modules like the time-aware frame scheduler and accepts the phase correction or frequency correction configured by the control program in the control plane.
The FCT is composed of four main parts, as shown in
Figure 4.
(a) Counter is driven by the pulse signal generated by the oscillator. Its value is incremented by one tick on each pulse, so it grows monotonically and continuously. (b) LocalClock stores the time value that is synchronized with the global time in the network. (c) The value of Offset is added to or subtracted from the time value of LocalClock to support phase correction. (d) The value of TickLength is the interval between two consecutive pulse signals; its default value can be computed by Equation (4) as TickLength = 1/f_nom, where f_nom denotes the nominal frequency of the oscillator. For example, if the nominal frequency is 125 MHz, the default value of TickLength is 8 nanoseconds. During operation, the crystal oscillator exhibits a deviation Δf between its actual frequency and nominal frequency, thus requiring a corrected tick length to be calculated using Equation (5), i.e., TickLength′ = 1/(f_nom + Δf). As the crystal oscillator serving as the clock driver is a physical component, its frequency varies with operational temperature, supply voltage, and material aging; since the oscillator frequency cannot be modified precisely, clock frequency correction is implemented by adjusting the tick length instead.
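The FCT state and the two correction paths can be sketched as below; the fixed-point tick representation (units of 2^-16 ns) and the field names are assumptions for illustration, since a real hardware design would pick its own precision.

```c
#include <stdint.h>

/* Simplified FCT state: LocalClock, Offset, TickLength. */
struct fct {
    uint64_t local_clock_ns;   /* LocalClock, synchronized time           */
    int64_t  offset_ns;        /* Offset, applied for phase correction    */
    uint64_t tick_length_fns;  /* TickLength in units of 2^-16 ns         */
};

/* Equation (4): default tick length from the nominal frequency.
 * For f_nom = 125 MHz this yields exactly 8 ns. */
static uint64_t default_tick_fns(uint64_t f_nom_hz)
{
    return ((uint64_t)1000000000 << 16) / f_nom_hz;
}

/* Phase correction: accumulate the computed offset; it is applied to
 * LocalClock whenever the time is read. */
static void fct_phase_correct(struct fct *c, int64_t offset_ns)
{
    c->offset_ns += offset_ns;
}

static uint64_t fct_read(const struct fct *c)
{
    return c->local_clock_ns + (uint64_t)c->offset_ns;
}

/* Frequency correction: lengthen or shorten the tick slightly instead
 * of touching the physical oscillator (Equation (5) in effect). */
static void fct_freq_correct(struct fct *c, int64_t delta_fns)
{
    c->tick_length_fns += (uint64_t)delta_fns;
}
```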
3.3. OpenSync Control Plane
With OpenSync providing a simple and efficient data plane, operators can easily program clock synchronization and even invent new CSPs based on the time data acquired from the data plane. To ensure that time-triggered operations are executed correctly and accurately, OpenSync incorporates the ShadowClock mechanism and a delay compensation algorithm in the control plane.
3.3.1. ShadowClock
In OpenSync, the protocol control program in the control plane collects a large amount of time data from the data plane. Using these data, we design a ShadowClock mechanism to establish a mapping between the data plane clock and the control plane clock.
As described in Algorithm 1, the mechanism uses a data structure composed of a sampled pair of the two clocks’ timestamps and the rate ratio between the two clocks to record the mapping between them, and this structure is updated whenever the control plane receives time data of the local clock.
Algorithm 1: Updating ShadowClock.
Assuming that the protocol control program needs to execute a time-triggered operation at data plane time t, it can refer to the ShadowClock by Equation (6), which linearly maps the control plane time to the data plane time using the last recorded timestamp pair and the rate ratio between the two clocks. Owing to the transmission delay D in the exchange of time data between the control plane and the data plane, by the time the shadow is updated the data plane clock has already advanced beyond the recorded value. Therefore, the ShadowClock time always lags behind the data plane clock, much like a shadow that follows behind; this characteristic is the origin of the mechanism’s name.
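The ShadowClock structure and the linear mapping suggested by Algorithm 1 and Equation (6) can be sketched as follows; the field names and the extrapolation formula are our reading of the text, not OpenSync’s actual code.

```c
#include <stdint.h>

/* Mapping between the data plane clock and the control plane clock:
 * a sampled timestamp pair plus the rate ratio between the clocks. */
struct shadow_clock {
    uint64_t t_hw;   /* last data plane clock sample               */
    uint64_t t_sw;   /* control plane time at that sample          */
    double   ratio;  /* rate ratio between data and control plane  */
};

/* Algorithm 1 (sketch): refresh the shadow whenever a new pair of
 * timestamps arrives from the data plane. */
static void shadow_update(struct shadow_clock *s,
                          uint64_t t_hw, uint64_t t_sw, double ratio)
{
    s->t_hw  = t_hw;
    s->t_sw  = t_sw;
    s->ratio = ratio;
}

/* Equation (6) (sketch): estimate the data plane clock at control
 * plane time t_sw_now by linear extrapolation from the last sample. */
static uint64_t shadow_estimate(const struct shadow_clock *s,
                                uint64_t t_sw_now)
{
    return s->t_hw + (uint64_t)((double)(t_sw_now - s->t_sw) * s->ratio);
}
```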
3.3.2. Delay Compensation
Algorithm 1 enables the software to estimate the approximate time of the local clock in hardware. However, it cannot maintain accurate synchronization between the two, which still prevents the correct execution of time-triggered operations. Therefore, we propose a delay compensation algorithm to correct the errors introduced by the uncertainty of the communication delay, as described in Algorithm 2.
The delay compensation algorithm involves two parts: the control plane and the data plane. In OpenSync, the control plane initiates time-triggered operations. When a pre-scheduled trigger time is reached, it sends a packet to the data plane. When the data plane device is triggered by the control plane program, it first computes the delay by subtracting the scheduled trigger time from the actual execution time, and adds this delay to the DelayRecord field in the synchronization frame before sending it to the network.
Essentially, we extend the semantics of the DelayRecord field in the synchronization frame used for remote clock reading. Although the communication delay between software and hardware is uncertain and cannot be measured accurately, we record it in the DelayRecord field so that other devices can treat the frames as if they had been dispatched exactly at the scheduled trigger time. This way, time-triggered operations can be executed correctly.
Algorithm 2: Compensating Delay.
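The data plane side of Algorithm 2 reduces to a small piece of arithmetic; this sketch assumes a simplified frame structure carrying only the DelayRecord field.

```c
#include <stdint.h>

/* Simplified synchronization frame; real frames carry protocol fields
 * around the DelayRecord, elided here. */
struct sync_frame {
    uint64_t delay_record_ns;  /* the DelayRecord field */
};

/* Record how late the frame actually left relative to the pre-scheduled
 * trigger time, so receivers can treat it as if it had been dispatched
 * exactly on schedule. */
static void compensate_delay(struct sync_frame *f,
                             uint64_t t_scheduled, uint64_t t_actual)
{
    f->delay_record_ns += t_actual - t_scheduled;
}
```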
4. Prototype Implementation
In this section, we describe the implementation of the OpenSync prototype and three different clock synchronization methods built on top of OpenSync as examples to demonstrate the generality and efficiency of OpenSync.
4.1. Prototype
We prototype OpenSync based on the OpenTSN open-source project [
27], which can build a real-world verification environment to evaluate critical techniques in deterministic Ethernet, such as clock synchronization and time-aware scheduler.
Control Plane. The synchronization library and the synchronization control programs of three CSPs are written in C on Linux. We implement the ShadowClock mechanism in the library and encapsulate interfaces for operators to use when developing synchronization control programs. To ensure versatility and portability, we make no changes to the kernel and impose no constraints on it, and the synchronization control programs all run in Linux user space. The control plane clock used in Algorithm 1 is obtained by invoking the system call
clock_gettime(). The sending and receiving of frames are implemented with raw sockets (SOCK_RAW), which are available in most Linux systems. We use the time-sensitive management protocol (TSMP) [
27] to establish communication between the control plane and the data plane.
Data Plane. We use FPGA development boards to implement the hardware component of the data plane. The basic functions of switching and forwarding packets are provided by modules in OpenTSN. Following the OpenSync design, two components are developed in hardware: the fine-grained calibrated timer as a separate module and the programmable time-data injector as a plugin module in OpenTSN’s existing MAC. We choose to inject the time data in the MAC because the precision of the injected time data is one of the most critical factors affecting the performance of a clock synchronization protocol [
28], and the MAC can record the time-data of the frames in deterministic Ethernet precisely.
4.2. Examples
To demonstrate the generality and programmability of OpenSync, we implement three different clock synchronization methods as examples. Each of these methods meets different scenario requirements.
802.1AS. The 802.1AS protocol, part of the TSN suite, is crucial for industrial automation, automotive networking, and professional audio/video applications. It is based on the IEEE 1588 Precision Time Protocol (PTP); its key frame types include Sync, Follow_Up, Pdelay_Req, Pdelay_Resp, and Pdelay_Resp_Follow_Up, which help measure and correct time offsets and delays. Peer delay measurement directly measures delays between neighboring devices, maintaining sub-microsecond synchronization accuracy. 802.1AS supports redundancy and fault tolerance by switching to a backup grandmaster (GM) if the primary fails, ensuring continuous synchronization.
Centralized 802.1AS (C-802.1AS). Considering the presence of nodes without computational capabilities in industrial control scenarios, we design a centralized clock synchronization method based on IEEE 802.1AS. This method deploys a central computation node within the network, eliminating the need for each node to perform complex protocol computations. The central computation node collects time information from devices using protocol messages, calculates the clock offsets for each node in the network, and distributes the results back to the nodes.
AS6802. AS6802 defines a fault-tolerant clock synchronization protocol and is usually applied in safety-critical scenarios such as in-vehicle and avionics systems. The devices are referred to as the compression master (CM), synchronization master (SM) and synchronization client (SC). It uses three different types of frames: coldstart (CS), coldstart acknowledge (CA) and integrate (IN). The protocol state machine requires that each frame be received within a specified window. A fault-tolerant averaging algorithm based on a sliding window is used to calculate the correction of the local clock.
6. Conclusions and Future Work
In this paper, we proposed OpenSync, enabling software-defined clock synchronization in deterministic Ethernet. The architecture defines the protocol-independent data plane abstraction of clock synchronization so that developers can implement and deploy the clock synchronization solutions by software programming in the control plane. To validate the generality and efficiency of OpenSync, we have implemented a prototype system incorporating three representative CSPs, conducting comprehensive synchronization accuracy tests and protocol-consistency tests. Experimental results demonstrate that OpenSync-based CSPs maintain stable synchronization over extended durations while achieving seamless interoperability with equivalent protocols implemented through alternative technical approaches.
Our subsequent research will progress along two primary dimensions. (a) Control Plane Optimization involves enhancing the encapsulated programming interfaces of OpenSync’s control plane to streamline the development workflow for clock synchronization control programs, thereby facilitating systematic experimentation and validation of novel synchronization protocols. (b) Application Plane Expansion will extend the architecture through an application plane overlay on the existing control plane infrastructure to implement management applications including synchronization state monitoring, fault diagnosis, and recovery mechanisms, ultimately strengthening the synchronization stability and reliability from protocol management perspectives.