1. Introduction
Over the last few decades, the rapid expansion of inland navigation and the growing number of vessels have substantially increased inland waterway traffic, leading to more congestion and higher greenhouse gas emissions [
1]. Traffic conflicts, combined with a lack of efficient navigation and communication systems for ships and infrastructure, lead to an increased risk of accidents and unnecessary navigation delays. Moreover, as Maritime Autonomous Surface Ships (MASS) gradually move into commercial operation, there is an urgent demand for efficient communication systems between MASS and conventional ships, MASS and ports, and MASS and shore-based navigation systems to establish secure and efficient information interaction [
2]. The safe and efficient operation of MASS requires a variety of applications, such as automatic target recognition, simultaneous localization and mapping (SLAM), remote control, virtual reality, and other intelligent applications; all navigation safety, navigation efficiency, and navigation information services should be computed and responded to in an efficient, distributed manner. Therefore, designing a sound navigation support system and task service scheme is a critical issue for MASS.
To address the above-mentioned challenges in inland waterway traffic, the Internet of ships (IoS) paradigm has recently emerged [
3], being driven by the concepts of e-navigation [
4] and Internet of Things (IoT) technologies. The Internet of ships (IoS) is the interconnecting of sensing objects like ships, crews, cargoes, onboard equipment, waterway environment, waterway facilities, shore-based facilities, and other navigation elements, which are embedded with a variety of sensor and heterogeneous network technologies to enable these objects to collect and exchange data [
5]. According to the IoS principles, any ship, port, or element of the transportation process itself (cranes, containers, bridge navigation systems, ship engines, and buoys) can sense, communicate, and process information received from the outside world. When these facilities and devices are connected to each other, information can be shared and further processed among them, which lays the foundation for various intelligent applications in water transportation, including situational awareness, path planning, collaborative decision making, fault diagnosis, environmental monitoring, etc. Once realized, these intelligent applications will largely improve the safety, efficiency, and environmental sustainability of the inland waterway shipping industry.
However, the IoS only provides a paradigm for information interaction among the participants of inland waterway transportation systems and lacks a mechanism for collaborative, real-time data processing. The interaction among the participants of the IoS leads to a dramatic increase in the frequency and quantity of information exchanges, which places higher demands on the comprehensive and accurate sensing, wide-area data transmission, and real-time processing and analysis capabilities of communication networks. The integration of the IoS and cloud/edge computing provides an effective solution to meet the information interaction needs of future inland waterway intelligent applications and tasks. By connecting entities through various communication means and cloud/edge computing, the system offers large bandwidth, wide connectivity, high reliability, and low latency, which can support larger-scale connection of traffic participants, higher-frequency collection of environmental awareness data, and lower-latency issuance of regulation commands, efficiently empowering traffic data transmission and information interaction. Different computing and communication frameworks may suit different types of MASS applications with highly different quality of service (QoS) requirements for latency, computational resources, and storage capacity. The computational and communication workload of MASS may also vary with time and location under different architectures, which poses challenges for the resource management of computational nodes and the computational task management of MASS. Therefore, a well-designed computing framework and resource scheduling method are very important for the MASS system.
Thus, this paper proposes a cloud–shore–ship collaboration framework that integrates the IoS and cloud/edge computing and clarifies the implementation process of cloud–shore–ship collaboration to achieve high reliability and low latency demand adaptation and meet multidimensional resource collaborative scheduling and data privacy security. On this basis, a task-driven resource scheduling method based on deep Q-learning is proposed to achieve collaborative optimization of ship-side task offloading, cloud-side and shore-side computing resource allocation under high reliability and low latency constraints, and the performance advantages of the proposed method in terms of energy consumption, latency, and throughput are verified through case studies.
The main contributions of this paper are summarized as follows:
- (1)
We propose the cloud–shore–ship collaboration framework, a basic architecture to support MASS operation in new-generation shipping systems, and we clarify the task processing flow of cloud–shore–ship collaboration.
- (2)
On the basis of different tasks in ship intelligent navigation, we propose a task-driven resource allocation method for cloud–shore–ship collaboration, which can adaptively select the computational location of a task based on the priority of the task, the QoS of the task, and the computational resource status of the system nodes, to minimize the average task latency of the system and save energy.
- (3)
We constructed a simulation system to evaluate the proposed resource allocation method. By comparing multiple resource management framework methods and multiple resource allocation methods, the simulation results show that our proposed method can reduce the latency of computational tasks, reduce system energy consumption, and support the large-scale computational task service requirements of MASS.
The remainder of the paper is organized as follows.
Section 2 presents related work and the cloud–shore–ship collaboration framework.
Section 3 and
Section 4 introduce the system model and the task-driven resource allocation scheme in the cloud–shore–ship collaboration framework, and
Section 5 validates and analyzes the proposed algorithm by simulation examples. The conclusions are placed in
Section 6.
2. Related Work
Currently, many new technologies and methods are being introduced into inland shipping traffic and are crucial for enhancing traffic safety and efficiency, as explained below in three aspects: the IoS, cloud/edge computing, and resource scheduling methods. Finally, we describe the cloud–shore–ship collaboration framework.
2.1. Internet of Ships
The IoS is a new concept of IoT technology applied in the maritime field; currently, it has been studied by relevant institutions and scholars. The IoS is defined from several perspectives. The paper [
6] defined the IoS as a network connecting a ship to a shore-based facility with digital entities. Another work [
7] defined the IoS as a novel ecosystem, which incorporates all IoT-based emerging technological trends that are adapted for sea transportation. In summary, the IoS is a network that connects ships and other entities in water traffic and enables information interaction. The IoS enables real-time monitoring of the ship and its onboard equipment, and furthermore, by analyzing and processing the interactive data, it can enhance the safety and efficiency of the ship’s voyage. In addition, in the maritime sector, some countries and international organizations are already using real-time IoS platforms, such as e-navigation of IMO [
8], the Waterway Information Network of the USA [
9], the Ship Area Network of Korea [
10], the River Information System of Europe [
11], etc. Based on the IoS platform, many emerging applications, such as smart ships, smart traffic, smart ports, smart warehouses, etc., are being developed; they can improve the efficiency of information interaction of transportation and improve transportation safety.
2.2. Cloud/Edge Computing
Cloud Computing (CC) is a computing technology that has evolved over the past decade to perform large-scale and computationally intensive computing. Cloud computing is defined as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction” [
12]. CC offers many attractive features, such as parallel processing, virtualized resources, high scalability, and security. As a result, CC not only provides the ability to handle computationally intensive tasks but also provides low-cost infrastructure maintenance. However, today, MASS may generate large amounts of data underway, such as assisted driving videos captured by cameras, which can drive data volumes to the terabyte/petabyte level in seconds [
13]. In addition, a large number of MASS applications tend to be latency sensitive with fast big data processing and quick response requirements. For example, in autonomous navigation scenarios, the sensors and 3D cameras attached to the MASS can generate considerable data. Therefore, the cloud server must compute these data in milliseconds and send highly accurate operation commands to the steering system of the MASS [
14]. However, in terms of network topology, the distance between the cloud server and the MASS is long, and network congestion or queuing may lead to long delays and, in the worst case, traffic accidents.
Edge computing (EC) is an effective solution to the above problem, allowing for lower response latency because the computation is performed near the MASS (e.g., shore-based infrastructure in inland waterways) rather than being sent to a remote cloud [
15]. The access methods in EC have become more diverse in the MASS navigation scenario. For example, MASS can directly transfer the collected raw data to a computing unit in a robust shore-based infrastructure that can be deployed at a small base station near the navigation channel [
16]. In addition, cloudlet is one of the most typical edge computing platforms, with a cluster of resource-rich computing nodes located just a hop away from mobile users (MUs). Essentially, cloudlet is a self-managed data center. Internally, cloudlet resembles a multi-core computer cluster with gigabit internal connectivity and a high-bandwidth wireless local area network [
17]. Due to the proximity of the network, the cloudlet can provide low-latency and high-bandwidth wireless connectivity between servers and MASS, making it ideal for providing situational awareness, latency-intensive, and fast-moving managed services and applications. Different cloud/edge/fog computing architectures may be suitable to support different types of MASS applications with highly different quality of service (QoS) requirements for latency, computational resources, and storage capacity [
18]. The computational and communication workload of MASS may also vary with time and location under different architectures, which poses challenges for resource management of computational nodes and computational task management of MASS. Therefore, a well-designed computing architecture is very important for the MASS system.
2.3. Resource Allocation Method
Resource allocation is an important application of cloud/edge computing architecture, where resource-constrained ships hand over computing tasks to shore-side nodes or cloud computing centers [
19], for example, by means of task offloading to speed up processing and improve task delivery quality. Task offloading mainly includes two major issues: offloading decision and resource allocation.
In recent years, as EC and CC are used in transportation systems, their task offloading and resource allocation problems have attracted a lot of attention from researchers. The authors summarize in detail the EC architecture and computational offloading issues, including offloading decisions, computational resource allocation, and mobility management [
20]. The authors introduced the network architecture and deployment scheme of vehicle EC and compared and analyzed different deployment schemes [
21]. The authors classify the applications of autonomous vehicles according to their importance and allocate computing resources according to the importance of tasks to improve the efficiency of task execution [
The authors divide computing tasks into independent tasks and dependent tasks according to the relationship between tasks [
23], and the authors consider the impact of dependencies between tasks on offloading decisions and resource allocation [
24]. From the perspective of resource allocation methods, there are mainly heuristic algorithms [
25], Markov decision processes [
26], game theory [
27], etc. Reinforcement learning is a machine learning method in which a software agent continuously interacts with its environment, gradually approaching the optimal result through the reward or punishment signals fed back by the environment [
28]. At present, some studies have proposed the use of deep reinforcement learning methods to solve the task offloading problem in EC. Reference [
29] designed a value-based deep reinforcement learning scheme for the offloading problem of mobile edge computing. The value function is optimized through an iterative trial-and-error process to minimize the total cost of the entire system. Reference [
30] proposed a deep learning offloading scheme considering inter-task dependencies. The scheme uses an artificial neural network with gates to give the network the ability to learn long-term dependencies on input data so that historical data can be considered to estimate the current state.
To sum up, current methods for allocating computing resources to intelligent ships focus on solutions for fixed scenarios and overlook both the correlation between tasks and the mobility of ships. In fact, during the operation of MASS, the related computing tasks are often highly correlated, so the logical relationships between tasks deserve attention. At the same time, owing to its mobility, a MASS always operates within a dynamic communication network topology, a characteristic that must also be considered carefully. Aiming at these challenges, this paper models and analyzes the task characteristics and ship mobility in the process of MASS operation.
2.4. The Cloud–Shore–Ship Collaboration Framework
The cloud–shore–ship collaboration framework is an all-factor collaborative architecture for the new generation of shipping systems, which can shift water traffic from passive traffic response and optimization to active traffic management and service by continuously enhancing ship–ship, ship–shore, and shore–cloud collaboration [
31]. The cloud–shore–ship collaboration framework is shown in
Figure 1. Based on advanced information and communication technology, the architecture enhances the comprehensive sensing capability for both shore and ship domains. The architecture can make full use of the high reliability and low latency of high-speed communication networks such as 5G, as well as the powerful computing capability of the cloud layer, the deployment flexibility of the shore domain, and the comprehensive sensing and timely response advantages of the ship domain [
32].
The cloud layer covers the inland river traffic control center, traffic data center, traffic cloud platform, etc. It has rich storage and computing resources and can analyze, manage, and store traffic operation data, ship status data, and environmental sensing data uploaded from the shore–ship domain. The shore layer includes 5G base stations, edge servers, shore-side sensing devices, and intelligent infrastructure such as environment sensing, safety verification, and navigation control, which can meet the task response requirements of real-time and high reliability for intelligent navigation of ships in the region. The ship layer consists of intelligent sensors, a central gateway controller, and ship electronic and electrical equipment that make up the functional domain of the ship.
The basic task processing of cloud–shore–ship collaboration includes data collection, task request, task assignment, task unloading, task processing, and result feedback. As shown in
Figure 2, the details are described as follows:
First of all, shipboard and shore-based sensing equipment collects the ship’s tasks, ship navigation data, ship equipment status, channel data, environmental conditions, and other data and temporarily stores them in the ship or shore cache queue. The system then considers the real-time and non-real-time tasks generated by the ship during its voyage, such as real-time control commands, voice interactions, navigation map requests, and warning messages. These tasks can be handled on the ship or transmitted to the shore or the cloud via communication facilities. Task offloading covers two cases: ship–shore offloading and ship–shore–cloud offloading. The ship offloads the data in its task offload queue to the shore-side cache queue to wait for processing. Data with low computational complexity and latency sensitivity can be processed directly onshore, while computationally intensive data can be further offloaded from the shore to the cloud cache queue to wait for processing. The processing results from the cloud–shore side are stored in the result feedback queue and sent back to the ship’s result feedback queue.
The cloud–shore–ship collaboration framework provides effective support for the water transportation system by collaboratively scheduling the communication, computing, energy, and storage resources involved in the collaboration process and adapting to the high-reliability and low-latency data requirements of intelligent ship navigation. The cloud computing center analyzes and processes global information and cooperates with distributed shore-based facilities to improve the efficiency and intelligence of shipping. Therefore, considering different task priorities, task correlation, task QoS, and ship mobility in MASS shipping, we propose a task-driven computing offloading and resource allocation scheme for the cloud–shore–ship collaboration framework.
3. Task Processing Model and Problem Formulation
In this section, we describe the task processing model. First, we show the task model. Then, the ship mobility, communication, and computation models are discussed.
We consider the scenario of MASS navigating in inland waterways: a cloud–shore–ship computing network consisting of a cloud control center, shore-based infrastructure, and MASS, as shown in
Figure 3. The cloud computing center, with its powerful computing capacity and remote deployment, can be used to process large-scale, long-term data; it collects data from the widely distributed shore-based infrastructure, senses the traffic system’s operating conditions, and issues rational dispatch instructions. The shore-based infrastructure layer, which has moderate computing power and is deployed close to the MASS, complements the computing capability of the MASS and is envisioned to host a variety of sensors that complement the shipboard sensors. The different shore-based infrastructures are interconnected by fiber optic networks to extend the service range. In the ship computing layer consisting of MASS, small-scale, latency-critical data can be processed with the shore-based infrastructure, while large-scale, long-term data can be processed with the cloud computing center.
We assume that each MASS has computational tasks that can be executed at any layer. In addition, co-computation between the shore side and the ship side is considered to improve task offloading performance. Here, we define the set of all shore-based infrastructure units (SUs) and the set of ships as SU = {1, 2, …, M} and BE = {1, 2, …, N}, respectively. Each shore-based infrastructure unit deploys edge servers to provide computing power. For simplicity, we assume that each ship is covered by the nearest shore-based infrastructure, which performs collaborative computing through a ship-to-shore link, and that the shore-based infrastructures are connected to each other through a cloud–shore backbone network. Efficient application services are provided for MASS navigation through cloud–shore–ship collaboration.
3.1. Task Model
In this paper, we classify tasks/applications in MASS navigation into two types, independent tasks and cooperative tasks, based on whether the tasks are related to each other or not. It should be noted that our framework is not only for single MASS operations, but more for large-scale MASS operations, where tasks are also likely to be interrelated among multiple MASS.
Independent tasks. Independent tasks are tasks that do not depend on each other; this independence mainly arises between different MASS, which generate a large number of tasks during navigation: safety-related tasks such as collision warning, efficiency-related tasks such as route planning, and information service tasks such as situational awareness. When tasks are computed at each layer, the urgency with which they must be executed varies with the task type: MASS navigation safety-related tasks must be allocated computational resources first, while efficiency-related tasks can be served last.
For processing independent tasks, we need to group and prioritize them, which yields higher information completeness and accuracy. The processing flow of an independent task is shown in
Figure 4.
We define the tasks generated by different MASS in the system as A = {a1, a2, ⋯, an}. The tasks are first prioritized and then divided into four task queues along two dimensions: real-time versus non-real-time and high-frequency versus low-frequency.
The task priority reflects the importance of a MASS task, the task urgency expresses how soon it must be completed, and the task value density indicates its actual value. The task priority is therefore calculated by considering the task urgency and the task value density together. Since the raw task value does not reflect the actual value of the task, in order to compare tasks of different lengths, this paper expresses the task value density as the expected value obtained by the MASS per unit length of the task,

ρ_i = v_i / l_i,

where v_i is the expected value of task a_i and l_i is its length. The value density contributes to the priority of the task; the higher the value density, the higher the task priority.

We assume that the current time is t; the remaining execution time of a task with deadline d_i is S_i = d_i − t. The smaller the value of S_i, the more urgent the task.

Considering the two factors of remaining execution time and task value density, the task priority P_i can be calculated as

P_i = ρ_i / S_i.

The larger the value of P_i, the higher the priority. In the scheduling process, tasks with higher priority are scheduled earlier, so that high-value, time-urgent tasks are served first, which increases the rate at which tasks succeed within their deadlines and improves the overall task completion rate.
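As a concrete illustration, the priority rule above can be sketched in Python; the field names, the deadline-based urgency, and the ratio form P = (v/l)/S are assumptions for illustration, not necessarily the paper's exact formula:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    value: float      # expected value v of the task (assumed field)
    length: float     # task length l, e.g. input data size (assumed field)
    deadline: float   # absolute deadline (assumed field)

def priority(task: Task, now: float) -> float:
    """Higher value density and less remaining time -> higher priority.

    Combines the two factors from the text as P = (v / l) / S;
    the paper's exact combination may differ.
    """
    density = task.value / task.length          # value per unit task length
    remaining = max(task.deadline - now, 1e-9)  # remaining execution time S
    return density / remaining

tasks = [Task("collision_warning", 10.0, 2.0, 5.0),
         Task("route_planning", 6.0, 3.0, 30.0),
         Task("situational_awareness", 4.0, 4.0, 60.0)]

# Safety-critical, time-urgent tasks end up at the head of the queue.
queue = sorted(tasks, key=lambda a: priority(a, now=0.0), reverse=True)
print([a.name for a in queue])
```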
Cooperative tasks imply that the tasks from MASS are interrelated. We assume that each task a_i can be divided into several discrete subtasks, a_i = {a_i,1, a_i,2, ⋯, a_i,n}, where n represents the total number of subtasks belonging to task a_i. Each subtask can be processed at any layer of the cloud–shore–ship collaboration framework. In this paper, we investigate task interdependencies, which affect task execution decisions. Furthermore, according to reference [33], we classify task dependencies in MASS navigation into two basic logical topologies, i.e., linear and mesh. As shown in Figure 5, each task is divided into six subtasks. Figure 5a shows a linear serial task, in which a later subtask can start only after the previous subtask has completed; the total completion time of the entire task is the sum of the individual subtask completion times. In the mesh logic of Figure 5b, subtask 6 starts only after both subtask 3 and subtask 5 have completed. Therefore, when making decisions about tasks, the interdependence of individual subtasks must be considered.

We can represent the association relationship between subtasks as a directed acyclic graph and, furthermore, abstract it as an association matrix. Specifically, in this paper, we represent it by a lower triangular matrix I whose elements are 0 or 1: I_{i,j} = 1 if subtask i requires the output data of subtask j, and I_{i,j} = 0 otherwise. For example, the association matrix of the mesh topology in Figure 5 can be written down directly from its dependency edges.
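The association-matrix idea can be sketched as follows; the dependency edges chosen for the mesh case are illustrative, since the exact topology of Figure 5b is not reproduced here:

```python
def dependency_matrix(n, edges):
    """Lower-triangular 0/1 matrix I with I[i][j] = 1 iff subtask i
    requires the output of subtask j (j < i)."""
    I = [[0] * n for _ in range(n)]
    for i, j in edges:
        assert j < i, "dependencies must point to earlier subtasks"
        I[i][j] = 1
    return I

# Linear topology (Figure 5a): each subtask depends on its predecessor.
linear = dependency_matrix(6, [(i, i - 1) for i in range(1, 6)])

# A mesh-like topology in the spirit of Figure 5b (edges are illustrative):
# the last subtask waits for two independent branches to finish.
mesh = dependency_matrix(6, [(1, 0), (2, 1), (3, 0), (4, 3), (5, 2), (5, 4)])

def total_completion_time(I, durations):
    """Earliest finish time of the whole task under the precedence matrix."""
    n = len(durations)
    finish = [0.0] * n
    for i in range(n):  # lower-triangular => index order is a topological order
        start = max((finish[j] for j in range(i) if I[i][j]), default=0.0)
        finish[i] = start + durations[i]
    return max(finish)

print(total_completion_time(linear, [1] * 6))  # serial: 6 time units in total
print(total_completion_time(mesh, [1] * 6))    # parallel branches overlap
```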
3.2. Ship Mobility Model
One of the challenges of MASS is the mobility of ships on inland waterways, which leads to instability of shipboard resources; a MASS can perform computational offloading only while within the coverage area of an SU. We assume that MASS enter the cloud–shore–ship collaboration system according to a Poisson process with arrival rate λ. Each MASS may generate task requests from time to time. Therefore, for a given route segment s, the number of tasks generated per time unit follows a compound random process,

Λ_s = Σ_{i=1}^{N_s} A_i,

where N_s denotes the number of MASS on route segment s and A_i is the number of tasks generated by MASS i per time unit. According to existing research [32], jointly considering the MASS arrival rate and the randomness of task generation shows that the task arrival process of the whole system does not obey a Poisson distribution, a further indication of the unevenness of task generation in the MASS system.

We define the contact interval of each MASS in the system as the time interval during which the MASS is within the coverage of the shore-based infrastructure, [t_i^a, t_i^d], where t_i^a and t_i^d represent the arrival and departure times, respectively. In order to guarantee the availability of computing resources, offloaded computing tasks need to be completed before the MASS leaves the system.
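A minimal simulation of the compound arrival process described above might look like this (the arrival rate and the per-ship task distribution are illustrative assumptions):

```python
import math
import random

random.seed(7)

def poisson(lam):
    """Knuth's method for sampling a Poisson random variate."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def tasks_in_slot(lam_ships=3.0, max_tasks_per_ship=4):
    """Compound random process: a Poisson number of MASS arrive per slot,
    and each ship i independently generates A_i tasks (uniform here)."""
    n_ships = poisson(lam_ships)
    return sum(random.randint(0, max_tasks_per_ship) for _ in range(n_ships))

samples = [tasks_in_slot() for _ in range(10000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))  # close to E[n_ships] * E[A_i] = 3.0 * 2.0 = 6.0
```

The per-slot totals are overdispersed relative to a plain Poisson process, which mirrors the unevenness of task generation noted in the text.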
3.3. Computation Model
At time slot t, the ship’s task offloading decision consists of two components: (1) shore-based edge server selection, i.e., which shore-based edge server is selected for task offloading; (2) computation mode selection, i.e., whether the data are computed by cloud computing or shore-based edge computing. If cloud computing is selected, the data offloaded to the shore-based edge server are further offloaded to the cloud server for processing.
Local ship computation model. The computational capability of MASS i is defined as the number of CPU cycles per second, denoted f_i^loc. Thus, the local computation delay of task a_i, which requires c_i CPU cycles, is

T_i^loc = c_i / f_i^loc.

The energy consumption of MASS i for executing a_i is calculated by the classic model used in [32],

E_i^loc = κ (f_i^loc)^2 c_i,

where the coefficient κ can be obtained by long-term observation and measurement.
Shore-based computation. The computing capability (i.e., computing resources) of SU m, denoted F_m, is distributed proportionally among all ships (BEs) that request SU m. In this article, the computing resource of SU m allocated to ship i is

f_{m,i} = (w_i / Σ_{j∈B_m} w_j) F_m,

where Σ_{j∈B_m} w_j is the sum of the computing resource weight parameters of all BEs requesting SU m. The computation delay of a_i executed by SU m is formulated as

T_i^su = c_i / f_{m,i}.

Each ship focuses on minimizing its own energy consumption and does not account for the cost incurred by the SU; therefore, the energy consumption at the ship is zero when SU m is executing task a_i.

Cloud computation. Compared to SUs, we assume that the cloud has enough resources to respond to the requests of MASS and SUs and can therefore process arbitrarily many tasks in parallel. The computational latency of the cloud executing a_i can be expressed as

T_i^cloud = c_i / f^cloud,

where f^cloud is the computing capability provided by the cloud. Similarly, the ship does not account for the cost incurred by the cloud, so its energy consumption is zero when the cloud is executing task a_i.
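The three computation models can be sketched together; all frequencies, cycle counts, and the energy coefficient below are illustrative values, not measurements from the paper:

```python
def comp_delay(cycles, freq_hz):
    """T = c / f: required CPU cycles divided by the allocated CPU frequency."""
    return cycles / freq_hz

def local_energy(cycles, freq_hz, kappa=1e-27):
    """Classic computation-energy model E = kappa * f^2 * c; the chip-dependent
    coefficient kappa is obtained by measurement (value here is illustrative)."""
    return kappa * freq_hz ** 2 * cycles

def proportional_share(weights, i, capacity):
    """An SU's computing capability split proportionally among requesting ships."""
    return capacity * weights[i] / sum(weights)

cycles = 2e9                                            # cycles needed by one task
t_ship = comp_delay(cycles, 1e9)                        # 2.0 s on the ship (1 GHz)
f_alloc = proportional_share([1.0, 1.0, 2.0], 2, 8e9)   # 4 GHz slice of an 8 GHz SU
t_shore = comp_delay(cycles, f_alloc)                   # 0.5 s on shore
t_cloud = comp_delay(cycles, 32e9)                      # cloud assumed far faster
e_ship = local_energy(cycles, 1e9)                      # energy of local execution
print(t_ship, t_shore, t_cloud, e_ship)
```

With these assumed numbers, offloading trades the long local computation delay for the transmission cost modeled in Section 3.4.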
Ship i maintains a local cache queue to store data that have not yet been offloaded. Assume that the amount of data collected by ship i at time slot t is A_i(t). Then the backlog of the local cache queue Q_i^loc evolves as

Q_i^loc(t+1) = max[Q_i^loc(t) − D_i(t), 0] + A_i(t),

where D_i(t) denotes the amount of data offloaded by ship i at time slot t; it can be expressed as

D_i(t) = τ B_i log2(1 + p_i h_i / (I_i + σ^2)),

where p_i indicates the transmission power of ship i; B_i, h_i, and I_i denote the transmission bandwidth, channel gain, and interference between the ship and the communication base station, respectively; σ^2 is the noise power; and τ is the slot length.
Each shore-based edge server, as well as the cloud server, maintains a separate cache queue for ship i to store data that have been offloaded but not yet processed. The shore-side cache queue on shore-based edge server m and the cloud-side cache queue on the cloud server are updated in the same form as the local queue, with their arrivals determined by the offloading decisions, where we define x_{i,m}(t) ∈ {0,1} and y_i(t) ∈ {0,1} as the decision variables of task offloading: x_{i,m}(t) = 1 indicates that at time slot t, ship i selects shore-based edge server m for task offloading, and x_{i,m}(t) = 0 otherwise; y_i(t) = 1 indicates that at time slot t, ship i selects cloud computing, and y_i(t) = 0 otherwise.
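A one-slot update of the three cache queues, with the offloading decision routing data to the shore or cloud queue, might be sketched as follows (all quantities are illustrative):

```python
def step(q_ship, q_shore, q_cloud, arrived, offloaded, to_cloud,
         served_shore, served_cloud):
    """One-slot update of the ship/shore/cloud cache queues.

    q_*          backlogs at the start of the slot (bits)
    arrived      data collected by the ship this slot, A(t)
    offloaded    data drained from the ship queue this slot, D(t)
    to_cloud     1 if the ship selects cloud computing, else shore processing
    served_*     data processed by the shore / cloud servers this slot
    """
    moved = min(offloaded, q_ship)                   # cannot offload more than stored
    q_ship = max(q_ship - offloaded, 0) + arrived    # Q(t+1) = max[Q(t)-D(t),0]+A(t)
    if to_cloud:
        q_cloud = max(q_cloud + moved - served_cloud, 0)
    else:
        q_shore = max(q_shore + moved - served_shore, 0)
    return q_ship, q_shore, q_cloud

q = (5.0, 0.0, 0.0)
q = step(*q, arrived=2.0, offloaded=3.0, to_cloud=0,
         served_shore=1.0, served_cloud=0.0)
print(q)
```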
3.4. Communication Model
Based on the time-slot model, the total optimization time is divided into T equal-length time slots; the set is denoted as {1, 2, ⋯, T} and the length of each time slot is τ. It is assumed that the channel state remains constant within a time slot and varies dynamically between time slots [34]. Similar to the computational resource allocation policy, the communication resources of SU m (i.e., its data transfer rate R_m) are allocated proportionally among all BEs requesting SU m. Thus, the data transfer rate allocated to ship i is

r_{m,i} = (w_i / Σ_{j∈B_m} w_j) R_m.

The transmission delay of ship i offloading a_i, with data size d_i, to SU m is formulated as

T_i^tr = d_i / r_{m,i},

and the energy consumption of ship i for offloading a_i to SU m is

E_i^tr = p_i T_i^tr.

Ships cannot directly request the cloud. Given the data transmission rate R^c between SU m and the cloud, the communication delay for SU m offloading a_i to the cloud can be formulated as

T_i^{s2c} = d_i / R^c.

Therefore, the total transmission latency between ship i and the cloud is T_i^tr + T_i^{s2c}.
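The communication model can be sketched as below; the link-budget numbers and the backbone rate are assumptions for illustration only:

```python
import math

def shannon_rate(bandwidth_hz, tx_power_w, channel_gain, interference_w,
                 noise_w=1e-13):
    """Achievable uplink rate r = B * log2(1 + p*h / (I + N)) in bit/s."""
    return bandwidth_hz * math.log2(
        1 + tx_power_w * channel_gain / (interference_w + noise_w))

def offload_latency(data_bits, rate_ship_shore, rate_shore_cloud=None):
    """Ship-to-shore transmission delay, plus the shore-to-cloud hop when the
    task is forwarded to the cloud (ships cannot reach the cloud directly)."""
    t = data_bits / rate_ship_shore
    if rate_shore_cloud is not None:
        t += data_bits / rate_shore_cloud
    return t

r = shannon_rate(20e6, 0.5, 1e-7, 1e-9)   # 20 MHz link, illustrative link budget
t_shore = offload_latency(1e6, r)         # ship -> shore only
t_cloud = offload_latency(1e6, r, 1e9)    # ship -> shore -> cloud backbone hop
energy = 0.5 * t_shore                    # E = p * transmission time
print(r, t_shore, t_cloud, energy)
```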
The end-to-end latency of the task offloading process consists of the queuing latency of the ship’s local cache queue, the transmission latency, the queuing latency of the shore-side cache queue and the cloud-side cache queue, the computation latency, and the result feedback latency. Since the queuing delay accounts for a relatively large portion of the end-to-end delay, ultra-reliable and low-latency communication constraints are imposed on the ship-side, cloud-side, and shore-side queuing delays to ensure the effectiveness and timeliness of task offloading.
3.5. Problem Formulation
Task completion time is an important metric for evaluating the performance of an application, indicating the time required for the entire process from data input to result output. We discuss the execution time of the whole application in terms of its components executing locally and on the edge server, respectively.
This paper aims to solve the cloud–shore–ship multidimensional heterogeneous resource scheduling optimization problem, comprising the task offloading problem on the ship side and the joint optimization of computing resource allocation on the shore and in the cloud, in order to maximize the network throughput under the task delay and energy consumption constraints on the ship side. The multidimensional heterogeneous resource scheduling optimization problem can be modeled as a constrained optimization over the decision variables, where X = {x_{i,m}(t), y_i(t)} denotes the task offloading decision vector and F denotes the computing resource allocation decision vector of the shore-based edge servers. Constraints C1 and C2 state that, at each time slot, each vessel can select only one shore-based edge server for data offloading and only one computation mode. C3 and C4 are the computing resource constraints of the shore-based edge servers and the cloud servers, respectively. C5 and C6 are the energy consumption and task delay constraints, respectively.
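The paper solves this problem with deep Q-learning; as a toy illustration of the underlying idea, the following tabular Q-learning sketch learns an offloading policy on a simplified MDP whose states, latencies, and dynamics are entirely illustrative:

```python
import random

random.seed(0)

ACTIONS = ("ship", "shore", "cloud")                      # where to execute a task
BASE_LATENCY = {"ship": 2.0, "shore": 0.6, "cloud": 1.0}  # toy per-task latencies (s)

def env_step(load, action):
    """Toy dynamics: shore latency grows with the shore queue level;
    offloading to shore lengthens its queue, other choices drain it."""
    latency = BASE_LATENCY[action] + (0.5 * load if action == "shore" else 0.0)
    next_load = min(load + 1, 3) if action == "shore" else max(load - 1, 0)
    return next_load, -latency                            # reward = negative latency

Q = {(s, a): 0.0 for s in range(4) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1

for episode in range(3000):
    load = random.randint(0, 3)                  # exploring starts cover all states
    for _ in range(20):
        if random.random() < eps:                # epsilon-greedy action selection
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(load, x)])
        nxt, r = env_step(load, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(load, a)] += alpha * (r + gamma * best_next - Q[(load, a)])  # Q-learning
        load = nxt

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(4)}
print(policy)  # shore is preferred while its queue is short, cloud when congested
```

The deep variant in the paper replaces the Q table with a neural network so that large, continuous state spaces (queue backlogs, channel states, task attributes) can be handled.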
6. Conclusions and Future Works
In this paper, we build a cloud–shore–ship collaborative computing architecture that integrates the Internet of ships and cloud/edge computing for inland waterway intelligent ship navigation scenarios. According to task type, delay requirement, and synergy, this architecture allows the computational tasks of ship intelligent navigation to be processed locally on the ship, at a shore-based edge node, or at the cloud center. Building on previous research, we consider task relationships, task priority, and ship mobility; establish the ship, shore, and cloud computing models, respectively; design jointly optimized delay and energy consumption objective functions; and realize adaptive task offloading under the evaluation of each resource. In addition, this paper proposes a task offloading method based on artificial intelligence for the dynamic prediction of computing resources. Simulation results show that the proposed scheme provides low latency and low energy consumption under high task concurrency on the ship.
Future research can focus on network security, communication failure, and task uncertainty. For the operation of MASS, network security and communication redundancy are indispensable, and how to ensure the safe navigation of MASS under extreme conditions needs to be studied seriously. Moreover, the computing tasks arising during MASS operation are often highly uncertain, which also deserves careful attention.