Article

Task-Driven Computing Offloading and Resource Allocation Scheme for Maritime Autonomous Surface Ships Under Cloud–Shore–Ship Collaboration Framework

1 School of Electronic Information Engineering, Henan Institute of Technology, Xinxiang 453000, China
2 School of Navigation, Wuhan University of Technology, Wuhan 430070, China
3 State Key Laboratory of Maritime Technology and Safety, Wuhan University of Technology, Wuhan 430070, China
4 Sanya Science and Education Innovation Park, Wuhan University of Technology, Sanya 572000, China
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2025, 13(1), 16; https://doi.org/10.3390/jmse13010016
Submission received: 9 December 2024 / Revised: 23 December 2024 / Accepted: 25 December 2024 / Published: 26 December 2024
(This article belongs to the Section Ocean Engineering)

Abstract

Currently, Maritime Autonomous Surface Ships (MASS) have become one of the most attractive research areas in the shipping and academic communities. Based on ship-to-shore and ship-to-ship communication networks, they can exploit diversified and distributed resources, such as shore-based facilities and cloud computing centers, to execute a variety of ship applications. Due to the increasing number of MASS and the asymmetrical distribution of traffic flows, transportation managers must design an efficient cloud–shore–ship collaboration framework and a smart resource allocation strategy to improve the performance of the traffic network and provide high-quality applications to the ships. Therefore, we design a cloud–shore–ship collaboration framework that integrates ship networking and cloud/edge computing, and we design the corresponding task collaboration process. It can effectively support the collaborative interaction of distributed resources in the cloud, onshore, and onboard. Based on the global information of the framework, we propose an intelligent resource allocation method based on Q-learning that combines the relevance, QoS characteristics, and priority of ship tasks. Simulation experiments show that our proposed approach can effectively reduce task latency and system energy consumption while supporting large-scale task concurrency. Compared with baseline methods, the proposed algorithm reduces the task processing delay by at least 15.7% and the task processing energy consumption by 15.4%.

1. Introduction

Over the last few decades, with the rapid expansion of inland navigation and the increase in the number of vessels, inland river shipping has grown significantly, leading to increased traffic congestion and greenhouse gas emissions [1]. Traffic conflicts, combined with a lack of efficient navigation and communication systems for ships and infrastructure, lead to an increased risk of accidents and unnecessary navigation delays. Moreover, as Maritime Autonomous Surface Ships (MASS) gradually move into commercial operation, there is an urgent demand for efficient communication systems between MASS and conventional ships, between MASS and ports, and between MASS and shore-based navigation systems to establish secure and efficient information interaction [2]. The safe and efficient shipping of MASS requires a variety of applications, such as automatic target recognition, simultaneous localization and mapping (SLAM), remote control, virtual reality, and other intelligent applications; all navigation safety, navigation efficiency, and navigation information services should be computed and responded to in an efficient, distributed manner. Therefore, the design of a reasonable navigation support system and task service scheme is a very important issue for MASS.
To address the above-mentioned challenges in inland waterway traffic, the Internet of Ships (IoS) paradigm has recently emerged [3], driven by the concepts of e-navigation [4] and Internet of Things (IoT) technologies. The IoS is the interconnection of sensing objects such as ships, crews, cargoes, onboard equipment, the waterway environment, waterway facilities, shore-based facilities, and other navigation elements, which are embedded with a variety of sensor and heterogeneous network technologies that enable these objects to collect and exchange data [5]. According to the IoS principles, any ship, port, or element of the transportation itself, including cranes, containers, the bridge navigation system, the ship engine, and buoys, can sense, communicate, and process information received from the outside world. When these facilities and devices are connected to each other, information is shared and further processed among them, which lays the foundation for various intelligent applications in water transportation, including situational awareness, path planning, collaborative decision making, fault diagnosis, environmental monitoring, etc. Once realized, these intelligent applications will largely improve the safety, efficiency, and environmental sustainability of the inland waterway shipping industry.
However, the IoS only provides a paradigm for information interaction among participants in inland waterway transportation systems and lacks a mechanism for collaborative, real-time processing of information. The interaction among the participants of the IoS leads to a dramatic increase in the frequency and quantity of information exchanges, which places higher demands on the comprehensive and accurate sensing, wide-area data transmission, and real-time processing and analysis capabilities of communication networks. The integration of the IoS and cloud/edge computing provides an effective solution to meet the information interaction needs of future intelligent inland waterway transportation applications and tasks. By connecting entities through various communication means and cloud/edge computing, the system offers large bandwidth, wide connectivity, high reliability, and low latency, which can support larger-scale traffic participant connection, higher-frequency environmental awareness data collection, and lower-latency regulation command issuance, and can efficiently empower traffic data transmission and information interaction. Different computing and communication frameworks may be suitable for supporting different types of MASS applications with highly different quality of service (QoS) requirements for latency, computational resources, and storage capacity. The computational and communication workload of MASS may also vary with time and location under different architectures, which poses challenges for the resource management of computational nodes and the computational task management of MASS. Therefore, a well-designed computing framework and resource scheduling method are very important for the MASS system.
Thus, this paper proposes a cloud–shore–ship collaboration framework that integrates the IoS and cloud/edge computing and clarifies the implementation process of cloud–shore–ship collaboration, in order to adapt to high-reliability and low-latency demands and to support multidimensional collaborative resource scheduling and data privacy security. On this basis, a task-driven resource scheduling method based on deep Q-learning is proposed to achieve collaborative optimization of ship-side task offloading and cloud-side and shore-side computing resource allocation under high-reliability and low-latency constraints, and the performance advantages of the proposed method in terms of energy consumption, latency, and throughput are verified through case studies.
The main contributions of this paper are summarized as follows:
(1)
We propose the cloud–shore–ship collaboration framework, a basic architecture supporting MASS operation in new-generation shipping systems, and we clarify the task processing flow of cloud–shore–ship collaboration.
(2)
Based on the different tasks in ship intelligent navigation, we propose a task-driven resource allocation method for cloud–shore–ship collaboration, which can adaptively select the computation location of a task based on the task's priority, its QoS, and the computational resource status of the system nodes, so as to minimize the average task latency of the system and save energy.
(3)
We construct a simulation system to evaluate the proposed resource allocation method. By comparing multiple resource management frameworks and multiple resource allocation methods, the simulation results show that our proposed method can reduce the latency of computational tasks, reduce system energy consumption, and support the large-scale computational task service requirements of MASS.
The remainder of the paper is organized as follows. Section 2 presents related works and cloud–shore–ship collaboration framework. Section 3 and Section 4 introduce the system model and task-driven resource allocation scheme in cloud–shore–ship collaboration framework, and Section 5 validates and analyzes the proposed algorithm by simulation examples. The conclusions are placed in Section 6.

2. Related Work

Currently, many new technologies and methods have been introduced into inland shipping traffic and are crucial for enhancing traffic safety and efficiency, as explained below in three aspects: the IoS, cloud/edge computing, and resource scheduling methods. Finally, we describe the cloud–shore–ship collaboration framework.

2.1. Internet of Ships

The IoS is a new concept applying IoT technology in the maritime field, and it has been studied by relevant institutions and scholars. The IoS is defined from several perspectives. The paper [6] defined the IoS as a network connecting a ship to a shore-based facility with digital entities. Another work [7] defined the IoS as a novel ecosystem that incorporates all IoT-based emerging technological trends adapted for sea transportation. In summary, the IoS is a network that connects ships and other entities in water traffic and enables information interaction. The IoS enables real-time monitoring of the ship and its onboard equipment; furthermore, by analyzing and processing the interactive data, it can enhance the safety and efficiency of the ship's voyage. In addition, in the maritime sector, some countries and international organizations are already using real-time IoS platforms, such as the IMO's e-navigation [8], the Waterway Information Network of the USA [9], the Ship Area Network of Korea [10], and the River Information Services of Europe [11]. Based on the IoS platform, many emerging applications, such as smart ships, smart traffic, smart ports, and smart warehouses, are being developed; they can improve the efficiency of information interaction in transportation and improve transportation safety.

2.2. Cloud/Edge Computing

Cloud Computing (CC) is a computing technology that has evolved over the past decade to perform large-scale and computationally intensive computing. Cloud computing is defined as "a model that allows ubiquitous, convenient, on-demand network access to many configurations of computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and distributed with minimal administrative effort or service provider interaction" [12]. It has many attractive features, such as parallel processing, virtualized resources, high scalability, and security. As a result, CC not only provides the ability to handle computationally intensive tasks but also offers low-cost infrastructure maintenance. However, today's MASS may generate large amounts of data underway, such as assisted-driving videos captured by cameras, which can accumulate to the terabyte/petabyte level within seconds [13]. In addition, a large number of MASS applications tend to be latency sensitive, with fast big data processing and quick response requirements. For example, in autonomous navigation scenarios, the sensors and 3D cameras attached to the MASS can generate considerable data; the cloud server must compute these data in milliseconds and send highly accurate operation commands to the steering system of the MASS [14]. However, in terms of network topology, the distance between the cloud server and the MASS is long, and network congestion or queuing may lead to long delays and, in the worst case, traffic accidents.
Edge computing (EC) is an effective solution to the above problem, allowing for lower response latency because the computation is performed near the MASS (e.g., shore-based infrastructure in inland waterways) rather than being sent to a remote cloud [15]. The access methods in EC have become more diverse in the MASS navigation scenario. For example, MASS can directly transfer the collected raw data to a computing unit in a robust shore-based infrastructure that can be deployed at a small base station near the navigation channel [16]. In addition, cloudlet is one of the most typical edge computing platforms, with a cluster of resource-rich computing nodes located just a hop away from mobile users (MUs). Essentially, cloudlet is a self-managed data center. Internally, cloudlet resembles a multi-core computer cluster with gigabit internal connectivity and a high-bandwidth wireless local area network [17]. Due to the proximity of the network, the cloudlet can provide low-latency and high-bandwidth wireless connectivity between servers and MASS, making it ideal for providing situational awareness, latency-intensive, and fast-moving managed services and applications. Different cloud/edge/fog computing architectures may be suitable to support different types of MASS applications with highly different quality of service (QoS) requirements for latency, computational resources, and storage capacity [18]. The computational and communication workload of MASS may also vary with time and location under different architectures, which poses challenges for resource management of computational nodes and computational task management of MASS. Therefore, a well-designed computing architecture is very important for the MASS system.

2.3. Resource Allocation Method

Resource allocation is an important application of the cloud/edge computing architecture, in which resource-constrained ships hand over computing tasks to shore-side nodes or cloud computing centers via task offloading [19] to speed up processing and improve task delivery quality. Task offloading mainly involves two major issues: the offloading decision and resource allocation.
In recent years, as EC and CC have been used in transportation systems, their task offloading and resource allocation problems have attracted a lot of attention from researchers. The authors of [20] summarize in detail the EC architecture and computational offloading issues, including offloading decisions, computational resource allocation, and mobility management. The authors of [21] introduced the network architecture and deployment schemes of vehicular EC and compared and analyzed different deployment schemes. The authors of [22] classify the applications of autonomous vehicles according to their importance and allocate computing resources accordingly to improve the efficiency of task execution. The authors of [23] divide computing tasks into independent tasks and dependent tasks according to the relationships between tasks, and the authors of [24] consider the impact of dependencies between tasks on offloading decisions and resource allocation. From the perspective of resource allocation methods, the main approaches are heuristic algorithms [25], Markov decision processes [26], game theory [27], etc. Reinforcement learning is a machine learning method whose idea is to use a software agent that continuously interacts with the environment, conducting trials in an interactive setting and gradually approaching the optimal result through the reward or punishment information fed back by the environment [28]. At present, some studies have proposed deep reinforcement learning methods to solve the task offloading problem in EC. Reference [29] designed a value-based deep reinforcement learning scheme for the offloading problem of mobile edge computing, in which the value function is optimized through an iterative trial-and-error process to minimize the total cost of the entire system. Reference [30] proposed a deep learning offloading scheme considering inter-task dependencies; the scheme uses a gated artificial neural network that can learn long-term dependencies in the input data, so that historical data can be considered when estimating the current state.
To sum up, current computing resource allocation methods for intelligent ships focus on solutions for fixed scenarios but lack consideration of the correlation between tasks and the mobility of ships. In fact, during the operation of MASS, the related computing tasks are often highly correlated, so we need to pay attention to the logical relationships between tasks. At the same time, due to its mobility, a MASS always operates within a dynamic communication network topology, and this characteristic also needs to be carefully considered. To address these challenges, this paper models and analyzes the task characteristics and ship mobility during MASS operation.

2.4. The Cloud–Shore–Ship Collaboration Framework

The cloud–shore–ship collaboration framework is an all-factor collaborative architecture for the new generation of shipping systems, which can shift water traffic from passive traffic response and optimization to active traffic management and service by continuously enhancing ship–vessel, ship–shore, and shore–cloud collaboration [31]. The cloud–shore–ship collaboration framework is shown in Figure 1. Based on advanced information and communication technology, the architecture enhances the comprehensive sensing capability for both the shore and ship domains. The architecture can make full use of the high reliability and low latency of high-speed communication networks such as 5G, as well as the powerful computing capability of the cloud layer, the deployment flexibility of the shore domain, and the comprehensive sensing and timely response advantages of the ship domain [32].
The cloud layer covers the inland river traffic control center, traffic data center, traffic cloud platform, etc. It has rich storage and computing resources and can analyze, manage, and store traffic operation data, ship status data, and environmental sensing data uploaded from the shore–ship domain. The shore layer includes 5G base stations, edge servers, shore-side sensing devices, and intelligent infrastructure such as environment sensing, safety verification, and navigation control, which can meet the task response requirements of real-time and high reliability for intelligent navigation of ships in the region. The ship layer consists of intelligent sensors, a central gateway controller, and ship electronic and electrical equipment that make up the functional domain of the ship.
The basic task processing of cloud–shore–ship collaboration includes data collection, task request, task assignment, task unloading, task processing, and result feedback. As shown in Figure 2, the details are described as follows:
First of all, shipboard and shore-based sensing equipment collects the ship's tasks, ship navigation data, ship equipment status, channel data, environmental data, and other data and temporarily stores them in the ship or shore cache queue. The framework then considers the real-time and non-real-time tasks generated by the ship during its voyage, such as real-time control commands, voice interactions, navigation map requests, and warning messages. These tasks can be handled on the ship or transmitted to the shore or the cloud via communication facilities. Task offloading covers two cases: ship–shore offloading and ship–shore–cloud offloading. The ship offloads the data in the task offload queue to the shore-side cache queue to wait for processing. Data with low computational complexity and latency sensitivity can be processed directly onshore, while computationally intensive data can be further offloaded from the shore to the cloud cache queue to wait for processing. The data processing results from the cloud–shore side are stored in the result feedback queue and sent back to the ship's result feedback queue.
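To make the task assignment path concrete, the following Python sketch gives a minimal, hypothetical rendering of the routing rule described above; the cycle threshold HEAVY and the deadline threshold TIGHT are illustrative assumptions, not values from the paper.

```python
# Hypothetical thresholds; not values from the paper.
HEAVY = 1e10   # CPU cycles above which a task counts as computationally intensive
TIGHT = 0.1    # seconds below which a task counts as latency sensitive

def offload_target(cycles: float, deadline_s: float) -> str:
    """Choose where a task is processed along the cloud-shore-ship path."""
    if deadline_s < TIGHT and cycles < HEAVY:
        return "shore"   # low complexity, latency sensitive: process onshore
    if cycles >= HEAVY:
        return "cloud"   # computationally intensive: shore forwards to the cloud
    return "ship"        # everything else can be handled on board

print(offload_target(cycles=5e9, deadline_s=0.05))  # -> shore
```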
The cloud–shore–ship collaboration framework provides effective support for the water transportation system by collaboratively scheduling resources such as communication, computing, energy consumption, and storage involved in the collaboration process and adapting to the demand for high reliability and low latency of data under intelligent navigation of ships. The cloud computing center analyzes and processes global information and cooperates with distributed shore-based facilities to improve shipbuilding, sailing efficiency, and intelligence. Therefore, considering different task priorities, task correlation, QoS of tasks, and ship mobility in MASS shipping, we propose a task-driven computing offloading and resource allocation scheme for the cloud–shore–ship collaboration framework.

3. Task Processing Model and Problem Formulation

In this section, we describe the task processing model. First, we show the task model. Then, the ship mobility, communication, and computation models are discussed.
We consider the scenario of MASS navigating in inland waterways: a system consisting of a cloud control center, shore-based infrastructure, and MASS, which together form a cloud–shore–ship computing network, as shown in Figure 3. The cloud computing center, with its powerful computing capacity and remote deployment, can be used to process large-scale, long-term data; it can collect data from the widely distributed shore-based infrastructure, sense the traffic system's operating conditions, and issue rational dispatch instructions. The shore-based infrastructure layer, which has moderate computing power and is deployed close to the MASS, can be used to complement the MASS; the shore-based infrastructure is also envisioned to have a variety of sensors that can complement the shipboard sensors. The different shore-based infrastructures are interconnected by fiber optic networks to extend the service range. In the ship computing layer consisting of MASS, small-scale, latency-critical data can be processed with the shore-based infrastructure, while large-scale, long-term data can be processed with the cloud computing center.
We assume that each MASS has computational tasks, and the computational tasks can be executed at any layer. In addition, co-computation is considered on both the shore side and ship side to improve task offloading performance. Here, we define the sets of all shore-based infrastructure units (SUs) and ships as $SU = \{1, 2, \ldots, M\}$ and $BE = \{1, 2, \ldots, N\}$, respectively. Each shore-based infrastructure unit deploys edge servers to provide computing power. For simplicity, we assume that a ship is covered by the nearest shore-based infrastructure, which performs collaborative computing through a ship-to-shore facility link, and that the shore-based infrastructures are connected to each other through a cloud–shore backbone network. Efficient application services are provided for MASS navigation through cloud–shore–ship collaboration.

3.1. Task Model

In this paper, we classify tasks/applications in MASS navigation into two types, independent tasks and cooperative tasks, based on whether the tasks are related to each other or not. It should be noted that our framework is not only for single MASS operations, but more for large-scale MASS operations, where tasks are also likely to be interrelated among multiple MASS.
Independent tasks. Independent tasks are tasks that are independent of each other; this independence is mainly reflected between different MASS, which generate a large number of tasks during navigation, including safety-related tasks (e.g., collision warning), efficiency-related tasks (e.g., route planning), and information service tasks (e.g., situational awareness). When tasks are computed at each layer, the level of urgency with which they need to be executed varies depending on the type of task. MASS navigation safety-related tasks must be provided with computational resources first, while efficiency-type tasks can be served last.
For processing independent tasks, we need to group and prioritize the tasks. The tasks are characterized by higher information completeness and accuracy. The processing flow of the independent task is shown in Figure 4.
We define the tasks generated by different MASS in the system as $A = \{a_1, a_2, \cdots, a_n\}$. The tasks are first prioritized and then divided into four task queues along two dimensions: real-time versus non-real-time and high-frequency versus low-frequency.
The task priority reflects the importance of the MASS task, the task urgency expresses how soon it must be executed, and the task value density indicates the degree of actual value of the task. Therefore, the task priority is calculated by considering the task urgency and the task value density together. Since the raw task value does not reflect the actual worth of the task, in order to compare the value of tasks of different lengths, this paper expresses the task value density as the expected value obtained by the MASS per unit length of the task. The task value density is expressed as
$$T_{vd} = \frac{T_{value}}{T_{length}}$$
The value of $T_{vd}$ determines the priority of the task; the higher the value, the higher the task priority.
We assume that the current time is $t$ and obtain the remaining execution time $S$ of task $a_i$, where $T_{dt}$ denotes the task deadline. The smaller the value of $S$, the more urgent the task.
$$S = T_{dt} - t$$
Considering the two factors of remaining execution time and task value, we can calculate the task priority $P$:
$$P = \frac{T_{vd}}{S}$$
The larger the value of $P$, the higher the priority. In the scheduling process, tasks with higher priority are scheduled earlier, so that high-value, time-urgent tasks are scheduled first, thus increasing the success rate of tasks within their deadlines and improving the task completion rate.
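To make the scheduling rule concrete, the following Python sketch computes $T_{vd}$, $S$, and $P$ as defined above and sorts a task queue by priority. The Task fields are illustrative names for the quantities in the formulas, not identifiers from the paper.

```python
from dataclasses import dataclass

@dataclass
class Task:
    value: float     # task value T_value
    length: float    # task length T_length
    deadline: float  # absolute deadline T_dt

def priority(task: Task, now: float) -> float:
    """P = T_vd / S: value density over remaining execution time."""
    t_vd = task.value / task.length      # value density T_vd
    s = max(task.deadline - now, 1e-6)   # remaining time S, guarded against zero
    return t_vd / s

tasks = [Task(10.0, 2.0, 12.0), Task(8.0, 1.0, 30.0)]
# Urgent, high-value-density tasks come first in the schedule.
schedule = sorted(tasks, key=lambda a: priority(a, now=10.0), reverse=True)
```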
Cooperative tasks imply that the tasks from MASS are interrelated. We assume that each task $a_i$ can be divided into several discrete subtasks, $\beta_i = \{b^i_1, b^i_2, \ldots, b^i_n\}$, where $n$ represents the total number of subtasks belonging to task $a_i$. Task $a_i$ can be processed at any layer in the cloud–shore–ship collaboration framework. In this paper, we investigate task interdependencies, which affect task execution decisions. Furthermore, according to reference [33], we classify task dependencies in MASS navigation into two basic logical topologies, i.e., linear and lattice. As shown in Figure 5, each task is divided into six subtasks. Figure 5a shows a linear serial task, in which a later subtask can start execution only after the completion of the previous one; the total completion time of the entire task is the sum of the individual subtask completion times. In the lattice (mesh) logic shown in Figure 5b, subtask 6 starts after the completion of subtask 3 and subtask 5. Therefore, when making decisions about tasks, the interdependence of the individual subtasks should be considered.
We can represent the association relationships between tasks as a directed acyclic graph and, furthermore, abstract it as an incidence matrix. Specifically, in this paper, we represent it by a lower triangular matrix $I$, where the elements of $I$ are 0 or 1: $I^{k}_{i,j} \in \{0,1\}$, $1 \le j \le i \le n$. If subtask $i$ requires the output data of subtask $j$, then $I^{k}_{i,j} = 1$; otherwise, $I^{k}_{i,j} = 0$. For example, the logical matrix for task 2 in Figure 5 can be given by
$$I_2 = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 1 & 0 \end{bmatrix}$$
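The incidence matrix can be built mechanically from a list of dependency edges, as in the sketch below. The edge list is a made-up example and does not reproduce the exact topology of Figure 5; the ready() helper illustrates how the matrix gates subtask execution.

```python
import numpy as np

def incidence_matrix(n, edges):
    """Lower triangular incidence matrix I from (i, j) pairs meaning
    'subtask i needs the output of subtask j' (1-based, j < i)."""
    I = np.zeros((n, n), dtype=int)
    for i, j in edges:
        assert j < i, "dependencies must point to earlier subtasks"
        I[i - 1, j - 1] = 1
    return I

def ready(I, done):
    """Subtasks whose predecessors have all completed (done is a boolean list)."""
    return [i + 1 for i in range(len(I))
            if not done[i] and all(done[j] for j in np.flatnonzero(I[i]))]

I2 = incidence_matrix(6, [(2, 1), (3, 1), (6, 3), (6, 5)])  # assumed edges
print(ready(I2, done=[True, False, False, False, False, False]))  # -> [2, 3, 4, 5]
```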

3.2. Ship Mobility Model

One of the challenges of MASS is the mobility of ships on inland waterways, which leads to instability of shipboard resources; MASS can only perform computational offloading while the ship is within the coverage area of an SU. We assume that MASS enter the cloud–shore–ship collaboration system according to a Poisson process with arrival rate $\lambda_t$. Each MASS may generate task requests from time to time. Therefore, for a given route segment $s$, the number of tasks generated in a time unit follows a compound random process:
$$Y_s = \sum_{n=1}^{N(s)} A_n$$
where $N(s)$ denotes the number of MASS on route segment $s$ and $A_n$ is the number of tasks generated by MASS $n$ in each time unit. According to existing research [32], jointly considering the MASS arrival rate and the randomness of task generation shows that the task arrival model within the whole system does not obey a Poisson distribution, providing a further indication of the unevenness of task generation in the MASS system.
We define the contact interval of each MASS in the system as the time interval during which the MASS is within the base station coverage, denoted $[\delta_n, \mu_n]$, where $\delta_n$ and $\mu_n$ represent the arrival and departure times, respectively. In order to ensure the availability of computing resources, the offloaded computing tasks need to be completed before the MASS leaves the system.
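A short simulation illustrates this arrival model. In the sketch below, ship arrivals per slot are Poisson with rate lam, and the per-ship task count is drawn as Poisson with mean mu; the per-ship distribution is an assumption made only for illustration, since the text states just that the compound sum is non-Poisson.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

lam, mu, slots = 0.5, 3.0, 100   # ship arrival rate, mean tasks per ship, horizon
# Y_s per slot: a compound random sum over the N(s) ships present.
y_s = [rng.poisson(mu, size=rng.poisson(lam)).sum() for _ in range(slots)]

# Offloaded tasks must complete inside the contact window [delta_n, mu_n].
delta_n, depart_n = 10.0, 40.0   # arrival and departure times of ship n
def completes_in_coverage(finish_time: float) -> bool:
    return delta_n <= finish_time <= depart_n
```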

3.3. Computation Model

At time slot $t$, the ship's task offloading decision consists of two components: (1) shore-based edge server selection, i.e., which shore-based edge server is selected for task offloading; (2) computation mode selection, i.e., whether the data are computed in the cloud or at the shore-based edge. If cloud computing is selected, the data offloaded to the shore-based edge server are further offloaded to the cloud server for processing.
Local ship computation model. The computational capability of MASS is defined as the number of CPU cycles per second, denoted $f_n$. Thus, the local ship computation delay of task $a_n$ is
$$t_n = \frac{\omega_n}{f_n}$$
where $\omega_n$ denotes the computation workload (in CPU cycles) of task $a_n$. The energy consumption of $BE_n$ executing $a_n$ is calculated with the classic model used in [32]:
$$e_n = \sigma_n t_n$$
where $\sigma_n$ can be obtained by long-term observation and measurement.
Shore-based computation. The computing capability (i.e., computing resources) of $SU_m$ is distributed proportionally to all $BE_n$ that request $SU_m$. In this article, the computing resource of $SU_m$ allocated to $BE_n$ is
$$f_{n,m} = \frac{\gamma_n}{\sum_{v=1}^{N} \gamma_v \tilde{\lambda}_{v,m}} \tilde{f}_m = \frac{\gamma_n}{\sum_{BE_v \in BN_n} \gamma_v \tilde{\lambda}_{v,m}} \tilde{f}_m$$
where $\sum_{BE_v \in BN_n} \gamma_v \tilde{\lambda}_{v,m}$ is the sum of the computing resource weight parameters of all BEs requesting $SU_m$. The computation delay of $a_n$ executed by $SU_m$ is formulated as
$$\tilde{t}_{n,m} = \frac{\omega_n}{f_{n,m}}$$
$BE_n$ focuses on minimizing its own energy consumption and does not account for the cost of $SU_m$; therefore, the energy consumption of $BE_n$ is zero when $SU_m$ executes task $a_n$.
Cloud computation. Compared to SUs, we assume that the cloud has enough resources to respond to the requests of MASS and SUs; that is, the cloud server can handle arbitrarily many tasks in parallel. The computation latency of the cloud executing $a_n$ can be expressed as
$$\hat{t}_n = \frac{\omega_n}{\hat{f}}$$
Similarly, $BE_n$ focuses on minimizing its own energy consumption, so its energy consumption is zero when the cloud executes task $a_n$.
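The three computation modes reduce to a few one-line formulas, collected in the sketch below; all numeric values are placeholders chosen only to show the relative delays.

```python
def local_delay(omega: float, f_n: float) -> float:
    return omega / f_n              # t_n = omega_n / f_n

def local_energy(sigma_n: float, t_n: float) -> float:
    return sigma_n * t_n            # e_n = sigma_n * t_n

def shore_share(gamma_n: float, gammas: list, f_m: float) -> float:
    # f_{n,m}: the capacity of SU_m split in proportion to the weights gamma_v
    return gamma_n / sum(gammas) * f_m

omega, f_ship, f_su, f_cloud = 2e9, 1e9, 5e9, 2e10   # cycles, cycles/s
print(local_delay(omega, f_ship))                    # ship:  2.0 s
print(omega / shore_share(1.0, [1.0, 1.0], f_su))    # shore: 0.8 s, two requesters
print(omega / f_cloud)                               # cloud: 0.1 s plus transfer delay
```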
Ship $BE_n$ maintains a local cache queue to store the data awaiting offloading. Assume that the amount of data collected by ship $BE_n$ at time slot $t$ is $B_n(t)$. Then the backlog of the local cache queue $Q_n(t)$ evolves as
$$Q_n(t+1) = \max\{Q_n(t) - U_n(t) + B_n(t),\; 0\}$$
where $U_n(t)$ denotes the amount of data offloaded by ship $BE_n$ in time slot $t$; it can be expressed as
$$U_n(t) = \min\left\{\tau B_n \log_2\left(1 + \frac{P_n(t)\, h_n(t)}{e_n(t) + \delta^2}\right),\; Q_n(t) + B_n(t)\right\}$$
In the formula, $P_n(t)$ indicates the transmission power of ship $BE_n$, and $B_n$, $h_n(t)$, and $e_n(t)$ denote the transmission bandwidth, channel gain, and interference between ship $BE_n$ and the communication base station, respectively.
Each shore-based edge server, as well as the cloud server, maintains a separate cache queue for ship $BE_n$ to store the data that have been offloaded but not yet processed. The shore-side cache queue on shore-based edge server $SU_m$ and the cloud-side cache queue on the cloud server evolve as
$$Y^{su}_{n,m}(t+1) = \max\left\{Y^{su}_{n,m}(t) - \frac{\tau f^{e}_{n,m}(t)}{\lambda_n} + w^{o}_{n,m}(t)\left(1 - w^{c}_{m}(t)\right) U_n(t),\; 0\right\}$$
$$Y^{c}_{m}(t+1) = \max\left\{Y^{c}_{m}(t) - \frac{\tau f^{c}_{n,m}(t)}{\lambda_n} + w^{c}_{m}(t)\, U_n(t),\; 0\right\}$$
where we define $w(t) = \{w^{o}_{n,m}(t), w^{c}_{m}(t)\}$ as the task offloading decision variables. $w^{o}_{n,m}(t) = 1$ indicates that in time slot $t$, ship $BE_n$ selects shore-based edge server $m$ for task offloading; otherwise, $w^{o}_{n,m}(t) = 0$. $w^{c}_{m}(t) = 1$ indicates that in time slot $t$, the task of ship $BE_n$ is further offloaded to the cloud; otherwise, $w^{c}_{m}(t) = 0$.
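The queue dynamics can be exercised with a short simulation. The sketch below iterates the recursion for $Q_n(t)$ using the Shannon-rate expression for $U_n(t)$; all channel and traffic parameters are placeholder values.

```python
import math

def offloaded_amount(P, h, interference, B, tau, backlog, arrivals, noise=1e-9):
    """U_n(t): Shannon-rate transfer in one slot, capped by the queued data."""
    rate = B * math.log2(1 + P * h / (interference + noise))
    return min(tau * rate, backlog + arrivals)

def queue_step(Q, U, B_n):
    """Q_n(t+1) = max(Q_n(t) - U_n(t) + B_n(t), 0)."""
    return max(Q - U + B_n, 0.0)

Q = 5e6                              # initial local backlog (bits)
for t in range(3):
    B_n = 1e6                        # data sensed in this slot (bits)
    U = offloaded_amount(P=0.5, h=1e-6, interference=1e-9, B=1e6, tau=1.0,
                         backlog=Q, arrivals=B_n)
    Q = queue_step(Q, U, B_n)
```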

3.4. Communication Model

Based on the time slot model, the total optimization time is divided into $T$ equal-length time slots; the set is denoted as $\mathcal{T} = \{1, \ldots, t, \ldots, T\}$, and the length of each time slot is $\tau$. It is assumed that the channel state remains constant within a time slot and varies dynamically between time slots [34]. Similar to the computational resource allocation policy, the communication resources of $SU_m$ (i.e., the data transfer rate) are allocated proportionally to all BEs requesting $SU_m$. Thus, the data transfer rate that $SU_m$ allocates to $BE_n$ is
$$f_{n,m} = \frac{\gamma_n}{\sum_{v=1}^{N} \gamma_v \tilde{\lambda}_{v,m}} \tilde{f}_m = \frac{\gamma_n}{\sum_{BE_v \in BN_n} \gamma_v \tilde{\lambda}_{v,m}} \tilde{f}_m$$
The transmission delay of $a_n$ offloaded to $SU_m$ is formulated as
$$\tilde{t}_{n,m} = \frac{\omega_m}{f_{n,m}}$$
The energy consumption of $BE_n$ offloading $a_n$ to $SU_m$ is
$$\tilde{e}_{n,m} = \sigma'_n\, \tilde{t}_{n,m}$$
$BE_n$ cannot directly request the cloud. Given the data transmission rate $\hat{r}$ between $SU_m$ and the cloud, the communication delay for $SU_m$ offloading $a_n$ to the cloud can be formulated as
$$\hat{t}_{n,hi} = \frac{\delta_n}{\hat{r}}$$
Therefore, the transmission latency between $BE_n$ and the cloud is $\tilde{t}_{n,m} + \hat{t}_{n,hi}$.
The end-to-end latency of the task offloading process consists of the queuing latency of the ship’s local cache queue, the transmission latency, the queuing latency of the shore-side cache queue and the cloud-side cache queue, the computation latency, and the result feedback latency. Since the queuing delay accounts for a relatively large portion of the end-to-end delay, ultra-reliable and low-latency communication constraints are imposed on the ship-side, cloud-side, and shore-side queuing delays to ensure the effectiveness and timeliness of task offloading.

3.5. Problem Formulation

Task completion time is an important metric for evaluating the performance of an application, indicating the time required for the entire process from data input to result output. We discuss the execution time of the whole application for the cases in which each component of the application executes locally and on the edge server, respectively.
This paper aims to solve the cloud–shore–ship multidimensional heterogeneous resource scheduling optimization problem, including the task offloading problem on the ship side and the joint optimization problem of computing resource allocation on the shore and the cloud, in order to maximize the network throughput under the constraints of task delay and energy consumption on the ship side. The multidimensional heterogeneous resource scheduling optimization problem can be modeled as
$$\begin{aligned}
\text{P1}: \quad & \max_{w,\, p,\, f^{su},\, f^{c}} \sum_{t=1}^{T} \sum_{n=1}^{N} U_n(t) \\
\text{s.t.} \quad
& \text{C1}: w^{o}_{n,m}(t),\, w^{c}_{n,m}(t) \in \{0,1\}, \quad \forall n \in N,\, m \in M,\, t \in \mathcal{T} \\
& \text{C2}: \sum_{m=1}^{M} w^{o}_{n,m}(t) = 1, \quad \forall m \in M,\, t \in \mathcal{T} \\
& \text{C3}: \sum_{n=1}^{N} f^{e}_{n,m}(t) \le f^{e}_{n,max}(t), \quad \forall n \in N,\, t \in \mathcal{T} \\
& \text{C4}: \sum_{n=1}^{N} f^{c}_{n}(t) \le f^{c}_{max}(t), \quad \forall t \in \mathcal{T} \\
& \text{C5}: \sum_{t=1}^{T} E_n(t) \le E_{n,max}(t), \quad \forall n \in N \\
& \text{C6}: \lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} \Pr\left\{ \frac{Q_n(t-1)}{\frac{1}{t-1}\sum_{j=1}^{t-1} B_n(t-1)} > d^{Q}_{n} \right\} \le \varsigma^{Q}_{n}; \\
& \qquad\; \lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} \Pr\left\{ \frac{Y^{su}_{n,m}(t)}{U^{su}_{n,m}(t-1)} > d^{su}_{n,m} \right\} \le \varsigma^{su}_{n}; \\
& \qquad\; \lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} \Pr\left\{ \frac{Y^{c}_{m}(t)}{U^{c}_{m}(t-1)} > d^{c}_{m} \right\} \le \varsigma^{c}_{n}; \quad \forall n \in N,\, m \in M,\, t \in \mathcal{T}
\end{aligned}$$
where $w = \{w(t), t \in \mathcal{T}\}$ denotes the task offloading decision vector, and $f^{su} = \{f^{e}_{n,m}(t), n \in N, m \in M, t \in \mathcal{T}\}$ and $f^{c} = \{f^{c}_{m}(t), m \in M, t \in \mathcal{T}\}$ denote the shore-based edge server and cloud server computing resource allocation decision vectors, respectively. C1 and C2 indicate that in each time slot, each vessel can select only one shore-based edge server for data offloading and only one computation mode. C3 and C4 are the computing resource constraints of the shore-based edge servers and the cloud server, respectively. C5 and C6 are the energy consumption and task delay constraints, respectively.
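Constraints C1 and C2 amount to simple structural checks on the decision variables. The sketch below validates them for one time slot; the matrix shapes and the example decisions are illustrative assumptions.

```python
import numpy as np

def check_c1_c2(W_o: np.ndarray, w_c: np.ndarray) -> bool:
    """C1: binary decisions; C2: exactly one shore server per ship per slot."""
    binary = np.isin(W_o, (0, 1)).all() and np.isin(w_c, (0, 1)).all()  # C1
    one_server = (W_o.sum(axis=1) == 1).all()                           # C2
    return bool(binary and one_server)

W_o = np.array([[1, 0, 0], [0, 0, 1]])   # two ships, three shore edge servers
w_c = np.array([0, 1])                   # ship 2 forwards its task to the cloud
print(check_c1_c2(W_o, w_c))             # -> True
```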

4. Task-Driven Computing Unloading and Resource Allocation Optimization Scheme

4.1. Task Offloading Decision

P1 is an NP-hard problem, which is difficult to solve directly, so the problem needs to be transformed. We use the Lyapunov optimization principle [35] to transform the optimization problem into a series of short-term deterministic problems. By defining the drift-minus-reward function, we transform P1 into the minimization of an upper bound of the drift-minus-reward function under the C1–C7 constraints. According to the different optimization variables involved in the upper-bound formulation, the optimization problem can be further decoupled into three deterministic sub-problems, namely, the ship-side task offloading sub-problem SP1, the shore-side computing resource allocation sub-problem SP2, and the cloud-side computing resource allocation sub-problem SP3.
SP1: Ship-side task offloading and communication resource allocation, which can realize joint optimization of ship-side task offloading and communication resource allocation, whose expression is
$$\text{SP1}: \max_{w^{o}_{n,m}(t),\, w^{c}_{n,m}(t),\, P_n(t)} \Psi\left(w^{o}_{n,m}(t), w^{c}_{n,m}(t), P_n(t)\right) \quad \text{s.t. C1, C2, C3, C4, C5}$$
In this paper, we first model SP1 as a Markov decision process model and then propose a learning-based joint optimization algorithm for semi-distributed task offloading and communication resource allocation to achieve global performance optimization through cloud–shore–ship collaborative resource scheduling.
(1) State space. The state space $S_n(t)$ contains information related to the task queue backlog, historical experience, and task arrivals of ship $BE_n$, and it is denoted as
$$S_n(t) = \left\{Q_n(t),\, Y^{c}_{n}(t),\, Y^{su}_{n,m}(t),\, U^{c}_{m}(t-1),\, U^{su}_{n,m}(t-1)\right\}$$
(2) Action space. The action space is defined as the task offloading decision and communication resource allocation decision of ship $BE_n$, denoted as
$$X_n(t) = \left\{w^{o}_{n,m}(t),\, w^{c}_{n,m}(t),\, P_n(t)\right\}$$
(3) Reward function. The reward function is defined as the optimization objective of SP1:
$$\Psi\left(w^{o}_{n,m}(t),\, w^{c}_{n,m}(t),\, P_n(t)\right)$$
(4) State transition probability. The state transition probability is defined as the probability that ship $BE_n$ selects action $X_n(t)$ in state $S_n(t)$ and transfers to the next state $S_n(t+1)$. It is denoted as
$$P\left(S_n(t+1) \mid S_n(t),\, X_n(t)\right)$$
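These definitions map directly onto data structures. The sketch below is a minimal, assumed encoding of $S_n(t)$, $X_n(t)$, and the reward for ship $BE_n$; psi() is only a stand-in for the SP1 objective $\Psi$, which is not spelled out here.

```python
from dataclasses import dataclass

@dataclass
class State:                  # S_n(t)
    Q_n: float                # local cache queue backlog
    Y_c: float                # cloud-side cache backlog
    Y_su: float               # shore-side cache backlog
    U_c_prev: float           # cloud service amount in slot t-1
    U_su_prev: float          # shore service amount in slot t-1

@dataclass
class Action:                 # X_n(t)
    w_o: int                  # 1 = offload to the selected shore edge server
    w_c: int                  # 1 = forward the task from shore to the cloud
    P_n: float                # transmission power

def psi(state: State, action: Action) -> float:
    return 0.0                # stand-in for the SP1 objective Psi(w_o, w_c, P_n)

def reward(state: State, action: Action) -> float:
    return psi(state, action) # the reward is defined as the SP1 objective
```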
SP2: Shore-side edge computing resource allocation, which can be expressed as
$$\text{SP2}: \max_{\{f^{e}_{n,m}(t)\}} \sum_{n=1}^{N} \frac{Y^{su}_{n,m}(t)\, \tau f^{e}_{n,m}(t)}{\lambda_n} \quad \text{s.t. C3}, \quad \text{C7}: \frac{\tau f^{e}_{n,m}(t)}{\lambda_n} < Y^{su}_{n,m}(t)$$
SP3: Cloud computing resource allocation, which can be expressed as
$$\text{SP3}: \max_{\{f^{c}_{m}(t)\}} \sum_{n=1}^{N} \frac{Y^{c}_{m}(t)\, \tau f^{c}_{m}(t)}{\lambda_n} \quad \text{s.t. C4}, \quad \text{C8}: \frac{\tau f^{c}_{m}(t)}{\lambda_n} < Y^{c}_{m}(t)$$
For SP2 and SP3, we propose a task queueing-based heuristic algorithm. The algorithm preferentially allocates sufficient computational resources to the ships with the largest backlogs in the shore-side and cloud cache queues, so as to maximize the optimization objectives of SP2 and SP3 in a low-complexity manner. The steps of shore-side computing resource allocation are given in Algorithm 1 in Section 4.2; a Python sketch of the same procedure follows the algorithm.

4.2. Resource Allocation Scheme

Due to the high dimensionality of the state space and action space of the joint task offloading and communication resource allocation problem solved in SP1, the statistical model and numerical solution of the state transfer probability cannot be obtained. Therefore, this paper proposes a joint optimization algorithm based on deep learning. The algorithm performs state action value assessment by continuously interacting with the environment, fits the state-action-value function using a deep neural network, and approximates the optimal value by learning to update the network parameters to obtain the optimal policy for task offloading and communication resource allocation. The algorithm is shown in Figure 6, including global model download, action selection, local model update, local model upload, and cloud aggregation update. Through the interaction of task offloading optimization models for cloud–shore–ship, on the one hand, the problem of high cost and computational complexity of centralized intelligent communication can be avoided, and on the other hand, the problem of poor network performance due to the inability of distributed intelligent algorithms to make full use of global information can be solved.
The ship-side and cloud-side Q-learning models consist of a master network responsible for generating task offloading and communication resource allocation decisions and a target network used to assist the training of the master network, where the parameters of the local master network, local target network, global master network, and global target network are denoted $w_l^{main}$, $w_l^{target}$, $w_g^{main}$, and $w_g^{target}$, respectively. The steps of the algorithm are as follows.
(1)
Cloud and shore model download. In time slot $t$, $BE_n$ downloads $w_g^{main}$ and $w_g^{target}$ from the cloud server, then sets $w_n^{main} = w_g^{main}$ and $w_n^{target} = w_g^{target}$.
(2)
Action selection. Based on the estimate of $Q(S_n(t), X_n(t), w_n^{main})$ from the local network, $BE_n$ executes action $X_n(t)$, and the shore-based edge servers and cloud servers complete the allocation of computing resources according to Algorithm 1. The parameters are updated and the state moves to $S_n(t+1)$. Meanwhile, samples are generated to update the historical experience pool, and $BE_n$ calculates the reward $\Psi(w^{o}_{n,m}(t), w^{c}_{n,m}(t), P_n(t))$.
(3)
Local model update. $BE_n$ randomly selects samples from the historical experience pool and calculates the loss function (a code sketch of this loss and of the aggregation updates follows this list):
$$\phi_n = \frac{1}{V} \sum \left[ \Psi\left(w^{o}_{n,m}(t), w^{c}_{n,m}(t), P_n(t)\right) + \gamma \max_{X_n(t+1)} Q\left(S_n(t+1), X_n(t+1), w_n^{target}\right) - Q\left(S_n(t), X_n(t), w_n^{main}\right) \right]^2$$
where $V$ is the number of samples and $\gamma$ is the discount factor.
(4)
Local model upload. B E n uploads local models to the cloud server for the global model update.
(5)
Shore aggregation update. At the end of time slot $t$, the shore edge servers perform the shore aggregation update process based on the collected local models:
$$w_g^{main} = w_g^{main} - \varphi \sum_{k} \frac{D_k}{D}\, \nabla_{w_k^{main}} \phi_k^2$$
$$w_g^{target} = w_g^{target} - \varphi \sum_{k} \frac{D_k}{D}\, \nabla_{w_k^{target}} \phi_k^2$$
(6)
Cloud aggregation update. At the end of time slot $t$, the cloud server performs the cloud aggregation update process based on the collected local and shore models.
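The following numpy sketch illustrates steps (3) and (5)–(6): the temporal-difference loss $\phi_n$ over $V$ sampled transitions and a data-weighted pull of the global parameters toward the local models. The callable networks, the $D_k/D$ weights, and the averaging form of the aggregation are assumptions about the update, not the paper's exact operators.

```python
import numpy as np

def td_loss(batch, q_main, q_target, gamma=0.9):
    """phi_n: mean squared TD error over V sampled transitions."""
    errs = []
    for s, a, r, s_next in batch:                 # transitions from the pool
        target = r + gamma * np.max(q_target(s_next))
        errs.append((target - q_main(s)[a]) ** 2)
    return float(np.mean(errs))

def aggregate(w_global, local_ws, sample_counts, phi=0.1):
    """Pull the global model toward the D_k/D-weighted mean of local models."""
    weights = np.asarray(sample_counts, float) / np.sum(sample_counts)
    avg = sum(w * lw for w, lw in zip(weights, local_ws))
    return w_global - phi * (w_global - avg)
```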
Algorithm 1: The Task Queueing-based Task Offloading Algorithm
1: Initialize the set of ships that need to be allocated computing resources in slot $t$, $R_{n,t} = \{BE_n \in R \mid Y^{su}_{n,m}(t) > 0\}$, and the available computing resources $f_n^{ava}(t) = f^{e}_{n,max}(t)$.
2: repeat
3: Select the ship with the largest shore-side backlog, $BE_n = \arg\max_{BE_n \in R_{n,t}} \left( Y^{su}_{n,m}(t) - \tau v^{e}_{n,t}(t)/\lambda_n \right)$, and allocate $v^{e}_{n,t}(t) = \min\left\{ f_n^{ava}(t),\; \lambda_n Y^{su}_{n,m}(t)/\tau \right\}$.
4: Update $R_{n,t} \leftarrow R_{n,t} \setminus \{BE_n\}$ and $f_n^{ava}(t) \leftarrow f_n^{ava}(t) - v^{e}_{n,t}(t)$.
5: until $R_{n,t} = \emptyset$ or $f_n^{ava}(t) \le 0$.
6: Return the resource allocation strategy $Q$.
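A compact Python rendering of Algorithm 1 is given below as a sketch: it repeatedly serves the ship with the largest shore-side backlog, granting just enough cycles to clear its queue within the slot. The dictionary-based interface and the reading of $\lambda_n$ as cycles per bit are assumptions.

```python
def allocate(backlogs: dict, lam: dict, f_max: float, tau: float = 1.0) -> dict:
    """backlogs: {ship: Y_su(t)} in bits; lam: {ship: lambda_n} in cycles/bit.
    Returns {ship: allocated computing resource in cycles/s}."""
    f_ava, alloc = f_max, {}
    pending = {n for n, y in backlogs.items() if y > 0}
    while pending and f_ava > 0:
        n = max(pending, key=lambda k: backlogs[k])   # largest backlog first
        v = min(f_ava, lam[n] * backlogs[n] / tau)    # enough to clear the queue
        alloc[n] = v
        f_ava -= v
        pending.remove(n)
    return alloc

print(allocate({"BE1": 4e6, "BE2": 1e6}, {"BE1": 2.0, "BE2": 2.0}, f_max=1e7))
# -> {'BE1': 8000000.0, 'BE2': 2000000.0}
```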

5. Case Studies

5.1. Basic Parameters

In this paper, we consider a cloud–shore–ship collaborative resource scheduling scenario with one cloud server, three shore-based edge servers, and ten ships, and we verify the effectiveness of the proposed task-driven cloud–shore–ship collaborative resource scheduling method through simulation case analysis; the specific simulation parameters are shown in Table 1. To examine the impact on system performance, we compare the proposed scheme with the following five task processing methods:
(1)
Ship: Tasks are processed and calculated only at the ship’s terminal.
(2)
Shore: Task offload to shore-based edge nodes for computation.
(3)
Cloud: All tasks are offloaded to the cloud center for execution.
(4)
Ship-Shore: Tasks are performed locally on the ship and at the shore-based edge nodes.
(5)
Random: Tasks are randomly assigned for execution across the cloud, shore, and ship.

5.2. Simulation Results and Analysis

(1)
Comparison of the proposed algorithm with ABC, PSO, and RAN
To verify the proposed algorithm, this paper compares the performance of the proposed method (PM), the Artificial Bee Colony algorithm (ABC) [36], the Particle Swarm Optimization algorithm (PSO) [37], and the Random Offloading algorithm (RAN) [38] in terms of task delay, energy consumption, and system throughput.
After processing different task volumes over 100 iterations, the average task delay and energy consumption of the four offloading decision algorithms are compared in Figure 7 and Figure 8, respectively. When the number of tasks is small, the gap between PM, ABC, and PSO is very small. As the number of tasks increases, the average task delay of PM is reduced by 15.7%, 25.6%, and 54.8% compared with ABC, PSO, and RAN, respectively. Figure 8 shows the changes in system energy consumption. Across different numbers of tasks, the system energy consumption of PM is significantly reduced to varying degrees: compared with ABC, the energy consumption of PM is reduced by 15.4%; compared with PSO, it is reduced by 38.8%; and it is 54.9% lower than RAN. It can be seen that PM has clear advantages and can improve the performance of the whole system.
Task processing delay refers to the time from task initiation to receiving task processing feedback, which includes task queuing time, communication time, processing time, and so on. Task energy consumption refers to the energy consumed from task initiation to receiving task processing feedback, including communication and computation consumption. The task delay indicates the timeliness of task feedback: during MASS operation, the smaller the task delay, the higher the computation efficiency. The task energy consumption indicates the cost the system spends on processing tasks; it is generally believed that the smaller the task energy consumption, the better the task processing strategy.
(2)
The effect of the number of ships on the system performance
Figure 9 and Figure 10 evaluate the impact of the number of ships on the average task delay and system energy consumption, respectively. It can be seen from Figure 9 that the average task delay increases with the number of ships for all schemes, but the average delay of the cloud–shore–ship cooperative scheme is always the smallest. When the number of ships increases, the computational resources required for the generated tasks also increase rapidly. Combining the data volume and QoS of the tasks, the system offloads some tasks from the ships to the shore-based edge computing nodes or cloud centers for processing, based on the offloading model, to obtain the smallest task processing delay. In addition, when the number of ships gradually increases, the average task delay of the scheme relying only on shore-based edge computing nodes becomes larger than that of the ship-local computation scheme. This is because competition for communication and computation resources during task offloading, in the absence of a reasonable offloading scheme, introduces unnecessary waiting in both the transmission and computation processes, so the latency grows.
It can be seen from Figure 10 that the system energy consumption also increases with the number of ships, and the cloud–shore–ship cooperative scheme retains a clear advantage. The energy consumption of Shore–Ship is lower than that of Cloud: compared with the cloud center, the energy consumed while tasks wait to arrive at the shore nodes is negligible, and the transmission energy consumption is significantly lower; therefore, the overall energy consumption of the Shore scheme is lower than that of Cloud.
(3)
The effect of the task data volume on the system performance
The impact of the task data volume on the average task delay and system energy consumption is evaluated here (the number of ships is 10). The average task delay and system energy consumption for each scheme in this configuration are shown in Figure 11 and Figure 12. The average task delay increases with the task data volume for all six schemes, but that of the cloud–shore–ship cooperative scheme is the smallest. Because the ship's local computing capacity is limited, its task latency first remains stable and then increases once the data volume exceeds a certain amount. The average task delay of the Shore mode is higher than that of the Cloud mode and the ship-local mode because of competition for communication and computation resources as offloaded tasks and data volume increase. In terms of practical performance, the proposed scheme can quickly handle tasks with high input data volumes on the ships.
From Figure 12, the energy consumption of the six schemes gradually increases with the task data volume, but the cloud–shore–ship collaborative scheme performs better. When faced with a large amount of input data, the energy consumption of this scheme outperforms the other modes: it is 59.72% lower than the Random mode, 31.28% lower than the Cloud mode, and 28.67% lower than the Shore mode.
(4)
The effect of the maximum task delay on the system performance
The impact of the maximum task delay on the average task delay and energy consumption is evaluated here. As shown in Figure 13 and Figure 14, when the maximum tolerable task delay gradually increases, more tasks are executed in the cloud and on shore, while tasks requiring large amounts of computational resources are offloaded to the edge nodes, so the average task delay decreases as the delay tolerance increases. The cloud–shore–ship collaborative scheme always maintains the minimum average task delay compared with the other schemes, and the system energy consumption shows a similar trend to the delay; the scheme in this paper consistently keeps both the average task delay and the energy consumption low, demonstrating its superiority.
(5)
The effect of ship’s maximum communication power on system performance
Figure 15 and Figure 16 show the effect of the ship's maximum communication power on the average task delay and system energy consumption, respectively. As the ship's communication power increases, the communication transmission rate increases, task offloading becomes faster, and the average task delay becomes smaller. The results show that even a small increase in the ship's maximum communication power produces a faster decrease in the average task delay, so the overall energy consumption of the system tends to decrease. The method proposed in this paper shows clear advantages over the other schemes in terms of both task delay and energy consumption.

6. Conclusions and Future Works

In this paper, we build a cloud–shore–ship collaborative computing architecture that integrates the Internet of Ships and cloud/edge computing for inland waterway ship intelligent navigation scenarios. According to task type, time delay, and synergy, this architecture allows the computational tasks of ship intelligent navigation to be processed locally on the ship, at a shore-based edge node, or at the cloud center. Building on previous research results, we consider task relationships, task priority, and ship mobility; establish the ship, shore, and cloud computing models; design jointly optimized delay and energy consumption objective functions; and realize adaptive offloading of tasks based on the evaluation of each resource. In addition, this paper proposes a task offloading method based on artificial intelligence for the dynamic prediction of computing resources. Simulation results show that the proposed scheme can provide low latency and low energy consumption under high task concurrency on the ship.
Future research can focus on network security, communication failure, and task uncertainty. For the operation of MASS, network security and communication redundancy are indispensable, and how to ensure the safe navigation of MASS under extreme conditions needs to be seriously studied. Moreover, during MASS operation, computing tasks are often highly uncertain, and this uncertainty deserves further attention.

Author Contributions

Conceptualization, S.X.; methodology, Y.Z.; software, Y.Z.; validation, H.C. and Y.W.; formal analysis, C.X.; investigation, S.X.; resources, C.X.; data curation, Y.W.; writing—original draft preparation, S.X.; writing—review and editing, H.C.; visualization, Y.Z.; supervision, Y.W.; project administration, C.X.; funding acquisition, S.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Doctoral Research Start Project of Henan Institute of Technology (grant number KQ2110), the Research Program of Sanya City through Grant No. 2022KJCX36, and the Zhejiang Provincial Science and Technology Program (Grant No. 2021C01010).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

We thank all those who have contributed to this research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chen, H.; Zhu, M.; Wen, Y.; Xiao, C.; Axel, H.; Cheng, X. An implementable architecture of inland autonomous waterway transportation system. IFAC-PapersOnLine 2021, 54, 37–42.
  2. Ramos, M.A.; Thieme, C.A.; Utne, I.B.; Mosleh, A. Human-system concurrent task analysis for maritime autonomous surface ship operation and safety. Reliab. Eng. Syst. Saf. 2020, 195, 106697.
  3. Chen, H.; Wen, Y.; Huang, Y.; Xiao, C.; Sui, Z. Edge Computing Enabling Internet of Ships: A Survey on Architectures, Emerging Applications, and Challenges. IEEE Internet Things J. 2024, 1.
  4. Jeevan, J.; Ramamoorthy, K.; Salleh, N.H.M.; Hu, Y.; Park, G.-K. Implication of e-navigation on maritime transportation efficiency. WMU J. Marit. Aff. 2020, 19, 73–94.
  5. Aslam, S.; Michaelides, M.P.; Herodotou, H. Internet of ships: A survey on architectures, emerging applications, and challenges. IEEE Internet Things J. 2020, 7, 9714–9727.
  6. Liu, G.; Perez, R.; Muñoz, J.; Regueira, F. Internet of ships: The future ahead. World J. Eng. Technol. 2016, 4, 220.
  7. Liu, R.W.; Guo, Y.; Nie, J.; Hu, Q.; Xiong, Z.; Yu, H.; Guizani, M. Intelligent Edge-Enabled Efficient Multi-Source Data Fusion for Autonomous Surface Vehicles in Maritime Internet of Things. IEEE Trans. Green Commun. Netw. 2022, 6, 1574–1587.
  8. Fiorini, M.; Galloro, M. Initial descriptions of e-navigation Common Shore-based System Architecture (CSSA). In Proceedings of the 2022 IEEE 9th International Workshop on Metrology for AeroSpace (MetroAeroSpace), Pisa, Italy, 27–29 June 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 169–173.
  9. Malyankar, R.M.; Shea, K.M.; Spalding, J.W.; Lewandowski, M.J.; Baddam, A.R. Managing heterogeneous models and schemas in the Waterway Information Network. In Proceedings of the 2003 Annual National Conference on Digital Government Research, Boston, MA, USA, 18–21 May 2003; pp. 1–4.
  10. Chang, S.H.; Mao-Sheng, H. A novel software-defined wireless network architecture to improve ship area network performance. J. Supercomput. 2017, 73, 3149–3160.
  11. Verberght, E.; Rashed, Y.; van Hassel, E.; Vanelslander, T. Modeling the impact of the River Information Services Directive on the performance of inland navigation in the ARA Rhine Region. Eur. J. Transp. Infrastruct. Res. 2022, 22, 53–82.
  12. Wang, H.; Liu, T.; Kim, B.G.; Lin, C.-W.; Shiraishi, S.; Xie, J.; Han, Z. Architectural design alternatives based on cloud/edge/fog computing for connected vehicles. IEEE Commun. Surv. Tutor. 2020, 22, 2349–2377.
  13. Arthurs, P.; Gillam, L.; Krause, P.; Wang, N.; Halder, K.; Mouzakitis, A. A taxonomy and survey of edge cloud computing for intelligent transportation systems and connected vehicles. IEEE Trans. Intell. Transp. Syst. 2021, 23, 6206–6221.
  14. Jiang, M.; Wu, T.; Wang, Z.; Gong, Y.; Zhang, L.; Liu, R.P. A multi-intersection vehicular cooperative control based on end-edge-cloud computing. IEEE Trans. Veh. Technol. 2022, 71, 2459–2471.
  15. Zhou, X.; Ke, R.; Yang, H.; Liu, C. When Intelligent Transportation Systems Sensing Meets Edge Computing: Vision and Challenges. Appl. Sci. 2021, 11, 9680.
  16. Song, X.; Guo, Y.; Li, N.; Zhang, L. Online traffic flow prediction for edge computing-enhanced autonomous and connected vehicles. IEEE Trans. Veh. Technol. 2021, 70, 2101–2111.
  17. Liu, J.; Ahmed, M.; Mirza, M.A.; Khan, W.U.; Xu, D.; Li, J.; Aziz, A.; Han, Z. RL/DRL meets vehicular task offloading using edge and vehicular cloudlet: A survey. IEEE Internet Things J. 2022, 9, 8315–8338.
  18. Pan, J.; McElhannon, J. Future edge cloud and edge computing for internet of things applications. IEEE Internet Things J. 2017, 5, 439–449.
  19. Cao, B.; Sun, Z.; Zhang, J.; Gu, Y. Resource allocation in 5G IoV architecture based on SDN and fog-cloud computing. IEEE Trans. Intell. Transp. Syst. 2021, 22, 3832–3840.
  20. Feng, C.; Han, P.; Zhang, X.; Yang, B.; Liu, Y.; Guo, L. Computation offloading in mobile edge computing networks: A survey. J. Netw. Comput. Appl. 2022, 202, 103366.
  21. Gasmi, K.; Dilek, S.; Tosun, S.; Ozdemir, S. A survey on computation offloading and service placement in fog computing-based IoT. J. Supercomput. 2022, 78, 1983–2014.
  22. Liu, L.; Chang, Z.; Guo, X.; Mao, S.; Ristaniemi, T. Multi-objective optimization for computation offloading in fog computing. IEEE Internet Things J. 2017, 5, 283–294.
  23. Wang, Z.; Zhao, D.; Ni, M.; Li, L.; Li, C. Collaborative mobile computation offloading to vehicle-based cloudlets. IEEE Trans. Veh. Technol. 2020, 70, 768–781.
  24. Luo, Q.; Li, C.; Luan, T.H.; Shi, W.; Wu, W. Self-learning based computation offloading for internet of vehicles: Model and algorithm. IEEE Trans. Wirel. Commun. 2021, 20, 5913–5925.
  25. Pliatsios, D.; Sarigiannidis, P.; Lagkas, T.D.; Argyriou, V.; Boulogeorgos, A.-A.A.; Baziana, P. Joint wireless resource and computation offloading optimization for energy efficient internet of vehicles. IEEE Trans. Green Commun. Netw. 2022, 6, 1468–1480.
  26. Huang, X.; Leng, S.; Maharjan, S.; Zhang, Y. Multi-agent deep reinforcement learning for computation offloading and interference coordination in small cell networks. IEEE Trans. Veh. Technol. 2021, 70, 9282–9293.
  27. Lyu, Y.; Liu, Z.; Fan, R.; Zhan, C.; Hu, H.; An, J. Optimal Computation Offloading in Collaborative LEO-IoT Enabled MEC: A Multi-agent Deep Reinforcement Learning Approach. IEEE Trans. Green Commun. Netw. 2022, 7, 996–1011.
  28. Pan, C.; Wang, Z.; Liao, H.; Zhou, Z.; Wang, X.; Tariq, M.; Al-Otaibi, S. Asynchronous Federated Deep Reinforcement Learning-Based URLLC-Aware Computation Offloading in Space-Assisted Vehicular Networks. IEEE Trans. Intell. Transp. Syst. 2022, 24, 7377–7389.
  29. Bi, S.; Huang, L.; Wang, H.; Zhang, Y.-J.A. Lyapunov-guided deep reinforcement learning for stable online computation offloading in mobile-edge computing networks. IEEE Trans. Wirel. Commun. 2021, 20, 7519–7537.
  30. Chen, Z.; Zhang, L.; Pei, Y.; Jiang, C.; Yin, L. NOMA-Based Multi-User Mobile Edge Computation Offloading via Cooperative Multi-Agent Deep Reinforcement Learning. IEEE Trans. Cogn. Commun. Netw. 2021, 8, 350–364.
  31. Chen, H.; Wen, Y.; Zhu, M.; Huang, Y.; Xiao, C.; Wei, T.; Hahn, A. From automation system to autonomous system: An architecture perspective. J. Mar. Sci. Eng. 2021, 9, 645.
  32. Chen, H.; Wen, Y.; Zhu, M.; Huang, Y.; Xiong, W.; Xiao, C.; Lu, Z. A Function-Oriented Electronic and Electrical Architecture of Remote Control Ship on Inland River: Design, Verification, and Evaluation. IEEE Trans. Transp. Electrif. 2022, 9, 1641–1652.
  33. Sun, F.; Hou, F.; Cheng, N.; Wang, M.; Zhou, H.; Gui, L.; Shen, X. Cooperative Task Scheduling for Computation Offloading in Vehicular Cloud. IEEE Trans. Veh. Technol. 2018, 67, 11049–11061.
  34. Mabrouk, A.; Naja, A.; Oualhaj, O.A.; Kobbane, A.; Boulmalf, M. A Cooperative Game Based Mechanism for Autonomous Organization and Ubiquitous Connectivity in VANETs. Simul. Model. Pract. Theory 2020, 107, 102213.
  35. Zhou, T.; Qin, D.; Nie, X.; Li, X.; Li, C. Energy-Efficient Computation Offloading and Resource Management in Ultradense Heterogeneous Networks. IEEE Trans. Veh. Technol. 2021, 70, 13101–13114.
  36. Kien, C.V.; Anh, H.; Son, N.N. Adaptive inverse multilayer fuzzy control for uncertain nonlinear system optimizing with differential evolution algorithm. Appl. Intell. 2021, 51, 527–548.
  37. Xu, X.; Hao, J.; Zheng, Y. Multi-objective artificial bee colony algorithm for multi-stage resource leveling problem in sharing logistics network. Comput. Ind. Eng. 2020, 142, 106338.
  38. Hamdi, M.; Zaied, M. Resource allocation based on hybrid genetic algorithm and particle swarm optimization for D2D multicast communications. Appl. Soft Comput. 2019, 83, 105605.
Figure 1. The cloud–shore–ship collaboration framework integrating IoS and cloud/edge computing.
Figure 2. The basic task processing of the cloud–shore–ship collaboration framework.
Figure 3. The diagram of cloud–shore–ship collaboration task processing.
Figure 4. The task offloading process of MASS.
Figure 5. (a) The processing flow of serial tasks; (b) the processing flow of ring tasks.
Figure 6. The Q-learning-based task offloading algorithm for cloud–shore–ship collaboration.
Figure 7. The average task delay of the four algorithms with different task numbers.
Figure 8. The system energy consumption of the four algorithms with different task numbers.
Figure 9. The impact of the number of ships on the average task delay.
Figure 10. The impact of the number of ships on the system energy consumption.
Figure 11. The impact of the task data volume on the average task delay.
Figure 12. The impact of the task data volume on the system energy consumption.
Figure 13. The impact of the maximum task delay on the average task delay.
Figure 14. The impact of the maximum task delay on the system energy consumption.
Figure 15. The impact of the maximum communication power on the average task delay.
Figure 16. The impact of the maximum communication power on the system energy consumption.
Table 1. Simulation parameters.

| Parameter Type    | Parameter Name                          | Value           |
|-------------------|-----------------------------------------|-----------------|
| System parameters | Slot T                                  | 100 s           |
|                   | Slot length τ                           | 0.1             |
|                   | Bandwidth B_k                           | 1 MHz           |
|                   | Link interference δ²                    | −120            |
|                   | Weights V                               | 10              |
|                   | Explore factor ε                        | 0.1             |
|                   | Discount factor γ                       | 0.95            |
|                   | Task data volume A                      | [1.0, 6.0] Mbit |
| Ship parameters   | Computing capability f_n                | 1000 cycles/bit |
|                   | Queuing delay threshold d_n^Q           | 0.2 s           |
|                   | Transmission power P_n^t                | 0.4             |
| Shore parameters  | Computing capability f_{n,m}^{et}       | [4, 8] GHz      |
|                   | Queuing delay threshold d_{n,m}^{su}    | 0.2 s           |
| Cloud parameters  | Computing capability f_{n,m}^{ct}       | 10 GHz          |
|                   | Queuing delay threshold d_m^c           | 0.2 s           |
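To make the Q-learning setup behind Table 1 concrete, the following is a minimal Python sketch of an ε-greedy tabular Q-learning update using the explore factor ε = 0.1 and discount factor γ = 0.95 listed above. The action set (local/shore/cloud offloading targets), the learning rate, the state encoding, and the reward are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import random
from collections import defaultdict

# Hyperparameters from Table 1; ALPHA is not listed there and is assumed.
EPSILON = 0.1   # explore factor ε
GAMMA = 0.95    # discount factor γ
ALPHA = 0.5     # learning rate (illustrative assumption)

# Hypothetical offloading targets in a cloud–shore–ship setting.
ACTIONS = ["local", "shore_edge", "cloud"]

# Tabular Q-function: Q[(state, action)] -> estimated value, default 0.
Q = defaultdict(float)

def choose_action(state):
    """ε-greedy selection: explore with probability ε, else pick the best-known action."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One-step Q-learning update: Q <- Q + α (r + γ max_a' Q(s', a') - Q(s, a))."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Toy usage: one fictitious decision step; the reward would combine
# task delay and energy consumption in the paper's setting.
state = ("task_3.0Mbit", "queue_ok")
action = choose_action(state)
update(state, action, reward=-1.2, next_state=("task_1.5Mbit", "queue_ok"))
```

In the paper's setting, the reward would penalize task delay and system energy consumption jointly, so that the learned policy favors offloading decisions that reduce both; the sketch above only fixes the exploration and discounting behavior implied by Table 1.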
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
