1. Introduction
With the development of information and communication technologies such as wireless sensor networks [1], the industrial Internet of Things (IIoT) [2,3], and cloud computing [4], manufacturing technology has been transformed and upgraded, shifting traditional manufacturing toward intelligent manufacturing. The production equipment, transmission devices, sensors, and other terminal devices in intelligent production lines are connected through various heterogeneous communication networks, evolving the traditional information islands into an interconnected model. Meanwhile, the extensive use of intelligent devices generates large numbers of tasks that require real-time processing [5]. Cloud computing was initially considered the primary enabler for processing the massive data generated by IIoT devices. However, cloud computing suffers from several problems, the two most prominent of which are as follows. First, transferring large-scale data from IIoT devices to the cloud may be ineffective, and in some cases even infeasible, due to bandwidth limitations. Second, the considerable geographic distance between intelligent edge devices and the cloud service center may lead to high service delays [6], violating the quality-of-service requirements of customer requests, such as ultra-low-latency requests in intelligent production lines.
A computing paradigm closer to the connected devices is needed to solve the problems mentioned above. Fog computing (edge computing), an extension and improvement of cloud computing, deploys fog nodes with certain computing and storage capabilities near terminal devices, enabling cloud services to migrate to the edge of the network for faster responses to time-sensitive tasks [7]. However, fog computing cannot completely replace cloud computing; rather, the two technologies can work together to reduce latency and response time and improve reliability, and they are widely used in many fields [8]. For example, in [9], a strategy based on cloud-fog computing was proposed for the virtual reality system of an Industry 4.0 shipyard.
Several urgent problems remain in cloud-fog computing and intelligent production line task scheduling. There are considerable differences in the computing power, storage, and communication capabilities of the various fog nodes in a cloud-fog computing environment, and the tasks generated by terminal devices are highly heterogeneous in their real-time requirements and energy consumption [10,11]. In the intelligent production line, different task service sequences introduce different delays. In particular, for time-sensitive tasks such as production line early warning, the high delays caused by an unreasonable task scheduling strategy can have catastrophic results.
On the other hand, in intelligent production lines many fog nodes are battery-powered, and different task scheduling strategies lead to different energy consumption, which inevitably brings problems. For example, a study found that when a device cannot be recharged in time, frequent data exchange, transmission, and processing can significantly shorten the battery's life due to instantaneous discharge, thereby creating a data leakage security risk [12]. Completing tasks with low delay while effectively reducing the power consumption of fog nodes is a tremendous challenge for intelligent production lines. However, our literature survey found few studies on task scheduling for intelligent production lines. In this work, we take latency and energy minimization as the optimization objectives for task scheduling in the intelligent production line, considering the different time sensitivities of the various task types and the heterogeneous running power consumption of the computing nodes.
The task scheduling problem is a challenging non-deterministic polynomial-time hard (NP-hard) problem [13]. Hybrid heuristic algorithms can combine the advantages of multiple heuristic algorithms to solve the task scheduling problem with high accuracy [14]. Therefore, we use a hybrid heuristic algorithm to solve the proposed optimization problem, making efficient use of cloud computing resources and reducing the overall consumption of computing resources while satisfying low-latency requirements. The main contributions of our study are as follows:
We present a multi-objective task scheduling optimization problem in intelligent production lines. A multi-priority task scheduling strategy based on a cloud-fog computing architecture is used to solve this problem, achieving a fast response to intelligent production line tasks and reducing energy consumption.
A new task scheduling algorithm hybridizing MBO and ACO is implemented in our study. The improved MBO and ACO converge more easily. More importantly, this is the first time that MBO has been applied to task scheduling scenarios in intelligent production lines.
We establish an intelligent production line simulation experiment platform based on C++ and evaluate the proposed algorithm. The results show that it is superior to other strategies in terms of average delay and power consumption.
The remainder of the paper is organized as follows. In Section 2, we describe the related work. In Section 3, we introduce the system model and problem formulation. In Section 4, we propose a task scheduling algorithm. Section 5 discusses the performance evaluation. Finally, in Section 6, we give a brief conclusion.
2. Related Work
In recent years, with the continuous development of fog computing and the growing real-time and energy consumption requirements of terminal equipment, cloud-fog computing has become a trend, and task scheduling under cloud-fog computing has become a research hotspot. We review the relevant studies on task scheduling for cloud and fog computing below.
2.1. Cloud Computing Task Scheduling
Cloud computing provides rich computing, storage, and other application services for industrial production, but brings huge energy consumption. With increasing attention being paid to carbon neutrality [15], it is imperative to improve the task allocation efficiency of cloud computing and reduce energy consumption in industry. To obtain the best task scheduling performance in cloud computing, Rajakumari et al. [16] proposed a fuzzy hybrid particle swarm parallel ant colony algorithm, which improved task scheduling with the objectives of minimizing execution and waiting time, increasing system throughput, and maximizing resource utilization. However, the study did not consider energy efficiency. Under the premise of guaranteeing cloud computing service quality, Rao et al. [17] achieved coordinated, energy-minimizing data center scheduling. Lin et al. [18] proposed two IoT-aware multi-resource task scheduling algorithms that reorder tasks based on priority and schedule them using heuristic algorithms; their simulation results showed that this method can reduce energy consumption while ensuring the response time and load balancing of IoT tasks. Although these two studies guide task allocation, neither considered the differences in the power consumption of the computing units themselves.
2.2. Fog Computing Task Scheduling
Fog nodes differ in distribution and computing capacity, and effectively scheduling the tasks requested by terminal devices can reduce service delay and energy consumption [19]. In the field of intelligent manufacturing, Mithun et al. [20] proposed a solution to the fog computing task offloading problem: the optimization problem was modeled mathematically as a quadratically constrained quadratic program and solved by semidefinite relaxation. Chekired et al. [21] proposed a self-adaptive multi-objective task scheduling method for fog computing, which solves the multi-objective optimization problem of fog computing task scheduling with the total execution time and resource cost of tasks as the optimization objectives. Both studies provide excellent ideas for reducing task processing and waiting times, but neither reduces the task processing power consumption. Hang et al. [22] proposed a joint computation offloading and wireless resource allocation algorithm based on Lyapunov optimization to minimize the system delay, energy consumption, the mobile devices' weighted costs, and other associated costs. However, the study ignored the interaction between the cloud center and the fog nodes; it only divided the main problem into several sub-problems in each time slot and allocated them to different fog nodes for computation. If a task requires a large amount of computing resources and cannot be divided, such as intelligent production line image processing, a resource-constrained fog node cannot process it, and the task cannot be completed. For the task offloading problem in fog computing, Keshavarznejad et al. [23] formulated a multi-objective optimization problem over energy consumption and delay, which was solved using a hybrid heuristic algorithm; the results showed the best trade-off between the offloading probability and the energy required for data transmission. Regrettably, this approach did not categorize tasks in order to respond quickly to urgent ones.
2.3. Cloud-Fog Computing Environment Task Scheduling
In cloud-fog computing, scheduling IoT tasks to reduce the delay and energy of time-sensitive tasks has attracted the attention of researchers [24]. Abdelmoneem et al. [25] proposed a mobility-aware task scheduling and allocation method under the cloud-fog computing paradigm, which greatly reduced task processing energy consumption and task delay. This method effectively solves the task assignment of sensing devices in mobile scenarios; however, it is unsuitable for settings such as intelligent production lines, where the sensors are mostly fixed. Mokney et al. [26] studied IoT tasks with dependencies under cloud-fog computing, modeled workflow planning as a multi-objective optimization problem, and designed a compromise solution with respect to response time, cost, and maximum completion time. The proposed algorithm is superior in solving scheduling problems with dependent task flows, but it does not address the scheduling of independent tasks, and the Pareto-optimal solutions it obtains cannot satisfy urgent tasks that require ultra-low response times. Bisht et al. [27] studied rapid task response in the cloud-fog computing environment and proposed a workflow scheduling method that minimizes the maximum completion time and energy consumption. Their scheduling was based on task length and could not respond quickly to urgent and complex tasks.
Based on the above studies, we find that the task scheduling problem in the cloud-fog computing environment is a research hotspot in the IoT field, and the existing research cannot meet the requirements of low latency and low power consumption for multi-priority task scheduling in intelligent production lines.
2.4. Heuristic Algorithm to Solve the Task Scheduling Problem
Heuristic algorithms, a branch of artificial intelligence, are popular for solving various optimization problems and are often used for task scheduling [28,29]. Common heuristic algorithms include ant colony optimization (ACO) [30], the genetic algorithm (GA) [31], particle swarm optimization (PSO) [32], the simulated annealing algorithm (SAA) [33], the Grey Wolf Optimizer (GWO) [34], the monarch butterfly optimization algorithm (MBO) [35], and so on. Among these algorithms, MBO, with its simple computational procedure and few parameters, is the easiest to implement; it is suitable for small-scale search problems and is widely used in many fields [36]. ACO can search on a large scale, and with an improved search process it offers excellent exploration and exploitation capabilities in the stage of generating the optimal solution [28].
Many scholars have proposed combining multiple single heuristics into a hybrid algorithm to obtain better task scheduling performance. In [37], a meta-heuristic service allocation framework was designed to schedule edge service requests using three meta-heuristic techniques: PSO, binary PSO, and the Bat algorithm. The experimental results showed that the framework solves the dual-objective minimization problem of energy consumption and maximum completion time. The authors of [38] proposed a hybrid bionic algorithm for cloud computing task scheduling and resource management. Fu et al. [39] improved the service quality of cloud computing by adopting a task scheduling optimization algorithm hybridizing PSO and GA. There are many works on cloud computing task scheduling algorithms, but few studies on hybrid heuristic scheduling algorithms for task scheduling under cloud-fog computing.
Inspired by the above studies, we use a hybrid heuristic algorithm of MBO and ACO to solve the intelligent production line task scheduling problem.
3. System Model and Problem Formulation
In this section, we present the mathematical description of the task scheduling problem for intelligent production lines under cloud-fog computing.
3.1. System Architecture
We built a cloud-fog computing architecture to address the increasing number of delay-sensitive and computationally intensive tasks in intelligent production lines. The system architecture is given in Figure 1; it consists of three layers: the infrastructure layer, the fog computing layer, and the cloud computing layer.
Infrastructure layer: The infrastructure layer consists of terminal devices with different functions, such as various sensors, processing devices and various smart terminals. Smart terminals handle simple tasks locally but are unable to perform complex tasks in real time.
Fog computing layer: The fog computing layer is mainly composed of fog nodes. These are servers with certain computing, communication, and storage capabilities in intelligent production lines, such as smart sensors, smart processing devices, and intelligent multimedia devices. This layer can sense the requests of intelligent production line terminals and provide various services in real time, which can greatly reduce the delay of task processing and ensure the quality of service of real-time applications.
Cloud computing layer: The cloud computing layer consists of clusters with huge computing and storage capacity, providing remote services for intelligent production lines to handle complex computing tasks.
3.2. System Model
3.2.1. Description of System Model
In fields that require highly real-time task processing, such as intelligent production lines and smart hospitals, ensuring task processing reliability has always been a crucial issue [40]. Various data tasks, such as material information reading and multi-axis robot posture analysis, must be analyzed and processed in real time while the intelligent production line is running. For example, the intelligent production line for personalized candy packaging realizes arbitrary combinations of candies of different shapes, colors, and quantities. The machine-vision-based candy sorting system is the key to completing the candy packaging, and the data uploaded by its image acquisition module must be analyzed and processed in real time during operation.
Figure 2 depicts the flow of image processing.
However, the processing and analysis capabilities of fog nodes are limited. As the number of tasks to be processed increases, an unreasonable task allocation mechanism often leads to increased task delay, a decreased completion rate, and increased energy consumption. Consequently, we propose a task scheduling algorithm based on cloud-fog computing. The proposed algorithm uses a hybrid heuristic scheduling algorithm to reasonably allocate tasks to fog nodes and cloud servers, thus solving the above problems. The tasks generated by the terminals of the intelligent production line are processed in the cloud-fog computing environment as shown in Figure 3. We classify the tasks generated by the terminal devices according to their urgency and sort them by priority. The scheduling algorithm then distributes the sorted tasks to the fog nodes and cloud servers to ensure that as many tasks as possible are served.
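The classification-and-sorting step described above can be sketched as follows. This is a minimal illustration in C++ (the language of our simulation platform); the Task structure, field names, and function name are our own assumptions, not the paper's implementation:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical task record: priority 1 = high (delay-sensitive), 0 = low.
struct Task {
    int id;
    int priority;
};

// Move high-priority tasks to the front of the buffer list while keeping
// the arrival order within each priority class (a stable partition).
std::vector<Task> reorderByPriority(std::vector<Task> buffer) {
    std::stable_partition(buffer.begin(), buffer.end(),
                          [](const Task& t) { return t.priority == 1; });
    return buffer;
}
```

The scheduler would then dispatch the reordered buffer to fog nodes and the cloud, so that urgent tasks are assigned first.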
3.2.2. Latency Model and Energy Consumption Model
First, we assume that there are $N$ independent tasks $T = \{t_1, t_2, \ldots, t_N\}$, where each task $t_i$ has a priority $p_i$. Each task $t_i$ can be expressed as a three-tuple $t_i = (d_i^{in}, c_i, d_i^{out})$, where $d_i^{in}$ represents the input data volume of the task (in kbit), $c_i$ defines the computational load of the task (in million instructions), and $d_i^{out}$ indicates the output data volume of the task (in kbit). We denote by $M$ the number of service nodes $F = \{f_1, f_2, \ldots, f_M\}$, which include the fog nodes and one cloud service; for convenience of numbering, we let $f_M$ denote the cloud service. We define a binary variable $a_{j,i}$ to indicate whether node $f_j$ can handle service request $t_i$:

$$a_{j,i} = \begin{cases} 1, & \text{if node } f_j \text{ can serve task } t_i, \\ 0, & \text{otherwise.} \end{cases} \tag{1}$$
Since fog nodes have heterogeneous resources, they provide different types of computing services for terminal devices. To allow each task to be served, we ensure that at least one node can serve each task when setting up the nodes. Formally, we have:

$$\sum_{j=1}^{M} a_{j,i} \ge 1, \quad \forall i \in \{1, \ldots, N\}. \tag{2}$$
Data transmission from the terminal device to a node is limited by the transmission speed, and the transmission rate is usually defined as follows:

$$r_j = \frac{B}{n} \log_2\!\left(1 + \frac{P_j^t \, g}{\sigma^2}\right), \tag{3}$$

where $P_j^t$ is the transmission power of the fog node, $\sigma^2$ is the Gaussian noise power in the channel, $B$ is the bandwidth provided by the access network, $g$ is the channel gain parameter between the fog nodes, and $n$ is the number of transmissions sharing the channel simultaneously. In this study, we assume that the channel gain parameters between all fog nodes are the same. In addition, it can be seen from Equation (3) that when a channel is used by several transmissions at the same time, the data transmission rate is reduced.
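As an illustration, the rate model of Equation (3) can be evaluated as in the following sketch; all identifier names are our own, and the sketch assumes consistent units so that the signal-to-noise ratio is dimensionless:

```cpp
#include <cassert>
#include <cmath>

// Sketch of the transmission-rate model in Equation (3).
// bandwidthHz is B, txPowerW is the node's transmission power P_j^t,
// channelGain is g, noisePowerW is the Gaussian noise power, and
// nSharing is the number of transmissions sharing the channel.
double transmissionRate(double bandwidthHz, double txPowerW,
                        double channelGain, double noisePowerW,
                        int nSharing) {
    double snr = txPowerW * channelGain / noisePowerW;
    return (bandwidthHz / nSharing) * std::log2(1.0 + snr);
}
```

Note how the achievable rate shrinks as `nSharing` grows, which is the channel-sharing effect mentioned after Equation (3).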
We let $x_{j,i} \in \{0,1\}$ denote the scheduling decision, with $x_{j,i} = 1$ if task $t_i$ is assigned to node $f_j$. Then, the total transmission time of node $f_j$, covering both receiving the tasks and sending back the results after processing, is expressed as:

$$T_j^{tran} = \sum_{i=1}^{N} x_{j,i} \, \frac{d_i^{in} + d_i^{out}}{r_j}. \tag{4}$$
We use $C_j$ to define the ability of each node to process tasks, in MIPS. The time for node $f_j$ to complete the processing of its assigned tasks is the time during which it actively serves them, so we obtain:

$$T_j^{proc} = \sum_{i=1}^{N} x_{j,i} \, \frac{c_i}{C_j}. \tag{5}$$
Since all tasks are served in the allocated order, each subsequent task at a node must wait for the previous ones to be completed before being served. Therefore, the queuing time of task $t_i$ at node $f_j$ is:

$$T_{j,i}^{wait} = \sum_{k=1}^{i-1} x_{j,k} \, \frac{c_k}{C_j}. \tag{6}$$
Then, the total queuing time of the tasks at each node is:

$$T_j^{wait} = \sum_{i=1}^{N} x_{j,i} \, T_{j,i}^{wait}. \tag{7}$$
From Equations (4)–(7), the total time for each node to complete all of its scheduled tasks can be expressed as:

$$T_j = T_j^{tran} + T_j^{proc} + T_j^{wait}. \tag{8}$$
It should be noted that since the nodes work independently of each other, the time for all tasks to complete is the longest completion time among the nodes. The maximum delay can be expressed as:

$$T_{max} = \max_{1 \le j \le M} T_j. \tag{9}$$
In this study, we use $P_j^t$ to represent the transmission power of node $f_j$. Combined with Equation (4), the transmission energy consumption of each node is:

$$E_j^{tran} = P_j^t \, T_j^{tran}. \tag{10}$$
Similarly, $P_j^s$ represents the service power of node $f_j$. Taking Equation (5) into account, the energy consumption of each node when providing services is:

$$E_j^{proc} = P_j^s \, T_j^{proc}. \tag{11}$$
The power consumed by a task while queuing is negligible, so we do not count the energy consumption while a task is waiting. Then, the total energy consumption of each node is:

$$E_j = E_j^{tran} + E_j^{proc}. \tag{12}$$
From Equations (10)–(12), the total energy consumption generated by the nodes over the entire task scheduling cycle can be expressed as:

$$E = \sum_{j=1}^{M} E_j. \tag{13}$$
3.2.3. Time Delay and Power Consumption Evaluation Model Based on Task Priority
In intelligent production lines, tasks with different degrees of urgency have different response time requirements, and the time tolerance of urgent tasks is usually low. To ensure that urgent tasks are processed quickly, we classify the tasks in the buffer list according to their time tolerance. We simply classify the tasks into two categories: tasks with lower time-delay tolerance are considered high-priority tasks, and the others are considered low-priority tasks. The priority of a task is denoted by $p_i$:

$$p_i = \begin{cases} 1, & \text{if } t_i \text{ is a high-priority task}, \\ 0, & \text{if } t_i \text{ is a low-priority task}. \end{cases}$$
The delay and power consumption of different nodes in the fog computing layer differ when processing the same task. When using heuristics to search for potentially good solutions, the search direction should be adaptively adjusted according to the priority of the task. For high-priority tasks, lower service latency should be the main search direction, while for low-priority tasks with lower latency requirements, lower power consumption should be the search target. Therefore, we construct a latency-power evaluation model based on the different requirements of latency and power consumption for the two priority tasks using the properties of the exponential function.
The evaluation formula is:

$$V_{j,i} = p_i \, e^{\alpha T_{j,i}} + (1 - p_i) \, e^{\beta E_{j,i}}, \tag{14}$$

where $T_{j,i}$ and $E_{j,i}$ are the delay and energy consumption incurred when node $f_j$ serves task $t_i$. From the formula, we can see that for a high-priority task the value of the evaluation function is governed by the time delay, whereas in the opposite case it is governed by the energy consumption. Among them, $\alpha$ and $\beta$ represent the coefficients of latency and energy consumption, respectively.
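A minimal sketch of how a scheduler might score a candidate (node, task) pairing with such a priority-aware exponential evaluation; the function name and parameters are our own illustration, with `priority` being 1 for high-priority tasks and 0 otherwise:

```cpp
#include <cassert>
#include <cmath>

// Priority-aware evaluation: for a high-priority task (priority = 1) the
// score grows exponentially with the candidate delay, while for a
// low-priority task (priority = 0) it grows with the candidate energy.
// alpha and beta are the latency and energy coefficients; delay and
// energy would come from the latency and energy models above.
double evaluate(int priority, double delay, double energy,
                double alpha, double beta) {
    return priority * std::exp(alpha * delay)
         + (1 - priority) * std::exp(beta * energy);
}
```

A search algorithm minimizing this score is therefore steered toward low-delay placements for urgent tasks and low-energy placements for the rest.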
Through the above formulation, our goal becomes clear: it involves the completion time of all tasks and the energy consumption of all nodes. Our goal is to ensure that all tasks are completed on time while minimizing energy consumption, namely:

$$\min_{x} \; F = \left( T_{max}, \; E \right) \tag{15}$$

subject to

$$\sum_{j=1}^{M} a_{j,i} \ge 1, \quad \forall i \in \{1, \ldots, N\}, \tag{16}$$

$$\sum_{i=1}^{N} a_{j,i} \ge 1, \quad \forall j \in \{1, \ldots, M\}, \tag{17}$$

$$\sum_{j=1}^{M} x_{j,i} = 1, \quad x_{j,i} \in \{0,1\}, \quad \forall i \in \{1, \ldots, N\}, \tag{18}$$

$$T_j \le T^{max}, \quad \forall j \in \{1, \ldots, M\}, \tag{19}$$

$$E \le E^{max}, \tag{20}$$

where constraints (16) and (17) ensure that each task can be served and that each node provides at least one service. In (18), $x_{j,i}$ represents the correspondence between node $f_j$ and task $t_i$ after scheduling, and we constrain each task to be served by exactly one node. Constraints (19) and (20) bound the latency and energy consumption, respectively, where $T^{max}$ and $E^{max}$ denote the latency and energy budgets.
5. Performance Evaluation
In this section, we perform simulations to verify the feasibility of the proposed method. We present the simulation environment and compare the performance of the proposed strategy with that of conventional methods. The results validate the effectiveness of the proposed strategy.
5.1. Simulation Settings
The experiments are conducted on a computer with a 3.7 GHz AMD Ryzen 5 3400G CPU and 16 GB of memory. We build a simulation platform for the cloud-fog computing architecture and task scheduling model in C++, based on a task scheduling scenario in the intelligent production line. To facilitate the performance comparison between algorithms, we set the transmission rate between fog nodes to 3 M/s and that between the cloud node and fog nodes to 10 M/s. The simulation parameters are shown in Table 1, and the parameter settings of Algorithms 1 and 2 are shown in Table 2.
5.2. Performance Evaluations
To evaluate our proposed HMA task scheduling algorithm, we compare it with two existing scheduling methods: first-come-first-served (FCFS) scheduling [45] and a cloud-only service method [18]. We also compare the performance of the IMBO and IACO algorithms running alone. Finally, we experiment with the priority-based task reordering strategy. We execute each algorithm 30 times and average the results to reduce the error due to randomness.
We summarize the results of previous studies and identify three evaluation indicators. The first is the maximum completion time [37,46,47,48,49], which is the time required to complete the last task. The second is energy consumption [37,47,48], which is the total energy required to complete all tasks. The third is the task completion rate (CR), which is the number of tasks successfully completed within the maximum tolerance time divided by the total number of tasks, and can be expressed by the following formula:

$$CR = \frac{N_{done}}{N}, \tag{26}$$

where $N_{done}$ is the number of tasks completed within the maximum tolerance time and $N$ is the total number of tasks.
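The completion-rate metric can be computed as in the following sketch; the `Finished` record and its field names are assumptions for illustration, with each task carrying the tolerance of its priority class:

```cpp
#include <cassert>
#include <vector>

// A finished task: its measured finish time and the maximum tolerance
// time of its priority class (as listed in Table 1).
struct Finished {
    double finishTime;
    double tolerance;
};

// CR = (tasks finished within their tolerance) / (total tasks).
double completionRate(const std::vector<Finished>& tasks) {
    if (tasks.empty()) return 0.0;
    int done = 0;
    for (const Finished& t : tasks)
        if (t.finishTime <= t.tolerance) ++done;
    return static_cast<double>(done) / tasks.size();
}
```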
Next, we compare the five algorithms under the three performance indicators. The number of fog nodes is set to 5, 10, and 20, and the number of tasks is set to 10, 20, 30, 40, 50, 60, 70, 80, 90, and 100.
5.2.1. Maximum Completion Time
For each experiment, we set a timer for each cloud/fog node that starts when the node's first task arrives and stops when its last task is completed. At the end of a task cycle, we compare the times recorded by all nodes and select the largest one as the maximum completion time of that task scheduling method.
Figure 6 illustrates the maximum completion time of the five task scheduling strategies, where Figure 6A–C show the results with task sorting and Figure 6D–F those without. As the number of tasks increases, the task completion time increases; in contrast, as the number of fog nodes increases, it decreases. Furthermore, the cloud-only method has the highest completion time, which may be due to network congestion caused by long-distance transmission between the cloud and the terminal devices: although the cloud server has the strongest processing performance, it takes the most time. The remaining algorithms reduce the huge communication latency by offloading tasks to the fog nodes, confirming that the cloud-fog computing architecture is an effective way to reduce the latency of intelligent production line tasks. When our proposed task reordering strategy is used, the latency of IMBO, IACO, and HMA is lower than without it, which shows that the priority-based task reordering strategy can reduce the latency of time-sensitive tasks. After sorting, the proposed HMA ensures that all tasks are completed within the maximum tolerance time when the number of tasks is at most 30. Furthermore, FCFS scheduling ignores the performance differences between nodes, resulting in the second-highest delay cost.
5.2.2. Energy Consumption
During the experiment, when the task starts to execute, we calculate the energy consumption required to process each task according to Equation (12). After the task is completed, we use Equation (13) to obtain the energy consumed by all tasks to complete in one task cycle.
Figure 7 compares the energy consumption of the five task scheduling strategies; similarly, Figure 7A–C show the results with task sorting and Figure 7D–F those without. Energy consumption increases with the number of tasks and fog nodes. Here, only the running power consumption of cloud processing tasks is calculated and the static power consumption of the server is ignored, so the energy consumption of the cloud-only service is the lowest. In general, energy consumption is related to power and running time and decreases as the completion time shortens. The energy consumption in Figure 7A–C is much lower than that in Figure 7D–F, which shows that our proposed task reordering strategy reduces the overall execution time of the tasks. It is also worth noting that the energy consumption of HMA is significantly lower than that of FCFS: traditional scheduling strategies such as FCFS do not consider the load of the fog nodes during task allocation, resulting in unbalanced load distribution and higher energy consumption, whereas Equation (14) enables HMA to adaptively adjust the search direction according to the task priority, thus reducing energy consumption. The energy consumption of HMA is less than that of IACO but more than that of IMBO, which shows that in large-scale search problems HMA has a stronger search capability than IMBO. The resulting solutions better balance the time constraints and power consumption at the cost of a slight increase in energy consumption, which we consider acceptable.
5.2.3. Task Completion Rate
In our experiments, we set a timer for each task to track the time from its assignment to its completion. After all tasks are completed, a counter records the number of tasks finished within the maximum tolerance time $T^{tol}$: if a task's completion time is less than $T^{tol}$, the counter is incremented by one. $T^{tol}$ differs for tasks with different priorities, as listed in Table 1. Finally, we calculate the task completion rate using Equation (26).
Figure 8 shows the completion rate of all tasks within the task tolerance time; as before, Figure 8A–C show the results after sorting the tasks and Figure 8D–F those before sorting. According to Equation (26), the trend of the task completion rate is consistent with that of the maximum completion time: as the number of tasks increases, the completion rate decreases, and as the number of fog nodes increases, the completion rate increases. Whether or not the task reordering strategy is adopted, the task completion rate of HMA is always the highest, because HMA balances time and task delay through task priority when scheduling, ensuring the prioritized execution of time-sensitive tasks. When the number of tasks exceeds 30, the completion rate of the IMBO strategy falls behind FCFS: as the solution space grows, the IMBO algorithm is prone to falling into suboptimal solutions. IACO matches HMA in completion rate. The cloud-only method ranks last due to its large latency, which results in a low completion rate.
We also compared the completion rates of high-priority tasks to show the responsiveness of the different algorithms to urgent tasks, as shown in Figure 9. Comparing Figure 9A–C with Figure 9D–F, we find that the task reordering strategy improves the completion rate of high-priority tasks, which meets the requirements of the intelligent production line for time-sensitive tasks. From Figure 9A–C, we can conclude that when the number of tasks is less than 60, the HMA algorithm ensures that all high-priority tasks are completed within the tolerance time. This verifies that the task-priority-based strategy, taking time delay and energy consumption as the optimization goals, converges to the optimal solution better than the other algorithms. When the number of fog nodes is 20, all high-priority tasks are completed even when the number of tasks reaches 100; for large-scale task sets, adding service nodes can effectively improve the task success rate. The completion rates with 5 and 10 nodes show that HMA remains superior when the number of service nodes is limited. Similar to the total completion rate, IACO ranks second, FCFS leads IMBO when the number of tasks is large, and the cloud-only method always ranks last.
In this paper, the proposed HMA scheduling strategy reduces task delay and increases the completion rate while keeping energy consumption as low as possible. The experimental results for completion time, energy consumption, and task completion rate show the same trend, proving the feasibility of the proposed strategy. In summary, the proposed cloud-fog computing architecture and the task-priority-based HMA algorithm provide rapid responses in the intelligent production line while also reducing the system's energy consumption.
6. Conclusions
This paper addresses the task scheduling problem in intelligent production lines. To meet the ultra-low-latency requirement, we establish a mathematical model for intelligent production line task scheduling that achieves ultra-low latency and low power consumption for time-sensitive tasks, and we transform it into a multi-objective optimization of time delay and energy consumption. Combining the advantages of cloud computing and fog computing, we propose a cloud-fog computing architecture for intelligent production lines and develop a priority-based task reordering strategy to ensure that time-sensitive tasks are served first. In addition, we propose the HMA algorithm, a hybrid of the IMBO and IACO algorithms, to solve the optimization problem; this is the first time the MBO algorithm has been used for the task scheduling problem of intelligent production lines. We evaluate the performance of HMA in a simulation environment and find that its advantage grows as the number of tasks increases. When the number of tasks is 100 and the number of nodes is 10, the maximum completion time is only 37.8% of that of the cloud-only method, 59.6% of IMBO, and 69.9% of FCFS, while the power consumption is 82.9% of FCFS, and the task completion rate is 5.3 times that of the cloud-only method, 1.5 times that of IMBO, and 1.25 times that of FCFS. The experimental results show that our proposed strategy can respond to tasks quickly and reduce energy consumption. In the future, we will extend the proposed task scheduling strategy in the cloud-fog computing architecture to the task flow scheduling problem of intelligent production lines.