Article

Memory Allocation Strategy in Edge Programmable Logic Controllers Based on Dynamic Programming and Fixed-Size Allocation

Shandong Computer Science Center (National Supercomputer Center in Jinan), Shandong Key Laboratory of Computer Networks, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250014, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(18), 10297; https://doi.org/10.3390/app131810297
Submission received: 11 August 2023 / Revised: 6 September 2023 / Accepted: 12 September 2023 / Published: 14 September 2023
(This article belongs to the Section Applied Industrial Technologies)

Abstract

With the explosive growth of data at the edge in the Industrial Internet of Things (IIoT), edge devices are increasingly performing more data processing tasks to alleviate the load on cloud servers. To achieve this goal, Programmable Logic Controllers (PLCs) are gradually transitioning into edge PLCs. However, efficiently executing a large number of computational tasks in memory-limited edge PLCs is a significant challenge, which calls for an efficient memory allocation strategy. This paper proposes a dynamic memory allocation strategy for edge PLCs. It organizes memory into small fixed-size blocks to handle memory requests from real-time tasks, and it applies a dynamic programming method, which performs well on resource allocation problems, to handle memory requests from non-real-time tasks. This approach ensures real-time performance while improving the efficiency of non-real-time task processing. In simulation experiments, the algorithm implemented from this allocation strategy is compared with the default method and several open-source memory allocators. The results show that the proposed algorithm, on average, improves the speed of real-time task processing by 13.7% and achieves a maximum speed improvement of 17.0% for non-real-time task processing, demonstrating that the strategy effectively improves memory allocation efficiency in memory-limited environments.

1. Introduction

1.1. Background

The continuous development of the Industrial Internet of Things (IIoT) has greatly improved industrial efficiency. Nowadays, with the continuous growth of IoT networks, cloud computing has become crucial as an optimal way to handle big data processing and complex computation for practical applications. However, cloud computing faces bottlenecks due to the large amount of data generated by advanced IoT applications and the need for fast response time [1]. Processing the large volume of real-time data collected from IIoT terminal devices can lead to high server latency. This can make traditional cloud computing architectures inadequate in meeting the speed requirements of the IIoT [2]. Taking smart cities as an example, with the increasing amount of edge data and the growing demand for encryption, it has been shown that large-scale centralized server clusters in cloud computing are insufficient to meet the needs of rapidly expanding intelligent IoT systems [3].
To address this issue, edge computing has emerged as a new paradigm. Over the past decade, edge computing has grown in popularity and is expected to continue growing in the coming years [4]. However, as edge computing continues to grow, it faces challenges such as search efficiency, reliability requirements, and resource allocation [5]. As edge computing-based solutions rapidly proliferate, many edge computing applications utilize the cloud for data processing and analysis. However, latency-sensitive applications with low-latency requirements and high bandwidth consumption can be inefficient and costly when processed through cloud servers [6].
Edge computing addresses this by deploying distributed computing resources at the network edge [7]. Before edge computing became widespread, the emergence of a large number of mobile devices led to extremely high network latency [8]. Nowadays, by offloading computation to nearby edge nodes with sufficient computing capabilities, terminal devices can receive timely responses that meet user demands [9].
As a traditional industrial controller, the Programmable Logic Controller (PLC) plays a critical role in industrial processes. Traditional PLCs are primarily responsible for performing simple tasks such as real-time control and data collection, with most data processing being carried out on industrial servers or in the cloud. PLCs themselves perform minimal computing tasks. However, with the current edge computing paradigm and the influx of large amounts of field data and more complex new applications [10], PLCs are beginning to require more powerful computing capabilities. Therefore, PLCs are gradually evolving into edge PLCs [11]. The emergence of edge PLCs has effectively enhanced productivity in industrial automation processes [12]. As shown in Figure 1, edge PLCs act as edge nodes close to the data source, performing tasks such as executing control commands and processing data, thus avoiding the high costs of centralized servers. Earlier edge PLCs used a static memory allocation scheme internally, which resulted in low memory utilization and slow processing speeds when handling increasing amounts of edge data. Therefore, a more efficient dynamic memory allocation method is needed to ensure real-time performance and task processing efficiency [13].
By analyzing the working environment of edge PLCs, we can devise a better allocation strategy to maximize the utilization of memory resources within edge PLCs and improve task processing efficiency. In real-world edge PLC production environments, non-real-time tasks often coexist with real-time tasks. In order to ensure real-time performance while maximizing the processing efficiency of non-real-time tasks, we adopt an improved fixed-size allocation algorithm for allocating memory to real-time tasks. At the same time, we model the memory allocation problem of non-real-time tasks as a special knapsack problem and apply a dynamic programming algorithm to find the optimal memory allocation strategy.

1.2. Related Works

Memory allocation is a key technology that directly affects system performance and resource utilization, and improving its efficiency has been studied extensively from different perspectives. Pi et al. [14] proposed a memory allocation mechanism called Hermes, which improves the speed of handling latency-sensitive tasks under memory pressure through fast allocation and proactive memory reclamation. Liu et al. [15] proposed a memory allocation and migration mechanism in a hybrid memory system, which effectively improves system performance by using performance/energy models to guide memory allocation and object migration between NVM and DRAM. Joshi et al. [16] proposed a dynamic degree memory balanced allocation algorithm as a solution for task scheduling. This algorithm schedules virtual machines based on the availability of memory and microprocessors, resulting in significant performance improvements compared to other load balancing algorithms. Lu et al. [17] designed an elastic memory management scheme called Spring Buddy suitable for highly concurrent environments. It significantly reduces the latency of memory allocation and deallocation by adjusting the focus of concurrent response or resource aggregation based on different memory request patterns to adapt to concurrency pressure. Aghilinasab et al. [18] studied memory-intensive applications in real-time and mixed-criticality systems and proposed a scheme to adjust memory bandwidth allocation by dynamically monitoring application progress. Its effectiveness was demonstrated through benchmark testing. Most of these studies on memory allocation assume relatively abundant memory resources, and the resulting algorithms or allocation mechanisms need to operate in an environment with a large amount of available memory.
Dynamic programming, as a mature method for solving optimization problems in decision-making processes, has been widely applied in various fields. In recent research, Xu et al. [19] developed effective energy management strategies using dynamic programming to address a multi-objective optimal control problem involving power loss, battery thermal management requirements, and battery health performance. Koch et al. [20] used approximate dynamic programming methods to solve the constrained outbound service demand management problem, effectively reducing potential penalty losses by coordinating feasible time windows and estimating opportunity costs. Wang et al. [21] modeled the electric bus fleet scheduling problem as a dynamic programming model, significantly reducing battery replacement investments by minimizing battery replacement costs. Ding et al. [22] proposed an energy station planning method for distributed energy systems, optimizing the energy supply range of energy stations using 0-1 dynamic programming and demonstrating its effectiveness in reducing investment and costs through a campus area case study. Multiple studies have shown that dynamic programming has good performance in resource allocation problems under limited dimensions.
PLCs, as an important technological foundation for industrial process automation, can be assumed to continue to be widely used in future production, even in the era of Industry 4.0 and the Industrial Internet [23]. The concept of edge PLCs is a good example of the continued use of PLCs in future production. Edge PLCs need to adapt to the architecture of edge computing by upgrading existing systems, creating new designs, and enhancing security [24]. There have been many studies on how to improve the efficiency of edge PLCs. Liu et al. [25] proposed a fault diagnosis optimization method for edge PLCs based on a random forest approach to handle feature selection problems, effectively improving the performance of fault diagnosis. Stankovski et al. [26] discussed the possibility of applying small PLCs in edge computing and demonstrated it with an example of measuring and monitoring cylinders. JiYuLei et al. [27] proposed a software-defined PLC technology that integrates control functions into edge computing servers to achieve the functionality of edge PLCs. Fu et al. [28] studied the characteristics of data arriving over time in edge PLCs in the industrial internet and applied reinforcement learning methods to optimize memory allocation methods, reducing data loss rates.
However, there is a lack of research on how to efficiently allocate memory in task-intensive and memory-limited edge PLCs. For some cost-sensitive and space-limited industrial applications, edge PLCs have limited memory. Existing work has improved the efficiency of edge PLCs from multiple angles, but its focus has not been on memory allocation schemes. Current schemes mostly use static allocation methods or default dynamic allocation methods for memory allocation in edge PLCs. Therefore, for industrial scenarios where edge PLC memory is limited, a lightweight and efficient memory allocation method is an emerging need.
The main contributions of this paper are as follows:
  • We propose an improved memory allocation strategy where fixed-size blocks are organized in advance into a pool. This strategy ensures real-time memory allocation for real-time tasks while also optimizing memory space utilization.
  • We introduce a memory allocation strategy based on dynamic programming, which models the memory allocation problem for non-real-time tasks as a knapsack problem. We incorporate a special parameter to adjust the item values, aiming to enhance decision-making results and improve memory allocation efficiency.
  • We implement the algorithm based on this strategy and conduct comparative testing under identical conditions against the default allocation method and several mature open-source memory allocators. The experimental results validate the real-time performance and efficiency of this strategy in the edge PLC environment.
The remainder of this paper is arranged as follows: In Section 2, we provide a detailed introduction to memory allocation strategies. In Section 3, we conduct several comparative experiments and verify the effectiveness of these strategies through the experimental results. Finally, in Section 4, we summarize the paper.

2. Materials and Methods

In the industrial production process, the tasks undertaken by edge PLCs can be divided into two categories based on their real-time requirements.
The first category is real-time tasks. These tasks typically require less memory for allocation but have high real-time requirements. Real-time tasks often involve control instructions that are either automatically generated or manually executed, with unpredictable intervals between them. Failure to process these tasks in a timely manner can result in production accidents, which highlights the importance of ensuring their real-time processing. Common real-time tasks include the execution of control commands and fault diagnosis.
The second category is non-real-time tasks. Non-real-time tasks, which primarily involve data processing at the edge, have relatively lower real-time requirements. Due to the large scale of edge data, non-real-time tasks require more memory for allocation. The execution time of non-real-time tasks is directly correlated with the scale of the data and usually cannot be completed within a short period. Examples of common non-real-time tasks include data storage and log recording.
Due to constraints of space and cost, the memory resources in edge PLCs are limited. Therefore, the efficient utilization of these limited memory resources is crucial for improving task processing efficiency. The objectives of memory allocation decisions in edge PLCs can be summarized as follows: for real-time tasks, ensure real-time processing while conserving memory space; for non-real-time tasks, allocate memory efficiently to enhance task processing efficiency. We divide the memory allocation zone into two parts: the real-time zone and the non-real-time zone. These zones are responsible for processing real-time tasks and non-real-time tasks, respectively. When allocating memory, the memory pools used by these two zones are relatively independent.
The memory allocation process based on the source of memory requests is illustrated in Figure 2. The allocation strategies in the real-time zone and the non-real-time zone will be discussed in the following two sections.
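The independence of the two zones can be illustrated with a minimal Python sketch; the class name, byte budgets, and dict-based bookkeeping are our own illustrative assumptions, not the paper's implementation:

```python
# Hypothetical facade for the two-zone layout: each zone draws from its own
# budget, so real-time allocations are never starved by non-real-time ones.
class TwoZoneAllocator:
    def __init__(self, rt_bytes, nrt_bytes):
        # Independent memory budgets for the real-time and non-real-time zones.
        self.zones = {"rt": rt_bytes, "nrt": nrt_bytes}

    def allocate(self, size, real_time):
        zone = "rt" if real_time else "nrt"
        if self.zones[zone] < size:
            return None               # never borrow from the other zone
        self.zones[zone] -= size
        return zone
```

A request is routed purely by its source: a real-time request that exhausts its own zone fails rather than consuming non-real-time memory, and vice versa.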

2.1. Memory Allocation for Real-Time Tasks

The fixed-size allocation algorithm is a common algorithm for allocating memory space to real-time tasks. It divides the memory into several equally sized memory blocks and returns these blocks to fulfill requests when memory allocation is needed. This approach generally ensures timeliness, but it has lower memory utilization. In situations with a high density of real-time tasks, it may fail to allocate memory in a timely manner if memory space is insufficient. To meet the timeliness and memory-efficiency demands of real-time tasks, we have made improvements to the fixed-size allocation algorithm.
The improved fixed-size algorithm introduces a collection of free lists. The free lists contain small memory blocks organized in the form of linked lists, with different free lists responsible for different sizes of memory blocks. Memory blocks within the same free list always have the same size. When a real-time task generates a memory request, it is directly mapped to the most suitable free list according to size and the corresponding memory block is returned. The memory allocation method in the real-time zone is illustrated in Figure 3.
The improved fixed-size allocation algorithm has the following characteristics:
  • Memory blocks can be found through direct mapping. When the memory pool is initialized, a contiguous memory space is first divided into several regions of equal size. These regions are then allocated to the free lists in ascending order according to the size of the memory blocks they handle. After the regions are allocated, further division of small memory blocks occurs within the free lists. This organization ensures that memory blocks of the same size come from a fixed address range, which allows for efficient allocation and deallocation of small memory blocks through direct mapping.
  • The range of the free lists is relatively small. The memory request sizes of real-time tasks in the edge PLCs are typically concentrated within a small range. Therefore, we limit the sizes of the small memory blocks maintained in the free lists to this range, reducing wastage of memory space due to excessively large or small blocks.
  • The memory pool maintains an uninitialized region, referred to as the reserved region. When there are no available memory blocks in the free lists, conventional allocation methods can be used within this space to ensure timeliness during periods of high density real-time tasks.
Algorithm 1 demonstrates the improved fixed-size allocation algorithm. In the algorithm, FreeList_i represents the i-th free list.
Algorithm 1 Improved Fixed-size Allocation Algorithm
  • Input: Memory request for real-time tasks
  • Output: Allocated memory address
1: Map memory request to FreeList_j
2: for each i ≥ j in set of free lists do
3:    if FreeList_i contains memory blocks then
4:       Extract a memory block from FreeList_i
5:       Record the allocated memory address
6:       break
7:    end if
8: end for
9: if Allocation fails in the free lists then
10:   Perform conventional allocation in the reserved region
11:   Record the allocated memory address
12: end if
13: return Allocated memory address
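The algorithm above can be sketched in a few lines of Python. The size classes, pool depth, and string "addresses" are illustrative assumptions; a real edge PLC implementation hands out raw memory from a contiguous pool:

```python
# Minimal sketch of the improved fixed-size allocator (Algorithm 1).
class FixedSizePool:
    BLOCK_SIZES = [16, 32, 64, 128]   # assumed free-list size classes (bytes)

    def __init__(self, blocks_per_list=4, reserve=1024):
        # One free list per size class, pre-divided at initialization so that
        # blocks of one size come from a fixed address range.
        self.free_lists = {s: [f"blk{s}_{i}" for i in range(blocks_per_list)]
                           for s in self.BLOCK_SIZES}
        self.reserved = reserve       # uninitialized reserved region (bytes)

    def allocate(self, size):
        # Map the request to the smallest fitting class, then scan upward
        # through larger classes (the "i >= j" loop of Algorithm 1).
        for s in self.BLOCK_SIZES:
            if s >= size and self.free_lists[s]:
                return self.free_lists[s].pop()
        # Suitable free lists are empty (or the request is oversized):
        # fall back to conventional allocation in the reserved region.
        if self.reserved >= size:
            self.reserved -= size
            return f"reserved_{self.reserved}"
        return None                   # allocation failed
```

A 20-byte request, for example, maps past the 16-byte class and is served from the 32-byte free list; only when every suitable list is empty does the reserved region absorb the request.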

2.2. Memory Allocation for Non-Real-Time Tasks

2.2.1. Problem Description

In the non-real-time zone, we model the memory allocation problem as a special type of knapsack problem, which can be described as follows: Suppose there are multiple items, each with a different value, and several knapsacks. For these items, we need to select a packing strategy that fits them into one or more knapsacks in order to maximize the packing value without exceeding the capacity of the knapsacks. In the memory allocation environment of edge PLCs, this problem can be described as allocating corresponding free memory blocks for multiple memory requests with the aim of achieving maximum allocation value. Table 1 shows the correspondence between the concepts in the knapsack problem and the memory allocation problem.
Specifically, in this problem, the value of an item is not fixed. Instead, it is associated with the knapsack in which it will be placed. The value generated by a memory allocation decision is negatively correlated with the size of the memory fragmentation caused by that decision. The variables involved in the non-real-time zone are shown in Table 2.
Considering a decision to allocate m_i to Req_j, Value_i^j can be represented as:
Value_i^j = MaxMem / (mSize_i - ReqSize_j + 1)   (1)
For an allocation decision, the smaller the fragmentation it produces, the greater the value it generates. An allocation relationship cannot be established in the non-real-time zone when mSize_i is less than ReqSize_j. In this case, we consider the allocation to have no value, i.e., a value of 0. Therefore, the value generated by the decision to allocate m_i to Req_j can be represented as follows:
Value_i^j = 0, if mSize_i < ReqSize_j;
Value_i^j = MaxMem / (mSize_i - ReqSize_j + 1), if mSize_i ≥ ReqSize_j.   (2)
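The piecewise value formula above transcribes directly into code; the `MAX_MEM` constant below is an illustrative assumption standing in for the MaxMem parameter:

```python
# Allocation value of placing a request into a free block, per the
# piecewise formula: 0 if the block is too small, otherwise larger when
# the leftover fragmentation (m_size - req_size) is smaller.
MAX_MEM = 128 * 1024  # assumed total size of the non-real-time zone (bytes)

def allocation_value(m_size, req_size, max_mem=MAX_MEM):
    if m_size < req_size:
        return 0
    return max_mem / (m_size - req_size + 1)
```

A perfect fit (fragmentation 0) yields the maximum value MaxMem, and the value decays as the leftover fragment grows.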
During the process of solving this memory allocation problem using dynamic programming, considering an increasing number of memory requests simultaneously can lead to exponential growth in the required solution space, resulting in the curse of dimensionality. The high cost of the solution space is unaffordable for edge PLCs with limited memory resources. Therefore, this paper only considers scenarios where each allocation involves two memory requests in order to achieve a balance within the limited memory space. The memory allocation in the non-real-time zone is shown in Figure 4.
The following sections will solve the memory allocation problem according to the general process of dynamic programming.

2.2.2. Partitioning Stages

In the problem of dynamic programming for memory allocation, the solution process can be divided into N stages, where N is the total number of available memory blocks. In each stage, we seek the optimal decision for the current stage. Let the current stage be denoted as t. The solution for stage t involves making the best decision to allocate memory space for two memory requests, Req_0 and Req_1, among the first t available memory blocks.

2.2.3. Determining States

We define a state variable S(i, j)_k, which represents the maximum allocation value when the decision status of Req_0 is i and the decision status of Req_1 is j among all available memory blocks with numbers not greater than k. The values of i and j are binary, where 0 indicates that the decision has not yet been made and 1 indicates that it has. Our goal is to determine the decision corresponding to the value of S(i, j)_k.

2.2.4. Establishing State Transition Equations

In stage k, there are four possible decisions:
  • Allocating the available memory block k to Req_1 when the decision for Req_0 has been made.
  • Allocating the available memory block k to Req_0 when the decision for Req_1 has been made.
  • Allocating the available memory block k to both Req_0 and Req_1.
  • Not allocating the available memory block k to either Req_0 or Req_1, and continuing with the previous decision.
When both memory requests are allocated to the same available memory block, we combine the two requests into one by creating a new temporary request with temporary identifier 2, referred to as Req_2. If the decision is made to allocate both memory requests to the same available memory block, a split operation is required and, thus, an additional group of memory header information must be accounted for. The size of Req_2 can be represented as:
ReqSize_2 = ReqSize_0 + ReqSize_1 + HeadSize   (3)
According to the four possible decision scenarios, the state transition equation for the maximum allocation value S(1, 1)_k, when both memory requests have been decided among all available memory blocks with numbers not greater than k, is expressed as:
S(1, 1)_k = max{ S(0, 1)_{k-1} + Value_k^0, S(1, 0)_{k-1} + Value_k^1, Value_k^2, S(1, 1)_{k-1} }   (4)
The maximum allocation value S(1, 0)_k when only Req_0 has been decided among all available memory blocks with numbers not greater than k is as follows:
S(1, 0)_k = max{ Value_k^0, S(1, 0)_{k-1} }   (5)
Similarly, the maximum allocation value S(0, 1)_k when only Req_1 has been decided among all available memory blocks with numbers not greater than k is as follows:
S(0, 1)_k = max{ Value_k^1, S(0, 1)_{k-1} }   (6)

2.2.5. Determining Initial and Boundary Conditions

The initial condition refers to the allocation decisions for Req_0 and Req_1 when only m_0 is considered. Under this condition, S(1, 0)_0 and S(0, 1)_0 are equal to Value_0^0 and Value_0^1, respectively. According to Formulas (3) and (4), the value of S(1, 1)_0 can be expressed as:
S(1, 1)_0 = max{ Value_0^0, Value_0^1, Value_0^2 }   (7)
Regarding the boundary condition, in the knapsack problem, the weight of items stored in the knapsack should not exceed its capacity. Therefore, according to Formula (2), for solutions with weights exceeding the capacity, we consider their value as 0. Conversely, if the value of a particular state is non-zero, it indicates that there is at least one valid allocation solution in that state.

2.2.6. Solving Sequentially

In addition to the value of a particular decision, it is necessary to record the specific decisions during the solving process for reference in the subsequent memory allocation stage. Assuming that the current list of free memory blocks includes n + 1 blocks with IDs ranging from 0 to n, the algorithm for solving the memory allocation strategy using dynamic programming is depicted in Algorithm 2.
Algorithm 2 Dynamic Programming Algorithm for Memory Allocation Problem
  • Input: Req_0, Req_1
  • Output: Dec_0, Dec_1
1: Initialize boundary conditions S(1, 0)_0, S(0, 1)_0, S(1, 1)_0
2: for each k > 0 in Mem do
3:    Calculate Value_k^0, Value_k^1
4:    VAL_0 ← S(0, 1)_{k-1} + Value_k^0
5:    VAL_1 ← S(1, 0)_{k-1} + Value_k^1
6:    if VAL_0 > S(1, 1)_{k-1} or VAL_1 > S(1, 1)_{k-1} then
7:       if VAL_0 ≥ VAL_1 then
8:          S(1, 1)_k ← VAL_0
9:          Dec_0 ← k
10:      else
11:         S(1, 1)_k ← VAL_1
12:         Dec_1 ← k
13:      end if
14:   end if
15:   Calculate Value_k^2
16:   if Value_k^2 > S(1, 1)_k then
17:      S(1, 1)_k ← Value_k^2
18:      Dec_0, Dec_1 ← k
19:   end if
20:   Update S(1, 0)_k, S(0, 1)_k
21: end for
22: return Dec_0, Dec_1
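The solving procedure can be sketched as runnable Python. The constants are illustrative, and the extra bookkeeping of the "partner" decisions (d10/d01) is our own addition so that both chosen block indices can be returned; this is a sketch under those assumptions, not the paper's exact implementation:

```python
# Dynamic programming sketch for allocating two simultaneous requests.
MAX_MEM = 128 * 1024   # assumed non-real-time zone size (bytes)
HEAD_SIZE = 8          # assumed memory-header size added when a block is split

def value(block_size, req_size):
    # Piecewise allocation value: 0 if the block is too small, otherwise
    # inversely proportional to the fragmentation left over.
    if block_size < req_size:
        return 0
    return MAX_MEM / (block_size - req_size + 1)

def dp_allocate(blocks, req0, req1):
    """Return (dec0, dec1), the free-block indices chosen for the two
    requests so that the total allocation value is maximized."""
    req2 = req0 + req1 + HEAD_SIZE   # combined request including the header
    s10 = s01 = s11 = 0              # S(1,0)_k, S(0,1)_k, S(1,1)_k
    d10 = d01 = None                 # block indices behind s10 / s01
    dec0 = dec1 = None
    for k, blk in enumerate(blocks):
        v0, v1, v2 = value(blk, req0), value(blk, req1), value(blk, req2)
        val0 = s01 + v0              # give block k to Req_0, keep Req_1's choice
        val1 = s10 + v1              # give block k to Req_1, keep Req_0's choice
        if val0 > s11 or val1 > s11:
            if val0 >= val1:
                s11, dec0, dec1 = val0, k, d01
            else:
                s11, dec0, dec1 = val1, d10, k
        if v2 > s11:                 # put both requests in block k and split it
            s11, dec0, dec1 = v2, k, k
        if v0 > s10:                 # update the single-request states
            s10, d10 = v0, k
        if v1 > s01:
            s01, d01 = v1, k
    return dec0, dec1
```

For example, with free blocks of sizes [50, 100, 64] and requests of 60 and 48 bytes, the sketch picks the 64-byte block (index 2) for the first request and the 50-byte block (index 0) for the second, leaving the least total fragmentation.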

2.2.7. Memory Compression Factor

In order to further optimize memory allocation decisions in the non-real-time zone, we introduce a parameter called the “memory compression factor” to adjust the decision value.
The memory compression factor is an attribute of memory blocks and comes into effect when computing the value of a particular allocation decision. The value generated by allocating a certain free memory block will be multiplied by its memory compression factor. With the introduction of the memory compression factor, the formula for computing the allocation value changes as follows:
Value_i^j = 0, if mSize_i < ReqSize_j;
Value_i^j = (MaxMem · MCF_i) / (mSize_i - ReqSize_j + 1), if mSize_i ≥ ReqSize_j.   (8)
When a new memory block is created, its memory compression factor is set to the default value. Each time a memory block participates in memory allocation, the value of its memory compression factor is shifted to the right by one bit. Each time a memory block participates in memory compression, its memory compression factor is reset. If a memory block is divided, the newly generated blocks inherit the memory compression factor of the original block.
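The life-cycle rules above can be modeled as follows. The class and method names are hypothetical, and we assume the default factor equals 5, the best-performing value in the tuning experiment reported later in this section:

```python
# Illustrative model of the memory compression factor (MCF) life cycle.
DEFAULT_MCF = 5  # assumed default; matches the tuned value in this paper

class Block:
    def __init__(self, size, mcf=DEFAULT_MCF):
        self.size = size
        self.mcf = mcf                # set to the default on creation

    def on_allocated(self):
        self.mcf >>= 1                # right-shift by one bit per allocation

    def on_compressed(self):
        self.mcf = DEFAULT_MCF        # reset after taking part in compression

    def split(self, first_size):
        # Newly generated blocks inherit the original block's factor.
        return (Block(first_size, self.mcf),
                Block(self.size - first_size, self.mcf))
```

The right shift quickly drives a frequently allocated block's factor toward zero, lowering its decision value and nudging the allocator to leave it available for compression instead.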
The introduction of the memory compression factor aims to increase the chances of memory blocks participating in memory compression, thereby reducing external fragmentation and improving memory utilization. The performance of allocation strategies using different memory compression factors for a set of non-real-time tasks is shown in Figure 5. All data figures in this paper were plotted with the Python Matplotlib library.
It can be observed that, compared to not introducing a memory compression factor (i.e., a memory compression factor of 1), introducing a memory compression factor within a certain range has a significant effect on reducing the average fragmentation rate and reducing the required execution time. When the memory compression factor is 5, the shortest execution time for processing the task set is achieved, which is a 6.2% improvement compared to not introducing the factor. From Figure 5, it can be seen that, as the memory compression factor increases, the average fragmentation rate fluctuates less within a certain range, and the execution time for processing the task set decreases significantly at the beginning but increases significantly when the memory compression factor exceeds 25.
Experimental results show that a low average fragmentation rate does not always mean high processing efficiency. It can be observed that, under similar average memory fragmentation rates, there are still significant differences in the required execution time for processing the task set with different memory compression factors. To explain this phenomenon, we sampled the memory utilization rate at intervals of every 100 allocations using the same task set. We observed the sampling results of four memory compression factors with similar average memory fragmentation rates (5 (18.4%), 9 (18.7%), 22 (17.7%), and 27 (17.4%)), as shown in Figure 6.
It can be observed that, when the memory compression factor is 5, the fluctuation of memory utilization rate is smaller at different time intervals. In the sampling results, the memory utilization rate is mainly concentrated between 10% and 25%, and the probability of sampling points beyond this range is 21%. However, under other memory compression factors, the fluctuation in memory utilization rate increases significantly. When the memory compression factors are 9, 22, and 27, the probabilities of sampling points beyond this range are 41%, 44%, and 40%, respectively.
In edge PLCs, the time it takes for computing tasks to occupy allocated memory space is relatively long. Large fluctuations in the memory utilization rate can lead to frequent attempts by the memory allocation program to allocate memory, resulting in wasted computational resources and ultimately increasing the required execution time for processing the task set. From the experimental results, we can conclude that a memory compression factor of 5 achieves the best value correction effect, significantly reducing the required execution time for processing the task set compared to other values of the memory compression factor. Therefore, in the following experiments, we will set the memory compression factor to 5 in the allocation decisions.

3. Results

In order to evaluate the effectiveness of memory allocation strategies in edge PLCs, we tested the memory allocation performance of the real-time and non-real-time zone under this strategy using a synthetic dataset. This dataset was generated based on the characteristics of real-time tasks from the article [29] as well as production task flows in industrial environments. We refer to the memory allocation algorithm implemented based on the edge PLC’s memory allocation strategy as EPmalloc (malloc in edge PLC). The development and testing of the algorithm were conducted on the CentOS 7 distribution of the Linux operating system, with the code written using the Visual Studio Code 1.81.1 software.

3.1. Real-Time Zone Experiment

For the real-time part, we compare different approaches using the following two metrics:
  • Maximum response latency: This represents the longest time required for a memory request to receive a response. A lower value indicates higher reliability of the allocation method.
  • Average response latency: This represents the average response time for all memory requests. A lower value indicates better real-time performance of the allocation method.
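The two metrics reduce to simple computations over per-request latency samples; the microsecond unit below is an illustrative assumption:

```python
# Worst-case and mean response time over a series of latency samples.
def max_response_latency(latencies_us):
    return max(latencies_us)          # reliability indicator

def avg_response_latency(latencies_us):
    return sum(latencies_us) / len(latencies_us)  # real-time indicator
```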
Using EPmalloc, we process a set of memory requests from a group of real-time tasks with short intervals and compare it with the default allocation method to evaluate its performance. The tests were conducted in a 16 KB memory environment. The results are shown in Figure 7.
It can be observed that, as the number of real-time tasks increases, EPmalloc significantly outperforms the system's default memory allocation method in both maximum and average response latency. Specifically, compared to the default method, EPmalloc reduces the maximum response latency by 41.2% and the average response latency by 13.7%. Therefore, EPmalloc is superior to the system's default method in terms of real-time performance and reliability. These results are displayed in Table 3.

3.2. Non-Real-Time Zone Experiment

For the non-real-time part, we use the total execution time of task processing as the metric, which is also the most intuitive indicator of memory allocation efficiency. We compare EPmalloc with three widely used open-source memory allocators: mimalloc [30], TCMalloc [31], and the default malloc from the Glibc runtime library [32]. Mimalloc is a compact general-purpose allocator developed by Microsoft, TCMalloc is a thread-caching allocator optimized for multithreaded workloads developed by Google, and Glibc's default malloc is a mature memory allocator based on ptmalloc. By comparing the time they take to process the same set of tasks in the same environment, we evaluate the performance of EPmalloc in handling memory requests from non-real-time tasks. The non-real-time task set was tested in an environment with 128 KB of memory. The experimental results are shown in Figure 8. To highlight the differences between allocation methods, the time-consumption curves are plotted on a logarithmic scale.
The experimental results show the differences between EPmalloc and the other three memory allocators in total execution time and in execution time at several progress points. As the number of completed tasks increases, the performance gap between EPmalloc and the other allocators becomes increasingly evident. On the non-real-time task set, EPmalloc reduces total execution time by 17.0% compared to mimalloc, 13.4% compared to TCMalloc, and 8.1% compared to Glibc's default malloc. Table 4 presents the time taken by the allocation algorithms at three task processing milestones.
It can be observed that, as the progress of task processing advances, the performance advantage of EPmalloc gradually expands, proving its superior memory allocation efficiency in edge PLC environments.
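EPmalloc's non-real-time path treats each free memory block as a knapsack and the pending memory requests as items, as summarized in Table 1. The sketch below shows the core 0/1 knapsack recurrence in C; it is our illustration, omits the paper's value-correction step (the memory compression factor), and all names and bounds are hypothetical.

```c
#include <stddef.h>

#define MAX_CAP 1024  /* largest free-block size handled by this sketch */

/* 0/1 knapsack over a single free memory block: choose a subset of the
 * n pending requests (item weight = request size, item value = value of
 * the allocation decision) that maximizes total value without exceeding
 * the block's capacity. dp[c] holds the best value achievable with
 * remaining capacity c. Runs in O(n * capacity) time. */
int best_allocation_value(const int *req_size, const int *value,
                          int n, int capacity) {
    int dp[MAX_CAP + 1];
    if (capacity > MAX_CAP)
        capacity = MAX_CAP;
    for (int c = 0; c <= capacity; c++)
        dp[c] = 0;
    for (int i = 0; i < n; i++) {
        /* iterate capacity downwards so each request is used at most once */
        for (int c = capacity; c >= req_size[i]; c--) {
            int with = dp[c - req_size[i]] + value[i];
            if (with > dp[c])
                dp[c] = with;
        }
    }
    return dp[capacity];
}
```

Since the DP table has one entry per unit of capacity, decision time grows with both the number of pending requests and the size of the free blocks, which is consistent with the degradation observed later in larger-memory environments.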
To test the performance of EPmalloc under different memory sizes, we compared the time it took for EPmalloc and the three aforementioned allocators to process the same set of tasks in environments with different amounts of memory. The results are shown in Figure 9.
The experimental results show that, in environments with no more than 128 KB of memory, EPmalloc has a significant advantage over the other allocators. As the available memory increases, the performance of EPmalloc does not improve significantly, whereas the other allocators generally process the task set faster with more memory. Because the decision-making time of the dynamic programming strategy grows rapidly with the number of free memory blocks that must be considered, the performance of EPmalloc may actually degrade in environments with larger memory. EPmalloc therefore performs best in edge PLCs where memory is limited.
Finally, to demonstrate the applicability of the EPmalloc algorithm, we tested it on a small edge PLC that is widely available on the market. The appearance of the edge PLC is shown in Figure 10.
The communication protocol used by the edge PLC is ModbusTCP, with a memory space of 128 KB. The internal program is written in the C language. We tested its performance by comparing the number of tasks completed over time when using EPmalloc and the default method, respectively. The input for the test came from the same set of tasks. The results of the test are shown in Table 5.
The test results show that the algorithm performs well when running on the device, demonstrating its applicability.

4. Conclusions

In this paper, we conducted a series of studies on how edge PLCs can efficiently allocate memory under memory-limited conditions. We have achieved the following research results:
  • We proposed an improved fixed-size allocation algorithm for memory allocation of real-time tasks in edge PLCs. The results of comparative experiments showed that, compared to the default method, the improved method reduced the maximum response latency by 41.2% and the average response latency by 13.7%.
  • We proposed a dynamic programming-based memory allocation strategy for memory allocation of non-real-time tasks in edge PLCs. The results of comparative experiments showed that, compared to three open-source memory allocators, the allocation strategy improved task processing efficiency by at least 8.1% and up to 17.0%.
This research still has some limitations. Due to the curse of dimensionality in dynamic programming, the effectiveness of memory allocation in the non-real-time zone decreases significantly when more than three memory requests are considered simultaneously. In addition, the memory allocation strategies proposed in this paper do not perform well in environments with relatively large available memory.
There are also possible directions for improvement of this research. For example, by sharing a memory pool between the real-time and non-real-time zones and dynamically adjusting the size that each can use according to memory pressure, memory utilization can be improved. We will explore the possibility of improvement in subsequent research and hope that our research can provide inspiration to other researchers.

Author Contributions

Conceptualization, G.C. and R.S.; methodology, Z.W.; software, Z.W.; validation, W.D.; formal analysis, Z.W.; investigation, G.C.; resources, G.C.; data curation, R.S.; writing—original draft preparation, Z.W.; writing—review and editing, W.D.; visualization, Z.W.; supervision, R.S.; project administration, G.C.; funding acquisition, G.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science and Technology SMEs Innovation Capacity Enhancement Project in Shandong Province (grant numbers 2021TSGC1089 and 2023TSGC0587), the “20 New Colleges and Universities” Funding Project in Jinan (grant number 2021GXRC074), and the Major Science and Technology Innovation Project of Shandong Province (grant number 2018CXGC0601).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request through email to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ahmad, S.; Shakeel, I.; Mehfuz, S.; Ahmad, J. Deep learning models for cloud, edge, fog, and IoT computing paradigms: Survey, recent advances, and future directions. Comput. Sci. Rev. 2023, 49, 100568. [Google Scholar] [CrossRef]
  2. Wu, Y.; Dai, H.N.; Wang, H. Convergence of Blockchain and Edge Computing for Secure and Scalable IIoT Critical Infrastructures in Industry 4.0. IEEE Internet Things J. 2021, 8, 2300–2317. [Google Scholar] [CrossRef]
  3. Yao, A.; Li, G.; Li, X.; Jiang, F.; Xu, J.; Liu, X. Differential privacy in edge computing-based smart city applications: Security issues, solutions and future directions. Array 2023, 19, 100293. [Google Scholar] [CrossRef]
  4. Sahni, Y.; Cao, J.; Zhang, S.; Yang, L. Edge Mesh: A New Paradigm to Enable Distributed Intelligence in Internet of Things. IEEE Access 2017, 5, 16441–16458. [Google Scholar] [CrossRef]
  5. Liu, F.; Tang, G.; Li, Y.; Cai, Z.; Zhang, X.; Zhou, T. A Survey on Edge Computing Systems and Tools. Proc. IEEE 2019, 107, 1537–1562. [Google Scholar] [CrossRef]
  6. Badidi, E. On workflow scheduling for latency-sensitive edge computing applications. Procedia Comput. Sci. 2023, 220, 958–963. [Google Scholar] [CrossRef]
  7. Shirazi, S.N.; Gouglidis, A.; Farshad, A.; Hutchison, D. The extended cloud: Review and analysis of mobile edge computing and fog from a security and resilience perspective. IEEE J. Sel. Areas Commun. 2017, 35, 2586–2595. [Google Scholar] [CrossRef]
  8. Abbas, N.; Zhang, Y.; Taherkordi, A.; Skeie, T. Mobile Edge Computing: A Survey. IEEE Internet Things J. 2018, 5, 450–465. [Google Scholar] [CrossRef]
  9. Bonomi, F.; Milito, R.; Zhu, J.; Addepalli, S. Fog Computing and Its Role in the Internet of Things. In Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing; Association for Computing Machinery: New York, NY, USA, 2012; MCC ‘12; pp. 13–16. [Google Scholar] [CrossRef]
  10. Hortelano, D.; de Miguel, I.; Barroso, R.J.D.; Aguado, J.C.; Merayo, N.; Ruiz, L.; Asensio, A.; Masip-Bruin, X.; Fernández, P.; Lorenzo, R.M.; et al. A comprehensive survey on reinforcement-learning-based computation offloading techniques in Edge Computing Systems. J. Netw. Comput. Appl. 2023, 216, 103669. [Google Scholar] [CrossRef]
  11. Peng, Y.; Liu, P.; Fu, T. Performance analysis of edge-PLCs enabled industrial internet of things. Peer-to-Peer Netw. Appl. 2020, 13, 1830–1838. [Google Scholar] [CrossRef]
  12. Stankovski, S.; Ostojić, G.; Baranovski, I.; Tegeltija, S.; Smirnov, V. Robust automation with PLC/PAC and edge controllers. IFAC-Pap. Online 2022, 55, 316–321. [Google Scholar] [CrossRef]
  13. Mandić, Z.; Stankovski, S.; Ostojić, G.; Popović, B. Potential of Edge Computing PLCs in Industrial Automation. In Proceedings of the 2022 21st International Symposium INFOTEH-JAHORINA (INFOTEH), East Sarajevo, Bosnia and Herzegovina, 16–18 March 2022; pp. 1–5. [Google Scholar] [CrossRef]
  14. Pi, A.; Zhao, J.; Wang, S.; Zhou, X. Memory at your service: Fast memory allocation for latency-critical services. In Proceedings of the 22nd International Middleware Conference, Quebec City, QC, Canada, 6–10 December 2021; pp. 185–197. [Google Scholar]
  15. Liu, H.; Liu, R.; Liao, X.; Jin, H.; He, B.; Zhang, Y. Object-Level Memory Allocation and Migration in Hybrid Memory Systems. IEEE Trans. Comput. 2020, 69, 1401–1413. [Google Scholar] [CrossRef]
  16. Joshi, A.; Munisamy, S.D. Enhancement of Cloud Performance Metrics Using Dynamic Degree Memory Balanced Allocation Algorithm. Indones. J. Electr. Eng. Comput. Sci. 2021, 22, 1697–1707. [Google Scholar] [CrossRef]
  17. Lu, Y.; Liu, W.; Wu, C.; Wang, J.; Gao, X.; Li, J.; Guo, M. Spring Buddy: A Self-Adaptive Elastic Memory Management Scheme for Efficient Concurrent Allocation/Deallocation in Cloud Computing Systems. In Proceedings of the 2021 IEEE 27th International Conference on Parallel and Distributed Systems (ICPADS), Beijing, China, 14–16 December 2021; pp. 402–409. [Google Scholar] [CrossRef]
  18. Aghilinasab, H.; Ali, W.; Yun, H.; Pellizzoni, R. Dynamic Memory Bandwidth Allocation for Real-Time GPU-Based SoC Platforms. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2020, 39, 3348–3360. [Google Scholar] [CrossRef]
  19. Xu, X.; Li, G.; Zhang, H. Optimal Energy Management System Design Based on Dynamic Programming for Battery Electric Vehicles. IFAC-PapersOnLine 2020, 53, 634–637. [Google Scholar] [CrossRef]
  20. Koch, S.; Klein, R. Route-based approximate dynamic programming for dynamic pricing in attended home delivery. Eur. J. Oper. Res. 2020, 287, 633–652. [Google Scholar] [CrossRef]
  21. Wang, J.; Kang, L.; Liu, Y. Optimal scheduling for electric bus fleets based on dynamic programming approach by considering battery capacity fade. Renew. Sustain. Energy Rev. 2020, 130, 109978. [Google Scholar] [CrossRef]
  22. Ding, Y.; Wang, Q.; Tian, Z.; Lyu, Y.; Li, F.; Yan, Z.; Xia, X. A graph-theory-based dynamic programming planning method for distributed energy system planning: Campus area as a case study. Appl. Energy 2023, 329, 120258. [Google Scholar] [CrossRef]
  23. Langmann, R.; Stiller, M. The PLC as a Smart Service in Industry 4.0 Production Systems. Appl. Sci. 2019, 9, 3815. [Google Scholar] [CrossRef]
  24. Halterman, D. Edge controller, PLC, or PAC? Control Eng. 2020, 67, 34. [Google Scholar]
  25. Liu, P.; Zhang, Y.; Wu, H.; Fu, T. Optimization of Edge-PLC-Based Fault Diagnosis with Random Forest in Industrial Internet of Things. IEEE Internet Things J. 2020, 7, 9664–9674. [Google Scholar] [CrossRef]
  26. Stankovski, S.; Ostojić, G.; Šaponjić, M.; Stanojević, M.; Babić, M. Using micro/mini PLC/PAC in the Edge Computing Architecture. In Proceedings of the 2020 19th International Symposium INFOTEH-JAHORINA (INFOTEH), East Sarajevo, Bosnia and Herzegovina, 18–20 March 2020; pp. 1–4. [Google Scholar] [CrossRef]
  27. Dai, J.; Hu, J.; Shu, K.; Zhou, H. Field gas supply station monitoring based on software-defined PLC edge server. In Proceedings of the International Conference on Intelligent Systems, Communications, and Computer Networks (ISCCN 2023), Changsha, China, 24–26 February 2023; Liu, X., Wang, L., Eds.; International Society for Optics and Photonics; SPIE: Bellingham, WA, USA, 2023; Volume 12702, p. 127022V. [Google Scholar] [CrossRef]
  28. Fu, T.; Peng, Y.; Liu, P.; Lao, H.; Wan, S. Distributed reinforcement learning-based memory allocation for edge-PLCs in industrial IoT. J. Cloud Comput. 2022, 11, 73. [Google Scholar] [CrossRef]
  29. Wu, H.; Yan, Y.; Sun, D.; Wu, H.; Liu, P. Multibuffers Multiobjects Optimal Matching Scheme for Edge Devices in IIoT. IEEE Internet Things J. 2021, 8, 11514–11525. [Google Scholar] [CrossRef]
  30. Microsoft. Mimalloc. Available online: https://microsoft.github.io/mimalloc/ (accessed on 21 July 2023).
  31. Google. TCMalloc: Thread-Caching Malloc. Available online: https://gperftools.github.io/gperftools/tcmalloc.html (accessed on 18 July 2023).
  32. Free Software Foundation. The GNU C Library (Glibc). Available online: https://www.gnu.org/software/libc/ (accessed on 17 July 2023).
Figure 1. Edge PLC in edge computing architecture.
Figure 2. Decision-making process of memory allocation.
Figure 3. Memory allocation in the real-time zone.
Figure 4. Memory allocation in the non-real-time zone.
Figure 5. Processing time of task set and average memory fragmentation rate under different memory compression factors.
Figure 6. Memory utilization at different memory compression factors. (a) Memory compression factor of 5; (b) memory compression factor of 9; (c) memory compression factor of 22; (d) memory compression factor of 27.
Figure 7. Comparison between EPmalloc and the default system method. (a) Comparison of maximum response delay; (b) comparison of average response delay.
Figure 8. Comparison of EPmalloc with three memory allocators.
Figure 9. Performance comparison in environments with different memory sizes.
Figure 10. The edge PLC used for experimentation.
Table 1. Correspondence between the knapsack problem and the memory allocation problem.

Concepts in Knapsack Problem | Concepts in Memory Allocation
Knapsack | Free memory block
Knapsack capacity | Size of free memory block
Item | Memory request
Item weight | Size of memory request
Item value | Value of the allocation decision for the item
Item selection scheme | Allocation decision for free memory blocks
Table 2. Variables in the non-real-time zone.

Variable | Meaning
Mem | Set of free memory blocks
m_i | Free memory block i
mSize_i | Size of free memory block i
Req_i | Memory request i
ReqSize_i | Size of memory request i
Dec_i | Allocation decision for memory request i
MaxMem | Maximum available memory size in edge PLC
MCF_i | Memory compression factor of free memory block i
Value_ij | Value of allocating free memory block i to memory request j
HeadSize | Size of memory header information
S(i,j)_k | State variable for memory request decision
Table 3. Experimental results in the real-time zone (unit: μs).

Memory Allocation Method | Max Response Delay | Average Response Delay
Default System Method | 2406 | 189
EPmalloc | 1413 | 163
Table 4. The time taken by several allocation algorithms at three task processing milestones (unit: ms).

Number of Tasks Completed | Glibc | TCMalloc | Mimalloc | EPmalloc
25,000 | 5.94 × 10^5 | 6.54 × 10^5 | 6.64 × 10^5 | 5.58 × 10^5
50,000 | 1.19 × 10^6 | 1.32 × 10^6 | 1.33 × 10^6 | 1.11 × 10^6
100,000 | 2.40 × 10^6 | 2.55 × 10^6 | 2.66 × 10^6 | 2.20 × 10^6
Table 5. Number of tasks completed as elapsed time increases (unit of time: s).

Elapsed Time | Default System Method | EPmalloc
200 | 7026 | 8522
500 | 18,653 | 21,230
1000 | 38,264 | 42,488
2000 | 70,012 | 84,272
4000 | 151,739 | 168,422
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Cheng, G.; Wan, Z.; Ding, W.; Sun, R. Memory Allocation Strategy in Edge Programmable Logic Controllers Based on Dynamic Programming and Fixed-Size Allocation. Appl. Sci. 2023, 13, 10297. https://doi.org/10.3390/app131810297
