Article

A Petri Net-Based Algorithm for Solving the One-Dimensional Cutting Stock Problem

by Irving Barragan-Vite *, Joselito Medina-Marin, Norberto Hernandez-Romero and Gustavo Erick Anaya-Fuentes
Área Académica de Ingeniería y Arquitectura, Instituto de Ciencias Básicas e Ingeniería, Universidad Autónoma del Estado de Hidalgo, Carretera Pachuca-Tulancingo km. 4.5, Ciudad del Conocimiento, Mineral de la Reforma 42184, Mexico
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(18), 8172; https://doi.org/10.3390/app14188172
Submission received: 28 July 2024 / Revised: 31 August 2024 / Accepted: 5 September 2024 / Published: 11 September 2024
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract:
This paper addresses the one-dimensional cutting stock problem, focusing on minimizing total stock usage. Most procedures that deal with this problem rely on linear programming methods, heuristics, metaheuristics, and hybridizations. These methods face drawbacks such as handling only low-complexity instances or requiring extensive parameter tuning. To address these limitations, we develop a Petri-net model to construct cutting patterns. Using the filtered beam search algorithm, the reachability tree of the Petri net is constructed level by level from its root node to find the best solution, pruning the nodes that worsen the solution as the search progresses through the tree. Our algorithm is compared with the Least Lost Algorithm and the Generate and Solve algorithm over five datasets of instances. These algorithms share some characteristics with ours and have proven to be effective and efficient. Experimental results demonstrate that our algorithm effectively finds optimal and near-optimal solutions for both low- and high-complexity instances. These findings confirm that Petri nets are suitable for modeling and solving the one-dimensional cutting stock problem.

1. Introduction

Cutting stock problems are combinatorial problems involving the creation of small objects from large objects according to a cutting plan made of cutting patterns. The larger the number of small objects, the greater the number of possible cutting patterns, and hence these problems are classified as NP-hard optimization problems [1]. In this study, we focus on the single stock-size cutting problem according to Wascher’s classification [2] for the one-dimensional case, or one-dimensional cutting stock problem (1D-CSP), as it is best known in most of the literature. The 1D-CSP arises in industries manufacturing goods from paper, metal, glass, and wood, among others. The goal is the reduction in material usage and, therefore, the minimization of the related costs. Apart from cost reduction, industries such as the building industry are concerned with reducing gas emissions due to metal and concrete usage, as pointed out in [3]. Some other studies also address the problem by considering mathematical models that include the reuse of leftovers, as in [4]. Reference [5] presents some LP formulations proposed in the early stages of addressing the 1D-CSP.
In the literature, most mathematical models for the 1D-CSP consider objective functions with only one term, either the total waste or the total number of stocks used. However, ref. [6] demonstrated the effectiveness of using a function that simultaneously involves the total waste and the total number of stocks used. In this study, we consider the objective function of the mathematical model (1)–(3), based on the one used in [7] for the 1D-CSP without contiguity. The objective function to be minimized accounts for both the minimization of waste and the minimization of the number of stocks with waste.
\min W = \frac{1}{k+1}\left(\sum_{j=1}^{k}\frac{w_j}{L_j} + \frac{\sum_{j=1}^{k}V_j}{k}\right) \quad (1)
subject to
\sum_{j=1}^{k} x_{ij} = d_i, \quad i = 1, 2, \ldots, m \quad (2)
x_{ij} \geq 0 \ \text{and integer}. \quad (3)
In model (1)–(3), $L_j$ is the length of stock $j$, $m$ is the number of different items, and $k$ is the total number of stocks used. $d_i$ represents the demand or total number of orders of item $i$, while $l_i$ is the length of item $i$, and $x_{ij}$ is the number of orders of item $i$ in stock $j$. $w_j$ and $V_j$ are defined in Equations (4) and (5), respectively, where $w_j$ represents the wastage of stock $j$.
w_j = L_j - \sum_{i=1}^{m} x_{ij} l_i, \quad j = 1, 2, \ldots, k \quad (4)
V_j = \begin{cases} 1 & \text{if } w_j > 0 \\ 0 & \text{otherwise} \end{cases}, \quad j = 1, 2, \ldots, k \quad (5)
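As an illustration of how the objective in Equation (1) scores a candidate cutting plan, the following sketch (not the authors' code; the plan representation and function name are our own, and a single stock length L is assumed) computes $w_j$, $V_j$, and $W$ for a plan given as a list of patterns:

```python
# Hypothetical helper: evaluate Equation (1) for a plan represented as a
# list of patterns, each pattern being the list of item lengths cut from
# one stock of length L (single stock size assumed).
def evaluate_plan(plan, L):
    k = len(plan)
    waste = [L - sum(pattern) for pattern in plan]        # w_j, Equation (4)
    if any(w < 0 for w in waste):
        raise ValueError("a pattern exceeds the stock length")
    V = [1 if w > 0 else 0 for w in waste]                # V_j, Equation (5)
    return (sum(w / L for w in waste) + sum(V) / k) / (k + 1)

# Example: stock length 10; the second and third stocks carry waste.
print(evaluate_plan([[4, 6], [7, 2], [5, 3]], L=10))
```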
Most approaches to solving the 1D-CSP are based on linear programming (LP) methods, with the column generation technique being one of the most outstanding ones. Column generation and other techniques focus on generating and reducing the number of cutting patterns. In the 1D-CSP, a cutting pattern is an ordered sequence of items cut from a large object or stock. One-cut models and arc-flow models are other paradigms used to address the 1D-CSP, as solutions to the 1D-CSP can be constructed as sequences of items. This sequential approach to constructing the 1D-CSP solutions motivates our proposal of using ordinary Petri nets (PN) to model the solution construction. We take advantage of the reachability tree to find the best solution through the implementation of a filtered beam search algorithm (FBS). Beam search algorithms are a type of any-time search algorithm, suitable for optimization problems where solutions can be constructed sequentially and represented as a tree structure. Unlike other any-time search methods, beam search algorithms do not perform backtracking. Only the most promising nodes at each level are retained to continue searching for the best path from the root node to a desired final node, while non-promising nodes are pruned permanently. The FBS is an improvement to the classical beam search algorithm. At each level, it first filters the nodes through a local evaluation and then performs a global evaluation among the filtered nodes to retain the best ones for further investigation.
Even though the main drawback of PN is state explosion, we do not generate the complete reachability tree. Instead, we generate promising states from the initial state that lead to the desired final state with minimum waste, according to the FBS, using the objective function (1) to evaluate the nodes. The initial state or marking of the PN is where no items have been taken to form a solution, and the final state or marking is where all items have been taken to form a solution. At each level of the tree, partial solutions are evaluated locally, and the best ones are then evaluated globally. Nodes with the best partial solutions are kept for further investigation. The advantages of using PN are twofold: they are useful for validating solutions when the PN is deadlock-free, and they can be combined with graph search methods to solve optimization problems. Some of the problems addressed under the PN paradigm include job and flow shop scheduling problems. In these cases, PN and graph search methods like A* algorithms [8,9,10] and beam search algorithms [11,12,13] have been used to obtain the best scheduling solutions. However, to the best of our knowledge, PN has not been used to solve cutting problems. Based on this, the issues associated with heuristic, metaheuristic, LP, and even hybridized methods have motivated our study to explore PN as an approach to address the 1D-CSP. Our experimental results show that PN is suitable for modeling pattern generation and solving the 1D-CSP, with acceptable performance in both solution quality and computational time.

2. Literature Review

It is widely considered that Kantorovich first addressed the 1D-CSP in [14] providing an LP formulation of the problem and using the multipliers method to solve it. Later, refs. [15,16] used the column generation technique to solve the integer linear programming formulation by addressing an associated knapsack problem. Following the introduction of the column generation technique, several procedures were developed to improve its computational performance using branch-and-price algorithms [17,18,19,20]. Other algorithms have focused on constructing and reducing the number of patterns [21,22,23,24,25]. Alternative approaches to modeling and solving the 1D-CSP include the arc-flow and one-cut formulations. The arc-flow formulation, a graph-based approach, was introduced in [26]. Recently, [27] proposed and compared arc-flow formulations based on other algorithms. One-cut formulations have received little attention since their use in [28]. Moreover, [29] proved the equivalence among pattern-based formulations, such as that of Gilmore & Gomory [15,16], and the arc-flow and one-cut formulations. [30] used dynamic programming to solve the 1D-CSP, and [31] showed the relationship between arc-flow formulations and dynamic programming in solving optimization problems. Additionally, a comprehensive survey of LP methods used to solve the 1D-CSP up to 2015 can be found in [32].
LP methods are restricted to instances of a small number of items, and to overcome this limitation, many researchers have proposed metaheuristic methods like genetic algorithms. However, the use of genetic algorithms faces the difficulty of finding the best way to encode the solutions or cutting patterns, as pointed out in [33,34,35,36]. Evolutionary programming was used in [7,37] to address the 1D-CSP with and without contiguity. Additionally, ref. [38] concluded that evolutionary programming performs better than genetic algorithms for solving the 1D-CSP. Swarm intelligence algorithms, such as particle swarm optimization algorithms [39,40,41,42] and ant colony optimization algorithms [43,44,45], have also been used to solve the 1D-CSP. The main issues with these two algorithms are parameter tuning and the need for hybridizations to avoid premature convergence and control the search space. Additionally, discretization procedures are required since these algorithms are better suited for continuous problems. Other metaheuristic algorithms for solving the 1D-CSP include tabu search, simulated annealing [46], and iterative local search [47,48]. More recently, ref. [49] introduced the least-lost algorithm to simultaneously minimize waste and stocks with waste in the 1D-CSP, showing better results than the evolutionary programming algorithm of [7]. Ref. [50] compares a Generate and Solve algorithm to the column generation technique. The Generate and Solve algorithm uses a reduction technique based on the one presented in [51] to deal with large instances of the 1D-CSP. Although the algorithm is not better than the column generation technique, it can solve larger instances in acceptable computational time with quasi-optimal solutions. Ref. [52] introduces a deep reinforcement learning algorithm to minimize the number of stocks and maximize the length of leftovers. The algorithm performs well in terms of computational time and handling large instances.
The algorithm we propose in this paper addresses some of the limitations of the methods described above. Unlike LP methods, our algorithm is capable of handling large instances, as shown in Section 6 and Section 7, in acceptable computational time. It only needs the instance information to construct the PN model and the tuning of two parameters for the filtered beam search algorithm, while most metaheuristic algorithms and hybridizations require more parameters to address large instances of the 1D-CSP. Additionally, our algorithm yields optimal solutions for the majority of small instances and near-optimal solutions for medium and large instances without the need to implement complex procedures to reduce the instances or to select the promising solutions, as the Generate and Solve method, the one-cut and arc-flow models, and other similar methods do.

3. Basic Concepts

In this section, we present the basic concepts of ordinary PN and the beam search algorithm necessary to introduce our algorithm. However, PN theory is extensive, so readers are referred to [53,54] for more information on the properties and variants of PN.

3.1. Petri Nets

PN was introduced by Carl A. Petri in 1962 as part of his doctoral dissertation. Since then, many researchers have studied PN theory and its applications. Additionally, other types of PN, such as timed PN, colored PN, and stochastic PN, have been developed to enhance their modeling power. In this paper, we refer to ordinary PN.
An ordinary PN, or simply PN, is a bipartite graph consisting of two sets of nodes: places $P = \{p_1, p_2, \ldots, p_r\}$ and transitions $T = \{t_1, t_2, \ldots, t_s\}$, such that $P \cap T = \emptyset$. These nodes are connected by a set of directed arcs $F \subseteq (P \times T) \cup (T \times P)$. Places may represent activities or conditions in systems modeling, whereas transitions represent the occurrence of a condition or the execution of an activity [53]. Figure 1 shows a typical example of a PN model, where circles represent places and rectangles represent transitions. Each arc carries a number or arc weight, defined by the function $W: F \to \mathbb{N}^+$, which helps simulate the system's dynamics. Moreover, a particular state of the system is represented by a distribution of tokens (black dots) within the places, resulting in the marking of the PN. Formally, a marking is defined by the function $M: P \to \mathbb{N}$. A marked place means that the corresponding activity or condition is being executed.
The dynamics or evolution of PN markings involves removing and adding tokens when a firing occurs. Let ${}^{\bullet}t$ and $t^{\bullet}$ denote the sets of input and output places of transition $t$. If $M(p) \geq W(p,t)$ for each $p \in {}^{\bullet}t$ at some marking $M$, then transition $t$ is enabled and can be fired. When transition $t$ is fired, $W(p,t)$ tokens are removed from each $p \in {}^{\bullet}t$, and $W(t,p)$ tokens are added to each output place $p \in t^{\bullet}$. The dynamics of a PN can also be described by matrix operations using Equation (6), known as the state equation, where $M_k$ is an $r \times 1$ vector representing the current state or marking of the PN, and $M_0$ is the initial marking or state of the PN. $I$ is the incidence matrix of size $r \times s$, given by Equation (7), where $I^+$ and $I^-$ are the input and output incidence matrices, respectively.
M_{k+1} = M_k + I u \quad (6)
I = I^+ - I^- \quad (7)
Each entry of $I^+$ corresponds to the weights $W(t,p)$, while in $I^-$ the entries correspond to the weights $W(p,t)$, as shown in Equation (8) for the PN of Figure 1. Finally, $u$ is the firing vector of size $s \times 1$, where one entry is 1 (indicating the transition to be fired at marking $M_k$) and the remaining entries are zero. Typically, from the set of enabled transitions at a marking $M$, one transition is fired at a time to explore the possible states that can be reached from that marking. This is represented by the notation $M \xrightarrow{t} M'$, where $t$ is the fired transition and $M'$ is the marking reached from $M$.
I = I^+ - I^- = \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 2 \\ 0 & 1 \end{bmatrix} - \begin{bmatrix} 2 & 0 \\ 0 & 1 \\ 1 & 0 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} -2 & 0 \\ 1 & -1 \\ -1 & 2 \\ 0 & 1 \end{bmatrix} \quad (8)
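To make the state equation concrete, the following sketch (our own illustration, not code from the paper) encodes the matrices of Equation (8) and fires an enabled transition with Equation (6); the example marking is hypothetical, since Figure 1 is not reproduced here.

```python
import numpy as np

I_plus = np.array([[0, 0], [1, 0], [0, 2], [0, 1]])    # entries W(t, p)
I_minus = np.array([[2, 0], [0, 1], [1, 0], [0, 0]])   # entries W(p, t)
I = I_plus - I_minus                                   # Equation (7)

def enabled(M, t):
    # t is enabled at M if M(p) >= W(p, t) for every input place p
    return np.all(M >= I_minus[:, t])

def fire(M, t):
    # Equation (6) with the firing vector u = e_t
    u = np.zeros(I.shape[1], dtype=int)
    u[t] = 1
    return M + I @ u

M0 = np.array([2, 0, 1, 0])                            # hypothetical marking
for t in range(I.shape[1]):
    if enabled(M0, t):
        print(f"firing t{t + 1} yields", fire(M0, t))
```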
Given a PN with an initial marking $M_0$, a firing path $\sigma$ is a sequence of transitions fired from $M_0$ to a marking $M$, denoted as $M_0 \xrightarrow{\sigma} M$. When this occurs, $M$ is said to be reachable from $M_0$. The set of all possible markings reachable from $M_0$ can be represented by a reachability tree, provided that this set is finite. An example of a reachability tree, corresponding to the PN in Figure 1, is shown in Figure 2. Taking the marking $M_0$ as the root of the reachability tree, a new branch is created for each transition fired from $M_0$. After exhausting all possible firings from $M_0$, a new level of markings is created, such as $M_1$ in Figure 2. At the new level, new branches are created for each marking using the same procedure, and when all the possible markings have been obtained, a new level is added to the tree. The process is repeated for each of the markings at every new level until all the markings reachable from $M_0$ have been obtained. However, when no transition can be fired from a given marking, it is considered a dead marking. If the evolution of markings is stopped by dead markings without reaching a desired final marking, the PN is blocked.
It should be noted that although the set of reachable markings from M 0 can be finite, the number of these markings could be so large that it becomes impossible to enumerate or represent them using a reachability tree. This issue is known as the state explosion of PN.

3.2. Filtered Beam Search Algorithm

The beam search algorithm was first used as an artificial intelligence technique for speech recognition. It has since been applied to combinatorial optimization problems, such as scheduling problems [55], the traveling salesman problem [56,57,58], and cutting and packing problems [59,60,61,62,63]. This method is efficient because solutions to these problems can be constructed sequentially by appending the next operation, city, or item to the partial solution based on the cost it adds. Thus, the construction of the solution can be viewed as a decision tree. At each level, the corresponding nodes represent partial solutions with given costs, while at the last level, the nodes represent complete solutions. One of these complete solutions is then selected as the best or optimal solution. The beam search algorithm performs a breadth-first search followed by a depth-first search. At each level of the tree, the nodes are branched, and those that are not promising for further branching, based on a cost function evaluation, are pruned. The $\beta$ nodes that are branched at each level are the beam nodes. Once these nodes have been selected, the others are discarded permanently. Since beam search has no backtracking, it is possible to discard good solutions, but much computational effort is saved. In [64], a modification of beam search was introduced, known as the filtered beam search algorithm. As the name suggests, at each level a local evaluation is carried out over the nodes that share the same parent node. Only those nodes that satisfy a local evaluation condition are retained as candidate beam nodes, thereby filtering the nodes. Once all the filtered nodes have been obtained from the parent nodes, only those that satisfy a global evaluation condition become the new beam nodes or parent nodes for the next level.
The number of filtered nodes α (filter width) and the number of beam nodes β (beam width) are user-defined. However, it is known that larger values require more computational effort to search the tree, while smaller values reduce the likelihood of finding the best or optimal solution. A representation of the filtered beam search algorithm is shown in Figure 3 for a solution with a sequence of n nodes. Sometimes, it is desired to maintain diversity in the search by setting α < β , which guarantees that at least one beam node is selected from a different parent node [60].
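The following sketch outlines the generic filtered beam search loop described above; it is our own illustration (the expand, local_cost, global_cost, and is_goal callbacks are hypothetical), keeping at most α children per parent and at most β nodes per level, with no backtracking.

```python
def filtered_beam_search(root, expand, local_cost, global_cost,
                         is_goal, beta, alpha):
    level = [root]
    while level:
        candidates = []
        for parent in level:
            children = sorted(expand(parent), key=local_cost)
            candidates.extend(children[:alpha])   # local filtering per parent
        if not candidates:
            return None                           # dead end: nothing to branch
        goals = [node for node in candidates if is_goal(node)]
        if goals:
            return min(goals, key=global_cost)    # best complete solution found
        candidates.sort(key=global_cost)          # global evaluation
        level = candidates[:beta]                 # new beam nodes; rest pruned
    return None
```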

4. PN-Based Algorithm for Solving the 1D-CSP

In this section, we first provide details about the PN model used to construct solutions for a given instance of the 1D-CSP. Next, we explain how the beam search algorithm, based on the reachability tree, is used to find the best solution.

4.1. PN Model for the 1D-CSP

As mentioned above, we use ordinary PN to model a given instance of the 1D-CSP. Figure 4 illustrates the general PN model for a given instance of the 1D-CSP, with m different items and stock length L. The construction of a solution for a 1D-CSP instance through the PN model follows the next-fit heuristic principle. This means that items are added sequentially to the solution as long as the total length of the items, or pattern length, does not exceed the length of the stock. Otherwise, the next item is put into a new stock, and the process is repeated until all items have been added to the solution. In this way, there will be patterns with or without waste, and the total waste of the solution will be the sum of the waste yielded from each pattern. The PN model allows the formation of patterns with and without waste, enabling the beam search algorithm to find the solution with the minimum waste and the minimum number of stocks with waste.
According to the general PN model, there will be one place for each of the $m$ item types, labeled $p_2, p_3, \ldots, p_{m+1}$, as shown in Figure 4. We refer to these places as item places. The number of tokens inside the item places represents the number of orders of each item, denoted $d_1, d_2, \ldots, d_m$. The stock is represented by a single place labeled $p_1$, and the number of tokens inside this place is equal to the stock length, $L$. We refer to this place as the stock place. The item places, as well as the stock place, are input places for transitions $t_1, t_2, \ldots, t_m$, referred to as item transitions. Each item transition consumes a token from its corresponding item place, indicating that an order of that item is being put into a solution. Similarly, each item transition removes $l_i$ tokens from the stock place, corresponding to the length of the item of its item place. As items are taken, a pattern is formed, and its length is accumulated in the place labeled $p_{m+2}$, which we name the pattern place. As shown in Figure 4, the pattern place is an output place of each item transition, and each item transition adds $l_i$ tokens to the pattern place according to the length of the item being added to the solution. Therefore, the number of tokens inside the pattern place varies from 0 to $L$. However, the minimum pattern length will be $\lambda = \min\{l_1, l_2, \ldots, l_m\}$, representing a pattern formed by only one order of the item with the smallest length.
The transitions $t_{m+1}, t_{m+2}, t_{m+3}, \ldots, t_{m+L}$ are the pattern transitions, and there will be $L$ transitions of this type. A pattern transition removes tokens from the pattern place according to the corresponding pattern length that has been formed. For example, $t_{m+1}$ removes $L$ tokens, $t_{m+2}$ removes $L-1$ tokens, and so on, such that $t_{m+L}$ removes $\lambda$ tokens. Since a pattern includes the lengths of the items as well as the unused material of the stock (if any), pattern transitions $t_{m+2}, t_{m+3}, \ldots, t_{m+L}$ are output transitions of the stock place to complete the pattern formation. Thus, $t_{m+2}$ removes one token, $t_{m+3}$ removes two tokens, and so on, such that $t_{m+L}$ removes $L-\lambda$ tokens from the stock place. Transition $t_{m+1}$ is not an output transition of the stock place since it corresponds to a pattern without waste. At the same time, the pattern transitions are input transitions of the stock place, so that once a pattern has been formed and included in a solution, another stock becomes available to continue the process of forming the solution. Each pattern transition adds $L$ tokens to the stock place, as shown in Figure 4. So far, the PN model can construct any possible solution by starting with any item, but we included three more places to enhance the model. These places are $p_{m+3}$, $p_{m+4}$, and $p_{m+5}$, where $p_{m+3}$ records the amount of waste, $p_{m+4}$ helps to count the number of stocks without waste, and $p_{m+5}$ counts the number of stocks with waste. Places $p_{m+3}$ and $p_{m+5}$ are output places of the pattern transitions. Each pattern transition adds one token to $p_{m+5}$, while $p_{m+3}$ receives $1, 2, \ldots, L-\lambda$ tokens from transitions $t_{m+2}, t_{m+3}, \ldots, t_{m+L}$, respectively. Place $p_{m+4}$ is an output place of transition $t_{m+1}$, which adds one token each time there is a pattern without waste. Based on the above descriptions, Figure 5 shows the general form of the incidence matrices of the PN model for an instance of the 1D-CSP.
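As an illustration of the structure just described (and of Figure 5), the sketch below builds the incidence matrices for a small instance. It is our own reading of the model, not the authors' code: places are ordered $p_1$ (stock), $p_2,\ldots,p_{m+1}$ (items), $p_{m+2}$ (pattern), $p_{m+3}$ (waste), $p_{m+4}$ (stocks without waste), $p_{m+5}$ (stocks with waste), and, for simplicity, one pattern transition is created per waste amount $0, 1, \ldots, L-1$ rather than stopping at $L-\lambda$.

```python
import numpy as np

def build_incidence(L, lengths):
    """Incidence matrices of the PN-CSP model for stock length L and item lengths."""
    m = len(lengths)
    n_places, n_trans = m + 5, m + L
    I_plus = np.zeros((n_places, n_trans), dtype=int)
    I_minus = np.zeros((n_places, n_trans), dtype=int)
    for i, li in enumerate(lengths):            # item transitions t_1..t_m
        I_minus[0, i] = li                      # take l_i units from the stock place
        I_minus[1 + i, i] = 1                   # consume one order of item i
        I_plus[m + 1, i] = li                   # accumulate l_i in the pattern place
    for j in range(L):                          # pattern transitions t_{m+1}..t_{m+L}
        t = m + j                               # closes a pattern of length L - j
        I_minus[m + 1, t] = L - j               # empty the pattern place
        I_plus[0, t] = L                        # renew the stock place
        if j == 0:
            I_plus[m + 3, t] = 1                # one more stock without waste (p_{m+4})
        else:
            I_minus[0, t] = j                   # discard j leftover units of stock
            I_plus[m + 2, t] = j                # record the waste amount (p_{m+3})
            I_plus[m + 4, t] = 1                # one more stock with waste (p_{m+5})
    return I_plus, I_minus, I_plus - I_minus

I_plus, I_minus, I = build_incidence(L=10, lengths=[3, 4, 5])
print(I.shape)   # (8, 13): m + 5 places, m + L transitions
```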

4.2. Marking Evolution and Solution Construction

To obtain a solution for a given instance of the 1D-CSP with the PN model described in the previous section, referred to as PN-CSP, we begin with the initial marking shown in Figure 4, where there is an unused stock (place $p_1$) and $m$ items with their corresponding numbers of orders (places $p_2$ to $p_{m+1}$). Hence, the general form of the initial marking of the PN-CSP is as follows:
M_0 = [L \ \ d_1 \ \ d_2 \ \cdots \ d_m \ \ 0 \ \ 0 \ \ 0 \ \ 0]^T.
From the initial marking, the enabled transitions are $t_1, t_2, \ldots, t_m$, so any of the items could be the first in the solution. The solution is constructed as a linear sequence of items arranged from left to right. Once an enabled transition is fired, the pattern place accumulates tokens representing the length of the item placed first in the solution. Suppose transition $t_2$ is fired; then, the resulting marking is as follows:
M_1 = [L - l_2 \ \ d_1 \ \ d_2 - 1 \ \cdots \ d_m \ \ l_2 \ \ 0 \ \ 0 \ \ 0]^T.
While the number of tokens in $p_{m+2}$ is smaller than all of the arc weights $L, L-1, L-2, \ldots, \lambda$, transitions $t_1, t_2, \ldots, t_m$ can be fired one at a time because they are the only ones enabled. This means the pattern formation continues as long as there is space in the current stock. However, if a pattern length less than the stock length is reached, such as $L-1$, $L-2$, or $\lambda$, the corresponding transition from $t_{m+2}$ to $t_{m+L}$ is enabled, considering that there will be $1, 2, \ldots$, or $L-\lambda$ tokens in place $p_1$, representing the waste of the stock. If one of the transitions $t_{m+2}, \ldots, t_{m+L}$ is selected to be fired, the pattern formation stops to renew the stock for a new pattern. For example, consider the marking:
M_k = [2 \ \ a \ \ b \ \cdots \ c \ \ L-2 \ \ 0 \ \ 0 \ \ 0]^T,
where $a, b, \ldots, c$ are the corresponding remaining numbers of item orders after $k-1$ items have been included in the first pattern of the solution, and $M_k(p_{m+2}) = L-2$. Then, $t_{m+3}$ is enabled and can be fired, yielding the marking:
M_{k+1} = [L \ \ a \ \ b \ \cdots \ c \ \ 0 \ \ 2 \ \ 0 \ \ 1]^T.
Here, there is a new empty stock of length $L$, and one stock with waste ($M_{k+1}(p_{m+5}) = 1$) has been generated as part of the solution, with a waste of 2 ($M_{k+1}(p_{m+3}) = 2$). With the new empty stock, the construction of the solution continues with the remaining item orders. When a pattern transition is fired, it means that a cutting pattern has been formed, adding a new stock with or without waste to the solution. The markings that enable and fire a pattern transition are called pattern markings. On the other hand, if an item transition is fired, it means that an item has been added to the pattern currently under construction. The markings that enable and fire an item transition are called item markings. The marking evolution is performed as described, forming patterns and renewing the stocks when needed until all of the items have been included in a solution.
When the number of tokens in places $p_2, p_3, \ldots, p_{m+1}$ is zero, the solution has been completed, and the marking $M_f = [L \ \ 0 \ \ 0 \ \cdots \ 0 \ \ 0 \ \ \omega \ \ \nu \ \ \eta]^T$ is considered the final marking, where $\omega$ is the final total waste, $\nu$ is the final number of stocks without waste, and $\eta$ is the final number of stocks with waste. Therefore, the solution can be obtained from the firing sequence $\sigma$ obtained from $M_0$ to $M_f$, that is, $M_0 \xrightarrow{\sigma} M_f$. A final firing sequence would have the following form:
\sigma = t_2, t_m, \ldots, t_1, t_{m+3}, t_1, t_3, \ldots, t_m, t_{m+1}, \ldots, t_{m+L}, t_m, t_1, \ldots, t_3, t_{m+2}.
To decode $\sigma$ into a sequence of item lengths, the pattern transitions $t_{m+3}$, $t_{m+1}$, $t_{m+L}$, and $t_{m+2}$ are dropped, and a new sequence is made up of the remaining transitions. This new sequence is as follows:
\varsigma = t_2, t_m, \ldots, t_1, t_1, t_3, \ldots, t_m, \ldots, t_m, t_1, \ldots, t_3.
The solution is formed with either the weights $W(t_i, p_{m+2})$ or $W(p_1, t_i)$ for $1 \leq i \leq m$ for each of the transitions in $\varsigma$, such that these weights correspond to the lengths of the items. Therefore, for the final firing sequence $\sigma$, the solution would have the following form:
s = l_2, l_m, \ldots, l_1, l_1, l_3, \ldots, l_m, \ldots, l_m, l_1, \ldots, l_3.
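For illustration, a minimal decoding sketch is shown below (our own code, using the transition ordering assumed in the earlier incidence-matrix sketch: indices 0 to m−1 are item transitions, larger indices are pattern transitions). It drops the pattern transitions and maps the remaining firings to item lengths, grouped per stock for readability.

```python
def decode(sigma, lengths):
    """Turn a firing sequence (transition indices) into per-stock lists of item lengths."""
    m = len(lengths)
    solution, pattern = [], []
    for t in sigma:
        if t < m:                      # item transition: append the item length
            pattern.append(lengths[t])
        else:                          # pattern transition: the current stock is closed
            solution.append(pattern)
            pattern = []
    if pattern:                        # items fired after the last pattern transition
        solution.append(pattern)
    return solution

# Example with lengths l = (3, 4, 5) and stock length 10:
print(decode([1, 0, 5, 2, 2, 3], [3, 4, 5]))   # [[4, 3], [5, 5]]
```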

5. Filtered Beam Search Algorithm Implementation

Consider the reachability tree construction of the PN-CSP with initial marking $M_0 = [L \ d_1 \ d_2 \ \cdots \ d_m \ 0 \ 0 \ 0 \ 0]^T$ for a given instance of the 1D-CSP. Starting from this initial marking, all potential solutions are derived by investigating firing sequences leading to the final marking $M_f = [L \ 0 \ 0 \ \cdots \ 0 \ 0 \ \omega \ \nu \ \eta]^T$. However, exhaustive exploration of all sequences is computationally intensive. To solve this issue, we use the FBS, which reduces the number of explored markings as we progress through the reachability tree. The FBS does not perform backtracking like traditional any-time search methods, optimizing computational efficiency. The effectiveness of the FBS depends on the parameters $\alpha$ and $\beta$, chosen to balance exploration and exploitation. Algorithm 1 outlines our FBS implementation, which starts by generating all child nodes from firing transitions $t_1, t_2, \ldots, t_m$ at the initial marking or root node (level 0 or $L_0$), resulting in $m$ markings at level 1 ($L_1$), as shown in Figure 6. In our FBS, the markings at $L_1$ are neither locally nor globally evaluated, treating each node as a potential beam node, contrary to [55]. We made this decision because, under the local evaluation, the algorithm would tend to select the solutions with the largest items placed first, minimizing waste in the initial stock. Preliminary experiments showed that this approach yields better solutions compared to selecting only $\beta$ nodes at $L_1$. Thus, starting from $L_2$, we apply both local and global evaluations, setting $\alpha \leq \beta$ to ensure solution diversity.
Algorithm 1 Pseudocode of algorithm FBS-PN-CSP
Input: Number of different item types ($m$), stock length ($L$), item lengths ($l_1, l_2, \ldots, l_m$), and item orders ($d_1, d_2, \ldots, d_m$).
Output: $F_{best}$
1: With the instance data, generate the incidence matrices $I^+$, $I^-$ and obtain the incidence matrix $I$.
2: Generate the initial marking as $M_0 = [L \ d_1 \ d_2 \ \cdots \ d_m \ 0 \ 0 \ 0 \ 0]$.
3: Set the beam width $\beta$ and the filter width $\alpha$.
4: $F_{best} \leftarrow \infty$
5: $W_w \leftarrow 0$
6: $W_{sw} \leftarrow 0$
7: Generate level $L_0$ of the reachability tree with the marking $M_0$ as the root node such that $Currentlevel = \{M_0\}$.
8: $level \leftarrow 0$
9: $Bestsolution \leftarrow False$
10: while $Bestsolution$ is $False$ do
11:    for each marking in $Currentlevel$ do
12:        Get $W_w$, $W_{sw}$ and $F_{global}$.
13:    end for
14:    if $level > 1$ then
15:        Keep the $\mu = \min(\beta, |Currentlevel|)$ best markings in $Currentlevel$ with the lowest $F_{global}$ cost and form the set $B$ with these markings. Eliminate the remaining markings from $Currentlevel$.
16:    else
17:        Keep the $\mu = |Currentlevel|$ markings in $Currentlevel$ and form the set $B$ with these markings.
18:    end if
19:    $Newlevel = \emptyset$
20:    for each marking $M$ in $B$ do
21:        Determine the reachable markings $M'$ from $M$ with Equation (6) and form the set $M_r$ with those reachable markings, where $r = 1, 2, \ldots, |B|$.
22:        if $M_r \neq \emptyset$ then
23:            Obtain $W_w$, $W_{sw}$, $F_{local}$ and $F_{global}$ for each $M' \in M_r$.
24:            if $level > 0$ then
25:                Check whether any marking $M' \in M_r$ has already been generated previously. If it has, let $M_{prev} \in M_1 \cup M_2 \cup \cdots \cup M_{r-1}$ be the previously generated marking that is equal to $M' \in M_r$, and then eliminate from the reachability tree the one with the maximum $F_{global}$ cost.
26:                if $M_r \neq \emptyset$ then
27:                    Keep the $\tau = \min(\alpha, |M_r|)$ markings with the lowest $F_{local}$ cost and eliminate the remaining ones from $M_r$.
28:                end if
29:            else
30:                Keep the markings in $M_r$.
31:            end if
32:            Make $Newlevel \leftarrow Newlevel \cup M_r$
33:        end if
34:    end for
35:    Check whether the final marking $M_f = [L \ 0 \ 0 \ \cdots \ 0 \ 0 \ * \ * \ *]$ has been reached.
36:    if $M_f$ has been reached then
37:        $F_{best} \leftarrow F_{global}$
38:        $Bestsolution \leftarrow True$
39:    else
40:        $level \leftarrow level + 1$
41:        $Currentlevel = Newlevel$
42:    end if
43: end while
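To illustrate lines 21–27 of Algorithm 1, the sketch below (our own simplification, reusing the incidence matrices of the earlier sketch and a user-supplied local_cost evaluation) generates the markings reachable from one beam node, drops markings already generated, and keeps the α locally best ones; unlike line 25, which keeps the duplicate with the lower $F_{global}$ cost, this simplified version keeps whichever copy was seen first.

```python
import numpy as np

def successors(M, I_plus, I_minus):
    """Markings reachable from M by firing a single enabled transition (Equation (6))."""
    I = I_plus - I_minus
    out = []
    for t in range(I.shape[1]):
        if np.all(M >= I_minus[:, t]):       # enabling rule
            out.append(M + I[:, t])
    return out

def expand_beam_node(M, I_plus, I_minus, alpha, local_cost, seen):
    """Children of beam node M after duplicate removal and local filtering."""
    children = []
    for child in successors(M, I_plus, I_minus):
        key = tuple(int(x) for x in child)
        if key in seen:                      # simplified duplicate handling
            continue
        seen.add(key)
        children.append(child)
    children.sort(key=local_cost)            # local evaluation
    return children[:alpha]                  # keep at most alpha filtered nodes
```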
In most beam search implementations, the cost of a node is computed using a function that considers the cumulative cost up to the node and an estimation of the cost of the remaining solution. The effectiveness of these algorithms heavily depends on choosing the right estimation cost function. Usually, in the FBS algorithm, the local evaluation uses cumulative costs, while the global evaluation considers both cumulative and estimated costs. Our FBS implementation employs the objective function (1) as a fitness function to perform both local and global evaluations. This function evaluates a solution by capturing the number of stocks used, the total waste generated, and the number of stocks with waste that have been generated in the solution.
As we progress in constructing the reachability tree, each node or marking represents only a partial solution, thus its corresponding cost is also partial. Despite not having a complete solution, partial solutions provide insights into the number of stocks used, those with or without waste, and the amount of unused material or potential waste on the current stock. For example, at the initial marking or root node, no stock has been used, resulting in no stocks with or without waste, and an unused material length of L. At level $L_1$, item markings represent solutions where one order of each item has been taken, but no stock has been fully utilized with or without waste. Eventually, as the reachability tree progresses, pattern markings are reached at some level, indicating that a cutting pattern has been obtained. At the first pattern marking, a stock has been used, and the waste amount for the stock is known. Subsequent item markings' costs must consider the used stock and its waste. It can be seen from the objective function (1) that the first term sums the partial wastage of each used stock, while the second sums the number of stocks with waste. We compute two cumulative costs accordingly: one for waste using Equation (9) and another for the number of stocks with waste using Equation (10). Both are employed in Equation (11) for the global evaluation of nodes. These two cumulative costs are computed upon reaching a pattern marking, while Equation (12) is utilized for local evaluations, considering the unused material of the current stock in each solution.
W_w = W_w + \frac{w_j}{L} \quad (9)

W_{sw} = \frac{M(p_{m+5})}{M(p_{m+5}) + M(p_{m+4})} \quad (10)

F_{global} = \frac{W_w + W_{sw}}{M(p_{m+5}) + M(p_{m+4}) + 1} \quad (11)

F_{local} = \begin{cases} \dfrac{1}{M(p_{m+5}) + M(p_{m+4}) + 1}\left(\dfrac{M(p_1)}{L} + W_{sw}\right) & \text{if } M \text{ is an item marking} \\ \dfrac{1}{M(p_{m+5}) + M(p_{m+4}) + 1}\left(\dfrac{w_j}{L} + W_{sw}\right) & \text{if } M \text{ is a pattern marking} \end{cases} \quad (12)
In Equations (9) and (12), $w_j$ is obtained from the incidence matrix of the PN-CSP, where $w_j = I(p_{m+3}, t)$ for $t \in \{t_{m+2}, t_{m+3}, \ldots, t_{m+L}\}$. This is because when a pattern transition is fired, the marking of place $p_1$ is immediately updated to $L$, indicating that the amount of waste in the already used stock is lost. However, this amount of waste can be determined from the incidence matrix. The cumulative costs $W_w$ and $W_{sw}$ are set to zero for the item markings between the initial marking and the first pattern marking. These costs are computed and updated at each reached pattern marking. Consequently, the outcomes of $W_w$ and $W_{sw}$ are retained for the item markings between two subsequent pattern markings. Once the final marking is reached, the last $F_{global}$ outcome is set as the best solution $F_{best}$. To determine whether the final marking has been reached, at each level we compare the markings obtained with the desired final marking $M_f$. However, we only consider the marking of places $p_1, p_2, \ldots, p_{m+2}$. Specifically, we compare the marking $M = [M(p_1) \ M(p_2) \ M(p_3) \ \cdots \ M(p_{m+1}) \ M(p_{m+2}) \ * \ * \ *]$, obtained at some level, with the final marking $M_f = [L \ 0 \ 0 \ \cdots \ 0 \ 0 \ * \ * \ *]$. The '*' means that the marking of the corresponding places is not considered for the comparison. This is because we do not know in advance how much waste would be generated by the best solution, or the number of stocks with and without waste. However, we know that at the final marking there are no more item orders to place, and the last stock is generated when the last item order is taken for the final cutting pattern.
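As a sketch of how Equations (9)–(12) can be computed for a marking, the functions below follow the place ordering assumed in the earlier sketches ($p_1$ at index 0, $p_{m+2}$ at index $m+1$, $p_{m+4}$ at index $m+3$, $p_{m+5}$ at index $m+4$); the code and names are our own illustration, with $W_w$ and $W_{sw}$ carried over from the parent node as described above.

```python
def update_cumulative(M, m, W_w, w_j, L):
    """Equations (9) and (10), applied when a pattern marking is reached."""
    W_w = W_w + w_j / L
    W_sw = M[m + 4] / (M[m + 4] + M[m + 3])
    return W_w, W_sw

def f_global(M, m, W_w, W_sw):
    """Equation (11): global evaluation of a node."""
    return (W_w + W_sw) / (M[m + 4] + M[m + 3] + 1)

def f_local(M, m, L, W_sw, is_pattern_marking, w_j=0.0):
    """Equation (12): local evaluation; w_j is read from the incidence matrix."""
    leftover = w_j if is_pattern_marking else M[0]
    return (leftover / L + W_sw) / (M[m + 4] + M[m + 3] + 1)
```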
Observe from Algorithm 1, lines 17 and 29, that when performing global and local evaluations we do not necessarily select $\beta$ or $\alpha$ markings, respectively. Instead, we select the minimum between $\beta$ (or $\alpha$) and the number of markings obtained globally (or locally). This is because, in a reachability tree, a marking could be generated by firing different transitions, resulting in fewer markings than the $\beta$ markings needed at a level or fewer than the $\alpha$ markings needed from a parent marking or beam node. In some beam search implementations, if the required $\beta$ nodes are not met for some level, then more parent nodes at the previous level are explored to meet the required $\beta$ nodes. However, we do not do this in our FBS implementation, as it is possible to encounter dead markings, which might make it impossible to complete the required number of nodes. Dead markings are related to the liveness property of a PN, and this property is closely related to the presence of siphons [65]. It is known that a PN with siphons is likely to be non-live, as once the places of a siphon become unmarked at some marking, they remain unmarked in all subsequent markings. We performed a structural analysis for an instance with ten different items and a total of 20 orders, using an open-source PN modeling platform called PIPE2 version 4.3.0 [66], to determine the presence of siphons. The results are shown in Figure 7, indicating that place $p_2$ ($p_1$ according to the enumeration of the PIPE2 platform) is a minimal siphon. Extending this result to the general PN-CSP model with the initial marking in its general form, and considering the dynamics described in the above paragraphs, places $p_1, p_2, \ldots, p_m$ could be minimal siphons and could become unmarked at some marking. The PN-CSP would become deadlocked if all the places $p_1, p_2, \ldots, p_m$ were unmarked, although this situation indicates that the desired final marking has been reached. However, to prevent our algorithm from getting stuck due to incomplete $\beta$ nodes at some levels and potentially failing to reach the desired final marking, we decided to use the minimum number of nodes as outlined in Algorithm 1.

6. Experiments

Although we identified a wide variety of datasets in the literature, most studies do not use common datasets or performance measures to compare the algorithms. Additionally, implementing the published algorithms correctly can be challenging, and often the code is not shared. Nevertheless, we identified two algorithms that share some similarities with our algorithm and are not population-based. These algorithms were tested on five datasets that we found suitable for comparison with our algorithm. The first algorithm is the Least Lost Algorithm (LLA), introduced in [49], which uses a dataset of ten instances with a single stock size. We refer to this dataset as hinterding because, to the best of our knowledge, it was first used in [67]. The LLA is executed in two phases. In the first phase, the item lengths are arranged in decreasing order, while in the second, the second half of the arrangement is placed before the first half. In both phases, the items are taken from the arrangement to construct cutting patterns, beginning with those that have no waste. If no additional cutting patterns without waste can be found, the search continues with patterns that have one unit of waste. This process continues, increasing the allowed waste by one unit, until all the item orders have been included in a cutting pattern. If the number of stocks used is above the optimal theoretical value by more than 5%, the algorithm performs the second phase. The feature of increasing the allowed waste by one unit is represented in our PN-CSP model by the pattern transitions. The LLA was compared to the evolutionary programming algorithm of [7] over the same dataset, and the LLA yielded better results.
The other algorithm we compare to the FBS-PN-CSP is the Generate and Solve algorithm (G&S) presented in [50]. This algorithm is also executed in two phases. The first one reduces the problem size using a tree method that constructs all possible cutting patterns. Feasible cutting patterns are then selected and sent to the second phase, where an integer linear programming (ILP) method is implemented to obtain the best solution for the reduced set of solutions. In the first phase, a new reduced set of solutions is generated by considering an aging mechanism for previously used solutions, which can be re-selected as long as they have not reached a specified age limit. These new solutions are then sent to the second phase to determine the best solution among them, replacing the older solution if the new one is better. This process continues until a predefined time limit is reached, and the last solution found is returned as the best solution for the problem. The G&S algorithm was compared to the column generation technique. Both algorithms performed similarly for small instances, but the G&S algorithm was more effective for large instances. From the datasets used in [50], we selected the ones called hard28, hard10, and wae_gau1, available at [68]. Additionally, we included the dataset falkenauer_U from the same repository, which consists of four groups named binpack1, binpack2, binpack3, and binpack4. Table 1 provides details about the datasets. The third column lists the minimum and maximum number of item types among the instances of each dataset, while the fourth column shows the number of different item types, along with the standard deviation in rounded brackets. Both the hinterding dataset and the hard10 dataset have four different item types, but the variability in the former is greater than in the latter.
The comparisons among the three algorithms were made with respect to the number of stocks used and their optimal theoretical value, which can be obtained using Equation (13). The experiments were conducted on a Quad-Core Intel(R) Core(TM) i5 computer with a 3.4 GHz processor and 8 GB RAM. All three algorithms were coded in Python 3.11.1. For the second phase of the G&S algorithm, we used the linprog function of the SciPy package. We set the same parameter values as in [50]: a time limit of 600 s for the G&S algorithm, a mutation rate of 10%, a maximum age of 2, and a maximum number of cutting patterns for the tree construction of 1,000,000. Additionally, we executed the G&S algorithm 10 times for each instance. However, we did not set a time limit in the second phase of the G&S algorithm, as the solutions from the ILP solver were obtained in an acceptable time.
lb = \left\lceil \frac{\sum_{i=1}^{m} l_i d_i}{L} \right\rceil \quad (13)
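For reference, a short sketch of Equation (13) as code (assuming the rounding is a ceiling, which matches the use of $lb$ as the optimal theoretical number of stocks):

```python
from math import ceil

def lower_bound(lengths, demands, L):
    """Optimal theoretical number of stocks (Equation (13))."""
    return ceil(sum(l * d for l, d in zip(lengths, demands)) / L)
```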
One of the issues with beam search algorithms is the selection of the beam width $\beta$. The smaller the $\beta$ value, the faster the search, but this comes with a loss of optimality due to aggressive pruning. We conducted preliminary experiments to determine suitable $\beta$ and $\alpha$ values, considering the condition $\alpha \leq \beta$. These experiments were carried out with the hinterding dataset, testing $\beta$ values from 5 to 45. In all ten instances of this dataset, we observed that the best solutions were found with a beam width between 5 and 25. For small instances, the best solutions were found with lower $\beta$ values, while for large instances the values were higher. However, with higher values (namely from 30 to 45) there was no improvement in the solutions among the tested instances. Thus, for all the experiments with the FBS-PN-CSP algorithm, we used the interval $5 \leq \beta \leq 35$ for the beam width, with the values 5, 15, 25, and 35. We chose these $\beta$ values because, in the preliminary experiments, improvements in the solutions of the tested instances were observed at these values. Furthermore, apart from yielding no solution improvements in the preliminary experiments, higher values of $\beta$ impact the efficiency of the FBS algorithm. However, for the hinterding dataset, we also implemented the $\beta$ value of 45, as in the preliminary experiments. For the hard10 dataset, we could only implement a beam width of $\beta = 5$ due to the length of the stock, as will be discussed in Section 8. The filter width was set in the interval $2 \leq \alpha \leq \beta$ for each value of $\beta$. With $\alpha = 1$, no good solution was found in any instance.

7. Results

For each tested instance of the datasets, we report the best results obtained with each algorithm for the number of stocks used and the time taken in seconds. Additionally, we provide the $lb$ value for all the instances and the percentage above this value (% > $lb$) obtained by each algorithm. The FBS-PN-CSP, G&S, and LLA algorithms were compared only on the hinterding and falkenauer_U datasets, since the LLA algorithm ran out of memory for the other datasets. The minimum number of stocks used among the compared algorithms is highlighted in boldface. For the G&S algorithm, we provide the best solution found within the 10 executions, and likewise, for the FBS-PN-CSP algorithm, we include the best combinations of beam width and filter width where the best stock result was found. The results of the hinterding, hard10, and wae_gau1 datasets are reported in this section. The results of the hard28, binpack1, binpack2, binpack3, and binpack4 datasets are included in Appendix A.
For the hinterding dataset, the three algorithms obtained the optimal theoretical value for the 1a–5a instances and the 7a instance, as shown in Table 2. The FBS-PN-CSP and the G&S algorithms also obtained the same best result for the 6a instance. Only the G&S obtained the optimal theoretical value for the 8a–10a instances. The FBS-PN-CSP algorithm had better results than the LLA algorithm for the 6a, 8a, and 9a instances, while for the 10a instance, both had the same result. Although the FBS-PN-CSP algorithm was not better than the G&S algorithm for instances 8a–10a, it obtained a percentage above the optimal theoretical value of less than 1%. The FBS-PN-CSP algorithm was more efficient than the G&S algorithm in six out of the seven instances where their results were equal, namely 2a–7a. However, the LLA algorithm had better times in all the instances where it had the same results as the other two algorithms. The beam width of the FBS-PN-CSP algorithm seems to increase as the number of items and the stock length increase. The highest beam width and filter width values were obtained in the 10a instance, which has the highest number of items and the longest stock.
In 12 out of 17 instances of the wae_gau1 dataset, the FBS-PN-CSP algorithm matched the best results of the G&S algorithm (Table 3), and in 9 of these 12 instances it had better times. In 4 of the 17 instances, the FBS-PN-CSP had better results, and in only one instance was the G&S better than the FBS-PN-CSP. In 7 of the 17 instances, the FBS-PN-CSP obtained the optimal theoretical value, whereas the G&S did so in only four instances. Regarding beam width, four instances required the higher tested values, namely 25 and 35, but with a small filter width. For the hard10 instances, the G&S algorithm was better than the FBS-PN-CSP algorithm, as shown in Table 4. However, the G&S algorithm obtained the optimal theoretical value in only one instance, namely HARD9. Even with the smallest value of the beam width, the time required by the FBS-PN-CSP was far higher than that of the G&S algorithm. For this reason, we could not implement larger values of the beam width, since the computational cost was prohibitive, although the algorithm did not run out of memory. Despite the stock results of the FBS-PN-CSP algorithm, they were not far from those of the G&S and could at least match them if higher values of the beam width could be implemented.
The results of the FBS-PN-CSP and G&S algorithms for the hard28 dataset are shown in Table A1. Both algorithms obtained the same best results except for the BPP900 and BPP781 instances, where the FBS-PN-CSP had better results. Nevertheless, neither obtained the optimal theoretical value. The FBS-PN-CSP was more efficient in 15 of the 27 instances where both algorithms obtained the same result. As for the beam width of the FBS-PN-CSP, it was the minimum tested value, $\beta = 5$, for all the instances.
Table A2 shows the best stock results for the binpack1 instances of the falkenauer_U dataset, where the FBS-PN-CSP, G&S, and LLA algorithms are compared. The FBS-PN-CSP and the G&S algorithms obtained the optimal theoretical value of each instance, but the FBS-PN-CSP had the best times. The LLA algorithm achieved the optimal theoretical value in 12 out of 20 instances and had better times than the FBS-PN-CSP algorithm. Only in two instances did the FBS-PN-CSP use the higher beam width values (25 and 35); in most instances, the value was 5, the smallest value tested. For the binpack2 instances, the FBS-PN-CSP had the same results as the G&S algorithm in 19 instances (Table A3), and these were equal to the corresponding optimal theoretical values. All the results of the G&S algorithm were equal to the optimal theoretical value. The LLA algorithm obtained the optimal theoretical value in half of the instances and had the best times among the three algorithms. The FBS-PN-CSP algorithm had better times than the G&S in ten of the instances where they had the same stock result. The beam width was mostly 15 throughout the binpack2 instances. The binpack3 results (Table A4) show that the G&S algorithm obtained the optimal theoretical value in all instances, and the FBS-PN-CSP algorithm did so in 17 instances. Likewise, the LLA algorithm achieved this in only 11 instances, but with better times than the FBS-PN-CSP and G&S algorithms. Only in the instance u500_14 did the FBS-PN-CSP obtain a better time than the G&S algorithm. The values of the beam width were 15 and 25 in most instances, and the filter width was higher than in the previous groups of instances of the falkenauer_U dataset. Similar results were obtained for the binpack4 instances, shown in Table A5, where the G&S algorithm outperformed the other two algorithms. In all instances, this algorithm obtained the optimal theoretical value; the FBS-PN-CSP did so in 14 instances, and the LLA algorithm in seven instances, but only with better times than the FBS-PN-CSP. The beam width was mostly higher, namely 25, throughout the instances, as was the filter width. Although in the instances of binpack2, binpack3, and binpack4 the FBS-PN-CSP was not better than the G&S in terms of the number of stocks used, the percentage above the optimal theoretical value was less than 1%, with a difference of one stock in all cases.

8. Discussion

It can be observed from the results shown in the previous section that the FBS-PN-CSP algorithm is effective and efficient in most of the instances on which it was tested, in comparison with the other two algorithms, except for the hard10 dataset. The LLA algorithm could only address the instances of the hinterding, binpack1, binpack2, binpack3, and binpack4 datasets. It was less effective than the FBS-PN-CSP but more efficient in those instances where they obtained the same best stock result. A reason for this better efficiency is that, to construct the cutting patterns, the LLA increases the allowed waste by one unit from one iteration to the next, obtaining the best cutting patterns that meet the waste requirement without considering higher amounts of allowed waste. Meanwhile, the FBS-PN-CSP considers several amounts of waste simultaneously, represented by the pattern transitions. As the stock gets longer and the number of item types increases, the incidence matrix grows accordingly, making the computations slower. This can be observed with the hard10 dataset, which has the longest stock among the tested datasets and a higher number of item types. A modification of the structure of the PN model, such as considering one pattern transition per unit of allowed waste as in the LLA algorithm, may improve the performance of the FBS-PN-CSP. Contrary to the LLA algorithm, which tests complete cutting patterns for each allowed waste on each iteration, impacting its effectiveness as the stock gets longer, the FBS-PN-CSP constructs the solutions by adding one item at a time. The algorithm would have to be modified accordingly to consider the different amounts of waste.
Considering the time for the best solution found by the G&S algorithm, the FBS-PN-CSP was more efficient in 50% or more of the instances where both algorithms obtained the same best results for the hinterding, wae_gau1, hard28, binpack1, and binpack2 datasets. Most of the time required by the G&S algorithm is spent on the construction of cutting patterns; as the number of items increases, the number of cutting patterns also increases. Nevertheless, the G&S algorithm was limited to constructing 1,000,000 cutting patterns for each instance. In contrast, the FBS-PN-CSP only generates cutting patterns from those partial solutions with the least waste and immediately discards the worst solutions. It should also be taken into account that the G&S algorithm had a time limit of 600 s for its execution. Except for the hard10 dataset, the best results of the FBS-PN-CSP were obtained below this time in a single execution. Additionally, the average time over 10 executions of the G&S was longer than that of the FBS-PN-CSP and LLA algorithms in most instances.
The results of the binpack1, binpack2, binpack3, and binpack4 datasets provide insights into the sensitivity of the beam width and filter width as the number of item types and the number of orders increase while the stock length remains the same. As the number of orders and item types increased, the best solutions were found with the higher tested values of the beam width and the filter width. The reason is that the more item types there are, the more item places are needed in the PN model. Thus, more solutions need to be evaluated at the first level of the reachability tree, and higher values of beam width and filter width are preferred to evaluate most of those solutions from the second level, but this impacts the efficiency. The proposed improvement of the PN model structure could also be useful to address this efficiency issue, since with a smaller incidence matrix the computations should be faster and the FBS algorithm could be implemented with higher values of beam width and filter width.
The results of all the tested instances demonstrate that the FBS-PN-CSP can address real-world 1D-CSP problems with small to large numbers of items. However, the length of the stock impacts its efficiency, as shown with the hard10 dataset. It should be remarked that, in the instances where the FBS-PN-CSP algorithm did not achieve the best solutions, its results were very close to the best ones, in most cases just one stock above. Therefore, industries handling stock lengths and item types such as those of the hinterding and falkenauer_U datasets could obtain good or optimal cutting plans. Moreover, the FBS-PN-CSP can yield optimal or near-optimal cutting plans for stock lengths like those of the hard28 and wae_gau1 datasets in acceptable times. The building industry, the door and window framing industry, and the furniture industry, to name a few, could benefit from the implementation of the FBS-PN-CSP algorithm. These industries usually have several orders, few item types, and not-so-long stocks. However, in industries like the wire industry, where the stock length can be extremely long, in the tens of thousands of units, the FBS-PN-CSP could fail to yield cutting plans close to the optimal number of stocks in an acceptable time.

9. Conclusions

In this paper, an algorithm based on the modeling power of Petri nets was presented for solving the one-dimensional cutting stock problem (1D-CSP). To the best of our knowledge, Petri nets had not been used before to address the 1D-CSP. First, we provided the general Petri net model to construct the solutions of a given instance of the 1D-CSP. Each solution consists of a series of cutting patterns with or without waste. Then, the filtered beam search algorithm is used to search for the best solution through the exploration of the nodes of the reachability tree. The algorithm, called FBS-PN-CSP, was compared to the Generate and Solve and the Least Lost algorithms over five datasets with a total of 145 instances and was shown to be effective and efficient on most datasets. Therefore, for instances of either low or high complexity, the FBS-PN-CSP is competitive. However, the efficiency of the FBS-PN-CSP is sensitive to the stock length. In further research, we envisage improving the PN model structure and modifying the algorithm implementation accordingly to address this issue. Other research directions include tuning the beam width and filter width values according to the number of item types, so that suitable parameter values are used for specific features of the instances.

Author Contributions

Conceptualization, I.B.-V.; methodology, I.B.-V., J.M.-M., N.H.-R. and G.E.A.-F.; software, I.B.-V., J.M.-M. and G.E.A.-F.; validation, J.M.-M., N.H.-R. and G.E.A.-F.; formal analysis, I.B.-V., J.M.-M. and N.H.-R.; data curation, I.B.-V., N.H.-R. and G.E.A.-F.; writing—original draft preparation, N.H.-R. and G.E.A.-F.; writing—review and editing, I.B.-V. and J.M.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Experimental Results

Table A1. Best stock results for hard28 instances.
Instances | lb | FBS-PN-CSP: Stock | β | α | % > lb | Time (s) | G&S: Stock | % > lb | Time (s) | Avg. Stock | Avg. Time
BPP13 | 67 | 68 | 5 | 2 | 1.49 | 62.64 | 68 | 1.49 | 62.85 | 68.00 | 318.00
BPP14 | 61 | 62 | 5 | 3 | 1.64 | 39.00 | 62 | 1.64 | 80.02 | 62.20 | 271.94
BPP40 | 59 | 60 | 5 | 2 | 1.69 | 45.43 | 60 | 1.69 | 55.68 | 60.00 | 386.12
BPP47 | 71 | 72 | 5 | 3 | 1.41 | 55.45 | 72 | 1.41 | 33.30 | 72.10 | 335.88
BPP60 | 63 | 64 | 5 | 2 | 1.59 | 41.82 | 64 | 1.59 | 83.76 | 64.00 | 341.97
BPP119 | 76 | 77 | 5 | 2 | 1.32 | 81.96 | 77 | 1.32 | 70.20 | 77.00 | 394.30
BPP144 | 73 | 74 | 5 | 3 | 1.37 | 86.90 | 74 | 1.37 | 55.30 | 74.00 | 154.26
BPP175 | 83 | 84 | 5 | 2 | 1.20 | 79.39 | 84 | 1.20 | 162.17 | 84.20 | 368.87
BPP178 | 80 | 81 | 5 | 2 | 1.25 | 76.44 | 81 | 1.25 | 56.91 | 81.00 | 317.22
BPP181 | 72 | 73 | 5 | 2 | 1.39 | 53.15 | 73 | 1.39 | 77.70 | 73.00 | 241.26
BPP195 | 64 | 65 | 5 | 2 | 1.56 | 63.60 | 65 | 1.56 | 80.06 | 65.00 | 230.26
BPP359 | 75 | 76 | 5 | 2 | 1.33 | 55.20 | 76 | 1.33 | 42.19 | 76.00 | 277.42
BPP360 | 62 | 63 | 5 | 2 | 1.61 | 42.86 | 63 | 1.61 | 46.44 | 63.00 | 278.08
BPP419 | 80 | 81 | 5 | 2 | 1.25 | 88.21 | 81 | 1.25 | 101.71 | 81.00 | 393.38
BPP485 | 71 | 72 | 5 | 2 | 1.41 | 60.96 | 72 | 1.41 | 50.03 | 72.00 | 293.62
BPP531 | 83 | 84 | 5 | 2 | 1.20 | 66.61 | 84 | 1.20 | 56.32 | 87.80 | 232.66
BPP561 | 72 | 73 | 5 | 3 | 1.39 | 91.02 | 73 | 1.39 | 104.56 | 73.00 | 266.22
BPP640 | 74 | 75 | 5 | 2 | 1.35 | 57.74 | 75 | 1.35 | 50.65 | 75.00 | 174.81
BPP645 | 58 | 59 | 5 | 2 | 1.72 | 42.29 | 59 | 1.72 | 50.29 | 59.00 | 333.46
BPP709 | 67 | 68 | 5 | 2 | 1.49 | 63.62 | 68 | 1.49 | 64.85 | 68.00 | 353.67
BPP716 | 75 | 76 | 5 | 2 | 1.33 | 48.80 | 76 | 1.33 | 121.97 | 76.00 | 437.33
BPP742 | 64 | 65 | 5 | 2 | 1.56 | 43.19 | 65 | 1.56 | 48.39 | 65.00 | 279.50
BPP766 | 62 | 63 | 5 | 2 | 1.61 | 42.38 | 63 | 1.61 | 41.52 | 63.00 | 325.24
BPP781 | 71 | 72 | 5 | 2 | 1.41 | 80.94 | 75 | 5.63 | 77.74 | 79.10 | 323.78
BPP785 | 68 | 69 | 5 | 2 | 1.47 | 63.15 | 69 | 1.47 | 56.98 | 69.00 | 276.04
BPP814 | 81 | 82 | 5 | 2 | 1.23 | 76.95 | 82 | 1.23 | 35.57 | 86.70 | 281.63
BPP832 | 60 | 61 | 5 | 3 | 1.67 | 44.76 | 61 | 1.67 | 64.45 | 61.00 | 349.86
BPP900 | 75 | 76 | 5 | 2 | 1.33 | 84.50 | 78 | 4.00 | 62.22 | 81.70 | 318.96
From the algorithms’ stock results for each instance, the minimum is in boldface.
Table A2. Best stock results for binpack1 instances of the falkenauer_U dataset.
Instance | lb | FBS-PN-CSP: Stock | β | α | % > lb | Time (s) | G&S: Stock | % > lb | Time (s) | Avg. Stock | Avg. Time | LLA: Stock | % > lb | Time (s)
u120_00 | 48 | 48 | 5 | 4 | 0.00 | 3.63 | 48 | 0.00 | 66.14 | 48.00 | 301.95 | 50 | 4.17 | 0.82
u120_01 | 49 | 49 | 5 | 2 | 0.00 | 3.81 | 49 | 0.00 | 69.35 | 49.00 | 387.81 | 49 | 0.00 | 0.31
u120_02 | 46 | 46 | 5 | 2 | 0.00 | 4.13 | 46 | 0.00 | 21.05 | 46.00 | 278.44 | 46 | 0.00 | 0.13
u120_03 | 49 | 49 | 5 | 4 | 0.00 | 5.05 | 49 | 0.00 | 66.65 | 49.00 | 267.99 | 50 | 2.04 | 0.20
u120_04 | 50 | 50 | 5 | 2 | 0.00 | 3.89 | 50 | 0.00 | 42.77 | 50.00 | 309.69 | 50 | 0.00 | 0.64
u120_05 | 48 | 48 | 5 | 4 | 0.00 | 3.99 | 48 | 0.00 | 5.30 | 48.00 | 301.55 | 48 | 0.00 | 0.11
u120_06 | 48 | 48 | 15 | 4 | 0.00 | 15.98 | 48 | 0.00 | 60.51 | 48.00 | 280.38 | 48 | 0.00 | 0.08
u120_07 | 49 | 49 | 5 | 2 | 0.00 | 4.30 | 49 | 0.00 | 78.09 | 49.00 | 338.12 | 50 | 2.04 | 0.31
u120_08 | 50 | 50 | 35 | 19 | 0.00 | 89.01 | 50 | 0.00 | 49.47 | 50.20 | 360.79 | 51 | 2.00 | 0.46
u120_09 | 46 | 46 | 15 | 5 | 0.00 | 16.70 | 46 | 0.00 | 5.41 | 46.00 | 228.50 | 47 | 2.17 | 0.17
u120_10 | 52 | 52 | 5 | 4 | 0.00 | 4.37 | 52 | 0.00 | 101.61 | 52.00 | 346.15 | 52 | 0.00 | 0.61
u120_11 | 49 | 49 | 5 | 3 | 0.00 | 3.72 | 49 | 0.00 | 113.57 | 49.00 | 309.34 | 50 | 2.04 | 0.25
u120_12 | 48 | 48 | 15 | 7 | 0.00 | 17.26 | 48 | 0.00 | 11.03 | 48.00 | 265.06 | 48 | 0.00 | 0.07
u120_13 | 49 | 49 | 5 | 2 | 0.00 | 4.21 | 49 | 0.00 | 72.02 | 49.00 | 286.47 | 49 | 0.00 | 0.08
u120_14 | 50 | 50 | 5 | 2 | 0.00 | 3.90 | 50 | 0.00 | 16.79 | 50.00 | 354.72 | 50 | 0.00 | 0.12
u120_15 | 48 | 48 | 5 | 2 | 0.00 | 4.08 | 48 | 0.00 | 14.23 | 48.00 | 315.78 | 48 | 0.00 | 0.17
u120_16 | 52 | 52 | 5 | 2 | 0.00 | 3.94 | 52 | 0.00 | 18.84 | 52.00 | 212.41 | 52 | 0.00 | 0.40
u120_17 | 52 | 52 | 15 | 4 | 0.00 | 15.45 | 52 | 0.00 | 126.78 | 52.00 | 360.08 | 54 | 3.85 | 0.40
u120_18 | 49 | 49 | 5 | 2 | 0.00 | 5.57 | 49 | 0.00 | 85.73 | 49.00 | 386.07 | 49 | 0.00 | 0.14
u120_19 | 49 | 49 | 25 | 15 | 0.00 | 46.86 | 49 | 0.00 | 248.72 | 49.00 | 421.32 | 50 | 2.04 | 0.22
From the algorithms’ stock results for each instance, the minimum is in boldface.
Table A3. Best stock results for binpack2 instances of the falkenauer_U dataset.
Instance | lb | FBS-PN-CSP: Stock | β | α | % > lb | Time (s) | G&S: Stock | % > lb | Time (s) | Avg. Stock | Avg. Time | LLA: Stock | % > lb | Time (s)
u250_00 | 99 | 99 | 15 | 6 | 0.00 | 33.85 | 99 | 0.00 | 4.26 | 99.00 | 228.18 | 100 | 1.01 | 1.73
u250_01 | 100 | 100 | 15 | 2 | 0.00 | 29.94 | 100 | 0.00 | 11.45 | 100.00 | 325.27 | 100 | 0.00 | 1.54
u250_02 | 102 | 102 | 15 | 6 | 0.00 | 35.00 | 102 | 0.00 | 7.85 | 102.00 | 210.82 | 102 | 0.00 | 1.06
u250_03 | 100 | 100 | 15 | 8 | 0.00 | 44.55 | 100 | 0.00 | 3.51 | 100.00 | 244.05 | 100 | 0.00 | 0.79
u250_04 | 101 | 101 | 15 | 4 | 0.00 | 38.23 | 101 | 0.00 | 60.58 | 101.00 | 308.18 | 102 | 0.99 | 1.54
u250_05 | 101 | 101 | 25 | 12 | 0.00 | 111.61 | 101 | 0.00 | 32.51 | 101.00 | 224.12 | 102 | 0.99 | 0.99
u250_06 | 102 | 102 | 5 | 5 | 0.00 | 10.71 | 102 | 0.00 | 39.23 | 102.00 | 266.80 | 102 | 0.00 | 0.97
u250_07 | 103 | 104 | 5 | 2 | 0.97 | 8.56 | 103 | 0.00 | 229.20 | 103.00 | 436.35 | 106 | 2.91 | 18.98
u250_08 | 105 | 105 | 15 | 15 | 0.00 | 42.41 | 105 | 0.00 | 125.70 | 105.00 | 323.60 | 106 | 0.95 | 1.17
u250_09 | 101 | 101 | 15 | 3 | 0.00 | 36.00 | 101 | 0.00 | 130.38 | 101.00 | 269.30 | 101 | 0.00 | 1.42
u250_10 | 105 | 105 | 5 | 4 | 0.00 | 9.29 | 105 | 0.00 | 12.88 | 105.00 | 186.12 | 106 | 0.95 | 1.84
u250_11 | 101 | 101 | 15 | 4 | 0.00 | 37.57 | 101 | 0.00 | 90.22 | 101.00 | 348.27 | 101 | 0.00 | 0.91
u250_12 | 105 | 106 | 15 | 4 | 0.95 | 36.89 | 106 | 0.95 | 52.65 | 106.00 | 275.21 | 107 | 1.90 | 1.69
u250_13 | 102 | 103 | 5 | 5 | 0.98 | 8.73 | 103 | 0.98 | 2.53 | 103.00 | 303.67 | 105 | 2.94 | 11.61
u250_14 | 100 | 100 | 5 | 5 | 0.00 | 9.34 | 100 | 0.00 | 61.36 | 100.00 | 266.81 | 100 | 0.00 | 1.37
u250_15 | 105 | 105 | 35 | 11 | 0.00 | 208.32 | 105 | 0.00 | 81.33 | 105.00 | 323.16 | 106 | 0.95 | 1.84
u250_16 | 97 | 97 | 15 | 3 | 0.00 | 35.66 | 97 | 0.00 | 15.31 | 97.00 | 303.89 | 97 | 0.00 | 1.56
u250_17 | 100 | 100 | 5 | 2 | 0.00 | 8.62 | 100 | 0.00 | 21.74 | 100.00 | 269.75 | 100 | 0.00 | 2.12
u250_18 | 100 | 100 | 15 | 14 | 0.00 | 39.82 | 100 | 0.00 | 50.33 | 100.00 | 232.61 | 101 | 1.00 | 0.95
u250_19 | 102 | 102 | 15 | 2 | 0.00 | 37.07 | 102 | 0.00 | 33.11 | 102.00 | 269.78 | 102 | 0.00 | 1.14
From the algorithms’ stock results for each instance, the minimum is in boldface.
Table A4. Best stock results for binpack3 instances of the falkenauer_U dataset.
Instance | lb | FBS-PN-CSP: Stock | β | α | % > lb | Time (s) | G&S: Stock | % > lb | Time (s) | Avg. Stock | Avg. Time | LLA: Stock | % > lb | Time (s)
u500_00 | 198 | 198 | 25 | 18 | 0.00 | 208.24 | 198 | 0.00 | 9.56 | 198.00 | 213.83 | 200 | 1.01 | 38.97
u500_01 | 201 | 201 | 25 | 25 | 0.00 | 204.73 | 201 | 0.00 | 3.34 | 201.00 | 218.07 | 201 | 0.00 | 11.38
u500_02 | 202 | 202 | 15 | 15 | 0.00 | 86.07 | 202 | 0.00 | 4.22 | 202.00 | 227.23 | 203 | 0.50 | 7.78
u500_03 | 204 | 204 | 25 | 18 | 0.00 | 225.18 | 204 | 0.00 | 17.38 | 204.00 | 311.51 | 206 | 0.98 | 17.62
u500_04 | 206 | 206 | 15 | 5 | 0.00 | 80.46 | 206 | 0.00 | 8.85 | 206.00 | 135.02 | 206 | 0.00 | 8.72
u500_05 | 206 | 206 | 5 | 5 | 0.00 | 14.90 | 206 | 0.00 | 5.98 | 206.00 | 84.34 | 207 | 0.49 | 92.10
u500_06 | 207 | 208 | 15 | 15 | 0.48 | 80.16 | 207 | 0.00 | 11.64 | 207.00 | 267.07 | 209 | 0.97 | 145.43
u500_07 | 204 | 205 | 15 | 14 | 0.49 | 75.37 | 204 | 0.00 | 48.52 | 204.00 | 274.48 | 205 | 0.49 | 9.92
u500_08 | 196 | 197 | 5 | 5 | 0.51 | 14.51 | 196 | 0.00 | 13.30 | 196.00 | 274.11 | 197 | 0.51 | 10.39
u500_09 | 202 | 202 | 15 | 15 | 0.00 | 84.90 | 202 | 0.00 | 10.33 | 202.00 | 235.44 | 203 | 0.50 | 18.24
u500_10 | 200 | 200 | 15 | 5 | 0.00 | 75.91 | 200 | 0.00 | 7.73 | 200.00 | 155.28 | 200 | 0.00 | 9.01
u500_11 | 200 | 200 | 15 | 15 | 0.00 | 79.07 | 200 | 0.00 | 32.06 | 200.00 | 338.97 | 200 | 0.00 | 11.20
u500_12 | 199 | 199 | 15 | 15 | 0.00 | 81.50 | 199 | 0.00 | 13.29 | 199.00 | 270.59 | 199 | 0.00 | 8.36
u500_13 | 196 | 196 | 25 | 24 | 0.00 | 183.00 | 196 | 0.00 | 40.66 | 196.00 | 224.57 | 196 | 0.00 | 13.95
u500_14 | 204 | 204 | 15 | 14 | 0.00 | 79.55 | 204 | 0.00 | 84.34 | 204.00 | 327.05 | 205 | 0.49 | 256.22
u500_15 | 201 | 201 | 15 | 15 | 0.00 | 92.35 | 201 | 0.00 | 5.28 | 201.00 | 153.72 | 201 | 0.00 | 15.26
u500_16 | 202 | 202 | 15 | 4 | 0.00 | 76.45 | 202 | 0.00 | 14.52 | 202.00 | 259.93 | 202 | 0.00 | 11.55
u500_17 | 198 | 198 | 15 | 14 | 0.00 | 77.82 | 198 | 0.00 | 15.23 | 198.00 | 303.33 | 198 | 0.00 | 14.39
u500_18 | 202 | 202 | 15 | 14 | 0.00 | 83.07 | 202 | 0.00 | 4.49 | 202.00 | 223.85 | 202 | 0.00 | 10.47
u500_19 | 196 | 196 | 25 | 24 | 0.00 | 175.40 | 196 | 0.00 | 40.97 | 196.00 | 283.72 | 196 | 0.00 | 17.06
From the algorithms’ stock results for each instance, the minimum is in boldface.
Table A5. Best stock results for binpack4 instances of the falkenauer_U dataset.
Instance | lb | FBS-PN-CSP: Stock | β | α | % > lb | Time (s) | G&S: Stock | % > lb | Time (s) | Avg. Stock | Avg. Time | LLA: Stock | % > lb | Time (s)
u1000_00 | 399 | 399 | 25 | 25 | 0.00 | 384.32 | 399 | 0.00 | 33.14 | 399.00 | 273.93 | 399 | 0.00 | 233.65
u1000_01 | 406 | 406 | 25 | 25 | 0.00 | 396.57 | 406 | 0.00 | 38.95 | 406.00 | 327.19 | 406 | 0.00 | 183.11
u1000_02 | 411 | 411 | 15 | 14 | 0.00 | 157.75 | 411 | 0.00 | 11.72 | 411.00 | 235.33 | 413 | 0.49 | 222.21
u1000_03 | 411 | 412 | 15 | 14 | 0.24 | 149.09 | 411 | 0.00 | 33.87 | 411.00 | 376.97 | 414 | 0.73 | 1180.83
u1000_04 | 397 | 398 | 15 | 15 | 0.25 | 166.43 | 397 | 0.00 | 14.33 | 397.00 | 217.89 | 398 | 0.25 | 131.97
u1000_05 | 399 | 399 | 25 | 25 | 0.00 | 402.64 | 399 | 0.00 | 80.63 | 399.00 | 306.18 | 400 | 0.25 | 239.32
u1000_06 | 395 | 395 | 25 | 25 | 0.00 | 411.92 | 395 | 0.00 | 52.29 | 395.00 | 261.37 | 395 | 0.00 | 324.66
u1000_07 | 404 | 404 | 25 | 16 | 0.00 | 448.63 | 404 | 0.00 | 17.09 | 404.00 | 285.22 | 405 | 0.25 | 157.17
u1000_08 | 399 | 399 | 25 | 25 | 0.00 | 361.70 | 399 | 0.00 | 52.52 | 399.00 | 196.93 | 399 | 0.00 | 181.93
u1000_09 | 397 | 398 | 25 | 25 | 0.25 | 422.89 | 397 | 0.00 | 110.50 | 397.00 | 275.50 | 398 | 0.25 | 227.06
u1000_10 | 400 | 400 | 25 | 25 | 0.00 | 415.00 | 400 | 0.00 | 34.23 | 400.00 | 319.72 | 401 | 0.25 | 244.35
u1000_11 | 401 | 401 | 25 | 25 | 0.00 | 376.47 | 401 | 0.00 | 27.73 | 401.00 | 262.44 | 401 | 0.00 | 152.76
u1000_12 | 393 | 393 | 25 | 5 | 0.00 | 383.05 | 393 | 0.00 | 16.95 | 393.00 | 268.04 | 394 | 0.25 | 1167.22
u1000_13 | 396 | 396 | 25 | 25 | 0.00 | 389.70 | 396 | 0.00 | 16.50 | 396.00 | 248.92 | 396 | 0.00 | 110.01
u1000_14 | 394 | 395 | 15 | 15 | 0.25 | 176.44 | 394 | 0.00 | 30.22 | 394.00 | 175.43 | 395 | 0.25 | 199.63
u1000_15 | 402 | 403 | 25 | 25 | 0.25 | 366.57 | 402 | 0.00 | 24.67 | 402.00 | 202.19 | 403 | 0.25 | 137.30
u1000_16 | 404 | 404 | 25 | 25 | 0.00 | 361.73 | 404 | 0.00 | 8.34 | 404.00 | 177.77 | 405 | 0.25 | 355.25
u1000_17 | 404 | 405 | 15 | 14 | 0.25 | 152.53 | 404 | 0.00 | 39.31 | 404.00 | 241.15 | 405 | 0.25 | 240.64
u1000_18 | 399 | 399 | 25 | 25 | 0.00 | 365.81 | 399 | 0.00 | 23.69 | 399.00 | 221.26 | 399 | 0.00 | 227.91
u1000_19 | 400 | 400 | 25 | 18 | 0.00 | 412.94 | 400 | 0.00 | 24.85 | 400.00 | 232.95 | 401 | 0.25 | 919.10
From the algorithms’ stock results for each instance, the minimum is in boldface.

References

  1. Faina, L. A survey on the cutting and packing problems. Boll. dell’Unione Mat. Ital. 2020, 13, 567–572. [Google Scholar] [CrossRef]
  2. Wäscher, G.; Hausner, H.; Schumann, H. An improved typology of cutting and packing problems. Eur. J. Oper. Res. 2007, 183, 1109–1130. [Google Scholar] [CrossRef]
  3. Lee, D.; Son, S.; Kim, D.; Kim, S. Special-Length-Priority Algorithm to Minimize Reinforcing Bar-Cutting Waste for Sustainable Construction. Sustainability 2020, 12, 5950. [Google Scholar] [CrossRef]
  4. Vishwakarma, R.; Powar, P.L. An efficient mathematical model for solving one-dimensional cutting stock problem using sustainable trim. Adv. Ind. Manuf. Eng. 2021, 3, 100046. [Google Scholar] [CrossRef]
  5. Valério de Carvalho, J. LP models for bin packing and cutting stock problems. Eur. J. Oper. Res. 2002, 141, 253–273. [Google Scholar] [CrossRef]
  6. Machado, A.A.; Zayatz, J.C.; da Silva, M.M.; Melluzzi Neto, G.; Leal, G.C.L.; Palma Lima, R.H. Aluminum bar cutting optimization for door and window manufacturing. DYNA 2020, 87, 155–162. [Google Scholar] [CrossRef]
  7. Liang, K.H.; Yao, X.; Newton, C.; Hoffman, D. A new evolutionary approach to cutting stock problems with and without contiguity. Comput. Oper. Res. 2002, 29, 1641–1659. [Google Scholar] [CrossRef]
  8. Lee, D.Y.; DiCesare, F. Scheduling flexible manufacturing systems using Petri nets and heuristic search. IEEE Trans. Robot. Autom. 1994, 10, 123–132. [Google Scholar] [CrossRef]
  9. Huang, B.; Jiang, R.; Zhang, G. Heuristic Search for Scheduling Flexible Manufacturing Systems Using Multiple Heuristic Functions. In Proceedings of the Modern Advances in Applied Intelligence, Kaohsiung, Taiwan, 3–6 June 2014; Ali, M., Pan, J.S., Chen, S.M., Horng, M.F., Eds.; Springer: Cham, Switzerland, 2014; pp. 178–187. [Google Scholar]
  10. Xu, G.; Chen, Y. Petri-Net-Based Scheduling of Flexible Manufacturing Systems Using an Estimate Function. Symmetry 2022, 14, 1052. [Google Scholar] [CrossRef]
  11. Mejía, G.; Niño, K. A new Hybrid Filtered Beam Search algorithm for deadlock-free scheduling of flexible manufacturing systems using Petri Nets. Comput. Ind. Eng. 2017, 108, 165–176. [Google Scholar] [CrossRef]
  12. Birgin, E.; Ferreira, J.; Ronconi, D. A filtered beam search method for the m-machine permutation flowshop scheduling problem minimizing the earliness and tardiness penalties and the waiting time of the jobs. Comput. Oper. Res. 2020, 114, 104824. [Google Scholar] [CrossRef]
  13. Libralesso, L.; Focke, P.A.; Secardin, A.; Jost, V. Iterative beam search algorithms for the permutation flowshop. Eur. J. Oper. Res. 2022, 301, 217–234. [Google Scholar] [CrossRef]
  14. Kantorovich, L.V. Mathematical methods of organizing and planning production. Manag. Sci. 1960, 6, 366–422. [Google Scholar] [CrossRef]
  15. Gilmore, P.C.; Gomory, R.E. A Linear Programming Approach to the Cutting-Stock Problem. Oper. Res. 1961, 9, 849–859. [Google Scholar] [CrossRef]
  16. Gilmore, P.C.; Gomory, R.E. A linear programming approach to the cutting stock problem—Part II. Oper. Res. 1963, 11, 863–888. [Google Scholar] [CrossRef]
  17. Vance, P.H. Branch-and-Price Algorithms for the One-Dimensional Cutting Stock Problem. Comput. Optim. Appl. 1998, 9, 211–228. [Google Scholar] [CrossRef]
  18. Belov, G.; Scheithauer, G. The Number of Setups (Different Patterns) in One-Dimensional Stock Cutting; Technical Report MATH-NM-15-2003; Institute for Numerical Mathematics, Dresden University: Dresden, Germany, 2003. [Google Scholar]
  19. Belov, G.; Scheithauer, G. A branch-and-cut-and-price algorithm for one-dimensional stock cutting and two-dimensional two-stage cutting. Eur. J. Oper. Res. 2006, 171, 85–106. [Google Scholar] [CrossRef]
  20. Alves, C.; Valério de Carvalho, J.M. A branch-and-price-and-cut algorithm for the pattern minimization problem. RAIRO-Oper. Res. 2008, 42, 435–453. [Google Scholar] [CrossRef]
  21. Haessler, R.W. One-dimensional cutting stock problems and solution procedures. Mathl. Comput. Model. 1992, 16, 1–8. [Google Scholar] [CrossRef]
  22. Gradišar, M.; Kljajić, M.; Resinovič, G.; Jesenko, J. A sequential heuristic procedure for one-dimensional cutting. Eur. J. Oper. Res. 1999, 114, 557–568. [Google Scholar] [CrossRef]
  23. Foerster, H.; Wascher, G. Pattern reduction in one-dimensional cutting stock problems. Int. J. Prod. Res. 2000, 38, 1657–1676. [Google Scholar] [CrossRef]
  24. Yanasse, H.H.; Limeira, M.S. A hybrid heuristic to reduce the number of different patterns in cutting stock problems. Comput. Oper. Res. 2006, 33, 2744–2756. [Google Scholar] [CrossRef]
  25. Renildo, G.; Cerqueira, L.; Yanasse, H. A pattern reduction procedure in a one-dimensional cutting stock problem by grouping items according to their demands. J. Comput. Interdiscip. Sci. 2009, 1, 159–164. [Google Scholar] [CrossRef]
  26. Valério de Carvalho, J. Exact solution of bin-packing problems using column generation and branch-and-bound. Ann. Oper. Res. 1999, 86, 629–659. [Google Scholar] [CrossRef]
  27. da Silva, H.V.; Lemos, F.K.; Cherri, A.C.; de Araujo, S.A. Arc-flow formulations for the one-dimensional cutting stock problem with multiple manufacturing modes. RAIRO-Oper. Res. 2023, 57, 183–200. [Google Scholar] [CrossRef]
  28. Dyckhoff, H. A New Linear Programming Approach to the Cutting Stock Problem. Oper. Res. 1981, 29, 1092–1104. [Google Scholar] [CrossRef]
  29. Martinovic, J.; Scheithauer, G.; Valério de Carvalho, J. A comparative study of the arcflow model and the one-cut model for one-dimensional cutting stock problems. Eur. J. Oper. Res. 2018, 266, 458–471. [Google Scholar] [CrossRef]
  30. Berberler, M.; Nuriyev, U. A New Heuristic Algorithm for the One-Dimensional Cutting Stock Problem. Appl. Comput. Math. 2010, 9, 19–30. [Google Scholar]
  31. de Lima, V.L.; Alves, C.; Clautiaux, F.; Iori, M.; Valério de Carvalho, J.M. Arc flow formulations based on dynamic programming: Theoretical foundations and applications. Eur. J. Oper. Res. 2022, 296, 3–21. [Google Scholar] [CrossRef]
  32. Delorme, M.; Iori, M.; Martello, S. Bin packing and cutting stock problems: Mathematical models and exact algorithms. Eur. J. Oper. Res. 2016, 255, 1–20. [Google Scholar] [CrossRef]
  33. Peng, J.; Chu, Z.S. A Hybrid Multi-chromosome Genetic Algorithm for the Cutting Stock Problem. In Proceedings of the 2010 3rd International Conference on Information Management, Innovation Management and Industrial Engineering, Kunming, China, 26–28 November 2010; Volume 1, pp. 508–511. [Google Scholar] [CrossRef]
  34. Araujo, S.A.d.; Poldi, K.C.; Smith, J. A Genetic Algorithm for the One-dimensional Cutting Stock Problem with Setups. Pesqui. Oper. 2014, 34, 165–187. [Google Scholar] [CrossRef]
  35. Parmar, K.B.; Prajapati, H.B.; Dabhi, V.K. Cutting stock problem: A solution based on novel pattern based chromosome representation using modified GA. In Proceedings of the 2015 International Conference on Circuits, Power and Computing Technologies [ICCPCT-2015], Nagercoil, India, 19–20 March 2015; pp. 1–7. [Google Scholar] [CrossRef]
  36. Chen, Y.H.; Huang, H.C.; Cai, H.Y.; Chen, P.F. A Genetic Algorithm Approach for the Multiple Length Cutting Stock Problem. In Proceedings of the 2019 IEEE 1st Global Conference on Life Sciences and Technologies (LifeTech), Osaka, Japan, 12–14 March 2019; pp. 162–165. [Google Scholar] [CrossRef]
  37. Liang, K.H.; Yao, X.; Newton, C.; Hoffman, D. Solving cutting stock problems by evolutionary programming. In Proceedings of the 7th Annual Conference on Evolutionary Programming, EP 1998, San Diego, CA, USA, 25–27 March 1998; Volume 1447, pp. 755–764. [Google Scholar] [CrossRef]
  38. Chiong, R.; Beng, O.K. A Comparison between Genetic Algorithms and Evolutionary Programming based on Cutting Stock Problem. Eng. Lett. 2007, 14, 72–77. [Google Scholar]
  39. Shen, X.; Li, Y.; Yang, J.; Yu, L. A Heuristic Particle Swarm Optimization for Cutting Stock Problem Based on Cutting Pattern. In Proceedings of the Computational Science—ICCS 2007, Beijing China, 27–30 May 2007; Shi, Y., van Albada, G.D., Dongarra, J., Sloot, P.M.A., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 1175–1178. [Google Scholar]
  40. Li, Y.; Zheng, B.; Dai, Z. General Particle Swarm Optimization Based on Simulated Annealing for Multi-specification One-Dimensional Cutting Stock Problem. In Proceedings of the Computational Intelligence and Security, Guangzhou, China, 3–6 November 2006; Wang, Y., Cheung, Y.M., Liu, H., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 67–76. [Google Scholar]
  41. Ben Lagha, G.; Dahmani, N.; Krichen, S. Particle swarm optimization approach for resolving the cutting stock problem. In Proceedings of the 2014 International Conference on Advanced Logistics and Transport (ICALT), Tunis, Tunisia, 1–3 May 2014; pp. 259–263. [Google Scholar] [CrossRef]
  42. Asvany, T.; Amudhavel, J.; Sujatha, P. One-dimensional cutting stock problem with single and multiple stock lengths using DPSO. Adv. Appl. Math. Sci. 2017, 17, 147–163. [Google Scholar]
  43. Levine, J.; Ducatelle, F. Ant colony optimization and local search for bin packing and cutting stock problems. J. Oper. Res. Soc. 2004, 55, 705–716. [Google Scholar] [CrossRef]
  44. Peng, J.; Chu, Z.S. A hybrid ant colony algorithm for the Cutting Stock Problem. In Proceedings of the 2010 International Conference on Future Information Technology and Management Engineering, Changzhou, China, 9–10 October 2010; Volume 2, pp. 32–35. [Google Scholar] [CrossRef]
  45. Evtimov, G.; Fidanova, S. Ant Colony Optimization Algorithm for 1D Cutting Stock Problem. In Proceedings of the Advanced Computing in Industrial Mathematics: 11th Annual Meeting of the Bulgarian Section of SIAM, Sofia, Bulgaria, 20–22 December 2016; Georgiev, K., Todorov, M., Georgiev, I., Eds.; Revised Selected Papers. Springer International Publishing: Cham, Switzerland, 2018; pp. 25–31. [Google Scholar] [CrossRef]
  46. Jahromi, M.H.; Tavakkoli-Moghaddam, R.; Makui, A.; Shamsi, A. Solving an one-dimensional cutting stock problem by simulated annealing and tabu search. J. Ind. Eng. Int. 2012, 8, 1–8. [Google Scholar] [CrossRef]
  47. Umetani, S.; Yagiura, M.; Ibaraki, T. One-dimensional cutting stock problem to minimize the number of different patterns. Eur. J. Oper. Res. 2003, 146, 388–402. [Google Scholar] [CrossRef]
  48. Umetani, S.; Yagiura, M.; Ibaraki, T. One-Dimensional Cutting Stock Problem with a Given Number of Setups: A Hybrid Approach of Metaheuristics and Linear Programming. J. Math. Model. Algorithms 2006, 5, 43–64. [Google Scholar] [CrossRef]
  49. Alfares, H.K.; Alsawafy, O.G. A Least-Loss Algorithm for a Bi-Objective One-Dimensional Cutting-Stock Problem. Int. J. Appl. Ind. Eng. (IJAIE) 2019, 6, 1–19. [Google Scholar] [CrossRef]
  50. Sá Santos, J.V.; Nepomuceno, N. Computational Performance Evaluation of Column Generation and Generate-and-Solve Techniques for the One-Dimensional Cutting Stock Problem. Algorithms 2022, 15, 394. [Google Scholar] [CrossRef]
  51. Suliman, S.M. Pattern generating procedure for the cutting stock problem. Int. J. Prod. Econ. 2001, 74, 293–301. [Google Scholar] [CrossRef]
  52. Fang, J.; Rao, Y.; Luo, Q.; Xu, J. Solving One-Dimensional Cutting Stock Problems with the Deep Reinforcement Learning. Mathematics 2023, 11, 1028. [Google Scholar] [CrossRef]
  53. Han, L.; Xing, K.; Chen, X.; Xiong, F. A Petri net-based particle swarm optimization approach for scheduling deadlock-prone flexible manufacturing systems. J. Intell. Manuf. 2018, 29, 1083–1096. [Google Scholar] [CrossRef]
  54. Li, S.; An, A.; Wu, H.; Hou, C.; Cai, Y.; Han, X.; Wang, Y. Policy to cope with deadlocks and livelocks for flexible manufacturing systems using the max’-controlled new smart siphons. IET Control Theory Appl. 2014, 8, 1607–1616. [Google Scholar] [CrossRef]
  55. Sabuncuoglu, I.; Bayiz, M. Job shop scheduling with beam search. Eur. J. Oper. Res. 1999, 118, 390–412. [Google Scholar] [CrossRef]
  56. Shih, H.M.; Cai, Y.; Sekiguchi, T. A Method of Filtered Beam Search Based Delivery Scheduling. IEEJ Trans. Ind. Appl. 1993, 113, 1061–1068. [Google Scholar] [CrossRef]
  57. Wu, K.C.; Ting, C.J.; Lan, W.C. A Beam Search Heuristic for the Traveling Salesman Problem with Time Windows. J. East. Asia Soc. Transp. Stud. 2011, 9, 702–712. [Google Scholar] [CrossRef]
  58. Ibtissem, B.N.; Rym, M. A beam search for the equality generalized symmetric traveling salesman problem. RAIRO-Oper. Res. 2021, 55, 3021–3039. [Google Scholar] [CrossRef]
  59. Bennell, J.A.; Song, X. A beam search implementation for the irregular shape packing problem. J. Heuristics 2010, 16, 167–188. [Google Scholar] [CrossRef]
  60. Bennell, J.; Cabo, M.; Martínez-Sykora, A. A beam search approach to solve the convex irregular bin packing problem with guillotine cuts. Eur. J. Oper. Res. 2018, 270, 89–102. [Google Scholar] [CrossRef]
  61. Parreño, F.; Alonso, M.; Alvarez-Valdes, R. Solving a large cutting problem in the glass manufacturing industry. Eur. J. Oper. Res. 2020, 287, 378–388. [Google Scholar] [CrossRef]
  62. Smith, N.R.; Rao, Y.; Wang, P.; Luo, Q. Hybridizing Beam Search with Tabu Search for the Irregular Packing Problem. Math. Probl. Eng. 2021, 2021, 5054916. [Google Scholar] [CrossRef]
  63. Libralesso, L.; Fontan, F. An anytime tree search algorithm for the 2018 ROADEF/EURO challenge glass cutting problem. Eur. J. Oper. Res. 2021, 291, 883–893. [Google Scholar] [CrossRef]
  64. Ow, P.S.; Morton, T.E. Filtered beam search in scheduling. Int. J. Prod. Res. 1988, 26, 35–62. [Google Scholar] [CrossRef]
  65. Liu, G.; Li, Z.; Al-Ahmari, A.M. Liveness Analysis of Petri Nets Using Siphons and Mathematical Programming. IFAC Proc. Vol. 2014, 47, 383–387. [Google Scholar] [CrossRef]
  66. Dingle, N.J.; Knottenbelt, W.J.; Suto, T. PIPE2: A Tool for the Performance Evaluation of Generalised Stochastic Petri Nets. ACM Sigmetrics Perform. Eval. Rev. 2009, 36, 34–39. [Google Scholar] [CrossRef]
  67. Hinterding, R.; Khan, L. Genetic algorithms for cutting stock problems: With and without contiguity. In Proceedings of the Progress in Evolutionary Computation, Granada, Spain, 4–6 June 1995; Yao, X., Ed.; Springer: Berlin/Heidelberg, Germany, 1995; pp. 166–186. [Google Scholar] [CrossRef]
  68. OR Library by J. E. Beasley. Available online: http://people.brunel.ac.uk/~mastjjb/jeb/info.html (accessed on 18 October 2023).
Figure 1. PN example.
Figure 2. Reachability tree for the PN from Figure 1.
Figure 3. Filtered beam search scheme.
Figure 4. General PN model for a given instance of the 1D-CSP.
Figure 5. General form of the incidence matrices: (a) input incidence matrix, I+; (b) output incidence matrix, I−; (c) incidence matrix, I.
Figure 6. Reachability tree construction with FBS implementation.
Figure 7. Result of a structural analysis for a PN-CSP model of an instance of ten different items.
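As a companion to the incidence matrices of Figures 4 and 5, the sketch below illustrates, for a deliberately tiny net that is not the paper's PN-CSP model, how the matrices determine which transitions are enabled at a marking and how firing one of them produces a successor marking, i.e., a child node in the reachability tree. The Pre/Post naming is a generic convention assumed here; the paper's I+ and I− labels may follow a different one.

```python
import numpy as np

# Tiny illustrative net with 2 places and 2 transitions (not the paper's PN-CSP model).
Pre = np.array([[1, 0],       # Pre[p, t]: tokens place p must provide for transition t to fire
                [0, 1]])
Post = np.array([[0, 1],      # Post[p, t]: tokens transition t deposits into place p
                 [1, 0]])
I = Post - Pre                # incidence matrix used in the state equation
M0 = np.array([1, 0])         # initial marking

def enabled_transitions(M):
    """Indices of transitions whose input places hold enough tokens at marking M."""
    return [t for t in range(Pre.shape[1]) if np.all(M >= Pre[:, t])]

def fire(M, t):
    """State equation for a single firing: M' = M + I * u_t."""
    u = np.zeros(I.shape[1], dtype=int)
    u[t] = 1
    return M + I @ u

for t in enabled_transitions(M0):
    # Each successor marking corresponds to a child of M0 in the reachability tree.
    print(f"firing t{t}: {M0} -> {fire(M0, t)}")
```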
Table 1. Data sets details used in this study.
Dataset | Number of Instances | Min-Max Number of Item Types | Number of Different Item Types (Std. Dev.) | Number of Orders 1 | Stock Length 2
hinterding | 10 | 8–36 | 4 (10.55) | 20, 50, 60, 126, 200, 400, 600 | 14, 15, 25, 25, 4300, 86, 120, 120, 120, 120
hard28 | 28 | 136–189 | 21 (14.43) | 160, 180, 200 | 1000
hard10 | 10 | 197–200 | 4 (0.89) | 200 | 100,000
wae_gau1 | 17 | 33–64 | 14 (9.39) | 114, 96, 57, 111, 164, 141, 144, 142, 239, 91, 60, 163, 228, 86, 92, 153, 119 | 10,000
falkenauer_U:
binpack1 | 20 | 58–68 | 11 (2.71) | 120 | 150
binpack2 | 20 | 71–81 | 8 (2.14) | 250 | 150
binpack3 | 20 | 80–81 | 2 (0.40) | 500 | 150
binpack4 | 20 | 81 | 1 (0) | 1000 | 150
1 Among all instances. 2 For each instance.
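In the result tables (Tables 2–4 and A1–A5), lb denotes a lower bound on the number of stocks and "% > lb" the percentage by which a solution exceeds that bound. The helper functions below are a minimal sketch of the working assumptions used when reading these columns, namely that lb is the usual material bound obtained by dividing the total ordered length by the stock length and rounding up, and that the gap is 100(stock − lb)/lb; the item data in the example are made up.

```python
import math

def material_lower_bound(lengths, demands, stock_length):
    """Assumed bound: ceil(total ordered length / stock length)."""
    total = sum(l * d for l, d in zip(lengths, demands))
    return math.ceil(total / stock_length)

def gap_above_lb(stocks_used, lb):
    """Assumed '% > lb' column: relative excess over the lower bound, in percent."""
    return 100.0 * (stocks_used - lb) / lb

# Made-up example: items of length 45, 36, and 31 cut from stocks of length 150.
lb = material_lower_bound([45, 36, 31], [20, 30, 40], 150)
print(lb, round(gap_above_lb(lb + 1, lb), 2))   # bound, gap of a solution one stock above it
```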
Table 2. Best stock results for hinterding instances.
Instance | lb | FBS-PN-CSP: Stock | β | α | % > lb | Time (s) | G&S: Stock | % > lb | Time (s) | Avg. Stock | Avg. Time | LLA: Stock | % > lb | Time (s)
1a | 9 | 9 | 5 | 5 | 0.00 | 0.08 | 9 | 0.00 | 0.01 | 9.00 | 9.18 | 9 | 0.00 | 0.00
2a | 23 | 23 | 5 | 2 | 0.00 | 0.19 | 23 | 0.00 | 34.15 | 23.00 | 304.45 | 23 | 0.00 | 0.00
3a | 15 | 15 | 5 | 5 | 0.00 | 0.23 | 15 | 0.00 | 2.09 | 15.00 | 269.23 | 15 | 0.00 | 0.05
4a | 19 | 19 | 15 | 2 | 0.00 | 1.24 | 19 | 0.00 | 27.16 | 19.00 | 266.60 | 19 | 0.00 | 0.03
5a | 51 | 53 | 5 | 5 | 3.92 | 25.71 | 53 | 3.92 | 35.83 | 53.00 | 276.56 | 53 | 3.92 | 5.06
6a | 78 | 79 | 15 | 4 | 1.28 | 9.87 | 79 | 1.28 | 21.91 | 79.00 | 243.05 | 80 | 2.56 | 1.13
7a | 68 | 68 | 15 | 15 | 0.00 | 9.89 | 68 | 0.00 | 110.58 | 68.00 | 373.81 | 68 | 0.00 | 0.53
8a | 143 | 144 | 15 | 3 | 0.70 | 23.18 | 143 | 0.00 | 11.25 | 143.00 | 273.35 | 145 | 1.40 | 29.14
9a | 149 | 150 | 15 | 15 | 0.67 | 31.43 | 149 | 0.00 | 35.82 | 149.00 | 339.76 | 152 | 2.01 | 10.86
10a | 215 | 216 | 25 | 22 | 0.47 | 117.90 | 215 | 0.00 | 12.10 | 215.00 | 294.33 | 216 | 0.47 | 13.39
From the algorithms’ stock results for each instance, the minimum is in boldface.
Table 3. Best stock results for wae_gau1 instances.
Instances | lb | FBS-PN-CSP: Stock | β | α | % > lb | Time (s) | G&S: Stock | % > lb | Time (s) | Avg. Stock | Avg. Time
TEST0005 | 28 | 29 | 5 | 2 | 3.57 | 56.71 | 29 | 3.57 | 139.58 | 29.00 | 352.83
TEST0014 | 23 | 24 | 5 | 2 | 4.35 | 39.37 | 24 | 4.35 | 64.83 | 24.00 | 243.59
TEST0022 | 14 | 15 | 5 | 2 | 7.14 | 19.90 | 15 | 7.14 | 48.72 | 15.00 | 322.90
TEST0030 | 27 | 28 | 5 | 2 | 3.70 | 52.06 | 28 | 3.70 | 61.94 | 28.00 | 312.79
TEST0044 | 14 | 14 | 25 | 4 | 0.00 | 367.03 | 15 | 7.14 | 380.16 | 42.00 | 38.02
TEST0049 | 11 | 11 | 25 | 2 | 0.00 | 276.48 | 12 | 9.09 | 241.32 | 15.50 | 304.08
TEST0054 | 14 | 14 | 35 | 3 | 0.00 | 484.86 | 14 | 0.00 | 356.10 | 47.80 | 146.63
TEST0055 | 15 | 16 | 5 | 2 | 6.67 | 62.91 | 16 | 6.67 | 427.36 | 34.90 | 154.52
TEST0055_2 | 20 | 20 | 15 | 14 | 0.00 | 257.22 | 96 | 380.00 | 0.00 | 96.00 | 0.01
TEST0058 | 20 | 21 | 5 | 2 | 5.00 | 34.71 | 20 | 0.00 | 350.58 | 20.60 | 351.83
TEST0065 | 15 | 16 | 5 | 2 | 6.67 | 22.52 | 16 | 6.67 | 18.94 | 16.00 | 309.42
TEST0068 | 12 | 13 | 5 | 2 | 8.33 | 61.89 | 13 | 8.33 | 240.98 | 49.00 | 143.78
TEST0075 | 13 | 13 | 25 | 6 | 0.00 | 418.74 | 14 | 7.69 | 294.42 | 47.30 | 320.35
TEST0082 | 24 | 25 | 5 | 2 | 4.17 | 36.54 | 25 | 4.17 | 136.45 | 25.00 | 426.39
TEST0084 | 16 | 16 | 15 | 15 | 0.00 | 116.07 | 16 | 0.00 | 76.77 | 17.80 | 223.36
TEST0095 | 16 | 17 | 5 | 2 | 6.25 | 77.83 | 17 | 6.25 | 88.63 | 17.00 | 354.11
TEST0097 | 12 | 12 | 15 | 15 | 0.00 | 110.32 | 12 | 0.00 | 397.77 | 25.20 | 197.42
From the algorithms’ stock results for each instance, the minimum is in boldface.
Table 4. Best stock results for hard10 instances.
Instances | lb | FBS-PN-CSP: Stock | β | α | % > lb | Time (s) | G&S: Stock | % > lb | Time (s) | Avg. Stock | Avg. Time
HARD0 | 55 | 59 | 5 | 2 | 7.27 | 5673.86 | 56 | 1.82 | 79.64 | 64.40 | 129.34
HARD1 | 56 | 60 | 5 | 2 | 7.14 | 5669.64 | 57 | 1.79 | 327.97 | 59.20 | 394.00
HARD2 | 56 | 60 | 5 | 2 | 7.14 | 5707.35 | 57 | 1.79 | 132.01 | 57.00 | 276.31
HARD3 | 55 | 59 | 5 | 2 | 7.27 | 5663.16 | 56 | 1.82 | 96.01 | 57.90 | 296.15
HARD4 | 56 | 60 | 5 | 2 | 7.14 | 5628.17 | 57 | 1.79 | 139.28 | 57.00 | 331.95
HARD5 | 55 | 59 | 5 | 2 | 7.27 | 5454.04 | 56 | 1.82 | 241.52 | 60.20 | 325.61
HARD6 | 56 | 60 | 5 | 2 | 7.14 | 5462.07 | 57 | 1.79 | 123.19 | 57.00 | 370.79
HARD7 | 54 | 59 | 5 | 2 | 9.26 | 5645.90 | 56 | 3.70 | 273.06 | 61.10 | 273.82
HARD8 | 56 | 59 | 5 | 2 | 5.36 | 5447.00 | 57 | 1.79 | 112.20 | 59.30 | 314.82
HARD9 | 56 | 60 | 5 | 2 | 7.14 | 5529.23 | 56 | 0.00 | 265.34 | 64.40 | 266.41
From the algorithms’ stock results for each instance, the minimum is in boldface.
