Article

Entropy-Based Algorithm for Supply-Chain Complexity Assessment

Boris Kriheli and Eugene Levner
1 School of Economics, Ashkelon Academic College, Ashkelon 84101, Israel
2 Department of Computer Science, Holon Institute of Technology, Holon 58102, Israel
* Author to whom correspondence should be addressed.
Algorithms 2018, 11(4), 35; https://doi.org/10.3390/a11040035
Submission received: 28 February 2018 / Revised: 20 March 2018 / Accepted: 21 March 2018 / Published: 24 March 2018
(This article belongs to the Special Issue Algorithms for Scheduling Problems)

Abstract: This paper considers a graph model of hierarchical supply chains. The goal is to measure the complexity of links between different components of the chain, for instance, between the principal equipment manufacturer (the root node) and its suppliers (the preceding supply nodes). Information entropy is used as a measure of knowledge about the complexity of shortages and pitfalls in the relationships between supply chain (SC) components under uncertainty. The concept of conditional (relative) entropy is introduced as a generalization of the conventional (non-relative) entropy. An entropy-based algorithm is developed that provides an efficient assessment of supply chain complexity as a function of the SC size.

1. Introduction

This paper presents an entropy-based optimization model for estimating the structural complexity of a supply chain (SC) and, in particular, the complexity of the relationship between the principal equipment manufacturer (the root node of the corresponding graph model) and its suppliers (the preceding supply nodes). Information entropy is used as a measure of the decision maker's knowledge about the risks of shortages and pitfalls in relations between the supply chain components under uncertainty. A concept of conditional (relative) entropy is introduced as a generalization of the conventional (non-relative) entropy; it provides a more precise estimation of complexity in supply chains because it takes into account the information flows between components of different layers. A main advantage of the suggested entropy-based approach is that it can essentially simplify the hierarchical tree-like model of the supply chain while retaining the basic knowledge about the main sources of risk in inter-layer relationships.
Processes of production, storage, transportation, and utilization of products in SCs may have many negative effects on the environment, such as emissions of pollutants into the air and soil, discharges of pollutants into surface and groundwater basins, and pollution of soil and water with production waste; taken together, these phenomena deteriorate the relationships between manufacturing components and their suppliers. These and many other situations risky for manufacturing lead to uncertainty in the SC's phases, i.e., engineering, procurement, production, and distribution [1]. The entropic approach developed in this paper aims to define the best (minimal but sufficient) level of information passing through these supply chain phases. This paper analyzes the structural complexity of the supply chain affected by technological, organizational, and environmental adverse events in the SC, whose consequences lead to violations of the correct functioning of the SC. A detailed definition and analysis of supply-chain structural complexity can be found in [2,3,4,5,6,7,8,9,10].
Similar to many other researchers (see, e.g., [11,12,13]), in order to accurately measure the risks of pitfalls and failed supplies, this paper seeks to determine the probabilities of undesirable events and their negative impacts on material, financial, and information flows. The economic losses are caused by failures to supply, shipments of inappropriate products, incomplete deliveries, delays in deliveries, etc., and the harmful effect of the adverse events may be expressed in monetary form by relevant penalties.
The main differences of the present work in comparison with close earlier papers (see, e.g., [7,12,14,15]) also exploiting the entropy approach for the SC complexity analysis are the following:
  • This paper develops a new graph-theoretic model as a tool for selecting the most vulnerable disruption risks inside the SC which, in turn, can essentially decrease the size of the initial SC model without sacrificing essential knowledge about the risks;
  • This paper introduces the conditional entropy as a tool for the integrated analysis of SC complexity under uncertainty; it provides a more precise estimation of the supply chain complexity by taking into account the links between nodes of different layers;
  • This paper suggests a new fast entropy-based algorithm for minimizing the SC size.
This paper is structured as follows. The related definitions from graph theory are presented in the next section. The definition of information entropy and a detailed problem description are given in Section 3. Section 4 describes the entropy-based algorithm that permits the SC model size to be reduced without loss of essential information. Section 5 presents a numerical example. Section 6 concludes the paper.

2. Basic Definitions

Wishing to avoid any ambiguity in further discussions, we begin with key definitions of risk and uncertainty in the relationship between the manufacturer and its suppliers. There is a wide spectrum of different definitions of risk and uncertainty. In this study, we follow Knight's [16] view and that of his numerous followers. Uncertainty is the absence of certainty in our knowledge or, in other words, a situation wherein it is impossible to precisely describe future outcomes. In the Knightian sense, risk is measurable uncertainty, that is, uncertainty that can be calculated.
Similar to many other risk evaluators, we assume that the notion of risk can be described as the expected value of an undesirable outcome, that is, the product of two characteristics: the probability of an undesirable event (a negative deviation such as a delayed supply or a failure to reach the planned supply target) and the impact or severity, that is, the expected loss in the case of a disruption affecting the supply of products across organizations in a supply network. In a situation with several possible accidents, we assume that the total risk is the sum of the risks of the different accidents (see, e.g., [11,17,18]).
In the model considered below, an "event" is an observable discrete change in the state of the SC or its components. A "risk driver" is a factor, a driving force, that may be the cause of an undesirable unforeseen event, such as disruptions, breakdowns, defects, mistakes in design and planning, shortages of supplied material in the SC, etc. In this paper, we study situations with an "observable uncertainty", where there is an objective opportunity to register, for a pre-specified period of time, the adverse events in the relationships between the components of the SC. Such a registration list, called a "risk protocol", provides the information on whether or not an event is undesirable and, if it is, what its risk drivers and possible losses are (see [12,19]). Such statistics in the risk protocols permit the decision maker to quantitatively evaluate the contribution of each driver and the total (entropy-based) observable information in the SC.
There exists a wide diversity of risk types, risk drivers, and options for their mitigation in the SC. Their taxonomy lies beyond the scope of this paper. Many authors have noticed that a researcher who tries to analyze the potential failures/disruptions of all the suppliers in a SC, or of their absolute majority, encounters a simply impractical and unrealistic problem demanding an astronomical amount of time and budget. Moreover, the supply chain control of the root node of the supply chain is much more important than the control of any of its successors ([6,8]).
Consider a tree-type graph representing the hierarchical structure of an industrial supply chain. Define the "parent layer", also called the "main layer", as consisting of a single node: L_0 = {n0}, where n0 is the single node of the parent layer, also called the original equipment manufacturer (OEM) or the root node. The OEM is a firm (or company) that creates an end product, for instance, one that assembles automobiles.
Define a layer L_s (also denoted as layer s) as the set of nodes that are at the same distance s from the root node n0 in the underlying graph of the SC.
Layer 1 (also called Tier 1) consists of the companies supplying components directly to the OEM that sets up the chain. In a typical supply chain, companies in Tier 2 supply the companies in Tier 1, Tier 3 supplies Tier 2, and so on. Tiered supply chains are common in industries such as aerospace or automotive manufacturing, where the final product consists of many complex components and sub-assemblies.
Define a "cut" (also called a "cross-section") C_s as the union of all the layers L_0, L_1, L_2, …, L_s, from 0 to s. It is evident that C_0 = L_0, C_{s−1} ⊂ C_s, and C_s = {C_{s−1}, L_s}, s = 1, 2, …, S.
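To make the layer and cut definitions concrete, the following sketch (ours, not from the paper; the node names and adjacency structure are hypothetical) groups the nodes of a supply-chain tree into layers L_s by breadth-first distance from the root node n0 and accumulates the layers into cuts C_s:

```python
from collections import deque

def layers_and_cuts(tree, root):
    """Group nodes into layers L_s (distance s from the root/OEM) and cuts C_s (union of layers 0..s)."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in tree.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    depth = max(dist.values())
    layers = [[u for u, d in dist.items() if d == s] for s in range(depth + 1)]
    cuts = [sum(layers[:s + 1], []) for s in range(depth + 1)]
    return layers, cuts

# Hypothetical three-tier chain: OEM n0; Tier 1 suppliers a, b; Tier 2 suppliers c, d, e.
tree = {"n0": ["a", "b"], "a": ["c", "d"], "b": ["e"]}
layers, cuts = layers_and_cuts(tree, "n0")
print(layers)  # [['n0'], ['a', 'b'], ['c', 'd', 'e']]
print(cuts)    # [['n0'], ['n0', 'a', 'b'], ['n0', 'a', 'b', 'c', 'd', 'e']]
```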
Assume that, for each node of the SC, the list of risk drivers F = {f_1, f_2, …, f_N} is known, each driver being a source of different adverse events in the nodes of the SC. For simplicity, but without loss of generality, assume that any adverse event is caused by a single risk driver (otherwise, one can split such a multi-driver event into several elementary events, each caused by a single driver). Here N is the total number of drivers.
Another basic assumption used in this work is that the risk drivers are mutually dependent. This means, for example, that an unfavorable technological decision or environmental pollution caused by a technology-based driver in some component at Tier 2 may lead to an adverse event in supply operations to a node at Tier 1. A technological mistake at Tier 3 may be the source of a delayed supply to Tier 2, and so on. In general, any factor f at tier s may depend on a factor f′ at the deeper tier s + 1, f = 1, …, N; f′ = 1, …, N; s = 1, …, S. Below, these dependencies are described with the help of an N × N matrix of relative probabilities.
The following Markovian property is assumed to hold. The dependence between any factor in tier s and the factors in the lower tiers s + 1, s + 2, …, S actually exists only for factors in a pair of neighboring tiers (s, s + 1), where s = 0, 1, …, S. Moreover, assume that pitfalls and defective decisions do not flow downwards, that is, any risk factor in tier s does not depend upon the risk drivers in the nodes of higher layers, numbered s − 1, s − 2, …, 1.
In each layer s, when computing the probability of risk driver f occurring in the nodes of layer s, two types of adverse events take place. First, there are the events (called "primary events" and denoted by A_f^prime(s)) that have happened in the nodes of layer s and are caused directly by the risk driver f, f = 1, …, N. Second, there are the events (called "secondary events" and denoted by A_f^second(s + 1, s)) that have happened in the nodes of the next layer (s + 1) but have an indirect impact upon adverse events in s, since the risk factors are dependent. More precisely, different drivers f′ in layer s + 1 have an impact upon the driver f in layer s, f = 1, …, N; f′ = 1, …, N; s = 1, 2, …, S.
The impact from f′ to f is estimated with the help of the transition probability matrix M which is defined below and computed from the data in the risk protocols.
Denote by A_f(s) the following events:
A_f(s) = {risk driver f is the source of various adverse events in supply to the nodes of layer s}, f = 1, …, N, s = 0, 1, …, S.
Denote by p_f(s) = Pr(A_f(s)) the probability that the risk driver f is the source of different adverse events in supply in layer s:
p_i(s) = Pr(A_i(s)) = Pr{the risk driver f_i is the cause of adverse events in layer s}.
Denote by p_f^prime(s) = Pr(A_f^prime(s)) the probability that the risk driver f is the direct source of different adverse events in layer s; these probabilities are termed "primary". Next, denote by p_f^second(s) = Pr(A_f^second(s)) the probability that the risk driver f is a source of different adverse events in layer s as a result of the indirect effect on f of the risk drivers f′ that have caused adverse events in layer s + 1; these probabilities are termed "secondary".
Introduce the following notation:
p_i^(1)(s) = Pr(A_i^prime(s)) = Pr(A_i^(1)(s)) = Pr{the risk driver f_i is the cause of adverse events in layer s only},
p_i^(2)(s) = Pr(A_i^second(s)) = Pr(A_i^(2)(s)) = Pr{the risk driver f_i is the cause of adverse events in layer s as a result of the risk drivers in layer s + 1}.
For simplicity, and without loss of generality, suppose that the list of risk drivers F = { f 1 , f 2 , f N } is complete for each layer. Then the following holds
∑_{i=1}^{N} p_i(s) = 1 for s = 0, 1, 2, …
Denote p̄(s) = (p_1(s), p_2(s), …, p_N(s)).
It is obvious that
A_i(s) = A_i^(1)(s) ∪ A_i^(2)(s) and A_i^(1)(s) ∩ A_i^(2)(s) = ∅, i = 1, 2, …, N.
Therefore,
p(A_i(s)) = p(A_i^(1)(s)) + p(A_i^(2)(s)),
or
p_i(s) = p_i^(1)(s) + p_i^(2)(s), i = 1, 2, …, N.
Then the vector of risk driver probabilities p̄(s) = (p_1(s), p_2(s), …, p_N(s)) can be decomposed into two vectors as
p̄(s) = p̄^(1)(s) + p̄^(2)(s),
where p̄^(1)(s) = (p_1^(1)(s), p_2^(1)(s), …, p_N^(1)(s)) is the vector of the drivers' primary probabilities and p̄^(2)(s) = (p_1^(2)(s), p_2^(2)(s), …, p_N^(2)(s)) is the vector of the drivers' secondary probabilities.
For any layer s, define the transition matrix M^(2)(s) of conditional probabilities of the risk drivers on layer s that are obtained as the result of risk drivers existing on layer s + 1:
M^(2)(s) = (p_ij^(2)(s))_{N×N}, s = 0, 1, 2, …, with
p_ij^(2)(s) = Pr(A_j^(2)(s) | A_i(s + 1)), i, j = 1, 2, …, N, s = 0, 1, 2, …
Next, define the matrices M_L(s) of the primary drivers' probabilities as
M_L(s) = (q_ij(s))_{N×N}, s = 0, 1, 2, …, with
q_ij(s) = p_j^(1)(s), i, j = 1, 2, …, N, s = 0, 1, …,
that is, M_L(s) is the N × N matrix each of whose rows equals (p_1^(1)(s), p_2^(1)(s), …, p_N^(1)(s)).
Define the complete transition matrices as
M̂^(1)(s) = (p̂_ij^(1)(s))_{N×N}, s = 0, 1, 2, …, with
p̂_ij^(1)(s) = p_j^(1)(s) + p_ij^(2)(s), i, j = 1, 2, …, N, s = 0, 1, 2, …
From (5) and (6) it follows that
M̂^(1)(s) = M_L(s) + M^(2)(s), s = 0, 1, 2, …
By the formula of total probability,
p_j^(2)(s) = Pr(A_j^(2)(s)) = ∑_{i=1}^{N} p_i(s + 1) p_ij^(2)(s), j = 1, 2, …, N, s = 0, 1, …
In matrix form, Equation (8) can be rewritten as
p̄^(2)(s) = p̄(s + 1) M^(2)(s), s = 0, 1, …
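The following small numerical sketch (with illustrative values chosen by us, not taken from the paper's example) shows how the secondary probabilities and the complete transition matrix are obtained from these formulas, and it also checks the claim stated next:

```python
import numpy as np

# Illustrative data for N = 3 risk drivers.
p_next = np.array([0.5, 0.3, 0.2])     # p(s+1): driver probabilities in layer s+1
p1     = np.array([0.1, 0.05, 0.05])   # p^(1)(s): primary probabilities in layer s
M2 = np.array([[0.2, 0.4, 0.2],         # M^(2)(s): p_ij^(2)(s) = Pr(A_j^(2)(s) | A_i(s+1))
               [0.1, 0.5, 0.2],
               [0.4, 0.3, 0.1]])

p2 = p_next @ M2                        # secondary probabilities: p^(2)(s) = p(s+1) M^(2)(s)
p_s = p1 + p2                           # decomposition p(s) = p^(1)(s) + p^(2)(s)

assert np.isclose(p_s.sum(), 1.0)               # the probabilities of one layer sum to 1
assert np.isclose(p2.sum(), 1.0 - p1.sum())     # the identity of the claim below

# Complete transition matrix M_hat^(1)(s) = M_L(s) + M^(2)(s),
# where every row of M_L(s) equals p^(1)(s).
M_hat = np.tile(p1, (3, 1)) + M2
print(p2, p_s, M_hat, sep="\n")
```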
The following claim is true.
Claim.
The following relation holds:
∑_{i=1}^{N} ( p_i(s + 1) ∑_{j=1}^{N} p_ij^(2)(s) ) = 1 − ∑_{j=1}^{N} p_j^(1)(s), s = 0, 1, …
The proof is straightforward and skipped here.

3. Information Entropy as a Measure of Supply Chain Complexity

Information entropy is defined by Shannon as follows [20]. Given a set of events E = {e_1, …, e_n} with a priori probabilities of event occurrence P = {p_1, …, p_n}, p_i ≥ 0, such that p_1 + … + p_n = 1, the entropy function H is defined by
H = −∑_i p_i log p_i.
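As a minimal illustration of Shannon's formula (ours, not the authors' code), the entropy of a probability vector can be computed as follows; the choice of logarithm base only fixes the information unit (base 2 gives bits):

```python
import math

def shannon_entropy(probs, base=2.0):
    """H = -sum_i p_i log p_i; terms with p_i = 0 contribute nothing."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.25, 0.25]))  # 1.5 bits
print(shannon_entropy([1.0, 0.0, 0.0]))    # 0.0 -- a certain outcome carries no uncertainty
```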
In order to yield all the necessary information on SC complexity issues, this paper uses records of all the adverse events that occurred. The enterprise is expected to collect and store data about the main adverse events that occurred and led to economic losses, including compensation costs, and to carry out a statistical analysis of the recorded data. This requirement also applies to the registration of information about control of compliance between target and actual environmental characteristics.
Similar to [12], for each node u, consider an information database called a "risk protocol". This is a registration list of the most important events that have occurred in the node during a pre-specified time period. The protocol provides the information on whether or not the events are undesirable and, in the latter case, what their risk drivers and possible losses are.
These data are recorded in tables TBLu, representing the lists of events in each node u during a certain time period T (e.g., a month or a year). Each row in the table corresponds to an individual event occurring in the given node at a certain time moment (for example, a day). We use the symbol f as the index of risk drivers, F as the total number of risk drivers, and r as the index of an event (row). The value zrf at the intersection of column f and row r is equal to 1 if the risk factor f is a source of the adverse event r, and 0 otherwise. The last column, F + 1, of each row r contains the magnitude of the economic loss caused by the corresponding event r.
Once the tables TBLu are derived for all the nodes belonging to a certain SC layer, say s, all the tables are gathered into the cut table CTs for the entire SC cut. Let Rs(u) denote the total number of observed adverse (critical) events in a node u of cut Cs during a certain planning period. If such a cut contains n(s) nodes, the total number of critical events in it is Ns = ∑_{u=1}^{n(s)} Rs(u). Assume that there are F risk drivers. For each risk driver f (f = 1, …, F), we can compute the number Ns(u, f) of critical events caused by driver f in node u and the total number Ns(f) of critical events in all nodes of cut Cs, as registered in the risk protocols.
The relative frequency ps(f) with which driver f is the source of different critical events in the nodes of cut Cs can be treated as an estimate of the corresponding probability. Then we compute the latter probability as
ps(f) = Ns(f)/Ns
Then ∑f ps(f) = 1.
For the sake of simplicity of the further analysis, our model applies to the case when the critical events are independent within the same tier and the losses are additive (these assumptions will be relaxed in our future research). For any node u of cut Cs, we can define the corresponding probabilities ps(u, f) of the event that driver f is the source of adverse events in node u:
ps(u, f) = Ns(u, f)/Rs(u).
This paper treats the ps(u, f) values defined by Equation (3) as the probabilities of events participating in the calculation of the entropy function in Equation (1).
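The sketch below (hypothetical protocol data and variable names; only the formulas ps(f) = Ns(f)/Ns and ps(u, f) = Ns(u, f)/Rs(u) come from the text) shows how these probabilities could be estimated from the 0/1 entries zrf of the risk-protocol tables TBLu of one cut:

```python
import numpy as np

# Hypothetical risk protocols for two nodes of one cut: each row is an adverse event,
# the columns are the 0/1 flags z_rf (driver f caused event r); the loss column is omitted.
TBL = {
    "u1": np.array([[1, 0, 0],
                    [0, 1, 0],
                    [1, 0, 0],
                    [0, 0, 1]]),
    "u2": np.array([[0, 1, 0],
                    [0, 1, 0],
                    [1, 0, 0]]),
}

N_s  = sum(len(tbl) for tbl in TBL.values())          # total number of critical events in the cut
N_sf = sum(tbl.sum(axis=0) for tbl in TBL.values())   # events per risk driver over the whole cut

p_s_f  = N_sf / N_s                                                  # ps(f) = Ns(f)/Ns, sums to 1
p_s_uf = {u: tbl.sum(axis=0) / len(tbl) for u, tbl in TBL.items()}   # ps(u, f) = Ns(u, f)/Rs(u)

print(p_s_f)         # approx. [0.43, 0.43, 0.14]
print(p_s_uf["u1"])  # [0.5, 0.25, 0.25]
```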
The main idea of the suggested entropic approach is that the information entropy estimates the average amount of information contained in the stream of critical events of the risk protocol. Thus, the entropy characterizes our uncertainty, or absence of knowledge, about the risks: the lower the entropy, the more information and knowledge about the risks is available to the decision makers.
The entropy value can be computed iteratively for each cut of the SC. Assume that the nodes of a cut Cs−1 are defined at step (iteration) s − 1. Denote by Ts all the supplier nodes in supply layer s of the given tree. Let Ls(Ts) denote the total losses defined by the risk protocol and summed over all nodes of cut Cs in tiers Ts: Ls(Ts) = ∑_{u∈Ts} cs(u). Further, let LT denote the total losses over all nodes of the entire chain. Thus, Ls(Ts) is the contribution of the suppliers of cut Cs to the total losses LT.
Then Ls(Ts)/LT defines the relative losses in the s-truncated supply chain. The relative contribution of the lower tiers, that is, of those with larger s values, is, respectively, (LT − Ls(Ts))/LT. One can observe that the larger the share (LT − Ls(Ts))/LT is in comparison with Ls(Ts)/LT, the less information is available about the losses in cut Cs. For example, the ratio Ls(Ts)/LT = 0.2 gives us less information about the losses in cut Cs than the opposite case of Ls(Ts)/LT = 0.8. This argument motivates us to take the ratios (LT − Ls(Ts))/LT as the coefficients (weights) of the entropy (that is, of our unawareness) of the economic losses incurred by adverse events affecting the environmental quality. In other words, the latter coefficients weigh the lack of our knowledge about the losses; as the number s grows, these coefficients become less and less significant.
Then the total entropy of all the nodes u included in cut Cs is defined as
H(s) = ∑u H(u)
where the weighted entropy in each node u is computed as
H(u) = −Us ∑_f ps(u, f) log ps(u, f),
Us = (LT − Ls(Ts))/LT.
Let xu = 1 if node u from Ts is included in cut Cs, and xu = 0 otherwise.
Then the total entropy of all the nodes included in cut Cs is defined as
H(s) = ∑u H(u) xu,
where H(u) is defined in (15).
Computations of ps(u, f), as well as the summation over risk factors f, are performed over the risk event protocols for all the events related to the nodes u from Ts. Once the entropy values are found for each node, the vulnerability to risks over the supply chain is measured as the total entropy of the s-truncated supply chain subject to the restricted losses.
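Putting these definitions together, a minimal sketch (ours; the losses, probabilities, and selection indicators x_u are hypothetical) of the weighted entropy H(s) of a truncated chain is:

```python
import math

def node_entropy(p_uf, weight):
    """Weighted node entropy H(u) = -U_s * sum_f ps(u, f) log ps(u, f)."""
    return -weight * sum(p * math.log(p, 2) for p in p_uf if p > 0)

def cut_entropy(p_uf_by_node, x, L_total, L_s):
    """H(s) = sum_u H(u) x_u with the common weight U_s = (L_T - L_s(T_s)) / L_T."""
    U_s = (L_total - L_s) / L_total
    return sum(node_entropy(p_uf, U_s) * x[u] for u, p_uf in p_uf_by_node.items())

# Hypothetical data: two nodes in tier T_s, both selected (x_u = 1),
# with 40% of the total losses attributed to the cut so far.
p_uf_by_node = {"u1": [0.5, 0.25, 0.25], "u2": [1/3, 2/3, 0.0]}
x = {"u1": 1, "u2": 1}
print(cut_entropy(p_uf_by_node, x, L_total=100.0, L_s=40.0))  # about 1.45
```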
Define the weighted entropy for each cut s as
H(C_s) = c(s) H*(C_s),
where
H*(C_s) = −∑_{j=1}^{N} p_j(C_s) log p_j(C_s) is the entropy of cut C_s.
We assume that the weight c ( s ) satisfies the following conditions:
(i) c(s) is decreasing;
(ii) c(0) = L;
(iii) lim_{s→∞} c(s) = 0.
Define the "variation of relative entropy", depending upon the cut number, as
REV(s) = ( H(s−1) − (c(s−1)/c(s)) H(s) ) / ( H(1) − (c(s−1)/c(s)) H(s) ).
The following theorem is valid:
Theorem.
For the process of sequentially computing the relative entropy variation (REV), for any fixed value ε > 0, there exists a layer number s* for which |REV(s*)| < ε holds.
Proof. 
For simplicity, we assume that the entropy of any layer depends only upon the information of the neighbor layers, that is,
H*(L_s | L_{s+1}, L_{s+2}, …, L_k) = H*(L_s | L_{s+1}), s = 0, 1, 2, …, k − 1.
Let us exploit the following formula for the entropy of a combined system (see [21]):
H(X_1, X_2, …, X_s) = H(X_1) + H(X_2 | X_1) + H(X_3 | X_1, X_2) + … + H(X_s | X_1, X_2, …, X_{s−1}).
Applying it to the entropy H*(C_s) of cut C_s, we have
H*(C_s) = H*(L_s, L_{s−1}, L_{s−2}, …, L_0)
= H*(L_s) + H*(L_{s−1} | L_s) + H*(L_{s−2} | L_{s−1}, L_s) + … + H*(L_0 | L_1, L_2, …, L_s)
= H*(L_s) + H*(L_{s−1} | L_s) + H*(L_{s−2} | L_{s−1}) + … + H*(L_0 | L_1).
Using the latter formula for cut C_{s−1}, we obtain
H*(C_{s−1}) = H*(L_{s−1}) + H*(L_{s−2} | L_{s−1}) + H*(L_{s−3} | L_{s−2}) + … + H*(L_0 | L_1).
From Formulas (20) and (21), we obtain that
H*(C_s) = H*(C_{s−1}) + H*(L_s) − H*(L_{s−1}) + H*(L_{s−1} | L_s).
Here H * ( L s 1 | L s ) denotes the conditional entropy of the layer s−1 under the condition that the probabilities and entropy of layer s are found.
Denote, for convenience, H(C_s) = H(s) and H*(C_s) = H*(s).
Using the definition of the weighted entropy and Formula (21), we obtain
H(s) = c(s) H*(s) = c(s) ( H*(C_{s−1}) + H*(L_s) − H*(L_{s−1}) + H*(L_{s−1} | L_s) ).
Using the formula for the conditional entropy, the definitions of the events A_i(s), the probabilities p_i(s), and the matrices M^(2)(s), we can write
H*(L_{s−1} | L_s) = −∑_{i=1}^{N} ( P(A_i(s)) ∑_{j=1}^{N} P(A_j(s−1) | A_i(s)) log P(A_j(s−1) | A_i(s)) ) = −∑_{i=1}^{N} ( p_i(s) ∑_{j=1}^{N} p̂_ij^(1)(s−1) log p̂_ij^(1)(s−1) ).
Using Formula (22) we can write
H(s) = c(s) ( H*(C_{s−1}) + H*(L_s) − H*(L_{s−1}) − ∑_{i=1}^{N} ( p_i(s) ∑_{j=1}^{N} p̂_ij^(1)(s−1) log p̂_ij^(1)(s−1) ) ), s = 1, 2, …
H(s−1) − H(s) = H(s−1) − c(s) ( H*(C_{s−1}) + H*(L_s) − H*(L_{s−1}) − ∑_{i=1}^{N} ( p_i(s) ∑_{j=1}^{N} p̂_ij^(1)(s−1) log p̂_ij^(1)(s−1) ) ).
We obtain that
H(s−1) − H(s) = (1 − c(s)/c(s−1)) H(s−1) − c(s) ( H*(L_s) − H*(L_{s−1}) ) + c(s) ∑_{i=1}^{N} ( p_i(s) ∑_{j=1}^{N} p̂_ij^(1)(s−1) log p̂_ij^(1)(s−1) ), s = 1, 2, …
Since the following relations are valid,
0 < 1 − c(s)/c(s−1) < 1,
0 < −∑_{i=1}^{N} ( p_i(s) ∑_{j=1}^{N} p̂_ij^(1)(s−1) log p̂_ij^(1)(s−1) ) < log N,
0 < H*(L_s) < log N, 0 < H*(L_{s−1}) < log N,
lim_{s→∞} H(s−1) = 0, lim_{s→∞} c(s) = 0,
we obtain that
lim_{s→∞} ( H(s−1) − H(s) ) = 0.
Therefore, for any accuracy level ε, we can select a cut number s_1 for which
( H(s_1 − 1) − H(s_1) ) / ( H(0) − H(s_1) ) < ε.
The truncated part of the SC containing only layers of the cut ( s 1 1 ) possesses the required level of the entropy variation. The theorem is proved.
The theorem permits the decision maker to reduce the size of the SC model so that the decreased number of layers in the SC model is sufficient for planning and coordinating the knowledge about the risks in the relations between the SC components, without loss of essential information about the risks.

4. Entropy-Based Algorithm for Complexity Assessment

This section summarizes the theoretical findings of the previous sections in an algorithm for obtaining a reduced SC model on which planning and coordination of supplies can be done without loss of essential information.
Input data of the algorithm:
  • the given number N of risk drivers,
  • the weight function c(s) selected by the decision maker,
  • the probabilities p_f^prime(s) = Pr(A_f^prime(s)) that the risk driver f is the direct source of a supply failure/delay in layer s (the primary probabilities),
  • the probabilities p_f^second(s) = Pr(A_f^second(s)) that the risk driver f is the source of a supply failure/delay in layer s as a result of the indirect effect on f of the risk drivers f′ causing supply delays in layer s + 1 (the secondary probabilities),
  • the transition probability matrices M^(2)(s) = (p_ij^(2)(s))_{N×N}, s = 0, 1, 2, …, k.
Step 1. Using the entropy Formulas (11)–(17), calculate the entropy of layer 0:
H*(L_0) = −∑_{j=1}^{N} p_j(0) log p_j(0), H(0) = H*(L_0).
Step 2. Using Formulas (2)–(9), compute the matrix M̂^(1)(0) and the vector p̄(1) = (p_1(1), p_2(1), …, p_N(1)).
Step 3. Compute the corrected vector of probabilities for layer L_0, using the formula
p̄_c(0) = p̄(1) M̂^(1)(0),
and the corrected entropy for layer L_0:
H_C*(L_0) = −∑_{j=1}^{N} p_j^c(0) log p_j^c(0).
Step 4. For s = 2, 3, …, using the matrix M̂^(1)(s−1) and the vector p̄(s) = (p_1(s), p_2(s), …, p_N(s)), compute sequentially the corrected vectors of probabilities for the layers L_{s−1}: p̄_c(s−1) = p̄(s) M̂^(1)(s−1).
Step 5. Compute H_C*(L_{s−1}) = −∑_{j=1}^{N} p_j^c(s−1) log p_j^c(s−1).
Step 6. Compute
H(0) − H(1) = (1 − c(1)/c(0)) H(0) − c(1) ( H*(L_1) − H_C*(L_0) ) + c(1) ∑_{i=1}^{N} ( p_i(1) ∑_{j=1}^{N} p̂_ij^(1)(0) log p̂_ij^(1)(0) ).
Step 7. For s = 2, 3, …, compute
H(s−1) − H(s) = (1 − c(s)/c(s−1)) H(s−1) − c(s) ( H*(L_s) − H_C*(L_{s−1}) ) + c(s) ∑_{i=1}^{N} ( p_i(s) ∑_{j=1}^{N} p̂_ij^(1)(s−1) log p̂_ij^(1)(s−1) ).
Use the following stopping rule: stop at the cut s_1 for which ( H(s_1 − 1) − H(s_1) ) / ( H(0) − H(s_1) ) < ε holds.
Then the reduced SC model contains only the cut (s_1 − 1).
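A compact sketch of Steps 1-7 and the stopping rule is given below (ours, written under the formulas above; the function and variable names are not from the paper, and the logarithm base 2 is an arbitrary choice). With the probability vectors, complete transition matrices, and weight function c(s) = 1/(s + 1)^2 listed in Section 5, it would be called once per supply chain.

```python
import numpy as np

def entropy(p):
    """Plain Shannon entropy of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def reduce_supply_chain(p, M_hat, c, eps=0.01):
    """Entropy-based SC reduction (Steps 1-7).

    p[s]     -- probability vector of the risk drivers in layer s (length N)
    M_hat[s] -- complete transition matrix M_hat^(1)(s), shape (N, N)
    c        -- weight function c(s)
    Returns the stopping cut index s1 and the list of weighted cut entropies H(s).
    """
    H = [entropy(p[0])]                                   # Step 1: H(0) = H*(L0)
    for s in range(1, len(p)):
        Mh = np.asarray(M_hat[s - 1])
        p_corr = np.asarray(p[s]) @ Mh                    # Steps 2-4: corrected probabilities of layer s-1
        H_corr = entropy(p_corr)                          # Step 5: corrected entropy H_C*(L_{s-1})
        with np.errstate(divide="ignore", invalid="ignore"):
            plogp = np.where(Mh > 0, Mh * np.log2(Mh), 0.0)
        cond = np.sum(np.asarray(p[s]) * plogp.sum(axis=1))   # sum_i p_i(s) sum_j p_hat_ij log p_hat_ij
        # Steps 6-7: decrement of the weighted cut entropy
        diff = (1 - c(s) / c(s - 1)) * H[s - 1] - c(s) * (entropy(p[s]) - H_corr) + c(s) * cond
        H.append(H[s - 1] - diff)
        if abs((H[s - 1] - H[s]) / (H[0] - H[s])) < eps:  # stopping rule
            return s, H                                   # the reduced model keeps only cut s - 1
    return len(p) - 1, H
```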

5. Numerical Example

Input data:
- the number of risk drivers in each layer, N = 3;
- the level of accuracy ε = 0.01;
- the weight function c(s) = 1/(s + 1)^2 (selected by the decision maker);
- the primary probabilities p_f^prime(s) = Pr(A_f^prime(s)), arranged in the vectors p̄^prime(s):
  p̄^prime(0) = (0.3457, 0.0835, 0.0918)
  p̄^prime(1) = (0.1644, 0.3017, 0.0542)
  p̄^prime(2) = (0.1256, 0.1602, 0.2156)
  p̄^prime(3) = (0.0845, 0.2001, 0.3025)
  p̄^prime(4) = (0.2623, 0.1056, 0.2369)
  p̄^prime(5) = (0.2014, 0.2032, 0.1356)
  p̄^prime(6) = (0.1422, 0.2258, 0.1047)
  p̄^prime(7) = (0.1056, 0.3241, 0.2658)
  p̄^prime(8) = (0.1599, 0.3056, 0.1422)
  p̄^prime(9) = (0.2014, 0.3068, 0.0856)
  p̄^prime(10) = (0.2145, 0.0241, 0.2536)
- the secondary probabilities p_f^second(s) = Pr(A_f^second(s)), arranged in the vectors p̄^second(s):
  p̄^second(0) = (0.3014, 0.0725, 0.1051)
  p̄^second(1) = (0.1851, 0.2532, 0.0414)
  p̄^second(2) = (0.2098, 0.1308, 0.1280)
  p̄^second(3) = (0.0837, 0.2011, 0.1281)
  p̄^second(4) = (0.1272, 0.1013, 0.1667)
  p̄^second(5) = (0.2334, 0.0687, 0.1577)
  p̄^second(6) = (0.1393, 0.2709, 0.1171)
  p̄^second(7) = (0.0824, 0.1803, 0.0417)
  p̄^second(8) = (0.1379, 0.1143, 0.1401)
  p̄^second(9) = (0.1456, 0.1703, 0.0903)
  p̄^second(10) = (0.2350, 0.0383, 0.2345)
- the transition probability matrices M^(2)(s) = (p_ij^(2)(s))_{N×N}, s = 1, 2, …, 10 (rows separated by semicolons):
  M^(2)(1) = [0.3124 0.3320 0.3556; 0.3456 0.4158 0.2386; 0.4258 0.0256 0.5486]
  M^(2)(2) = [0.2587 0.3568 0.3845; 0.6587 0.0254 0.3159; 0.4872 0.0246 0.4882]
  M^(2)(3) = [0.1054 0.4503 0.4443; 0.0865 0.6514 0.2621; 0.4536 0.1458 0.4006]
  M^(2)(4) = [0.3548 0.2149 0.4303; 0.2826 0.5239 0.1935; 0.2596 0.6823 0.0581]
  M^(2)(5) = [0.1036 0.5483 0.3481; 0.0598 0.2136 0.7266; 0.1269 0.2456 0.6275]
  M^(2)(6) = [0.3265 0.4251 0.2484; 0.6421 0.2386 0.1193; 0.0425 0.1243 0.8332]
  M^(2)(7) = [0.0934 0.4652 0.4414; 0.5219 0.1361 0.3420; 0.4582 0.0348 0.5070]
  M^(2)(8) = [0.2458 0.1965 0.5577; 0.0563 0.7581 0.1856; 0.4103 0.1784 0.4113]
  M^(2)(9) = [0.4397 0.1535 0.4068; 0.3627 0.4863 0.1510; 0.2642 0.5529 0.1829]
  M^(2)(10) = [0.7412 0.0985 0.1603; 0.0525 0.4368 0.5107; 0.1056 0.7296 0.1648]
The algorithm solves this example as follows.
Using Formulas (3)–(6), compute the probabilities of the risk drivers on the layers L_0, L_1, L_2, …:
  p̄(0) = (0.6471, 0.1560, 0.1969)
  p̄(1) = (0.3495, 0.5549, 0.0956)
  p̄(2) = (0.3354, 0.2910, 0.3736)
  p̄(3) = (0.1682, 0.4012, 0.4306)
  p̄(4) = (0.3895, 0.2069, 0.4036)
  p̄(5) = (0.4348, 0.2719, 0.2933)
  p̄(6) = (0.2815, 0.4967, 0.2218)
  p̄(7) = (0.1880, 0.5044, 0.3075)
  p̄(8) = (0.2978, 0.4199, 0.2823)
  p̄(9) = (0.3470, 0.4771, 0.1759)
  p̄(10) = (0.4495, 0.0624, 0.4881)
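As a quick cross-check on the input data (using only values listed in this section), the layer probabilities are the sums of the primary and secondary probabilities, p̄(s) = p̄^prime(s) + p̄^second(s); for instance, for layer 0:

```python
import numpy as np

p_prime_0  = np.array([0.3457, 0.0835, 0.0918])   # p^prime(0) from the input data
p_second_0 = np.array([0.3014, 0.0725, 0.1051])   # p^second(0) from the input data
print(p_prime_0 + p_second_0)                     # [0.6471 0.156  0.1969] -- equals p(0) above and sums to 1
```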
The complete transition matrices M̂^(1)(s) = (p̂_ij^(1)(s))_{3×3}, s = 1, 2, …, 10, are:
  M̂^(1)(1) = [0.4124 0.4227 0.1649; 0.8076 0.0130 0.1794; 0.5732 0.0114 0.4153]
  M̂^(1)(2) = [0.1978 0.6567 0.1455; 0.1029 0.8599 0.0372; 0.6779 0.2258 0.0963]
  M̂^(1)(3) = [0.6331 0.3176 0.0494; 0.1494 0.0275 0.8230; 0.3925 0.5261 0.0814]
  M̂^(1)(4) = [0.1104 0.8743 0.0153; 0.1262 0.0455 0.8284; 0.2454 0.1270 0.6276]
  M̂^(1)(5) = [0.4019 0.3097 0.2884; 0.7173 0.2444 0.0382; 0.0672 0.0197 0.9131]
  M̂^(1)(6) = [0.1399 0.6222 0.2379; 0.4887 0.1686 0.3427; 0.6885 0.0584 0.2531]
  M̂^(1)(7) = [0.5105 0.1190 0.3705; 0.3377 0.4472 0.2152; 0.0493 0.8089 0.1418]
  M̂^(1)(8) = [0.1930 0.2289 0.5781; 0.0701 0.8885 0.0414; 0.3581 0.2238 0.4180]
  M̂^(1)(9) = [0.4487 0.2271 0.3242; 0.2554 0.4309 0.3137; 0.1151 0.7706 0.1143]
  M̂^(1)(10) = [0.6751 0.1230 0.2019; 0.0287 0.3383 0.6330; 0.0856 0.8210 0.0934]
Omitting intermediate calculations, at Steps 6 and 7, the algorithm computes values H(s) and REV(s) for each cut s, as presented in Table 1.
We observe that the REV(6) value is less than ε = 0.01; therefore, we can reduce the SC size by taking the truncated model with s = 6. The results of the computations are graphically presented in Figure 1.

6. Conclusions

A main contribution of this paper is the entropy-based method for a quantitative assessment of the information and knowledge relevant to the analysis of the SC size and complexity. Using the entropy approach, the suggested model extracts a sufficient amount of useful information from the risk protocols generated at different SC units. Assessing the level of entropy of the risks allows the decision maker, step by step, to evaluate the entropy in the SC model and, consequently, to increase the amount of useful knowledge. As a result, we arrive at a reduced graph model of the supply chain that contains essentially the same amount of information about controllable parameters as the complete SC graph but is much smaller in size.
An attractive direction for further research is to incorporate the human factor and, in particular, the effect of human fatigue (see [22]) on performance of industrial supply chains under uncertainty in dynamic environments.

Acknowledgments

This study received no grants or other sources of funding.

Author Contributions

Boris Kriheli contributed to the overall idea, model and algorithm, as well as the detailed writing of the manuscript; Eugene Levner contributed to the overall ideas and discussions on the algorithm, as well as the preparation of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fera, M.; Fruggiero, F.; Lambiase, A.; Macchiaroli, R.; Miranda, S. The role of uncertainty in supply chains under dynamic modeling. Int. J. Ind. Eng. Comput. 2017, 8, 119–140.
  2. Calinescu, A.; Efstathiou, J.; Schirn, J.; Bermejo, J. Applying and assessing two methods for measuring complexity in manufacturing. J. Oper. Res. Soc. 1998, 49, 723–733.
  3. Sivadasan, S.; Efstathiou, J.; Calinescu, A.; Huatuco, L.H. Advances on measuring the operational complexity of supplier–customer systems. Eur. J. Oper. Res. 2006, 171, 208–226.
  4. Sivadasan, S.; Efstathiou, J.; Frizelle, G.; Shirazi, R.; Calinescu, A. An information-theoretic methodology for measuring the operational complexity of supplier-customer systems. Int. J. Oper. Prod. Manag. 2002, 22, 80–102.
  5. Sivadasan, S.; Smart, J.; Huatuco, L.H.; Calinescu, A. Reducing schedule instability by identifying and omitting complexity-adding information flows at the supplier–customer interface. Int. J. Prod. Econ. 2013, 145, 253–262.
  6. Battini, D.; Persona, A. Towards a Use of Network Analysis: Quantifying the Complexity of Supply Chain Networks. Int. J. Electron. Cust. Relatsh. Manag. 2007, 1, 75–90.
  7. Isik, F. An Entropy-based Approach for Measuring Complexity in Supply Chains. Int. J. Prod. Res. 2010, 48, 3681–3696.
  8. Allesina, S.; Azzi, A.; Battini, D.; Regattieri, A. Performance Measurement in Supply Chains: New Network Analysis and Entropic Indexes. Int. J. Prod. Res. 2010, 48, 2297–2321.
  9. Modraka, V.; Martona, D. Structural Complexity of Assembly Supply Chains: A Theoretical Framework. Proced. CIRP 2013, 7, 43–48.
  10. Ivanov, D. Entropy-Based Supply Chain Structural Complexity Analysis. In Structural Dynamics and Resilience in Supply Chain Risk Management; International Series in Operations Research & Management Science; Springer: Berlin, Germany, 2018; Volume 265, pp. 275–292.
  11. Kogan, K.; Tapiero, C.S. Supply Chain Games: Operations Management and Risk Valuation; Springer: New York, NY, USA, 2007.
  12. Levner, E.; Ptuskin, A. An entropy-based approach to identifying vulnerable components in a supply chain. Int. J. Prod. Res. 2015, 53, 6888–6902.
  13. Aven, T. Risk Analysis; Wiley: New York, NY, USA, 2015.
  14. Harremoës, P.; Topsøe, F. Maximum entropy fundamentals. Entropy 2001, 3, 191–226.
  15. Herbon, A.; Levner, E.; Hovav, S.; Shaopei, L. Selection of Most Informative Components in Risk Mitigation Analysis of Supply Networks: An Information-gain Approach. Int. J. Innov. Manag. Technol. 2012, 3, 267–271.
  16. Knight, F.H. Risk, Uncertainty, and Profit; Hart, Schaffner & Marx: Boston, MA, USA, 1921.
  17. Zsidisin, G.A.; Ellram, L.M.; Carter, J.R.; Cavinato, J.L. An Analysis of Supply Risk Assessment Techniques. Int. J. Phys. Distrib. Logist. Manag. 2004, 34, 397–413.
  18. Tapiero, C.S.; Kogan, K. Risk and Quality Control in a Supply Chain: Competitive and Collaborative Approaches. J. Oper. Res. Soc. 2007, 58, 1440–1448.
  19. Tillman, P. An Analysis of the Effect of the Enterprise Risk Management Maturity on Shareholder Value during the Economic Downturn of 2008–2010. Master's Thesis, University of Pretoria, Pretoria, South Africa, 2011.
  20. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423.
  21. Wentzel, E.S. Probability Theory; Mir Publishers: Moscow, Russia, 1982.
  22. Fruggiero, F.; Riemma, S.; Ouazene, Y.; Macchiaroli, R.; Guglielmim, V. Incorporating the human factor within the manufacturing dynamics. IFAC-PapersOnline 2016, 49, 1691–1696.
Figure 1. Results of entropy computations.
Table 1. Computational results.

s        0        1        2        3        4        5        6
H(s)     2.5693   1.2861   0.5972   0.1735   0.0953   0.0378   0.0235
REV(s)   -        1        0.5368   0.3301   0.0609   0.0448   0.0111
