Article

Entropy Analysis of a Flexible Markovian Queue with Server Breakdowns

Messaoud Bounkhel, Lotfi Tadj and Ramdane Hedjar
1 Department of Mathematics, King Saud University, Riyadh 11451, Saudi Arabia
2 Department of Industrial Engineering, Alfaisal University, Riyadh 12714, Saudi Arabia
3 Department of Computer Engineering, King Saud University, Riyadh 11453, Saudi Arabia
* Author to whom correspondence should be addressed.
Entropy 2020, 22(9), 979; https://doi.org/10.3390/e22090979
Submission received: 6 July 2020 / Revised: 16 August 2020 / Accepted: 25 August 2020 / Published: 3 September 2020
(This article belongs to the Special Issue Entropy: The Scientific Tool of the 21st Century)

Abstract

In this paper, a versatile Markovian queueing system is considered. Given a fixed threshold level c, the server serves customers one at a time when the queue length is less than c, and in batches of fixed size c when the queue length is greater than or equal to c. The server is subject to failure when serving either a single customer or a batch of customers. Service rates, failure rates, and repair rates depend on whether the server is serving a single customer or a batch. While the analytical method provides the initial probability vector, we use the entropy principle to obtain both the initial probability vector (for comparison) and the tail probability vector. The comparison shows that the results obtained analytically and approximately are in good agreement, especially when the first two moments are used in the entropy approach.

1. Introduction

The concept of entropy was introduced by Shannon in his seminal paper [1]. In information theory, entropy refers to a basic quantity associated with a random variable. Among a number of different probability distributions that express the current state of knowledge, the maximum entropy principle allows one to choose the best one, that is, the one with maximum entropy.
Originally, entropy was introduced by Shannon as part of his theory of communication. However, since then, the principle of maximum entropy has found applications in a multitude of other areas, such as statistical mechanics, statistical thermodynamics, business, marketing and elections, economics, finance, insurance, spectral analysis of time series, image reconstruction, pattern recognition, operations research, reliability theory, biology, medicine, and so forth; see Kapur [2].
In operations research, and particularly in queueing theory, a large number of papers have used the maximum entropy principle to determine the steady-state probability distribution of some process. The earliest document using entropy maximization in the context of queues that came to our attention is that of Bechtold et al. [3]. Among the latest theoretical papers applying the maximum entropy principle we cite Yen et al. [4], Shah [5], and Singh et al. [6], while She et al. [7], Giri and Roy [8], and Lin et al. [9] present recent applications.
The intention of this paper is to resume work started by Bounkhel et al. [10], who studied a flexible queueing system and used an analytical method to obtain the initial steady-state probability vector. For other possible approaches to calculating the probabilities, see the references in Reference [10]. The objective of this paper is threefold. First, the maximum entropy principle is used to derive the initial steady-state probability vector and verify that it agrees with the one obtained by Bounkhel et al. [10]. Second, we use the maximum entropy principle to obtain the tail steady-state probability vector. Third, we improve both the initial and tail probability vectors by providing more information to the maximum entropy technique.
The rest of this paper is structured as follows. In Section 2, we describe the flexible queueing system and recall the results obtained by Bounkhel et al. [10]. Our main results are presented in Section 3 where we use the maximum entropy principle to obtain the different probabilities. The theoretical results are verified with numerical illustrations. The paper is concluded in Section 4.

2. Model Formulation and Previous Results

Bounkhel et al. [10] studied a versatile single-server queueing system where service is regulated by an integer threshold level $c \geq 2$ and can be either single or batch, as follows: if the queue length is less than c, then service is single and exponential with parameter $\mu_1$. If the queue length is equal to c, then service is batch of size c and follows the exponential distribution with parameter $\mu_2$, where $\mu_2 > \mu_1 > 0$. Finally, if the queue size is greater than c, then service is again batch of fixed size c and follows the exponential distribution with parameter $\mu_2$. The server is subject to breakdowns, which happen according to a Poisson process with rate $\alpha_1$ when service is single and $\alpha_2$ when service is batch. Repairs that follow breakdowns are exponentially distributed with rate $\beta_1$ when service is single and $\beta_2$ when service is batch. Customers are assumed to arrive according to a Poisson process with positive rate $\lambda$.
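Although the analysis below is entirely analytical, the dynamics just described are easy to simulate, which provides an independent sanity check on the steady-state probabilities. The following sketch is our own illustration and is not part of Reference [10]; it assumes that the single/batch regime is re-evaluated from the current queue length after each event and that the repair rate is the one associated with the service mode in effect when the breakdown occurred.

```python
import math
import random
from collections import defaultdict

def simulate(lam, mu1, mu2, a1, a2, b1, b2, c, horizon=200_000, seed=1):
    """Estimate the steady-state distribution of the queue length by
    simulating the continuous-time Markov chain event by event."""
    rng = random.Random(seed)
    draw = lambda rate: rng.expovariate(rate) if rate > 0 else math.inf
    n, broken, mode, t = 0, False, "single", 0.0
    time_at = defaultdict(float)          # total time spent with n customers present
    while t < horizon:
        if broken:
            rates = {"arrival": lam, "repair": b1 if mode == "single" else b2}
        elif n == 0:
            rates = {"arrival": lam}
        elif n < c:                       # single-service regime
            rates = {"arrival": lam, "service": mu1, "breakdown": a1}
        else:                             # batch service of fixed size c
            rates = {"arrival": lam, "service": mu2, "breakdown": a2}
        event, dt = min(((e, draw(r)) for e, r in rates.items()), key=lambda x: x[1])
        time_at[n] += dt
        t += dt
        if event == "arrival":
            n += 1
        elif event == "service":
            n -= 1 if n < c else c        # one customer leaves, or a full batch of c
        elif event == "breakdown":
            mode = "single" if n < c else "batch"
            broken = True
        else:                             # repair completed
            broken = False
    return {k: v / t for k, v in sorted(time_at.items())}

# Parameters of the numerical examples in Section 3
print(simulate(lam=0.5, mu1=2, mu2=5.5, a1=0.05, a2=0.08, b1=0.07, b2=0.06, c=5))
```

If these modeling assumptions match those of Reference [10], the long-run fractions of time returned by the function should be close to the exact probabilities $p_n$ reported in Table 1.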
We let $X(t)$ represent the number of customers in the system at time t and introduce $w_n$, $n = 0, 1, 2, \ldots$, the steady-state probability of n customers in the system when the server is in a working state, and $p_n$ the steady-state probability of n customers in the system regardless of the server state. Also, for $|z| \leq 1$, we introduce the probability generating functions:
$$W(z) = \sum_{n=0}^{\infty} w_n z^n \qquad \text{and} \qquad P(z) = \sum_{n=0}^{\infty} p_n z^n.$$
Then,
$$W(z) = \frac{A_1(z)\,S_1(z) + A_2(z)\,w_0\,z^{c-1} + A_3(z)\,z^c\,w_1}{(\lambda+\mu_2)\,z^c - \lambda\,z^{c+1} - \mu_2}, \qquad (1)$$
$$P(z) = \left(1 + \frac{\alpha_2}{\beta_2}\right) W(z) + \left(\frac{\alpha_1}{\beta_1} - \frac{\alpha_2}{\beta_2}\right) S_1(z), \qquad (2)$$
where
$$A_1(z) := (\mu_2-\mu_1)\,z^c + \mu_1\,z^{c-1} - \mu_2, \qquad A_2(z) := (\lambda+\mu_1)\,z - \mu_1 + \frac{\lambda z\,(\mu_1 z^{c-1} - \mu_2)}{\mu_2-\mu_1},$$
$$A_3(z) := \frac{\mu_1^2\,(z^{c-1}-1)}{\mu_1-\mu_2}, \qquad S_1(z) := \sum_{n=0}^{c-1} w_n z^n.$$
The $c$ unknown probabilities $w_n$, $n = 0, 1, \ldots, c-1$, are determined by solving the system of $c$ equations:
$$\left. A_1(z)\,S_1(z) + A_2(z)\,w_0\,z^{c-1} + A_3(z)\,z^c\,w_1 \right|_{z=z_i} = 0, \qquad i = 1, 2, \ldots, c-1, \qquad (3)$$
$$\sum_{n=0}^{c-1} a_n w_n = 1, \qquad (4)$$
where the $z_i$ are the $c-1$ roots inside the open unit ball of the equation
$$-\lambda\,z^{c+1} + (\lambda+\mu_2)\,z^c - \mu_2 = 0,$$
and
$$a_n = \begin{cases}
\dfrac{(\alpha_2+\beta_2)\left[A_1'(1)+A_2'(1)\right]}{\beta_2\,(c\mu_2-\lambda)} + \dfrac{\alpha_1}{\beta_1} - \dfrac{\alpha_2}{\beta_2}, & n = 0,\\[3mm]
\dfrac{(\alpha_2+\beta_2)\left[A_1'(1)+A_3'(1)\right]}{\beta_2\,(c\mu_2-\lambda)} + \dfrac{\alpha_1}{\beta_1} - \dfrac{\alpha_2}{\beta_2}, & n = 1,\\[3mm]
\dfrac{(\alpha_2+\beta_2)\,A_1'(1)}{\beta_2\,(c\mu_2-\lambda)} + \dfrac{\alpha_1}{\beta_1} - \dfrac{\alpha_2}{\beta_2} =: a, & n \geq 2,
\end{cases}$$
with
$$A_1'(1) = c\mu_2 - \mu_1, \qquad A_2'(1) = (\lambda+\mu_1) + \frac{\lambda\,(c\mu_1-\mu_2)}{\mu_2-\mu_1}, \qquad A_3'(1) = \frac{(c-1)\,\mu_1^2}{\mu_1-\mu_2}.$$
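As an aside, the root-finding step can be carried out with any polynomial solver. The sketch below (our own illustration, not code from Reference [10]) builds the coefficient vector of the characteristic equation above and keeps the roots lying strictly inside the unit disk; these $z_i$, together with the normalization (4), then yield $c$ linear equations for $w_0, \ldots, w_{c-1}$, since $S_1(z)$ is linear in those unknowns.

```python
import numpy as np

def inner_roots(lam, mu2, c, tol=1e-9):
    """Roots of -lam*z^(c+1) + (lam + mu2)*z^c - mu2 = 0 lying strictly
    inside the unit disk; the text states there are c - 1 of them."""
    # Coefficients ordered from degree c+1 down to the constant term.
    coeffs = [-lam, lam + mu2] + [0.0] * (c - 1) + [-mu2]
    return [z for z in np.roots(coeffs) if abs(z) < 1 - tol]

# Example with c = 5 (as in Section 3): expect 4 roots inside the unit disk.
print(inner_roots(lam=0.5, mu2=5.5, c=5))
```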
Writing $W(z) = N(z)/D(z)$, the expected number of customers in the system in the steady state is
$$E(X) = \left(1 + \frac{\alpha_2}{\beta_2}\right) W'(1) + \left(\frac{\alpha_1}{\beta_1} - \frac{\alpha_2}{\beta_2}\right) S_1'(1), \qquad (5)$$
where
$$W'(1) = \frac{N''(1)\,D'(1) - N'(1)\,D''(1)}{2\,D'(1)^2},$$
with
$$D'(1) = c\mu_2 - \lambda, \qquad D''(1) = c\left[(c-1)(\mu_2+\lambda) - \lambda(c+1)\right],$$
$$N'(1) = A_1'(1)\,S_1(1) + A_2'(1)\,w_0 + A_3'(1)\,w_1,$$
$$N''(1) = A_1''(1)\,S_1(1) + 2A_1'(1)\,S_1'(1) + \left[A_2''(1) + 2(c-1)A_2'(1)\right] w_0 + \left[A_3''(1) + 2c\,A_3'(1)\right] w_1,$$
and
$$A_1''(1) = (c-1)(c\mu_2 - 2\mu_1), \qquad A_2''(1) = \frac{\lambda\,\mu_1\,c\,(c-1)}{\mu_2-\mu_1}, \qquad A_3''(1) = (c-2)\,A_3'(1).$$
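To make the formulas of this section concrete, the following sketch evaluates the mean (5) once $w_0, \ldots, w_{c-1}$ are known. It is a direct transcription of the expressions above; the function and argument names (w for the list of the first c working-state probabilities, a1_, b1_ for $\alpha_1$, $\beta_1$, and so on) are our own.

```python
def mean_system_size(w, lam, mu1, mu2, a1_, a2_, b1_, b2_, c):
    """E(X) from Equation (5), given the initial probabilities w[0..c-1]."""
    S1  = sum(w)                                   # S_1(1)
    S1p = sum(n * w[n] for n in range(c))          # S_1'(1)
    A1p = c * mu2 - mu1
    A2p = (lam + mu1) + lam * (c * mu1 - mu2) / (mu2 - mu1)
    A3p = (c - 1) * mu1**2 / (mu1 - mu2)
    A1pp = (c - 1) * (c * mu2 - 2 * mu1)
    A2pp = lam * mu1 * c * (c - 1) / (mu2 - mu1)
    A3pp = (c - 2) * A3p
    Dp  = c * mu2 - lam
    Dpp = c * ((c - 1) * (mu2 + lam) - lam * (c + 1))
    Np  = A1p * S1 + A2p * w[0] + A3p * w[1]
    Npp = (A1pp * S1 + 2 * A1p * S1p
           + (A2pp + 2 * (c - 1) * A2p) * w[0]
           + (A3pp + 2 * c * A3p) * w[1])
    W1p = (Npp * Dp - Np * Dpp) / (2 * Dp**2)      # W'(1)
    return (1 + a2_ / b2_) * W1p + (a1_ / b1_ - a2_ / b2_) * S1p
```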

3. Entropy Approach

By solving the system of Equations (3)–(4), only the probabilities $w_n$, $n = 0, 1, \ldots, c-1$, are obtained. The rest of the probabilities $w_n$, $n = c, c+1, \ldots$, can be obtained by successive differentiations of (1). Note that using (2), we have
$$p_i = \begin{cases} \left(1 + \dfrac{\alpha_1}{\beta_1}\right) w_i, & i < c,\\[3mm] \left(1 + \dfrac{\alpha_2}{\beta_2}\right) w_i, & i \geq c. \end{cases}$$
Therefore, the initial probability vector $P_i = (p_0, p_1, \ldots, p_{c-1})$ is completely determined, while the tail probability vector $P_t = (p_c, p_{c+1}, \ldots)$ is yet to be determined. However, since the first moment $E(X)$ of the process $X(t)$ has been found in (5), we can use this information, along with the maximum entropy principle, to obtain approximate values for the components of the tail probability vector $P_t$.

3.1. Entropy Solution Using the First Moment

As a first step, we calculate the initial probability vector using the maximum entropy principle and compare it with the initial probability vector obtained in the previous section to make sure they are in agreement. To this end, consider the following nonlinear maximization problem:
$$(EP)\qquad \max\; Z = -\sum_{i=0}^{\infty} p_i \ln p_i$$
$$\text{s.t.}\quad \sum_{n=0}^{c-1} a_n p_n = 1 + \frac{\alpha_1}{\beta_1}, \qquad (8)$$
$$\sum_{i=0}^{\infty} i\,p_i = E(X), \qquad p_i \geq 0 \ \text{ for all } i. \qquad (9)$$
Constraint (8) is the summability-to-one condition, while constraint (9) is the mean system size equation. This maximization problem can be solved by the method of Lagrange multipliers; see, for example, Luenberger and Ye [11]. The Lagrangian function associated with problem (EP) is given by:
$$L(P_i, \lambda) = -\sum_{i=0}^{\infty} p_i \ln p_i + \lambda_1\left(1 + \frac{\alpha_1}{\beta_1} - \sum_{n=0}^{c-1} a_n p_n\right) + \lambda_2\left(E(X) - \sum_{i=0}^{\infty} i\,p_i\right),$$
where the vector λ = ( λ 1 , λ 2 ) stands for the Lagrange multipliers. Setting the derivative of L ( P i , λ ) with respect to p k to zero yields
$$p_k = e^{-1}\, e^{-a_k \lambda_1}\, e^{-k \lambda_2}, \qquad k = 0, 1, 2, \ldots, \qquad (10)$$
where the term $a_k\lambda_1$ is present only for $k \leq c-1$, since constraint (8) involves only $p_0, \ldots, p_{c-1}$.
Substituting (10) in the constraints (8) and (9), we get:
$$e^{-a_0\lambda_1} + e^{-a_1\lambda_1}\,e^{-\lambda_2} + e^{-a\lambda_1}\,e^{-2\lambda_2}\,\frac{1-e^{-(c-2)\lambda_2}}{1-e^{-\lambda_2}} + \frac{e^{-c\lambda_2}}{1-e^{-\lambda_2}} = e, \qquad (11)$$
$$e^{-a_1\lambda_1}\,e^{-\lambda_2} + e^{-a\lambda_1}\left[e^{-3\lambda_2}\,\frac{1+(c-3)e^{-(c-2)\lambda_2}-(c-2)e^{-(c-3)\lambda_2}}{(1-e^{-\lambda_2})^2} + 2e^{-2\lambda_2}\,\frac{1-e^{-(c-2)\lambda_2}}{1-e^{-\lambda_2}}\right] + \frac{e^{-(c+1)\lambda_2}}{(1-e^{-\lambda_2})^2} + \frac{c\,e^{-c\lambda_2}}{1-e^{-\lambda_2}} = e\,E(X). \qquad (12)$$
All we need to do now is solve the nonlinear system (11)–(12) numerically to find $e^{-\lambda_1}$ and $e^{-\lambda_2}$, and then substitute in (10) to obtain the probabilities $p_k$.
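Numerically, (11)–(12) can be handed to a general-purpose root finder. The sketch below is our own illustration rather than the authors' code; it assumes $a_0$, $a_1$, $a$ and $E(X)$ have already been computed from Section 2, reads the tail terms of (11)–(12) as carrying no $\lambda_1$ factor (equivalently, $a_k = 0$ for $k \geq c$ in (10)), and uses scipy.optimize.fsolve from an arbitrary starting point that may need adjusting.

```python
import numpy as np
from scipy.optimize import fsolve

def solve_multipliers(a0, a1, a, c, EX, guess=(0.1, 0.5)):
    """Solve the nonlinear system (11)-(12) for (lambda1, lambda2)."""
    def residuals(lams):
        l1, l2 = lams
        x = np.exp(-l2)                                    # e^{-lambda2}
        geo    = x**2 * (1 - x**(c - 2)) / (1 - x)         # sum_{k=2}^{c-1} x^k
        geo_k  = (x**3 * (1 + (c - 3) * x**(c - 2) - (c - 2) * x**(c - 3)) / (1 - x)**2
                  + 2 * x**2 * (1 - x**(c - 2)) / (1 - x)) # sum_{k=2}^{c-1} k x^k
        tail   = x**c / (1 - x)                            # sum_{k>=c} x^k
        tail_k = c * x**c / (1 - x) + x**(c + 1) / (1 - x)**2  # sum_{k>=c} k x^k
        eq11 = np.exp(-a0 * l1) + np.exp(-a1 * l1) * x + np.exp(-a * l1) * geo + tail - np.e
        eq12 = np.exp(-a1 * l1) * x + np.exp(-a * l1) * geo_k + tail_k - np.e * EX
        return [eq11, eq12]
    return fsolve(residuals, guess)

def probabilities(lam1, lam2, a0, a1, a, c, K=50):
    """p_k from (10), taking a_k = 0 for k >= c (see the tail terms of (11)-(12))."""
    ak = np.array([a0, a1] + [a] * (c - 2) + [0.0] * (K - c))
    k = np.arange(K)
    return np.exp(-1.0 - ak * lam1 - k * lam2)
```

With the multipliers in hand, probabilities(...) returns the approximate distribution, whose first c entries can be compared with the exact ones, as in Table 1.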
Example 1.
Some numerical tests are conducted here to assess the quality of the solution procedure proposed in this section. In the sequel, we will refer to the solution obtained analytically as the exact solution, and to the solution obtained using the entropy approach with the first moment as approximate solution 1. To compare these solutions, we will use the percentage error ($PE_1$):
$$PE_1 = \frac{|\,\text{exact value} - \text{approximate value 1}\,|}{\text{exact value}}.$$
Let us take a numerical example where c = 5 and calculate the initial probability vector P i = ( p 0 , , p 4 ) . Assume μ 1 = 2 , μ 2 = 5.5 , α 1 = 0.05 , α 2 = 0.08 , β 1 = 0.07 , and β 2 = 0.06 . Table 1 shows the exact solution, the approximate solution 1, and the percentage error for two different values of the arrival rate λ.
When λ = 0.5 , the average P E 1 is 0.6032 and when λ = 5.5 , the average P E 1 is 0.5432 . The overall average percentage error is 0.5732 , which can be greatly improved.

3.2. Entropy Solution Using the Second Moment

In this subsection, we use (2) to calculate the second moment $E(X^2)$, and we will show that using the second moment instead of the first moment as the extra constraint leads to an initial steady-state probability vector that is, on average, closer to the one obtained analytically. We find that the second moment is given by
$$E(X^2) = \left(1 + \frac{\alpha_2}{\beta_2}\right) W''(1) + \left(\frac{\alpha_1}{\beta_1} - \frac{\alpha_2}{\beta_2}\right) S_1''(1) + E(X),$$
where
$$W''(1) = \frac{V''(1)\,U'''(1) - U''(1)\,V'''(1)}{3\,V''(1)^2},$$
where $U(z) := N'(z)D(z) - N(z)D'(z)$ and $V(z) := D(z)^2$, so that $W'(z) = U(z)/V(z)$,
with
$$V''(1) = 2\,(c\mu_2-\lambda)^2, \qquad V'''(1) = 6c\,(c\mu_2-\lambda)\left[(c-1)\mu_2 - 2\lambda\right],$$
$$U''(1) = D'(1)\,N''(1) - N'(1)\,D''(1), \qquad U'''(1) = 2\left[D'(1)\,N'''(1) - N'(1)\,D'''(1)\right],$$
$$D'''(1) = c\,(c-1)\left[(c-2)\mu_2 - 3\lambda\right],$$
$$A_1'''(1) = (c-1)(c-2)(c\mu_2 - 3\mu_1), \qquad A_2'''(1) = \frac{\lambda\,\mu_1\,c\,(c-1)(c-2)}{\mu_2-\mu_1}, \qquad A_3'''(1) = \frac{\mu_1^2\,(c-1)(c-2)(c-3)}{\mu_1-\mu_2},$$
$$N'''(1) = A_1'''(1)\,S_1(1) + 3A_1''(1)\,S_1'(1) + 3A_1'(1)\,S_1''(1) + \left[A_2'''(1) + 3(c-1)A_2''(1) + 3(c-1)(c-2)A_2'(1)\right] w_0 + \left[A_3'''(1) + 3c\,A_3''(1) + 3c(c-1)A_3'(1)\right] w_1,$$
$$S_1''(1) = \sum_{n=0}^{c-1} n(n-1)\,w_n.$$
The nonlinear maximization problem to solve in this case is the following:
$$(EP_2)\qquad \max\; Z = -\sum_{i=0}^{\infty} p_i \ln p_i$$
$$\text{s.t.}\quad \sum_{n=0}^{c-1} a_n p_n = 1 + \frac{\alpha_1}{\beta_1},$$
$$\sum_{i=0}^{\infty} i^2 p_i = E(X^2), \qquad p_i \geq 0 \ \text{ for all } i.$$
Similarly to the case where we only used the first moment, we use the classical method of Lagrange. The following system of nonlinear equations, in which the unknowns are the Lagrange multipliers $(\lambda_1, \lambda_2)$, is obtained:
$$e^{-a_0\lambda_1} + e^{-a_1\lambda_1-\lambda_2} + e^{-a\lambda_1}\sum_{k=2}^{c-1} e^{-k^2\lambda_2} + \sum_{k=c}^{\infty} e^{-k^2\lambda_2} - e = 0,$$
$$e^{-a_1\lambda_1-\lambda_2} + e^{-a\lambda_1}\sum_{k=2}^{c-1} k^2 e^{-k^2\lambda_2} + \sum_{k=c}^{\infty} k^2 e^{-k^2\lambda_2} - e\,E(X^2) = 0.$$
This system can be solved numerically. Once we have the values of $(\lambda_1, \lambda_2)$, we substitute them into the following formula to obtain the probabilities $p_k$:
$$p_k = e^{-1}\, e^{-a_k \lambda_1}\, e^{-k^2 \lambda_2}, \qquad k = 0, 1, 2, \ldots.$$
Example 2.
Some numerical tests are conducted here to assess the quality of the solution procedure proposed in this subsection. Similarly to the previous subsection, we will refer to the percentage error obtained using the entropy approach with the second moment as $PE_2$. We then compare the two approximate solutions using the percentage errors. Let us take a numerical example with the same data as in Example 1, that is, $c = 5$, $\mu_1 = 2$, $\mu_2 = 5.5$, $\alpha_1 = 0.05$, $\alpha_2 = 0.08$, $\beta_1 = 0.07$, and $\beta_2 = 0.06$. Table 2 shows the exact solution, the approximate solution obtained in Section 3.1, the approximate solution obtained in this subsection, and the percentage errors $PE_1$ and $PE_2$, for two different values of the arrival rate $\lambda$.
When $\lambda = 0.5$, the average of $PE_1$ is 0.6032 and the average of $PE_2$ is 0.5119, and when $\lambda = 5.5$, the average of $PE_1$ is 0.5432 and the average of $PE_2$ is 0.4661. The overall average percentage errors of $PE_1$ and $PE_2$ are, respectively, 0.5732 and 0.4890, which will be greatly improved in the next subsection.

3.3. Entropy Solution Using Both First and Second Moments

Our objective here is to improve the probability vectors obtained in the previous subsections. This is achieved by including both the first and second moments in the formulation. We will show that using the two moments as extra constraints leads to the best approximation of the initial steady-state probability vector obtained analytically. The nonlinear maximization problem to solve in this case is the following:
$$(EP_3)\qquad \max\; Z = -\sum_{i=0}^{\infty} p_i \ln p_i$$
$$\text{s.t.}\quad \sum_{n=0}^{c-1} a_n p_n = 1 + \frac{\alpha_1}{\beta_1},$$
$$\sum_{i=0}^{\infty} i\,p_i = E(X),$$
$$\sum_{i=0}^{\infty} i^2 p_i = E(X^2), \qquad p_i \geq 0 \ \text{ for all } i.$$
Similarly to the previous cases, we use the classical method of Lagrange. The following system of nonlinear equations, in which the unknowns are the Lagrange multipliers $(\lambda_1, \lambda_2, \lambda_3)$, is obtained:
$$e^{-a_0\lambda_1} + e^{-a_1\lambda_1-\lambda_2-\lambda_3} + e^{-a\lambda_1}\sum_{k=2}^{c-1} e^{-k\lambda_2-k^2\lambda_3} + \sum_{k=c}^{\infty} e^{-k\lambda_2-k^2\lambda_3} - e = 0,$$
$$e^{-a_1\lambda_1-\lambda_2-\lambda_3} + e^{-a\lambda_1}\sum_{k=2}^{c-1} k\,e^{-k\lambda_2-k^2\lambda_3} + \sum_{k=c}^{\infty} k\,e^{-k\lambda_2-k^2\lambda_3} - e\,E(X) = 0,$$
$$e^{-a_1\lambda_1-\lambda_2-\lambda_3} + e^{-a\lambda_1}\sum_{k=2}^{c-1} k^2 e^{-k\lambda_2-k^2\lambda_3} + \sum_{k=c}^{\infty} k^2 e^{-k\lambda_2-k^2\lambda_3} - e\,E(X^2) = 0.$$
This system can be solved numerically. The values of $(\lambda_1, \lambda_2, \lambda_3)$ obtained numerically are then substituted into the following formula to obtain the probabilities $p_k$:
$$p_k = e^{-1}\, e^{-a_k \lambda_1}\, e^{-k \lambda_2}\, e^{-k^2 \lambda_3}, \qquad k = 0, 1, 2, \ldots.$$
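The same numerical recipe extends directly to three multipliers. In the sketch below (again our own illustration, with the same reading of $a_k = 0$ for $k \geq c$), the infinite sums are simply truncated at an index K chosen large enough that the omitted tail is negligible, and the three constraint residuals are driven to zero with fsolve.

```python
import numpy as np
from scipy.optimize import fsolve

def entropy12_probs(a0, a1, a, c, EX, EX2, K=200, guess=(0.1, 0.1, 0.01)):
    """Maximum-entropy probabilities using both moments: solve for
    (lambda1, lambda2, lambda3) and return p_0, ..., p_{K-1}."""
    k = np.arange(K)
    ak = np.array([a0, a1] + [a] * (c - 2) + [0.0] * (K - c))   # a_k = 0 for k >= c

    def p(lams):
        l1, l2, l3 = lams
        return np.exp(-1.0 - ak * l1 - k * l2 - k**2 * l3)

    def residuals(lams):
        pk = p(lams)
        # normalization, first moment and second moment, as in the system above
        return [pk.sum() - 1.0, (k * pk).sum() - EX, (k**2 * pk).sum() - EX2]

    return p(fsolve(residuals, guess))
```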
Example 3.
Let us take the same data as in Examples 1 and 2 and calculate the initial probability vector using the analytical method (exact), entropy approach with first moment only (Entropy 1), entropy approach with second moment only (Entropy 2), and entropy approach with both first and second moments (Entropy 1&2). Table 3 and Table 4 show the exact solution and the approximate solutions along with the corresponding percentage errors for λ = 0.5 and λ = 5.5 , respectively.
We denote by $PE_1$ the percentage error when Entropy 1 is used, by $PE_2$ the percentage error when Entropy 2 is used, and by $PE_{1\&2}$ the percentage error when Entropy 1&2 is used. The overall average percentage error using the entropy approach with the first moment is 0.5732, while the overall average percentage error using the entropy approach with the second moment is 0.4890, which represents a slight improvement of $|0.5732 - 0.4890|/0.5732 = 14.68\%$. However, the overall average percentage error using both moments is 0.1622, which represents a substantial improvement of $|0.5732 - 0.1622|/0.5732 = 71.70\%$. We show in Figure 1 the two distributions for a better visualisation. We can see that the entropy approach with both moments always outperforms the entropy approach with only the first moment or only the second moment.
Since the results obtained by the entropy method with both moments are satisfactory, we also calculated the tail probability vector and present in Figure 2 both the initial and tail probability vectors. For comparison, we present the distribution obtained when only one moment (first or second) is used and when both the first and second moments are used.
Another remark we make when looking at Table 2, Table 3 and Table 4 is that the probability mass function is concentrated at $p_0$ for small values of $\lambda$, and as $\lambda$ increases, the distribution becomes more evenly spread and the value of $p_0$ decreases. Intuitively, this makes sense, since we expect the probability of no customers in the system to decrease as the arrival rate increases. Therefore, to further compare the approximate entropy approaches, we next conduct a sensitivity analysis to investigate the effect of $\lambda$ on the percentage errors of $p_0$. We also explore the effect of other parameters, namely $c$, $\mu_1$ and $\mu_2$. The parameters $\alpha_i$ and $\beta_i$ do not seem to have any effect on the deviations. For the sensitivity analysis, we keep the base values of Example 1 and change one parameter at a time.
Effect of λ on the Percentage Error of p 0 .
Table 5 shows the values of p 0 calculated using the three methods, while Figure 3 shows the variations of the percentage errors as λ changes.
We read two points from Table 5: First, the approximation results obtained using Entropy 1&2 are clearly better than the ones obtained by the two other methods. Second, the accuracy of the best method decreases as $\lambda$ increases.
Figure 3 shows that Entropy 1&2 always has the lowest $PE$; however, there are values of $\lambda$ for which $PE_1 < PE_2$. In other words, if we are using a single moment, then it is better to use the first moment for small values of $\lambda$ and the second moment for larger values of $\lambda$.
Effect of μ 1 on the Percentage Error of p 0 .
The results are summarized in Table 6 and Figure 4.
We read three points from Table 6: First, the approximation results obtained using Entropy 1&2 are clearly better than the ones obtained by the two other methods. Second, Entropy 1 is much better than Entropy 2; that is, if we are using a single moment, then it is better to use the first moment than the second moment. Third, the accuracy of all three methods improves as $\mu_1$ increases.
We can see from Figure 4 that we always have $PE_{1\&2} < PE_1 < PE_2$, which confirms the conclusions drawn from Table 6 above.
Effect of μ 2 on the Percentage Error of p 0 .
The results are summarized in Table 7 and Figure 5.
We read three points from Table 7: First, the approximation results obtained using Entropy 1&2 are much better than the ones obtained by the two other methods. Second, if we are using a single moment, then it is better to use Entropy 1 than Entropy 2. Third, the accuracy of the best method improves as $\mu_2$ increases, while the accuracy of the other two methods deteriorates.
Again, observe from Figure 5 that we always have $PE_{1\&2} < PE_1 < PE_2$, which confirms the conclusions drawn from Table 7 above.
Effect of c on The Initial Probability Vector P i .
The results are summarized in Table 8 and Figure 6. Superiority of Entropy 1&2 is demonstrated for all values of c.
We read three points from Table 8 and Figure 6: First, the approximation results obtained using Entropy 1&2 are clearly better than the ones obtained by the two other methods. Second, if we are using a single moment, then it is better to use Entropy 2 than Entropy 1 for large values of $c$, while for small values of $c$ there is no big difference between the two methods. Third, the accuracy of all three methods deteriorates as $c$ increases.
From the sensitivity analysis above, we conclude that if a single moment is used to estimate the probabilities, then it makes a difference whether the first moment or the second moment is used. Moreover, the more information we feed the maximum entropy technique, the more accurate the results are. Although we did not carry it out, we conjecture that including the third moment would further confirm our finding that providing more information results in higher accuracy.

4. Conclusions

An analytical method and the maximum entropy principle are used in this paper to calculate the steady-state initial probabilities of the number of customers in a Markovian queueing system. The entropy solution is further improved by including second-moment information. Once the analytical and entropy solutions are found to be in agreement, the entropy solution is used to obtain the tail probabilities of the number of customers in the system. These probabilities cannot be obtained analytically.
The number of customers in the system is a discrete random variable. This paper can be followed by one in which a continuous random variable, such as the waiting time or the busy period, is studied. In that case, the probability density function, instead of the probability mass function, needs to be calculated.

Author Contributions

All authors contributed equally to this article. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deanship of Scientific Research at King Saud University for funding the work through the research group Project No. RGP-024.

Acknowledgments

The authors would like to thank the referees for the complete reading of the first version of this work and for their suggestions allowing us to improve the presentation of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
2. Kapur, J.N. Maximum Entropy Models in Science and Engineering; Wiley Eastern Limited: New Delhi, India, 1989.
3. Bechtold, W.R.; Medlin, J.E.; Weber, D.R. PCM Telemetry Data Compression Study, Phase 1 Final Report; Prepared by Lockheed Missiles & Space Company, Sunnyvale, California, for Goddard Space Flight Center, Greenbelt, Maryland, 1965. Available online: https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19660012530.pdf (accessed on 6 July 2020).
4. Yen, T.-C.; Wang, K.-H.; Chen, J.-Y. Optimization Analysis of the N Policy M/G/1 Queue with Working Breakdowns. Symmetry 2020, 12, 583.
5. Shah, N.P. Entropy Maximisation and Queues with or without Balking. Ph.D. Thesis, School of Electrical Engineering and Computer Science, Faculty of Engineering and Informatics, University of Bradford, Bradford, UK, 2014.
6. Singh, C.J.; Kaur, S.; Jain, M. Unreliable server retrial G-queue with bulk arrival, optional additional service and delayed repair. Int. J. Oper. Res. 2020, 38, 82–111.
7. She, R.; Liu, S.; Fan, P. Recognizing information feature variation: Message importance transfer measure and its applications in big data. Entropy 2018, 20, 401.
8. Giri, S.; Roy, R. On NACK-based rDWS algorithm for network coded broadcast. Entropy 2019, 21, 905.
9. Lin, W.; Wang, H.; Deng, Z.; Wang, K.; Zhou, X. State machine with tracking tree and traffic allocation scheme based on cumulative entropy for satellite network. Chin. J. Electron. 2020, 29, 185–189.
10. Bounkhel, M.; Tadj, L.; Hedjar, R. Steady-state analysis of a flexible Markovian queue with server breakdowns. Entropy 2019, 21, 259.
11. Luenberger, D.G.; Ye, Y. Introduction to Linear and Nonlinear Programming, 4th ed.; Springer: Cham, Switzerland, 2016.
Figure 1. Initial probability vectors comparison (left λ = 0.5 and right λ = 5.5 ).
Figure 2. Initial and tail probability vectors.
Figure 3. Effect of λ on the percentage errors of p 0 .
Figure 4. Effect of μ 1 on the percentage errors of p 0 .
Figure 5. Effect of μ 2 on the percentage errors of p 0 .
Figure 6. Effect of c on the average percentage error.
Table 1. Initial probability vectors comparison (c = 5).

λ = 0.5        Exact     Approx. 1   PE_1
p_0            0.7519    0.7265      0.0338
p_1            0.1876    0.1794      0.0441
p_2            0.0465    0.0653      0.4049
p_3            0.0112    0.0206      0.8346
p_4            0.0024    0.0065      1.6986
Average PE_1                         0.6032

λ = 5.5        Exact     Approx. 1   PE_1
p_0            0.1077    0.2349      1.1815
p_1            0.1561    0.2147      0.3748
p_2            0.1776    0.1438      0.1906
p_3            0.1799    0.1023      0.4316
p_4            0.1572    0.0727      0.5373
Average PE_1                         0.5432
Table 2. Initial probability vectors comparison (c = 5).

λ = 0.5        Exact     Appr. 1   PE_1      Appr. 2   PE_2
p_0            0.7519    0.7265    0.0338    0.6560    0.1275
p_1            0.1876    0.1794    0.0441    0.2892    0.5414
p_2            0.0465    0.0653    0.4049    0.0523    0.1237
p_3            0.0112    0.0206    0.8346    0.0025    0.7810
p_4            0.0024    0.0065    1.6986    0.0000    0.9858
Average: PE_1 = 0.6032, PE_2 = 0.5119

λ = 5.5        Exact     Appr. 1   PE_1      Appr. 2   PE_2
p_0            0.1077    0.2349    1.1815    0.0391    0.6372
p_1            0.1561    0.2147    0.3748    0.0608    0.6108
p_2            0.1776    0.1438    0.1906    0.1387    0.2191
p_3            0.1799    0.1023    0.4316    0.1112    0.3821
p_4            0.1572    0.0727    0.5373    0.0815    0.4812
Average: PE_1 = 0.5432, PE_2 = 0.4661
Table 3. Initial probability vectors comparison (λ = 0.5).

          Exact     Entropy 1   PE_1      Entropy 2   PE_2      Entropy 1&2   PE_1&2
p_0       0.7519    0.7265      0.0338    0.6560      0.1275    0.7550        0.0040
p_1       0.1876    0.1794      0.0441    0.2892      0.5414    0.1795        0.0431
p_2       0.0465    0.0653      0.4049    0.0523      0.1237    0.0527        0.1324
p_3       0.0112    0.0206      0.8346    0.0025      0.7810    0.0108        0.0346
p_4       0.0024    0.0065      1.6986    0.0000      0.9858    0.0018        0.2421
Average: PE_1 = 0.6032, PE_2 = 0.5119, PE_1&2 = 0.0912
Table 4. Initial probability vectors comparison (λ = 5.5).

          Exact     Entropy 1   PE_1      Entropy 2   PE_2      Entropy 1&2   PE_1&2
p_0       0.1077    0.2349      1.1815    0.0391      0.6372    0.0878        0.1843
p_1       0.1561    0.2147      0.3748    0.0608      0.6108    0.2167        0.3881
p_2       0.1776    0.1438      0.1906    0.1387      0.2191    0.1716        0.0341
p_3       0.1799    0.1023      0.4316    0.1112      0.3821    0.1423        0.2090
p_4       0.1572    0.0727      0.5373    0.0815      0.4812    0.1021        0.3503
Average: PE_1 = 0.5432, PE_2 = 0.4661, PE_1&2 = 0.2332
Table 5. Effect of λ on p_0.

λ      Exact     Entropy 1   Entropy 2   Entropy 1&2
0.2    0.5341    0.5393      0.4366      0.5352
0.4    0.2894    0.3826      0.2739      0.2673
0.6    0.1937    0.3189      0.1279      0.1618
0.8    0.1468    0.2794      0.1256      0.1169
1.0    0.1183    0.2486      0.0111      0.0949
1.2    0.0986    0.2219      0.1435      0.0821
1.4    0.0838    0.1974      0.1330      0.0733
1.6    0.0721    0.1744      0.0880      0.0660
1.8    0.0627    0.1523      0.0339      0.0597
Table 6. Effect of μ_1 on p_0.

μ_1    Exact     Entropy 1   Entropy 2   Entropy 1&2
0.1    0.3312    0.4124      0.0136      0.3065
0.2    0.5429    0.5306      0.0126      0.5349
0.3    0.6756    0.6353      0.5493      0.6730
0.4    0.7526    0.7131      0.6338      0.7517
0.5    0.8009    0.7685      0.7000      0.8007
0.6    0.8337    0.8080      0.7520      0.8337
0.7    0.8573    0.8369      0.7925      0.8574
0.8    0.8751    0.8587      0.8237      0.8752
0.9    0.8889    0.8756      0.8478      0.8890
1.0    0.9000    0.8890      0.8665      0.9001
Table 7. Effect of μ_2 on p_0.

μ_2    Exact     Entropy 1   Entropy 2   Entropy 1&2
4      0.7516    0.7422      0.6861      0.7586
5      0.7518    0.7299      0.6623      0.7558
6      0.7520    0.7241      0.6516      0.7544
7      0.7521    0.7210      0.6460      0.7536
8      0.7522    0.7190      0.6426      0.7531
9      0.7523    0.7177      0.6404      0.7528
10     0.7523    0.7168      0.6389      0.7525
11     0.7523    0.7161      0.6378      0.7524
12     0.7524    0.7155      0.6370      0.7523
13     0.7524    0.7151      0.6364      0.7522
Table 8. Effect of c on the initial probability vector P_i.

c = 3          p_0      p_1      p_2
Exact          0.7687   0.1863   0.0400
Entropy 1      0.7582   0.1707   0.0575
Entropy 2      0.7052   0.2625   0.0318
Entropy 1&2    0.7701   0.1824   0.0438

c = 4          p_0      p_1      p_2      p_3
Exact          0.7562   0.1876   0.0453   0.0097
Entropy 1      0.7348   0.1773   0.0633   0.0197
Entropy 2      0.6723   0.2810   0.0449   0.0017
Entropy 1&2    0.7582   0.1819   0.0500   0.0091

c = 5          p_0      p_1      p_2      p_3      p_4
Exact          0.7519   0.1876   0.0465   0.0112   0.0024
Entropy 1      0.7265   0.1794   0.0653   0.0206   0.0065
Entropy 2      0.6560   0.2892   0.0523   0.0025   0.0000
Entropy 1&2    0.7550   0.1795   0.0527   0.0108   0.0018

c = 6          p_0      p_1      p_2      p_3      p_4      p_5
Exact          0.7506   0.1876   0.0468   0.0116   0.0028   0.0006
Entropy 1      0.7236   0.1801   0.0662   0.0209   0.0066   0.0021
Entropy 2      0.6488   0.2927   0.0557   0.0028   0.0000   0.0000
Entropy 1&2    0.7544   0.1777   0.0536   0.0117   0.0022   0.0003

c = 7          p_0      p_1      p_2      p_3      p_4      p_5      p_6
Exact          0.7502   0.1875   0.0469   0.0117   0.0029   0.0007   0.0001
Entropy 1      0.7224   0.1804   0.0665   0.0211   0.0067   0.0021   0.0007
Entropy 2      0.6457   0.2941   0.0571   0.0030   0.0000   0.0000   0.0000
Entropy 1&2    0.7545   0.1766   0.0540   0.0121   0.0023   0.0004   0.0001

c = 8          p_0      p_1      p_2      p_3      p_4      p_5      p_6      p_7
Exact          0.7500   0.1875   0.0469   0.0117   0.0029   0.0007   0.0002   0.0000
Entropy 1      0.7219   0.1805   0.0668   0.0211   0.0067   0.0021   0.0007   0.0002
Entropy 2      0.6445   0.2947   0.0577   0.0031   0.0001   0.0000   0.0000   0.0000
Entropy 1&2    0.7547   0.1761   0.0541   0.0122   0.0024   0.0004   0.0001   0.0000
