**4. Discussion**

In Table 3, the bold-faced GEs mark the best GE obtained by either the corresponding reference approach or the proposed PMP hard computing approach. Of the 39 test incidences, no best published GE values are available in the literature for seven of them under the corresponding conditions of *p* and *U*: four incidences of problem 22 and three extra test incidences of problems 11, 23, and 24. These seven instances therefore cannot be used for performance comparison between the reference approaches and the proposed PMP approach; they are reported only for the purpose of future comparison.

According to the computational results presented in Table 3, for six incidences (6/32 = 18.8%) of problems 1, 2, 4, 7, 10, and 26, both approaches yield the same GEs. For nine incidences (9/32 = 28.1%) of problems 3, 6, 8, 10, 13, 15, and 20, the reference approaches yield better GEs than the PMP approach. For 17 incidences (17/32 = 53.1%) of problems 2, 5, 7, 9, 11, 12, 14, 16, 17, 18, 19, 21, 23, 24, and 25, the PMP approach yields better GEs than the reference approaches. In summary, the proposed PMP hard computing approach outperforms the existing soft and hard computing approaches on 53.1% of the small- to intermediate-sized GCF incidences available in the literature. Furthermore, the proposed PMP approach finds better alternative solutions, not found by the reference approaches, with values of *p* and *U* that differ from those of the original problems 11, 23, and 24.
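The GE values compared above can be reproduced once a cell configuration is known. The excerpt does not restate the GE formula, so the sketch below assumes the standard grouping-efficacy definition from the cell-formation literature, GE = (e − e_out)/(e + e_in), where e is the total number of 1s in the machine-part incidence matrix, e_out the exceptional elements (1s outside the diagonal blocks), and e_in the voids (0s inside the diagonal blocks); the function name and argument layout are illustrative, not the paper's:

```python
def grouping_efficacy(matrix, machine_cells, part_families):
    """Grouping efficacy GE = (e - e_out) / (e + e_in).

    matrix        : 0/1 machine-part incidence matrix (list of rows)
    machine_cells : cell label assigned to each machine (row)
    part_families : family label assigned to each part (column)
    """
    e = sum(sum(row) for row in matrix)  # total number of 1s
    e_out = 0                            # exceptional elements
    e_in = 0                             # voids
    for i, row in enumerate(matrix):
        for j, a in enumerate(row):
            inside = machine_cells[i] == part_families[j]
            if a == 1 and not inside:
                e_out += 1               # a 1 outside its diagonal block
            elif a == 0 and inside:
                e_in += 1                # a 0 inside its diagonal block
    return (e - e_out) / (e + e_in)
```

A perfectly block-diagonal matrix yields GE = 1; every exceptional element or void pulls the value below 1, which is why the discussion treats higher GE as better.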

Regarding the computing time required to implement model 3, the proposed PMP hard computing approach reached the global optimum within 0.3 s even for the large incidence of problem 24, which involves 3025 binary variables. The execution times of the subsequent procedures for part assignment and reassignment of improperly assigned EMs/RMs were also negligible: both procedures terminated within 0.1 s for all incidences.

The proposed PMP hard computing approach has also been applied to larger GCF incidences in order to show how efficiently it solves large-sized instances. To the best of our knowledge, problem 24 is the largest open GCF incidence available in the literature and hence the largest that can be used for comparison across different CF solution approaches. Some authors [61,81] tested the efficiency of their CF solution approaches on randomly generated incidences in order to demonstrate scalability. However, they did not provide solutions showing an explicit configuration of machine cells and associated part families, so no best published GE values are available for benchmark testing against such randomly generated incidences.

To show how efficiently the proposed PMP hard computing approach solves even larger GCF incidences, we have chosen to double-expand some of the original large incidences rather than use randomly generated ones. For this purpose, problems 8 and 21 to 24 were selected and expanded. A typical strategy for randomly generating large-sized matrices is to first create an ideal block-diagonal structure and then gradually destroy it with random flips, changing 1s in the diagonal blocks into 0s and 0s in the off-diagonal blocks into 1s [34]. The more a matrix is flipped, the more random it becomes. The large GCF incidences generated through double-expansion in our computational experiment, however, are not randomly flipped, so the resulting matrices do not approach completely random ones. By avoiding random flips over the expanded matrices, we can test both the efficiency and the robustness of the proposed PMP approach. The only element randomly scrambled is the order of the part numbers.
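The double-expansion just described can be sketched as follows. The exact construction is not spelled out in this excerpt, so the sketch assumes the natural reading consistent with the reported doubling of machine counts: place two copies of the original incidence matrix as diagonal blocks of a matrix twice the size, apply no random flips, and scramble only the column (part) order; the function name and `seed` parameter are illustrative:

```python
import random

def double_expand(matrix, seed=None):
    """Double-expand a 0/1 machine-part incidence matrix.

    Two copies of the original matrix become the diagonal blocks of a
    matrix with twice as many machines and parts; all off-diagonal
    entries stay 0 (no random flips). Only the part order is scrambled.
    """
    m, n = len(matrix), len(matrix[0])
    big = [[0] * (2 * n) for _ in range(2 * m)]
    for i in range(m):
        for j in range(n):
            big[i][j] = matrix[i][j]          # first copy: top-left block
            big[m + i][n + j] = matrix[i][j]  # second copy: bottom-right block
    cols = list(range(2 * n))
    random.Random(seed).shuffle(cols)         # scramble part numbers only
    return [[row[j] for j in cols] for row in big]
```

Because no 1s are destroyed and no 0s are created inside the blocks, the expanded matrix retains an exact (hidden) block-diagonal solution, which is what makes these instances a clean test of whether the approach can recover structure rather than merely tolerate noise.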

Table 4 shows the double-expanded incidences and the computational results obtained with them. The numbers of machines of the expanded incidences vary from 60 to 110, and the numbers of binary variables therefore vary from 3600 to 12,100. As far as the author knows, few mathematical models have been implemented to optimally solve large-sized GCF incidences with such a large number of binary variables using a hard computing approach. According to Table 4, model 3 reaches the global optimum for all the expanded incidences within one second, and the subsequent procedures for part assignment and reassignment of improperly assigned EMs/RMs terminated within 0.2 s. This clearly demonstrates the computational efficiency of the proposed PMP hard computing approach for large-sized GCF problems. Furthermore, except for incidence 2, which was double-expanded from problem 21, the double-expanded incidences yielded even better GEs than the corresponding original incidences before expansion. This reveals the robustness of the proposed PMP approach when applied to large-sized GCF problems.
