Review

Predicting Work-in-Process in Semiconductor Packaging Using Neural Networks: Technical Evaluation and Future Applications

1 College of Business, National Taipei University of Business, Taipei 100025, Taiwan
2 Powertech Technology Inc., Hsinchu 303035, Taiwan
3 Department of Accounting Information, National Taipei University of Business, Taipei 100025, Taiwan
4 JHJ School of Business, Texas Southern University, Houston, TX 77004, USA
* Author to whom correspondence should be addressed.
Electronics 2024, 13(21), 4275; https://doi.org/10.3390/electronics13214275
Submission received: 21 September 2024 / Revised: 23 October 2024 / Accepted: 29 October 2024 / Published: 31 October 2024
(This article belongs to the Special Issue Feature Review Papers in Electronics)

Abstract:
This review paper focuses on the application of neural networks in semiconductor packaging, particularly examining how the Back Propagation Neural Network (BPNN) model predicts the work-in-process (WIP) arrival rates at various stages of semiconductor packaging processes. Our study demonstrates that BPNN models effectively forecast WIP quantities at each processing step, aiding production planners in optimizing machine allocation and thus reducing product manufacturing cycles. This paper further explores the potential applications of neural networks in enhancing production efficiency, forecasting capabilities, and process optimization within the semiconductor industry. We discuss the integration of real-time data from manufacturing systems with neural network models to enable more accurate and dynamic production planning. Looking ahead, this paper outlines prospective advancements in neural network applications for semiconductor packaging, emphasizing their role in addressing the challenges of rapidly changing market demands and technological innovations. This review not only underscores the practical implementations of neural networks but also highlights future directions for leveraging these technologies to maintain competitiveness in the fast-evolving semiconductor industry.

1. Introduction

With the evolving demand patterns in the consumer electronics market, there is a noticeable increase in consumer demand for memory capacity and performance [1]. This trend is not limited to mainstream products; it is equally significant for niche and diversified items. Given the shifting needs of end products, the semiconductor industry has confronted major challenges such as rapidly changing market conditions, shortened product lifecycles, and higher capital investments in production capacity. These challenges often make it more difficult to accurately forecast production capacity or meet unexpected changes in customer orders [2]. In response, companies must continuously innovate in the areas of management and technology. As Longauer et al. [3] highlight, the interplay between learning-by-doing and make-or-buy decisions is critical for maintaining resilience and productivity in the semiconductor industry.
In fact, Moore’s Law [4] suggests that electronic product integration capabilities and speed are continuously improving. However, unfortunately, IC semiconductor design and manufacturing have not kept pace with this trend. To this end, the industry has begun to study and implement packaging technology enhancements and also adopt structural modifications to bridge this gap [5,6]. Among these, 3D stacked packaging technology, as illustrated in Figure 1, has proven to be a prevalent approach. This method allows for stacking multiple chips, significantly boosting memory capacity and enhancing the resulting performance.
Nonetheless, during the packaging process, the variability in manufacturing cycles and the choice of different equipment and techniques do elevate the complexity [7]. For instance, specialized wire bonding machines are required for achieving the connections when stacking multiple chips [8], as depicted in Figure 2. This challenge or bottleneck, often occurring during the chip stacking and wire bonding stages, complicates the forecasting of production capacity and meeting delivery timelines, as illustrated in Figure 3. In essence, traditional computation methods are no longer adequate and sufficient for meeting the scenarios discussed above.
Within the supply chain, the further upstream an industry is positioned, the more pronounced the fluctuations in market demand it experiences, a phenomenon known as the bullwhip effect [9]. Since semiconductor packaging is relatively upstream in the electronic product supply chain, it bears the brunt of these demand fluctuations. Furthermore, the longer the manufacturing cycle of wafer packaging, the more pronounced the bullwhip effect becomes. To counter this issue, packaging manufacturers must reduce their extended manufacturing cycle times as shifting market demand for consumer electronics drives increased product diversity and shortened product lifecycles. This reduction is essential to help customers, and the supply chain as a whole, shorten their response times to demand changes.
Beyond its significance to the electronic product supply chain, reducing manufacturing cycle time is also essential for wafer packaging manufacturers to maintain their competitive edge. Shorter manufacturing cycles can reduce work-in-process inventory, minimize storage space requirements, and ultimately enhance the yield of processes by shortening engineering response times [2]. As such, packaging manufacturers must pay close attention to and continuously refine their production processes to adapt to dynamic market demand changes and remain competitive.
From the research background discussed above, it is evident that the rapid changes in the consumer electronics market have created numerous challenges for the semiconductor industry, such as market supply–demand imbalances and increased complexity in manufacturing processes. These factors negatively impact production capacity planning, where control of the manufacturing cycle time is increasingly crucial. Shortening this cycle not only allows for quicker responses to market shifts but also bolsters a company’s competitive advantage. Further, within semiconductor packaging processes, several production constraints may exist. For instance, certain products or process steps can only be executed by specific machines. Moreover, the waiting time for certain process steps, known as Window Gap or Q-time, should not exceed the process operation time. These limitations suggest that traditional queuing theory models might not be apt for this context.
This study emphasizes designing a model that can accurately predict the arrival rate of work-in-process (WIP) at various processing points within the semiconductor packaging process. Predictions obtained from this model can be employed to inform the early allocation of the appropriate number of machines for the production line, thereby reducing the manufacturing cycle time. By doing so, it will aid various businesses to easily adapt to market changes and thus maintain their competitive edge. In today’s rapidly advancing information technology era, the semiconductor industry has faced increasing manufacturing and management challenges. To meet these challenges, this research aims to delve deeper into the understanding of each processing point of the semiconductor’s production and accordingly propose more efficient management strategies. We gathered historical WIP data from the semiconductor packaging process and used the neural network technique to construct a model to predict the arrival rate of WIP at each processing point. As modern factories have implemented various manufacturing systems to monitor machine production statuses in a real-time manner, this research also integrates real-time machine data with transaction data to conduct the analysis. To this end, it aids in studying the arrival rate of WIP, enabling production staff to plan machine allocation effectively and ensure optimal capacity utilization with more precision.
Taking into account the multiple variables involved in the manufacturing process, such as machine quantities, machine-specific process limitations, and process waiting times, this research explicitly employs the Back Propagation Neural Network (BPNN) model to perform the analysis. Specifically, it investigates the relationships among the variables, including WIP inventory, machine quantities, and WIP arrival rates. After model training, testing, and validation, we can periodically use the BPNN model to simulate current production information and machine configurations, thereby predicting the WIP arrival rate at each processing point. Based on these simulation outcomes, predictions about station arrival times, aggregated WIP estimation times for each station, production time ratios, scheduling, manufacturing time (Cycle Time), and delivery time predictions for capacity can be made accordingly. Through this comprehensive research approach, we not only provide the semiconductor industry with a practical predictive model but also aid their capability of quickly adapting to market changes, hence maintaining their competitive advantage.
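The prediction workflow described above can be sketched with a standard multilayer perceptron. The sketch below uses scikit-learn's MLPRegressor on synthetic stand-in data; the feature set (upstream WIP inventory, allocated machine count, queued lots), the coefficients, and all values are illustrative assumptions, not the study's production data or its exact input variables:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical training records: each row is (WIP inventory at the upstream
# station, number of machines allocated, queued lots); the target is the
# WIP arrival rate at the next station. Real inputs would come from the MES.
X = rng.uniform([50, 2, 5], [500, 20, 80], size=(600, 3))
y = 0.8 * X[:, 1] + 0.01 * X[:, 0] - 0.02 * X[:, 2] + rng.normal(0, 0.5, 600)

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# A back-propagation network: one hidden layer trained by gradient descent.
model = MLPRegressor(hidden_layer_sizes=(16,), solver="adam",
                     max_iter=2000, random_state=0)
model.fit(X_scaled, y)

# Predict the arrival rate for a new (hypothetical) production snapshot.
snapshot = scaler.transform([[300, 10, 40]])
print(float(model.predict(snapshot)[0]))
```

In practice, the model would be retrained periodically so that predictions track the current machine configuration, as the paragraph above describes.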

2. Related Research

2.1. Reducing Manufacturing Cycle Time

According to the concept of queuing theory, reducing a factory’s product cycle time includes two different approaches, namely enhancing production capacity and minimizing process duration [10]. Factors contributing to the variability in product arrival time may include machine downtimes, operator absences or delays, rework, batch production, and production machine constraints. The main objective of job scheduling and material control is to diminish the variability in product arrival times at individual factory machines [11]. Batch production strategies tend to influence capacity and cause fluctuations in the average effective production time. Most process improvements primarily place a focus on capacity enhancement, circumventing sporadic and prolonged downtimes, and alleviating machine-specific constraints [12]. Kriett et al. [13] mentioned that employing cycle time-oriented mid-term production planning can boost service levels and truncate cycle times in contrast to utilizing the WIP-oriented planning approach. In addition, Tirkel [14] asserted that reducing machine repair time is the most potent and cost-efficient approach to curtail wafer fabrication cycle times since it is a cardinal element impacting production variability.

2.2. Reducing Production Variability

A factory’s layout can be described as an ensemble of various machine functionalities and diverse product mixes. A production scheduling and planning system is needed to automate the assignment of products to different machines for manufacturing. In practical scenarios, some machines may have production constraints, which must be taken into account to maintain smooth production. In other words, such a system's effectiveness is acknowledged if it is adept at minimizing machine idle times, product wait times, and production cycle times. In semiconductor factories, typical scheduling decision support systems aim to minimize the cycle time while accounting for the finite wait times between workstations. Note that excessively delayed semiconductor wafers might undergo chemical alterations, potentially rendering the product unusable [15].
Moreover, data mining and big data analytics can be employed to enhance wafer yields, increase production rates, forecast actual demand [16], and discern the potential impacts of factors by considering vast amounts of information. While there is considerable research focusing on shortening the manufacturing cycle time in factories, there has been little exploration of the impact of forecasting the arrival rate of work-in-process between various processes to schedule machine counts preemptively. From the above discussion, data analysts can develop data-driven forecasting models to enhance output further, thereby sidestepping mistakes historically rooted in the subjective experiences of manufacturing supervisors. Taking future disturbance forecasts into account can effectively mitigate compounded errors [2], allowing the contract manufacturers to devise stable production strategies. Grounded on the results from big data analytics, factories can also establish domain-specific databases to effectively manage production. Furthermore, when introducing new processes or products, managers can leverage historical data and use the models as references to swiftly pinpoint problems and identify critical factors.

2.3. Neural Networks

Neural network techniques are widely used across semiconductor packaging, in areas such as defect detection, fault diagnosis, process optimization, and quality control.
  • Neural Network Applications in Semiconductor Production and Inspection
Neural networks have been identified as an instrumental tool in the semiconductor industry for production forecasting and post-process inspection. For example, Huang [17] pioneered this realm, highlighting the efficacy of neural networks in predicting production performances of local DRAM wafer fabrications. Furthermore, Su et al. [18] ventured into utilizing these networks for post-sawing semiconductor wafer inspections, providing an optimized solution with inspection times plummeting to less than a second per die. The effectiveness of these aforementioned models in real-world applications underscores their importance and potential value to drive cost savings and heighten product quality.
  • Innovative Chip Designs and Simulation Efficiency
Diving deeper into neural networks’ capabilities, Kim et al. [19] championed a neuromorphic chip design based on spiking neural networks. By emulating the efficiencies of biological neural systems and implementing them on FPGA platforms, they showcased the adaptability of these networks in handling varied image processing tasks. Moreover, Han et al. [20] studied semiconductor device simulation, deploying trained neural networks to deduce approximate solutions under specific bias conditions. Such endeavors accentuate the role of neural networks in reducing computational costs and fast-tracking semiconductor technology advancements.
  • Deep Learning and Bayesian Network Approaches for Defect Detection
As related to the area of defect detection, Wang et al. [21] developed an etching process inference system by harnessing the Bayesian networks, which was capable of seamlessly processing both discrete and continuous variables. On the other hand, a focus placed on deep learning has shown promising results in defect detection. For instance, Wang et al. [22] introduced an attention-mechanism-based deep learning framework, significantly improving chip-surface-defect detection accuracy and efficiency. Similarly, Schlosser et al. [23] proposed a hybrid system, integrating traditional computer vision with deep learning neural networks to discern minuscule defect patterns in semiconductor imagery. Additionally, Wang et al. [24] took a step further, proposing a three-dimensional convolutional neural network-based classification model adept at recognizing defects on wafers.
  • Advanced Neural Network Architectures in Manufacturing
Kusiak [25] offers a broader perspective on integrating advanced neural network architectures in manufacturing. By elucidating the potential of convolutional and generative adversarial neural networks, this aforementioned study paints a roadmap for identifying research voids and incites machine learning advancements in diverse manufacturing niches. Echoing this sentiment, Shin and Yoo [26] accentuated the efficiency of convolutional neural network models in the semiconductor wafer bin map classification, setting new standards for performance and resource utilization.
The literature, as discussed above, underscores the profound influence of neural networks and deep learning in revolutionizing the semiconductor industry, ranging from production processes and chip design to defect detection and manufacturing innovations.

3. Research Methodology

3.1. Problem Statement

In the contemporary semiconductor manufacturing industry, precise production planning is pivotal for enhancing productivity and minimizing wastage. Production planners typically evaluate and forecast the WIP (Work-In-Process) arrival rate at the workstations based on the WIP inventory volume received from the upstream processes, usually measured as the standard manufacturing cycle time. Based on these scenarios, they further devise the most appropriate machine allocation strategy in accordance with the current inventory levels and anticipated product arrival rates.
However, this aforementioned method possesses a major shortcoming. When predictions of product arrival rates primarily hinge on the accuracy of the standard manufacturing cycle, any minor discrepancies that occur in this standard cycle can lead to inaccurate forecasts. It is notable that the standard manufacturing cycle is typically derived under an idealized condition and the negligence of such external disturbances as equipment malfunctions and/or variability in association with employee skillsets. Consequently, the actual arrival rate of WIP in a real-world manufacturing setting might deviate greatly from the predicted values.
Given the above challenges, this study aims to develop a model rooted in a BPNN (Back Propagation Neural Network) specifically designed to forecast the WIP arrival rate. Leveraging this model, production line personnel can allocate the required number of machines with greater precision, culminating in an optimized production setup. Additionally, through this predictive model, enterprises can periodically adjust the machine count at various process stations based on the model’s forecasting results. By doing so, it tends to further optimize the overall product manufacturing cycle, realizing more efficient production workflows, and hence curtailing unnecessary resource wastage.

3.2. Model Development

The Back Propagation Neural Network (BPNN) stands out as a premier model in contemporary neural network learning paradigms, boasting a broad spectrum of applications and research foundations in this field. Indeed, the majority of the academic literature underscores its profound academic and practical application value. In a 2003 study, Chen [27] categorized the estimation methodologies for the work cycle time of semiconductor manufacturing plants into six main types, namely Multiple Factor Linear Regression (MFLR), Production Simulation (PS), BPN, Case-Based Reasoning (CBR), Fuzzy Modeling Methods, and Hybrid Methods. When dealing exclusively with sample data, MFLR demonstrates superior results. Nevertheless, considering the rapid advancements in technology and the gradual reduction in costs, most technology firms now possess big data processing capabilities. Consequently, full data computation has become feasible. Further, in a 2009 publication, Chen et al. [28] highlighted that, if complete plant-level data can be accessed, both the accuracy and efficiency of result predictions would be significantly enhanced.
The core operating mechanism of a BPNN is simply based on the Gradient Steepest Descent Method to minimize the error function. Among the myriad neural network models, a BPNN exhibits distinctive advantages, including the following:
  • Inclusion of Hidden Layers. This enables the network to articulate the interactions and influences among input units.
  • Use of Smooth and Differentiable Transfer Functions. This ensures the network can derive weight adjustment formulas with ease using the steepest descent method.
  • High Learning Precision. This strength enables it to tackle complex sample recognition and highly non-linear function simulation issues.
  • Efficient Data Recall Speed.
  • Wide Application Spectrum.
Owing to its continuous output values, a BPNN is particularly suitable for handling tasks ranging from sample recognition and classification to function simulation, adaptive control, noise filtering, data compression, and expert systems development. In other words, its application scope far surpasses traditional academic boundaries.
Moreover, the Back Propagation Neural Network, as a sophisticated deep learning model, has held an indispensable position in recent research. As detailed in Figure 4, the network comprises three primary components and each plays a specific and irreplaceable role. Based on the description of the backpropagation learning method in [29], we delve deeper into the structure and function of each layer, as discussed below.
  • Input Layer. For the front end of the network, its primary role is to receive external data and convert these into a format recognizable by the network. The number of processing units it encompasses is not fixed but may be dependent on the nature and complexity of the problem. For instance, in terms of image recognition, the input units might correspond to the pixels of an image, whereas in voice recognition, they might correlate to specific voice features.
  • Hidden Layer. Positioned between the input and output layers, this is the network’s core. Its main function is to extract features from the input data and carry out a series of transformations and computations. The number of processing units in this layer is very flexible, and the optimal count is typically determined through multiple experiments and validations. Notably, the hidden layer can be singular or multiple, with its depth and structure often influencing the network’s performance and learning outcomes.
  • Output Layer. The final stage of the network, its primary function is to generate the final prediction or classification results based on the computations obtained from the hidden layer. The number of processing units in this layer closely relates to the specific problem’s characteristics. For instance, in terms of the classification tasks, the number of output units might be equal to the number of target categories; whereas in regression tasks, it might be just a single output unit.
In summary, all three main components of the Back Propagation Neural Network are vital in ensuring the network’s proper functioning and efficient learning. Properly designing and adjusting these components can substantially enhance the network’s performance and its resulting prediction accuracy.
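The three-layer structure and steepest-descent weight updates described above can be condensed into a short NumPy loop. This is a generic textbook backpropagation sketch on toy regression data, not the study's actual model; the layer sizes, learning rate, and data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 2 input units -> 4 hidden units -> 1 output unit.
X = rng.uniform(-1, 1, size=(200, 2))
y = (0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.5).reshape(-1, 1)

W1 = rng.normal(0, 0.5, (2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)   # hidden -> output
lr = 0.5                                            # learning rate

for _ in range(2000):
    # Forward pass through the three layers.
    h = sigmoid(X @ W1 + b1)          # hidden activations (smooth transfer fn)
    out = h @ W2 + b2                 # linear output unit (regression)
    err = out - y                     # gradient of 0.5 * squared error
    # Backward pass: propagate the error gradient layer by layer.
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * h * (1 - h)   # chain rule through sigmoid derivative
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)
    # Steepest-descent weight updates.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

mse = float(np.mean((sigmoid(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(round(mse, 4))
```

The falling mean squared error illustrates the gradient steepest descent mechanism that the text attributes to the BPNN.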

4. Case Study

4.1. Environmental Description

In the semiconductor manufacturing industry, the effective prediction of the flow rate of work-in-process (WIP) is crucial for ensuring production efficiency and consequently reducing lead times.
From a time perspective, grasping the production time of the core stations helps ensure an effective flow rate of WIP. Therefore, in this study, we aim to provide a regression model to estimate Turnaround Time (TAT) and assist production line staff in conducting more effective equipment scheduling. Within this context, our research thoroughly evaluates the key processes and corresponding parameters in semiconductor manufacturing, aiming to provide a scientific and empirically based predictive model.
Table 1 lists the critical input variables and response used in our predictive model, which are crucial for determining the flow rate of WIP. Following detailed data collection, preliminary analysis, and valuable suggestions gathered from industry experts, we chose to simplify the in-house packaging process by dividing it into five core stations (i.e., Dicing, Die Bond, Wire Bond, Mold, and Ball Mount) as shown in Figure 5. This approach allowed us to concentrate on the steps that significantly impact the production flow.
For our predictive model, we adopted the BPNN as the foundational framework, as it has been proven to be an effective model for predicting the arrival rate of WIP at processing stations. Through in-depth interviews with professionals and drawing from their past practical experience and feedback, we carefully selected the input variables shown in Table 1, which play a pivotal role in determining the flow rate of the WIP.
To train and validate our model, we extracted data from the in-house production system between 1 October 2023 and 31 December 2023. After data cleaning, this yielded 5740 production data entries. To ensure effective model learning and prediction accuracy, we split data into three datasets: a training set, a validation set, and a testing set. The training set (68%) is used to train the regression model, the validation set (12%) is used to evaluate performance in model training, and the testing set (20%) is used to show the performance of new data inference. All these data were meticulously categorized based on the process station that fits into the previously defined five core stations (Table 2).
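The 68%/12%/20% split described above can be reproduced with two successive calls to scikit-learn's train_test_split: first carving off the test set, then splitting the remainder. The records below are random placeholders for the 5740 cleaned production entries, not the actual data:

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Stand-in for the 5740 cleaned production records (features + target).
X = rng.normal(size=(5740, 4))
y = rng.normal(size=5740)

# First carve off the 20% testing set, then split the remainder so the
# overall proportions are 68% train / 12% validation / 20% test.
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.12 / 0.80, random_state=0)

print(len(X_train), len(X_val), len(X_test))
```

In the study the split would additionally be stratified by the five core stations; plain random splitting is used here only to show the proportions.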

4.2. Model Validation

In terms of the prevailing approaches to predictive model evaluation, prior researchers commonly employ the Mean Absolute Percent Error (MAPE) as the principal metric to assess the accuracy of a prediction model in forecasting the arrival rate of work-in-process products at various processing stations. MAPE offers a quantitative means to gauge the model's error, enabling a clear understanding of its prediction accuracy. Should the predictive accuracy of the model fall short of expectations, we would revert to the initial design stage, re-evaluating the structure of the input variables and the integrity and quality of the data, or delving further into the architecture of the neural network.
The formula for the MAPE is:

MAPE = (1/n) Σ_{i=1}^{n} |(y_i − ŷ_i) / y_i|

where:

y_i = the true target value of data sample i
ŷ_i = the model’s predicted target value for data sample i
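A minimal implementation of this metric follows; the arrival-rate values are illustrative numbers, not data from the study:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, expressed as a percentage."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

# Illustrative arrival-rate values (lots/hour), not the study's data.
actual    = [120.0, 95.0, 110.0, 80.0]
predicted = [115.0, 100.0, 108.0, 84.0]
print(round(mape(actual, predicted), 2))  # → 4.06
```

Note that MAPE is undefined when a true value is zero, so stations with empty queues would need special handling.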
In Figure 6, we present the MAPE performance of the BPNN model after multiple training iterations. According to this figure, we can observe that as the number of training iterations increases, the MAPE value of the model decreases consistently. This scenario suggests that our model is progressively converging and has successfully learned the underlying patterns from the data.
To provide a more comprehensive perspective, we juxtaposed the SVM and BPNN models under the same configuration, computing their MAPE values at the different process stations. Table 3 shows the comparative analysis of the MAPE results of both models across various processing stations.
The data analysis presented in Figure 7 shows that the BPNN model consistently outperforms the SVM model when predicting for each validation set. Specifically, regardless of the process stations, the MAPE value achieved by the BPNN model is substantially lower than that of the SVM model. Such results offer a clear indication, underscoring the superior performance of the BPNN model for handling this prediction task.
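A rough sketch of this kind of side-by-side comparison can be set up with scikit-learn's MLPRegressor and SVR. The data, hyperparameters, and resulting error values below are assumptions for illustration only and will not reproduce the figures in Table 3:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Synthetic stand-in for one station's records: features -> arrival rate.
X = rng.uniform(0, 1, size=(800, 3))
y = 30 * X[:, 0] + 10 * np.sin(6 * X[:, 1]) + 5 * X[:, 2] + 50

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

def mape(y_true, y_pred):
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

# Train both models under the same configuration and compare MAPE.
bpnn = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                    random_state=0).fit(X_tr, y_tr)
svm = SVR(kernel="rbf", C=10).fit(X_tr, y_tr)

print("BPNN MAPE: %.2f%%" % mape(y_te, bpnn.predict(X_te)))
print("SVM  MAPE: %.2f%%" % mape(y_te, svm.predict(X_te)))
```

Which model wins on synthetic data depends entirely on the data-generating process; the sketch only shows the evaluation protocol, not the study's conclusion.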
In real-world manufacturing, many products require multi-layered chip stacking, implying that they repeatedly undergo the die-attach and wire-bonding processes, and considerable expert technology and monitoring mechanisms are needed to ascertain stable relationships between manufacturing parameters and actual performance [31]. These stations therefore record more data, and this repetition, combined with the waiting-time constraints on the production line, likely contributes to the model’s higher MAPE values for the Die Bond and Wire Bond processes.
In summary, while the BPNN model generally exhibits superior predictive performance compared to the SVM model, there is still room for improvement in the prediction accuracy for specific processes. This fact necessitates the need to perform further exploration and other optimization on our part.

4.3. Computational Consumption

The hardware used for training the BPNN model consists of an Intel 12th Gen Core i7-12700 CPU with 12 cores (8 Performance cores and 4 Efficient cores, 20 threads) and a maximum single-core frequency of 4.9 GHz, manufactured by Intel Corporation, headquartered in Santa Clara, California, USA. The system also includes 32 GB of RAM and an NVIDIA GeForce RTX 3090 graphics card, manufactured by NVIDIA Corporation, also headquartered in Santa Clara, California, USA.

5. Conclusions

In the semiconductor packaging manufacturing industry, a shorter manufacturing cycle time is not only pivotal for enhancing production efficiency but also a decisive factor in determining whether a company can maintain its edge in an aggressively competitive market. Consequently, this research used the BPNN model to predict the arrival rate of work-in-process (WIP) in the packaging processes. Validation through extensive real-world data affirmed the model’s superior predictive capability. Notably, the proposed model is more than just a theoretical construct, presenting tangible industrial value. With this model, production line personnel can better understand the WIP count at each processing station. Such insights allow for more informed equipment allocation decisions, effectively reducing the WIP level at each station and subsequently shortening the overall product manufacturing cycle time.
Now, we must address a pragmatic issue: possessing a prediction model alone is not a panacea for the myriad challenges faced by the manufacturing sector. When introducing the model into a company, a suite of standardized and systematic procedures becomes indispensable, ensuring that the model manifests its maximum potential in daily operations. Furthermore, even if the model initially exhibits stellar performance, we must periodically review and evaluate its efficacy to ensure consistent behavior. Should we detect declining performance or other issues, it becomes imperative to engage in an in-depth discussion with on-site technical experts to pinpoint the potential causes, whether they stem from intrinsic issues with the model or from externalities resulting from human interventions.
Table 4 below offers a daily demand forecast that is capable of predicting the equipment demand increment for the upcoming 15 days and creating a monthly model performance review table, as depicted in Table 5. To safeguard the company’s confidential information, these data are not factual but merely provided for the purposes of illustration. In the future, we also plan to employ BI software, such as Power BI, Tableau, or Qlik Sense, to further refine and enhance these tools, delivering more detailed and comprehensive information to help production line personnel take care of their tasks with ease.
In this study, we delved into the prediction methods for Work-In-Process (WIP) and found out that accuracy is influenced by various factors, resulting in production variations in the actual prediction outcomes. This scenario highlights that subsequent research on WIP predictions needs to encompass a broader array of variables to enhance prediction accuracy and reliability. Specifically, in real-world factory operations, certain specific factors such as whether the batch is marked as a rush order, the number of operators on duty on a specific day, and the actual maintenance status of the machines often have a direct or indirect impact on the manufacturing process, thereby affecting the actual WIP results.
Furthermore, a WIP prediction alone cannot fully meet a factory’s needs once production efficiency and resource allocation are taken into account. Future research should therefore develop complementary methods and provide recommendations for the optimal allocation of production resources. This concerns not only equipment setup and management but also human resource issues such as the deployment of operators and maintenance personnel. By managing these resources holistically, factories can allocate and utilize their assets more effectively, thereby enhancing production efficiency and product quality and achieving better operational outcomes. Such in-depth investigations can yield more insightful recommendations for manufacturing factories, helping them secure a competitive advantage in a fiercely competitive market environment.
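As one direction such research could take, a WIP forecast can feed directly into machine allocation. The sketch below, an assumption-laden illustration rather than a method from this study, splits a fixed machine pool across process stages in proportion to predicted WIP, assigning any leftover machines to the most heavily loaded stages; real allocation would also have to respect the machine-restriction and operator constraints discussed above.

```python
def allocate_machines(predicted_wip, total_machines):
    """Split a fixed machine pool across stages in proportion to predicted
    WIP; leftover machines from integer truncation go to the most heavily
    loaded stages first. Assumes total predicted WIP is positive."""
    total = sum(predicted_wip.values())
    alloc = {stage: int(total_machines * wip / total)
             for stage, wip in predicted_wip.items()}
    leftover = total_machines - sum(alloc.values())
    # Hand out the remainder to the stages with the largest predicted load.
    busiest = sorted(predicted_wip, key=predicted_wip.get, reverse=True)
    for stage in busiest[:leftover]:
        alloc[stage] += 1
    return alloc
```

For example, `allocate_machines({"Die Bond": 40, "Wire Bond": 35, "Mold": 25}, 12)` distributes all twelve machines, with the truncation remainder going to Die Bond.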

Author Contributions

Conceptualization, C.-T.W., S.-H.L. and D.C.Y.; methodology, C.-T.W. and S.-H.L.; validation, C.-T.W. and S.-H.L.; writing—original draft preparation, C.-T.W. and S.-H.L.; writing—review and editing, D.C.Y.; supervision, D.C.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

Author Chin-Ta Wu was employed by the company Powertech Technology Inc. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Yu, S.; Chen, P.Y. Emerging memory technologies: Recent trends and prospects. IEEE Solid-State Circuits Mag. 2016, 8, 43–56. [Google Scholar] [CrossRef]
  2. Chien, C.F.; Chen, Y.J.; Hsu, C.Y.; Wang, H.K. Overlay error compensation using advanced process control with dynamically adjusted proportional-integral R2R controller. IEEE Trans. Autom. Sci. Eng. 2013, 11, 473–484. [Google Scholar] [CrossRef]
  3. Longauer, D.; Vasvári, T.; Hauck, Z. Investigating make-or-buy decisions and the impact of learning-by-doing in the semiconductor industry. Int. J. Prod. Res. 2024, 62, 3835–3852. [Google Scholar] [CrossRef]
  4. Moore, G. Cramming more components onto integrated circuits. Proc. IEEE 1998, 86, 82–85. [Google Scholar] [CrossRef]
  5. Schaller, R.R. Moore’s law: Past, present and future. IEEE Spectr. 1997, 34, 52–59. [Google Scholar] [CrossRef]
  6. Theis, T.N.; Wong, H.S.P. The end of moore’s law: A new beginning for information technology. Comput. Sci. Eng. 2017, 19, 41–50. [Google Scholar] [CrossRef]
  7. Picard, M.; Mohanty, A.K.; Misra, M. Recent advances in additive manufacturing of engineering thermoplastics: Challenges and opportunities. RSC Adv. 2020, 10, 36058–36089. [Google Scholar] [CrossRef]
  8. Mobin, S.; Cui, C.; Rao, F. Statistical approach to analyze duty cycle jitter amplification in NAND flash memory system. In Proceedings of the 2018 IEEE 27th Conference on Electrical Performance of Electronic Packaging and Systems (EPEPS), San Jose, CA, USA, 14–17 October 2018; pp. 75–77. [Google Scholar]
  9. Lee, H.L.; Padmanabhan, V.; Whang, S. The bullwhip effect in supply chains. Sloan Manag. Rev. 1997, 38, 93–102. [Google Scholar] [CrossRef]
  10. Hopp, W.J.; Spearman, M.L.; Chayet, S.; Donohue, K.L.; Gel, E.S. Using an optimized queueing network model to support wafer fab design. IIE Trans. 2002, 34, 119–130. [Google Scholar] [CrossRef]
  11. Wein, L.M. Scheduling semiconductor wafer fabrication. IEEE Trans. Semicond. Manuf. 1988, 1, 115–130. [Google Scholar] [CrossRef]
  12. Akcalt, E.; Nemoto, K.; Uzsoy, R. Cycle-time improvements for photolithography process in semiconductor manufacturing. IEEE Trans. Semicond. Manuf. 2001, 14, 48–56. [Google Scholar] [CrossRef]
  13. Kriett, P.O.; Eirich, S.; Grunow, M. Cycle time-oriented mid-term production planning for semiconductor wafer fabrication. Int. J. Prod. Res. 2017, 55, 4662–4679. [Google Scholar] [CrossRef]
  14. Tirkel, I. The effectiveness of variability reduction in decreasing wafer fabrication cycle time. In Proceedings of the 2013 Winter Simulations Conference (WSC), Washington, DC, USA, 8–11 December 2013; pp. 3796–3805. [Google Scholar]
  15. Chien, C.F.; Chen, C.H. A novel timetabling algorithm for a furnace process for semiconductor fabrication with constrained waiting and frequency-based setups. OR Spectr. 2007, 29, 391–419. [Google Scholar] [CrossRef]
  16. Lv, S.; Kim, H.; Zheng, B.; Jin, H. A review of data mining with big data towards its applications in the electronics industry. Appl. Sci. 2018, 8, 582. [Google Scholar] [CrossRef]
  17. Huang, C.L. The construction of production performance prediction system for semiconductor manufacturing with artificial neural networks. Int. J. Prod. Res. 1999, 37, 1387–1402. [Google Scholar] [CrossRef]
  18. Su, C.T.; Yang, T.; Ke, C.M. A neural-network approach for semiconductor wafer post-sawing inspection. IEEE Trans. Semicond. Manuf. 2002, 15, 260–266. [Google Scholar]
  19. Kim, J.K.; Knag, P.; Chen, T.; Zhang, Z. A 640M pixel/s 3.65 mW sparse event-driven neuromorphic object recognition processor with on-chip learning. In Proceedings of the 2015 Symposium on VLSI Circuits (VLSI Circuits), Kyoto, Japan, 16–19 June 2015; pp. C50–C51. [Google Scholar]
  20. Han, S.C.; Choi, J.; Hong, S.M. Acceleration of semiconductor device simulation with approximate solutions predicted by trained neural networks. IEEE Trans. Electron Devices 2021, 68, 5483–5489. [Google Scholar] [CrossRef]
  21. Wang, G.; Hasani, R.M.; Zhu, Y.; Grosu, R. A novel Bayesian network-based fault prognostic method for semiconductor manufacturing process. In Proceedings of the 2017 IEEE International Conference on Industrial Technology (ICIT), Toronto, ON, Canada, 22–25 March 2017; pp. 1450–1454. [Google Scholar]
  22. Wang, S.; Wang, H.; Yang, F.; Liu, F.; Zeng, L. Attention-based deep learning for chip-surface-defect detection. Int. J. Adv. Manuf. Technol. 2020, 121, 1957–1971. [Google Scholar] [CrossRef]
  23. Schlosser, T.; Friedrich, M.; Beuth, F.; Kowerko, D. Improving automated visual fault inspection for semiconductor manufacturing using a hybrid multistage system of deep neural networks. J. Intell. Manuf. 2020, 33, 1099–1123. [Google Scholar] [CrossRef]
  24. Wang, Y.; Sun, W.; Jin, J.; Kong, Z.; Yue, X. MVGCN: Multi-view graph convolutional neural network for surface defect identification using three-dimensional point cloud. J. Manuf. Sci. Eng. 2023, 145, 031004. [Google Scholar] [CrossRef]
  25. Kusiak, A. Convolutional and generative adversarial neural networks in manufacturing. Int. J. Prod. Res. 2020, 58, 1594–1604. [Google Scholar] [CrossRef]
  26. Shin, E.; Yoo, C.D. Efficient convolutional neural networks for semiconductor wafer bin map classification. Sensors 2023, 23, 1926. [Google Scholar] [CrossRef]
  27. Chen, T. A fuzzy back propagation network for output time prediction in a wafer fab. Appl. Soft Comput. 2003, 2, 211–222. [Google Scholar] [CrossRef]
  28. Chen, T.; Wang, Y.C.; Wu, H.C. A fuzzy-neural approach for remaining cycle time estimation in a semiconductor manufacturing factory—A simulation study. Int. J. Innov. Comput. Inf. Control. 2009, 5, 2125–2139. [Google Scholar]
  29. Lalis, J.T.; Gerardo, B.D.; Byun, Y. An adaptive stopping criterion for backpropagation learning in feedforward neural network. Int. J. Multimed. Ubiquitous Eng. 2014, 9, 149–156. [Google Scholar] [CrossRef]
  30. Li, X.J.; Ma, M.; Sun, Y. An adaptive deep learning neural network model to enhance machine-learning-based classifiers for intrusion detection in smart grids. Algorithms 2023, 16, 288. [Google Scholar] [CrossRef]
  31. Chen, Y.; Ding, S.; Long, J.; Hou, M.; Chen, X.; Gao, J.; He, Y.; Wong, C.P. Rationally designing the trace of wire bonder head for large-span-ratio wire bonding in 3D stacked packaging. IEEE Access 2020, 8, 206571–206580. [Google Scholar] [CrossRef]
Figure 1. Types of Packaging (Data source: this study).
Figure 2. Simple 3D view of a highly integrated NAND Flash memory package (Data source: this study).
Figure 3. Possible Turnaround Time (TAT) for Various Packaging Types (Data source: this study).
Figure 4. Illustration diagram shows how a Back Propagation Neural Network works [30].
Figure 5. Simplified Packaging Process.
Figure 6. MAPE of the BPNN Model.
Figure 7. Residual Plot for Each Process.
Table 1. Description for Each Variable and Response.
| Variables | Data Type | Description |
| --- | --- | --- |
| Customer | Category | Designated customer group. |
| Product Type | Category | Specific type of product. |
| Equipment Type | Category | Equipment model in use. |
| Location | Category | Specific production location or area. |
| Limit Machine Produce | Category | Machines designated for producing a certain product. |
| Remain Process | Numerical | Number of remaining process stations from the current station to completion. |
| Limit Wait Time | Numerical | Maximum waiting time allowed at each station, in minutes. |
| Number of Chips in Package | Numerical | Number of chip layers in each package. |
| Process Total | Numerical | Number of total process stations. |
| Alarm Time | Numerical | Number of equipment alarms in the past day. |

| Response | Data Type | Description |
| --- | --- | --- |
| TAT | Numerical | Production time from start to completion at the station, in hours. |
Table 2. Number of Data Points for Test Sites.
| Packaging Process | Dicing | Die Bond | Wire Bond | Mold | Ball Mount | Total |
| --- | --- | --- | --- | --- | --- | --- |
| Count | 268 | 1730 | 1855 | 545 | 1342 | 5740 |
Table 3. MAPEs of the SVM and BPNN across various processes.
| Packaging Process | Dicing | Die Bond | Wire Bond | Mold | Ball Mount |
| --- | --- | --- | --- | --- | --- |
| SVM | 0.9080 | 0.3505 | 0.3611 | 0.7631 | 0.6504 |
| BPNN | 0.4285 | 0.1563 | 0.1312 | 0.6139 | 0.1697 |
Table 4. Daily Equipment Demand Forecast Table.
| Process | Product | N | N + 1 | N + 2 | N + 3 | N + 4 | N + 5 | N + 6 | N + 7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Die Bond | BGA 16D-1 | 0 | 0 | 4 | 0 | 1 | 3 | 1 | 3 |
| Die Bond | BGA 1D | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
| Wire Bond | BGA 16D-3 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| Wire Bond | BGA 2D-1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
Table 5. Model Performance Review Table.
| Factory | Product | MSE (Before) | MSE (After) | Result |
| --- | --- | --- | --- | --- |
| A | BGA 3D-9 | 0.31 | 0.04 | Good |
| A | BGA 2D-5 | 0.05 | 0.14 | Fail |
| B | BGA 5D-2 | 0.03 | 0.03 | Draw |
| B | BGA 8D-9 | 0.48 | 0.27 | Good |

