
Synthetic Benchmark for Data-Driven Pre-Si Analogue Circuit Verification

1 CAMPUS Research Institute, National University of Science and Technology Politehnica Bucharest, 060042 Bucharest, Romania
2 Infineon Technologies, 82008 Neubiberg, Germany
* Author to whom correspondence should be addressed.
Electronics 2024, 13(13), 2600; https://doi.org/10.3390/electronics13132600
Submission received: 9 May 2024 / Revised: 25 June 2024 / Accepted: 28 June 2024 / Published: 2 July 2024
(This article belongs to the Section Circuit and Signal Processing)

Abstract

As the demand for more complex circuits increases, so does the time needed to design and test them. The most time-consuming task in circuit development is notoriously the verification process, primarily due to the large number of simulations (hundreds or even thousands) required to ensure that the circuits adhere to the specifications regardless of the operating conditions. To decrease the number of required simulations, various verification algorithms have been proposed over the years, but this raises an additional issue: the thorough validation of the algorithms themselves. As simulations of real circuits are significantly time-consuming, synthetic circuits can offer precious insights into the capabilities of a verification algorithm. In this paper, we propose a benchmark of synthetic circuits that can be used to exhaustively validate pre-silicon (Pre-Si) verification algorithms. The newly created benchmark consists of 900 synthetic circuits (mathematical functions) with input dimensions (variables) ranging from 2 to 10. We design the benchmark to include functions of varying complexities, reflecting real-world circuit expectations. Finally, we use this benchmark to evaluate a previously proposed state-of-the-art Pre-Si circuit verification algorithm. We show that this algorithm generally obtains relative verification errors below 2% with fewer than 150 simulations if the circuits have fewer than six or seven operating conditions. In addition, we demonstrate that some of the most complex circuits in the benchmark pose serious problems to the verification algorithm: the worst case is not found even when 200 simulations are used.

1. Introduction

Continuous growth in the complexity of modern analogue circuits has been observed in recent years. As circuits become ever more complex, so does the duration of their verification process.
Given the intense pressure on semiconductor manufacturers to rapidly develop and release novel designs and products, circuit verification assumes an important role in the design process, particularly during the Pre-Si phase, in order to ensure the reliability and functionality of the circuits prior to fabrication [1]. Normally, the verification process of a circuit is split into Pre-Si verification and Post-Si verification, the focus of our work being the former. While Pre-Si verification focuses on verifying a digital model of a circuit by using a relatively large number of simulations, Post-Si verification deals with assuring that the physical end product performs as intended [2]. Pre-Si verification is particularly time-consuming, with some studies showing that it can take up to 50–70% of the development period [3,4]. One reason for this is that failing to check the functionality of a circuit during Pre-Si verification will lead to failure in the Post-Si phase or, even worse, during production, causing redesign delays and a substantial loss in terms of both costs and time. In this regard, machine learning (ML) approaches have been introduced in circuit verification to reduce the time and costs required. Such ML methods are applied to analogue [5,6] and digital [7,8] circuits, while some works deal with both Pre-Si and Post-Si verification [9,10]. An alternative methodology involves the application of symbolic quick error detection, which leverages formal methods and mathematical reasoning to verify the correctness of a given system [11].

Despite its importance, Pre-Si analogue circuit verification is rarely reported on in scientific papers. Moreover, benchmark tests for such methods are relatively hard to come by. While our focus is on analogue circuits, efforts have been made in creating synthetic benchmarks for digital circuits [12,13,14], while others aim to reduce the simulation time of the verification process through emulating various circuits [15]. Motivated by the need to evaluate the ever-growing complexity of the algorithms found in CAD tools, [13] presents a method for obtaining register-transfer-level synthetic benchmark circuits using evolutionary algorithms; the circuits presented in that work are relatively complex, with up to millions of gates. A survey focusing on the generation of synthetic digital circuits can be found in [16]. Another work [17], this time dealing with sequential circuits (which are still digital circuits), shows a method for generating synthetic versions of benchmark circuits using an abstract model of the circuit together with netlists; this method proves to be more accurate than a previous one that employed random graphs. Finally, since verification can be treated as an optimization problem, several surveys of benchmark test functions for optimization are also relevant [18,19,20].
In our previous work [21,22,23] on this topic, we presented various components of a complete and state-of-the-art circuit verification algorithm. This algorithm was evaluated on real circuits such as LDOs. However, a full, exhaustive validation of the algorithm was required, and, to this end, we designed and implemented a large benchmark of synthetic functions of various complexities with a variable number of input operating conditions. The main contributions of this work are the following: (i) we present an overview of the full circuit verification algorithm, (ii) we describe the synthetic function benchmark, and (iii) we exhaustively evaluate the performance of the verification algorithm on the synthetic benchmark.
The paper is organized as follows: Section 2 presents an overview of our circuit verification method, while Section 3 details the synthetic benchmark. Finally, in Section 4 and Section 5, we highlight our evaluation metrics, present our results, and discuss conclusions.

2. Method Overview

The main objective of our candidate selection algorithm is circuit design verification and validation in the Pre-Si phase; more explicitly, it must ensure that a circuit adheres to all of its imposed specification thresholds with respect to its responses. Since the classical approach consists of using a wide range of simulations in order to guarantee proper functionality, our algorithm aims to achieve at least the same performance while using a smaller number of simulations. This would reduce the time and costs necessary for circuit verification.
Our algorithm can be split into two main stages: the fixed planning (FP) stage and the adaptive planning (AP) stage. Both stages will be explained in detail in the following sections. In short, during the FP stage, we create an initial set of candidates, conduct a preliminary circuit check with them, and then pass this set of candidates to the AP stage. A candidate is represented by the circuit input values, or operating conditions (OCs), together with the resulting response values (as received from the simulator). In the AP stage, we train surrogate models for the circuit's responses, using the initial data from FP, in order to better pinpoint the potential failures of the circuit. More specifically, the role of the AP phase is to try to find a failure OC for the circuit based on the predictions of the surrogate models. By using the simulated data from FP, the surrogate models achieve good initial coverage of the circuit responses. From this point onward, the algorithm enters an iterative process in which new candidates are proposed by the surrogate models and simulated in order to verify whether any of the circuit's specifications have been violated. From one iteration to the next, the surrogate models are retrained with the simulations from previous iterations on top of the already existing FP data.
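To make this two-stage flow concrete, the following is a minimal, self-contained sketch of an FP/AP loop on a toy two-OC response. It uses scikit-learn Gaussian process regressors as surrogate models (the GPs discussed below) instead of the BoTorch models used in our implementation; the toy response function, the specification threshold, and all names are illustrative assumptions, not the actual verification tool.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def response(x):                        # toy synthetic response (illustrative only)
    return np.sin(5 * x[..., 0]) + 0.5 * np.cos(8 * x[..., 1])

SPEC_MIN = -1.45                        # assumed spec: the response must stay above this

# --- Fixed planning: LHS samples over the normalized OC hyperspace [-1, 1]^2 ---
sampler = qmc.LatinHypercube(d=2, seed=0)
X = qmc.scale(sampler.random(20), [-1, -1], [1, 1])
y = response(X)

# --- Adaptive planning: retrain a GP and propose the candidate with lowest LCB ---
for it in range(30):
    if y.min() < SPEC_MIN:              # a failing OC configuration was found
        print(f"violation found at iteration {it}: response = {y.min():.3f}")
        break
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
    gp.fit(X, y)
    pool = qmc.scale(sampler.random(500), [-1, -1], [1, 1])   # evaluation candidates
    mu, sigma = gp.predict(pool, return_std=True)
    x_next = pool[np.argmin(mu - 2.0 * sigma)]                # LCB acquisition
    X = np.vstack([X, x_next])
    y = np.append(y, response(x_next))
else:
    print("no violation found within the simulation budget")
```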
For the surrogate models, we employed Gaussian processes (GPs), an ML technique or, more precisely, a Bayesian method. GPs have been used in various statistical problems for a long time [24]. Their primary practical advantage is their ability to provide a reliable estimate of their own uncertainty: a GP's uncertainty increases as we sample points further away from the training data, which is a direct consequence of the probabilistic and Bayesian foundation of GPs. Another advantage is that, depending on the problem, we can model the GP's prior belief through kernels. GPs also have disadvantages. They are computationally expensive because they are non-parametric methods (if we do not count the kernel hyper-parameters): all of the training data are considered each time a prediction is made, leading to a cubic increase in computational cost as the number of training samples increases. GPs also lose efficiency in high-dimensional spaces, namely when the number of features exceeds a few dozen [25].
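The uncertainty behaviour mentioned above is easy to observe directly; the small sketch below (again using scikit-learn rather than BoTorch, purely for illustration, with arbitrary training points) prints a predictive standard deviation that is small near the data and grows far from it.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Train a GP on a few points clustered near the origin.
X_train = np.array([[-0.2], [0.0], [0.1], [0.3]])
y_train = np.sin(3 * X_train).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(X_train, y_train)

# Predictive standard deviation is small near the data and grows away from it.
for x in [0.05, 0.5, 2.0]:
    _, std = gp.predict([[x]], return_std=True)
    print(f"x = {x:4.2f}  predictive std = {std[0]:.3f}")
```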

2.1. Fixed Planning

The FP stage plays the role of creating an initial set of candidates to be later utilized for training the GPs. The usual approach of classical verification is to perform a full factorial (FF) search by testing all combinations of minimum, nominal and maximum values for the circuit OCs. For example, a circuit with 6 OCs would require 3^6 = 729 simulations. In comparison, our FP approach uses a sampling method for proposing the initial candidate set such that the number of simulations used in FP, together with the ones in AP, is much lower than that of an FF approach. Several sampling methods have been considered, with the objective of achieving good initial coverage of the hyperspace. These methods include Monte Carlo, orthogonal arrays (OAs) and Latin hypercube sampling (LHS) [26].
OA [27] is a sub-sampling method that strategically selects a set of OCs from the FF pool of OCs. Besides the number of samples, an OA has three other characteristics: factors, level and strength. The factors of an OA represent how many OCs can be used for a given OA; since the OA is a matrix, the number of factors is equivalent to the number of columns. The level represents how many values the OCs can take. For example, an OA containing only values of 0 and 1 has level 2, meaning that each OC can only take its minimum and maximum values, while level 3 indicates that OCs take minimum, maximum and middle (or nominal) values. In practice, OAs are generally used with 2 or 3 levels. The strength, s, is an integer indicating that, in any s columns of the OA, every combination of level values appears the same number of times. LHS [28] is another sampling method that takes into account the number of dimensions, d (the number of OCs in our case), and stratifies each dimension into n equal strata. This sampling method is also known as n-rooks: for a simple example with d = 2 dimensions, the grid resembles a checkerboard on which samples are placed like non-attacking rooks, so that every row and every column contains exactly one sample.
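As an illustration of these two sampling schemes, the sketch below draws LHS points with SciPy and shows a small hand-written two-level orthogonal array of strength 2; the array and the OC bounds are textbook examples, not the designs used in our experiments.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sampling: 8 samples for a circuit with 3 OCs,
# scaled from the unit cube to each OC's [min, max] range.
sampler = qmc.LatinHypercube(d=3, seed=42)
lhs_unit = sampler.random(8)
lhs_ocs = qmc.scale(lhs_unit, l_bounds=[2.7, -40.0, 1.0], u_bounds=[3.6, 125.0, 2.0])

# A classic OA(4, 3, 2, 2): 4 runs, 3 factors, 2 levels, strength 2.
# In any pair of columns, each of the 4 level combinations appears exactly once.
oa = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])
# Map level 0 -> OC minimum and level 1 -> OC maximum.
lo = np.array([2.7, -40.0, 1.0])
hi = np.array([3.6, 125.0, 2.0])
oa_ocs = lo + oa * (hi - lo)
print(lhs_ocs.shape, oa_ocs)
```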
Our previous work [21], which compared the three sampling methods in the context of analogue circuit verification, showed that, out of the three, OA has the best performance, obtaining results similar to those of the FF approach while using far fewer simulations. The next step consisted of combining OA and LHS in order to further improve upon those results. After testing on various budgets (with a fixed number of simulations), the combined method outperformed the individual sampling methods.

2.2. Adaptive Planning

Moving on to the second stage of the algorithm, in adaptive planning, the OC configurations (OCCs) simulated in the FP stage are used as training data for GP surrogate models. One GP is trained for each of the circuit responses, so that we have an initial estimate of the response functions. From here, an initial evaluation candidate set is created, from which the GPs select, for each circuit response, the next candidate to be simulated in the search for a violation of the imposed specifications. This evaluation set is created from an FF grid, where each input OC can take its minimum, maximum and middle values; this sums up to L^{N_OC} equally distanced candidates in the OC hyperspace, where L is the number of levels (3 in our case) and N_OC is the number of operating conditions. To this grid set, we further add a number of LHS samples, depending on the number of OCs. This combination of grid and LHS candidates forms our initial pool of evaluation candidates. Since this is an iterative process, our GPs would normally have to choose a candidate from the same fixed evaluation set, which could constitute a problem: the candidate which would lead to the true minimum of a response is not guaranteed to be among the ones in the evaluation set. To this end, the evaluation set is altered by modifying the values of the OCs based on the GP estimates, with the help of gradient descent (GD) or evolutionary algorithms such as GDE3 [29]. The GD approach was introduced in our previous work [22], with the aim of minimizing the response functions (more precisely, the GP estimates) so that the GP has a better candidate pool to choose from. GDE3, on the other hand, is an evolutionary algorithm whose main advantage is that it can handle various types of problems, most importantly multi-objective problems, in contrast to the single-objective GD approach. The GDE3 approach was considered in order to further improve our GD approach, due to shortcomings detailed in [22], such as the possibility of GD becoming “stuck” in a local minimum. Our comparison of the two candidate evaluation pool improvement approaches (GD vs. GDE3) [23] showed that GDE3 can improve the overall algorithm, although the improvements are not significant and are not observed in all scenarios. Initially, the GD approach was evaluated using three different acquisition functions: probability of improvement (PI), expected improvement (EI) and the lower confidence bound (LCB). These acquisition functions are computed from the GP estimates and represent a way of scoring the candidates. PI measures the probability of improving upon the best value found so far, while EI indicates how much we can expect to improve upon that value. The LCB, on the other hand, represents the lower envelope of the GP uncertainty and can be defined as LCB = μ − β·σ, where μ and σ are the GP's predictive mean and standard deviation and β is an exploration factor.
After evaluating the GD approach using these three acquisition functions for the GPs, LCB showed the best results [22]. GDE3, on the other hand, being a multi-objective approach, uses all three acquisition functions. While our first, simpler, GD approach proceeded to select the best candidate for each response from the new candidate evaluation pool (after applying GD) based on the LCB score, the GDE3 approach performs non-dominated selection of candidates based on the Pareto front, selecting the best 100. Only after this step is the GDE3 algorithm applied, resulting in our new candidate evaluation pool. Similarly to the GD approach, the GDE3 approach then ranks the new candidates based on the hyper-volume determined by the three acquisition functions. This ensemble of processes in the GDE3 approach will be further denoted as the multi-objective acquisition function ensemble (MACE), described in [23] and inspired by [30,31]. The main steps of the circuit verification algorithm using the MACE approach can be found in Algorithm 1, while an overview of the process can be observed in Figure 1.
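For reference, the three acquisition functions have simple closed forms in terms of the GP's predictive mean μ and standard deviation σ. The sketch below states them for a minimization problem; β = 2 is an assumed, untuned exploration factor and the example numbers are invented.

```python
import numpy as np
from scipy.stats import norm

def lcb(mu, sigma, beta=2.0):
    """Lower confidence bound: smaller is more promising for minimization."""
    return mu - beta * sigma

def prob_improvement(mu, sigma, best):
    """Probability that a candidate improves on the best (lowest) value found so far."""
    z = (best - mu) / np.maximum(sigma, 1e-12)
    return norm.cdf(z)

def expected_improvement(mu, sigma, best):
    """Expected amount by which a candidate improves on the best value found so far."""
    sigma = np.maximum(sigma, 1e-12)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Example: scoring three candidates predicted by a GP (illustrative numbers).
mu = np.array([0.40, 0.10, 0.55])
sigma = np.array([0.05, 0.30, 0.01])
best = 0.20
print(lcb(mu, sigma), prob_improvement(mu, sigma, best), expected_improvement(mu, sigma, best))
```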
Regarding the implementation of our GP surrogate models, we employed the BoTorch framework [32], while an ablation study in terms of hyper-parameter tuning is presented in [22], which highlights various comparisons between different learning rates and used optimizers.
In terms of effectiveness, we can look at the results in Table 1, which compares the classical approach, the FP-only stage and the AP stage. By the classical approach, we mean the exhaustive search in the input OC hyperspace. As we can see in the table, the values obtained by FP and AP are compared with the ones obtained with the classical approach for a circuit with 5 responses. Four of the responses require minimization (values should stay above the imposed specification), while the last one requires maximization (values should stay below the imposed specification). Red indicates values that are not as good as the ones found via the classical approach, while green signifies equal or better values. While FP alone did not obtain values nearly as good as those of the classical approach, together with the AP stage, we find the same worst cases (the values closest to the specifications). The main advantage is that our algorithm needs about 10× fewer simulations to find the same worst case. Nonetheless, our method has some limitations, mainly revolving around the stochastic nature of the algorithm. Especially in higher-dimensional spaces, it is possible for the classical approach to find better candidates while the GP-based algorithm gets stuck in local minima. On the other hand, the classical approach is not feasible in higher-dimensional spaces, as it requires a number of simulations that grows exponentially with the number of input dimensions. For example, in a 9-dimension space, the classical approach would employ 3^9 = 19,683 simulations, whereas our algorithm can usually handle this number of dimensions with under 200 simulations (as shown in Section 4.3.1).
Algorithm 1: Circuit Verification Algorithm

3. Synthetic Benchmark Creation and Development

After developing the verification algorithm (described in Section 2), we continued by assuring its robustness and high performance under different scenarios. It is worth mentioning that our previous papers (see [21,22,23]) only presented preliminary results, focusing on a couple of real and synthetic circuits. In comparison, in this paper, we set out to develop a data-driven approach for Pre-Si circuit verification. The results presented in Section 4 cover the performance reported for 900 circuits with 2 to 10 inputs (operating conditions).
During synthetic benchmark development, we started with the assumption that every synthetic circuit is mathematically composed of different functions. Each function models how a specific input (or operating condition) affects the output (or response). As the algorithm trains a separate GP model for each circuit response, synthetic circuits with just one response will be sufficient to assess the performance.
Next, we defined a list of one-dimensional functions with varying complexities. This list forms the basis of all proposed synthetic circuits. To compile the list, we combined the functions already used in our previous synthetic circuit experiments [21,22,23] with new ones. In pursuit of various wave shapes, we started from the Mexican hat function definition and varied its arguments; sometimes, we even multiplied the result by a certain trigonometric function. The resulting graphs vary greatly, and the mother function is no longer visually distinguishable. Following this methodology, we obtained a list of 30 different one-dimensional functions. The proposed functions were developed with the help of specialized circuit engineers, to ensure that they resemble plausible graphs resulting from real circuits.
In order to report results in a consistent manner, we needed a method to measure the complexity of the synthetic circuits. In this phase, we considered different approaches to assessing the complexity of the 30 one-dimensional functions. To the best of our knowledge, there is no universal method for obtaining a numerical value representing the complexity of a given mathematical function. One idea that we considered during the early development stages was to measure something tangential to the somewhat vague notion of “complexity”. As there are mathematical tools for assessing whether a function is stationary or whether a function is normally distributed, we started by employing these tools to obtain an objective measure of the function. The reason for the normal distribution assessment was that, by construction, a GP is best suited to data that follow a Gaussian distribution; by implication, a function that is not normally distributed would pose more difficulties to the GP training process.
Therefore, we supposed that the less likely a function is to be stationary, the more complex it is. Analogously, the less likely a function is to be normally distributed, the more complex it should be. The idea is to use the numerical outputs of such assessment tools to construct a complexity metric. In this regard, we considered the following statistical tests: the augmented Dickey–Fuller, Kwiatkowski–Phillips–Schmidt–Shin, Shapiro–Wilk, Anderson–Darling, and Lilliefors tests. The first two are stationarity tests, and the remaining three evaluate how likely a function is to be normally distributed. These tools output continuous values that can be interpreted as scalar measures. Although their intended use is not to assess complexity, we hoped that the results of the statistical tests would correlate with our visual impressions. Unfortunately, the wide ranges of the outputs and the significant discrepancies between the subjective complexity assessment and the numerical values discouraged us from further pursuing the idea of mathematical tests.
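For completeness, this is roughly how such tests can be applied to samples of a one-dimensional function, using statsmodels and SciPy; the function and the sampling grid are illustrative, not the exact procedure used during benchmark development.

```python
import numpy as np
from scipy.stats import shapiro, anderson
from statsmodels.tsa.stattools import adfuller, kpss
from statsmodels.stats.diagnostic import lilliefors

# Sample an illustrative 1D benchmark-style function on its [-1, 1] domain.
x = np.linspace(-1.0, 1.0, 500)
y = (1.4 + 3 * x) * np.sin(18 * x)          # ringing-waveform-like shape

# Stationarity tests (applied to the sampled values as a sequence).
adf_stat, adf_p = adfuller(y)[:2]
kpss_stat, kpss_p = kpss(y, nlags="auto")[:2]

# Normality tests.
sw_stat, sw_p = shapiro(y)
ad_result = anderson(y, dist="norm")
lf_stat, lf_p = lilliefors(y)

print(f"ADF p={adf_p:.3f}  KPSS p={kpss_p:.3f}  Shapiro p={sw_p:.3f}  "
      f"AD stat={ad_result.statistic:.3f}  Lilliefors p={lf_p:.3f}")
```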
After experimenting with objective methods to measure complexity, we turned to the alternative approach: subjective assessment. Each function was graded from 1 to 10, where 1 means least complex. In order to mitigate the disadvantages associated with subjectivity, a group of 6 people independently graded the functions. The final complexity score was then computed as the mean value.
In order to illustrate the assumptions we adhered to while grading, in Figure 2, we present some examples for what we considered simple, medium, and difficult functions.
The aforementioned functions have the following formulas:
f_{28}(x) = \frac{200 \cdot 0.25^{\,1.5 - x} - 6.25}{93.75}
f_{16}(x) = \frac{33\pi^2 (x - 0.12)^2 \, e^{-\pi (x - 0.44)^2} \cos(6x) + 90.42}{152.53}
f_{23}(x) = \frac{(1.4 + 3x)\sin(18x) + 4.28}{8.04}
These functions were selected in order to emphasize the variability in the complexity of the benchmark. From the start, we envisioned the benchmark to include various complexity functions. We considered a low-complexity 1D function to be a straight line or a curve, a medium-complexity one to contain several slope changes, and lastly, a high-complexity one representing functions that resemble a ringing waveform. These 1D functions are then combined to form synthetic circuits with various OC numbers (or input features).
The next challenge was deciding on how to combine the individual functions in order to obtain the final definition of the synthetic circuit. One idea we took into consideration was to randomize between addition and multiplication. The problem with this approach was that the complexity of the resulting hyperplane would have been significantly difficult to assess. For example, minima or maxima could multiply in unpredictable manners (in the absence of high-level mathematical analysis, which is neither the scope of this project nor useful for future research directions with the proposed synthetic benchmark).
Considering the aforementioned disadvantage, we continued with an intuitive way of combining functions: we simply add them. This approach is simple and easy to debug when a synthetic circuit is not learned in a satisfactory manner by the GP. Moreover, it offers more opportunities for measuring the resulting complexity. The first idea we explored was to compute the complexity of the resulting synthetic circuit as the mean value of the complexities of the functions that make up that circuit. The problem with this approach is that it cannot compare circuits with inputs of different dimensions: the capacity of the GP model to learn a hyperplane decreases as the dimensionality of the target function increases. For example, a two-dimensional response with a mean complexity of 5 is clearly easier to learn than a ten-dimensional response with the same mean complexity. In order to compare complexities between different input dimensions, we continued the experiments with multiplicative complexity (MC), which is calculated as the product of the individual complexities. Therefore, circuits with inputs of different dimensions but with the same MC are comparable in terms of the difficulty they impose on the GP model.
Therefore, at this point, we can define a response, R (which corresponds to the entire synthetic circuit, as it has one response), as follows:
R(OC\_list) = \sum_{i=1}^{N} f_{j,\, 1 \le j \le 30}(OC\_list_i)
where OC\_list \in \mathbb{R}^N, N is the number of circuit/response input conditions, and j is randomly chosen between 1 and 30, indicating the index of the function in the function list.
After consulting circuit engineers, we concluded that the MC of a hypothetical real circuit could not be very high. Therefore, we settled on an arbitrary MC maximum threshold in order to pursue the data-driven evaluation method. Another aspect we consulted the circuit engineers on is the maximum number of OCs for a given circuit. Thus, the experimental paradigm is focused on circuits with a maximum MC value of 40, with a maximum of 10 OCs.
The MC for the proposed benchmark could be summarized as follows:
MC(R) = \prod_{i=1}^{N} Complexity(f_{j,\, 1 \le j \le 30}) \le 40
where j represents the indexes of functions that comprise the R response.
In terms of scale, we decided to keep a constant definition for all functions: f : [-1, 1] \to [0, 1]. This uniformity will help in debugging the algorithm in the future.
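Putting the pieces together, one possible construction of a synthetic circuit could look as follows. The three illustrative 1D functions, their assumed complexity grades, and the MC threshold of 40 follow the description above, but the code itself is only a sketch, not the generator actually used to build the benchmark.

```python
import numpy as np

rng = np.random.default_rng(7)

# A tiny stand-in for the list of 30 graded 1D functions, f: [-1, 1] -> [0, 1].
FUNCTIONS = [
    (lambda x: (200 * 0.25 ** (1.5 - x) - 6.25) / 93.75, 1.5),          # smooth curve
    (lambda x: ((1.4 + 3 * x) * np.sin(18 * x) + 4.28) / 8.04, 9.33),   # ringing waveform
    (lambda x: (x + 1) / 2, 1.0),                                       # straight line
]
MC_LIMIT = 40.0

def make_synthetic_circuit(n_oc):
    """Draw one 1D function per OC; resample until the multiplicative complexity fits."""
    while True:
        picks = [FUNCTIONS[i] for i in rng.integers(len(FUNCTIONS), size=n_oc)]
        mc = float(np.prod([c for _, c in picks]))
        if mc <= MC_LIMIT:
            funcs = [f for f, _ in picks]
            response = lambda ocs: sum(f(oc) for f, oc in zip(funcs, ocs))
            return response, mc

resp, mc = make_synthetic_circuit(n_oc=3)
print(f"MC = {mc:.2f}, R(0, 0.5, -1) = {resp([0.0, 0.5, -1.0]):.3f}")
```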

4. Experiments and Results

4.1. Experimental Setup

For the experimental setup, we employed the full synthetic benchmark described in Section 3. Thus, the circuit verification algorithm was evaluated on the 900 functions of varying complexity and number of OCs. Because this required a large number of simulations, the experiments were conducted on a CPU cluster. In general, a run of 200 simulations (100 FP + 100 AP) takes between 5 and 20 min, depending on the number of OCs.

4.2. Evaluation Metrics

Since all of our functions are synthetic and we know the true minima and maxima of the response functions, we can properly evaluate the performance of our algorithm by comparing the worst cases found by the algorithm with these values. To this end, we use relative value error (RVE), which represents the difference between the minimum found by our algorithm and the true minimum of the response function with respect to the response range, as defined in Equation (6). An RVE value of 0% indicates that the algorithm has found the true minimum value of a function. Thus, in order to see how well the algorithm performs on the different benchmark functions, we will evaluate how long it takes, in terms of the number of iterations, to reach RVE values under 2%. This metric was originally introduced in [21].
RVE = \frac{ALG_{min} - R_{min}}{R_{range}} \cdot 100 \ [\%]
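A small helper illustrating the metric and the iteration count we report (the variable names and the example numbers are illustrative):

```python
def rve(alg_min, true_min, true_range):
    """Relative value error in percent: 0% means the true minimum was found."""
    return (alg_min - true_min) / true_range * 100.0

def iterations_to_threshold(rve_per_iteration, threshold=2.0):
    """Index of the first iteration whose RVE drops below the threshold (None if never)."""
    for it, value in enumerate(rve_per_iteration):
        if value < threshold:
            return it
    return None

# Example: RVE trace of a hypothetical circuit across the FP stage and 4 AP iterations.
trace = [rve(m, true_min=-1.50, true_range=3.0) for m in (-1.10, -1.32, -1.41, -1.47, -1.496)]
print(trace, iterations_to_threshold(trace))
```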

4.3. Performance in Terms of RVE

In this section, we present our experimental results obtained by testing the verification algorithm across all nine evaluation sets. The task is essentially a minimization problem; more explicitly, it consists of finding a value close enough to the true minimum (under 2% RVE) for all 100 synthetic circuits in the corresponding evaluation set. As mentioned in Section 3, there are nine evaluation sets in total, each containing 100 synthetic circuits with one response composed of 2 to 10 simple functions of various complexities (one per input OC). Besides the complexity of the simple functions, increasing the number of input OCs, and thus the number of simple functions, increases the difficulty of the problem.

4.3.1. Analysis of Evaluation Sets of Small Dimensions

We begin with some initial results obtained by evaluating the verification algorithm on relatively simple sets of functions, namely the evaluation sets containing two OCs and three OCs. This allows for an easier understanding of the synthetic benchmark and of the performance of the verification algorithm. In the case of the 100 functions with two OCs, the algorithm manages to find an OCC that leads to an RVE under 2% in a maximum of three iterations, with the majority of cases requiring only one. This is a good start, since it shows that the algorithm can handle this initial test with synthetic circuits containing only two inputs (or operating conditions).
For the three-OC evaluation set, as the complexity grows, so does the number of iterations required to reach an RVE under 2%. In this case, the algorithm requires up to eight iterations to find a suitable candidate (with an RVE below 2%) in all 100 scenarios. In order to develop a better understanding of these synthetic circuits, a few examples are provided in Equations (7)–(9), with subjective complexities of 16.9, 28.3 and 37.3, respectively. Table 2 presents the individual progress of the verification algorithm on these circuits. For SB40, the algorithm finds the true minimum after the first iteration, while for SB90, it requires four iterations to reach an OC that yields an RVE below 2%. In the case of the third function, SB86, the algorithm takes eight iterations to find a virtually true minimum value. From these results, a trend can be observed: more iterations are required as the algorithm encounters more complex functions. Another observation is the stagnation of the algorithm in the case of the SB86 circuit: for the first seven iterations, the RVE value does not change from the one obtained in FP, meaning that the algorithm has some trouble pinpointing the minimum area. The most likely cause is the relatively high complexity of this function or, put another way, the complexities of its component simple functions, one of which is represented in Figure 2c. Since the synthetic benchmark was designed to contain functions of various complexities, this highlights how the benchmark can prove to be a challenge even with a low number of OCs.
SB_{40}(x, y, z) = \frac{(1 - 2\pi(x - 0.1)^3)\, e^{-\pi x^2} \sin(10x) + 1.02}{2.1} + \frac{200 \cdot 0.25^{\,1.5 + y} - 6.25}{93.75} + \frac{200 \cdot 0.25^{\,1.5 - z} - 6.25}{93.75}
SB_{90}(x, y, z) = \frac{x + 1}{2} + \frac{\frac{\sin(10\pi(y + 10^{-12}))}{2(y + 10^{-12})} + (y - 1)^4 + 2.87}{19.58} + \frac{3\,(3 - 61\pi^2(z - 0.47)^2)\, e^{-\pi(z + 0.15)^2} \sinh(4z) + 213.38}{11193.91}
SB_{86}(x, y, z) = \frac{(1.4 + 3x)\sin(18x) + 4.28}{8.04} + \frac{5\,(2.5^{\,y+3} + 2.5^{\,3-y}) - 95.75}{70.31} + \frac{5\,(2.5^{\,z+3} + 2.5^{\,3-z}) - 95.75}{70.31}

4.3.2. Overview of Higher-Complexity Evaluation Sets

As we increase the number of OCs, the complexity of the synthetic circuits also increases. Since we cannot plot a 4D or higher-dimensional function, and in order to keep the results brief, we rely on histograms to observe the performance of the verification algorithm across circuits with different numbers of OCs. These results can be observed in Figure 3, where we show the number of iterations required by the synthetic circuits to reach a value very close to the true minimum of the response, more precisely an RVE under 2%. Revisiting the case of two OCs (Figure 3a), 60 of the synthetic circuits reach this value immediately after the FP stage, while the remaining 40 circuits require between 1 and 3 iterations, captured by the 1 to 10 iterations bin on the histogram. In the case of three OCs (Figure 3b), however, the two bins swap: the algorithm finds the required RVE value of under 2% for nearly 40 circuits right after the FP stage, while the remaining 60-plus circuits need up to 10 iterations (the maximum is 8 iterations, as stated in Section 4.3.1). Starting with four OCs (Figure 3c), we encounter synthetic circuits that require more than 10 iterations. The majority of the synthetic circuits need fewer than 10 iterations, under 10 circuits require between 11 and 20 iterations, another group of under 10 requires between 21 and 30 iterations, and only a few need between 31 and 60 iterations. While in the case of four OCs about 80 of the 100 circuits need only up to 10 iterations to reach a value close enough to the true minimum, in the case of five OCs (Figure 3d), where complexity is increased, nearly 70 circuits meet this criterion. The remainder of the synthetic circuits are nearly equally distributed between 11 and 80 iterations. Moving on to the case of six OCs (Figure 3e), the number of circuits that require 10 iterations or fewer remains slightly above 70, with the remainder distributed between 11 and 80 iterations and 1 circuit requiring nearly 100 iterations. In the case of seven OCs (Figure 3f), we can see a plot similar to the one for six OCs. The plot for eight-OC circuits (Figure 3g) shows a similar trend to that for seven OCs, with a further reduction in the number of cases requiring up to 10 iterations. At 9 and 10 OCs (Figure 3h,i), the number of circuits that require 10 or fewer iterations decreases to around 60.
Given the results discussed in Figure 3, the verification algorithm performs very well in the case of circuits with two to seven input OCs, managing to find near-true-minimum values for up to 70% of the circuits in under 10 AP iterations. Starting with the evaluation set for six input OCs, the circuits become complex enough to require a more extensive search in the OC hyperspace. This is to be expected, due to the high-dimensional input space and the nature of the more complex simple functions that comprise the response function of the circuits. Table 3 presents an alternative view of the data, with statistics regarding the number of iterations needed to reach a certain RVE threshold; the objectives considered are RVE values below 5% and below 2%. For the two-OC evaluation set, 76% of the circuits reach an RVE below 5% during the FP stage, while only 60% reach the 2% threshold there. As we move toward evaluation sets with a higher number of OCs, these values decrease, reflecting the increase in complexity. Table 3 also shows that there is a chance of the desired threshold not being reached at all, i.e., within the 100 FP simulations and 100 AP iterations, no value is found that meets the desired criterion. Such cases first appear in the seven-OC evaluation set and persist in the sets of a higher OC order. Regarding the AP iterations, with the exception of the sets with a small number of input OCs, it takes an average of 10–20 iterations to reach an RVE below 2%.
Figure 4 presents another way to view the results. Here, we plot the mean RVE value across each of the nine evaluation sets of different OCs. The mean RVE value for an evaluation set (e.g., 3 OCs) represents the average RVE value across all 100 circuits of the respective evaluation set calculated at each iteration including the FP stage. The plot is presented in logarithmic form. Starting from the FP stage and moving on to higher iterations, we can see the decrease in the mean RVE values in all the evaluation sets. In addition, more abrupt trends can be noticed for the sets with a small number of OCs (e.g., 2, 3 and 4), in comparison with smoother behaviour for the ones of a higher OC order. This is expected behaviour; as the complexity introduced by the number of OCs increases, so does the number of iterations required to reach lower mean RVE values. Overall, the synthetic benchmark shows balanced complexity in terms of the evaluation sets of different OC orders.

5. Conclusions

In this paper, we present a validation of our circuit verification algorithm on an extensive synthetic benchmark consisting of multiple evaluation sets, each differentiated by the number of input OCs. As the number of input OCs increases, so does the complexity of the evaluation set, making the benchmark a good indicator for observing the performance of a verification method across different circuit configurations. The performance of our circuit verification method is highlighted on this benchmark, with RVE results under 2% using a relatively low number of simulations and with only a few circuits requiring over 100 iterations. As these latter circuits have eight or more input OCs, it is to be expected that they would require more simulations due to their high complexity. Overall, the evaluation sets represent a good benchmark for any circuit verification method that aims to find the absolute minimum or maximum response values for various circuit configurations and complexities. This, in turn, can be an indicator of whether or not an algorithm is consistent enough to be further applied in Pre-Si verification. It is worth emphasizing that a synthetic circuit can be used only as a preliminary validation technique for Pre-Si verification. Our benchmark, although very diverse, cannot fully replace validation on real circuits. Nonetheless, it can still give precious insights into the capabilities of the verification method. The main advantages of a synthetic benchmark over a real one are complete knowledge of the output hyperspace with respect to the input and a decreased time for computing the response (in comparison with PSpice-like models, which require extensive periods of time to simulate a real circuit). On the other hand, synthetic circuits might fail to model the intricacies that naturally arise in real circuits.
For future work, we will consider further updating the validation methodology by testing the verification algorithm on the synthetic benchmark with multiple random seeds in order to mitigate the impact of randomness. As for the verification algorithm itself, an extensive test on real circuits will further validate its efficiency.

Author Contributions

Conceptualization, A.C., H.C. and A.B.; methodology, C.M., C.A. and A.C.; software, C.M., C.A., A.G. and A.C.; validation, C.M., C.A., A.G. and A.C.; formal analysis, C.M., C.A., A.G. and A.C.; investigation, C.A.; resources, C.M., C.A., A.G. and A.C.; data curation, C.M. and C.A.; writing—original draft preparation, C.M. and C.A.; writing—review and editing, A.C., H.C. and A.B.; visualization, C.M., C.A., A.G. and A.C.; supervision, H.C. and A.C.; project administration, H.C., A.B. and G.P.; funding acquisition, H.C., A.B. and G.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a grant from the Romanian Ministry of Research, Innovation and Digitization, CCCDI-UEFISCDI, project number PN-III-P2-2.1-PTE-2021-0460, within PNCDI III. This scientific paper has been created in the framework of “Important Projects of European Interest on Microelectronics” and as a collaboration between Infineon and the Faculty of Electronics, Telecommunications and Information Technology at the POLITEHNICA București National University for Science and Technology.

Data Availability Statement

The data that support the findings will be available in a future repository of the National University of Science and Technology POLITEHNICA Bucharest, following an embargo from the date of publication to allow for the commercialization of research findings.

Acknowledgments

We would like to extend our gratitude to Cristian Vasile Diaconu and Emilian Constantin from Infineon Technologies for their expertise, recommendations and valuable insights throughout the project.

Conflicts of Interest

Authors Andi Buzo and Georg Pelz were employed by the company Infineon Technologies. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
Pre-Si: Pre-Silicon
Post-Si: Post-Silicon
ML: Machine Learning
LDO: Low Dropout
FP: Fixed Planning
OC: Operating Condition
OCC: Operating Condition Configuration
GP: Gaussian Process
FF: Full Factorial
MC: Multiplicative Complexity
OA: Orthogonal Array
LHS: Latin Hypercube Sampling
GD: Gradient Descent
GDE: Generalized Differential Evolution
PI: Probability of Improvement
EI: Expected Improvement
LCB: Lower Confidence Bound
MACE: Multi-Objective Acquisition Function Ensemble
RVE: Relative Value Error

References

  1. Wu, X.; Zhang, C.; Du, W. An analysis on the crisis of “chips shortage” in automobile industry—Based on the double influence of COVID-19 and trade Friction. J. Physics Conf. Ser. 2021, 1971, 012100. [Google Scholar] [CrossRef]
  2. Gielen, G.; Xama, N.; Ganesan, K.; Mitra, S. Review of methodologies for pre-and post-silicon analog verification in mixed-signal SOCs. In Proceedings of the 2019 Design, Automation & Test in Europe Conference & Exhibition (DATE), Florence, Italy, 25–29 March 2019; pp. 1006–1009. [Google Scholar]
  3. Devarajegowda, K.; Ecker, W. Meta-model based automation of properties for pre-silicon verification. In Proceedings of the 2018 IFIP/IEEE International Conference VLSI-SoC, Verona, Italy, 8–10 October 2018; pp. 231–236. [Google Scholar]
  4. Farooq, U.; Mehrez, H. Pre-silicon verification using multi-FPGA platforms: A review. J. Electron. Test. 2021, 37, 7–24. [Google Scholar] [CrossRef]
  5. Hu, H.; Zheng, Q.; Wang, Y.; Li, P. HFMV: Hybridizing formal methods and machine learning for verification of analog and mixed-signal circuits. In Proceedings of the 55th Annual Design Automation Conference, San Francisco, CA, USA, 24–29 June 2018; pp. 1–6. [Google Scholar]
  6. Dobler, M.; Harrant, M.; Rafaila, M.; Pelz, G.; Rosenstiel, W.; Bogdan, M. Bordersearch: An adaptive identification of failure regions. In Proceedings of the 2015 Design, Automation & Test in Europe Conference & Exhibition (DATE), Grenoble, France, 9–13 March 2015; pp. 1036–1041. [Google Scholar]
  7. Gaur, P.; Rout, S.S.; Deb, S. Efficient hardware verification using machine learning approach. In Proceedings of the 2019 IEEE International Symposium on Smart Electronic Systems (iSES) (Formerly iNiS), Rourkela, India, 16–18 December 2019; pp. 168–171. [Google Scholar]
  8. Khan, W.; Azam, B.; Shahid, N.; Moeed Khan, A.; Shaheen, A. Formal verification of digital circuits using simulator with mathematical foundation. Appl. Mech. Mater. 2019, 892, 134–142. [Google Scholar] [CrossRef]
  9. Adir, A.; Copty, S.; Landa, S.; Nahir, A.; Shurek, G.; Ziv, A.; Meissner, C.; Schumann, J. A unified methodology for pre-silicon verification and post-silicon validation. In Proceedings of the 2011 Design, Automation & Test in Europe, Grenoble, France, 14–18 March 2011; pp. 1–6. [Google Scholar]
  10. Zhuo, C.; Yu, B.; Gao, D. Accelerating chip design with machine learning: From pre-silicon to post-silicon. In Proceedings of the 2017 30th IEEE International System-on-Chip Conference (SOCC), Munich, Germany, 5–8 September 2017; pp. 227–232. [Google Scholar]
  11. Singh, E.; Devarajegowda, K.; Simon, S.; Schnieder, R.; Ganesan, K.; Fadiheh, M.; Stoffel, D.; Kunz, W.; Barrett, C.; Ecker, W.; et al. Symbolic qed pre-silicon verification for automotive microcontroller cores: Industrial case study. In Proceedings of the 2019 Design, Automation & Test in Europe Conference & Exhibition (DATE), Florence, Italy, 25–29 March 2019; pp. 1000–1005. [Google Scholar]
  12. Andrade, F.V.; Silva, L.M.; Fernandes, A.O. Bencgen: A digital circuit generation tool for benchmarks. In Proceedings of the 21st Annual Symposium on Integrated Circuits and System Design, Gramado, Brazil, 1–4 September 2008; pp. 164–169. [Google Scholar]
  13. Pecenka, T.; Sekanina, L.; Kotasek, Z. Evolution of synthetic RTL benchmark circuits with predefined testability. ACM Trans. Des. Autom. Electron. Syst. 2008, 13, 1–21. [Google Scholar] [CrossRef]
  14. Stroobandt, D.; Verplaetse, P.; Van Campenhout, J. Generating synthetic benchmark circuits for evaluating CAD tools. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2000, 19, 1011–1022. [Google Scholar] [CrossRef]
  15. Karthik, S.; Priyadarsini, K.; Poovannan, E.; Balaji, S. Emulating SoCs for Accelerating Pre-Si Validation. In Proceedings of the International Conference on Cognitive and Intelligent Computing: ICCIC 2021, Virtual, 11–12 December 2023; Springer: Berlin/Heidelberg, Germany, 2023; Volume 2, pp. 107–112. [Google Scholar]
  16. Srivani, L.; Kamakoti, V. Synthetic Benchmark Digital Circuits: A Survey. IETE Tech. Rev. 2012, 29, 442–448. [Google Scholar] [CrossRef]
  17. Hutton, M.D.; Rose, J.S.; Corneil, D.G. Automatic generation of synthetic sequential benchmark circuits. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2002, 21, 928–940. [Google Scholar] [CrossRef]
  18. Jamil, M.; Yang, X. A literature survey of benchmark functions for global optimisation problems. Int. J. Math. Model. Numer. Optim. 2013, 4, 150–194. [Google Scholar] [CrossRef]
  19. Sharma, P.; Raju, S. Metaheuristic optimization algorithms: A comprehensive overview and classification of benchmark test functions. Soft Comput. 2024, 28, 3123–3186. [Google Scholar] [CrossRef]
  20. Kämpf, J.H.; Wetter, M.; Robinson, D. A comparison of global optimization algorithms with standard benchmark functions and real-world applications using EnergyPlus. J. Build. Perform. Simul. 2010, 3, 103–120. [Google Scholar] [CrossRef]
  21. Manolache, C.; Caranica, A.; Stănescu, M.; Cucu, H.; Buzo, A.; Diaconu, C.; Pelz, G. Advanced operating conditions search applied in analog circuit verification. In Proceedings of the 2022 18th International Conference on Synthesis, Modeling, Analysis and Simulation Methods and Applications to Circuit Design (SMACD), Villasimius, Italy, 12–15 June 2022; pp. 1–4. [Google Scholar]
  22. Manolache, C.; Caranica, A.; Cucu, H.; Buzo, A.; Diaconu, C.; Pelz, G. Enhanced Candidate Selection Algorithm for Analog Circuit Verification. In Proceedings of the 2022 International Semiconductor Conference (CAS), Poiana Brasov, Romania, 12–14 October 2022; pp. 137–140. [Google Scholar]
  23. Manolache, C.; Andronache, C.; Caranica, A.; Cucu, H.; Buzo, A.; Diaconu, C.; Pelz, G. Applying Multi-objective Acquisition Function Ensemble for a candidate proposal algorithm. In Proceedings of the 2023 International Conference on Speech Technology and Human-Computer Dialogue (SpeD), Bucharest, Romania, 25–27 October 2023; pp. 116–121. [Google Scholar]
  24. Cressie, N. Geostatistics. Am. Stat. 1989, 43, 197–202. [Google Scholar] [CrossRef]
  25. Knagg, O. An Intuitive Guide to Gaussian Processes. 2019. Available online: https://towardsdatascience.com/an-intuitive-guide-to-gaussian-processes-ec2f0b45c71d (accessed on 24 November 2023).
  26. Owen, A.B. Monte Carlo Theory, Methods and Examples. 2013, pp. 19–22. Available online: https://artowen.su.domains/mc/ (accessed on 24 November 2023).
  27. Owen, A.B. Orthogonal arrays for computer experiments, integration and visualization. Stat. Sin. 1992, 439–452. [Google Scholar]
  28. McKay, M.D.; Beckman, R.J.; Conover, W.J. A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 2000, 42, 55–61. [Google Scholar] [CrossRef]
  29. Kukkonen, S.; Lampinen, J. GDE3: The third evolution step of generalized differential evolution. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–5 September 2005; Volume 1, pp. 443–450. [Google Scholar]
  30. Lyu, W.; Yang, F.; Yan, C.; Zhou, D.; Zeng, X. Batch Bayesian optimization via multi-objective acquisition ensemble for automated analog circuit design. In Proceedings of the International Conference on Machine Learning (PMLR), Stockholm, Sweden, 10–15 July 2018; pp. 3306–3314. [Google Scholar]
  31. Zhang, S.; Yang, F.; Yan, C.; Zhou, D.; Zeng, X. An efficient batch-constrained bayesian optimization approach for analog circuit synthesis via multiobjective acquisition ensemble. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2021, 41, 1–14. [Google Scholar] [CrossRef]
  32. Balandat, M.; Karrer, B.; Jiang, D.; Daulton, S.; Letham, B.; Wilson, A.G.; Bakshy, E. BoTorch: A framework for efficient Monte-Carlo Bayesian optimization. Adv. Neural Inf. Process. Syst. 2020, 33, 21524–21538. [Google Scholar]
Figure 1. Full candidate selection algorithm diagram.
Figure 2. Function examples: (a) mean subjective complexity = 1.5; (b) mean subjective complexity = 5.33; (c) mean subjective complexity = 9.33.
Figure 3. Iterations needed for achieving RVE < 2% for the synthetic benchmark evaluation sets of 2 up to 10 input OCs. The Oy axis shows the number of synthetic circuits from the evaluation set, while the Ox axis shows the number of iterations required.
Figure 4. Mean RVE calculated for 900 synthetic circuits (100 for each OC number).
Table 1. Comparison between classical approach, FP and AP in terms of final worst cases obtained (response values) and number of simulations required.

Responses       Spec.    CLS      FP       AP
Response 1      >40      45.65    45.82    45.65
Response 2      >20      9.47     9.62     9.47
Response 3      >45      44.92    44.92    44.92
Response 4      >10      5.48     5.48     5.48
Response 5      <100     10.84    10.40    10.84
# simulations            3456     240      305
Table 2. Number of iterations required to reach an RVE value under 2% for 3 different circuits with 3 OCs of various complexities.

Iteration    SB40      SB90      SB86
FP           11.26%    4.89%     4.05%
1            0.00%     4.51%     4.05%
2            N/A       4.51%     4.05%
3            N/A       4.51%     4.05%
4            N/A       1.11%     4.05%
5            N/A       N/A       4.05%
6            N/A       N/A       4.05%
7            N/A       N/A       4.05%
8            N/A       N/A       0.07%
Table 3. Average number of AP iterations needed to reach a certain RVE threshold with respect to the OC number.

Circuits    Circuits that reach RVE          Number of AP iterations          Not found for:
with:       threshold during FP
            RVE < 5%      RVE < 2%           RVE < 5%       RVE < 2%          RVE < 2%    RVE < 5%
2 OC        76%           60%                1 ± 0.2        1.1 ± 0.3         -           -
3 OC        49%           38%                1.2 ± 0.7      2.1 ± 1.8         -           -
4 OC        56%           51%                7.3 ± 10.1     10.3 ± 11.4       -           -
5 OC        37%           23%                15.1 ± 21.1    17.4 ± 23         -           -
6 OC        19%           6%                 10.4 ± 19.8    13.1 ± 21.5       -           -
7 OC        12%           1%                 9.9 ± 19.8     14 ± 23.3         2%          2%
8 OC        11%           10%                9.9 ± 22       15.1 ± 24.3       3%          5%
9 OC        7%            5%                 6.8 ± 16.6     12.4 ± 22.1       8%          10%
10 OC       3%            1%                 4.1 ± 6.5      14.7 ± 22.1       8%          9%
