The chosen distance between the start and end point is poorly defined and allows for parlor tricks as demonstrated above. Possible values for this circuit are the width of the cut between the pads (70 µm), the distance between the nearest copper features (400 µm), the distance between the centers of the pads (18.4 mm), the distance between the meander starting points (36.4 mm), or any other ill-devised definition. Each distance leads to its respective extension: 85,714,200%, 14,999,900%, 326,100%, and 164,800%.
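All of these figures follow from the usual definition of stretchability as relative elongation; with L_0 the chosen initial length and L the extended length (symbols ours):

stretchability = (L − L_0) / L_0 × 100%

Dividing the roughly 60 m of extension by the 400 µm initial length, for example, reproduces the 14,999,900% figure.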
The two center lines, A and B, defined in Figure 1a, provide a more realistic length, leading to 2113% and 9288%, respectively. While in this case, center line A provides a more accurate number, the choice between these two center lines can become quite tricky in some instances. For example, using center line B might be justifiable if the radius of 0.86 mm (Figure 2) is increased to 10 mm. In short, defining the initial length against which stretchability is measured has a considerable influence on the result. However, making the “correct” choice might not be trivial in some cases.
Needless to say, unless the same definition and measurement guidelines were used, any comparison between these numbers is meaningless. Additionally, this meander will fail whenever the tension on it is released, making it useless for most applications. Clearly, a better definition of stretchability is required to remove ambiguity.
3.1. Alternative Stretchability Metrics
Defeating stretchability as a metric of a stretchable interconnect’s capabilities is trivial; as demonstrated above, it only requires creating a long wire using the technology at hand. A useful metric is unbiased and also considers the reliability of the technology. Solving this for the case of stretchable electronics seems trivial at first.
For a planar substrate, dividing the maximum achieved elongation by the occupied surface area before elongation would prevent looping the wire around the contact pads, as was done here. At first glance, this planar stretchability (PS) should provide a fair assessment of a technology’s capabilities. The maximum PS would depend solely on the minimum feature size and not the design; some small gains could be made by shrinking the contact pads, but these would be minimal.
This, of course, does not account for technologies that use out-of-plane features to create a stretchable interconnect [10,16]. These could create a tower of wire that might take up a relatively large volume while providing no cyclic endurance; like the above circuit, a single elongation might break the interconnect. An easy method to alleviate this is to include the thickness of the circuit by dividing the maximum elongation by the volume of the bounding box around the stretchable interconnect, leading to the volumetric stretchability (VS):
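Writing the maximum elongation as ΔL_max, the initial occupied area as A_0, and the bounding-box volume as V_0 (symbols chosen here for convenience), these definitions presumably read:

PS = ΔL_max / A_0,  VS = ΔL_max / V_0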
To eliminate the effect of the units, each value is normalized to millimeters; alternatively, the unit used could be mentioned next to the variable as a subscript (e.g., PS_mm). At first glance, this might result in a constant number for a given technology. However, the design of the interconnect still depends on electrical parameters, such as impedance and maximum current-carrying capability, and on the mechanics of the encapsulating material.
Using these metrics and looking back at the above example, consider the 60 m extension, 50,905.0 mm² surface area, and 3461.5 mm³ volume (68 µm thickness). The planar stretchability then becomes 1.178 and the volumetric stretchability 17.334. These values are significantly more useful; the required substrate surface area or volume is directly related to the cost of fabricating the device. Additionally, merely dividing the desired final length by the achieved PS or VS will indicate the area or volume required to achieve a given wire length. However, this still would not help determine the actual stretchability a designer or engineer might expect from a circuit—the above circuit is still frightfully unreliable, even though it might score well in a comparison.
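The arithmetic behind these figures is simple enough to verify in a few lines of Python (variable names are ours):

```python
# Minimal sketch reproducing the PS/VS figures quoted above.
# All lengths are normalized to millimeters, as proposed in the text.

elongation_mm = 60_000.0  # 60 m maximum extension
area_mm2 = 50_905.0       # occupied surface area before elongation
volume_mm3 = 3_461.5      # bounding-box volume (68 um thickness)

ps = elongation_mm / area_mm2     # planar stretchability
vs = elongation_mm / volume_mm3   # volumetric stretchability

print(f"PS = {ps:.2f}, VS = {vs:.2f}")  # -> PS = 1.18, VS = 17.33
```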
Defining reliability as a single number is a troublesome prospect. However, for meanders, a few assumptions are possible. For example, it is possible to postulate the following: “Each directional change between segments of a conductor in a stretchable electrical interconnect where the angle between individual segments exceeds 30 degrees is prone to failure over time”. This statement is most certainly wrong in an absolute sense, but it provides a straightforward method to quantify the chance of failure by cutting the stretchable interconnect into segments.
Providing a generic definition for a segment is troublesome; however, using the above definition, a few cases can be defined. First, any sharp corner (<90°) between two straight lines would most definitely be a transition at risk of failure. A second case is an interconnect that consists of multiple identical elements repeated to form the whole. The third case, which is close to the above definition, is a generic catch-all: consider the tangent along the interconnect’s center line. If this tangent changes more than 30° versus the tangent at an earlier point on the line, this indicates a transition, and hence a segment. Under ideal circumstances, the first two definitions are used, but these are troublesome to apply to special cases (e.g., fractal meanders).
Consider a segment with a known length, where n segments are required to span the distance. Neglecting the start and end points, the chance of survival for each segment and each transition between individual segments is P_1[i] and P_2[i], respectively, where i is the cycle number during the cyclic test. Assuming the segments are independent of each other, with an equal chance of failure, and that the applied strain is irrelevant, the chance a meander segment will survive a certain stretch cycle becomes the following:
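Under the stated independence assumption, this is presumably the product of the two survival probabilities:

P_S[i] = P_1[i] · P_2[i]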
Hence the chance of failure becomes the following:
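That is, presumably the complement of the survival probability:

P_F[i] = 1 − P_1[i] · P_2[i]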
Taking a few statistical liberties, like assuming the stretch cycles are completely independent of each other, the chance a meander segment breaks after i cycles becomes the following:
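Under that cycle-independence assumption, this cumulative form is presumably the product over all cycles up to i:

P_F[i] = 1 − Π_{j=1..i} ( P_1[j] · P_2[j] )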
While this might appear extreme at first glance, the survival chances P_1 and P_2 are close to 1 in most technologies, resulting in a rather small P_F. Assuming each meander segment is independent of its neighbors, the chance of failure in cycle i for n segments then becomes the following:
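One plausible form, extending the single-segment expression by raising the survival probability to the power n:

P_F,n[i] = 1 − ( P_1[i] · P_2[i] )^n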
Clearly, an increase in n will increase the chance of failure. This rudimentary approach fails to take into account the strain the meander might experience, but it does demonstrate the effect of the number of segments or transitions n on meander reliability if these are considered weak points. Introducing this into the stretchability metric is trivial; simply dividing the stretchability by n ought to penalize a technology with many transitions. This leads to the definition of compensated planar stretchability (CPS) and compensated volumetric stretchability (CVS):
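Given the division by n just described, Equations (7) and (8) presumably read:

CPS = PS / n,  CVS = VS / n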
In both cases, the area and volume are those of the complete meander. The above circuit has a staggering 4033 transitions, which meets the above requirement, slicing the CPS and CVS down to 0.003 and 0.004. For the case of a polymer matrix filled with conductive particles, setting n equal to one is an acceptable choice, unless it is patterned as well, because the likelihood of failure will depend on the material’s inherent properties instead of the interconnect design.
An even more generic approach would be to consider the chance of failure occurring per length unit after i cycles, P_FPL[i]. If such a number were available, it could easily be included by substituting n with (1 − P_FPL[i]). However, this would only provide a momentary comparison point during cycle i. A more generic approach would distill the reliability function P_FPL[i] into a single number based on the number of life-cycles a device should survive.
Let i_expected and k be the number of stretch cycles the device should survive during regular use and the percentage of acceptable failures within this period, respectively. While setting k to zero might be an attractive prospect, no economical process achieves a 100% yield. Consider the function f[i] that returns the percentage of failed devices after i cycles; multiplying it by a weighting function and calculating the sum over the entire range of i would return a single number that signifies the reliability. This weighting function should heavily punish crib deaths (early failures), while not significantly penalizing failures beyond i_expected. Based on this, we can propose the weighting function w[i]:
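Judging from the conditions referenced below, w[i] is presumably defined piecewise: a decaying weight governed by the parameters a and b up to i_expected, dropping to a negligible value beyond it, with a and b fixed through the acceptable failure percentage k.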
The latter condition in Equation (9) ensures failures after the expected lifetime do not significantly count towards the (un)reliability metric. However, a more reliable technology should still achieve a better result.
Next, factoring in the acceptable percentage of failures is done by the following:
Once a is known, determining b is trivial by solving the following equation numerically:
The reliability metric R[i_expected, k] can then be calculated by multiplying the weighting function point-wise with the cumulative failure percentage given by f[i]:
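Written out, this point-wise product summed over i presumably amounts to:

R[i_expected, k] = Σ_i w[i] · f[i]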
Using this metric, a higher number will indicate a less reliable technology, meaning it can substitute the compensation factor in Equations (7) and (8).
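As an illustration of how such a metric behaves, the following Python sketch assumes a simple exponentially decaying weighting function—an illustrative stand-in, not the w[i] defined by Equations (9)–(11)—together with a hypothetical failure curve f[i]:

```python
import math

def reliability_metric(f, i_expected, a=1.0, b=None, horizon=None):
    """Weighted sum of the cumulative failure percentage f[i].

    The weighting w[i] = a * exp(-b * i) is an assumed stand-in for the
    function proposed in the text: it punishes crib deaths heavily and
    has almost vanished beyond i_expected.
    """
    b = b if b is not None else 5.0 / i_expected  # w(i_expected) ~ 0.7% of a
    horizon = horizon or 10 * i_expected
    return sum(a * math.exp(-b * i) * f(i) for i in range(1, horizon + 1))

# Hypothetical cumulative failure curve: a small population of crib
# deaths followed by wear-out around five times the expected lifetime.
def f_example(i, i_expected=10_000):
    crib_deaths = 2.0 * (1.0 - math.exp(-i / 50.0))
    wear_out = 98.0 / (1.0 + math.exp(-(i - 5 * i_expected) / i_expected))
    return crib_deaths + wear_out  # percentage of failed devices

# A higher result indicates a less reliable technology.
print(f"R = {reliability_metric(f_example, i_expected=10_000):.1f}")
```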
The exact method used to compare stretchable interconnect technologies and designs should be selected based on the application, available data, and desired outcome. For example, a smart health monitoring patch would only be expected to last a day, while a consumer device in the European Union would have to last for over two years. Sadly, calculating these metrics is infeasible at this time because of insufficient data and will most likely only happen at the point of large-scale industrialization.
3.2. Example Case
Analysing the data from the second test using the above methodology—planar stretchability—demonstrates the usefulness of these modified metrics and their potential pitfalls. Per design, four samples were tested, two with and two without TPU encapsulation.
Table 2 and Table 3 list the mechanical and electrical measurements, respectively. Mechanical failure was defined as the first point at which the material ruptures, while electrical failure was defined as a ten-fold resistance increase over the starting resistance. Because of the time uncertainty between the trigger signal being sent and the start of the measurement, one millimeter is deducted from the extension before electrical failure.
The observed failure (Run 2—Design #13) in the mechanical measurement occurred because of a loss of air pressure to the pneumatic grips during the test, releasing the sample. The more common failures during the electrical measurement were caused by a variety of problems in synchronizing the mechanical and electrical measurements. In both cases, the actual achievable elongation values are expected to be lower; the reason for this is a combination of the sample slipping and cantilevering in the grips. Measuring the exact length after failure was impossible because of delamination and curling of the material, as shown in Figure 8.
Averaging the values for both the free-standing and encapsulated cases, and separating them into mechanical and electrical failure, the stretchability and planar stretchability were calculated for each value, resulting in Table 4. The area considered when calculating the planar stretchability is the actual width the interconnect takes up on the sample multiplied by 47 mm; the values used are listed in Table 1. The starting fillets were neglected, as these are identical in all cases. However, when comparing technologies, or if they differ between designs, these fillets or other transition structures should be included.
The straight track reference (Design #12) appears to have a stretchability of 20% to 30%. This high stretchability is not practical because it originates from plastic deformation of the material, combined with the above clamping problems. Additionally, it is non-reversible, limiting its use to one-time deformations. As a result, this should only be considered a baseline measurement.
Considering the possible deviation caused by plastic deformation and the test (20% to 30%), the experimental stretchability values are closely in line with the theoretical values calculated by considering the meander’s centerline—as expected. However, the planar stretchability (PS) values—multiplied by a thousand for the sake of convenience—tell a different story. The clearest example is Design #4; while it achieves the same stretchability as Designs #5 and #6, it requires significantly more surface area to do so, as illustrated in Figure 9a, making it less attractive from a manufacturing cost perspective.
At the same time, the fallacy of the planar stretchability is magnified by the reference design, which scores a staggering 284 to 438—three to ten times higher than the meanders. This is not surprising, because a straight line is the most efficient way to connect two points on a flat substrate. However, a wider trace would lead to a lower planar stretchability; luckily, the higher-stretchability meanders would experience a similar drawback, because they would have to use even wider copper to achieve the same resistance as the much shorter straight trace. Moreover, this method heavily promotes horseshoe-shaped meanders with small radii and a large number of segments, because they pack a lot of conductor length per substrate area, even though smaller bending radii might not necessarily be an advantage from a reliability perspective.
The compensated planar stretchability (CPS), listed in Table 5, changes this figure entirely by introducing the number of transitions or segments—available in Table 1. This is, once again, exceptionally clear when comparing designs with similar stretchability, such as Designs #4, #5, and #6. While #5 and #6 were previously closely matched because they take up similar substrate surface areas, #4 and #5 now have the advantage because they have fewer segments. This stands to reason, because the increased space between the individual segments allows more encapsulation material in between them—meaning the interconnect can deform more before developing tears in the encapsulation material. An additional reason that Designs #4 and #5 are preferable over #6 can be seen in Figure 9c; at 5 mm elongation, Design #6 is almost entirely stretched, while #4 and #5 still have some headroom. This situation only worsens as the elongation is increased to 10 mm (Figure 9d) and then to 15 mm (Figure 9e). While this might appear counterintuitive at first glance, the reason for this discrepancy can be found in Figure 9b. The length along the meander centerline, as listed in Table 1, is not the actual length a meander can achieve without plastic deformation. Instead, the smallest radii in combination with a tangent running between these radii provide an accurate number. As a result, a meander with a small number of segments, and hence a lower opening angle α, can potentially span a far longer distance without undergoing plastic deformation.
For example, when considering the difference between the centerline and this actual length on a per-segment basis, Design #4 only loses 4.56% of its length per segment, while this increases to 7.49% and 10.10% for Designs #5 and #6, respectively, explaining the results seen in Figure 9.
The advantages of compensated planar stretchability are also apparent when comparing Designs #16, #17, and #18. While Designs #18 and #17 should significantly exceed the stretchability of Design #16, both their CPS figures are significantly scaled down by introducing the number of segments as a factor. Given that the minimum spacing between two separate meander segments decreases from 5.038 mm to 2.831 mm, and finally to 1.849 mm, respectively, for these three designs, this scaling is justified—especially when considering that a larger radius should decrease the strain experienced by the copper.
3.3. Resistance Measurements
DC resistance measurements are the gold standard for electrically verifying stretchable electronics. The general idea consists of proving the prowess of the technology or design at its intended application; that is, passing an electrical signal. However, many measurements downplay the actual effects observed or are fundamentally flawed from the start by failing to consider realistic use conditions.
The above circuit was tested using a 1 mA current, a typical test current for resistance measurements, to place the returned voltage in the multimeter’s 2.1 V measurement range. However, if the function of the device were to carry a large current (e.g., 10 A), the performed measurement would be pointless because the heating might significantly affect the mechanical reliability of the circuit. Carrying large currents poses different challenges than, for example, capacitive sensing; the measurement should therefore use a test current close to the one encountered in the intended application whenever possible. This is especially important considering the failure mode of conductors can change dramatically as the width and thickness increase, as commonly seen with flexible circuit boards, meaning a small dimensional change to improve electrical performance might cause significant issues mechanically.
Under any condition, the first step is correctly measuring the observed resistance. Three circuit topologies, shown in Figure 10, are in everyday use when measuring the resistance of an electrical interconnect. The first topology (Figure 10a) is equivalent to placing multimeter probes on the contact pads and performing a two-point measurement. Here, both the contact and lead resistance come into full effect. The next possible topology is the quasi four-wire approach (Figure 10b), where separate wires are used for driving the measurement current and sensing the voltage, but they are connected before contacting the device-under-test (DUT). As a result, the contact resistance is still present. The final case, a true four-wire measurement (Figure 10c), is the only correct method in most cases [17,18].
The method in Figure 10a is only suitable for quick process verification, secondary checks, or when dealing with resistances of multiple kilo-ohms. For the tested sixty-meter meander, Figure 10b presents the minimum acceptable measurement. The movement of the test leads, and especially the coiling on the reel, might significantly affect their resistance during the measurement. Moreover, during movement, the contact resistance can change dramatically, making the true four-wire measurement the only correct method.
To understand the importance, it is worth looking at ballpark figures. Consider a stretchable interconnect in an undefined technology; this hypothetical copper-based interconnect has a resistance of 300 mΩ. A two-wire measurement (Figure 10a) is performed on the interconnect while it is stretched, hoping to monitor the formation of micro-cracks over time. The test leads have a resistance of 100 mΩ each, while the nickel-plated probes have a contact resistance of 25 mΩ with the interconnect. Before stretching the interconnect, the multimeter will see 550 mΩ, which, for example, increases to 700 mΩ when the interconnect is fully extended—a 27.3% increase. In reality, the resistance increase the meander experienced was 50%—a significantly different result.
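A minimal sketch of this two-wire error, using the hypothetical values from the example above:

```python
# Two-wire measurement error sketch using the hypothetical values above.
r_lead = 0.100       # ohm, per test lead (two leads in the loop)
r_contact = 0.025    # ohm, per probe contact (two contacts)
r_parasitic = 2 * r_lead + 2 * r_contact  # 0.250 ohm in series

r_dut_relaxed = 0.300    # ohm, interconnect before stretching
r_dut_stretched = 0.450  # ohm, after a true 50% increase

reading_relaxed = r_dut_relaxed + r_parasitic      # 0.550 ohm
reading_stretched = r_dut_stretched + r_parasitic  # 0.700 ohm

apparent = (reading_stretched - reading_relaxed) / reading_relaxed
actual = (r_dut_stretched - r_dut_relaxed) / r_dut_relaxed
print(f"apparent increase: {apparent:.1%}, actual increase: {actual:.1%}")
# -> apparent increase: 27.3%, actual increase: 50.0%
```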
The next measurement, performed by a slightly more skilled operator, uses a zeroed multimeter—meaning the probes were placed on the same contact pad to make a reference measurement. This action will indeed subtract 250 mΩ from the measured resistance and provide a 50% figure in theory. However, in practice, the measurement is still flawed, and the formation of small micro-cracks will be lost to the measurement noise and error. The movement during stretching will slightly change the resistance of the test leads, and shifting of the contacts will result in a slight but noticeable alteration of the contact resistance, because it is a function of pressure, surface condition, and location [19,20,21]. The magnitude of this effect is difficult to estimate, as it heavily depends on the surface roughness and type of probe. However, it is safe to say it will create an uncertainty in the milliohm range or larger (e.g., ±5 mΩ). Additionally, the multimeter might not be in its optimal measurement range; the software will dutifully subtract the 250 mΩ and present 150 mΩ in ideal circumstances. However, if the measurement ranges were, for example, 500 mΩ and 5 Ω with resolutions of 1 mΩ and 10 mΩ, respectively, the resolution is dictated by the 400 mΩ total the multimeter actually sees, not the 150 mΩ it displays—and once the total rises beyond the 500 mΩ range, the measurement uncertainty increases tenfold. This means that it is crucial to consider the measurement range the multimeter is in, and not blindly base the result on the displayed value, especially when dealing with low-cost handheld units.
Of course, not every effect comes into play in every situation; therefore, it is essential to understand the magnitude of the problem for a specific situation. Micro-cracks are initially virtually impossible to detect electrically; this is easy to prove using simple electrical theory. The resistance of a rectangular conductor at DC depends on its resistivity ρ, length L, thickness t, and width w [22]:
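That is, the familiar expression:

R = ρ · L / (t · w)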
Consider a copper trace with a thickness of 18 µm and a width of 100 µm at 20 °C; it will have a resistivity of 16.8 nΩm [23,24]. If the slice in which the micro-crack occurs has a length of 1 µm, it will start with a resistance of 9.3 µΩ. To add 1 µΩ of resistance, the crack has to propagate 9.7 µm into the trace, and another 7.95 µm to add a further 1 µΩ. In short, detecting micro-cracks early requires exact high-resolution resistance measurements. This also explains why failures might appear sudden; the trace might have broken most of the way before the actual resistance increase becomes noticeable. Of course, detecting a single micro-crack tends to be impossible under most circumstances; luckily, they tend to occur in large numbers. However, based on this, we can conclude that analyzing the mechanical condition of a circuit based on electrical measurements is a precarious method unless the measurement limitations are considered.
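The crack-propagation figures above follow directly from the formula; a short Python sketch using the values from the text:

```python
# Resistance of the 1-um slice as a micro-crack narrows the trace.
rho = 16.8e-9  # ohm*m, copper resistivity at 20 C
t = 18e-6      # m, trace thickness
w = 100e-6     # m, trace width
L = 1e-6       # m, length of the slice containing the crack

def slice_resistance(crack_depth_m):
    """Resistance of the slice with the crack removed from its width."""
    return rho * L / (t * (w - crack_depth_m))

print(f"{slice_resistance(0):.2e} ohm")         # ~9.33e-06: intact slice
print(f"{slice_resistance(9.7e-6):.2e} ohm")    # ~1.03e-05: +1 uOhm
print(f"{slice_resistance(17.65e-6):.2e} ohm")  # ~1.13e-05: +2 uOhm
```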
What is also interesting to study is the effect of this localized resistance increase when carrying larger currents. Consider, for example, the case of a constant-current LED driver supplying 100 mA through the above 18 µm by 100 µm interconnect. Assuming the interconnect spans a distance of 200 mm, it will have a resistance of 1.867 Ω and will dissipate approximately 18.7 mW over its entire length—a trivial amount not worth mentioning for a highly conductive material like copper. However, at the location of micro-cracks, things can turn for the worse quickly; a 1-µm long slice will dissipate only 93.33 nW, but this goes up to 933.3 nW for a 90-µm wide crack. Considering this slice weighs only 16.13 ng and copper has a specific heat capacity of 384.4 J/(kg·K) [24], it requires only 620 nJ to increase its temperature by 100 °C—or less than one second at this current, assuming no thermal energy is conducted to the surrounding material. While conduction to the surroundings will dampen the temperature rise, it does point out that thermal degradation of the encapsulating material, which supports the interconnect, can occur at the site of failing conductors. It is not far-fetched that this might significantly aggravate the situation and lead to earlier failures, depending on the encapsulation material.
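Extending the sketch above with the heating numbers (a copper density of 8960 kg/m³ is assumed here to reproduce the quoted 16.13 ng slice mass):

```python
# Local heating at a 90-um deep crack when carrying 100 mA.
current = 0.100            # A
r_cracked = 9.333e-6 * 10  # ohm, slice with only 10 um of width remaining
power = current**2 * r_cracked           # ~933 nW dissipated in the slice

density = 8960             # kg/m^3, copper (assumed)
mass = density * 1e-6 * 18e-6 * 100e-6   # ~1.61e-11 kg (16.13 ng slice)
c_p = 384.4                # J/(kg*K), specific heat of copper

energy_needed = mass * c_p * 100         # ~620 nJ for a 100 C rise
print(f"time to +100 C (adiabatic): {energy_needed / power:.2f} s")  # ~0.66 s
```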
Another effect worth considering when measuring small resistances is the Seebeck effect, meaning any metal-to-metal junction starts behaving as a thermocouple [17]. In most measurement setups, this effect is trivial and readily eliminated by using a parallel measurement path on both sides of the sample, because the entire setup is at the same temperature. Alternatively, specific alloys can be used on the contacts to limit this effect further—usually labelled as “low thermal electromotive force (EMF)” by the manufacturer. However, heating caused by the circuit or mechanical movement can upset this isothermal environment and result in the introduction of large thermal voltages.
Consider a typical nickel-plated probe; the copper–nickel Seebeck coefficient is 10 µV/K for pure copper and nickel [25]. A temperature gradient of 10 °C will then lead to a thermal voltage of 100 µV. Using the initial example once more, a source meter sends a 100 mA measurement current through 300 mΩ, which results in a voltage of 30 mV over the sense leads in ideal conditions. The 100 µV offset caused by the thermal voltage skews this measurement to 301 mΩ—a relatively small error. The problem stems from the fact that many handheld multimeters are incapable of providing high test currents, unlike source meters or high-end bench meters. For example, the Keysight U1253B uses a current of 1.04 mA, which would result in a 312 µV output; adding 100 µV onto that results in a 400 mΩ value on the display (10 mΩ resolution). An even worse situation exists when considering copper–copper oxide contacts, which can exhibit Seebeck coefficients of 1 mV/K [17]. For this reason, choosing the correct probe for the intended application is of vital importance when measuring small resistances.
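The influence of the test current on this error can be made explicit; a brief sketch using the figures above:

```python
# Apparent resistance error caused by a 100-uV thermal EMF offset.
def apparent_resistance(r_true, i_test, v_thermal=100e-6):
    """Resistance a meter computes when a thermal voltage adds to the reading."""
    return (r_true * i_test + v_thermal) / i_test

r_true = 0.300  # ohm
for i_test in (0.100, 0.00104):  # source meter vs. handheld (Keysight U1253B)
    print(f"{i_test*1e3:7.2f} mA -> {apparent_resistance(r_true, i_test)*1e3:6.1f} mOhm")
# ->  100.00 mA ->  301.0 mOhm
# ->    1.04 mA ->  396.2 mOhm (displayed as 400 mOhm at 10 mOhm resolution)
```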
A final consideration is the time it takes the multimeter to perform the measurement. This measurement is rarely instant because the sample-and-hold circuit requires time to charge a capacitor [26]. A changing resistance during this interval—called the aperture time—is a potential cause of error, meaning movement of either the probes or the sample can introduce an unintended bias to the measurement [18]. This is exceptionally clear when attempting to eliminate the noise of the power grid by using aperture times that are multiples of 20 ms or 16.7 ms to average one or more power cycles. Ideally, the mechanical measurement stops, triggers the multimeter, and waits for it to complete a measurement before continuing the test.
In short, measuring small resistance changes requires a correctly configured measurement setup. The most substantial errors can be avoided by using a four-wire measurement with test currents that emulate the self-heating of the intended application. Samples should have clean electrical contacts at the same temperature, and the measurement instrument has to be configured to the correct range.