3.1. Preform Luminescence Parameters Measurement
After the preform is installed in the holders, the device lid is closed to prevent reflected or scattered pump radiation from escaping outside; this is a safety requirement when working with the system. The laser diode is then switched on, and measurements begin at the center point of the reference preform. The system waits until the voltage fluctuations received from both photodetectors (8) and (10) drop below critical values and become acceptable for measurement. Practice has shown that this wait takes about 7 min on average and at most about 20 min. The pump radiation is transmitted through a multimode fiber to the collimator and, after passing through the splitter–scatterer (19), enters through the side surface, first into the reflective cladding of the optical fiber preform and then into the core, which contains, among other dopants, active erbium. The active metal luminesces at a wavelength of 1530 nm, and it is this radiation that is registered by detector (10).
After the diode enters its operating mode, the controller commands the motors to move the preform so that the pump radiation is directed to the first measurement point. In tomography mode, the rail moves to the zero position and begins translational motion in increments of 50 μm. At each position, the voltages received from photodetectors (10) and (8) are measured with an accumulation time of 500 ms. In this way, the luminescence intensity profile of the active element (erbium and/or ytterbium) is constructed along the diameter of the preform. In the fast-scanning mode, measurements are conducted while the rail is moved by a stepper motor (13) from the zero position to the end of the scanning range with a variable step: at the periphery of the cross section, the step can reach 400 μm, but, as the rail approaches the core, it decreases to 50 μm. At the point of maximum intensity, the rail stops, and the system performs the same procedure with another stepper motor (11). After the maximum intensity is reached again, the signal is acquired for 1 s. A command is then given to the longitudinal movement stepper motor (not shown in Figure 1), and the preform moves to the second measurement point. If fast scanning is used, the search area for the core is reduced based on the luminescence profiles in both axes at the previous point, saving time and resources (e.g., reducing wear on elements (11) and (13) and all the associated mechanics). If the tomography mode is used, the rail moves to the zero position and scanning begins at a constant step of 50 μm. This continues until the last measurement point is reached. If necessary, the stepper motor (9) performs axial rotation. At each point, the system records the following quantities: the voltages received from both photodetectors, the positions of the motorized positioners for the X coordinate (rail) and Y coordinate (minirails along which detector (10) moves), and the laser temperature.
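The coarse-to-fine core search used in the fast-scanning mode can be sketched as follows (a minimal illustration, not the system's actual firmware; `measure_intensity` is a hypothetical callback standing in for the photodetector readout, and the step sizes follow the 400 μm and 50 μm figures above, expressed in millimeters):

```python
def find_core(measure_intensity, x_start, x_end,
              coarse_step=0.400, fine_step=0.050):
    """Locate the position of maximum luminescence along one axis.

    A coarse pass with a large step covers the whole scanning range,
    then a fine pass with a 50-micron step refines the position
    around the coarse maximum (all positions in mm).
    """
    # Coarse pass: large step across the full scanning range.
    xs = []
    x = x_start
    while x <= x_end:
        xs.append(x)
        x += coarse_step
    best = max(xs, key=measure_intensity)

    # Fine pass: small step in a window around the coarse maximum.
    lo = max(x_start, best - coarse_step)
    hi = min(x_end, best + coarse_step)
    xs = []
    x = lo
    while x <= hi:
        xs.append(x)
        x += fine_step
    return max(xs, key=measure_intensity)
```

In the real device, the same routine would be repeated for the second stepper-motor axis, and the search window at the next longitudinal point would be narrowed around the previous maximum.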
After receiving the measured values in volts, the pump diode intensity fluctuations over time are compensated using the data obtained by detector (8), and the values of detector (10) are recalculated into the concentration of erbium or ytterbium oxide. For this purpose, a measurement must be obtained during stabilization of the pump diode, when its beam traverses the reference sample and photodetector (10) registers the luminescence intensity.
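This compensation can be sketched as a simple ratio normalization (an illustrative assumption about the algorithm's form; the calibration against the reference sample is reduced here to a single scale factor):

```python
def compensate(lum_v, pump_v, ref_lum_v, ref_pump_v, ref_concentration):
    """Convert a detector (10) voltage into an oxide concentration estimate.

    lum_v, pump_v          -- detector (10) and (8) voltages at a point
    ref_lum_v, ref_pump_v  -- the same voltages on the reference sample
    ref_concentration      -- known concentration of the reference sample
    """
    # Divide by the pump monitor voltage to cancel diode power drift.
    normalized = lum_v / pump_v
    ref_normalized = ref_lum_v / ref_pump_v
    # Rescale so the reference point maps to its known concentration.
    return ref_concentration * normalized / ref_normalized
```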
The data obtained by the system were verified using X-ray diffraction analysis (XRDA). To do this, one of the samples, with a deliberately created uneven distribution of erbium oxide concentration along its length, was divided into fragments, each of which was subjected to XRDA. The correlation coefficient between the two dependences of erbium concentration on the coordinate along the length of the preform, obtained by the different methods, was 0.98. The fact that the correlation coefficient differs from 1 can be explained by several factors. Firstly, the cause could be a certain measurement error (e.g., an imperfection of the pump diode intensity fluctuation compensation algorithm); secondly, it is worth noting that XRDA detects all erbium, including clustered erbium, which is incapable of luminescence. However, developers and manufacturers of active fibers are mainly interested in precisely the erbium that emits radiation at a wavelength of 1.5 μm when pumped at 970–980 nm.
3.2. Preform Geometric Parameters Measurement
It should be noted that the XYZ coordinates of the positioners are already registered at the stage of luminescence parameter measurement, as mentioned earlier. Since the search algorithms find the maximum intensity when scanning along both axes, the maximum amount of erbium oxide in a given section is concentrated in these areas of the preform. It can be said that a kind of "luminescence geometric center" is found. Practice has shown that, in most cases, it coincides with the geometric center of the core (which is determined by the refractive index profile). However, in some cases, the geometric center of the core may not correspond to the geometric center of the cladding. Therefore, it is necessary to measure the parameters of the outer surface of the preform. So, after the luminescence study, the pump diode is turned off and both iris diaphragms are opened. Voltage is applied to the LED, and the CCD camera begins to stream data.
It was previously found that side illumination of the sample is one of the most effective options. Firstly, it brings the problem closer to an already solved one: television measurement of the preform diameter during the chemical vapor deposition process, where the sample is illuminated by the flame of an oxygen–hydrogen torch. Secondly, experiments have shown that face illumination, although possible, is better suited to the visualization of glass surface defects. Thus, Figure 4 shows a system design variant where the LED radiation is injected into the technical holder and subsequently propagates along the sample to its opposite end.
In Figure 4a, a human fingerprint is clearly visible, while in Figure 4b there are small scratches (which, according to the optical fiber preform handling rules, should of course not exist). At the same time, clean preform areas are rather poorly visualized (excluding local narrowings; see the figure above). Thus, face illumination is recommended for surface flaw detection. An example of side illumination for preform diameter and center measurement is shown in Figure 5.
In this measurement, the CCD camera captures a rectangular area, which is a raster image containing color information. An area of the preform about 10 mm long is used for averaging (color information is accumulated along the length and then divided by the number of pixels per 10 mm of the preform):
where N is the number of pixels within 10 mm of preform length. Experiments have shown that, for typical preforms with a diameter from 10 mm to 16 mm, it is optimal to select the averaging window in such a way that there are at least 400 pixels per preform diameter. The intensity in the image is also described as [17]:

where Q is the bit depth for each color component (R, G, and B); in our case, Q = 8 and the coefficient m = 0. Then, the R, G, and B color components can be obtained as follows:
where ‘div’ and ‘mod’ are the operations that return the result of division with the remainder discarded and the remainder of the division, respectively. Then, the “gray” component is calculated:
where Gr1 and Gr2 are the two methods of "gray" component calculation, which can be selected depending on the preform illumination type. The profile with the highest contrast among them is selected and assigned to the Gr values. Then, the intensity of the "gray" image is:
The set of intensities can be visualized as shown in Figure 6.
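The packing and div/mod recovery described above can be sketched as follows (assuming the common packing convention I = R·2^(2Q) + G·2^(Q) + B, which is consistent with the div/mod recovery; the two "gray" formulas are standard stand-ins for Gr1 and Gr2, whose exact forms the text does not specify):

```python
Q = 8              # bit depth per color component
BASE = 1 << Q      # 256 levels per component

def pack(r, g, b):
    """Pack R, G, B components into a single intensity value."""
    return (r * BASE + g) * BASE + b

def unpack(i):
    """Recover R, G, B with integer division ('div') and remainder ('mod')."""
    r = i // (BASE * BASE)
    g = (i // BASE) % BASE
    b = i % BASE
    return r, g, b

def gray(r, g, b):
    """Two common "gray" conversions; the higher-contrast profile is kept."""
    gr1 = (r + g + b) / 3.0                    # arithmetic mean
    gr2 = 0.299 * r + 0.587 * g + 0.114 * b    # luma weighting
    return gr1, gr2
```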
Ideally, the color boundaries can be described by a straight and an inverted Heaviside function:

where R and F are the coordinates of the leading and trailing edges of the burst (reflection). The straight and inverted Heaviside functions can be represented in continuous form:

where k is the "sharpness" of the transition between the zero and one states.
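The continuous representation can be sketched with the logistic function (a standard choice for a smooth step; the exact parametrization used in the paper's equations may differ):

```python
import math

def heaviside(x, edge, k):
    """Continuous straight Heaviside step: 0 -> 1 around `edge`.

    k controls the "sharpness" of the transition between states.
    """
    return 1.0 / (1.0 + math.exp(-k * (x - edge)))

def heaviside_inv(x, edge, k):
    """Inverted step: 1 -> 0 around `edge`."""
    return 1.0 - heaviside(x, edge, k)

def burst(x, rising, falling, k):
    """Model of a reflection burst between its leading (R) and trailing (F)
    edges as the product of a straight and an inverted step."""
    return heaviside(x, rising, k) * heaviside_inv(x, falling, k)
```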
In this case, it seems technically feasible to calculate the correlation coefficient of the obtained data with this function, as well as with functions specified in a table and saved in the system database. It was previously shown that calculating the correlation coefficient in a similar way ensures the specified accuracy of layer localization, including the boundaries of the fiber light guide preform surface [18]. However, this option was later excluded because calculating Pearson's coefficient in the scanning window loads the system's computing resources quite heavily and allows the result to be obtained only with a delay of 15–30 min, which is quite significant under industrial production conditions.
Figure 6 shows that the intensities of the two peaks corresponding to the preform outer layer boundaries are not the same. This is because the preform under test is illuminated from one side only; therefore, the opposite side of the outer surface is illuminated less intensely. Consequently, with large preform diameters, as well as with intense noise during image digitization, the useful signal (the second peak) may mix with the noise. There are several ways to remove the noise component:
Expanding the area where the preform is captured by the CCD camera and, as a result, obtaining more data for accumulation, thereby increasing the signal-to-noise ratio (SNR) of the data entering the system. This solution seems to be the most obvious; however, it requires an increase in the distance between the camera and the sample under test and, as a consequence, an increase in the dimensions of the product. This is unacceptable since the system is already quite large. In addition, the spatial resolution of the method will decrease;
Increasing the data accumulation time. This measure will also increase the SNR, although at the cost of lengthening the technological process;
Digital signal processing. A method was recently presented that can increase the signal-to-noise ratio by more than 10 dB when using optical sensors. The possibility lies in the fact that a one-dimensional, discretely specified array of real or integer data can be processed so that its various components are either averaged or left unchanged according to a specially defined law. In the simplest case, this may be a law that describes the dependence of the averaging parameter (for example, the size of the scanning window) on a signal parameter (for instance, its intensity at a particular frequency or at a certain spatial coordinate).
Thus, Figure 7 shows the result of processing the obtained noisy data using the activation function dynamic averaging (AFDA) method [19]. This is an improved algorithm based on the frequency domain dynamic averaging (FDDA) technique, which was originally created for processing data from optical frequency domain reflectometers and coherent optical time domain reflectometers [20].
The AFDA/FDDA techniques are based on the following approach. First, all values in an I(j) array of N elements are normalized from 0 to 1. After this, the filtered array Ia(j) is written as [21]:

where Ia(j) stands for the filtered signal, j is the pixel number, and is the maximum event (burst) size in pixels.
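A minimal sketch of this normalization and dynamic averaging, under the assumption that the linear law of Equation (15) maps each normalized sample value to a moving-average window size (large windows for weak, noise-dominated samples; no averaging at the burst maxima; `w_max` stands in for the maximum event size in pixels):

```python
def afda_linear(signal, w_max):
    """Dynamically averaged copy of a 1-D signal.

    The window size shrinks linearly from w_max (for the weakest
    samples) down to 1 (for the strongest), so sharp bursts are
    preserved while the noise floor is smoothed.
    """
    lo, hi = min(signal), max(signal)
    span = (hi - lo) or 1.0
    norm = [(v - lo) / span for v in signal]   # normalize to [0, 1]

    out = []
    n = len(signal)
    for j, v in enumerate(norm):
        # Linear law: window size decreases as the sample value grows.
        w = max(1, round(w_max * (1.0 - v)))
        a = max(0, j - w // 2)
        b = min(n, j + w // 2 + 1)
        out.append(sum(signal[a:b]) / (b - a))
    return out
```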
The noise distribution of the detectors that register backscattering in reflectometers is similar to the noise of the photosensitive elements of video cameras; this can be concluded by comparing the histograms presented in [22,23]. The useful signal, just as in the works above, is a sharp burst. This means that FDDA-like approaches can also work for the data types studied in this work. Obviously, Equation (15) describes a linear function. It has been shown that the choice of a linear function is not always appropriate in this case [18]. There, Turov et al. justify the use of the GELU function (the Gaussian Error Linear Unit neuron activation function) [24]. Of course, [19] suggests that the considerations leading to the choice of the GELU function as the characteristic of a digital filter for processing data from reflectometers of various types can, owing to the similarity of the data, be projected onto the problem of identifying the preform's external borders. However, it is necessary to note a number of its advantages over the others, precisely for the research described in this work. Thus, other neuron activation functions are presented below analytically (16)–(19) and graphically (Figure 8) [24]. Of all these expressions, the GELU function is the most flexible and suitable for the problem; the reasons are described below.
The Rectified Linear Unit (ReLU) is a type of activation function that grows linearly when the argument is positive but is zero when the argument is negative. The kink of this function ensures the nonlinearity of its response:
where B is the kink point (which can be located not only at zero but at any other position). Essentially, using this activation function for dynamic averaging and noise filtering is a combination of the FDDA method and a threshold algorithm, as demonstrated in [19], where fairly high processed-signal SNR values were obtained. However, the filtered useful signal had a number of disadvantages: new distortions were introduced, some associated with edge effects and some with an unsuitable choice of function (in particular, presumably, with the presence of an obvious kink point). The exponential linear unit (ELU) function does not have such problems [25]:
where α > 0 and B is the coordinate where the exponential and the straight line intersect.
However, this activation function divides the signal intensity region into only two zones, determined by the parameters B and α, which may not be enough for flexible adjustment of the filtering function. Therefore, as in [19], it is proposed to use an inverted GELU function combined with another exponential instead:
where y and C are the adjustment parameters of the filtering function. It is known that the resulting function can be approximated, to some extent, by the logistic (sigmoidal) activation function, and its GELU component can be simplified to:
However, according to [26], this does not provide a significant gain in computing resource consumption, and it also deprives the function of a large number of tuning coefficients.
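The activation functions discussed above can be written compactly in their textbook forms (an illustrative sketch: the shift B, the parameter α, and the 1.702 constant of the sigmoid simplification follow the standard definitions rather than the paper's exact Equations (16)–(19)):

```python
import math

def relu(x, B=0.0):
    """ReLU with the kink shifted to point B."""
    return max(0.0, x - B)

def elu(x, alpha=1.0, B=0.0):
    """ELU: linear above B, saturating exponential below it."""
    if x >= B:
        return x - B
    return alpha * (math.exp(x - B) - 1.0)

def gelu(x):
    """GELU: x * Phi(x), with Phi the standard normal CDF."""
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_sigmoid(x):
    """Common sigmoid simplification of GELU."""
    return x / (1.0 + math.exp(-1.702 * x))
```

Unlike ReLU, both ELU and GELU are smooth around the transition point, which is the property the text associates with fewer filtering distortions.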
Before dynamic filtering, binary and threshold filters can be applied to each original image row instead of the averaged one. The image is converted to binary with a threshold T: if the pixel brightness is less than T, it is set to 0; otherwise, it is set to 1. After this, a threshold filter is applied to eliminate events containing fewer than k pixels. The image is then averaged and subjected to AFDA. The output data for k = 180 and T = 15 are presented in Figure 9.
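The binarization, small-event removal, and subsequent center calculation can be sketched as follows (an illustrative reading of the procedure; the function names are hypothetical, and T, k, and the pixel-to-length factor K play the roles defined in the text):

```python
def binarize(row, T):
    """Threshold a row of pixel brightnesses to 0/1."""
    return [1 if v >= T else 0 for v in row]

def drop_small_events(bits, k):
    """Zero out runs of ones shorter than k pixels."""
    out = bits[:]
    j, n = 0, len(bits)
    while j < n:
        if bits[j] == 1:
            start = j
            while j < n and bits[j] == 1:
                j += 1
            if j - start < k:           # event too short: treat as noise
                for t in range(start, j):
                    out[t] = 0
        else:
            j += 1
    return out

def preform_center(bits, K):
    """Geometric center from the leading edge of the first event and the
    trailing edge of the last event, converted to length units by K."""
    ones = [j for j, b in enumerate(bits) if b]
    Fa, Fb = ones[0], ones[-1]
    return K * (Fa + Fb) / 2.0
```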
It can be seen from the figure that the leading edge of the first event and the trailing edge of the second are represented by lines almost perpendicular to the abscissa axis. The experiment showed that their strict perpendicularity is occasionally violated only by a line displacement of 1 pixel, which, with a preform diameter of about 10 mm, corresponds to 12–25 μm (depending on the focal length of the camera). This accuracy is sufficient to detect defects at an early stage. After the boundaries of the outer layer are determined from the event edges, the geometric center of the preform is calculated at the measurement location:
where Fa is the coordinate of the leading edge of the first event, Fb is the coordinate of the trailing edge of the second event, and K is the conversion factor from pixels to SI length units.
It is worth noting, however, that the accuracy of detecting the sample's geometric center at the location under test deteriorates with increasing deviation from cylindricity. This is because, during averaging of a tilted preform image, the lines of the two-dimensional array containing the boundary information are shifted relative to each other, which gives rise to edge "blurring". This can be seen quite clearly in Figure 10.
Figure 10, which also overlays the inverted color image from the camera (as well as dotted lines highlighting the boundaries), shows the extreme cylindricity deviation when the preform is rotated 10° relative to the camera. This practically unrealistic case would result in a measurement error of up to 850 μm; a deviation of 5° would result in an error of up to 240 μm. Real deviations rarely exceed 0.25°. The figure shows that, with such deviations, the inaccuracy of detecting the preform boundaries or geometric center does not exceed 30 μm (Figure 11).
It can be seen from the figures that, when the preform is rotated, the width of the events increases. For this reason, the full width at half maximum of both peaks is additionally calculated. If, for some reason, the peak edges are not steep enough to obtain correct data, this indicator warns of a possible drop in the measurement accuracy at a particular location.
3.3. Refractive Index Profile Stability Monitoring
The distribution of optical–geometric characteristics along the length of the optical fiber is determined by the stability of the refractive index as a function of preform radius in each position and projection. To save time in the overall process, a detailed examination of the refractive index profile can be carried out using special equipment in one position of the preform.
Figure 12, for example, illustrates the refractive index profile of a preform doped with silicon oxide, germanium oxide, aluminum oxide, and erbium oxide, obtained using a commercial PK2600 testing setup (Photon Kinetics, Beaverton, OR, USA).
Unlike the PK2600 setup, the proposed system uses a simple white light source instead of a red laser scanning over the cross section of the preform. The rays emerging from it pass through the preform at different angles and at different points; therefore, what the CCD camera records is not the deflection function of any individual ray but a certain integral dependence, common to all these individual light trajectories and unique for each section.
The measurement operates as follows. An aluminum plate, fixed to the side of the optical filter a few centimeters from the beam of the pump diode, has a color code printed on its surface in heat-resistant paint; this code is similar to a QR code, consisting of different areas. For the measurement, the camera, radiation source, and sample need not lie on a straight line, because the white light diode in the multi-reflective measurement cell acts almost as an omnidirectional source. Reflecting from the color-coded plate, the light passes through the preform and creates a color pattern on its side surface. This pattern I′ is captured by the camera and converted into a one-dimensional array using the formula:

where posmax returns the peak coordinate in pixels and d is the derivative step. The Dj dependence is presented in Figure 13. It is easy to see that it visually resembles the deflection function obtained by typical preform analyzers. We call this function 'integral' because it is obtained by the interaction of a large number of rays reflected at different angles, as mentioned above.
However, in what follows, it is proposed to operate with the modulus of its derivative (with respect to the spatial coordinate), since the integral deflection function itself contains an undesirable large bend, against the background of which small changes in the refractive index are barely noticeable (Figure 14). This function can be expressed as:
Part of the previous figure is superimposed on this one to give a visual representation of the events (layer boundaries) that have some relationship with the refractive index profile obtained with the PK2600. Unfortunately, the resulting function, unlike the standard deflection function in a preform analyzer, cannot be converted into a refractive index profile due to the large number of unknown parameters. However, since it is possible to obtain this function in any position of the preform, correlation analysis can be used to compare the function in each position with a "reference" function previously stored in the device memory. Such a reference must be stored for each type of optical fiber preform. Of course, it is quite difficult to produce two or more identical preforms, but production always demands an increase in the percentage of suitable products. The reference does not contain a refractive index profile in the classical sense: it contains information about the boundaries of the sample layers. This can be compared to the settings file of a typical refractive index profile analyzer, which stores the search area for layer boundaries. Of course, it is almost impossible to achieve a correlation coefficient of 1 between the test sample and the reference; therefore, the user must set the required threshold value based on the conditions imposed by the preform manufacturer. If the function u, obtained in the form of a discrete one-dimensional array, is compared to the reference function v, the correlation coefficient R between them is written as [27]:
where < > means the average value.
Initially, the value Rmin, which lies in the range from −1 to 1, is entered into the system. If all correlation coefficients calculated for each section satisfy R > Rmin, the preform is recognized as having acceptable quality and proceeds to further stages of production. Otherwise, it is subjected to additional tests using a refractive index profile analyzer.
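The acceptance check can be sketched directly from the averaging definition of the correlation coefficient (an illustrative transcription; `preform_accepted` and `r_min` are hypothetical names for the decision step):

```python
import math

def pearson(u, v):
    """Pearson correlation coefficient between two equal-length arrays."""
    n = len(u)
    mu = sum(u) / n
    mv = sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / n
    su = math.sqrt(sum((a - mu) ** 2 for a in u) / n)
    sv = math.sqrt(sum((b - mv) ** 2 for b in v) / n)
    return cov / (su * sv)

def preform_accepted(sections, reference, r_min):
    """True if every section correlates with the reference above r_min."""
    return all(pearson(s, reference) > r_min for s in sections)
```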