**Topological Charge Detection Using Generalized Contour-Sum Method from Distorted Donut-Shaped Optical Vortex Beams: Experimental Comparison of Closed Path Determination Methods**

#### **Daiyin Wang <sup>1</sup>, Hongxin Huang <sup>2,\*</sup>, Haruyoshi Toyoda <sup>2</sup> and Huafeng Liu <sup>1,\*</sup>**


Received: 30 August 2019; Accepted: 17 September 2019; Published: 20 September 2019

#### **Featured Application: This study will be valuable to researchers working in optical metrology and in the diagnosis of optical communication links through long-distance free-space propagation.**

**Abstract:** A generalized contour-sum method has been proposed to measure the topological charge (TC) of an optical vortex (OV) beam using a Shack–Hartmann wavefront sensor (SH-WFS), and a recent study extended it to aberrated OV beams. However, when the OV beam suffers from severe distortion, the choice of closed path for the circulation calculation becomes crucial. In this paper, we evaluate the performance of five closed path determination methods: watershed transformation, maximum average-intensity circle extraction, a combination of watershed transformation and maximum average-intensity circle extraction, and two variants of perfectly round circle assignation (fixed-center and shifting-center). In the experiments, we used a phase-only spatial light modulator to generate OV beams and aberrations, while an SH-WFS was used to measure the intensity profile and phase slopes. The results show that when determining the TC values of distorted donut-shaped OV beams, the watershed-transformed maximum average-intensity circle method performed the best, the maximum average-intensity circle method and the watershed transformation method came second and third, and the perfectly round circle assignation methods performed the worst. Discussions that explain the experimental results are also given.

**Keywords:** wavefront sensor; spatial light modulator; contour-sum method; topological charge; orbital angular momentum

#### **1. Introduction**

Recently, optical vortex (OV) beams, owing to their unique properties, have attracted increasing interest and have been utilized in a wide range of fields, from scientific research to advanced technology applications such as optical communications [1–4], optical metrology [5–7], and optical trapping and manipulation [8–10]. Many special properties of OV beams stem from the phase singularity in their wavefront function, where the intensity drops to zero and the phase is undefined [11,12]. Moreover, the phase along a closed path enclosing the singularity point varies from 0 to 2*n*π, where *n* is an integer known as the topological charge (TC) or the orbital angular momentum (OAM). OV beams with different TC values exhibit distinct characteristics and are consequently used, for example, as information carriers in state-of-the-art optical communication systems or to generate the forces needed to manipulate molecules. To meet the requirements of these applications, precisely measuring the TC value of an OV beam is an important issue, and therefore many methods, such as interferometry-based methods [13], diffraction-based methods [14–16], and modal decomposition-based methods [17,18], have been proposed and comprehensively studied.

As one of the key devices used in adaptive optics systems [19], Shack–Hartmann wavefront sensors (SH-WFS) have also been utilized to determine the TC value of OV beams [12,20–23]. An SH-WFS consists of a lenslet array and an image sensor, and directly measures the phase slope of the incident wavefront at each lenslet position. TC determination with an SH-WFS is simple and direct, and the contour-sum method (CSM) has been proposed based on the principle that the net TC value in an area is proportional to the line integral of the phase slope along the closed path circumscribing the area [12].

The basic form of the contour-sum method uses pre-assigned closed paths, e.g., the closed path associated with the central 2 × 2 or 3 × 3 lenslet area, to calculate the discrete line integral, which is employed to determine the TC value of an OV beam [20,22]. This approach is successful in measuring OV beams with nearly uniform or quasi-uniform intensity distributions. However, it becomes deficient under some conditions, especially when the OV beams to be measured have donut-shaped intensity profiles, where the phase singularities are embedded in the low-intensity (dark) region and the phase slope measurements can be invalid. This condition commonly arises in many OV applications, such as OV-based optical communication and OV-based optical metrology. In view of this, in a previous study, we generalized the contour-sum method to work with a closed path of arbitrary shape, and proposed a maximum average-intensity circle (MAIC) method to extract a closed path containing only valid phase slope data from the annular intensity profile [24]. We experimentally investigated the MAIC method with both aberration-free OV beams and OV beams distorted by simulated atmospheric turbulence, and concluded that the proposed MAIC method has good robustness against aberrations. Moreover, we found that the closed path used for the circulation calculation has a notable influence on the determined TC value when the OV beams are severely distorted by aberrations. Considering that the closed path is vital for the CSM, we also briefly demonstrated the superiority of the MAIC method over the use of perfectly round circles (PRC) as the closed path [24].

With the aim of further improving the measurement accuracy under the condition of severe distortion and enriching the discussion of the influence of closed paths in circulation calculations, in this paper, we compare the performance of several closed path determination methods: perfectly round circles assignation, watershed transformation, maximum average-intensity circle extraction, and the combination of the watershed transformation with maximum average-intensity circle extraction.

This paper is organized as follows. In Section 2, we present a brief review of the generalized contour-sum method (GCSM), and in Section 3, we describe five closed path determination methods for the GCSM. In Section 4, we show the optical setup for the proof-of-principle experiments. In Section 5, we present the experimental results and discussions. The comparisons were conducted under scenarios of both aberration-free OV beams and OV beams distorted by simulated atmospheric aberrations. A summary and conclusions are given in Section 6.

#### **2. Generalized Contour-Sum Method**

The contour-sum method was first proposed by Fried and Vaughn in 1992 [12] to prove that there is a branch cut (phase singularity) in the phase function of a light field with strong intensity variation. Since then, it has been adopted to detect OV beams using the phase slopes measured by an SH-WFS [20]. The essence of this method is that the circulation of the phase gradient along a closed path enclosing the singularity point is related to the integer TC value *n*, which can be expressed as [20]:

$$n = \frac{1}{2\pi} \oint_{C} \nabla \phi \cdot d\vec{r}, \tag{1}$$

where *C* is a closed path, ∇φ is the phase gradient, and $d\vec{r}$ is an infinitesimal displacement along the closed path *C*.

In practice, in order to determine the TC, we must discretize the line integral according to the geometrical configuration of sampling points, and accordingly rewrite Equation (1) as:

$$\text{Cir} = \frac{A}{2\pi} \sum_{k=1}^{K} \vec{S}_k \cdot \vec{l}_k, \tag{2}$$

where *K* is the number of sampling points along the closed path, and $\vec{S}_k$ and $\vec{l}_k$ are the phase slope and discretized contour path of the *k*-th sampling point, respectively. In Equation (2), a constant *A* is introduced to compensate for the error caused by the discretization [24], and Cir is the TC value to be measured.

Specifically, with the use of an SH-WFS, the discretization is realized by the lenslet array, and the phase slopes are simply obtained from the displacements of the focusing spots produced by the corresponding lenslets. Thus, the discretized contour paths connect the centers of the lenslet areas, forming the closed path. Figure 1 illustrates the generalized contour-sum method. In Figure 1, the square indicates the lenslet area, and the area marked with downward diagonal lines is an element forming the closed path. The discretized contour path $\vec{l}_k$ is represented by the red vector that connects the centers of two adjacent elements in the closed path, and the blue vector $\vec{S}_k$ is the phase slope, which is the average of the phase slopes at the two adjacent elements.

**Figure 1.** Graph illustrating the generalized contour-sum method.
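As a minimal numerical sketch of Equation (2) (Python, for illustration only), the contour sum can be evaluated for a synthetic vortex whose phase gradient is known analytically, using slopes averaged over adjacent elements on a hypothetical square closed path of lenslet centers, with *A* = 1:

```python
import numpy as np

def phase_gradient(x, y, n):
    # Analytic gradient of the spiral phase n*atan2(y, x) of an ideal TC-n vortex
    r2 = x**2 + y**2
    return np.array([-n * y / r2, n * x / r2])

def circulation(path, n, A=1.0):
    # Discrete line integral of Eq. (2): Cir = A/(2*pi) * sum_k S_k . l_k,
    # with S_k averaged over the two adjacent elements (cf. Figure 1)
    total = 0.0
    K = len(path)
    for k in range(K):
        p = np.asarray(path[k], float)
        q = np.asarray(path[(k + 1) % K], float)
        S = 0.5 * (phase_gradient(*p, n) + phase_gradient(*q, n))
        total += S @ (q - p)
    return A * total / (2 * np.pi)

# Counterclockwise square ring of hypothetical lenslet centers around the singularity
R = 5
path = ([(x, -R) for x in range(-R, R)] + [(R, y) for y in range(-R, R)] +
        [(x, R) for x in range(R, -R, -1)] + [(-R, y) for y in range(R, -R, -1)])
cir = circulation(path, 3)   # close to the true TC of 3
```

For an ideal vortex the discretization error along such a path is small, so the sum recovers the TC to within a few percent.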

The raw data from the SH-WFS measurement consist of a Hartmanngram of multiple focused spots. The five main steps of the GCSM are (1) preprocessing the Hartmanngram, including thresholding and segmentation; (2) calculating the phase slope at the individual lenslet area locations according to the SH-WFS working principle; (3) summing the intensity values of all pixels in each lenslet area to obtain the intensity sum value; (4) extracting a closed path from the intensity sum map; and (5) calculating the circulation, which is the TC value of the OV beam.

In this algorithm, the closed path should be extracted from the intensity sum map rather than from the Hartmanngram itself, because the intensity distribution behind the individual lenslet areas reflects the average distortion, not the intensity distribution of the sub-wavefront incident on the lenslets. Since closed path determination is a crucial step, we focus on identifying the most appropriate closed path determination method in the remainder of this paper.
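Step (3), forming the intensity sum map, can be sketched as follows (Python for illustration; the lenslet size of 10 sensor pixels follows from the hardware described in Section 4, i.e., a 200 μm pitch over 20 μm pixels):

```python
import numpy as np

def intensity_sum_map(hartmanngram, lenslet_px):
    # Sum all pixel values inside each lenslet area (step 3 of the GCSM);
    # lenslet_px is the lenslet pitch in sensor pixels (10 for the SH-WFS
    # used here: 200 um pitch / 20 um pixels)
    H, W = hartmanngram.shape
    h, w = H // lenslet_px, W // lenslet_px
    crop = hartmanngram[:h * lenslet_px, :w * lenslet_px]
    return crop.reshape(h, lenslet_px, w, lenslet_px).sum(axis=(1, 3))
```

Each element of the returned map corresponds to one lenslet, and the closed path is then searched on this coarse grid.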

#### **3. Methods for Closed Path Determination**

Basically, there are two strategies for determining a closed path from a given intensity sum map. One is to adaptively extract the closed path along the ridge of the intensity sum map; we search for ridge elements to ensure that all the measured phase slope data along the closed path are valid. The other is to generate a closed path with a perfectly round shape, whose diameter and center can be adaptively varied within a given range. In this section, we explain the closed path determination methods following each strategy in detail. We introduce three methods that follow the ridge-extraction strategy, namely watershed transformation (WT), MAIC extraction, and the combination of watershed transformation and MAIC extraction (WT-MAIC), as well as two methods that follow the perfectly round circle generation strategy, namely the fixed-center perfectly round circle method (FC-PRCM) and the shifting-center perfectly round circle method (SC-PRCM).

#### *3.1. Watershed Transformation Method*

Watershed transformation (WT) is a technique for extracting ridges from an elevation map. The idea originated in the field of topography and was first introduced as an image processing method by Beucher and Lantuéjoul [25]. Since then, many modifications and improvements to the method have been proposed [26–29]. The core of watershed transformation is to treat the two-dimensional grayscale input image as a topographic map, as shown in Figure 2, with the grayscale value of each point representing the elevation. After this transformation, the image is intuitively divided into several catchment basins, each of which corresponds to a regional minimum in the elevation dimension. The boundaries between different catchment basins are considered the ridges, which are commonly termed watershed lines.

**Figure 2.** A diagram to illustrate the topographic map of an image after performing watershed transformation. The red curves are the watershed lines.

To perform watershed transformation on an image, flooding is the most commonly used strategy, and Meyer's flooding algorithm is the most extensively used implementation [27]. Its core idea is to flood the entire topographic map, which is equivalent to raising the zero-elevation plane over time. As the flooding proceeds, the catchment basin corresponding to the global minimum fills with water first, after which the other catchment basins start to fill one by one. After a certain time, water from different catchment basins would meet, and barriers are built to prevent this merging. The resulting barriers comprise the watershed lines, which segment the input image into different regions.

Specifically, to apply watershed transformation on an intensity sum map to determine a closed path, we first perform preprocessing on the intensity sum map, such as thresholding and filtering [26], and then we perform the watershed transformation using the built-in function 'watershed' of MATLAB [30]. Moreover, considering that we aim to determine the net TC value in an OV beam, we should obtain only one closed path out of the multiple watershed lines. This can be realized by merging small regions into their neighboring large regions until there is only one region.
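We used MATLAB's built-in `watershed`; purely as an illustration of the flooding idea, a minimal Meyer-style priority-flood sketch in Python (the explicit seed markers and the 4-neighborhood are simplifying assumptions) is:

```python
import heapq
import numpy as np

def priority_flood(img, seeds):
    # Minimal Meyer-style flooding: always expand from the lowest-elevation
    # pixel on the frontier, i.e., raise the "water level" over time.
    # `seeds` maps a basin label to one (row, col) marker per regional minimum.
    labels = np.full(img.shape, -1, dtype=int)          # -1 = unvisited
    heap = []
    for lab, (r, c) in seeds.items():
        labels[r, c] = lab
        heapq.heappush(heap, (img[r, c], r, c, lab))
    while heap:
        _, r, c, lab = heapq.heappop(heap)
        for rr, cc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1] \
                    and labels[rr, cc] == -1:
                labels[rr, cc] = lab
                heapq.heappush(heap, (img[rr, cc], rr, cc, lab))
    return labels  # boundaries between labels trace the watershed lines

# Two valleys separated by a ridge at the middle column
elev = np.tile([0, 1, 2, 3, 2, 1, 0], (5, 1))
lab = priority_flood(elev, {1: (2, 0), 2: (2, 6)})
```

Merging small regions and tracing the boundaries of the resulting label map then yields the single closed path required by the GCSM.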

#### *3.2. Maximum Average-Intensity Circle Extraction*

The maximum average-intensity circle (MAIC) method has been proposed to extract a closed path from an intensity sum map for TC determination [24], and is briefly explained here. The basic process is to iteratively search for local peaks in the 2D intensity sum map and to connect these peak elements to form a closed path. The search begins at the global maximum of the intensity sum map. At each step, the element with the maximum intensity among the candidates in the eight neighbors of the current element is selected as the next element.
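The greedy eight-neighbor search itself is detailed in [24]; as a simplified stand-in, the sketch below (Python; the per-sector maximum is an illustrative substitute for the actual search) extracts a closed ridge path from a synthetic donut-shaped intensity sum map:

```python
import numpy as np

def ridge_by_sector(intensity, center, n_sectors=24):
    # Keep the brightest element in each azimuthal sector around `center`;
    # connecting them in angular order yields one closed path on the ridge.
    rows, cols = np.indices(intensity.shape)
    theta = np.arctan2(rows - center[0], cols - center[1])
    sector = ((theta + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    path = []
    for s in range(n_sectors):
        masked = np.where(sector == s, intensity, -np.inf)
        path.append(np.unravel_index(np.argmax(masked), intensity.shape))
    return path

# Synthetic donut-shaped intensity sum map peaking on a ring of radius 8
N = 33
r = np.hypot(*(np.indices((N, N)) - N // 2))
donut = np.exp(-(r - 8.0) ** 2 / 4.0)
path = ridge_by_sector(donut, (N // 2, N // 2))
```

On this clean donut, every selected element lies close to the bright ring, which is exactly the property the MAIC path needs so that only valid phase slopes enter the circulation.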

#### *3.3. Watershed Transformed Maximum Average-Intensity Circle Extraction*

Both the WT and MAIC methods are simple and intuitive ways to obtain a closed path for the circulation calculation, and they perform equally excellently when the OV beam has little to no aberration. However, when the OV beam is severely distorted by aberration, the intensity sum map of the OV beam will have a shape far from a perfect circle, and may even split into several parts, causing both methods to lose their efficacy. To further improve the performance under such severe aberration conditions, we here propose a new method that combines WT and MAIC. In this method, we first perform watershed transformation on the intensity sum map, and then search for the ridge among the elements on the watershed lines based on the MAIC algorithm. With the watershed-transformation output obtained in advance, the influence of off-ridge elements and noise can be significantly reduced, and thereby a more appropriate closed path can be extracted. For simplicity, we named this method the watershed-transformed MAIC method (WT-MAICM).

#### *3.4. Perfectly Round Circle Assignation*

As mentioned above, besides the ridge-extracting strategy, adaptively generating an appropriate perfectly round circle according to the intensity sum profile of an OV beam is also a closed path determination strategy worthy of discussion. In general, the center position and radius are the two essential parameters for generating a perfectly round circle. However, given an intensity sum profile, it is difficult to directly determine the most appropriate perfectly round circle. Consequently, in our previous research, we proposed generating a group of concentric perfect circles and choosing the one for which the measured TC value had the maximum absolute value as the determined perfectly round circle. The center position of the concentric perfect circles was set to the lenslet position nearest the centroid of the intensity sum map, while the lower and upper bounds of the radius variation range were determined by the radii of the inscribed and circumscribed circles of the intensity sum profile [24].
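A minimal sketch of this concentric-circle search follows (Python; the phase slopes come from an analytic, aberration-free TC = 3 vortex, so all radii give nearly the same circulation and the max-|Cir| selection is shown only mechanically):

```python
import numpy as np

def grad(p, n=3):
    # Analytic phase gradient of an ideal TC-n vortex centered at the origin
    x, y = p
    return np.array([-n * y, n * x]) / (x * x + y * y)

def circle_path(radius, n_pts=64):
    # Lenslet-grid points nearest a circle of integer radius (duplicates dropped)
    ang = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
    pts = np.round(np.c_[radius * np.cos(ang), radius * np.sin(ang)])
    path, last = [], None
    for p in pts:
        if last is None or (p != last).any():
            path.append(p)
            last = p
    return path

def circulation(path, n=3):
    # Generalized contour sum (Eq. (2)) with averaged slopes and A = 1
    tot = 0.0
    for k in range(len(path)):
        p, q = path[k], path[(k + 1) % len(path)]
        tot += (0.5 * (grad(p, n) + grad(q, n))) @ (q - p)
    return tot / (2.0 * np.pi)

# FC-PRCM-style search: concentric integer radii, keep the max-|Cir| circle
cirs = [circulation(circle_path(R)) for R in range(3, 8)]
best = max(cirs, key=abs)
```

Under distortion, different radii yield different circulations, and this max-|Cir| criterion is exactly the selection rule whose rationality is examined in Section 5.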

The performance of this generated perfectly round circle method (PRCM) as a closed path determination method for measuring the TC value of a distorted OV beam is generally worse than that of the MAIC method [21,24], mainly because the generated circle is prone to deviate from the OV center and to pass through low-intensity regions containing invalid phase slope data.
Nevertheless, the generated path should be kept to a perfectly round shape agreeing with the annular intensity sum profile and position, and the radius should be forced to be an integer. This is because if we adaptively varied the shape, or precisely calculated the position and radius and determined the phase slopes at non-integer positions by interpolation, the incurred complexity would run counter to our original goal of keeping the calculations simple. Regarding the center position determination, however, the performance may be improved by shifting the center position within a proper region, analogous to the radius variation. Although this dramatically increases the closed path determination time, a tradeoff between TC determination speed and accuracy is acceptable if it yields an improvement in determination performance.

Consequently, in this paper, we propose a new, modified method for the generation of perfectly round circles based on the previously proposed method, where the modification is merely to shift the center position of the concentric perfectly round circles within an advisable region, as is further discussed in Section 5. For the sake of differentiation, we named this method the shifting-center perfectly round circle method (SC-PRCM) and the previously proposed perfectly round circle method the fixed-center perfectly round circle method (FC-PRCM).

#### **4. Experimental Setup**

To verify the performance of these closed path determination methods for the GCSM, we built an experimental setup, as shown in Figure 3 [24]. A collimated laser beam with a wavelength of 632.8 nm passed through an aperture with a diameter of 4 mm and was incident on a liquid crystal on silicon spatial light modulator (LCOS-SLM). The LCOS-SLM was used to transform the incident beam into an optical vortex beam as well as to introduce aberrations into the beam. The beam reflected from the LCOS-SLM was converged by a lens with a focal length of 2 m and split by a beam splitter into two beams. Finally, we used an SH-WFS to record the Hartmanngram and a complementary metal-oxide semiconductor (CMOS) camera to check and record the intensity profile of the OV beam. The LCOS-SLM was set at the front focal plane of the lens, while the SH-WFS and the CMOS camera were both located at the back focal plane of the lens.

**Figure 3.** Schematic diagram of the experimental setup.

The LCOS-SLM (Hamamatsu, X10468-01) was a phase-only modulator consisting of 792 × 600 pixels with a pixel size of 20 × 20 μm [31]. The SH-WFS consisted of two elements: a square-grid lenslet array with a pitch of 200 μm and a focal length of 11 mm, and a high-speed intelligent vision sensor with 512 × 512 pixels and a pixel size of 20 × 20 μm [32]. The SH-WFS was mounted on a mechanical platform that could be moved along the horizontal direction; the movement was precisely controlled by a stepping motor system. The CMOS camera had 2592 × 2048 pixels and a pixel size of 4.8 × 4.8 μm.

#### **5. Results and Discussion**

#### *5.1. Performance Comparison Based on Aberration-Free OV Beams*

To compare the above five closed path determination methods, we first evaluated their performance under the aberration-free OV beam condition. In the experiments, we displayed various computer-generated holograms (CGHs) with a spiral phase structure on the LCOS-SLM to generate OV beams with TC values ranging from ±1 to ±20. To increase the amount of data and reduce the effect of randomness, for each TC value, we repeatedly recorded the Hartmanngrams at 15 different SH-WFS positions by laterally moving the mechanical platform. Hence, we obtained a total of 600 Hartmanngrams (40 TC values and 15 SH-WFS positions). Figure 4 shows an example of the spiral phase pattern displayed on the LCOS-SLM, together with the corresponding Hartmanngram and intensity profile image recorded by the SH-WFS and the CMOS camera, respectively.

**Figure 4.** Example of (**a**) spiral phase pattern, (**b**) Hartmanngram, and (**c**) intensity profile recorded by the complementary metal-oxide semiconductor (CMOS) camera. The bar indicates 1 mm.

As described in Section 2, for each recorded Hartmanngram, we first performed the necessary preprocessing, then summed all the pixel values in each lenslet region to obtain the intensity sum map, and measured the phase slopes at the lenslet positions according to the SH-WFS working principle. After that, we used each of the five studied methods to determine the closed path, and then utilized the generalized contour-sum method to calculate the TC value.

Considering that the theoretical TC value is an integer, for TC measurement, if the absolute difference between the measured TC value and the theoretical TC value is lower than 0.5, the measured TC value will be identical to the theoretical TC value after rounding—we define this as a well-determined TC measurement. Based on this definition, we introduced a parameter, the well-determined TC measurement ratio (WTCMR), which is the percentage of well-determined TC measurements within a given set of TC measurements, to quantitatively compare the performance of the diverse closed path determination methods.
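The WTCMR can be computed in a few lines; the sketch below (Python, with purely illustrative sample values rather than measured data) implements the definition above:

```python
import numpy as np

def wtcmr(measured, theoretical, tol=0.5):
    # Percentage of well-determined TC measurements: those whose absolute
    # error is below tol, so that rounding recovers the true integer TC
    err = np.abs(np.asarray(measured, float) - np.asarray(theoretical, float))
    return 100.0 * np.mean(err < tol)
```

For example, `wtcmr([1.1, 4.8, 10.6], [1, 5, 10])` counts two of the three measurements as well determined, since the third has an absolute error of 0.6.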

The performance evaluation results of the individual closed path determination methods are given in Figures 5 and 6. In Figure 5, each value is the WTCMR of a group of 30 measurements (15 positions for both positive and negative TC values), while in Figure 6, the WTCMR is the statistical average of all 600 measurements. Figure 6 also shows the corresponding processing speed of the individual methods in terms of frames per second, which was measured on an Intel Core i7 computer with a CPU frequency of 3.7 GHz and 16 GB of RAM.

**Figure 5.** Well-determined topological charge (TC) measurement ratio (WTCMR) distributions of the five closed path determination methods when the TC value changes from ±1 to ±20.

As shown in Figures 5 and 6, the WT, MAIC, and WT-MAIC methods performed perfectly, since the WTCMR was shown to be 100% for all tested TC values. However, the WTCMR of the FC-PRCM and the SC-PRCM was not always 100%, even when measuring aberration-free OV beams. Moreover, the performance of these methods deteriorated with the increase of the TC value. The SC-PRCM was the worst method in terms of both the WTCMR and the computing speed.

**Figure 6.** TC determination performances of the five closed path determination methods in terms of the WTCMR (bars) and speed in frames per second (fps) (points). Each bar or point is the statistical average of all 600 measurements under the corresponding closed path determination method.

Considering that the TC determination process is the same apart from the closed path being determined by the diverse closed path determination methods, the differences in performance were exclusively due to the differences in the determined closed paths. To concretely demonstrate these differences, as well as their origin and why this factor significantly changes the TC determination performance, we present some examples in Figure 7. In this figure, the columns from left to right show the results using the WTM, MAICM, WT-MAICM, FC-PRCM, and SC-PRCM, and the rows from top to bottom correspond to input OV beams with TC values of 10 (top row), 15 (middle row), and 20 (bottom row). In Figure 7, each row has the same intensity sum map, and the red circle superimposed on the intensity sum map is the closed path extracted by the individual method.

From the figure, it can be seen that the WTM, MAICM, and WT-MAICM can properly extract closed paths passing through the ridge elements, thus obtaining precise TC values. On the other hand, for the FC-PRCM and SC-PRCM, the closed path might pass through the low-intensity-sum regions where the measured phase slopes are invalid. When the TC value of the OV beam varied from 10 to 15 to 20, the effects of the nonuniformity of the intensity sum map along the azimuthal direction increased. As a result, the closed path deviated further from the ridge elements and passed through the low-intensity regions, generating a large error in the measured TC values. As for the SC-PRCM, its performance deteriorated even more than that of the FC-PRCM. This reveals that simply shifting the center position of the concentric perfectly round circles does not improve, but rather reduces, the performance. We believe this is because shifting the center position of the perfectly round circles may move the centers even further from the real position of the vortex, making them more liable to be affected by the invalid slope data in the low-intensity regions. Considering that we chose the generated perfectly round circle with the maximum absolute calculated TC value as the determined closed path, the SC-PRCM mostly output a measured TC value larger than the theoretical value. Moreover, in combination with the poor performance of the FC-PRCM, we found that the unified closed path determination criterion in the PRCM strategy, namely selecting the generated perfectly round circle with the maximum absolute calculated TC value as the determined closed path, is not entirely rational, although it is simple.

The above finding was also supported by analyzing the errors of the measured TC values. Figure 8 shows the histogram distributions of the absolute error (AE) of the TC measurements, wherein the *x*-axis is the interval of the AE, which is the absolute difference between the measured TC value and the theoretical TC value, and the *y*-axis is the frequency over the 600 measurements. From the distributions, we can see the striking difference between the performances of the WTM, MAICM, and WT-MAICM and those of the FC-PRCM and SC-PRCM. Almost all of the AEs of the former three methods are concentrated within the interval of 0 to 0.2; on the contrary, the AEs are spread more uniformly, with about one-sixth exceeding 0.5, when using the FC-PRCM, while the majority exceed 0.5 when using the SC-PRCM. This phenomenon profoundly influences the correct rate of TC determination when we shrink the confidence interval of the well-determined TC measurements. For example, when using AE < 0.3 as the criterion for a well-determined TC measurement, the WTCMR values of the WTM, MAICM, and WT-MAICM are still over 99%; however, those of the FC-PRCM and SC-PRCM drop to 63.3% and 19.7%, respectively.

**Figure 7.** Examples showing the differences in the closed paths (the red circles) and the TC values determined by (**a**) the watershed transformation method (WTM), (**b**) the maximum average-intensity circle method (MAICM), (**c**) the WT-MAICM, (**d**) the fixed-center perfectly round circle method (FC-PRCM), and (**e**) the shifting-center perfectly round circle method (SC-PRCM). The theoretical TC values are 10 (top row), 15 (middle row), and 20 (bottom row). In each image, the background is the intensity sum map, and the red circle is the closed path determined by the individual method. The corresponding measured TC value is labeled in each image.

**Figure 8.** Absolute error (AE) histograms of 600 TC measurements separately using WTM, MAICM, WT-MAICM, FC-PRCM, and SC-PRCM. The number over each bar indicates the number of the measurements whose AE values are within the corresponding interval.

Based on the previous discussions, we concluded that the FC-PRCM and SC-PRCM are not appropriate closed path determination methods, and that generating a perfectly round circle is neither a plausible nor a practical closed path determination strategy. Therefore, only the other three closed path determination methods were included in the following comparison.

#### *5.2. Performance Comparison Based on Distorted OV Beams*

Many practical applications of OV beams, e.g., free-space optical communication and optical remote metrology, require the determination of the TC value of the OV beam after its propagation over a certain distance in the atmosphere. Consequently, it is vital to evaluate and compare these methods for OV beams distorted by atmospheric turbulence. Generally, a turbulent atmosphere can be treated as an inhomogeneous refractive index medium, characterized by the structure parameter $C_n^2$ [33], and its cumulative impact on a light beam propagating through it can be modeled as phase screens [34]. Supposing the turbulence satisfies the Kolmogorov model, the phase screen can be simulated through Zernike polynomials [35]. The coefficients of the Zernike polynomials are related to the normalized correlation length *r*<sub>0</sub>/*D*, where *r*<sub>0</sub> is the Fried parameter and *D* is the beam size [36].
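As an illustrative sketch (Python), a distorted spiral phase pattern can be generated by superposing an SPP with a toy Zernike screen; scaling each random coefficient amplitude by (*D*/*r*<sub>0</sub>)<sup>5/6</sup> is a deliberate simplification, and in practice the full Noll covariances [35,36] should be used:

```python
import numpy as np

def spiral_phase(N, tc):
    # Spiral phase pattern of topological charge tc
    y, x = np.indices((N, N)) - N // 2
    return tc * np.arctan2(y, x)

def zernike_screen(N, r0_over_D, rng, n_modes=5):
    # Toy turbulence screen: tilt, defocus, and astigmatism shapes with
    # random coefficients whose strength grows as (D/r0)^(5/6); the proper
    # Noll covariances are omitted for brevity (simplifying assumption)
    y, x = (np.indices((N, N)) - N // 2) / (N // 2)
    r, th = np.hypot(x, y), np.arctan2(y, x)
    modes = [r * np.cos(th), r * np.sin(th), 2 * r**2 - 1,
             r**2 * np.cos(2 * th), r**2 * np.sin(2 * th)]
    coef = rng.normal(0.0, (1.0 / r0_over_D) ** (5.0 / 6.0), n_modes)
    return sum(c * m for c, m in zip(coef, modes[:n_modes]))

# Superpose a TC = 5 spiral phase with one random screen (r0/D = 0.25)
rng = np.random.default_rng(0)
pattern = np.mod(spiral_phase(64, 5) + zernike_screen(64, 0.25, rng), 2 * np.pi)
```

Smaller *r*<sub>0</sub>/*D* values yield stronger screens, mirroring the turbulence-strength ordering used in the experiments.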

In the experiments, considering that the beam size used in our optical system was 4 mm, we chose *r*<sub>0</sub> to be 1, 2, 3, 4, and 6 mm, so that the normalized correlation length *r*<sub>0</sub>/*D* was 0.25, 0.5, 0.75, 1, and 1.5. Under each *r*<sub>0</sub>/*D* value, we performed 50 random realizations of phase screens, which were then separately superposed with spiral phase patterns (SPPs) whose TC values were 1, 5, and 10. Therefore, we generated 750 phase patterns altogether (three TC values, five *r*<sub>0</sub>/*D* values, and 50 phase screens). Afterwards, we displayed the phase patterns one by one on the LCOS-SLM and recorded the corresponding Hartmanngrams. The SH-WFS was not moved here, because we had already generated 50 diverse phase patterns under each condition for repeated testing. For each Hartmanngram, we used the WTM, MAICM, and WT-MAICM in turn as the closed path determination method, and determined the TC value by the generalized contour-sum method.

The experimental results evaluated by the WTCMR are given in Figure 9, where Figure 9a–c corresponds to a set of 50 measurements for the individual TC, and Figure 9d is the mean of these measurements.

**Figure 9.** TC determination performance of three methods under different atmospheric turbulence conditions, where (**a**) is under *TC* = 1, (**b**) is under *TC* = 5, (**c**) is under *TC* = 10, and (**d**) is the average of (**a**), (**b**), and (**c**).

Based on the experimental results, we found that the TC measurement accuracy generally increased with *r*<sub>0</sub>/*D*, i.e., as the atmospheric turbulence became weaker. Moreover, when changing the closed path determination method from the WTM to the MAICM to the WT-MAICM, the TC determination accuracy improved, especially when the atmospheric turbulence was severe. Taking a WTCMR above 0.9 as acceptable, we found that the *r*<sub>0</sub>/*D* limits for the WTM, MAICM, and WT-MAICM were approximately 0.75–1, 0.5–0.75, and 0.25–0.5, respectively.

Figure 10 is an example specifically illustrating why the WT-MAICM performs better than the WTM and MAICM. Here, the OV beam to be measured has a TC value of 10 and a turbulence strength parameter *r*<sub>0</sub>/*D* of 0.25. The top, middle, and bottom rows show the closed path determination processes and corresponding results when utilizing the WTM, MAICM, and WT-MAICM, respectively. From the significantly different determined closed paths, we can see that when the turbulence is severe, the distorted OV beam can have extremely nonuniform regions. These nonuniform regions can trap the MAICM search in a local loop (middle row), or cause the WTM-determined closed path to split into more than one part (top row), which conflicts with the aim of extracting only one closed path; these results eventually deteriorate the performance of the WTM and MAICM.

**Figure 10.** Closed path determination processes and results using the WTM (top row), MAICM (middle row), and WT-MAICM (bottom row).

#### **6. Conclusions**

In this paper, we presented an experimental comparison of five closed path determination methods for use with the GCSM to detect the TC of an OV beam. These five methods are the previously proposed MAICM and FC-PRCM and three newly proposed methods: the WTM, WT-MAICM, and SC-PRCM. They follow two strategies: the WTM, MAICM, and WT-MAICM belong to the ridge extraction strategy, while the FC-PRCM and SC-PRCM are derived from the PRC assignation strategy. The code for the algorithms is available from the authors by email. The methods were tested with an optical setup that used an LCOS-SLM as an OV beam and aberration generator, and an SH-WFS located at the far-field plane to measure the OV beams. Two types of evaluation experiments were performed: one under the condition that the OV beam had hardly any aberration, and the other under the condition that the OV beam was distorted by simulated atmospheric turbulence. The experimental results indicate that the ridge extraction methods outperform those based on PRC assignation in terms of both the well-determined detection rate and the processing speed.
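As background, the contour-sum idea underlying the GCSM can be sketched as a discrete circulation of the measured phase slopes around the determined closed path, divided by 2π. The nearest-pixel path and trapezoid rule below are simplifications for a synthetic vortex; the actual GCSM operates on SH-WFS subaperture data along the adaptively determined path.

```python
import numpy as np

def contour_sum_tc(slope_x, slope_y, path):
    """Estimate the TC as the closed-path circulation of the measured
    phase slopes divided by 2*pi (contour-sum idea; nearest-pixel,
    trapezoid-rule simplification of the GCSM)."""
    total = 0.0
    n = len(path)
    for i in range(n):
        y0, x0 = path[i]
        y1, x1 = path[(i + 1) % n]
        # average the slopes at the two endpoints, dot with the step (dy, dx)
        sy = 0.5 * (slope_y[y0, x0] + slope_y[y1, x1])
        sx = 0.5 * (slope_x[y0, x0] + slope_x[y1, x1])
        total += sy * (y1 - y0) + sx * (x1 - x0)
    return total / (2.0 * np.pi)

# Synthetic vortex of TC = 3: phase = 3 * atan2(Y, X), whose gradient is
# slope_x = -3*Y/r^2, slope_y = 3*X/r^2 (singularity avoided by the 0.5 offset)
yy, xx = np.mgrid[0:64, 0:64]
Y, X = yy - 31.5, xx - 31.5
r2 = X ** 2 + Y ** 2
slope_x = -3.0 * Y / r2
slope_y = 3.0 * X / r2
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
path = [(int(round(31.5 + 20.0 * np.sin(t))), int(round(31.5 + 20.0 * np.cos(t))))
        for t in theta]
tc_est = contour_sum_tc(slope_x, slope_y, path)
```

Because the circulation of a vortex phase is path independent as long as the loop encircles the singularity once, the choice of closed path only matters once noise, low intensity, or invalid slope data enter the sum, which is exactly the situation studied above.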

Under the condition of an aberration-free OV beam, the WTM, MAICM, and WT-MAICM performed excellently, achieving WTCMR values of 100%. In contrast, the WTCMR values of the FC-PRCM and SC-PRCM were not always 100%, and their performance decreased as the TC value increased. Moreover, the SC-PRCM was found to be the worst method in terms of both the WTCMR and the processing speed. Several factors explain the poor performance of the PRC assignation strategy, but the major ones are that the generated circle tends to deviate from the OV center and to pass through regions with low intensity and invalid phase slope data, which are caused by nonuniformity of the intensity sum map along the azimuthal direction. The performance of the SC-PRCM was worse than that of the FC-PRCM, indicating that simply shifting the center of the PRC is not a suitable solution and that the maximum absolute TC value hunting criterion in the PRC assignation strategy lacks rationality.

In the case of measuring distorted OV beams, the WTM, MAICM, and WT-MAICM show clear differences in terms of the WTCMR, especially when the turbulence strength is high. Among these three methods, the WT-MAICM shows the strongest robustness against distortion, while the WTM was found to be the weakest. The limits of *r*0/*D* (normalized correlation length) required to achieve WTCMR > 90% were around 0.75–1, 0.5–0.75, and 0.25–0.5 for the WTM, MAICM, and WT-MAICM, respectively. Overall, these results reveal that adaptively determining the closed path is necessary for the GCSM to detect the TC from a distorted OV beam.

**Author Contributions:** Conceptualization, D.W. and H.H.; funding acquisition, H.L.; methodology, D.W. and H.H.; project administration, H.H.; resources, H.T.; supervision, H.L. and H.T.; validation, D.W. and H.H.; writing—original draft, D.W.; writing—review and editing, D.W. and H.H.

**Funding:** Shenzhen Innovation Funding (No: JCYJ20170818164343304, JCYJ20170816172431715); National Natural Science Foundation of China (No: U1809204, 61525106, 61427807, 61701436); National Key Technology Research and Development Program of China (No: 2017YFE0104000, 2016YFC1300302).

**Acknowledgments:** We gratefully acknowledge A. Hiruma and T. Hara for their support and encouragement throughout this study. Wang thanks the exchange program between Hamamatsu Photonics K.K. and Zhejiang University. The support from the above funding organizations is also gratefully acknowledged.

**Conflicts of Interest:** The authors declare no conflict of interest.

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

### *Article* **Non-Contact Dermatoscope with Ultra-Bright Light Source and Liquid Lens-Based Autofocus Function**

**Dierk Fricke 1,\*, Evgeniia Denker 2, Annice Heratizadeh 2, Thomas Werfel 2, Merve Wollweber <sup>1</sup> and Bernhard Roth 1,3**


Received: 29 March 2019; Accepted: 23 May 2019; Published: 28 May 2019

#### **Featured Application: This work reports on a new non-contact dermatoscope targeting at improved examination and documentation of skin diseases through enhanced functionality compared to conventional contact-based systems.**

**Abstract:** Dermatoscopes are routinely used in skin cancer screening but are rarely employed for the diagnosis of other skin conditions. Broader application is promising from a diagnostic point of view as biopsies for differential diagnosis may be avoided but it requires non-contact devices allowing a comparably large field of view that are not commercially available today. Autofocus and color reproducibility are specific challenges for the development of dermatoscopy for application beyond cancer screening. We present a prototype for such a system including solutions for autofocus and color reproducibility independent of ambient lighting. System performance includes sufficiently high feature resolution of up to 30 μm and feature size scaling fulfilling the requirements to apply the device in regular skin cancer screening.

**Keywords:** dermatoscopy; skin screening; biomedical imaging

#### **1. Introduction**

A dermatoscope is the standard instrument for a first examination of skin conditions. The whole field of dermatoscopy started with the use of epiluminescence microscopes for this purpose [1–3]. State-of-the-art dermatoscopy devices today are mostly contact based and often include a built-in camera to capture digital images for documentation. Compared to a non-contact setup, a contact-based dermatoscope exhibits several disadvantages. The skin contact may distort the skin geometry, which makes it more difficult to compare pictures taken at different examinations. The contact can also suppress perfusion, hampering the detection of small vessel structures. In the case of infected lesions, the contact can moreover be painful and represents a larger invasion of the privacy of patients. A non-contact device circumvents all these problems. In addition, it allows a larger field of view, and the concept is much more compatible with automation approaches. Challenges of a non-contact design are the implementation of a variable focus and the realization of a lighting situation that is independent of the surrounding light.

In order to exploit the advantages of the non-contact design with regard to automation, some challenges need to be addressed. For example, to obtain sharp images, focus control is required; the preferred solution for easy handling or automation is an autofocus system. For documentation and post-processing, the images should also be digitally available. Whereas digital image acquisition is already state of the art in contact-based devices [4,5], documentation is required to compare pictures from different examinations in order to identify changes in the size or color of relevant skin areas. The size of a nevus is, for example, one criterion of the ABCDE (**A**symmetry, **B**order, **C**olor, **D**iameter, **E**volving) rule of dermatoscopy to visually identify malignant melanoma [6,7]. Therefore, it is necessary to record comparable images in non-contact mode without distortion of the skin geometry, as skin contact with standard devices can significantly distort the geometry and suppress blood perfusion, such that correct assessment of skin lesions is often difficult [8]. As hemoglobin is, besides melanin, one of the two dominant dye molecules in the skin, the suppression of perfusion changes the color of the skin area under study. The comparison of skin colors is not only important for skin cancer screening but also of interest, for example, for inflammatory skin diseases (e.g., erythema in the Psoriasis Area and Severity Index (PASI)). To enable calibration of the camera sensor such that colors can be identified and compared between images taken at different examinations and possibly under different ambient light situations, a bright light source is needed.

A dermatoscope provides 2D information about the light intensity and color of the imaged location, i.e., the skin area. As this is the same information generated by the human eye, the resulting data are comparatively intuitive for dermatologists. In addition, cross-polarization can be used to obtain information from deeper layers of the skin (see Section 2.3) [9]. Other techniques such as optical coherence tomography (OCT) [10–12] or high-frequency ultrasound [11,13], which normally generate 2D depth images, are based on the optical thickness (for OCT) or on the propagation time of the acoustic waves in the medium (for ultrasound), respectively. For these modalities, 3D information is generated by scanning a 2D area. The resulting data, however, do not provide information about the color of the tissue, and special training is required for their correct interpretation. Also, compared to dermatoscopes, OCT and high-frequency ultrasound devices are much more expensive. Thus, there is a need for simple, easy-to-operate, and yet accurate dermatoscope systems capable of providing the functionality required for proper skin examination without distortion of the geometry, i.e., in non-contact mode.

In this work, we present a novel non-contact dermatoscope featuring properties advantageous for more reliable and comparable skin examinations in the future. The light source of the developed prototype is an ultra-bright white light-emitting diode (LED), which allows full control of the lighting situation, independent of the surrounding light. Also, by realizing a calibrated display, it is possible to show the natural colors of the skin area. Furthermore, the non-contact approach enables a much larger field of view. While pigmented nevi examined in melanoma screening are often 0.5–2 cm wide, inflammatory lesions are generally several centimeters wide; their examination therefore requires a much larger field of view than a device designed for skin cancer screening only. The prototype described here has a field of view of about 17 × 13 mm² and can, in the future, be combined with a second camera to record images of even larger skin areas.

#### *1.1. State-of-the-Art Commercially Available Non-Contact Systems*

A few non-contact devices for skin examination have already been reported. They can be categorized into (i) non-contact dermatoscopes, which have a high magnification (see, for example, [4,5]), and (ii) non-contact skin screening devices for overview images of the skin (see, for example, [14]).

So far, different designs of non-contact dermatoscopes have been used. One type is realized as a handheld magnifying glass, while another is configured as an adapter for mobile phones; the latter uses the camera of the mobile phone to take the images. Other devices rely on built-in cameras and illumination sources. However, all these systems operate at short working distances of a few centimeters. An important aspect in this context is (auto)focusing, which has to be considered for all non-contact devices. For example, non-contact devices with more than 2 cm working distance use spacers to avoid the need for autofocusing. Furthermore, the illumination conditions are more difficult to control for non-contact devices, as environmental influences are more relevant. Thus, sufficiently bright light sources are needed to ensure reproducible illumination conditions [4,5].

Further concepts rely on imaging systems at large working distances, which provide only overview images of the skin with lower resolution compared to dermatoscopes. They usually make use of standard cameras with passive autofocusing techniques that employ phase or contrast detection. The cameras are mounted on a movable holder so that images of the whole body can be taken. As room light is used for illumination, it is difficult to compare images taken under varying lighting conditions [14].

#### *1.2. Prototype of the New Non-Contact Device*

The main advantage of the non-contact prototype presented in this work is its large working distance combined with the high resolution level of contact-type dermatoscopes. At present, the prototype is optimized for a working distance of 45 cm ± 3 cm. This is realized with a liquid lens-based autofocus, which has not previously been used for this purpose. In principle, other working distances are also possible, requiring only marginal changes to the current design, as further discussed in Section 2.

As an illumination source, an ultra-bright LED light source with full polarization control is used. This choice makes the device independent of the surrounding light situation and allows reproducible color measurements. Also, the colors of different skin sites and lesions can more easily be compared between different images. Such analysis is important, for example, for skin cancer screening, as multiple colors and color changes can be indicative of melanoma [6]. Due to the design on a swivel arm, the ultra-bright light source can easily be powered. By controlling the polarization state of the measured light, the physician can increase or decrease the contrast of the surface morphology as desired. The presented autofocus function is fast compared to iterative autofocus solutions based on contrast analysis of the captured image. Also, the built-in infrared distance sensor can provide information about the scale of the recorded image. The new non-contact system allows for automated imaging functionality, so the physician is able to map the skin without manual activity.

First preclinical studies with an earlier version of our non-contact dermatoscope indicated the potential benefit of the system [8]. For example, in the investigation of inflammatory skin diseases such as lichen ruber planus and psoriasis, subtle details of the lesions could be visualized and a natural color appearance ensured. Even with this early version of the system, structural changes such as hypergranulosis [15] were visible for lichen ruber planus. In psoriasis, variations of the capillary vessels could be seen: for this inflammatory skin disease, characteristic blood vessels open in the upper skin, which are normally seen in the histopathology of the diseased skin. With the non-contact prototype, it was possible to observe these vessels as round structures in the image. This success could lead to a reduction of the biopsies necessary for differential diagnosis. In the future, the newly designed setup described here could be part of an automated skin scanner in which the patient is screened at a first examination stage without the need for a dermatologist to be present. However, the system described in [8] did not feature an autofocus function. Also, it was mounted on a stable substrate board, so that the patients had to move in front of it. Thus, not all regions of interest on a patient's body could be reached, and handling by the dermatologist was still limited. Also, image acquisition took longer, so that fewer patients could be examined. Using the new system presented in this work, even lesions on hard-to-reach areas of the body can be examined with high resolution.

#### **2. Materials and Methods**

Figure 1 shows the non-contact dermatoscope developed in this work. The setup consists of a camera with liquid lens-based autofocus and an illumination unit. An infrared laser distance sensor is attached beside the illumination unit. The camera is connected to a computer for control of the setup and image post-processing. A movable polarizer is mounted in front of the liquid lens. The illumination unit consists of an ultra-bright white LED chip and a reflector that redirects the light emitted by the LED and illuminates the region of interest homogeneously. A fixed polarizer is mounted in front of the reflector. In the subsections below, the different parts of the prototype are described in detail.

**Figure 1.** Schematic drawing (**left**) and photograph of the prototype of the non-contact dermatoscope. The imaging unit consists of the camera itself (**1**), a liquid lens (**2**) in front of the camera lenses, and a rotatable polarizer (**3**). An infrared laser-based distance measurement device (**4**) beside the illumination unit is used to control the shape of the liquid lens and, thus, the focal length. The illumination unit consists of an ultra-bright white LED source mounted on a radiator and a reflector. The light is polarized by a polarizer mounted in front of the reflector. This setup is able to illuminate a target (**5**), measure the distance to this target, and take images for different polarization settings.

The setup is mounted on an aluminum plate that is connected to the swivel arm mounted on a table that makes it possible to adjust its position in the room and with respect to the patients. The camera, the polarizers and the liquid lens are connected to a computer, which controls the system and performs digital image processing. Component costs for the presented non-contact dermatoscope are approximately 4000 €.

#### *2.1. Camera with Liquid Lens-Based Autofocus*

We integrated a liquid lens-based autofocus as a novel solution to the variable focus problem in a freely movable non-contact dermatoscope. A liquid lens has the advantage of fast tunability of the focal length. It consists of a reservoir filled with an opto-fluid, delimited by a transparent membrane of silicone or thin glass. The reservoir is enclosed by a piezo ring, which increases the pressure in the reservoir when an electrical voltage is applied to it. The pressure leads to different curvatures of the membrane, which results in different focal lengths of the lens. The schematic setup of the camera is shown in Figure 1. The radius of curvature of the lens is controlled via the distance sensor, and the system is adjusted so that the skin area of interest is always in focus. This is ensured by actively measuring the distance between the target and the camera, allowing the system to follow the target even if the latter is not illuminated or does not show visible sharp edges.

The camera (FL3-GE-28S4C-C, FLIR Integrated Imaging Solutions, Inc., Richmond, BC, Canada) has a resolution of 1928 × 1448 pixels and a maximum image rate of 15 frames per second (FPS). It is equipped with an ICX687 CCD color sensor by Sony (Minato, Tokyo, Japan), which has a sensor size of 1/1.8″ and a pixel size of 3.69 μm. To record a dermatoscopic view of the skin, a zoom lens (NMV-75M1, Navitar, Rochester, NY, USA) with a focal length of 75 mm is mounted in front of the camera. In addition, the liquid lens (EL-16-40-TC, Optotune AG, Dietikon, Switzerland) is mounted in front of the zoom objective with a self-made adapter. It is an electrically tunable large-aperture lens (with a clear aperture of 16 mm) and is controlled by the Optotune Lens Driver 4i via the "Lens Driver Controller" software. In order to realize an autofocus system, the tunable lens was combined with a distance sensor (DT35-B15851, Sick AG, Waldkirch, Germany). The lens driver can process an analog input signal from the distance sensor to set the correct focus for a sharp image of the target. The distance sensor is based on an infrared laser operating at 827 nm. It has an accuracy of 0.5 mm in the relevant range below 1 m, depending on the reflectivity of the target and the measurement speed that is set. This setup realizes a fast autofocus, as it does not contain moving parts. It is calibrated with the lens driver controller software, in which a look-up table can be configured that relates every measured distance to a specific voltage for the liquid lens. The lens needs around 5 ms to respond and around 25 ms to settle to the new focal length; the exact settling time depends on the voltage difference applied to the lens. The distance sensor has a time delay of 6.5 ms between the measured event and the output signal in the "fast" mode. This results in a total delay from the event to the settling of the lens of approximately 31.5 ms and allows the focus to be adjusted approximately 30 times per second, which is twice the frame rate of the camera. As patients usually move only slowly and slightly during dermoscopy, the autofocus is able to keep the target in focus at all times.
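The quoted timing figures combine into the autofocus update rate as a simple arithmetic check:

```python
# Autofocus timing budget from the figures quoted in the text
sensor_delay_ms = 6.5   # distance-sensor output delay in "fast" mode
lens_settle_ms = 25.0   # liquid-lens settling time

total_ms = sensor_delay_ms + lens_settle_ms   # 31.5 ms event-to-settled
updates_per_s = 1000.0 / total_ms             # ~31.7 focus updates per second

camera_fps = 15
headroom = updates_per_s / camera_fps         # ~2.1x the camera frame rate
```

Since the focus can settle about twice per camera frame, the lens is never the bottleneck of the imaging chain.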

As the camera and the distance sensor are not coaxially aligned, the functional distance of the setup is limited; the available focus distance is 45 ± 3 cm. If the distance sensor were arranged coaxially to the camera, for example, by means of a dichroic mirror, it would be possible to focus over a much larger distance range. A wider focal range could be realized in combination with an illumination unit able to homogeneously illuminate a larger area than the current version at the same light intensity. Though the liquid lens used is one of the largest commercially available at present, its aperture is much smaller than the aperture of the camera lens. This makes it necessary to stop down the camera aperture to avoid strong aberrations from the edge of the liquid lens. A small aperture also leads to a relatively low light exposure of the camera sensor, which needs to be compensated by longer exposure times. In dermatoscopy, exposure times are limited by the natural movement of the patient, which otherwise causes blurring and other movement artifacts. Due to the ultra-bright LED illumination, acceptable exposure times of 75 ms can be achieved.

#### *2.2. Ultra-Bright LED White Light Source*

For illumination, an ultra-bright phosphor-based white LED light source (CBT-90 White LED, Luminus Inc., Sunnyvale, CA, USA) is employed. The phosphor converts blue light from the LED chip into the yellow spectral range, and the emitted mixture exhibits a white spectrum, which is generally more continuous than that of a mixture of red, green, and blue (RGB) LED chips.

The source has a color rendering index (CRI) of 76 (for comparison, lighting in surgical rooms requires a CRI > 85 [16]). In general, the closer the CRI of a light source is to 100, the better the source renders the colors of real-world objects, i.e., skin lesions in our case; this then corresponds to the color perception under daylight.

The LED is mounted on a self-made reflector that was simulated and designed with OpticStudio (Zemax LLC, Kirkland, WA, USA), aiming at homogeneous illumination of a target at the highest possible intensity at the distance of interest (ca. 45 cm from the liquid lens). The bright illumination has the advantage that the illumination of the target is well defined and little influenced by the surrounding light. This is important for the color adjustment of the camera, which is of interest because the color of a lesion is a relevant parameter in dermatoscopy; for example, it is taken into account in the ABCDE rule [6,7]. It is also of interest to evaluate the temporal evolution of a lesion. Furthermore, in the field of inflammatory skin diseases, different color variations need to be identified, e.g., shades of red. Another useful application could be the color identification of hematoma, allowing the age of a patient's injuries to be estimated [17,18]. For the detection of color changes of lesions, the comparability of images taken at different examination sessions and times must be ensured. Different disease stages could also be assigned by evaluating the color (e.g., redness in the PASI as well as in the Eczema Area and Severity Index (EASI)). To fulfill these requirements, and in particular to allow a natural color representation comparable to the perception of a physician, color calibration is important. This ultimately also increases the acceptance of the system by dermatologists.

#### *2.3. Full Polarization Control*

The concept of using polarized light for skin examination is well known in the literature [19–21]. There are two polarizers in the setup presented in this work: one in front of the illumination unit and one in front of the liquid lens.

The polarizer in front of the camera is mounted on a rotation stage such that the operator can switch between cross-polarization and parallel polarization by means of custom software. In cross-polarization mode, the two polarizers are oriented at 90° to each other. Because the directly reflected light largely retains its polarization, it is filtered out, and only light that has been scattered in the skin is detected. If the polarizers are oriented parallel to each other, the effect is the opposite: the scattered light is suppressed and the directly reflected light is detected. An image taken with the parallel setting thus provides information about the surface topography and structure, while a cross-polarization image provides information from deeper skin layers. Due to the rotation stage, any orientation between crossed and parallel can also be set. In Figure 2, images of a scar taken with parallel and cross-polarization are shown. In the image taken with the parallel setting, the facets of the skin on the scar appear larger than those of the surrounding skin areas. The cross-polarization image shows a darker reddened area around the scar, which is not clearly visible in the other image; the scar itself appears brighter and whiter.

**Figure 2.** Image of a scar with parallel polarization (**left**) and cross-polarization (**right**). Due to the movement of the patient, the images do not show exactly the same part of the skin. The imaged area contains healthy skin and scar tissue. The scar tissue can be recognized by the larger facets in the left image and by the lighter color in the right image.
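The difference between the parallel and crossed settings can be illustrated with a toy two-component model: specular, polarization-preserving light follows Malus's law at the analyzer, while multiply scattered light is treated as fully depolarized and contributes half its intensity at any analyzer angle. Real skin only partially depolarizes, so the function and numbers below are purely illustrative.

```python
import math

def detected_intensity(specular, diffuse, analyzer_angle_deg):
    """Toy model: specular light obeys Malus's law cos^2(theta); fully
    depolarized scattered light passes at half intensity regardless of
    the analyzer orientation."""
    theta = math.radians(analyzer_angle_deg)
    return specular * math.cos(theta) ** 2 + 0.5 * diffuse

parallel = detected_intensity(1.0, 0.4, 0)    # surface reflection dominates
crossed = detected_intensity(1.0, 0.4, 90)    # only the scattered component
```

At 90° the specular term vanishes, which is why the crossed image reveals subsurface features such as the reddened area around the scar.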

#### *2.4. Computer Control and Digital Post Processing*

The focal length of the tunable lens is controlled by the Optotune lens driver controller software. The autofocus is calibrated using a look-up table in which an input voltage from the distance sensor is related to an output voltage for the liquid lens; the software computes the resulting linear fit function automatically. Image capture, polarization control, and image post-processing are performed with self-developed software based on LabVIEW (National Instruments, Austin, TX, USA). This software includes a basic database structure for the pictures.
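The look-up-table calibration amounts to a linear fit between the two voltages, which can be sketched as follows; the calibration pairs are hypothetical values for illustration only, not measured data from the prototype.

```python
import numpy as np

# Hypothetical calibration pairs: distance-sensor output voltage vs. the
# liquid-lens control voltage that gave a sharp image at that distance.
sensor_v = np.array([4.0, 4.5, 5.0, 5.5, 6.0])
lens_v = np.array([1.10, 1.35, 1.60, 1.85, 2.10])

# Linear fit, as computed automatically by the lens driver controller software
slope, intercept = np.polyfit(sensor_v, lens_v, 1)

def lens_voltage(sensor_voltage):
    """Map a distance-sensor reading to a lens control voltage."""
    return slope * sensor_voltage + intercept
```

Once the fit is stored, every distance reading is converted to a lens voltage without any iterative focusing step, which is what makes the autofocus fast.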

#### **3. Results**

#### *3.1. Camera Parameters*

Relevant camera parameters, namely focus distance, resolution, depth of field (DOF), and image scale, were measured with and without the liquid lens included in the setup to compare the system performance (detailed below).

#### 3.1.1. Focus Distance

Without the liquid lens, the focus distance of the optical system is limited to a 40 cm range, extending from approximately 43 cm from the end of the camera lenses to approximately 83 cm along the optical axis.

As mentioned, the current alignment of the components ensures that the laser spot of the distance sensor lies in the area imaged by the camera within a distance range of 45 cm ± 3 cm. Ideally, the sensor aims at the center of the region of interest on the skin, which is also supposed to be the center of the illuminated spot. As the camera and the distance sensor are not on axis, this is only true for a certain distance, which was set to 45 cm. In the range of ±3 cm around this point, the laser spot of the distance sensor is still in the field of view of the camera, so that its signal correlates with the image signal. Outside this range, the autofocus works only for small sample curvatures within the depth of field. In the future, the laser could be co-aligned with the camera; in such a setup, the autofocus range would correspond to the possible focus range of the combination of the camera lenses and the liquid lens. With our setup, it was possible to focus on objects at distances from approximately 17 cm from the liquid lens up to at least 8 m (limited by the laboratory size). In that configuration, only the alignment of the camera system and the illumination unit would limit the range of operation; however, the resolution decreases and the imaged area increases with distance. This effect has to be taken into account in order to specify a reasonable working range.

#### 3.1.2. Resolution

In consultation with dermatologists from the Hannover Medical School (MHH), the goal was to be able to resolve 30-μm large structures on or within the skin. This ensures that all blood vessels that could be of diagnostic relevance can be resolved. A 1951 USAF resolution test chart was used to determine the maximum resolution of the prototype (see Figure 3). The test chart was placed at a distance of 45 cm from the liquid lens or the camera lens, respectively. Figure 3a shows an image taken with the described prototype; a magnified view of groups four and five is shown next to the image. Figure 3b shows an image taken under the same conditions but without the liquid lens. Due to the measurement process, the images are mirrored. In the setup with the liquid lens, element 2 of group 4 can still be resolved. This corresponds to 17.96 line pairs per millimeter, i.e., a resolution of 27.87 μm. In the image taken without the liquid lens, element 4 of group 4 can still be resolved; this element contains 22.63 line pairs per millimeter and corresponds to a resolution of 22.1 μm. The slightly better result of the system without the liquid lens cannot easily be transferred to an in vivo measurement, since in the latter case the autofocus could compensate for blurring caused by transversal movements of the patient during the measurement. This result shows that the setup with the liquid lens still provides sufficient resolution for measurements on skin together with improved handling.
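The quoted figures follow from the standard 1951 USAF target relation, in which group *g*, element *e* has a spatial frequency of 2^(g + (e − 1)/6) line pairs per millimeter and the resolvable line width is half the pair period:

```python
def usaf_resolution_um(group, element):
    """Spatial frequency (lp/mm) and minimum resolvable line width (um)
    for a 1951 USAF target element: f = 2**(group + (element - 1)/6)."""
    lp_per_mm = 2 ** (group + (element - 1) / 6)
    return lp_per_mm, 1000.0 / (2 * lp_per_mm)

lp_with, res_with = usaf_resolution_um(4, 2)        # ~17.96 lp/mm, ~27.8 um
lp_without, res_without = usaf_resolution_um(4, 4)  # ~22.63 lp/mm, ~22.1 um
```

The tiny difference from the 27.87 μm quoted above comes from rounding of the tabulated target frequencies.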

#### 3.1.3. Depth of Field (DOF)

The DOF, i.e., the region where objects are sharply imaged, is an important parameter for dermatoscopic systems because human skin is not flat and nevi may be raised by up to several millimeters from the skin. The DOF here is given for a fixed focal length of the liquid lens. It is important for compensating calibration errors of the autofocus system and possible measurement uncertainties of the distance sensor, as it adds some tolerance if the focus is not exactly on the target. To measure the DOF, a special target was used, which consists of a 45° inclined plane with a scale and line pairs (DOF 5-15 Depth of Field Target, Edmund Optics GmbH, Karlsruhe, Germany). The system performance can be seen in Figure 4.

The scale in the image is the depth of the target in mm. The DOF can be read off the scale and is at least 22 mm. This is sufficient for most dermatoscopic investigations and skin structures.

**Figure 3.** Images of a 1951 USAF resolution test chart at a distance of 45 cm with (**a**) and without (**b**) the liquid lens in front of the camera lens.

**Figure 4.** Image of the depth of field target (**left**) and the intensity profile over the left stripe pattern of the image (**right**).

#### 3.1.4. Image Scale

In order to determine the size of skin features of interest in images taken at different distances from the imaging system, a method for reliable scaling of the images was developed. The image scale was calibrated by taking pictures of a scale bar from various distances. These distances were correlated to the output voltage of the distance sensor; in this way, the already mentioned look-up table was realized. The image size at different distances to the patient can be seen in Table 1. The scale allows images taken at different examinations to be compared and changes in the size of lesions to be measured over time. This is important, for example, for the widely known ABCDE rule in melanoma screening and for follow-up examinations in the therapy of inflammatory skin diseases.
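Converting a pixel measurement to a physical size via a distance-dependent image scale can be sketched as follows; the calibration pairs are hypothetical stand-ins for the look-up table described above, not the values of Table 1.

```python
import numpy as np

# Hypothetical calibration: working distance (cm) vs. image scale (um/pixel),
# as obtained by photographing a scale bar at several distances.
distance_cm = np.array([42.0, 43.5, 45.0, 46.5, 48.0])
scale_um_per_px = np.array([8.2, 8.5, 8.8, 9.1, 9.4])

def feature_size_mm(pixels, distance):
    """Convert a measured pixel length to millimeters by linearly
    interpolating the calibrated image scale at the given distance."""
    scale = np.interp(distance, distance_cm, scale_um_per_px)
    return pixels * scale / 1000.0
```

With the distance-sensor reading attached to every image, lesion diameters measured in pixels become directly comparable across examinations.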



#### *3.2. Characteristics of the Illumination Unit*

#### 3.2.1. Illumination Intensity and Spectral Data

The reflector for shaping the illuminated area, designed with OpticStudio as mentioned before, was milled and polished from aluminum. The light distribution was measured with a goniometer (PM-1200N-1, Radiant Vision Systems, Redmond, WA, USA) and the data exported back to OpticStudio, where the light intensity at different distances was calculated. As the output of the illumination unit was measured without the polarizer, the intensity had to be multiplied by a factor of 0.5 to obtain a good estimate. Furthermore, the illumination unit was measured with an integrating sphere (50 cm diameter, Mountain Photonics GmbH, Landsberg am Lech, Germany; with connected spectrometer: AvaSpec-2048L, Avantes BV, Apeldoorn, The Netherlands) to determine the light intensity and the spectral data for the correlated color temperature (CCT) and CRI. The CCT of the illumination unit is 6350 K with a CRI of 76. In Figure 5, the light distribution at a distance of 45 cm from the illumination unit is shown for an imaged area of 10 cm × 10 cm. The overall light flux through this area is 775 lm without, or 378.5 lm with, the polarizer. The light flux on the area of interest (the area which is captured by the camera) at this distance, i.e., 17 mm × 13 mm, is 71.6 lm, or 35.8 lm after the polarizer. This results in a polarized illuminance of 162,000 lx. For comparison, the minimal illuminance for office work spaces in Europe according to work safety regulations is 500 lx [22]. The measured illuminance from room lighting on the table in our laboratory was 525 lx. If we assume the illuminance in a doctor's office to be 1000 lx, the illuminance of the area of interest produced by the presented prototype is 162 times higher. This makes the light situation comparable for every image recorded, as the influence of ambient light on the total illuminance of the area of interest is negligible.
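The reported illuminance can be checked with a short calculation, since illuminance (lx) is luminous flux (lm) divided by illuminated area (m²):

```python
# Verifying the illuminance figure from the flux and area given above.
flux_lm = 35.8                        # polarized flux on the area of interest
area_m2 = 0.017 * 0.013               # 17 mm x 13 mm in meters
illuminance_lx = flux_lm / area_m2    # ~162,000 lx

# Ratio to an assumed 1000 lx doctor's office:
ratio = illuminance_lx / 1000.0       # ~162
```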

**Figure 5.** Light distribution at a distance of 45 cm from the illumination unit. The square shows the area of interest at this distance.

#### 3.2.2. Homogeneous Illumination

A homogeneous illumination is important for both online assessment of an image by the practitioner and digital post processing of the images. In such post processing, often gray scale values are compared to each other and thresholds are used to segment images. This requires comparable gray scale values of the structures to be detected in the whole image. To measure the homogeneity, an image of a homogenous gray scale target (ColorChecker, X-Rite Incorporated, Grand Rapids, MI, USA) was taken.

The average gray scale value of both the vertical and the horizontal intensity profiles is 140.9, with a standard deviation of 2.64 (vertical), corresponding to a relative variance of 4.48%, and 2.56 (horizontal), corresponding to a relative variance of 4.76%. With an average relative variance of 4.62%, the illumination is sufficiently homogeneous for use in a dermatoscopic device.
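A sketch of this evaluation; since the exact definition of "relative variance" used here is not fully specified, a common choice, the coefficient of variation (standard deviation divided by mean), is used below, and the profile data are synthetic, not the measured values:

```python
import numpy as np

# Illustrative intensity profile across the homogeneous gray target.
profile = np.array([138.0, 140.0, 141.0, 142.0, 143.0, 141.0, 139.0])

mean = profile.mean()
std = profile.std(ddof=1)            # sample standard deviation
rel_variation = 100.0 * std / mean   # coefficient of variation, in percent
```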

#### *3.3. Image Colors*

#### 3.3.1. Comparable Color Representation

Usually, there is a difference between image colors obtained during examinations and true colors. Due to the relatively small aperture of the camera lens in combination with the ultra-bright LED light source, the system only detects the LED light reflected from the sample under study. This makes the system independent of the ambient light situation and ensures equal exposure times and measurement conditions for every measurement session. In the first row of Figure 6, two images of the same target under different lighting conditions are shown. The first image was taken in the dark laboratory; the second was taken while the neon lights of the laboratory were turned on. Comparing gray scale values for equal exposure times shows that there is no significant difference between the pictures. The target used was a signal white (RAL 9003, RAL gGmbH, Bonn, Germany) card from the RAL color standard. In the second row, a picture of human skin is shown under the same lighting conditions. To quantify the difference, the gray scale values of the images in Figure 6a,b were subtracted from each other. Because small movements/displacements of the target between the images would lead to errors, an image registration was performed first to compensate for them. The average gray scale value of the resulting difference image was determined and is in the range of the noise of the individual images in Figure 6a,b, which was estimated by the standard deviations of the respective gray scale values. The latter were calculated for a 100 pixel × 100 pixel square area in the middle of each image, which shows a homogeneous target. The registration and subtraction were also performed with the images from Figure 6c,d. The observed differences in this case are slightly larger than in the first example and can be explained by the residual motion of the skin area between the images. This shows that the illumination situation is very well controlled and reproducible.
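The register-then-subtract comparison can be sketched as follows, assuming a pure (circular) translation between the two images; the phase-correlation method and the synthetic images below are illustrative, not the authors' actual registration algorithm:

```python
import numpy as np

# Phase correlation via the FFT estimates the pixel shift between two images.
def estimate_shift(ref, moving):
    """Estimate the (row, col) shift d such that moving == roll(ref, d)."""
    F = np.fft.fft2(moving) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts
    return [int(i) if i <= s // 2 else int(i) - s
            for i, s in zip(peak, ref.shape)]

rng = np.random.default_rng(0)
img_a = rng.random((64, 64))
img_b = np.roll(img_a, shift=(3, -2), axis=(0, 1))   # displaced copy of img_a

dy, dx = estimate_shift(img_a, img_b)                # -> (3, -2)
img_b_registered = np.roll(img_b, shift=(-dy, -dx), axis=(0, 1))
mean_diff = np.abs(img_a - img_b_registered).mean()  # ~0 after registration
```

After registration, the mean of the absolute difference image quantifies the residual discrepancy, which in the authors' measurements was within the noise of the individual images.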
If the illumination conditions are known, one calibration is enough to generate a camera color profile and calibrate all pictures taken by that camera. Colors then are comparable to the colors seen by eye and comparable to each other in different images taken with the calibrated system.

**Figure 6.** (**a**) Picture of the RAL color "Signalweiß" (signal white) taken in a dark laboratory and (**b**) while the neon light is turned on in the same laboratory. (**c**) Picture of human skin with a nevus and a vein in the bottom part of the image taken in a dark laboratory. (**d**) Picture of the same area of human skin taken while the neon light is turned on in the laboratory (at least 525 lx). Both pictures were taken with activated cross polarization, and the hand was fixed so that both images show the same area. The images look very similar, which shows that the system is mostly independent of ambient lighting conditions.

Because the light situation is controlled and the measurement parameters are the same for each measurement, the colors of the pictures taken in different examinations are comparable. They can be calculated from the relation between the different color channels.

#### 3.3.2. True Color Rendering

If the system is calibrated once, the colors in the images appear closer to natural colors, as can be seen in Figure 7. It shows images of a standard color reference target (ColorChecker Classic, X-Rite, Grand Rapids, MI, USA). Because the target is too large to fit in one picture, a composite image was assembled from single pictures of each tile of the ColorChecker. The calibration procedure uses the CIE (R,G,B) and LAB color spaces [23]. It aims at minimizing the color difference between the calibrated image and the optimal colors of the ColorChecker. Color values of the ColorChecker target are provided for reference by the distributor.
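One common way to implement such a calibration is a least-squares fit of a linear color-correction matrix mapping measured patch values to the published reference values; this is a sketch under that assumption (the procedure in the text additionally involves the LAB color space), and the patch values below are illustrative:

```python
import numpy as np

# Illustrative measured and reference RGB values for four ColorChecker
# patches (normalized to [0, 1]); not the actual target data.
measured = np.array([[0.20, 0.10, 0.08],
                     [0.55, 0.40, 0.30],
                     [0.15, 0.30, 0.12],
                     [0.60, 0.60, 0.55]])
reference = np.array([[0.45, 0.32, 0.26],
                      [0.76, 0.58, 0.47],
                      [0.35, 0.57, 0.25],
                      [0.95, 0.93, 0.88]])

# Solve measured @ M ~= reference for the 3x3 correction matrix M.
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)

def correct(rgb):
    """Apply the fitted color correction to an (N, 3) array of pixels."""
    return np.clip(rgb @ M, 0.0, 1.0)
```

Once fitted from a single calibration image, the same matrix can be applied to every picture taken by that camera, which is what makes images from different examinations comparable.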

**Figure 7.** Color calibration: (**a**) shows a picture with the ideal RGB (**r**ed, **g**reen, **b**lue) values of the ColorChecker Classic; (**b**) shows an image of the ColorChecker; and (**c**) the same image after color correction. The RGB values are now closer to the ideal RGB values in (**a**).

For a more realistic color perception a calibrated monitor is recommended. The calibration also facilitates the comparison of images obtained from different systems.

The result of the calibration can be seen in Figure 8.

**Figure 8.** Two images of a patient suffering from Lichen simplex chronicus. (**a**) Uncalibrated image data. (**b**) Same image after color calibration. The image looks brighter, and the skin color is more natural and closer to what the doctor is used to seeing with the bare eye.

#### **4. Discussion**

The non-contact dermatoscope presented in this work features both an integrated autofocus function, calibrated using an infrared laser-based distance sensor, and image post-processing. The large working distance is beneficial for routine examinations. If, in a future version of the system, the distance sensor is aligned along the optical axis of the camera, via a dichroic mirror for example, the system will be able to focus over an even larger range. Also, a distance-variable image scale would be advantageous and can be included in our system. Furthermore, a color read-out tool could be added to the software. With this tool, images taken during different examinations could be compared more easily, which could provide, for example, additional information about blood perfusion. Use in therapy monitoring is also possible.

The color calibration of our system is not yet optimal, as the color values in the calibrated image still deviate from the reference color values. The uncalibrated image is comparatively dark. This is because the gain of the camera was set to zero, so that the images can more easily be compared after image noise reduction. Also, the exposure time was set to 75 ms as a compromise between image brightness and motion blur. An even brighter light source would be advantageous in this regard. Another option would be to track the gain for the images taken and include this value in the calibration algorithm.

With regard to CRI requirements, halogen light sources and high-pressure xenon lamps, for example, can have CRI values of 100 and 93, respectively [24]. As we use an LED with a more discrete emission spectrum, its CRI cannot directly be compared with that of the more continuously emitting light sources mentioned above. The reason is that small changes in a discrete spectrum can result in large changes in the CRI, which do not correlate with the changes in color representation as perceived by a human observer or a camera [25]. Also, the spectral acceptance of the sensor employed, i.e., the camera system, has to be taken into account. As even the light situation in examination rooms can vary, for example with the time of day, no exact color matching is needed in this case. It is more important to ensure comparable colors in every picture taken by the system. The light source and image algorithms can be further improved by spectral tuning to enhance the visibility of different features relevant for diagnostics in dermatology [26].

Finally, the hardware components of the system can be made more compact, so that it can serve as a module in an automated skin-screening device. In such a scheme, the patient lies on a medical lounger and an automated arm equipped with suitable imaging systems takes overview pictures of the skin and identifies nevi and other areas relevant to dermatology. Subsequently, the system described in this work would take high-resolution images of the selected skin areas. In the case of nevi, an algorithm that evaluates the ABCDE criteria could then perform a risk analysis, so that the physician only needs to examine the suspicious ones in more detail. In further versions, external image analysis from outside the examination room could be implemented, as well as deep learning approaches for fast and essentially real-time analysis, as pointed out in [27].

#### **5. Conclusions and Outlook**

In this work, we present a prototype of a non-contact dermatoscope with a built-in autofocus system. The autofocus is based on a liquid lens in combination with an infrared laser distance sensor. This setup is sufficiently fast for dermatoscopy, enabling a focus refresh rate of 30 Hz for image acquisition, which is double the maximal frame rate of the camera. The resolution of the dermatoscopic system is slightly lower than the resolution of the system without the liquid lens, but it is still within the required minimal resolution of 30 μm. In the current alignment of the components, the system is optimized for a working distance of 45 cm ± 3 cm. If the surface to be examined is not strongly curved (over the range of the depth of field), the system delivers sharp images at even larger distances, as the infrared laser spot is reflected on the image plane. The prototype is largely independent of the ambient light due to its ultra-bright LED light source illuminating the area of interest on the target at a distance of 45 cm with a luminous flux of 35.8 lm, which is equivalent to 162,000 lx. This makes images taken under different ambient light conditions comparable. The system also allows for color calibration to obtain a more realistic color representation.

In the next step, the prototype will be tested in a clinical environment. This could reveal further diagnostic advantages. A study will be designed to collect a larger set of images from skin lesions correlated to specific diagnostic data. The focus will be placed on a number of common inflammatory diseases in order to identify (new) diagnostic features that are revealed by the dermatoscopic images and to better evaluate the diagnostic potential of the non-contact remote approach.

**Author Contributions:** Conceptualization, D.F. and M.W.; Funding acquisition, B.R.; Investigation, E.D. and D.F.; Project administration, B.R.; Resources, A.H., T.W. and B.R.; Software, D.F.; Supervision, M.W. and B.R.; Writing—original draft, D.F.; Writing—review & editing, D.F., M.W. and B.R.

**Funding:** This project is funded by the Lower Saxony Ministry for Culture and Science (MWK) through the program Tailored Light and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy within the Cluster of Excellence PhoenixD (EXC 2122, Project ID 390833453).

**Conflicts of Interest:** The authors declare no conflicts of interest.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

### *Article* **Single-Mode Polymer Ridge Waveguide Integration of Organic Thin-Film Laser**

**Marko Čehovski 1,†, Jing Becker 2, Ouacef Charfi 1,†, Hans-Hermann Johannes 1,\*,†, Claas Müller <sup>2</sup> and Wolfgang Kowalsky 1,†**


Received: 6 March 2020; Accepted: 16 April 2020; Published: 18 April 2020

**Abstract:** Organic thin-film lasers (OLAS) are promising optical sources when it comes to flexibility and small-scale manufacturing. These properties are especially required for integrating organic thin-film lasers into single-mode waveguides. Optical sensors based on single-mode ridge waveguide systems, especially for Lab-on-a-chip (LoC) applications, usually need external laser sources, free-space optics, and coupling structures, which suffer from coupling losses and mechanical stabilization problems. In this paper, we report on the first successful integration of organic thin-film lasers directly into polymeric single-mode ridge waveguides, forming a monolithic laser device for LoC applications. The integrated waveguide laser is achieved in three production steps: nanoimprint of Bragg gratings onto the waveguide cladding material EpoClad, UV lithography of the waveguide core material EpoCore, and thermal evaporation of the OLAS material Alq3:DCM2 on top of the single-mode waveguides and the Bragg grating area. The laser light emitted from the waveguide facet is analyzed by optical spectroscopy, showing single-mode characteristics even at high pump energy densities. This kind of integrated waveguide laser is very suitable for photonic LoC applications based on intensity and interferometric sensors where single-mode operation is required.

**Keywords:** integrated optics and photonics; integrated polymer optics; organic laser; integration; polymeric waveguide; Lab-on-a-Chip

#### **1. Introduction**

Only the light from a few types of light sources can be coupled effectively into single-mode waveguides. Lasers are light sources with very high coupling efficiencies. For polymeric waveguides, however, not all laser types are suitable. Especially for LoC devices, single-mode operation is mostly required. The coupling of the laser light into a single-mode waveguide is mainly achieved by prism coupling, grating couplers with a free-space set-up, or butt coupling with a lensed fiber [1]. An important step towards waveguide integrated lasers can be realized by combining the laser resonator, such as a distributed feedback (DFB) resonator, with the waveguide structure. The wavelength selectivity and high efficiency of DFB resonators, as well as their ease of fabrication, make them very suitable for this kind of integration. OLAS are well suited for this application due to their low layer thickness, their flexibility, and their simple processing [2]. In the last decade, the integration of organic DFB lasers into waveguides and LoC systems has been demonstrated. Balslev et al. [3] realized a LoC system with an integrated organic laser source and a photodiode. The laser light source is based on the laser dye Rhodamine 6G dissolved in ethanol, which is purged through a microfluidic channel to a DFB grating. The integration of the laser source and the multi-mode SU-8 waveguides is realized by butt coupling. Christiansen et al. [4] reported on active and passive SU-8 waveguides either doped or undoped with the laser dye Rhodamine 6G. First, the active waveguide is constructed by a lithography and imprint process. The passive waveguide is subsequently built up and also butt coupled to the active waveguide. Mappes et al. [5] and Vannahme et al. [6] report on the integration of an OLAS into PMMA. The resonant DFB structure is formed with the aid of hot embossing, and the waveguides are manufactured by deep-UV irradiation of the PMMA bulk material. 
Up to now, all these integrations have been realized only with multi-mode or slab waveguides. Such an approach is nevertheless attractive for intensity-based LoC applications where, for example, fluorescence excitation of an analyte in a microfluidic channel is used. In contrast, pure single-mode operation is required for interferometric sensors such as optically based LoC systems. Since higher-order modes propagate at different velocities and have different evanescent-wave characteristics, they lead to inefficient interferometric interaction in the sensor as well as signal losses due to modal dispersion [1]. Recently, Becker et al. [7] integrated DFB gratings onto a few-mode ridge waveguide 2.0 µm in width and 2.5 µm in height. This was realized by a straightforward combined nanoimprint and photolithography process (CNP process) using OrmoCore, a silicon-containing hybrid waveguide core material, onto which the OLAS was finally evaporated. In our study, we used a three-step fabrication process to integrate the OLAS into a polymeric single-mode waveguide 1.0 µm in width and 1.0 µm in height, as schematically depicted in Figure 1a. We used the photopatternable epoxies EpoClad (refractive index = 1.579 @650 nm) and EpoCore (refractive index = 1.593 @650 nm) for the cladding layer, including the grating structure, and the waveguide core, respectively. The waveguide materials are commercially available and distributed by Micro Resist Technology GmbH in Germany. The active organic material (Alq3:DCM2) was then evaporated on top of the waveguide structure with the DFB gratings underneath. The final device contains five DFB gratings in parallel and five single-mode waveguides on top of each grating area, forming 25 different waveguide integrated lasers with five different emission wavelengths.

**Figure 1.** (**a**) Schematic sketch of the OLAS integrated into the polymeric single-mode waveguide; and (**b**) the structural formula of Alq3 and DCM2.

#### **2. The Polymeric Single-Mode Waveguide**

Optical simulations using the finite difference eigenmode solver support the single-mode characteristic. Figure 2 shows the cross section of the single-mode ridge waveguide structure as described in the introduction as well as the TE0 and TE1 mode distribution with air (refractive index ≈1.0002 @650 nm) and EpoClad as the surrounding cladding medium.

For the ridge-type waveguide with air as upper cladding, the TE0 mode is asymmetric and extends into the lower cladding (see Figure 2a). The effective refractive index was calculated to be 1.561 @650 nm. Figure 2b shows the TE0 mode distribution for the symmetrical case, where the waveguide core is surrounded by the cladding material, making it a buried waveguide. The mode field maximum is located in the waveguide core, and only a minor fraction of the field intensity lies outside the core as the evanescent field. In this structure, the decay length of the evanescent field is about 1 µm. For the buried waveguide, the effective refractive index of the TE0 mode is 1.58 @650 nm. In both cases, higher modes such as TE1 (see Figure 2c,d) are not capable of propagating in the waveguide core.

**Figure 2.** TE0 and TE1 mode distribution with: (**a**,**c**) air as upper cladding medium; and (**b**,**d**) EpoClad as upper cladding medium.

#### **3. The Organic Thin-Film Laser**

We used the well-known guest–host active organic laser material containing Tris-(8-hydroxyquinoline)aluminum (Alq3) as the host material and the laser dye 4-(Dicyanomethylene)-2-methyl-6-julolidyl-9-enyl-4H-pyran (DCM2) as the guest material (see Figure 1b). Figure 3 shows the photoluminescence (PL) spectra of Alq3 and Alq3:DCM2 with different DCM2 doping concentrations. The samples were evaporated and characterized on glass substrates.

**Figure 3.** PL spectra of Alq3 and Alq3:DCM2 at different doping concentrations.

Figure 3 shows a redshift in the emission spectra of the Alq3:DCM2 samples with increasing DCM2 doping concentration. The reason for this is the Solid State Solvation Effect (SSSE), which describes the polarity influence of the host matrix on the emitter molecule [8]. Furthermore, Figure 3 also shows a decreased Alq3 emission in the PL spectra with increasing doping concentration of DCM2. The fraction of Alq3 emission is about 18% (see dashed line in Figure 3) at a DCM2 doping concentration of 1%. When the doping concentration increases to 3%, the fraction of Alq3 emission drops to 5%. With a further increase of the doping concentration, as shown in Figure 3, the Alq3 emission fraction in the PL spectra drops even more. This is due to Förster resonant energy transfer [9,10]: with decreasing donor/acceptor mean distance, the probability of a resonant energy transfer increases. The great advantages of this material system are the broad optical laser tuning range of up to 115 nm, a low lasing threshold down to approximately 3 µJ/cm2, and a high optical gain, which saturates at about 300 cm<sup>−1</sup> [11–13]. Moreover, the Alq3:DCM2 thin film is characterized by its very smooth surface after the vapor deposition process. The surface roughness was measured to be Ra = 950 pm (arithmetic mean value) and Rq = 1.2 nm (root mean square value). However, surface roughness, morphological variations, and the pump process itself can cause slight stochastic fluctuations and noise in the signal; therefore, the Karhunen–Loève transformation can be applied in the spectral analysis [14]. Different kinds of resonating structures have been demonstrated to achieve lasing in organic thin films, such as DFB resonators, micro disks, spheres, and distributed Bragg reflectors (DBR) [15–19]. The DFB resonator, however, stands out because of its ease of fabrication and its optical properties, such as its extraordinary wavelength selectivity and high efficiency. 
The emission wavelength (*λ*Bragg) of the distributed feedback laser can be varied by choosing a suitable grating periodicity Λ. It follows the Bragg condition:

$$
\lambda\_{\text{Bragg}} = \frac{2 \cdot n\_{\text{eff}} \cdot \Lambda}{m},
\tag{1}
$$

where *n*eff is the effective refractive index of the allover waveguide structure, Λ is the grating period, and *m* = 1, 2, 3, ..., is the diffraction order.
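A quick numerical check of Equation (1) with the values reported later in the text (Λ = 205 nm, *m* = 1) shows the consistency between the simulated effective index and the measured laser line:

```python
# Forward: predicted Bragg wavelength from the simulated effective index.
n_eff = 1.562
grating_period_nm = 205.0
m = 1
lambda_bragg_nm = 2.0 * n_eff * grating_period_nm / m   # ~640.4 nm

# Inverse: effective index implied by the measured laser line at 640.24 nm.
n_eff_measured = 640.24 / (2.0 * grating_period_nm)     # ~1.5616
```

The inverted value agrees with the simulated fundamental-mode index of approximately 1.561, as the Results section notes.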

#### **4. Device Fabrication**

First, a Polydimethylsiloxane (PDMS) stamp was made for replication of the DFB gratings. The grating area contains five DFB gratings arranged in parallel, each 2 mm in width and 10 mm in length. The periodicity of the gratings ranges from 195 nm to 215 nm with a period spacing of 5 nm. After the fabrication of the PDMS stamp, the bottom cladding with the grating structures was fabricated by a UV nanoimprint process. In this step, a mixture of EpoClad and *γ*-Butyrolactone (GBL) with a mixing ratio of 2:1 was used. The GBL-diluted EpoClad was spin-coated onto a silicon wafer at 4000 rpm for 30 s and then soft baked at 120 °C for 2 min on a hot plate. After the soft bake, the PDMS stamp was brought into contact with the still sticky EpoClad. A subsequent flood exposure starts the UV-induced cross-linking of the polymer. After that, the substrate was baked again on a hot plate at 120 °C for 3 min. After the EpoClad was completely polymerized, the silicon–PDMS stamp set was taken from the hot plate and the PDMS stamp was removed without any residue. The layer thickness of the EpoClad was about 1 µm. Figure 4a shows the profile of one of the gratings with Λ = 195 nm, and Figure 4b shows the good replication quality. This image was taken with an atomic force microscope. The measured average periodicity is 194.97 nm, which is very close to the desired value.

**Figure 4.** (**a**) Grating profile; and (**b**) 3D AFM image of the imprinted DFB grating with Λ = 195 nm into EpoClad.

The next fabrication step is structuring the waveguides onto the DFB gratings. In this case, a mixture of EpoCore and *γ*-Butyrolactone (GBL) in a mixing ratio of 2:1 was used. The mixture was spin-coated at 4600 rpm for 30 s onto the substrates with the pre-structured cladding layer. The substrates were first baked at 50 °C and then at 90 °C for 2 min each, followed by a UV-lithography step using a chromium mask with the waveguide structures. After that, the substrates were put on the hot plate again for 2 min at 50 °C and for 3 min at 85 °C for a post-exposure bake. The non-cross-linked EpoCore was removed by dipping the substrates for 10 s into mr-Dev 600 developer. Finally, the substrates were rinsed in isopropyl alcohol to stop the development process. Figure 5 shows a schematic sketch of the final device.

Before starting the measurements, the final device was carved at the break points with a diamond scriber so that the silicon wafer split apart along its crystal lattice. Figure 6 presents the SEM images of the EpoCore waveguide on top of the DFB gratings.

**Figure 5.** Schematic sketch of the final device after the lithography process containing DFB grating and waveguide structures.


**Figure 6.** (**a**) SEM overview image of the whole waveguide and grating structure and a detailed view on the waveguides on top of the DFB grating; and (**b**) the waveguide end facet.

The figures show a good waveguide integration on top of the DFB gratings as well as a satisfying waveguide end facet for the optical analysis of the integrated waveguide laser. Figure 6a also gives a detailed look at the waveguide and shows some irregularities and an uneven surface, which can probably be attributed to the grating structure below. The waveguides outside this grating area show good optical quality and a smooth surface (cf. Figure 6b). After the fabrication of the DFB gratings and the waveguides, 200 nm of the guest–host laser-active material system Alq3:DCM2 with a DCM2 doping concentration of 6% was co-evaporated in an ultrahigh vacuum chamber at *p* < 10<sup>−8</sup> mbar by the organic molecular beam deposition method.

#### **5. Results and Discussion**

The samples were measured and characterized by the optical set-up schematically shown in Figure 7.

The waveguide integrated OLAS was excited with the third harmonic (*λ* = 355 nm) of a passively Q-switched Nd:YAG laser. The excitation energy was controlled by a neutral density filter. For an efficient excitation, the Gaussian beam of the Nd:YAG laser was formed into a laser stripe by the cylindrical lenses L1, L2, and L3. Additional apertures form a flat-top profile out of the Gaussian beam. Thus, an excitation stripe 0.15 mm in width and 3.0 mm in length was created. The dimensions of the laser stripe were measured with a Thorlabs BP209-VIS/M scanning-slit optical beam profiler. The energy of the pump laser was measured with the Coherent LabMax-Top system. The waveguide integrated laser probes were stored and measured in an N2-flushed chamber. Alternatively, to avoid unnecessary degradation and oxidation under atmospheric conditions, a thin-film encapsulation can be applied on top of the waveguide integrated lasers [20]. The laser emission out of the end facet of the waveguide was measured using the lens pair L4 and L5. An additional aperture at the N2-flushed chamber in front of the lens L4 was included to suppress scattered light. The waveguide integrated lasers were analyzed by a precise monochromator (Triax 320, HORIBA Scientific) and a liquid-nitrogen-cooled CCD detection system (Symphony, HORIBA Scientific).
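The excitation energy densities quoted in the Results relate to the measured pulse energy through the stripe area; a short check, where the pulse energy is a hypothetical value since the text reports densities rather than pulse energies:

```python
# Pump stripe: 0.15 mm x 3.0 mm, converted to cm.
stripe_area_cm2 = 0.015 * 0.30           # = 0.0045 cm^2

# Hypothetical pulse energy (illustrative value, not from the text).
pulse_energy_uJ = 1.23                   # µJ

energy_density_uJ_cm2 = pulse_energy_uJ / stripe_area_cm2   # ~273 µJ/cm^2
```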

**Figure 7.** Optical set-up for laser performance measurements of the integrated OLAS. The laser emission was measured orthogonal to the pump beam.

The measurement of the OLAS out of the waveguide facet is depicted in Figure 8a, where lasing occurred with all grating periods. For the sake of simplicity, we take a closer look at just one laser line. Figure 8b shows the laser line at *λ*Bragg = 640.24 nm. Using the Bragg condition in Equation (1) for the first-order laser emission (*m* = 1) at Λ = 205 nm, an effective refractive index *n*eff of about 1.562 can be determined. This result is comparable with the result of the mode propagation simulation: the effective refractive index *n*eff of the fundamental mode for this waveguide was calculated to be approximately 1.561 @640 nm. The spectral width of the emission peak at the full width at half maximum (FWHM) was measured to be 200 pm. Furthermore, when increasing the excitation energy density up to 274 µJ/cm2 (cf. Figure 8c), no modes other than the fundamental mode were found. In previous measurements on few-mode waveguides (waveguide facet dimension 2 × 2 µm2), up to five modes could be found with increased pump energy.

Deviating from DFB theory [21,22], where two laser modes near the Bragg wavelength are generated due to the symmetry of the gratings in index-coupled DFB lasers [23], we could only measure one laser line from our waveguide integrated lasers. We suppose that this effect can be attributed to the replication process of the Bragg grating structure into the polymeric cladding layer. Possible slight thickness variations of the polymer layers due to the spin-coating technique, as well as potential variations in the doping concentration of the active laser material, can change the effective refractive index along the structure, leading to an asymmetric behavior and therefore to single-mode operation of the waveguide integrated laser.

**Figure 8.** (**a**) Lasing lines out of the integrated OLAS with different DFB grating periods with the optical modal gain in the background; and (**b**) the spectral response of the integrated OLAS with (**c**) increasing excitation energy density.

Figure 9a shows the emitted intensity of the waveguide integrated lasers as a function of the excitation energy density. An increased laser emission could be found when pumping the lasers above their thresholds. The laser thresholds were found to lie in the range from approximately 25 µJ/cm2 to approximately 170 µJ/cm2. Moreover, with increasing Bragg grating period, and therefore increasing laser wavelength, the laser threshold decreases and the slope of the lasers increases. At a certain turning point, the laser threshold increases and the laser slope decreases again. This can be attributed to the spectral response of the optical modal gain of this material (cf. Figure 8a). The turning point corresponds with the modal gain maximum, where the stimulated emission cross-section is the highest [12,13]. The spectral width of the modal gain increases with increasing excitation energy density. Thus, for shorter laser wavelengths, higher excitation energy is needed until the optical modal gain overcomes absorption and optical losses. For longer wavelengths, the optical modal gain decreases, which can be seen in the threshold of the laser with 666.82 nm laser wavelength. The lowest lasing threshold is one order of magnitude higher than that of classic OLAS without waveguides (cf. [2]) but still up to 210 times lower than the thresholds reported in [3,4]. By inserting a polarization filter in front of the detection fiber in the optical set-up (cf. Figure 7), a polarization measurement could be performed to obtain the TE and TM proportions. The radiated emission was clearly linearly polarized with a polarization extinction ratio (PER) of 15.5:1. The intensity difference between the values at 0° and 360° can be attributed to a slight degradation of the sample during this measurement.
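For reference, the 15.5:1 extinction ratio can be converted to decibels, the unit in which PER is often quoted:

```python
import math

# PER given as a linear intensity ratio in the text.
per_linear = 15.5
per_db = 10.0 * math.log10(per_linear)   # ~11.9 dB
```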

**Figure 9.** (**a**) Laser thresholds of the waveguide-integrated OLAS; and (**b**) the polar plot of the polarization properties. The lasing thresholds correlate with the optical modal gain of the active organic material.
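A polarization extinction ratio such as the 15.5:1 value above can be recovered from a polarizer-angle scan by fitting the Malus-law form I(θ) = Imin + (Imax − Imin)·cos²(θ − θ0) and reporting Imax/Imin. The following is a minimal sketch on synthetic data; the function name and the sample values are illustrative assumptions, not the measurement from this work.

```python
import numpy as np

def polarization_extinction_ratio(angles_deg, intensity):
    """Estimate the PER (Imax / Imin) from a polarizer-angle scan.

    Uses the identity cos²(θ - θ0) = 1/2 + 1/2·cos(2θ - 2θ0): the
    2θ Fourier component of the scan gives half of (Imax - Imin),
    while the mean gives (Imax + Imin) / 2.
    """
    th = np.radians(np.asarray(angles_deg, dtype=float))
    i = np.asarray(intensity, dtype=float)
    c = 2 * np.mean(i * np.cos(2 * th))   # in-phase 2θ component
    s = 2 * np.mean(i * np.sin(2 * th))   # quadrature 2θ component
    amp = np.hypot(c, s)                  # (Imax - Imin) / 2
    mean = np.mean(i)                     # (Imax + Imin) / 2
    return (mean + amp) / (mean - amp)

# Synthetic scan: Imin = 1, Imax = 15.5, transmission axis at 30°.
angles = np.arange(0, 360, 10)
signal = 1.0 + 14.5 * np.cos(np.radians(angles - 30)) ** 2
print(round(polarization_extinction_ratio(angles, signal), 1))  # ≈ 15.5
```

The Fourier-component approach is robust against an unknown polarizer offset θ0, since the 2θ amplitude is independent of phase; it assumes the angles sample whole periods uniformly.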

#### **6. Conclusions**

In this study, single-mode waveguides based on EpoCore and EpoClad with integrated Bragg gratings were successfully fabricated. By depositing the organic laser material Alq3:DCM2 on top of the waveguide structure, a monolithically integrated single-mode waveguide laser device was achieved. Laser emission with a FWHM of 200 pm was measured for all waveguide-integrated lasers. No higher-order modes were observed, even at excitation energy densities of up to 274 µJ/cm². The laser thresholds were measured to be in the range of approximately 25 µJ/cm² to 170 µJ/cm². The suitability for photonic or interferometric LoC sensor applications is supported by the polarization extinction ratio of 15.5:1, which indicates linear polarization. Moreover, this type of laser integration allows uncomplicated coupling of light into a single-mode waveguide. With this work, a milestone towards the monolithic integration of organic lasers is achieved. Optical sensor systems based on single-mode waveguides and, in particular, LoC systems could benefit from this work.

**Author Contributions:** conceptualization: M.Č.; software: M.Č.; validation: M.Č., J.B. and O.C.; investigation: M.Č. and J.B.; writing—original draft preparation: M.Č.; writing—review and editing: J.B. and H.-H.J.; supervision: H.-H.J., C.M. and W.K. All authors have read and agreed to the published version of the manuscript.

**Funding:** The authors would like to thank the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) for funding this work within the PolySens project (Project ID 410203759) as well as within Germany's Excellence Strategy in the Cluster of Excellence PhoenixD (EXC 2122, Project ID 390833453).

**Acknowledgments:** The authors would like to acknowledge the contribution of the SEM measurements used in this publication by M.Sc. Jan Gülink from the Institut für Halbleitertechnik, TU Braunschweig.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).