Article

Robust Parameter Design of Derivative Optimization Methods for Image Acquisition Using a Color Mixer †

1 Smart Manufacturing Technology Group, KITECH, 89 Yangdae-Giro RD., CheonAn 31056, ChungNam, Korea
2 UTRC, KAIST, 23, GuSung, YouSung, DaeJeon 305-701, Korea
* Author to whom correspondence should be addressed.
† This paper is an extended version of the paper published in Kim, HyungTae, KyeongYong Cho, SeungTaek Kim, Jongseok Kim, KyungChan Jin, SungHo Lee. “Rapid Automatic Lighting Control of a Mixed Light Source for Image Acquisition using Derivative Optimum Search Methods.” In MATEC Web of Conferences, Volume 32, EDP Sciences, 2015.
J. Imaging 2017, 3(3), 31; https://doi.org/10.3390/jimaging3030031
Submission received: 27 May 2017 / Revised: 3 July 2017 / Accepted: 15 July 2017 / Published: 21 July 2017
(This article belongs to the Special Issue Color Image Processing)

Abstract

A tuning method was proposed for automatic lighting (auto-lighting) algorithms derived from the steepest descent and conjugate gradient methods. The auto-lighting algorithms maximize the image quality of industrial machine vision by adjusting multiple-color light emitting diodes (LEDs), usually called color mixers. Image quality depends on finding the driving condition that achieves maximum sharpness. In most inspection systems, a single-color light source is used, and an equal step search (ESS) is employed to determine the maximum image quality. In the case of multiple-color LEDs, however, the number of iterations becomes large, which is time-consuming. Hence, the steepest descent (STD) and conjugate gradient (CJG) methods were applied to reduce the search time for maximum image quality. The relationship between lighting and image quality is multi-dimensional, non-linear, and difficult to describe using mathematical equations. Hence, the Taguchi method is practically the only method that can determine the parameters of the auto-lighting algorithms. The algorithm parameters were determined using orthogonal arrays, and the candidate parameters were selected so as to increase the sharpness and decrease the number of iterations of the algorithm, which determine the search time. The contribution of each parameter was investigated using ANOVA. After conducting retests using the selected parameters, the image quality was almost the same as that of the best-case parameters, with a smaller number of iterations.

1. Introduction

The quality of images acquired from an industrial machine vision system determines the performance of the inspection process during manufacturing [1]. Image-based inspection using machine vision is currently widespread, and image quality is critical in automatic optical inspection [2]. Image quality is affected by focusing, which is usually automated, and by illumination, which is still a manual process. Illumination in machine vision has many factors, such as intensity, peak wavelength, bandwidth, light shape, irradiation angle, distance, uniformity, diffusion, and reflection. The active control factors in machine vision are intensity and peak wavelength; the other factors are usually invariable. Although image quality is sensitive to intensity and peak wavelength, the optimal combination of these factors may vary with the material of the target object [3]. Because the active control factors are currently changed manually, adjusting the illumination condition of inspection machines during initial setup or product changes is considerably labor-intensive. However, the light intensity of a light-emitting diode (LED) can easily be adjusted by varying the electric current. A few articles have been written about auto-lighting that controls the intensity of a single-color light source [4,5,6]. Single-color lighting based on fuzzy control logic has been applied to a robot manipulator [7]. The light intensity of a single-color source is mostly determined using an equal step search (ESS), which varies the intensity from minimum to maximum in small intervals.
Color mixers synthesize various colors of light from multiple LEDs. The LEDs are arranged in the optical direction on a back plane, and an optical mixer is attached to the light output [8]. The color is varied by combining light intensities, which can be adjusted using electric currents. Optical collimators are the most popular devices for combining the light from multiple LEDs [9,10,11]. These studies aim to achieve exact color generation, uniformity in a target plane, and thermal stability; they do not focus on image quality. Optimal illumination can increase the color contrast in machine vision [12]; hence, spectral approaches have been pursued in bio-medical imaging [13,14].
When color mixers are applied to machine vision, the best color and intensity must be found manually. When an automatic search is performed using the ESS, the searching time is long because of the vast number of light combinations. Thus, we have been studying fast optimization between color and image quality in industrial machine vision [15,16,17]. Because the above-mentioned studies were based on non-derivative optimum methods, they were stably convergent but required multiple calls of a cost function per iteration, leading to a longer processing time. Derivative optimum search methods are well-known, simple, and easy to implement [18]. The derivative optimum methods are less stable and more oscillatory, but usually faster [18,19]. In this study, arbitrary N color sources and image quality were considered for the steepest descent (STD) and conjugate gradient (CJG) methods. The optimum methods are composed of functions, variables, and coefficients which are difficult to determine for the inspection process. Algorithm parameters also affect the performance of image processing methods [20], and they can be determined using optimum methods. Thus, a tuning step is necessary to select the values of the coefficients when applying the methods to inspection machines. The relation between the LED inputs and the image quality is complex, difficult to describe, and effectively a black-box function. The convergence, number of iterations, and oscillation are sensitive to the coefficients, but the function is unknown. The Taguchi method is one of the most popular methods for determining optimal process parameters with a minimum number of experiments when the system is unknown, complex, and non-linear. The contribution of process parameters can be investigated using ANOVA, and many applications have been reported in machining processes [21,22].
The Taguchi method for robust parameter design was applied to tune the auto-lighting algorithm for achieving the fastest search time and best image quality in the case of a mixed-color source.

2. Derivative Optimum for Image Quality

2.1. Index for Image Quality

The conventional inspection system for color lighting comprises a mixed light source, an industrial camera, a framegrabber, a controller, and a light control board. Figure 1 shows a conceptual diagram of the color mixer and machine vision system. The color mixer generates mixed light and emits it toward a target object, and the camera acquires a digital image, which is a type of response to the mixed light. The digital image is analyzed to evaluate image properties (e.g., image quality) and to determine the intensity levels of the LEDs in the color mixer. The intensity levels are converted into voltage levels using a digital-to-analog converter (DAC) board. The electric current to drive the LEDs is generated by a current driver according to the voltage levels. The color mixer and the machine vision system form a feedback loop.
The image quality must be evaluated in order to use optimum methods. Various image indices have been proposed in many papers; these are evaluated using pixel operations [23,24]. For instance, the brightness Ī is defined as the average grey level of an m × n pixel image.
\bar{I} = \frac{1}{mn} \sum_{x}^{m} \sum_{y}^{n} I(x, y)
where I(x, y) is the grey level of a pixel and m × n is the size of the image.
Image quality is one of the image indices, and is usually estimated using sharpness. Sharpness indicates the deviation and difference of grey levels among pixels. There are dozens of definitions of sharpness; standard deviation is widely used as the sharpness in machine vision [25]. Thus, the sharpness σ can be written as follows.
\sigma^{2} = \frac{1}{mn} \sum_{x}^{m} \sum_{y}^{n} \left[ I(x, y) - \bar{I} \right]^{2}
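As a concrete illustration, the brightness of Equation (1) and the sharpness of Equation (2) can be computed with a few NumPy operations; the helper names and test images below are illustrative, not part of the original system.

```python
import numpy as np

def brightness(img: np.ndarray) -> float:
    """Average grey level (the brightness I-bar) of an m x n image."""
    return float(img.mean())

def sharpness(img: np.ndarray) -> float:
    """Standard deviation of the grey levels, used as the sharpness sigma."""
    dev = img.astype(float) - brightness(img)
    return float(np.sqrt(np.mean(dev ** 2)))

# A uniform image has zero sharpness; a high-contrast checkerboard has high sharpness.
flat = np.full((4, 4), 128, dtype=np.uint8)
checker = (np.indices((4, 4)).sum(axis=0) % 2) * 255
print(sharpness(flat))     # 0.0
print(sharpness(checker))  # 127.5
```

In the real system the array would come from the framegrabber; any sharpness definition from [25] could be substituted for the standard deviation.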
Industrial machine vision usually functions in a dark room, so the image acquired by the camera depends completely on the lighting system. The color mixer employed in this study uses multiple color LEDs with individual electric inputs. Because the inputs are all adjusted using voltage levels, the color mixer has a voltage input vector for N LEDs as follows.
\mathbf{V} = (v_{1}, v_{2}, v_{3}, \ldots, v_{N})
As presented in Section 1, the relationship between the LED inputs and the image quality involves electric, spectral, and optical responses. This procedure cannot easily be described using a mathematical model, and the relationship from (1) to (3) is a black-box function, denoted by an arbitrary function f, which is the cost function in this study.
\sigma = f(\mathbf{V})
The best sharpness can be obtained by adjusting V; however, V is an unknown vector. The maximum sharpness can be found using optimum methods, but a negative sharpness ρ must be defined because optimum methods are designed to find a minimum. Hence, the negative sharpness is the cost function.
\rho = -\sigma = -f(\mathbf{V})
The optimum methods have a general form of problem definition using a cost function as follows [17]:
\min_{\mathbf{V}} \rho = -f(\mathbf{V})

2.2. Derivative Optimum Methods

The steepest descent and conjugate gradient methods are representative of the derivative optimum methods, which involve the differential operation of a cost function written as a gradient.
\nabla \rho = \left( \frac{\partial \rho}{\partial v_{1}}, \frac{\partial \rho}{\partial v_{2}}, \frac{\partial \rho}{\partial v_{3}}, \ldots, \frac{\partial \rho}{\partial v_{N}} \right)
The STD iterates the update equation until it finds a local minimum; the symbol k denotes the current iteration. The STD updates the current inputs by adding the negative gradient to them. In the STD, α is originally determined from ∂ρ(α)/∂α = 0 [18]; however, this is difficult to obtain using an experimental apparatus. In this study, α is assumed to be a constant.
{}^{k+1}\mathbf{V} = {}^{k}\mathbf{V} - \alpha \, \nabla({}^{k}\rho) = {}^{k}\mathbf{V} - \alpha \, ({}^{k}\xi)
The CJG updates the current inputs in the same way; the difference lies in calculating the update index ξ.
{}^{k+1}\mathbf{V} = {}^{k}\mathbf{V} - \alpha \, ({}^{k}\xi)
{}^{k}\xi = \nabla({}^{k}\rho) + \frac{\left\| \nabla({}^{k}\rho) \right\|^{2}}{\left\| \nabla({}^{k-1}\rho) \right\|^{2}} \, ({}^{k-1}\xi)
The update index ξ often has an unpredictably large or small value, which causes divergence or oscillation near the optimum. Consequently, the following boundary conditions are applied before updating the current inputs.
\alpha \, ({}^{k}\xi) = \begin{cases} -\eta \tau & {}^{k}\xi < -\tau \\ \eta \, ({}^{k}\xi) & -\tau < {}^{k}\xi < \tau \\ \eta \tau & {}^{k}\xi > \tau \end{cases}
where η is the convergence coefficient for a limited range and τ is the threshold. The updating of the inputs and the acquisition of sharpness are iterated until the gradient becomes smaller than the terminal condition ϵ1, which indicates that auto-lighting has found the maximum sharpness and the best image quality.
\left| {}^{k}\xi \right| < \epsilon_{1}
where ϵ 1 is an infinitesimal value for the terminal condition.
The cost function is acquired using hardware, and the terminal condition considers differential values. These values are discrete and sensitive to noise; hence, an additional terminal condition ϵ2 is applied as follows:
\left| {}^{k}\rho - {}^{k-1}\rho \right| < \epsilon_{2}
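The complete search loop of this section can be sketched as follows. The hardware cost function is replaced here by a synthetic quadratic with a known minimum, and the gradient is estimated by central finite differences; both are assumptions for illustration, since in the real system every evaluation of ρ means driving the LEDs and grabbing an image.

```python
import numpy as np

# Synthetic stand-in for the black-box cost rho = -f(V); minimum at V = (0, 0, 1.2).
def rho(V: np.ndarray) -> float:
    return float(np.sum((V - np.array([0.0, 0.0, 1.2])) ** 2))

def grad(V: np.ndarray, h: float = 1e-3) -> np.ndarray:
    """Central finite-difference estimate of the gradient of rho (an assumption;
    the paper does not specify how the gradient is measured)."""
    g = np.zeros_like(V)
    for i in range(V.size):
        e = np.zeros_like(V)
        e[i] = h
        g[i] = (rho(V + e) - rho(V - e)) / (2 * h)
    return g

def search(V0, eta=0.4, tau=0.6, eps1=1e-4, eps2=1e-7, use_cjg=False, k_max=500):
    V = np.asarray(V0, dtype=float)
    xi_prev, g_prev = None, None
    for k in range(k_max):
        g = grad(V)
        if use_cjg and g_prev is not None:
            # Fletcher-Reeves conjugate gradient update index
            xi = g + (g @ g) / (g_prev @ g_prev) * xi_prev
        else:
            xi = g  # steepest descent
        if np.linalg.norm(xi) < eps1:        # terminal condition eps1
            break
        step = eta * np.clip(xi, -tau, tau)  # boundary condition on alpha * xi
        V_new = V - step
        if abs(rho(V_new) - rho(V)) < eps2:  # additional terminal condition eps2
            V = V_new
            break
        V, xi_prev, g_prev = V_new, xi, g
    return V, k + 1

V_opt, iters = search([0.5, 0.5, 0.5])
print(np.round(V_opt, 2))  # converges near (0, 0, 1.2)
```

In the auto-lighting system, `rho` would set the DAC voltages and return the negative sharpness of the grabbed image; the loop structure is otherwise unchanged.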

3. Robust Parameter Design

3.1. System for Experiment

The sharpness and derivative methods were applied to a test system which was constructed in our previous study [6]. The test system was composed of a 4 M pixel camera (SVS-4021, SVS-VISTEK, Seefeld, Germany), a coaxial lens (COAX, Edmund Optics, Barrington, NJ, USA), a framegrabber (SOL6M, Matrox, Dorval, QC, Canada), a multi-channel DAC board (NI-6722, NI, Austin, TX, USA), and an RGB mixing light source. Commercial integrated circuits (ICs) of EPROMs were used as sample targets A (EP910JC35, ALTERA, San Jose, CA, USA) and B (Z86E3012KSES, ZILOG, Milpitas, CA, USA), as shown in Figure 2. The camera and the ICs were fixed on the Z and XYR axes, respectively. The coaxial lens was attached to the camera and faced the ICs. An optical fiber from the RGB source was connected to the coaxial lens and illuminated the ICs. Images of the ICs were acquired and transferred to a PC through a CAMERALINK port on the framegrabber. The operating software was constructed using a development tool (Visual Studio 2008, Microsoft, Redmond, WA, USA) and a vision library (MIL 8.0, Matrox, Dorval, QC, Canada). The location of the ICs in the image was adjusted using the XYR axes after focusing was performed using the Z axis. The inputs of the RGB source were connected to the DAC board, and the light color and intensity were adjusted through the board. The STD and CJG for the optimum light condition were implemented in the software.

3.2. Taguchi Method

The Taguchi method is commonly used to tune algorithm parameters and optical designs in machine vision [26,27,28]. A neural network is a massive and complex numerical model, and derivative optimum methods are frequently applied to train its parameters [29,30]. The Taguchi method is useful for finding the learning parameters of a neural network and increasing learning efficiency in a machine vision system [31]. Considering the non-linear, multi-dimensional, black-box nature of the system in this study, we expected that the Taguchi method could be useful in tuning the auto-lighting algorithm. The performance of the algorithm was evaluated mainly by the minimum number of iterations and the maximum sharpness. Hence, the “smaller the better” concept was applied to the number of iterations and the “larger the better” concept was applied to the sharpness when calculating the signal-to-noise (SN) ratio. These SN ratios can be obtained using the following equations [32,33]:
SN = -10 \log_{10} \left( \frac{1}{w} \sum_{j=1}^{w} \frac{1}{u_{j}^{2}} \right) \quad \text{(the larger the better)}

SN = -10 \log_{10} \left( \frac{1}{w} \sum_{j=1}^{w} u_{j}^{2} \right) \quad \text{(the smaller the better)}
where u_j is the performance index (e.g., sharpness or the number of iterations) and w is the number of experiments.
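The two SN ratios translate directly into code; a minimal sketch, where the sample response values are illustrative only:

```python
import math

def sn_larger_the_better(u):
    """SN ratio for 'the larger the better' (applied to sharpness)."""
    w = len(u)
    return -10 * math.log10(sum(1 / x ** 2 for x in u) / w)

def sn_smaller_the_better(u):
    """SN ratio for 'the smaller the better' (applied to iteration counts)."""
    w = len(u)
    return -10 * math.log10(sum(x ** 2 for x in u) / w)

# Illustrative responses: high sharpness gives a large SN ratio,
# many iterations give a strongly negative SN ratio.
print(sn_larger_the_better([389.43, 353.88]))
print(sn_smaller_the_better([117, 153]))
```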

3.3. Experiment Design

The selected parameters were the initial voltages of the red, green, and blue (RGB) LEDs, V = (v_R0, v_G0, v_B0), the convergence constant η, and the threshold τ. Because the maximum sharpness is usually found in low-voltage regions under a single-light condition, the range of the initial voltages was less than half of the full voltage. The ranges of η and τ were between 0.0 and 1.0. These five factors were chosen as control factors. Because all the ranges were divided into five intervals, the number of levels was set at 5. Therefore, the L25(5^5) model was organized using five control factors and five levels, as shown in Table 1. The number of experimental combinations is 25, which is quite small considering the multiple color sources and the algorithm parameters. Two sample targets were used for the experiments, as proposed.
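For reference, an L25 orthogonal array can be generated by the standard construction over GF(5), taking columns a, b, a+b, a+2b, ... (mod 5); this is a generic sketch, not the authors' tool, and the level values are those of Table 1.

```python
from collections import Counter
from itertools import combinations

def l25_array(n_factors: int = 5):
    """L25 orthogonal array: 25 runs, up to 6 five-level columns, levels coded 0..4."""
    runs = []
    for i in range(25):
        a, b = divmod(i, 5)
        cols = [a, b] + [(a + k * b) % 5 for k in range(1, 5)]
        runs.append(cols[:n_factors])
    return runs

# Map coded levels 0..4 to the physical values of Table 1 (levels 1..5).
levels = {
    "A": [0.5, 1.0, 1.5, 2.0, 2.5],  # initial V_R
    "B": [0.5, 1.0, 1.5, 2.0, 2.5],  # initial V_G
    "C": [0.5, 1.0, 1.5, 2.0, 2.5],  # initial V_B
    "D": [0.2, 0.4, 0.6, 0.8, 1.0],  # threshold tau
    "E": [0.2, 0.4, 0.6, 0.8, 1.0],  # convergence constant eta
}
runs = l25_array()
first = {f: levels[f][lv] for f, lv in zip("ABCDE", runs[0])}
print(len(runs), first)  # 25 runs instead of 5**5 = 3125 full-factorial combinations
```

The construction works because any two of the linear forms a, b, a+kb are independent mod 5, so every pair of columns contains each of the 25 level pairs exactly once.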

4. Results

The maximum sharpness found using the ESS was σ_max = 392.76 at V = (0, 0, 1.2) for Pattern A, and σ_max = 358.87 at V = (1.0, 0, 0) for Pattern B. The total number of step combinations for RGB was 50^3 = 125,000. The L25(5^5) orthogonal arrays for the steepest descent and conjugate gradient methods were constructed as shown in Table 2 and Table 3. σ_max, k_max, V_R, V_G, and V_B are the optimal statuses found by the steepest descent method using the selected parameters. Some combinations showed almost the same sharpness as the exact solution, some reached the maximum after only a few steps, and some failed to converge. These facts show that parameter selection for a derivative optimum method is important for stability. The SN ratios were calculated using MINITAB for the mathematical operations of the Taguchi analysis. Figure 3, Figure 4, Figure 5 and Figure 6 present the results of the Taguchi analysis and show the trends of the control factors. The variation in the sharpness was very small, whereas the variation in the number of iterations was larger, which implies that the parameters mainly affect the iteration count.
However, the trends of the sharpness and the number of iterations were inverse. Sharpness is more important than the number of iterations because inspection in a manufacturing process must be accurate. Hence, we chose the initial voltages based on the sharpness, and τ and η based on the number of iterations. The retest combinations of the STD, determined from the figures, were A3B2C5D2E5 and A5B1C1D1E1 for Patterns A and B, respectively. A3B3C3D5E5 and A5B3C1D5E4 were selected for Patterns A and B in the case of the CJG. The retest results using A3B2C5D2E5 were σ_max = 390.07 at V = (0.00, 0.00, 1.09) after 19 iterations. The combination A5B1C1D1E1 gave σ_max = 357.97 at V = (1.02, 0.00, 0.00) after 37 iterations. The retest results using A3B3C3D5E5 were σ_max = 383.73 at V = (0.31, 0.30, 0.3) after 16 iterations. This value is 2% lower than that of the ESS, and the coordinate is far from the ESS result, which indicates a different local minimum. However, when the terminal condition is tightened, a result similar to the ESS can be obtained with 74 iterations. The retest results using A5B3C1D5E4 were σ_max = 357.09 at V = (1.02, 0.02, 0.00) after 37 iterations. The contributions of the parameters in the STD were evaluated using ANOVA, as shown in Table 4 and Table 5. The ANOVA results were obtained using the general linear model in MINITAB. η was the most significant factor for Pattern A, but the initial point was significant for Pattern B. Table 6 and Table 7 show the contributions of the parameters in the CJG: η was the most significant factor for both the sharpness and the number of iterations, and the initial point was the next most significant. Hence, the convergence constant η is the most important factor, and the initial point is the second most important, for finding the optimum color lighting. τ was a minor factor in the experiments.
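The Taguchi analysis itself (performed with MINITAB in this study) reduces to a response table: the mean SN ratio per factor level, with the best level chosen for each factor. A sketch with synthetic SN ratios, where the values and the resulting combination are illustrative only:

```python
import statistics

# Coded L25 runs (levels 0..4 for factors A..E), standard GF(5) construction.
runs = []
for i in range(25):
    a, b = divmod(i, 5)
    runs.append([a, b, (a + b) % 5, (a + 2 * b) % 5, (a + 3 * b) % 5])

def main_effects(runs, sn):
    """Response table: mean SN ratio for each factor at each of the 5 levels."""
    return [[statistics.mean(s for r, s in zip(runs, sn) if r[f] == lv)
             for lv in range(5)]
            for f in range(len(runs[0]))]

def best_combination(runs, sn):
    """Pick, for each factor, the level with the highest mean SN ratio."""
    picks = [max(range(5), key=lambda lv: row[lv]) for row in main_effects(runs, sn)]
    return "".join(f"{'ABCDE'[f]}{lv + 1}" for f, lv in enumerate(picks))

# Synthetic SN ratios with a known best level per factor (illustrative only).
target = [2, 1, 4, 1, 4]
sn = [-sum((r[f] - target[f]) ** 2 for f in range(5)) for r in runs]
print(best_combination(runs, sn))  # A3B2C5D2E5
```

Because the array is orthogonal, averaging over the five runs at each level isolates that factor's contribution, which is why the per-factor maxima recover the planted best levels.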
Figure 7 and Figure 8 show the convergence to maximum sharpness obtained by the STD and CJG methods. In the figures, V_R, V_G, and V_B are mapped virtually in Cartesian coordinates. The starting point is shown in blue, the color varies during the iterations, and the terminal point is marked in red. The paths formed smooth curves, in contrast to the discrete patterns produced by direct and non-derivative optimum search methods. The starting points for each pattern, determined using the Taguchi method, were different, but the searches approached the same point.
The sharpness values in the retests were almost the same as those observed with the best-case parameters. However, the number of iterations was relatively small compared to the average number of iterations, and even compared to the numbers obtained using the ESS. One result had almost the same sharpness as the exact solution at a different voltage. The retest results show that the Taguchi method provides useful parameters with a small number of experiments. Although the maximum sharpness determined by the proposed methods was slightly lower than that determined by the ESS, the number of iterations was much smaller. Therefore, the proposed auto-lighting algorithm can reduce the number of iterations while keeping the image quality almost the same. Furthermore, the Taguchi method can reduce laborious tasks and the setup time for the inspection process in manufacturing.

5. Conclusions

A tuning method based on the Taguchi method was proposed for the auto-lighting algorithm. The algorithm maximizes image quality by adjusting multiple light sources in the shortest time, thus providing a function called auto-lighting. The image quality was represented by sharpness, the standard deviation of the grey levels of the pixels in an inspected image. The maximum sharpness was found by minimizing the negative sharpness using two derivative optimum methods, the steepest descent (STD) and conjugate gradient (CJG) methods, which were modified for the auto-lighting algorithm.
The Taguchi method was applied to determine the algorithm parameters, such as initial voltage, convergence constant, and threshold. The L 25 ( 5 5 ) orthogonal array was constructed considering five control factors and five levels of the parameter ranges. The SN ratio of the sharpness was calculated using “the larger the better”, and that of the number of iterations was calculated using “the smaller the better”. The desired combinations were determined after the Taguchi analysis using the orthogonal array. A retest was conducted by using the desired combination, and the results showed that the Taguchi method provides useful parameter values, and the performance is almost equal to that of the best-case parameters. The Taguchi method will be useful in reducing tasks and the time required to set up the inspection process in manufacturing.

Acknowledgments

We would like to acknowledge the financial support from the R & D Program of MOTIE (Ministry of Trade, Industry and Energy) and KEIT (Korea Evaluation Institute of Industrial Technology) of Republic of Korea (Development of Vision Block supporting the combined type I/O of extensible and flexible structure, 10063314). The authors are grateful to AM Technology (http://www.amtechnology.co.kr) for supplying RGB mixable color sources.

Author Contributions

All of the authors contributed extensively to this work presented in the paper. HyungTae Kim conceived the optimum algorithm, conducted the experiments and wrote the paper. KyeongYong Cho conducted the experiment using the Taguchi method. KyungChan Jin designed the experimental apparatus and Jongseok Kim constructed the apparatus. SeungTaek Kim derived color coordinates and analyzed the data.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

STD  Steepest descent method
CJG  Conjugate gradient method
LED  Light emitting diode
RGB  Red, green and blue
sRGB  Standard red, green and blue
ESS  Equal step search
TAE  Trial-and-error
SN  Signal-to-noise
I(x, y)  Grey level of an image pixel
Ī  Brightness, average grey level of an image
k  Current iteration
m  Horizontal pixel number of an image
N  Number of voltage inputs for a color mixer
n  Vertical pixel number of an image
u  Performance index
V  Vector of voltage inputs for a color mixer
v  Individual voltage input for an LED
w  Number of experiments
x  Horizontal coordinate of an image
y  Vertical coordinate of an image
α  Convergence coefficient
ϵ  Terminal condition
η  Convergence coefficient for limited range
ρ  Negative sharpness, cost function
σ  Sharpness, image quality
τ  Threshold
ξ  Index of update for conjugate gradient method

References

  1. Gruber, F.; Wollmann, P.; Schumm, B.; Grahlert, W.; Kaskel, S. Quality Control of Slot-Die Coated Aluminum Oxide Layers for Battery Applications Using Hyperspectral Imaging. J. Imaging 2016, 2, 12. [Google Scholar] [CrossRef]
  2. Neogi, N.; Mohanta, D.K.; Dutta, P.K. Review of vision-based steel surface inspection systems. EURASIP J. Image Video Process. 2014, 50, 1–19. [Google Scholar] [CrossRef]
  3. Arecchi, A.V.; Messadi, T.; Koshel, R.J. Field Guide to Illumination; SPIE Press: Bellingham, WA, USA, 2007; pp. 110–115. [Google Scholar]
  4. Pfeifer, T.; Wiegers, L. Reliable tool wear monitoring by optimized image and illumination control in machine vision. Measurement 2010, 28, 209–218. [Google Scholar] [CrossRef]
  5. Jani, U.; Reijo, T. Setting up task-optimal illumination automatically for inspection purposes. Proc. SPIE 2007, 6503, 65030K. [Google Scholar]
  6. Kim, T.H.; Kim, S.T.; Cho, Y.J. Quick and Efficient Light Control for Conventional AOI Systems. Int. J. Precis. Eng. Manuf. 2015, 16, 247–254. [Google Scholar] [CrossRef]
  7. Chen, S.Y.; Zhang, J.W.; Zhang, H.X.; Kwok, N.M.; Li, Y.F. Intelligent Lighting Control for Vision-Based Robotic Manipulation. IEEE Trans. Ind. Electron. 2012, 59, 3254–3263. [Google Scholar] [CrossRef]
  8. Victoriano, P.M.A.; Amaral, T.G.; Dias, O.P. Automatic Optical Inspection for Surface Mounting Devices with IPC-A-610D compliance. In Proceedings of the 2011 International Conference on Power Engineering, Energy and Electrical Drives (POWERENG), Malaga, Spain, 11–13 May 2011; pp. 1–7. [Google Scholar]
  9. Muthu, S.; Gaines, J. Red, Green and Blue LED-based White Light Source: Implementation Challenges and Control Design. In Proceedings of the 2003 38th IAS Annual Meeting, Conference Record of the Industry Applications Conference, Salt Lake City, UT, USA, 12–16 October 2003; Volume 1, pp. 515–522. [Google Scholar]
  10. Esparza, D.; Moreno, I. Color patterns in a tapered lightpipe with RGB LEDs. Proc. SPIE 2010, 7786, 77860I. [Google Scholar]
  11. Van Gorkom, R.P.; van AS, M.A.; Verbeek, G.M.; Hoelen, C.G.A.; Alferink, R.G.; Mutsaers, C.A.; Cooijmans, H. Etendue conserved color mixing. Proc. SPIE 2007, 6670, 66700E. [Google Scholar]
  12. Zhu, Z.M.; Qu, X.H.; Liang, H.Y.; Jia, G.X. Effect of color illumination on color contrast in color vision application. Proc. SPIE Opt. Metrol. Insp. Ind. Appl. 2010, 7855, 785510. [Google Scholar]
  13. Park, J.I.; Lee, M.H.; Grossberg, M.D.; Nayar, S.K. Multispectral Imaging Using Multiplexed Illumination. In Proceedings of the IEEE 11th International Conference on Computer Vision (ICCV 2007), Rio de Janeiro, Brazil, 14–20 October 2007; pp. 1–8. [Google Scholar]
  14. Lee, M.H.; Seo, D.K.; Seo, B.K.; Park, J.I. Optimal Illumination Spectrum for Endoscope. In Proceedings of the 2011 17th Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV), Ulsan, Korea, 9–11 February 2011; pp. 1–6. [Google Scholar]
  15. Kim, T.H.; Kim, S.T.; Cho, Y.J. An Optical Mixer and RGB Control for Fine Images using Grey Scale Distribution. Int. J. Optomech. 2012, 6, 213–225. [Google Scholar] [CrossRef]
  16. Kim, T.H.; Kim, S.T.; Kim, J.S. Mixed-color illumination and quick optimum search for machine vision. Int. J. Optomech. 2013, 7, 208–222. [Google Scholar] [CrossRef]
  17. Kim, T.H.; Cho, K.Y.; Jin, K.C.; Yoon, J.S.; Cho, Y.J. Mixing and Simplex Search for Optimal Illumination in Machine Vision. Int. J. Optomech. 2014, 8, 206–217. [Google Scholar] [CrossRef]
  18. Arora, J.S. Introduction to Optimum Design, 2nd ed.; Academic Press: San Diego, CA, USA, 2004; pp. 433–465. [Google Scholar]
  19. Kim, T.H.; Cho, K.Y.; Kim, S.T.; Kim, J.S.; Jin, K.C.; Lee, S.H. Rapid Automatic Lighting Control of a Mixed Light Source for Image Acquisition using Derivative Optimum Search Methods. In Proceedings of the International Symposium of Optomechatronics Technology (ISOT 2015), Neuchâtel, Switzerland, 14–16 October 2015; p. 07004. [Google Scholar]
  20. Dey, N.; Ashour, A.S.; Beagum, S.; Pistola, D.S.; Gospodinov, M.; Gospodinova, E.P.; Tavares, J.M.R.S. Parameter Optimization for Local Polynomial Approximation based Intersection Confidence Interval Filter Using Genetic Algorithm: An Application for Brain MRI Image De-Noising. J. Imaging 2015, 1, 60–84. [Google Scholar] [CrossRef]
  21. Lazarevic, D.; Madic, M.; Jankovic, P.; Lazarevic, A. Cutting Parameters Optimization for Surface Roughness in Turning Operation of Polyethylene (PE) Using Taguchi Method. Tribol. Ind. 2012, 34, 68–73. [Google Scholar]
  22. Verma, J.; Agrawal, P.; Bajpai, L. Turning Parameter Optimization for Surface Roughness of Astm A242 Type-1 Alloys Steel by Taguchi Method. Int. J. Adv. Eng. Technol. 2012, 3, 255–261. [Google Scholar]
  23. Firestone, L.; Cook, K.; Culp, K.; Talsania, N.; Preston, K., Jr. Comparison of Autofocus Methods for Automated Microscopy. Cytometry 1991, 12, 195–206. [Google Scholar] [CrossRef] [PubMed]
  24. Bueno-Ibarra, M.A.; Alvarez-Borrego, J.; Acho, L.; Chavez-Sanchez, M.C. Fast autofocus algorithm for automated microscopes. Opt. Eng. 2005, 44, 063601. [Google Scholar]
  25. Sun, Y.; Duthaler, S.; Nelson, B.J. Autofocusing in computer microscopy: Selecting the optimal focus algorithm. Microsc. Res. Tech. 2004, 65, 139–149. [Google Scholar] [CrossRef] [PubMed]
  26. Muruganantham, C.; Jawahar, N.; Ramamoorthy, B.; Giridhar, D. Optimal settings for vision camera calibration. Int. J. Manuf. Technol. 2010, 42, 736–748. [Google Scholar] [CrossRef]
  27. Li, M.; Milor, L.; Yu, W. Development of Optimum Annular Illumination: A Lithography-TCAD Approach. In Proceedings of the Advanced Semiconductor Manufacturing Conference and Workshop (IEEE/SEMI), Cambridge, MA, USA, 10–12 September 1997; pp. 317–321. [Google Scholar]
  28. Kim, T.H.; Cho, K.Y.; Kim, S.T.; Kim, J.S. Optimal RGB Light-Mixing for Image Acquisition Using Random Search and Robust Parameter Design. In Proceedings of the 16th International Workshop on Combinatorial Image Analysis, Brno, Czech Republic, 28–30 May 2014; Volume 8466, pp. 171–185. [Google Scholar]
  29. Zahlay, F.D.; Rao, K.S.R.; Baloch, T.M. Autoreclosure in Extra High Voltage Lines using Taguchi’s Method and Optimized Neural Network. In Proceedings of the 2008 Electric Power Conference (EPEC), Vancouver, BC, Canada, 6–7 October 2008; Volume 2, pp. 151–155. [Google Scholar]
  30. Sugiono; Wu, M.H.; Oraifige, I. Employ the Taguchi Method to Optimize BPNN’s Architectures in Car Body Design System. Am. J. Comput. Appl. Math. 2012, 2, 140–151. [Google Scholar]
  31. Su, T.L.; Chen, H.W.; Hong, G.B.; Ma, C.M. Automatic Inspection System for Defects Classification of Stretch Knitted Fabrics. In Proceedings of the 2010 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR), Qingdao, China, 11–14 July 2010; pp. 125–129. [Google Scholar]
  32. Wu, Y.; Wu, A. Quality engineering and experimental design. In Taguchi Methods for Robust Design; The American Society of Mechanical Engineers: Fairfield, NJ, USA, 2000; pp. 3–16. [Google Scholar]
  33. Yoo, W.S.; Jin, Q.Q.; Chung, Y.B. A Study on the Optimization for the Blasting Process of Glass by Taguchi Method. J. Soc. Korea Ind. Syst. Eng. 2007, 30, 8–14. [Google Scholar]
Figure 1. System Diagram for color mixing and automatic lighting.
Figure 2. Target patterns acquired by maximum sharpness: (a) Pattern A; (b) Pattern B.
Figure 3. Signal-to-noise (SN) ratios of control factors for Pattern A in the case of steepest descent method: (a) Sharpness; (b) Iterations.
Figure 4. SN ratios of control factors for Pattern B in the case of steepest descent method: (a) Sharpness; (b) Iterations.
Figure 5. SN ratios of control factors for Pattern A in the case of conjugate gradient method: (a) Sharpness; (b) Iterations.
Figure 6. SN ratios of control factors for Pattern B in the case of the conjugate gradient method: (a) Sharpness; (b) Iterations.
Figure 7. Search paths formed by the steepest descent method for (a) Pattern A and (b) Pattern B.
Figure 8. Search paths formed by the conjugate gradient method for (a) Pattern A and (b) Pattern B.
Table 1. Control factors and levels for derivative optimum methods.

| Factors | Code | Level 1 | Level 2 | Level 3 | Level 4 | Level 5 |
|---|---|---|---|---|---|---|
| V_R0: Initial V_R | A | 0.5 | 1.0 | 1.5 | 2.0 | 2.5 |
| V_G0: Initial V_G | B | 0.5 | 1.0 | 1.5 | 2.0 | 2.5 |
| V_B0: Initial V_B | C | 0.5 | 1.0 | 1.5 | 2.0 | 2.5 |
| τ: Threshold | D | 0.2 | 0.4 | 0.6 | 0.8 | 1.0 |
| η: Convergence Constant | E | 0.2 | 0.4 | 0.6 | 0.8 | 1.0 |
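Table 1's factors parameterize the search: A–C set the initial RGB drive voltages, D the stop threshold τ, and E the convergence constant η. The following is a minimal sketch — not the authors' implementation — of how these enter a steepest-descent auto-lighting loop. The `sharpness` callback stands in for acquiring an image at the given RGB voltages and scoring it; the central-difference gradient probing, the 0–2.5 V clamping range, and the gradient-norm stop rule are assumptions for illustration.

```python
import math

def steepest_descent(sharpness, v0, eta=0.4, tau=0.2, h=0.05, k_limit=300):
    # eta: convergence constant (factor E); tau: stop threshold (factor D);
    # h: finite-difference probe step; k_limit: iteration budget.
    v = list(v0)
    for k in range(1, k_limit + 1):
        # Central-difference gradient estimate: the sharpness surface has no
        # analytic form, so each probe means capturing and scoring an image.
        grad = []
        for i in range(3):
            vp, vm = list(v), list(v)
            vp[i] += h
            vm[i] -= h
            grad.append((sharpness(vp) - sharpness(vm)) / (2.0 * h))
        if math.sqrt(sum(g * g for g in grad)) < tau:
            return v, k                      # converged after k iterations
        # Ascent step (sharpness is maximized), clamped to the drive range.
        v = [min(max(vi + eta * gi, 0.0), 2.5) for vi, gi in zip(v, grad)]
    return v, k_limit                        # iteration budget exhausted
```

On a toy concave sharpness surface, e.g. `lambda v: 390 - (v[0]-1.0)**2 - (v[1]-0.5)**2 - (v[2]-1.2)**2`, the loop climbs to the peak in a handful of iterations; the real surface measured through the camera is noisier and multi-dimensional, which is why the parameter levels of Table 1 matter.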
Table 2. Orthogonal array of steepest descent method for Patterns A and B.

| Run # | A | B | C | D | E | σmax (A) | kmax (A) | V_R (A) | V_G (A) | V_B (A) | σmax (B) | kmax (B) | V_R (B) | V_G (B) | V_B (B) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 1 | 1 | 1 | 1 | 389.43 | 117 | 0.98 | 0.00 | 0.41 | 353.88 | 153 | 0.53 | 0.30 | 0.00 |
| 2 | 1 | 2 | 2 | 2 | 2 | 390.50 | 255 | 0.00 | 0.00 | 1.25 | – | – | – | – | – |
| 3 | 1 | 3 | 3 | 3 | 3 | – | – | – | – | – | 340.47 | 3 | 0.00 | 0.42 | 0.42 |
| 4 | 1 | 4 | 4 | 4 | 4 | 382.4 | 2 | 0.00 | 0.72 | 0.72 | – | – | – | – | – |
| 5 | 1 | 5 | 5 | 5 | 5 | 390.43 | 152 | 0.00 | 0.00 | 1.30 | 317.63 | 2 | 0.00 | 0.53 | 0.82 |
| 6 | 2 | 1 | 2 | 3 | 4 | 386.09 | 189 | 0.52 | 0.02 | 0.52 | 344.02 | 1 | 0.52 | 0.02 | 0.52 |
| 7 | 2 | 2 | 3 | 4 | 5 | 387.22 | 1 | 0.20 | 0.20 | 0.70 | 337.22 | 1 | 0.20 | 0.20 | 0.70 |
| 8 | 2 | 3 | 4 | 5 | 1 | 390.40 | 123 | 0.00 | 0.00 | 1.26 | 333.35 | 6 | 0.00 | 0.30 | 0.80 |
| 9 | 2 | 4 | 5 | 1 | 2 | 390.36 | 49 | 0.00 | 0.00 | 1.27 | 335.65 | 22 | 0.00 | 0.24 | 0.74 |
| 10 | 2 | 5 | 1 | 2 | 3 | 384.64 | 6 | 0.00 | 1.06 | 0.00 | 346.93 | 108 | 0.00 | 0.58 | 0.00 |
| 11 | 3 | 1 | 3 | 5 | 2 | 389.01 | 109 | 0.75 | 0.02 | 0.57 | – | – | – | – | – |
| 12 | 3 | 2 | 4 | 1 | 3 | 388.87 | 263 | 0.78 | 0.00 | 0.51 | 340.46 | 11 | 0.42 | 0.00 | 0.68 |
| 13 | 3 | 3 | 5 | 2 | 4 | 390.36 | 164 | 0.00 | 0.00 | 1.29 | 324.30 | 107 | 0.00 | 0.00 | 0.90 |
| 14 | 3 | 4 | 1 | 3 | 5 | 385.88 | 3 | 0.41 | 0.70 | 0.00 | 331.85 | 2 | 0.30 | 0.80 | 0.00 |
| 15 | 3 | 5 | 2 | 4 | 1 | 389.21 | 237 | 0.99 | 0.00 | 0.38 | 346.91 | 192 | 0.00 | 0.58 | 0.00 |
| 16 | 4 | 1 | 4 | 2 | 5 | 387.71 | 3 | 0.80 | 0.00 | 0.80 | 338.46 | 18 | 0.40 | 0.00 | 0.40 |
| 17 | 4 | 2 | 5 | 3 | 1 | 389.18 | 206 | 1.04 | 0.00 | 0.36 | 338.25 | 15 | 0.30 | 0.00 | 0.70 |
| 18 | 4 | 3 | 1 | 4 | 2 | – | – | – | – | – | 354.62 | 4 | 0.72 | 0.22 | 0.00 |
| 19 | 4 | 4 | 2 | 5 | 3 | 382.32 | 2 | 0.80 | 0.80 | 0.00 | 303.61 | 2 | 0.82 | 0.80 | 0.00 |
| 20 | 4 | 5 | 3 | 1 | 4 | 384.95 | 49 | 0.24 | 0.74 | 0.00 | 349.91 | 151 | 0.24 | 0.42 | 0.00 |
| 21 | 5 | 1 | 5 | 4 | 3 | 387.90 | 4 | 0.58 | 0.00 | 0.60 | 341.62 | 4 | 0.58 | 0.00 | 0.58 |
| 22 | 5 | 2 | 1 | 5 | 4 | 386.36 | 2 | 1.28 | 0.00 | 0.00 | 358.14 | 2 | 0.90 | 0.00 | 0.00 |
| 23 | 5 | 3 | 2 | 1 | 5 | 386.78 | 6 | 1.30 | 0.30 | 0.00 | 358.26 | 82 | 0.90 | 0.00 | 0.00 |
| 24 | 5 | 4 | 3 | 2 | 1 | 389.05 | 191 | 1.08 | 0.00 | 0.32 | 355.32 | 23 | 0.66 | 0.16 | 0.00 |
| 25 | 5 | 5 | 4 | 3 | 2 | 386.50 | 8 | 0.58 | 0.58 | 0.08 | 350.51 | 137 | 0.34 | 0.34 | 0.00 |
Table 3. Orthogonal array of conjugate gradient method for Patterns A and B.

| Run # | A | B | C | D | E | σmax (A) | kmax (A) | V_R (A) | V_G (A) | V_B (A) | σmax (B) | kmax (B) | V_R (B) | V_G (B) | V_B (B) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 1 | 1 | 1 | 1 | 389.28 | 59 | 0.99 | 0.00 | 0.43 | 358.49 | 143 | 0.95 | 0.00 | 0.00 |
| 2 | 1 | 2 | 2 | 2 | 2 | 390.58 | 41 | 0.00 | 0.00 | 1.25 | 343.00 | 24 | 0.48 | 0.16 | 0.43 |
| 3 | 1 | 3 | 3 | 3 | 3 | 390.43 | 139 | 0.00 | 0.00 | 1.20 | 340.21 | 3 | 0.00 | 0.42 | 0.42 |
| 4 | 1 | 4 | 4 | 4 | 4 | 382.42 | 2 | 0.00 | 0.72 | 0.72 | 354.05 | 9 | 0.80 | 0.00 | 0.00 |
| 5 | 1 | 5 | 5 | 5 | 5 | 384.66 | 2 | 0.00 | 0.57 | 0.78 | 316.46 | 2 | 0.00 | 0.50 | 0.88 |
| 6 | 2 | 1 | 2 | 3 | 4 | 386.07 | 1 | 0.52 | 0.02 | 0.52 | 343.92 | 1 | 0.52 | 0.02 | 0.52 |
| 7 | 2 | 2 | 3 | 4 | 5 | 387.16 | 1 | 0.20 | 0.20 | 0.70 | 337.14 | 1 | 0.20 | 0.20 | 0.70 |
| 8 | 2 | 3 | 4 | 5 | 1 | 390.40 | 50 | 0.00 | 0.00 | 1.30 | 358.43 | 168 | 0.98 | 0.00 | 0.00 |
| 9 | 2 | 4 | 5 | 1 | 2 | 390.54 | 28 | 0.00 | 0.00 | 1.27 | 338.89 | 29 | 0.24 | 0.16 | 0.66 |
| 10 | 2 | 5 | 1 | 2 | 3 | 388.64 | 178 | 0.74 | 0.02 | 0.71 | 350.01 | 21 | 0.48 | 0.41 | 0.00 |
| 11 | 3 | 1 | 3 | 5 | 2 | 390.42 | 24 | 0.00 | 0.00 | 1.28 | 331.95 | 2 | 0.70 | 0.00 | 0.70 |
| 12 | 3 | 2 | 4 | 1 | 3 | 390.50 | 144 | 0.00 | 0.00 | 1.25 | 352.28 | 16 | 0.58 | 0.24 | 0.08 |
| 13 | 3 | 3 | 5 | 2 | 4 | 390.37 | 17 | 0.00 | 0.00 | 1.31 | 324.42 | 15 | 0.00 | 0.00 | 0.90 |
| 14 | 3 | 4 | 1 | 3 | 5 | 388.16 | 5 | 1.48 | 0.00 | 0.00 | 347.42 | 33 | 0.00 | 0.60 | 0.00 |
| 15 | 3 | 5 | 2 | 4 | 1 | 390.39 | 284 | 0.00 | 0.00 | 1.29 | 348.20 | 26 | 0.11 | 0.63 | 0.00 |
| 16 | 4 | 1 | 4 | 2 | 5 | 387.61 | 3 | 0.80 | 0.00 | 0.80 | 338.48 | 124 | 0.40 | 0.00 | 0.40 |
| 17 | 4 | 2 | 5 | 3 | 1 | 390.43 | 273 | 0.00 | 0.00 | 1.24 | 342.81 | 38 | 0.28 | 0.28 | 0.43 |
| 18 | 4 | 3 | 1 | 4 | 2 | 390.43 | 264 | 0.00 | 0.00 | 1.25 | 354.42 | 4 | 0.72 | 0.22 | 0.00 |
| 19 | 4 | 4 | 2 | 5 | 3 | 387.57 | 7 | 0.95 | 0.60 | 0.00 | 304.10 | 2 | 0.80 | 0.80 | 0.00 |
| 20 | 4 | 5 | 3 | 1 | 4 | 387.84 | 3 | 9.00 | 0.27 | 0.91 | 349.67 | 13 | 0.24 | 0.42 | 0.00 |
| 21 | 5 | 1 | 5 | 4 | 3 | 387.92 | 5 | 0.74 | 0.08 | 0.70 | 338.57 | 4 | 0.58 | 0.00 | 0.65 |
| 22 | 5 | 2 | 1 | 5 | 4 | 384.69 | 1 | 1.70 | 0.20 | 0.00 | 358.02 | 2 | 0.90 | 0.00 | 0.00 |
| 23 | 5 | 3 | 2 | 1 | 5 | 387.46 | 8 | 0.90 | 0.30 | 0.20 | 357.96 | 8 | 0.90 | 0.00 | 0.00 |
| 24 | 5 | 4 | 3 | 2 | 1 | 389.12 | 80 | 0.93 | 0.00 | 0.42 | 358.45 | 118 | 0.95 | 0.00 | 0.00 |
| 25 | 5 | 5 | 4 | 3 | 2 | 386.35 | 8 | 0.58 | 0.58 | 0.08 | 350.36 | 9 | 0.34 | 0.34 | 0.00 |
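Table 3 reports the same L25 experiment run with the conjugate gradient method, which reuses previous search directions instead of stepping straight along the gradient. A hedged sketch of that variant follows; the abstract names only "the conjugate gradient method", so the Fletcher–Reeves β formula, the finite-difference gradient, and the clamping range are all assumptions made here for illustration.

```python
import math

def conjugate_gradient(sharpness, v0, eta=0.4, tau=0.2, h=0.05, k_limit=300):
    # eta and tau play the same roles as in the steepest descent search
    # (Table 1 factors E and D); beta is the Fletcher-Reeves ratio.
    def grad_at(v):
        g = []
        for i in range(3):
            vp, vm = list(v), list(v)
            vp[i] += h
            vm[i] -= h
            g.append((sharpness(vp) - sharpness(vm)) / (2.0 * h))
        return g

    v = list(v0)
    g = grad_at(v)
    d = list(g)                                # first direction: steepest ascent
    for k in range(1, k_limit + 1):
        if math.sqrt(sum(x * x for x in g)) < tau:
            return v, k                        # gradient norm below threshold
        v = [min(max(vi + eta * di, 0.0), 2.5) for vi, di in zip(v, d)]
        g_new = grad_at(v)
        # Fletcher-Reeves: weight the old direction by the gradient-norm ratio.
        beta = sum(x * x for x in g_new) / sum(x * x for x in g)
        d = [gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return v, k_limit
```

Mixing in the previous direction damps zig-zagging on elongated sharpness ridges, which is consistent with the generally smaller kmax counts in Table 3 relative to Table 2.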
Table 4. ANOVA of Pattern A for contribution of steepest descent method.

| Source | Parameter | DF | SS (σmax) | MS (σmax) | Contribution (%) | SS (iterations) | MS (iterations) | Contribution (%) |
|---|---|---|---|---|---|---|---|---|
| A | Initial V_R | 4 | 19.548 | 4.887 | 14.2 | 71,168 | 17,792 | 24.5 |
| B | Initial V_G | 4 | 18.847 | 4.712 | 13.7 | 36,910 | 9228 | 12.7 |
| C | Initial V_B | 4 | 34.290 | 8.572 | 23.0 | 65,697 | 16,424 | 22.6 |
| D | τ | 4 | 17.188 | 4.297 | 12.5 | 43,406 | 10,851 | 14.9 |
| E | η | 4 | 41.656 | 10.414 | 30.3 | 71,069 | 17,767 | 24.5 |
| Error | | 2 | 5.806 | 2.903 | 4.2 | 2171 | 1085 | 0.7 |
| Total | | 22 | 137.335 | | | 290,421 | | |
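The Contribution (%) columns in Tables 4–7 are each source's share of the total sum of squares (SS). A one-line check, using the σmax column of Table 4 as input, reproduces e.g. factor A's 14.2% and factor E's 30.3% (factor labels as in the table):

```python
def contributions(ss_by_source, ss_total):
    # Percent contribution of each ANOVA source: 100 * SS_source / SS_total,
    # rounded to one decimal place as in Tables 4-7.
    return {s: round(100.0 * ss / ss_total, 1) for s, ss in ss_by_source.items()}

# Sums of squares for sigma_max from Table 4 (steepest descent, Pattern A).
table4_ss = {"A": 19.548, "B": 18.847, "C": 34.290,
             "D": 17.188, "E": 41.656, "Error": 5.806}
print(contributions(table4_ss, 137.335))
```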
Table 5. ANOVA of Pattern B for contribution of steepest descent method.

| Source | Parameter | DF | SS (σmax) | MS (σmax) | Contribution (%) | SS (iterations) | MS (iterations) | Contribution (%) |
|---|---|---|---|---|---|---|---|---|
| A | Initial V_R | 4 | 1094.64 | 273.66 | 24.0 | 4027 | 1007 | 4.5 |
| B | Initial V_G | 4 | 916.54 | 229.13 | 20.1 | 40,860 | 10,215 | 45.6 |
| C | Initial V_B | 4 | 1019.66 | 254.91 | 22.4 | 2807 | 702 | 3.1 |
| D | τ | 4 | 715.99 | 179 | 15.7 | 17,557 | 4389 | 19.6 |
| E | η | 4 | 516.07 | 129.02 | 11.3 | 10,919 | 2730 | 12.2 |
| Error | | 4 | 291.90 | 72.97 | 6.4 | 13,524 | 3381 | 15.1 |
| Total | | 24 | 4554.80 | | | 89,694 | | |
Table 6. ANOVA of Pattern A for contribution of conjugate gradient method.

| Source | Parameter | DF | SS (σmax) | MS (σmax) | Contribution (%) | SS (iterations) | MS (iterations) | Contribution (%) |
|---|---|---|---|---|---|---|---|---|
| A | Initial V_R | 4 | 24.802 | 6.2 | 18.5 | 30,195 | 7549 | 14.8 |
| B | Initial V_G | 4 | 23.749 | 5.937 | 17.7 | 34,288 | 8572 | 16.9 |
| C | Initial V_B | 4 | 8.295 | 2.074 | 6.2 | 9756 | 2439 | 4.8 |
| D | τ | 4 | 19.159 | 4.8 | 14.3 | 24,720 | 6180 | 12.1 |
| E | η | 4 | 51.27 | 12.818 | 38.2 | 72,863 | 18,216 | 35.8 |
| Error | | 4 | 7.054 | 1.763 | 5.3 | 31,656 | 7914 | 15.6 |
| Total | | 24 | 134.329 | | | 203,478 | | |
Table 7. ANOVA of Pattern B for contribution of conjugate gradient method.

| Source | Parameter | DF | SS (σmax) | MS (σmax) | Contribution (%) | SS (iterations) | MS (iterations) | Contribution (%) |
|---|---|---|---|---|---|---|---|---|
| A | Initial V_R | 4 | 637.8 | 159.5 | 14.3 | 1884.4 | 471.1 | 3.3 |
| B | Initial V_G | 4 | 161.4 | 40.3 | 3.6 | 5903.6 | 1475.9 | 10.3 |
| C | Initial V_B | 4 | 1491.5 | 372.9 | 33.4 | 8974.8 | 2243.7 | 15.6 |
| D | τ | 4 | 840.4 | 210.1 | 18.8 | 8401.6 | 2100.4 | 14.6 |
| E | η | 4 | 794.9 | 198.7 | 17.8 | 29,353.6 | 7338.4 | 51.1 |
| Error | | 4 | 537.4 | 134.3 | 12.0 | 2888 | 722 | 5.0 |
| Total | | 24 | 4463.4 | | | 57,406 | | |