Author Contributions
This work was carried out in collaboration by all authors. Methodology, software, validation, formal analysis, investigation, writing—original draft preparation, and visualization, Z.O.; data curation and supervision, A.K.; conceptualization and writing—review and editing, Z.O. and A.K. All authors have read and agreed to the published version of the manuscript.
Figure 1.
Overview of the camera mounting in the combustion chamber.
Figure 2.
Average values of the color components for individual variants of the combustion process. The results presented in the graphs were obtained for the entire set of 1676 images for each variant.
Figure 3.
Probability density functions of the frame intensity (a) and box plots showing the dispersion of the frame intensity around the median (b). The results presented in the graphs were obtained from the analysis of the entire set of 1676 images for each variant.
Figure 4.
Optimal values of the gamma correction coefficient, determined experimentally on the basis of segmentation quality, for the mean intensities of the R channel of individual variants (black points). The blue line shows the exponential function fitted to the data points.
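The adaptive choice of the gamma coefficient from the mean R intensity can be sketched in NumPy. This is a minimal illustration: the exponential coefficients `a` and `b` are hypothetical placeholders, not the values fitted in Figure 4.

```python
import numpy as np

def adaptive_gamma(img_rgb, a=8.0, b=-0.02):
    """Apply gamma correction with G chosen from the mean R intensity.

    G = a * exp(b * meanR) mimics the shape of the exponential fit in
    Figure 4; a and b here are illustrative constants, not the fitted
    values. img_rgb is assumed to be a uint8 array in RGB channel order.
    """
    mean_r = float(img_rgb[..., 0].mean())
    g = a * np.exp(b * mean_r)
    normalized = img_rgb.astype(np.float64) / 255.0
    corrected = (255.0 * normalized ** g).astype(np.uint8)
    return corrected, g
```

In this sketch, dim frames (low mean R) receive a larger exponent G, which suppresses the dark background more strongly, while bright frames are corrected more gently.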
Figure 5.
Gamma correction for exemplary flame images belonging to different variants: (a) input images; (b) images after gamma correction. The images show that adaptive selection of the gamma correction coefficient effectively eliminates the image background regardless of the average intensity of the R component, creating better conditions for segmentation of the flame area.
Figure 6.
Impact of gamma correction on the segmentation quality: (a) exemplary flame image belonging to Variant 7; (b) segmentation result without gamma correction (the segmented object also includes a large background area); (c) segmentation result after gamma correction.
Figure 7.
Results of the most important operations performed during image processing: (a) source image; (b) result of gamma correction according to the function G = f(meanR) (Figure 4); (c) greyscale image after histogram normalization and median filtering; (d) binary image after thresholding with the Otsu method; (e) image after morphological operations (closing, opening) and finding the contour with the largest area; (f) image with the flame contour and its centroid (blue cross) marked; (g) image with the ROI marked (green square of 100 × 100 or 40 × 40 pixels) centered at the contour's centroid; (h) the ROI copied from the source image (without gamma correction). For greater readability, the images show only the central part of the processed frames that includes the flame. All images (except the ROI) are 800 × 800 pixels.
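Steps (d)–(g) of the pipeline can be sketched without OpenCV. The following NumPy version of Otsu thresholding and centroid-based ROI placement is a simplified illustration; the morphological clean-up and contour extraction from the paper's pipeline are omitted here.

```python
import numpy as np

def otsu_threshold(gray):
    """Threshold of a uint8 image maximizing between-class variance (Otsu)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    w0 = np.cumsum(p)                    # weight of the "background" class
    mu = np.cumsum(p * np.arange(256))   # cumulative first moment
    mu_t = mu[-1]                        # global mean intensity
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    return int(np.nanargmax(sigma_b))

def flame_centroid_roi(gray, size):
    """Binarize, locate the flame centroid and cut a size x size ROI."""
    mask = gray > otsu_threshold(gray)
    ys, xs = np.nonzero(mask)
    cy, cx = int(ys.mean()), int(xs.mean())
    h = size // 2
    return gray[cy - h:cy + h, cx - h:cx + h]
```

In the real pipeline, thresholding is preceded by gamma correction, histogram normalization and median filtering, and the centroid is taken from the largest contour rather than from all foreground pixels.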
Figure 8.
General scheme of the implemented deep neural network, consisting of a convolutional base (blue line) and a custom classifier (red line) for binary classification. The Keras classes visible on the individual layers have the following meaning: Flatten converts a 3-dimensional tensor into a 1-dimensional vector; Dense is a densely connected layer. The feature map has the shape (samples, height, width, channels); the None parameter corresponds to any number of samples.
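The role of the Flatten and Dense layers in Figure 8 can be illustrated with a shape-level NumPy sketch. The 3 × 3 × 512 feature-map size and the 256-unit hidden layer are illustrative assumptions, not necessarily the exact dimensions used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Feature map of shape (samples, height, width, channels); Keras displays
# the batch dimension as None because any number of samples is allowed.
features = rng.random((2, 3, 3, 512))

# Flatten: each sample's 3-D tensor becomes one 1-D vector.
flat = features.reshape(features.shape[0], -1)       # shape (2, 4608)

# Dense layers compute activation(x @ W + b); the weights below are
# random placeholders, not trained values.
w1, b1 = rng.standard_normal((4608, 256)), np.zeros(256)
hidden = np.maximum(flat @ w1 + b1, 0.0)             # ReLU
w2, b2 = rng.standard_normal((256, 1)), np.zeros(1)
prob = 1.0 / (1.0 + np.exp(-(hidden @ w2 + b2)))     # sigmoid, binary output
```

The final sigmoid unit yields one probability per sample, matching the binary classification head sketched in the figure.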
Figure 9.
Sample plots of the accuracy (a) and loss (b) of the training and validation processes for Variant 1 vs. 2 vs. 3. During the first 50 epochs, the new layers of the network are trained while the whole convolutional base is frozen (feature extraction). Then fine-tuning starts, during which the upper layers of the convolutional base are unfrozen and trained for a further 100 epochs together with the new classifier.
Figure 10.
Confusion matrices: (a) Variant 1 vs. 3; (b) Variant 1 vs. 2 vs. 3; (c) Variant 4 vs. 5; (d) Variant 6 vs. 7.
Figure 11.
The general principle of operation of the application for testing the method of identifying variants of the co-firing process.
Figure 12.
Application interface for testing the method of identifying co-firing variants: (a) an example of incorrect classification; (b) an example of correct classification. At the top, the following information is displayed: the actual label of the currently displayed variant (green); below it, the label of the variant recognized by the classifier (green for a correct classification, red for an incorrect one); to the right, the current image frame number; below that, the frame processing time, including the ROI extraction time and the co-firing variant prediction time of the constructed model. In the central part, the flame contour and its centroid (blue cross) are marked, as well as the ROI (green square) centered at the contour's centroid.
Table 1.
Variants of the combustion process. Pth—thermal power, λ—excess air coefficient.
Variant | Pth (kW) | λ | Fuel Flow (kg/h) | Secondary Air Flow (Nm³/h)
---|---|---|---|---
1 | 250 | 0.85 | 35.2 | 103.6
2 | 250 | 0.75 | 36.0 | 73.2
3 | 250 | 0.65 | 39.4 | 62.9
4 | 300 | 0.85 | 44.2 | 132.5
5 | 300 | 0.75 | 43.5 | 96.4
6 | 400 | 0.75 | 59.7 | 181.3
7 | 400 | 0.65 | 56.8 | 152.8
Table 2.
Percentage of image frames discarded for failing the region of interest (ROI) extraction condition, as a function of the ROI size. The condition requires the ROI to be completely contained within the flame contour.
Variant | 100 × 100 | 80 × 80 | 60 × 60 | 40 × 40
---|---|---|---|---
1 | 0.12 | 0.0 | 0.0 | 0.0
2 | 0.06 | 0.0 | 0.0 | 0.0
3 | 6.03 | 2.27 | 0.89 | 0.36
4 | 36.58 | 23.75 | 14.86 | 7.88
5 | 2.45 | 1.37 | 0.78 | 0.12
6 | 44.81 | 30.43 | 21.54 | 14.14
7 | 39.74 | 27.15 | 18.08 | 11.75
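The discard condition behind Table 2 can be expressed as a simple mask test. The helper below is a hypothetical sketch; the paper tests containment against the flame contour found during segmentation.

```python
import numpy as np

def roi_fully_inside(mask, cy, cx, size):
    """True if the size x size ROI centered at (cy, cx) lies entirely
    within the flame region given by the boolean mask."""
    h = size // 2
    y0, y1, x0, x1 = cy - h, cy + h, cx - h, cx + h
    if y0 < 0 or x0 < 0 or y1 > mask.shape[0] or x1 > mask.shape[1]:
        return False                  # ROI leaves the image; discard frame
    return bool(mask[y0:y1, x0:x1].all())
```

Frames failing this test are discarded, which is consistent with the rejection rates in Table 2 falling as the ROI shrinks.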
Table 3.
Number of cases belonging to particular variants.
Dataset | 1 | 2 | 3 | 4 | 5 | 6 | 7
---|---|---|---|---|---|---|---
Full | 1670 | 1671 | 1575 | 1544 | 1669 | 1439 | 1477
Training | 837 | 837 | 787 | 772 | 836 | 719 | 739
Validation | 418 | 419 | 394 | 386 | 419 | 360 | 370
Testing | 415 | 415 | 394 | 386 | 414 | 360 | 368
Table 4.
Classification results.
Model | Variant | Precision | Recall | F1-Score | Accuracy | Loss | Image Size
---|---|---|---|---|---|---|---
Binary | 1 | 1.00 | 0.95 | 0.98 | 0.98 | 0.06 | 100 × 100
 | 3 | 0.95 | 1.00 | 0.98 | | |
Multi-class | 1 | 0.80 | 0.78 | 0.79 | 0.84 | 0.40 | 100 × 100
 | 2 | 0.76 | 0.80 | 0.78 | | |
 | 3 | 0.98 | 0.94 | 0.96 | | |
Binary | 4 | 0.95 | 0.93 | 0.94 | 0.94 | 0.14 | 40 × 40
 | 5 | 0.94 | 0.95 | 0.95 | | |
Binary | 6 | 0.89 | 0.71 | 0.79 | 0.82 | 0.37 | 40 × 40
 | 7 | 0.77 | 0.92 | 0.83 | | |
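The per-variant precision, recall and F1-score in Table 4 follow directly from confusion matrices such as those in Figure 10. A generic NumPy computation, shown here with toy labels rather than the paper's data:

```python
import numpy as np

def metrics_from_labels(y_true, y_pred, n_classes):
    """Confusion matrix (rows: actual, cols: predicted) and per-class scores."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    tp = np.diag(cm).astype(float)
    precision = tp / cm.sum(axis=0)      # column sums: predicted counts
    recall = tp / cm.sum(axis=1)         # row sums: actual counts
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum()
    return cm, precision, recall, f1, accuracy
```

Precision divides the diagonal by column sums, recall by row sums, which is why a model can trade one for the other, as in the Variant 6 vs. 7 rows of Table 4.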
Table 5.
Application test results for identifying variants of the co-firing process. The test used the classifier model that detected Variants 1–3 and the model for Variants 4 and 5.
System | Variant | Image Size | t_ext | t_pred | t_proc
---|---|---|---|---|---
PC1 | 1-2-3 | 100 × 100 | 31 | 10 | 44
PC2 | 1-2-3 | 100 × 100 | 35 | 14 | 52
PC1 | 4-5 | 40 × 40 | 32 | 9 | 44
PC2 | 4-5 | 40 × 40 | 36 | 11 | 49
Table 6.
Comparison of the classification results, where the subjects of study were flame images.
Application | Architecture | ACC | Prec., Rec. | Ref.
---|---|---|---|---
Recognizing variants of pulverized coal and biomass co-firing | DCNN | 0.82–0.98 | 0.83–0.98, 0.81–0.98 | Own results
Monitoring combustion quality (complete, partial, incomplete) in a coal-fired boiler | FLD+RBN | – | 0.96, 1.0 | [6]
Recognizing the conditions of pulverized coal combustion | PCA+RWN | 0.91 | – | [12]
Identifying oil, powder, and normal burning states | DCNN | 1.0 | – | [26]
Detecting instability in an experimental combustion system based on flame images and sound pressure | DCNN | 0.69–1.0 | – | [3]
Abnormal condition detection in an experimental combustion system | DBN | 0.96 | – | [8]
Determination of combustion regimes using flame images of a gas burner | DCNN | 0.98 | – | [2]
Detection of stable, semi-stable and unstable combustion status in a thermal power plant | DCAE+PCA+HMM | 0.97 | – | [10]