Proceeding Paper

Research on the Image Background Base Algorithm Based on the Factor Space Theory †

College of Science, Liaoning Technical University, Fuxin 123000, China
* Author to whom correspondence should be addressed.
Presented at the 2023 Summit of the International Society for the Study of Information (IS4SI 2023), Beijing, China, 14–16 August 2023.
Comput. Sci. Math. Forum 2023, 8(1), 55; https://doi.org/10.3390/cmsf2023008055
Published: 14 August 2023
(This article belongs to the Proceedings of 2023 International Summit on the Study of Information)

Abstract

Image vision has wide application prospects in the field of intelligent manufacturing, and an effective background base extraction algorithm has become an urgent need for applying factor space theory to big data and image vision. To this end, an image background base extraction algorithm for identifying image information is proposed. The algorithm analyzes the characteristics of the base points in an image and converts the brightness changes observed by the human eye into a data form that a machine can read. Numerical experiments show the effectiveness and robustness of the proposed algorithm.

1. Introduction

As a new theory under the framework of factor space, the background base theory provides strong support for applying factors in artificial intelligence [1]. With its powerful mathematical model, the background base theory of factor space has become a theoretical basis of data science and provides factor-based thinking for scientific research [2].
At present, the background base extraction algorithm is the core of the theory and is widely used in the concept division and classification of data [3]. Compared with its use on general data, the application of the background base theory in the field of image vision is still relatively weak. As a key technology for realizing automation and intelligence, image vision is becoming one of the most rapidly developing branches of artificial intelligence. It has wide application prospects in intelligent manufacturing and is one of the important directions of technological change and innovation in the manufacturing industry. Vision is to artificial intelligence what eyes are to human beings; its importance is self-evident, and the importance of applying the background base in the field of image vision is becoming increasingly prominent. Studying the background base theory and developing a more effective extraction algorithm has therefore become an urgent problem. Improving and extending the background base theory is thus of great significance for artificial intelligence technology, as it provides a solid theoretical basis for the application of factor space in artificial intelligence.

2. Image Background Base Algorithm

2.1. Algorithm Construction

2.1.1. Image Basis Point Analysis

In the theory of factor space, the background base describes the features of a group of data: making factors explicit is the process of extracting the key features of these data, and the data features extracted by the factors constitute the background base. Natural factors cannot directly determine the category of an image, so the hidden key factors should be found and made explicit. According to the image's base point characteristics, an algorithm is designed to extract the image background base, that is, the key factors in the image.
In image recognition, the background base contains the key information of the image, and small changes at the base points have a significant impact on the image. Based on the theory of factor space, the pixels in an image are divided into two kinds: (1) the background basis points containing the key information of the image; and (2) the inner points, which can be obtained as linear combinations of the basis points. From the perspective of human judgment, a base point comes from the perceptual judgment of the image: the brightness at the base point changes significantly. From the perspective of image vision, the gray value at the base point changes significantly in all directions. The core of the image background base algorithm is therefore to judge the change in the gray scale of each pixel. Accordingly, an image base point has two characteristics: (1) a large gray-level difference from its neighborhood points; and (2) a dramatic change in the gray value at the point itself. For the base point and inner point gray scale characteristics, see Figure 1.

2.1.2. Algorithm Steps

In this paper, a method to extract the background base of an image is proposed. According to the base point characteristics, the brightness changes observed by the human eye are transformed into a data form that can be recognized by the machine, and the SVM algorithm is then used to classify the pixel sample points and extract the background base of the image. The algorithm steps are as follows (a brief implementation sketch is given after Step 6):
Step 1: Input the image F with m pixels and preprocess it. Convert the color image to grayscale using the weighted average method, and use a Gaussian filter to smooth the image and remove noise points.
Step 2: For each pixel (x, y), calculate the gray difference I_x,y between the pixel and its eight neighborhood pixels.
Step 3: Calculate the gradient of each pixel (x, y). Extending the traditional gradient operator from two to eight directions, and weighting the convolution templates according to the distance from the pixel, gives eight directional convolution templates D_n. Each template is convolved with the image to obtain the gradient G_n in the corresponding direction, and the absolute values of the eight directional gradients are summed to obtain the gradient value T_x,y of each pixel.
Step 4: Each pixel is used as a sample, and S denotes the set of pixel samples. Each pixel sample (x, y) has two condition factors, I_x,y and T_x,y. The result factor g is the category label of the pixel: 1 denotes a basis point and 0 denotes an inner point. The factor analysis table of the pixel feature values is shown in Table 1.
Step 5: Using the condition factors I_x,y and T_x,y, classify the pixels with the SVM binary classification algorithm and distinguish the base points from the inner points.
Step 6: Output the set of basis points to obtain the background base S* containing the image features.
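The following Python sketch illustrates Steps 1-6. It is a minimal illustration under stated assumptions, not the authors' implementation: the function name extract_background_base, the Gaussian parameter, the use of a plain Sobel magnitude in Step 3 (the eight-direction version is sketched in Section 3.2), the RBF kernel, and the hand-labelled training subset are all assumptions.

```python
import numpy as np
from scipy import ndimage
from sklearn.svm import SVC

def extract_background_base(rgb, labeled_idx, labels):
    # Step 1: weighted-average grayscale conversion and Gaussian smoothing.
    gray = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    gray = ndimage.gaussian_filter(gray, sigma=1.0)

    # Step 2: I_x,y, the sum of absolute gray differences to the 8 neighbours.
    gray_diff = np.zeros_like(gray)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx or dy:
                gray_diff += np.abs(gray - np.roll(np.roll(gray, dx, axis=0), dy, axis=1))

    # Step 3: gradient magnitude T_x,y. A plain Sobel magnitude is used here as a
    # placeholder; the eight-direction templates of Section 3.2 can be substituted.
    grad_mag = ndimage.generic_gradient_magnitude(gray, ndimage.sobel)

    # Steps 4-5: every pixel is a sample with condition factors (I, T); an SVM trained
    # on a labelled subset (1 = base point, 0 = inner point) classifies all pixels.
    X = np.stack([gray_diff.ravel(), grad_mag.ravel()], axis=1)
    g = SVC(kernel="rbf").fit(X[labeled_idx], labels).predict(X)

    # Step 6: the background base S* is the set of pixels labelled 1.
    return np.argwhere(g.reshape(gray.shape) == 1)
```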

3. Image Background Base Extraction Algorithm

3.1. Image Preprocessing

The main purpose of image preprocessing [4] is to eliminate useless information in an image and enhance useful information; different treatments are selected according to different goals. To obtain the gray change of each pixel, the image is first converted to grayscale. Because the weighted average method better reflects the human eye's different sensitivity to different colors, this paper uses the weighted average method to gray the images. To avoid noise interfering with the image base point detection, a Gaussian filter is then applied to smooth the image [5].
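A short preprocessing sketch is given below, using OpenCV; the library choice, file name, kernel size, and sigma are illustrative assumptions, since the paper does not state them.

```python
import cv2

img = cv2.imread("house.png")                      # BGR colour image (file name is illustrative)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)       # weighted-average grayscale
                                                   # (0.299 R + 0.587 G + 0.114 B)
smooth = cv2.GaussianBlur(gray, (5, 5), 1.0)       # Gaussian smoothing removes noise points
```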

3.2. Calculation of the 8-Neighborhood Gray Difference and Gradient

According to the characteristics of the base point, the difference between the gray value of a base point and the gray values of its neighborhood points is large. Therefore, the gray difference between each pixel and its neighborhood is obtained by differencing the gray value of the pixel with the gray values of the remaining pixels in its neighborhood.
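A sketch of this 8-neighborhood gray-difference feature I_x,y follows: the sum of absolute differences between each pixel and its eight neighbors. Edge pixels are handled by replicate padding, which is an assumption; the paper does not specify border treatment.

```python
import numpy as np

def neighborhood_gray_difference(gray):
    # Pad by one pixel so that border pixels also have eight neighbours.
    padded = np.pad(gray.astype(float), 1, mode="edge")
    diff = np.zeros_like(gray, dtype=float)
    h, w = gray.shape
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            shifted = padded[1 + dx:1 + dx + h, 1 + dy:1 + dy + w]
            diff += np.abs(gray - shifted)          # accumulate |centre - neighbour|
    return diff                                     # I_x,y for every pixel
```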
In factor space image processing, certain pixels of the image are defined as its base points, and the gradient is used to measure the intensity of the gray value change. The gray value near the base point in Figure 2a increases significantly; taking the first derivative of the gray value gives the gradient at the base point. As shown in Figure 2b, the gradient reaches its maximum value at the base point.
Calculating the pixel gradient during image basis point extraction requires the first derivative of the gray scale of each pixel. To simplify this calculation, templates of different sizes can be slid over the image and convolved with the gray values adjacent to each pixel, giving an approximate gradient that represents the degree of gray-scale change at that pixel.
The gradients obtained with different filter templates differ. Both the Prewitt operator [6] and the Sobel operator [7] use a 3 × 3 convolution kernel, but unlike the Prewitt operator, the Sobel operator assigns weights according to pixel position, giving a higher weight to the pixels closer to the template center, while the Prewitt operator does not consider the distance from a pixel to the center point [8]. The Sobel operator therefore works better than the Prewitt operator, and it is used here to calculate the pixel gradient.
Since the Sobel operator only calculates the gray gradients in the vertical and horizontal directions, basis point detection in other directions is not ideal and basis point information is lost. To overcome this limitation, diagonal and anti-diagonal convolution templates are added to the traditional Sobel operator, giving a total of eight directions. The weights of the new operator are assigned according to the distance from the center pixel: the closer the distance, the greater the weight. The gradient value is then calculated for each pixel. The eight directional convolution templates D_n (n = 1, 2, …, 8) are shown in Figure 3.
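One possible construction of the eight-direction templates is sketched below: the vertical Sobel kernel and a 45-degree diagonal kernel, each rotated in 90-degree steps, giving eight kernels spaced 45 degrees apart. The exact weights are an assumption; the paper only states that closer pixels receive larger weights (see Figure 3 for the published templates).

```python
import numpy as np
from scipy.ndimage import convolve

D_straight = np.array([[-1, 0, 1],
                       [-2, 0, 2],
                       [-1, 0, 1]], dtype=float)      # adjacent pixels weighted 2, diagonals 1
D_diagonal = np.array([[ 0,  1, 2],
                       [-1,  0, 1],
                       [-2, -1, 0]], dtype=float)      # 45-degree variant

templates = [np.rot90(D_straight, k) for k in range(4)] + \
            [np.rot90(D_diagonal, k) for k in range(4)]   # D_1 ... D_8

def eight_direction_gradient(gray):
    # T_x,y: sum of absolute responses of the eight directional convolutions.
    return sum(np.abs(convolve(gray.astype(float), D)) for D in templates)
```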

3.3. Classifier Selection

Dividing the pixels into base points that reflect the image features and non-basis points that do not is a binary classification problem. An SVM is a typical supervised classifier for two-class problems. An SVM uses kernel functions to map the original data into a high-dimensional space, thereby transforming the original space into a higher-dimensional one [9]. The pixels are therefore classified with an SVM as the base classifier. According to the drastic gray-scale change at the base points of the image, sample points with large feature values are labeled 1 and sample points with smaller feature values are labeled 0. The spatial distribution of the base point and inner point feature values then gives the background base containing the image features, see Figure 4.
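The sketch below illustrates this classification stage: pixels with clearly large feature values are labeled 1 (base point) and pixels with clearly small feature values are labeled 0 (inner point), and an SVM trained on these confident samples then classifies all pixels. The percentile thresholds and the RBF kernel are assumptions not stated in the paper.

```python
import numpy as np
from sklearn.svm import SVC

def classify_pixels(gray_diff, grad_mag):
    X = np.stack([gray_diff.ravel(), grad_mag.ravel()], axis=1)
    score = X.sum(axis=1)
    hi, lo = np.percentile(score, 95), np.percentile(score, 50)
    train = (score >= hi) | (score <= lo)            # keep only confident samples
    y = (score[train] >= hi).astype(int)             # 1 = base point, 0 = inner point
    clf = SVC(kernel="rbf").fit(X[train], y)
    return clf.predict(X).reshape(gray_diff.shape)   # per-pixel label g(x, y)
```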

4. Experimental Analysis

Numerical experiments were designed to verify the effectiveness and stability of the image background base algorithm. Experiment one tested the effectiveness of the algorithm on the Power and House images. The two images were converted to grayscale and smoothed in preprocessing, the feature values of each pixel were calculated, the pixels were classified with the SVM, and the detected base points were marked in the original images. The results verify that the image background base algorithm can identify the basic information of an image. On the whole, the algorithm performs well in extracting image base points, which provides an important tool for applying the background base theory to the field of image vision.
Experiment two tested the stability of the image background base algorithm. Because few test images have a clearly defined background base, it is difficult to use accuracy to measure the extraction effect. This paper therefore adopts the repetition rate of the base points to verify the stability of the algorithm: rotation and zoom transformations were applied, and the background base was calculated for both kinds of transformed images. The experimental results show that the algorithm can extract the image base points and obtain the basic image information; the rotation experiment verifies the good rotation invariance of the eight directions set by the algorithm; and the zoom experiment verifies the effectiveness and robustness of the image background base point extraction.
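One possible way to compute such a repetition rate is sketched below, as an assumption, since the paper does not give the exact formula: extract base points from the original and the rotated image, rotate the detections back, and count how many coincide with the original base points within a small tolerance. The callable detect_base_points is hypothetical and stands in for the extraction algorithm, returning a boolean mask of base points.

```python
import numpy as np
from scipy.ndimage import binary_dilation
from skimage.transform import rotate

def repetition_rate(image, detect_base_points, angle=30.0, tol=1):
    mask_orig = detect_base_points(image)                       # H x W boolean mask
    rotated = rotate(image, angle, preserve_range=True)
    mask_rot = detect_base_points(rotated)
    # Rotate the detections back into the original frame (nearest neighbour, no blending).
    mask_back = rotate(mask_rot.astype(float), -angle, order=0, preserve_range=True) > 0.5
    # Tolerate small localisation errors by dilating the original detections.
    tolerant = binary_dilation(mask_orig, iterations=tol)
    repeated = np.logical_and(mask_back, tolerant).sum()
    return repeated / max(int(mask_back.sum()), 1)
```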
In the future, a local-maximum selection step can be added to the image background base algorithm to obtain a finer, more accurate background base. The algorithm incorporates a machine learning method, which requires training on a large number of images. With further development, the image background base algorithm can become a more mature tool for image processing in factor space.

Author Contributions

Conceptualization, P.Z. and X.B.; methodology, P.Z. and X.B.; software, X.B.; validation, P.Z. and X.B.; formal analysis, X.B.; investigation, P.Z.; writing—original draft preparation, X.B.; writing—review and editing, P.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science and Technology Research Funding Project of the Education Department of Liaoning Province (Project No. LJ2019JL019): Research on classification algorithm and knowledge expression based on factor space theory.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Written informed consent has been obtained from the patient(s) to publish this paper.

Data Availability Statement

http://data.stats.gov.cn/index.htm (accessed on 1 July 2023).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, P.Z. Fuzzy set and fuzzy set category. Math. Prog. 1982, 1, 1–18. [Google Scholar]
  2. Wang, P.Z. Factor space theory—The mathematical foundation of the mechanistic artificial intelligence theory. J. Intell. Syst. 2018, 13, 37–54. [Google Scholar]
  3. Wang, P.Z. Factor Space and Data Science. J. Liaoning Eng. Tech. Univ. (Nat. Sci. Ed.) 2015, 34, 273–280. [Google Scholar]
  4. Heseltine, T.; Pears, N.; Austin, J. Evaluation of image preprocessing techniques for eigenface-based face recognition. Proc. SPIE-Int. Soc. Opt. Eng. 2002, 4875, 677–685. [Google Scholar]
  5. Yao, C.; Yang, X.; Zhao, M. Ultra-fast target positioning processor facing the SPAD image sensor. J. Photonics 2022, 51, 293–303. [Google Scholar]
  6. Zhang, H.; Zhu, Q.; Fan, C.; Deng, D. Image quality assessment based on Prewitt magnitude. AEU Int. J. Electron. Commun. 2013, 67, 799–803. [Google Scholar] [CrossRef]
  7. Jibrin, B.; Bello-Salau, H.; Umoh, I.J.; Onumanyi, A.J.; Salawudeen, A.T.; Yahaya, B. Development of hybrid automatic segmentation technique of a single Leaf from overlapping leaves image. J. ICT Res. Appl. 2021, 14, 257–273. [Google Scholar]
  8. Razali, A.; Ismail, M.F. Application of gradient-based edge detection on defined surface feature boundary. Ann. Emerg. Technol. Comput. 2021, 5, 129–136. [Google Scholar] [CrossRef]
  9. Luo, Q. Face recognition based on PCA and SVM. Stat. Appl. 2022, 11, 10–18. [Google Scholar]
Figure 1. Grayscale feature of image base point.
Figure 2. Gradient change at the base point of the image. (a) Gray value of pixels in column y, row x. (b) Point gradient in column y, row x.
Figure 3. Convolution template in 8 directions.
Figure 4. Two-class pixel classification based on the SVM algorithm.
Table 1. Analysis of eigenvalue factors of pixel points.
Sample S | Condition factors F: I_x,y, T_x,y | Result factor g
s1 | f_j(s1) = u_1 | g(s1) ∈ {0, 1}
s2 | f_j(s2) = u_2 | g(s2) ∈ {0, 1}
… | … | …
sm | f_j(sm) = u_m | g(sm) ∈ {0, 1}
