Article

Yarn Angle Detection of Glass Fiber Plain Weave Fabric Based on Machine Vision

1 Weihai Research Institute, Harbin University of Science and Technology, Weihai 264200, China
2 Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, Harbin University of Science and Technology, Harbin 150080, China
3 School of Automation, Harbin University of Science and Technology, Harbin 150080, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(7), 2937; https://doi.org/10.3390/app14072937
Submission received: 26 December 2023 / Revised: 25 March 2024 / Accepted: 29 March 2024 / Published: 30 March 2024

Abstract

To address the issue of low accuracy in the yarn angle detection of glass fiber plain weave fabrics, which significantly impacts the quality and performance of the final products, a machine vision-based method for the yarn angle detection of glass fiber fabrics is proposed. The method pre-processes the image with brightness calculation, threshold segmentation, and skeleton extraction to identify the feature region, and then performs line segment detection on this region using the Hough transform. The concept of a "line segment evaluation index" is introduced and used as a criterion for assessing the quality and relevance of detected line segments. Moreover, the contours of the areas formed by the extrusion of warp and weft yarns are reconstructed by combining center-of-mass extraction with morphological operations and are used to accurately determine the yarn angle. Tested under a range of challenging scenarios, including varied lighting conditions, fabric densities, and levels of image noise, the method demonstrates robust stability and maintains high accuracy. These tests mimic real-world manufacturing environments, where factors such as ambient light changes and material inconsistencies can affect the quality of image capture and analysis. The proposed method achieves high accuracy, as shown by the mean squared error (MSE) and a Pearson's r of 0.931. By successfully handling these complexities, the proposed machine vision-based approach offers a significant enhancement in the precision of yarn angle detection for glass fiber fabric manufacturing, thus ensuring improved quality and performance of the final products.

1. Introduction

Glass fiber plain weave fabric is formed by interweaving warp and weft yarns; the spacing between the warp and weft yarns is relatively uniform, the structure is highly stable, and the fabric is commonly used in numerous industries to make composite materials. The yarn angle (the angle between the warp and weft yarns) is an important parameter affecting the mechanical properties of woven composites [1,2,3,4,5]. Analyzing the yarn angles also makes it possible to determine whether there are folds or foreign objects in the glass fiber plain weave fabric. The research of Liang showed that variations in yarn angle can affect the longitudinal tensile modulus and strength of the fabric [6]. During the stretching process of the fabric, the warp and weft yarns may deviate or overlap, and the angle between the yarns may change to some extent, thus affecting the performance of the composite material [7]. A change in the yarn angle, that is, the angle between the warp and weft yarns in the fabric, alters the relative arrangement of the warp and weft. When the fabric is used in composite material molding, this ultimately affects the distribution of fibers in the composite and, thereby, its mechanical properties.
Traditional yarn angle detection is usually conducted by manual measurement, which is inefficient; moreover, the surface of glass fiber plain weave fabric exhibits specular reflection, so prolonged observation of the fabric also strains the eyes. X-ray computed tomography (CT) scanning, intensity methods, and current methods are also used to measure fiber orientation. However, these methods inevitably affect the human body, due to radiation, and contact measurements can damage the material. The research of Pourdeyhimi showed that angle detection based on the properties of the two-dimensional Fourier transform [8] is suitable for unidirectional fabrics, with high accuracy and good stability, but is not suitable for measuring the yarn angle. Şerban obtained the minimum circumscribed rectangle of the target region by edge detection and the rotating calipers algorithm, from which the fiber orientation was calculated [9]. The Hough transform is widely used to obtain the orientation and angle information of a target object after image processing [10,11,12,13].
Some studies also use object detection to locate a specific part and obtain the angle from the change in this part [14]. Alamdarlo improved the accuracy of pavement texture depth measurement by investigating the camera pole angle using the photometric stereo vision method [15]; when the angle between the light source and the camera changes, the acquired image also changes. Shen used image fusion to restore the target area and obtained the yarn angle of carbon fiber fabric through edge detection [16].
Currently, scholars' research mainly focuses on the angle detection of carbon fibers, and there has been no research on the angle detection of glass fibers. The specular reflection on glass fiber plain weave fabric interferes with detection, so a method is proposed here to detect the yarn angle of glass fiber plain weave fabric that can effectively reduce the effect of specular reflection. Line segment detection is performed on the initial image by Otsu threshold segmentation [17], skeleton extraction [18], and the Hough transform, and the confidence level of each line segment is calculated. When the confidence level meets the requirement, the yarn angle is obtained by calculating the angle between the line segments. When the confidence level does not meet the requirement, the contour of the region formed by the extrusion of the warp and weft yarns is reconstructed by calculating centers of mass and applying morphological operations, and the yarn angle is obtained by analyzing two adjacent edges of the contour.

2. Design of Detection System and Materials

The image acquisition and processing system consists of a camera, a lens, and a light source. Among them, the most critical components are the camera and the light source. The camera used was a CMOS (Complementary Metal Oxide Semiconductor) camera from China Da Heng. Some parameters of the camera are shown in Table 1.
The light source was rectangular, 30 cm × 30 cm, with a maximum brightness of 600 nit and a minimum brightness of 300 nit. The acquisition system is shown in Figure 1.
For this study, the glass fiber plain weave fabric was sourced from China Taishan Fiberglass Inc. (Taian, China). The material parameters are shown in Table 2.
In practical use, it is necessary to adjust the brightness of the light source and the distance between the lens and the target object to achieve the best imaging effect. We adjusted the focal length and aperture knobs of the industrial camera to ensure clear imaging. Through testing, it was determined that placing the fabric on a transparent sheet, approximately 1 to 10 cm away from the light source, yields images that meet the requirements for subsequent detection.

3. Yarn Angle Detection Method

In this study, we aimed to measure the angle between warp and weft yarns in textiles. In Figure 2, the areas marked as 1, 2, and 3 are referred to as “feature regions” throughout this document. Specifically, feature region 2 is shaped like a rectangle, with its length and width dimensions provided. The central axis within this region, aligned parallel to the rectangle’s longer side, helps define the direction of region 2. These regions are colored differently to make them easily distinguishable.
To differentiate between the warp and weft yarns, it is necessary to categorize the feature regions based on their orientations. In practice, the angle between the warp and weft yarns is always greater than 10°. The angle between the central axes of feature regions 1 and 2 is greater than 10°, indicating that they belong to different directional categories. Conversely, the angle between regions 1 and 3 is less than 10°, suggesting they share a similar direction and, thus, are classified into the same category. These categorizations help in identifying “directionally identical regions” (angles less than 10°) and “directionally different regions” (angles greater than 10°).
Additionally, the term “light spot” refers to the brightest parts of the image, located at the four corners of the shape formed by the overlapping warp and weft yarns. This terminology and classification scheme facilitates the precise measurement and analysis of textile yarn orientations.
Figure 2. Feature regions and light spots.

3.1. Image Preprocessing

The obtained image contains a lot of irrelevant information; to remove this irrelevant information and improve the processing speed, we preprocessed the image. The preprocessing stage includes brightness calculation, Otsu threshold segmentation, skeleton extraction, Boolean threshold segmentation, center-of-mass extraction, and morphological operations.

3.1.1. Image Cropping

In the process of fabric movement or stretching, the yarn angles of different regions may vary greatly. To obtain a more accurate yarn angle, the images were appropriately cropped, and then the angle detection was performed separately for each sub-image after cropping. When cropping, we ensured that each small image block contains at least four bright spots.
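As a concrete illustration of this cropping step, the following Python sketch splits an image into a regular grid of sub-images (a 5 × 5 grid is used in the experiments of Section 4). The function name and the use of NumPy slicing are illustrative choices rather than the authors' implementation, and checking that each block contains at least four light spots is left to the caller.

```python
import numpy as np

def crop_grid(image: np.ndarray, rows: int = 5, cols: int = 5):
    """Split an image into rows x cols sub-images so that the yarn angle can
    be detected separately in each block."""
    h, w = image.shape[:2]
    bh, bw = h // rows, w // cols
    return [image[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            for r in range(rows) for c in range(cols)]
```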

3.1.2. Brightness Calculation

A color digital image is described by its height, width, and number of channels. Under the RGB model, three channels are used, where R, G, and B are the three primary colors red, green, and blue, respectively, and each primary color is quantized into 256 brightness levels (0–255). A color image can be converted into a grayscale image by combining R, G, and B in a specific way [19]; the grayscale image has only one channel and contains less information, yet it still provides enough information for angle detection. The color image is converted to a grayscale image by a brightness calculation, where the brightness is given by:
$$Y(i,j) = 0.299\,R(i,j) + 0.587\,G(i,j) + 0.114\,B(i,j), \quad i = 1,\dots,m;\ j = 1,\dots,n$$
where  m  is the number of pixels included in the horizontal direction of the input image,  n  is the number of pixels included in the vertical direction of the input image, and  Y  represents the brightness value of the grayscale image.
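A minimal sketch of this brightness calculation in Python/NumPy is shown below; it assumes the input array is in RGB channel order (OpenCV loads images as BGR, in which case the channels would need to be reordered first), and the function name is illustrative.

```python
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Luminance conversion Y = 0.299 R + 0.587 G + 0.114 B, applied per pixel."""
    rgb = rgb.astype(np.float64)
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return np.clip(y, 0, 255).astype(np.uint8)
```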

3.1.3. Otsu Threshold Segmentation

Otsu's method is one of the classic threshold segmentation algorithms [17]: it automatically selects a threshold and converts a grayscale image into a binary image with only two brightness levels, 0 and 1, where 0 means black and 1 means white. The binary image contains less information and can effectively separate the target from the background. Figure 3a shows the initial image obtained by the acquisition system, which is transformed into the image shown in Figure 3b by brightness calculation, and then the image shown in Figure 3c is obtained by Otsu threshold segmentation.
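For reference, Otsu segmentation is available directly in OpenCV; the sketch below (with an assumed function name) returns a 0/1 image matching the convention of 0 for black and 1 for white used here.

```python
import cv2
import numpy as np

def otsu_segment(gray: np.ndarray) -> np.ndarray:
    """Binarize a grayscale image with an automatically selected Otsu threshold."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return (binary // 255).astype(np.uint8)
```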

3.1.4. Skeleton Extraction

Skeleton extraction is an operation for the connected domain in binary images [18], where the white connected region is considered the target region and the black region is the background. The skeleton of a rectangular connected region is its central axis in the long direction; the skeleton of a square is its centroid, the skeleton of a circle is its circle center, the skeleton of a straight line is itself, and the skeleton of an isolated point is also itself. The skeleton is obtained to highlight the main structure and shape information of the object and remove redundant information. For the feature region, we can consider it as a rectangle, and after the skeleton extraction operation, we can obtain its central axis. Figure 4 shows the image obtained after the skeleton extraction of the binary image.
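One way to obtain such a skeleton is the thinning routine in scikit-image, as in the sketch below; this is an assumed implementation choice rather than the authors' code (the paper cites the Zhang–Suen thinning algorithm [18]).

```python
import numpy as np
from skimage.morphology import skeletonize

def extract_skeleton(binary01: np.ndarray) -> np.ndarray:
    """Thin the white connected regions of a 0/1 image to one-pixel-wide skeletons."""
    return skeletonize(binary01.astype(bool)).astype(np.uint8)
```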

3.1.5. Morphological Open Operation and Center of Mass

The morphological open operation is a filter based on geometric operations [20]. It can remove isolated dots and burrs, while the total position and shape do not change.
For a connected region, its center of mass is its geometric center. Assuming that the coordinates of all pixels in the connected region of the input image are $(x_1, y_1), \dots, (x_k, y_k)$, the formula for calculating the center of mass is given by the following equation:
$$\tilde{x} = \frac{1}{k}\sum_{i=1}^{k} x_i, \qquad \tilde{y} = \frac{1}{k}\sum_{i=1}^{k} y_i$$
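A possible implementation of the opening and center-of-mass step with OpenCV is sketched below; the kernel size is an illustrative parameter, and the centroids returned by connectedComponentsWithStats correspond to the formula above.

```python
import cv2
import numpy as np

def open_and_centroids(binary01: np.ndarray, ksize: int = 5):
    """Remove isolated dots/burrs by morphological opening, then return the
    center of mass (geometric center) of every remaining connected region."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    opened = cv2.morphologyEx(binary01 * 255, cv2.MORPH_OPEN, kernel)
    # Label 0 is the background, so its centroid row is skipped.
    _, _, _, centroids = cv2.connectedComponentsWithStats(opened)
    return opened // 255, centroids[1:]
```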

3.2. Line Segment Detection

3.2.1. Hough Transform Straight Line Detection Algorithm

The Hough transform converts a point in the Cartesian coordinate system into a sinusoidal curve in Hough space [21]. If n points are collinear in the Cartesian coordinate system, the corresponding sinusoidal curves intersect at a single point, so by finding the coordinates $(r, \theta)$ of that point we can obtain the equation of the line on which these n points lie, as given by the following equation:
$$r = x\cos\theta + y\sin\theta$$
As shown in Figure 5, $r$ is the shortest distance from the origin to the line segment, $\theta$ is the angle between the perpendicular on which $r$ lies and the positive x-axis, and $(x, y)$ are the coordinates of a point on this line.
Since the Hough transform detects multiple line segments in a feature region, but only one line segment is needed per region, the minor segments need to be filtered out first so that the best segments can be kept. A length threshold was set to one-third of the average length of the detected line segments, and all line segments shorter than this threshold were deleted. The results of Hough transform detection are shown in Figure 6.
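The sketch below illustrates this detection-and-filtering step with OpenCV's probabilistic Hough transform; the rho/theta resolution, vote threshold, minimum length, and maximum gap are illustrative values rather than the paper's settings.

```python
import cv2
import numpy as np

def detect_segments(skeleton01: np.ndarray) -> np.ndarray:
    """Detect line segments on the skeleton image and drop those shorter than
    one third of the average detected length."""
    segs = cv2.HoughLinesP(skeleton01 * 255, rho=1, theta=np.pi / 180,
                           threshold=20, minLineLength=10, maxLineGap=3)
    if segs is None:
        return np.empty((0, 4), dtype=int)
    segs = segs.reshape(-1, 4)                                   # (x1, y1, x2, y2)
    lengths = np.hypot(segs[:, 2] - segs[:, 0], segs[:, 3] - segs[:, 1])
    return segs[lengths >= lengths.mean() / 3.0]
```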

3.2.2. Confidence Level

After applying the Hough transform, multiple line segments were obtained, from which the most representative line segment needed to be selected to replace the central axis of the feature region. Since the angle between each line segment and the central axis cannot be known directly, we took an alternative approach and calculated a measure of how well each line segment matches the central axis, which is called the $CL$ (confidence level).
The line segments detected after preprocessing are displayed in the initial image and extended to the image boundary, which will be called extension lines. First, the intersection of the extension line with the edge of the image was calculated.
As shown in Figure 7a, the width of the input image is m pixels, the height is n pixels, and the coordinate origin of the image is (1,1), located at the upper left corner of the image. The extension line will not lie on the edge of the image; as shown in Figure 7b, there are six cases in which the extension line intersects the edge of the image.
Assume that the equation in which the extension line lies is given by:
$$r_1 = x\cos\theta_1 + y\sin\theta_1$$
Assume that the extension line and all four edges have intersections, and that these intersections are given by the following four equations:
$$x_a = 1, \quad y_a = \frac{r_1 - \cos\theta_1}{\sin\theta_1}$$
$$x_b = \frac{r_1 - n\sin\theta_1}{\cos\theta_1}, \quad y_b = n$$
$$x_c = m, \quad y_c = \frac{r_1 - m\cos\theta_1}{\sin\theta_1}$$
$$x_d = \frac{r_1 - \sin\theta_1}{\cos\theta_1}, \quad y_d = 1$$
When $1 \le y_a \le n$ is satisfied, there is an intersection between the extension line and side a; when $1 \le x_b \le m$ is satisfied, there is an intersection with side b; when $1 \le y_c \le n$ is satisfied, there is an intersection with side c; and when $1 \le x_d \le m$ is satisfied, there is an intersection with side d.
Through the above process, the two intersection points of the extension line and the image were calculated. The number of pixels contained in the line segment where these two points are located was calculated, and the sum of the luminance values of these pixels (the luminance value of the binary image) was calculated to obtain the number of pixels located in the feature area. The confidence level was obtained by dividing the sum of the luminance values by the total number of pixels.  C L  can be expressed by the following equation:
$$CL = \frac{P_B}{P_A}$$
where $P_A$ is the total number of pixels along the extension line and $P_B$ is the sum of the brightness values of those pixels in the binary image, i.e., the number of pixels lying in the feature region.
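A sketch of this confidence-level computation is given below. It assumes the line is described by its $(r, \theta)$ parameters in the 1-based image coordinates used above, samples the extension line pixel by pixel, and divides the sum of binary brightness values by the number of sampled pixels; the function name and the simple sampling scheme are illustrative assumptions.

```python
import numpy as np

def confidence_level(binary01: np.ndarray, r: float, theta: float) -> float:
    """CL = P_B / P_A for the extension line r = x cos(theta) + y sin(theta)."""
    h, w = binary01.shape
    s, c = np.sin(theta), np.cos(theta)
    pts = []
    if abs(s) > 1e-9:                       # intersections with sides x = 1 and x = m
        for x in (1.0, float(w)):
            y = (r - x * c) / s
            if 1 <= y <= h:
                pts.append((x, y))
    if abs(c) > 1e-9:                       # intersections with sides y = 1 and y = n
        for y in (1.0, float(h)):
            x = (r - y * s) / c
            if 1 <= x <= w:
                pts.append((x, y))
    if len(pts) < 2:
        return 0.0
    (x1, y1), (x2, y2) = pts[0], pts[-1]
    n = int(max(abs(x2 - x1), abs(y2 - y1))) + 1          # P_A: pixels on the line
    xs = np.linspace(x1, x2, n).round().astype(int) - 1   # back to 0-based indices
    ys = np.linspace(y1, y2, n).round().astype(int) - 1
    return float(binary01[ys, xs].sum()) / n              # P_B / P_A
```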
For all detected line segments in the image, the most suitable two line segments were selected under the following conditions:
(1) Each line segment detected by the Hough transform carries both $\theta$ and r information, and the lines were divided into two groups according to $\theta$, representing the warp and weft yarns, respectively: if the angle between two line segments was less than 10°, they were placed in the same group; otherwise, they were placed in different groups;
(2) From each group, the line segment with the highest confidence level was selected;
(3) If there were several line segments with a $CL$ of 1, the average value of $\theta$ was calculated for these line segments and the line segment closest to that average was selected;
(4) A threshold value of $CL$ was set; line segments were eligible only if their $CL$ was greater than this threshold.
Condition (3) ensured that the output line segment was not the one deviating most from the ideal line segment. When the set threshold was not met, the region proceeded to contour reconstruction.
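The grouping-and-selection logic of conditions (1), (2), and (4) can be sketched as follows; each detected segment is represented by its (θ in degrees, CL) pair, and the CL threshold value is an assumed placeholder since the paper does not state it numerically. Condition (3), averaging θ over several segments with CL = 1, is omitted for brevity.

```python
def select_warp_weft(candidates, cl_threshold=0.8):
    """candidates: list of (theta_deg, cl) pairs, one per detected segment.
    Segments whose orientations differ by less than 10 degrees share a group;
    the best segment per group is kept, or None if its CL misses the threshold
    (in which case contour reconstruction is used instead)."""
    groups = []
    for theta, cl in candidates:
        for group in groups:
            ref = group[0][0]
            if min(abs(theta - ref), 180.0 - abs(theta - ref)) < 10.0:
                group.append((theta, cl))
                break
        else:
            groups.append([(theta, cl)])
    selected = []
    for group in groups:
        theta, cl = max(group, key=lambda pair: pair[1])   # highest CL in the group
        selected.append(theta if cl >= cl_threshold else None)
    return selected
```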

3.2.3. Contour Reconstruction

The light spots are located at the four vertices of the extrusion area, and the contour of the area formed by the extrusion of the warp and weft yarns can be obtained by connecting the center of mass of the four light spots. The luminance value of the light spot is larger than that of other regions, so Boolean threshold segmentation was used here to extract the light spot. The binary image shown in Figure 8a was obtained after Boolean threshold segmentation and morphological open operation. After extracting its center of mass, the image shown in Figure 8b was obtained.
To reconstruct the contour of the area formed by the extrusion of the warp and weft yarns, the center of mass needs to be connected. As shown in Figure 9, there are multiple connections between centers of mass, and the following steps needed to be followed to obtain the correct results:
Suppose there are n centers of mass. Let the average brightness value of all pixels on the line between two points be $\bar{Y}$ and the distance between the two points be d.
(1) Two points were connected when $\bar{Y}$ was greater than the set brightness threshold and d was less than the set distance threshold;
(2) Once a point had been successfully connected four times, it no longer participated in the process;
(3) For two centers of mass satisfying conditions (1) and (2), with coordinates $(x_1, y_1)$ and $(x_2, y_2)$, respectively, the vector $\mathbf{V}$ formed by these two points is given by the following equation:
$$\mathbf{V} = (x_1 - x_2,\ y_1 - y_2)$$
We followed the steps above to obtain the image shown in Figure 10.
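A compact sketch of this connection procedure is shown below; the brightness and distance thresholds are placeholders (the paper sets such thresholds but does not report their values here), the grayscale image is indexed as (row, column), and each centroid is an (x, y) pair from the previous step.

```python
import numpy as np

def connect_centroids(gray, centroids, y_thresh=120.0, d_thresh=80.0):
    """Connect centroid pairs whose connecting line is bright enough and short
    enough; a point that already has four connections is no longer used.
    Returns the list of edges together with their vectors V = (x1-x2, y1-y2)."""
    edges = []
    counts = np.zeros(len(centroids), dtype=int)
    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            if counts[i] >= 4 or counts[j] >= 4:
                continue
            (x1, y1), (x2, y2) = centroids[i], centroids[j]
            d = np.hypot(x2 - x1, y2 - y1)
            n = int(d) + 1
            xs = np.linspace(x1, x2, n).round().astype(int)
            ys = np.linspace(y1, y2, n).round().astype(int)
            mean_brightness = gray[ys, xs].mean()          # average Y on the line
            if mean_brightness > y_thresh and d < d_thresh:
                edges.append(((i, j), (x1 - x2, y1 - y2)))
                counts[i] += 1
                counts[j] += 1
    return edges
```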

3.3. Yarn Angle Calculation

When two suitable line segments were selected from those detected by the Hough transform, the angle between them, referred to as the yarn angle, could be calculated directly, because the Hough transform result contains the $\theta$ of each line segment. We subtracted the $\theta$ values of the two line segments and took the absolute value to obtain the yarn angle. Because the result may be acute or obtuse, the output is unified here as an acute angle.
When the profile is obtained by the above process and the corresponding V for each side is obtained, the angle between any two sides is given by the following equation:
$$\theta = \frac{180}{\pi}\arccos\!\left(\frac{\mathbf{V}_1 \cdot \mathbf{V}_2}{|\mathbf{V}_1|\,|\mathbf{V}_2|}\right)$$
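The corresponding angle computation is a short NumPy function; folding obtuse results back to acute angles matches the paper's convention of reporting the yarn angle as an acute angle, and the function name is illustrative.

```python
import numpy as np

def angle_between(v1, v2) -> float:
    """Angle in degrees between two edge vectors, reported as an acute angle."""
    v1, v2 = np.asarray(v1, dtype=float), np.asarray(v2, dtype=float)
    cos_ang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    theta = np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))
    return theta if theta <= 90.0 else 180.0 - theta
```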

4. Results and Discussion

To verify the effectiveness of the detection, manual measurement was carried out, as shown in Figure 11, to obtain the yarn angle; the angle obtained in this way is noted as $\hat{\theta}$.
As shown in Figure 12, three different angles of glass fiber plain weave fabric were selected, and their yarn angles were obtained by manual measurement and machine vision measurement, respectively.
The input image was cropped into a 5 × 5 grid of sub-images, and Figure 13 shows the results of machine vision-based detection, with the detected angle noted as $\tilde{\theta}$.
The formulas for calculating the absolute and relative errors are given by the following two formulas:
$$e_a = \left|\tilde{\theta} - \hat{\theta}\right|$$
$$e_r = \frac{\left|\tilde{\theta} - \hat{\theta}\right|}{\hat{\theta}}$$
Figure 14 and Figure 15 show the absolute and relative errors of angle detection by machine vision, respectively.
The maximum value of absolute error is 3°, and the maximum value of relative error is 0.0495.
The mean squared error (MSE) [22] is the expected value of the squared difference between the parameter estimate and the true value of the parameter; it evaluates the degree of variability of the data, and when the value of MSE is smaller, it also indicates that the prediction model describes the experimental data with better accuracy. The MSE is given by the following equation:
$$MSE = \frac{1}{N}\sum_{i=1}^{N}\left(\tilde{\theta}_i - \hat{\theta}_i\right)^2$$
where $N = 25$; substituting into the MSE formula gives $M_a = 3.22$, $M_b = 2.92$, and $M_c = 3.52$ for the three samples.
The Pearson correlation coefficient, commonly referred to as Pearson’s r, serves as a statistical indicator quantifying the degree of linear relationship between two variables. This coefficient spans from −1 to 1, where a value of 1 signifies an impeccable positive correlation, −1 denotes an absolute negative correlation, and 0 implies the absence of any linear correlation between the variables. The formula for calculating Pearson’s r is as follows:
$$r = \frac{n\sum xy - \sum x \sum y}{\sqrt{\left(n\sum x^2 - \left(\sum x\right)^2\right)\left(n\sum y^2 - \left(\sum y\right)^2\right)}}$$
where r is the Pearson correlation coefficient, n is the number of data points, x and y are the observed values of the two variables, and ∑ denotes the summation. By calculation, we can obtain r = 0.931.
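Both accuracy measures can be reproduced with a few lines of NumPy, as sketched below for paired lists of detected and manually measured angles (the function name is illustrative).

```python
import numpy as np

def mse_and_pearson(theta_detected, theta_manual):
    """Mean squared error and Pearson's r between detected and manual angles."""
    detected = np.asarray(theta_detected, dtype=float)
    manual = np.asarray(theta_manual, dtype=float)
    mse = np.mean((detected - manual) ** 2)
    pearson_r = np.corrcoef(detected, manual)[0, 1]
    return mse, pearson_r
```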
The combination of relative error, absolute error, and MSE, along with the Pearson correlation coefficient, indicates that the method has high accuracy.
When selecting the appropriate line segment based on the $CL$, a higher $CL$ indicates a better result; however, a result is not necessarily poor when the $CL$ falls below the set threshold.
For the image shown in Figure 16a, one of the feature regions was preprocessed and the Hough transform was applied for line segment detection; the result, shown in Figure 16b, meets our expectations, yet its corresponding confidence level is lower than required. As shown in Figure 16c, in the binary image the white connected region on which this line segment lies has a small area, which leads to the correspondingly low confidence level.
The contour reconstruction of the four images shown in Figure 17a yields the results shown in Figure 17b, and it can be found that the contours can be effectively restored, even for the blurred images.
After designing the angle measurement method described above and conducting actual experiments, the results show that the proposed method can accurately measure the yarn angle of glass fiber fabric. This work lays the foundation for the application of fiber fabrics in the molding process of composite materials, for the characterization of composite material properties, and for accurately describing the relationship between performance and fiber angle in subsequent composite material performance testing.

5. Conclusions

We propose a new method for detecting the yarn angle of glass fiber plain weave fabric that reduces the effects of the specular reflections generated by glass fibers. The preprocessing stage removes redundant information and separates the target from the background. A confidence level is calculated for each detected line segment; when it is within the set threshold, two optimal line segments are obtained to represent the warp and weft yarns, and the yarn angle is obtained by calculating the angle between these two line segments. When the confidence level is lower than the set threshold, the contour of the area formed by the extrusion of warp and weft yarns is reconstructed; this stage can handle images with poor resolution. The combination of relative error, absolute error, mean squared error (MSE), and Pearson's r indicates that this method has high accuracy. The accurate detection of glass fiber yarn angles is beneficial for characterizing the composite material forming process, and it is of great significance for improving the mechanical properties of composite materials and their forming process.

Author Contributions

Conceptualization, J.H.; methodology, J.H.; software, J.H.; validation, J.H., T.W. and M.C.; formal analysis, J.H.; investigation, J.H.; resources, J.H.; data curation, J.H.; writing—original draft preparation, J.H.; writing—review and editing, J.X.; visualization, J.H.; supervision, J.X.; project administration, J.X.; funding acquisition, J.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by grant number 2022YFD2200903, supported by the National Key Research and Development Program, and project ZR2023ME064, supported by the Shandong Provincial Natural Science Foundation.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Aisyah, H.A.; Paridah, M.T.; Khalina, A. Effects of fabric counts and weave designs on the properties of laminated woven kenaf/carbon fibre reinforced epoxy hybrid composites. Polymers 2018, 10, 1320. [Google Scholar] [CrossRef]
  2. Boris, D.; Xavier, L.; Damien, S. The tensile behaviour of biaxial and triaxial braided fabrics. J. Ind. Text. 2018, 47, 2184–2204. [Google Scholar] [CrossRef]
  3. Zhu, H.; Li, D.; Han, W. Experimental and numerical study of in-plane compressive properties and failure of 3D six-directional braided composites with large braiding angle. Mater. Des. 2020, 195, 108917. [Google Scholar] [CrossRef]
  4. Zhang, W.; Yan, S.; Yan, Y. A parameterized unit cell model for 3D braided composites considering transverse braiding angle variation. J. Compos. Mater. 2022, 56, 491–505. [Google Scholar] [CrossRef]
  5. Parmiggiani, A.; Prato, M.; Pizzorni, M. Effect of the fiber orientation on the tensile and flexural behavior of continuous carbon fiber composites made via fused filament fabrication. Int. J. Adv. Manuf. Technol. 2021, 114, 2085–2101. [Google Scholar] [CrossRef]
  6. Liang, B.; Zhang, W.; Gao, S. Analysis of the influence of yarn angle on the mechanical behaviors of cured woven composites. High Perform. Polym. 2020, 32, 975–983. [Google Scholar] [CrossRef]
  7. Denos, B.R.; Sommer, D.E.; Favaloro, A.J. Fiber orientation measurement from mesoscale CT scans of prepreg platelet molded composites. Compos. Part A Appl. Sci. Manuf. 2018, 114, 241–249. [Google Scholar] [CrossRef]
  8. Pourdeyhimi, B.; Dent, R.; Davis, H. Measuring fiber orientation in nonwovens part III: Fourier transform. Text. Res. J. 1997, 67, 143–151. [Google Scholar] [CrossRef]
  9. Şerban, A. Automatic detection of fiber orientation on CF/PPS composite materials with 5-harness satin weave. Fibers Polym. 2016, 17, 1925–1933. [Google Scholar] [CrossRef]
  10. Ahmad, R.; Naz, S.; Razzak, I. Efficient skew detection and correction in scanned document images through clustering of probabilistic hough transforms. Pattern Recognit. Lett. 2021, 152, 93–99. [Google Scholar] [CrossRef]
  11. Ding, F.; Wang, B.; Zhang, Q. Research on a Vehicle Recognition Method Based on Radar and Camera Information Fusion. Technologies 2022, 10, 97. [Google Scholar] [CrossRef]
  12. Lu, H.; Zhao, K.; You, Z.; Huang, K. Angle algorithm based on Hough transform for imaging polarization navigation sensor. Opt. Express 2015, 23, 7248–7262. [Google Scholar] [CrossRef]
  13. Qi, Y.; Li, P.; Xiong, B. A two-step computer vision-based framework for bolt loosening detection and its implementation on a smartphone application. Struct. Health Monit. 2022, 21, 2048–2062. [Google Scholar] [CrossRef]
  14. Sun, Y.; Li, M.; Dong, R. Vision-Based Detection of Bolt Loosening Using YOLOv5. Sensors 2022, 22, 5184. [Google Scholar] [CrossRef]
  15. Alamdarlo, M.N.; Hesami, S. Optimization of the photometric stereo method for measuring pavement texture properties. Measurement 2018, 127, 406–413. [Google Scholar] [CrossRef]
  16. Shen, D.; Wu, Z. A Method for Measuring the Orientation of Carbon Fiber Weave. Fibers Polym. 2021, 22, 3501–3509. [Google Scholar] [CrossRef]
  17. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  18. Zhang, T.Y.; Suen, C.Y. A fast parallel algorithm for thinning digital patterns. Commun. ACM 1984, 27, 236–239. [Google Scholar] [CrossRef]
  19. Kanan, C.; Cottrell, G.W. Color-to-grayscale: Does the method matter in image recognition? PLoS ONE 2012, 7, e29740. [Google Scholar] [CrossRef]
  20. Serra, J. Image Analysis and Mathematical Morphology; Academic Press Inc.: New York, NY, USA, 1982. [Google Scholar]
  21. Duda, R.O.; Hart, P.E. Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM 1972, 15, 11–15. [Google Scholar] [CrossRef]
  22. Sara, U.; Akter, M.; Uddin, M.S. Image quality assessment through FSIM, SSIM, MSE and PSNR—A comparative study. J. Comput. Commun. 2019, 7, 8–18. [Google Scholar] [CrossRef]
Figure 1. Image acquisition system.
Figure 3. Preprocessing stage: (a) initial image; (b) grayscale image; and (c) Otsu threshold segmentation image.
Figure 4. Skeleton extraction.
Figure 5. Parameters of a straight line.
Figure 6. Hough transform detection of line segments.
Figure 7. Image parameters and extension lines: (a) description of image parameters and (b) six cases of intersection of the image edge and the extension line.
Figure 8. Threshold segmentation and center of mass extraction: (a) the result after threshold segmentation and (b) results of the center of mass extraction.
Figure 9. The connection between centers of mass.
Figure 10. The correct connections between the centers of mass.
Figure 11. Manual angle detection.
Figure 12. Three different angles of glass fiber plain weave fabric: (a) Sample 1; (b) Sample 2; and (c) Sample 3.
Figure 13. Machine vision-based inspection results: (a) Result 1; (b) Result 2; and (c) Result 3.
Figure 14. Absolute error.
Figure 15. Relative error.
Figure 16. Explaining low CL with binary images: (a) initial grayscale image; (b) grayscale image detection line segment; and (c) binary image detection line segment.
Figure 17. Contour reconstruction: (a) four grayscale images and (b) contour reconstruction.
Table 1. Camera parameters.
Model: MER-500-14GC
Interface: GigE
Resolution: 2592 (H) × 1944 (V)
Frame rate: 14 fps
Pixel size: 2.2 μm × 2.2 μm
Pixel depth: 8 bit
Exposure time: 36 μs–1 s
Table 2. Material parameters.
Yarn density: 600 tex
Warp density: 35 ± 0.35 yarns/cm
Weft density: 32 ± 0.32 yarns/cm
Mass per unit area: 400 ± 10 g/m²
Yarn diameter (width): ≥1.6 mm (average)

Share and Cite

Hou, J.; Wang, T.; Xu, J.; Cao, M. Yarn Angle Detection of Glass Fiber Plain Weave Fabric Based on Machine Vision. Appl. Sci. 2024, 14, 2937. https://doi.org/10.3390/app14072937
