4.1. Stand-Alone Surround Scanning System
To obtain 3D scans of clothing for simulated wearing quickly, conveniently, and at low cost, we developed a single-machine surround-scanning method. Two conventional configurations were considered. In the first, a fixed camera keeps the 3D sensor stable while a turntable rotates the garment; however, the turntable speed and the 3D image generation algorithm limit throughput, so scanning takes too long. Increasing the turntable speed disturbs the shape stability of soft objects such as clothing and produces more overlapping point clouds. In the second, the camera orbits a stationary garment; this can overcome the above shortcomings, but camera shake during the orbit causes frame loss, ghosting, and other problems.
To address these challenges, this paper introduces a stand-alone automatic scanning system. The system uses a mechanical support and a stepper motor to keep the camera moving at a constant, steady speed, and an improved 3D point cloud generation algorithm to speed up the acquisition of depth images, so that high-quality point cloud data can be collected quickly.
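As a rough illustration of what offline point cloud generation involves, each depth frame can be back-projected into a 3D point cloud with the pinhole camera model. The sketch below is illustrative only; the intrinsic parameters `fx, fy, cx, cy` and the function name are assumptions, not the system's actual code:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) to a 3D point cloud
    using the pinhole camera model; pixels with depth <= 0 are dropped.
    Illustrative sketch only -- not the paper's implementation."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]
```

A full pipeline would additionally register the clouds from successive camera positions into one garment scan.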
4.1.1. Stand-Alone Surround Scanning System Module
An automated 3D clothing scanning system was built for the experiment. The system comprises four modules: a user control platform, a 3D image acquisition module, a rotating support module, and a power control output module. The stand-alone surround scanning system is shown in Figure 3.
As shown in Figure 3, the user control platform includes an image display interface, a 3D point cloud optimization panel, and a motor control panel. The image display interface shows the depth image captured by the 3D camera in real time and, once scanning is complete, the 3D point cloud generated from the depth images. The point cloud optimization panel houses the point cloud optimization buttons, and the motor control panel houses the motor parameter controls. The power control output module includes the power supply, transmission bearing, motor controller, motor driver, and stepper motor: the power supply provides rotational power for the stepper motor; the transmission bearing transmits the stepper motor's rotational force; the motor controller receives motor control signals from the user control interface; and the motor driver converts the digital signal into a pulse signal and sends it to the stepper motor. The rotating support module includes a support member, a rotation member, and a 3D camera angle adjustment member. The support member provides the frame that carries the rotation member and the power control output module; the rotation member rotates the 3D camera, adjusts the camera's position and viewing angle, and provides stable rotational support. The 3D image acquisition module includes a 3D scanner and an offline 3D point cloud generation algorithm.
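The driver's digital-to-pulse conversion implies a simple relationship between the desired orbit period and the pulse rate. A minimal sketch, assuming typical stepper parameters (200 full steps per revolution, 16 microsteps) that are not specified in the paper:

```python
def pulse_frequency(seconds_per_rev, steps_per_rev=200, microsteps=16):
    """Pulse rate (Hz) the motor driver must emit so that one full
    camera orbit takes `seconds_per_rev` seconds. The step counts
    are assumed typical values, not the system's actual settings."""
    return steps_per_rev * microsteps / seconds_per_rev
```

For example, with these assumed parameters, a 32 s orbit corresponds to a 100 Hz pulse train.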
4.1.2. Scanning Effect
To verify the scanning speed of the single-machine scanning system and the quality of the scanned point clouds, different garments were scanned and packaged into 3D meshes. The experimental platform was Windows 10 with a 2.6 GHz CPU, 8 GB of memory, and an NVIDIA GeForce GTX 650. A set of six tops was scanned, and the real-time point cloud generation algorithm was compared with the offline point cloud generation algorithm. All scans used a PrimeSense 1.09 sensor, which orbited the garment on the stand-alone wrap-around mechanism. The time the 3D camera takes to circle the garment is adjusted by setting the rotation speed of the motor, as shown in Table 1.
As shown in Table 1, the offline point cloud generation algorithm generates a 3D clothing point cloud that meets the requirements within 12 s, whereas the real-time algorithm takes 35 s. Each garment was measured three times and the results averaged. Most of the mesh errors produced by the two algorithms are within 0.5 cm, the relative error varies between ±0.2% and ±2%, and the root mean square error (RMSE) does not exceed 2, which shows that the proposed offline algorithm outperforms the real-time algorithm on the single-machine rotating mechanism: under its stable, uniform rotation the scanned 3D garments are of similar quality, but the offline algorithm is markedly faster.
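The RMSE figures can be understood as the root mean square of nearest-neighbour distances between the generated cloud and a reference cloud. A brute-force sketch (the paper's exact error metric may differ; a k-d tree would be used for large clouds):

```python
import numpy as np

def point_cloud_rmse(scan, reference):
    """RMSE of nearest-neighbour distances from each scanned point
    to the reference cloud. Brute force, for illustration only."""
    # pairwise distance matrix of shape (n_scan, n_ref)
    d = np.linalg.norm(scan[:, None, :] - reference[None, :, :], axis=-1)
    return float(np.sqrt(np.mean(d.min(axis=1) ** 2)))
```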
To verify the accuracy of the single-machine scanning system, six 3D garments were measured and the absolute differences from the actual garments were calculated, as shown in Figure 4.
As can be seen from Figure 4, the difference is about 1 cm. According to the measurement positions and methods for shirt specifications in GB/T 2660-2017 (Shirts), this error is within the allowable tolerance for the main garment dimensions. The small scanning error guarantees accurate 3D clothing data, so the scanned 3D garment preserves the dimensions of the original.
4.2. Mesh Division and Grouping
After scanning a garment with the single-machine surround system and generating an initial 3D garment mesh, the surface topology must be trimmed and edited to produce a 3D garment suitable for simulated wearing. One of the most important operations is cutting and separating the 3D mesh. Based on a study of 3D mesh segmentation algorithms, connected cutting planes are used to generate connected cutting intersection lines; these lines are then screened, connected, and sorted to obtain a cutting path from start point to end point. According to the positional relationship between the cutting path and each triangle, the triangles are classified into three segmentation cases, and free cutting is then completed.
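The three segmentation cases of a triangle reduce to the signs of its vertices' distances to the cutting plane: the triangle lies wholly on one side (two cases) or straddles the plane and must be split. A minimal sketch of this classification (function name and labels are illustrative, not the paper's code):

```python
import numpy as np

def classify_triangle(tri, plane_point, plane_normal):
    """Classify a 3x3 array of triangle vertices against a cutting
    plane by the signed distances of its vertices."""
    d = np.dot(np.asarray(tri) - plane_point, plane_normal)
    if np.all(d >= 0):
        return "above"   # whole triangle on the positive side
    if np.all(d <= 0):
        return "below"   # whole triangle on the negative side
    return "cut"         # the cutting path crosses this triangle
```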
To verify the reliability and efficiency of the proposed cutting algorithm, the surface topology of the initial 3D clothing meshes was trimmed and edited, and the generation time and the numbers of trimmed triangles and clothing meshes were recorded, as shown in Table 2.
As shown in Table 2, the generation of mesh cutting lines and the triangle-cutting computation time were recorded for six 3D-scanned garments. Each took about 0.1 s, showing that the proposed cutting and grouping algorithm completes quickly and efficiently.
Photographs of garments a, b, c, d, e, and f are shown in Figure 5.
4.3. Human Body Ring Cutting Algorithm
The 3D human body ring cutting algorithm intersects a plane with a 3D human body mesh, then connects and sorts the intersection points to obtain a closed cutting ring; analysing this ring yields 3D body information. The algorithm converts 3D body features into ring features, and different intersection planes yield different body information. If a horizontal plane is intersected with the body mesh, limb information can be inferred from the number of closed cutting rings. Intersecting the vertical section through the armpit points with the mesh separates the arms from the torso. Intersecting planes through the body's feature points with the mesh yields girth measurements. After a 3D garment is worn on a 3D body, the ring cutting algorithm can detect penetration between the garment mesh and the body mesh by checking whether the cutting rings of the same layer intersect.
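As a concrete sketch of the girth measurement, a horizontal cutting plane can be intersected with each triangle edge and the resulting segment lengths summed. The function below is illustrative (it assumes the plane cuts a single closed ring), not the paper's implementation:

```python
import numpy as np

def ring_girth(triangles, height):
    """Approximate girth at the horizontal plane z = height: intersect
    each triangle's edges with the plane and sum the cut-segment
    lengths. Assumes the plane cuts one closed ring of the mesh."""
    total = 0.0
    for tri in triangles:
        pts = []
        for i in range(3):
            a, b = tri[i], tri[(i + 1) % 3]
            da, db = a[2] - height, b[2] - height
            if da * db < 0:                    # edge crosses the plane
                t = da / (da - db)             # interpolation parameter
                pts.append(a + t * (b - a))
        if len(pts) == 2:
            total += np.linalg.norm(pts[1] - pts[0])
    return total
```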
After optimization, the ring cutting algorithm was applied to the search for human feature points and the automatic measurement of 3D human bodies to verify its effectiveness. The same method can also be used to determine the feature points of 3D garments.
To confirm the precision and effectiveness of the ring cutting algorithm for extracting human body feature points and conducting 3D body measurements, the following experiments were carried out.
The automatic measurements of 200 3D-scanned human bodies were compared with manual measurements of the corresponding bodies; the measurement errors are shown in Figure 6.
As can be seen from Figure 6, most measurement errors are within 0.5 cm. The contact-type crotch measurement error is larger because the manual leg measurements are approximate. In practice, because leg shapes vary widely, the scanned 3D body can exhibit adhesion between the legs, and some body parts, such as the waist circumference, are difficult to measure automatically and accurately from a scan. Although these factors affect the proposed algorithm, the error remains within an acceptable range.
Table 3 shows the time taken by the proposed anthropometric algorithm for measurements at different mesh densities.
It can be seen from Table 3 that measuring a 3D-scanned body of 20 K triangles takes between 1.7 and 2.0 s, and even at 100 K triangles the time stays within 7 s. The automatic cutting time of the test 3D garments was also recorded, as shown in Figure 7 (the horizontal axis lists five cut objects; because cut object f is extremely similar to d and e, f is excluded from the remaining experiments; the left vertical axis, the bar heights, gives the number of clothing meshes, and the right vertical axis, the dashed lines, gives the counts for automatic cutting and seam generation).
As can be seen in Figure 7, the proposed automatic 3D virtual clothing cutting algorithm quickly generates the 3D pieces and the seams between them. For the flattening of the pieces, we compared the change in side length between L and L' (L is the triangle side length before flattening; L' is the side length after flattening), as shown in Table 4.
It can be seen from Table 4 that the side-length variation is around 3%, so the original garment information is preserved. After the 3D garment pieces are matched to the 3D human body pose, gaps appear between the pieces because their relative positions shift; the gaps must be sewn according to the seam information between the pieces of the 3D garment.
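The side-length variation reported in Table 4 is simply the relative change between L and L'. For clarity (the function name is illustrative):

```python
import numpy as np

def edge_length_variation(L, L_flat):
    """Relative change (%) of triangle side lengths before (L) and
    after (L') flattening."""
    L, L_flat = np.asarray(L, float), np.asarray(L_flat, float)
    return np.abs(L_flat - L) / L * 100.0
```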
Figure 8 shows the corresponding segmentation of the clothing in 2D and of the tops in 3D. (Figure 8 was produced in CorelDRAW.)
As shown in Figure 8, for garments without wrinkles the obtained 3D coordinates can be used directly. For garments with rich folds, obtaining the 3D coordinates is more involved: the triangle endpoints at both ends of a seam must be displaced during sewing, and the triangles on both sides of the seam line are merged so that the 3D garment pieces re-form a whole. When merging triangle boundary points, adjacent triangle vertices must be moved smoothly to maintain the geometric characteristics of the 3D garment.
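Moving adjacent vertices smoothly while the merged seam points stay put can be sketched as one Laplacian smoothing step with the seam vertices held fixed. This is an illustrative update rule, not necessarily the paper's:

```python
import numpy as np

def laplacian_smooth(vertices, neighbors, fixed, alpha=0.5):
    """One Laplacian smoothing step: move each free vertex toward the
    centroid of its neighbours; `fixed` vertices (e.g. merged seam
    points) are left in place. `neighbors` maps vertex index -> list
    of adjacent vertex indices."""
    out = vertices.copy()
    for i, nbrs in neighbors.items():
        if i in fixed or not nbrs:
            continue
        centroid = vertices[list(nbrs)].mean(axis=0)
        out[i] = (1 - alpha) * vertices[i] + alpha * centroid
    return out
```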
4.5. Garment Automatic Simulation Dressing Method Based on Geometric Reconstruction
Building on automatic 3D clothing cutting and 3D human body segmentation, an automatic 3D clothing simulation dressing method based on geometric reconstruction is proposed. In this method, the 3D virtual garment is dressed by matching and stitching the 3D garment pieces to the corresponding 3D human body segments. Penetration between the pieces and the corresponding body blocks is then detected: penetrating mesh faces are found with the ring cutting algorithm, and the penetration is corrected by subdividing the mesh and moving the penetrating faces in the specified direction with Laplacian deformation. These targeted pose matching and penetration compensation methods accelerate the dressing of 3D clothing. The penetration of a 3D virtual top is shown in Figure 9.
As shown in Figure 9, the geometric reconstruction method preserves the original geometric features of the 3D virtual garment, such as drape and wrinkles, to the greatest extent, providing a good initial state for subsequent physics-based dressing. Because physical drape and wrinkle simulation are omitted, the method also saves time and offers a fast way to evaluate the fit of 3D garments. Collision detection is an important step after dressing: once the pieces of the 3D garment have been sewn together, the penetrating garment mesh must be found and moved to the correct position.
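One way to realise the ring-based penetration check is to compare, at a given cutting height, the radial distance of each garment ring sample with that of the angularly nearest body ring sample: a garment point closer to the ring centre than the body surface has penetrated. The following is a simplified 2D illustration, not the paper's full test:

```python
import numpy as np

def penetration_by_rings(body_ring, garment_ring):
    """Indices of garment ring samples lying inside the body ring at
    the same cutting height. Compares each garment point's radius
    (about the body ring's centre) with the angularly nearest body
    sample's radius. Simplified 2D illustration only."""
    centre = body_ring.mean(axis=0)
    b = body_ring - centre
    g = garment_ring - centre
    b_ang = np.arctan2(b[:, 1], b[:, 0])
    penetrating = []
    for i, p in enumerate(g):
        ang = np.arctan2(p[1], p[0])
        # angularly nearest body sample (wrap-around safe)
        j = int(np.argmin(np.abs(np.angle(np.exp(1j * (b_ang - ang))))))
        if np.linalg.norm(p) < np.linalg.norm(b[j]):
            penetrating.append(i)
    return penetrating
```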
The geometric-reconstruction dressing algorithm mainly solves the pose matching between the 3D garment and the human body, penetration compensation, and the preservation of the garment's geometric features during dressing. The following groups of experiments verify the applicability and reliability of the algorithm. To verify the dimensional stability of the 3D clothing, the area of the 3D mesh before and after dressing was compared, as shown in Table 6.
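Such an area comparison amounts to summing triangle areas before and after dressing. A sketch using the cross-product formula (function names are illustrative):

```python
import numpy as np

def mesh_area(vertices, faces):
    """Total surface area of a triangle mesh: half the norm of the
    cross product of two edge vectors, summed over all faces."""
    v = vertices[faces]                              # (n_faces, 3, 3)
    cross = np.cross(v[:, 1] - v[:, 0], v[:, 2] - v[:, 0])
    return float(np.linalg.norm(cross, axis=1).sum() / 2.0)

def area_change_percent(area_before, area_after):
    """Relative area change (%) of the garment mesh after dressing."""
    return abs(area_after - area_before) / area_before * 100.0
```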
It can be seen from Table 6 that the area of the 3D garment changes by no more than 1.7% after dressing with the proposed method, so the garment's inherent size is well maintained. To verify the applicability of the algorithm, simulated dressing experiments were carried out on different human bodies and different garments. The 3D virtual stitching is shown in Figure 10 (the 3D simulated stitching was completed in CAD).
As shown in Figure 10, the 3D garment is first dressed automatically and then a physical method simulates the garment's behaviour. After geometric stitching and penetration compensation, the initial state of the generated physical model retains the geometric characteristics of the 3D-scanned garment, and the dressing effect is more realistic; automatic dressing thus provides a practical and reliable method. To verify the efficiency of the algorithm, the dressing time of the 3D garments was recorded; the simulated dressing times are shown in Figure 11.
As shown in Figure 11, the dressing time of a single 3D garment is about 10 s, which is acceptable. The proposed 3D clothing simulation dressing algorithm therefore has good applicability: it can dress 3D garments on different human bodies and provides a practical and reliable method for the automatic wearing of 3D clothing.