Knitting Robots: A Deep Learning Approach for Reverse-Engineering Fabric Patterns
Abstract
1. Introduction
2. Data Collection and Preparation
2.1. Front Label Acquisition
2.2. Rendering Image Acquisition
2.3. Real Image Acquisition
2.4. Distribution of Yarn Types and Stitches
3. Model Architecture
3.1. Front Label Generation: Refiner and Img2prog
3.2. Complete Label Inference: Residual Model
4. Experimental Setup and Results
4.1. Generation Phase Evaluation—Scenario 1
4.2. Inference Phase Evaluation—Scenarios 2, 3, and 4
4.2.1. Scenario 2: Complete Label Generation (Unknown Yarn Type)
4.2.2. Scenario 3: Complete Label Generation (Known Yarn Type)
4.2.3. Scenario 4: Complete Label Generation (Using Ground Truth Front Label)
4.3. Case Study
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
| --- | --- |
| sj | single-yarn |
| mj | multi-yarn |
| MIL | multiple-instance learning |
| CNN | Convolutional Neural Network |
Appendix A
1. Scenario 1: Front Label Generation
   - Goal: Obtain a front label from a real image.
   - Components Used: Generation phase (refiner + Img2prog).
   - Command:
     ```bash
     python main.py \
       --checkpoint_dir=./checkpoint/RFINet_front_xferln_160k \
       --training=False
     ```
   - Input: Real image.
   - Output: Front label.
   - Training Command:
     ```bash
     ./run.sh -g 0 \
       -c ./checkpoint/RFINet_front_xferln_160k \
       --learning_rate 0.0005 \
       --params discr_img=1,bvggloss=1,gen_passes=1,bloss_unsup=0,decay_steps=50000,decay_rate=0.3,bunet_test=3,use_tran=0,augment=0,bMILloss=0 \
       --weights loss_D*=1.0,loss_G*=0.2 \
       --max_iter 160000
     ```
   - TensorBoard: Loss curves, generated images, multi-class confusion matrices, and other diagnostics can be inspected with TensorBoard:
     ```bash
     tensorboard --logdir "./checkpoint/RFINet_front_xferln_160k/val"
     ```
2. Scenario 2: Complete Label Generation (Unknown Yarn Type)
   - Goal: Obtain a complete label from a real image without prior knowledge of its sj/mj classification.
   - Components Used:
     - Generation phase (refiner + Img2prog) for front label generation.
     - Inference phase for complete label prediction.
   - Command (a helper that dispatches the commands for Scenarios 2-4 is sketched after this list):
     ```bash
     python xfernet.py test \
       --checkpoint_dir ./checkpoint/xfer_complete_frompred_residual \
       --model_type residual \
       --dataset default \
       --input_source frompred
     ```
   - Input: Real image.
   - Output: Complete label.
3. Scenario 3: Complete Label Generation (Known Yarn Type)
   - Goal: Obtain a complete label with knowledge of the sj/mj classification of the input image.
   - Components Used:
     - Generation phase (refiner + Img2prog).
     - Yarn-specific residual model for inference phase.
   - Command (for sj data):
     ```bash
     python xfernet.py test \
       --checkpoint_dir ./checkpoint/xfer_complete_frompred_sj \
       --model_type residual \
       --dataset sj \
       --input_source frompred
     ```
     Or, for mj data:
     ```bash
     python xfernet.py test \
       --checkpoint_dir ./checkpoint/xfer_complete_frompred_mj \
       --model_type residual \
       --dataset mj \
       --input_source frompred
     ```
   - Input: Real image.
   - Output: Complete label.
4. Scenario 4: Complete Label Generation (Using Ground Truth Front Label)
   - Goal: Generate complete labels using a ground truth front label and knowledge of yarn type.
   - Components Used: Yarn-specific residual model.
   - Command (for sj data):
     ```bash
     python xfernet.py test \
       --checkpoint_dir ./checkpoint/xfer_complete_fromtrue_sj \
       --model_type residual \
       --dataset sj \
       --input_source fromtrue
     ```
     Or, for mj data:
     ```bash
     python xfernet.py test \
       --checkpoint_dir ./checkpoint/xfer_complete_fromtrue_mj \
       --model_type residual \
       --dataset mj \
       --input_source fromtrue
     ```
   - Input: Ground truth front label.
   - Output: Complete label.
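The test commands for Scenarios 2-4 differ only in checkpoint directory, dataset, and input source. Below is a minimal sketch of a dispatch helper; the function `run_inference` and its interface are our own illustration and not part of the released codebase, while the checkpoint paths and CLI flags are copied verbatim from the commands above.

```python
# Hypothetical convenience wrapper around the Scenario 2-4 test commands.
# Only the checkpoint paths and CLI flags come from the appendix; the
# wrapper itself is an illustration.
import subprocess

def run_inference(yarn_type=None, input_source="frompred"):
    """Run xfernet.py in test mode for Scenarios 2-4.

    yarn_type:    None (unknown yarn, Scenario 2), "sj", or "mj".
    input_source: "frompred" (predicted front label, Scenarios 2-3)
                  or "fromtrue" (ground-truth front label, Scenario 4).
    """
    if yarn_type is None:
        # Scenario 2: a single residual model covers both yarn types.
        checkpoint = "./checkpoint/xfer_complete_frompred_residual"
        dataset = "default"
    else:
        # Scenarios 3-4: yarn-specific residual models.
        checkpoint = "./checkpoint/xfer_complete_{}_{}".format(input_source, yarn_type)
        dataset = yarn_type
    subprocess.run([
        "python", "xfernet.py", "test",
        "--checkpoint_dir", checkpoint,
        "--model_type", "residual",
        "--dataset", dataset,
        "--input_source", input_source,
    ], check=True)

run_inference()                                         # Scenario 2
run_inference(yarn_type="mj")                           # Scenario 3 (mj)
run_inference(yarn_type="sj", input_source="fromtrue")  # Scenario 4 (sj)
```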
Appendix B
1. Install Miniconda (Miniconda3-latest-Linux-x86_64.sh).
   ```bash
   bash Miniconda3-latest-Linux-x86_64.sh
   source ~/.bashrc
   ```
2. Create and activate a Python 3.6 environment.
   ```bash
   conda create -n tf1.11 python=3.6
   conda activate tf1.11
   ```
3. Install GPU-compatible TensorFlow and its dependencies. Install TensorFlow 1.11 and its associated dependencies, ensuring compatibility with the RTX 2070 and CUDA 9.0.
   ```bash
   conda install tensorflow-gpu=1.11.0
   conda install numpy=1.15.3
   conda install scipy=1.1.0
   ```
4. Install the CUDA toolkit and cuDNN (a short verification snippet follows this list).
   ```bash
   conda install cudatoolkit=9.0 cudnn=7.1
   ```
5. Install the Python package requirements. The required Python packages were installed using the requirements.txt file provided in the project repository.
   ```bash
   pip install -r requirements.txt
   ```
6. Install ImageMagick for image processing. ImageMagick was used for image manipulation during the preprocessing stage.
   ```bash
   sudo apt update
   sudo apt install imagemagick
   sudo apt install zip unzip
   ```
7. Set up Jupyter Notebook 6.4.3 for interactive development. Jupyter Notebook was installed to facilitate interactive code testing and experimentation.
   ```bash
   conda install jupyter
   jupyter notebook --ip=0.0.0.0 --no-browser
   ```
8. Install additional libraries. For additional functionality, the scikit-learn library was installed.
   ```bash
   conda install scikit-learn
   ```
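After step 4, it is worth confirming that TensorFlow can actually see the GPU before launching training. The check below is our addition rather than part of the repository's setup instructions; it uses only standard TensorFlow 1.x test utilities.

```python
# Sanity check for the TF 1.11 + CUDA 9.0 environment set up above.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)          # expected: 1.11.0
print("GPU available:", tf.test.is_gpu_available())   # expected: True
print("GPU device:", tf.test.gpu_device_name())       # e.g. /device:GPU:0
```

If `is_gpu_available()` returns False, the usual culprits are a mismatched cudatoolkit/cudnn pair or a GPU driver too old for CUDA 9.0.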
| Model | Sample Size | Params Count * | Time (h) | F1-Score |
| --- | --- | --- | --- | --- |
| RFI_complex_a0.5 | 12,392 | 2,934,605 | 6.50 | 90.2% |
| RFINet_notran_noaug_newinst | 12,392 | 2,934,398 | 5.00 | 97.3% |
| RFINet_front_xferln_MIL_160k | 4950 | 2,934,398 | 3.00 | 82.1% |
| RFINet_front_xferln_160k ** | 4950 | 2,934,398 | 3.00 | 83.1% |
Per-stitch results for Scenario 1 (sj + mj):

| Stitch * | Count | F1-Score |
| --- | --- | --- |
| FK | 1,484,133 | 90.5% |
| BK | 209,577 | 78.2% |
| T | 87,510 | 53.6% |
| H | 41,059 | 67.8% |
| M | 37,223 | 35.5% |
| E | 166 | 0.0% |
| V | 1471 | 34.9% |
| VR | 25,359 | 69.1% |
| VL | 25,733 | 64.1% |
| X(R) | 7031 | 68.5% |
| X(L) | 7043 | 59.8% |
| O | 18,933 | 43.0% |
| Y | 22,904 | 63.3% |
| FO | 11,858 | 30.8% |
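The F1-scores in these tables are computed per stitch class over all cells of the predicted label grids. Below is a minimal sketch of such an evaluation with scikit-learn (installed in Appendix B, step 8); the array names, the grid shape, and the one-integer-class-per-cell encoding are assumptions for illustration, and the data is a random placeholder rather than real model output.

```python
# Sketch: per-class and macro F1 over stitch-label grids.
# Shapes and encodings are assumed; real labels come from the model outputs.
import numpy as np
from sklearn.metrics import f1_score

NUM_CLASSES = 14  # e.g. the 14 stitch types in the Scenario 1 table

# Placeholder stand-ins for ground-truth and predicted label grids,
# one integer stitch class per cell (assumed 20x20 grids, N samples).
rng = np.random.RandomState(0)
y_true = rng.randint(0, NUM_CLASSES, size=(8, 20, 20))
y_pred = rng.randint(0, NUM_CLASSES, size=(8, 20, 20))

# Flatten the grids to 1-D label vectors before scoring.
per_class = f1_score(y_true.ravel(), y_pred.ravel(), average=None)
macro = f1_score(y_true.ravel(), y_pred.ravel(), average="macro")

print("per-class F1:", np.round(per_class, 3))
print("macro F1: {:.3f}".format(macro))
```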
| Model | Sample Size | Params Count | Time (h) | F1-Score |
| --- | --- | --- | --- | --- |
| RFINet_complete_MIL | 4950 | 2,935,778 | 4.67 | 71.6% |
| RFINet_complete | 4950 | 2,935,778 | 4.67 | 80.8% |
| xfer_complete_frompred_2lyr_MIL | 4950 | 21,026 | 2.75 | 39.4% |
| xfer_complete_frompred_2lyr | 4950 | 21,026 | 2.75 | 52.7% |
| xfer_complete_frompred_5lyr | 4950 | 1,585,422 | 3.00 | 78.1% |
| xfer_complete_frompred_residual * | 4950 | 872,034 | 3.00 | 85.9% |
| xfer_complete_frompred_unet | 4950 | 279,138 | 3.00 | 83.9% |
| xfer_complete_frompred_2lyr_sj | 3000 | 21,026 | 1.75 | 95.0% |
| xfer_complete_frompred_residual_sj * | 3000 | 872,034 | 1.75 | 97.0% |
| xfer_complete_frompred_unet_sj | 3000 | 279,138 | 1.75 | 96.2% |
| xfer_complete_frompred_2lyr_mj | 1950 | 21,026 | 1.00 | 74.0% |
| xfer_complete_frompred_residual_mj * | 1950 | 872,034 | 1.00 | 90.2% |
| xfer_complete_frompred_unet_mj | 1950 | 279,138 | 1.00 | 84.2% |
| xfer_complete_fromtrue_2lyr_sj | 3000 | 21,026 | 1.75 | 98.4% |
| xfer_complete_fromtrue_residual_sj * | 3000 | 872,034 | 1.75 | 99.8% |
| xfer_complete_fromtrue_unet_sj | 3000 | 279,138 | 1.75 | 99.3% |
| xfer_complete_fromtrue_2lyr_mj | 1950 | 21,026 | 1.00 | 86.3% |
| xfer_complete_fromtrue_residual_mj * | 1950 | 872,034 | 1.00 | 96.0% |
| xfer_complete_fromtrue_unet_mj | 1950 | 279,138 | 1.00 | 95.4% |
Per-stitch results for Scenarios 2-4 (Count and F1-Score per scenario):

| Stitch * | Scen. 2 (sj + mj) Count | Scen. 2 F1 | Scen. 3 (sj) Count | Scen. 3 (sj) F1 | Scen. 3 (mj) Count | Scen. 3 (mj) F1 | Scen. 4 (sj) Count | Scen. 4 (sj) F1 | Scen. 4 (mj) Count | Scen. 4 (mj) F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FK | 920,228 | 95.9% | 90,474 | 98.5% | - | 0.0% | 90,474 | 100.0% | - | 0.0% |
| BK | 183,030 | 82.6% | 13,375 | 95.2% | 11,366 | 81.8% | 13,375 | 99.7% | 11,366 | 92.8% |
| T | 87,698 | 65.2% | 11,366 | 89.1% | - | 0.0% | 3207 | 98.9% | - | 0.0% |
| H,M | 15,618 | 66.2% | - | 0.0% | - | 0.0% | 1001 | 97.1% | - | 0.0% |
| M | 22,342 | 65.0% | 885 | 97.1% | 2308 | 80.4% | 885 | 97.1% | 2308 | 96.0% |
| E,V(L) | 15,901 | 64.5% | 158 | 84.9% | 158 | 94.8% | 158 | 94.8% | - | 0.0% |
| V,HM | 1179 | 46.1% | - | 0.0% | - | 0.0% | - | 0.0% | - | 0.0% |
| VR | 8214 | 84.0% | 809 | 92.1% | 809 | 92.1% | 809 | 99.7% | 809 | 99.7% |
| VL | 7919 | 82.0% | 864 | 91.9% | 864 | 91.9% | 864 | 99.5% | 864 | 99.5% |
| X(R) | 7031 | 90.1% | 741 | 95.9% | 741 | 95.9% | 741 | 99.6% | 741 | 99.6% |
| X(L) | 7043 | 90.0% | 713 | 95.9% | 713 | 95.9% | 713 | 99.6% | 713 | 99.6% |
| T(F) | 25,024 | 78.5% | 3207 | 89.1% | - | 0.0% | 3207 | 98.9% | - | 0.0% |
| V,M | 265 | 80.0% | - | 0.0% | 6 | 80.0% | - | 0.0% | - | 0.0% |
| E,V(R) | 15,703 | 85.0% | 1747 | 89.1% | - | 0.0% | 1747 | 99.7% | - | 0.0% |
| FK,MAK | 536,224 | 88.9% | - | 0.0% | 107,069 | 94.6% | - | 0.0% | 107,069 | 99.0% |
| FT,FMAK | 65,413 | 88.9% | - | 0.0% | 13,106 | 92.1% | - | 0.0% | 13,106 | 92.6% |
| Y,MATBK | 22,904 | 63.3% | 4741 | 98.9% | - | 0.0% | 4741 | 99.6% | - | 0.0% |
| FO(2) | 7324 | 32.3% | - | 0.0% | 1924 | 82.3% | - | 0.0% | 1924 | 69.4% |
| O(5),BK | 7774 | 43.6% | - | 0.0% | 1500 | 51.9% | - | 0.0% | 1500 | 67.9% |
| VR,FMAK | 23,014 | 88.8% | 4419 | 95.9% | - | 0.0% | 4419 | 99.9% | - | 0.0% |
| AO(2) | 7767 | 31.8% | - | 0.0% | 1441 | 64.7% | - | 0.0% | 1441 | 64.0% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).