Article

A Method of Enhancing Silk Digital Printing Color Prediction through Pix2Pix GAN-Based Approaches

1 College of Textile Science and Engineering, International Institute of Silk, Zhejiang Sci-Tech University, 928 Second Avenue, Xiasha Higher Education Zone, Hangzhou 310018, China
2 College of Art and Design, Zhejiang University of Science & Technology, 308 Liuhe Road, Xihu District, Hangzhou 310023, China
3 Huadong Medicine Co., Ltd., 866 Moganshan Road, Hangzhou 310011, China
4 Modern Textile Processing Technology National Engineering Research Center, 928 Second Avenue, Xiasha Higher Education Zone, Hangzhou 310018, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2024, 14(1), 11; https://doi.org/10.3390/app14010011
Submission received: 24 October 2023 / Revised: 14 December 2023 / Accepted: 16 December 2023 / Published: 19 December 2023
(This article belongs to the Section Applied Industrial Technologies)

Abstract

Color prediction and color management for digital printed fabrics remain a challenging task. Accurate prediction of the color appearance of digital printed fabrics would enable designers and manufacturers to better fulfill their design requirements and creative visions. We propose a color prediction method for silk digital printing utilizing a Pix2Pix Generative Adversarial Network (GAN) framework. This method aims to generate predicted images that possess the same stylistic and color characteristics as the actual fabrics after production. To develop and validate the method, color data and images were collected and processed from 5252 sets of paired original PANTONE TCX color cards and actual printed sample fabrics. The results of this study demonstrate that the method can predict the colors of silk digital print samples while effectively reproducing the effects of inkjet printing on silk fabrics, including silk crepe satin and silk twill. The method exhibits high prediction accuracy, with an average CIEDE2000 value of 2.372 for silk crepe satin and 1.846 for silk twill. The findings of this research not only enhance the efficiency and accuracy of color management in fabric digital printing technology but also contribute to the exploration and development of high-fidelity color prediction techniques within the textile industry.

1. Introduction

Digital inkjet printing technology has evolved alongside the continuous integration and development of computer technology and high-precision instrument technology. It is a key solution for the textile industry to meet the demand for personalized customization and small-batch textile production. In digital inkjet printing, patterns and designs are input into a computer as digital signals, then edited and processed through a computer-controlled color separation drafting system. Subsequently, the computer-controlled printer nozzle sprays specialized ink directly onto the textile, mixing it to form the desired pattern. However, this technology faces challenges in accurately reproducing colors on fabric so that the actual print achieves the desired color effect [1], owing to the limitations of the printer's available color gamut [2], the color-mixing algorithm, and the surface texture of textiles [3]. Traditional color prediction methods are often based on manual experience and proofing adjustments, which are subjective, inconsistent, and labor-intensive. The development of an accurate and reliable color prediction method is therefore essential to the advancement of digital printing technology in textiles.
Color prediction is key to color reproduction; its main purpose is to estimate the expected outcome of color reproduction in a digital environment. Color prediction uses chromaticity information to forecast color reproduction results under specific light source conditions, enabling accurate control of color output [4]. It is known that a textile sample's surface texture influences its visual color [5]; thus, it is also important that color prediction methods account for different textiles. Current solutions for color prediction in digital printing include ICC (International Color Consortium) profiles [6], prediction methods implemented with machine learning algorithms, and print tests. In the field of deep learning, image-to-image translation methods [7] based on generative adversarial networks (GANs), which learn mapping functions between source and target domains from training images, have been successfully applied to tasks such as image colorization [8], style transfer [9], image restoration, and image segmentation [10]. Pix2Pix is a conditional GAN-based image translation framework that learns a mapping from input images to output images.
Silk, an expensive fabric, possesses unique qualities and aesthetics. However, color control for silk remains challenging, requiring multiple proofing adjustments to achieve accurate colors in actual industrial settings. Silk fiber also differs from other fibers in its distinctive microscopic structure, resulting in unique coloration effects. Silk products are typically positioned in the high-end market, which imposes stringent color requirements and necessitates the highest level of color accuracy and consistency. Selecting silk as the research subject therefore holds practical significance. Furthermore, silk finds extensive applications in the textile industry, including fashion, interiors, art, and gifts, among other areas. Precise control and prediction of silk colors is thus of significant importance, enabling designers, manufacturers, and consumers to better fulfill their requirements and creative visions. By studying the color differences of digital printing on different types of silk, we aim to improve predictive models that link silk fabric color appearance in the design process to that of the actual printed fabric.
To date, no published research has scientifically characterized color gamut differences in silk printed fabrics or used deep learning methods for color prediction of silk printed fabrics. In this paper, a color prediction method for silk digital printing is proposed based on the Pix2Pix framework. Color information and fabric style are two important factors shaping the aesthetic appearance of a fabric. Pix2Pix GAN can learn the color transfer from color cards to silk printed fabrics and thereby facilitate the design process. The method incorporates the color and texture features of silk printed fabric images into a neural network to predict the color effect, accounting for fabric style and systematic color variation during the printing process. Finally, the model generates predicted silk printed fabric images from an input design image. Designers and manufacturers can directly visualize predicted print textiles of a given fabric structure in their particular manufacturing setting, potentially skipping lengthy proofing processes.

2. Materials and Methods

Figure 1 shows the overall workflow of the color prediction method for digital silk printing investigated in this study. A digital PANTONE TCX color card database was printed on silk crepe satin and silk twill fabrics. The DigiEye system was used to capture images of the silk fabrics, which were matched with the design color cards to train a supervised Pix2Pix GAN model. After training, an independent test set of color cards and style categories was input into the model to generate predicted images of silk fabrics, which were then verified against DigiEye measurements; finally, color differences were calculated to evaluate the prediction performance.

2.1. Selection and Production of Test Sample Materials

The PANTONE TCX color card commonly used in the textile industry (containing a total of 2626 color blocks, including supplements) was used to generate 2626 standard color blocks with classification numbers for subsequent matching, as shown in Figure 2. All color information was converted into the CIELAB color space, assuming the D65 illuminant and the CIE 1931 standard colorimetric observer, and coded for subsequent data matching.
The prepared PANTONE TCX color card file was printed on silk crepe satin and silk twill fabrics using the industrial-grade digital printing machine HM1800L from Shenzhen Hometech Co., Ltd. (Shenzhen, China), followed by finishing. The standard reactive dye set (cyan, magenta, black, yellow, royal blue, orange, crimson, light black) used by the printing machine owner was adopted to mimic actual industrial set-ups. The same printer set-up and the same dye batch were used for both silk crepe satin and silk twill, so that the only difference was the fabric surface texture. The printed samples were used for digital color measurements.

2.2. Color Space Conversion

During the digital silk printing process, color information is transmitted through different devices to achieve color reproduction, which is one of the fundamental issues in the field of digital printing. Digital inkjet color management involves transferring image color data from one device to another with minimal color distortion, essentially moving color information from one color space to another. Its main purpose is to enable the conversion of colors between different color spaces [11], ensuring color accuracy when transferring and outputting colors between different media.
In practical applications, many color spaces are commonly used, such as RGB, CMYK, CIELAB, and HSV. In the digital inkjet printing industry, three color spaces are widely used: RGB, CMYK, and CIELAB, where RGB and CMYK are device-dependent [12] and CIELAB is device-independent. Device-dependent color spaces have relative color coordinate values, meaning that the same color values will display differently under different system conditions; device-independent color spaces are unrelated to any device and can be used to convert colors between different devices. The CIELAB color space is related to human visual color perception and is also a standard color space that computers can operate on. To ensure color consistency, all color information used in this paper is unified into the CIELAB space [13]. The PANTONE TCX colors were converted from RGB to CIELAB [14].
Equations (1) and (2) are used to complete the conversion from the RGB color space to the XYZ color space.
C l i n e a r = C s r g b 12.92 ,     C s r g b 0.04045 C s r g b + 0.055 1.055 2.4 ,   C s r g b > 0.04045
where C is R, G, or B.
For D65 illuminant, the conversion follows the matrix conversion formula in Equation (2).
X Y Z = 0.4124 0.3576 0.1805 0.2126 0.7152 0.0722 0.0193 0.1192 0.9505 R l i n e a r G l i n e a r B l i n e a r
Then, in Equation (3), XYZ is converted into the CIELAB color space.
L = 116 f y 16 a = 500 f x f y b = 200 f y f z  
where:
f x = x n 3 ,     i f   x n > ϵ κ x n + 16 116 ,     o t h e r w i s e ,
f y = y n 3 ,   i f   y n > ϵ κ y n + 16 116 ,     o t h e r w i s e ,
f z = z n 3 ,   i f   z n > ϵ κ z n + 16 116 ,     o t h e r w i s e ,
x n = X X n , y n = Y Y n ,   z n = Z Z n ,
ϵ = 0.008856 , κ = 903.3
For D65 illuminant, X n = 95.0489 , Y n = 100 , Z n = 108.8840 .
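The conversion above can be implemented directly. Below is a minimal NumPy sketch of the sRGB-to-CIELAB pipeline in Equations (1)-(3); the function name and the assumption that input values are sRGB components scaled to [0, 1] are illustrative, not taken from the paper.

```python
import numpy as np

# D65 reference white and CIELAB constants, matching the equations above
XN, YN, ZN = 95.0489, 100.0, 108.8840
EPS, KAPPA = 0.008856, 903.3

M_RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                         [0.2126, 0.7152, 0.0722],
                         [0.0193, 0.1192, 0.9505]])

def srgb_to_lab(rgb):
    """Convert sRGB values in [0, 1] (shape (..., 3)) to CIELAB under D65."""
    rgb = np.asarray(rgb, dtype=float)
    # Equation (1): undo the sRGB gamma to obtain linear RGB
    linear = np.where(rgb <= 0.04045,
                      rgb / 12.92,
                      ((rgb + 0.055) / 1.055) ** 2.4)
    # Equation (2): linear RGB -> XYZ, scaled so that Y(white) = 100
    xyz = 100.0 * (linear @ M_RGB_TO_XYZ.T)
    # Equation (3): XYZ -> CIELAB via the piecewise f functions
    t = xyz / np.array([XN, YN, ZN])
    f = np.where(t > EPS, np.cbrt(t), (KAPPA * t + 16.0) / 116.0)
    fx, fy, fz = f[..., 0], f[..., 1], f[..., 2]
    return np.stack([116.0 * fy - 16.0,
                     500.0 * (fx - fy),
                     200.0 * (fy - fz)], axis=-1)

# Example: a mid-gray patch; expected result is approximately [53.39, 0, 0]
print(srgb_to_lab([0.5, 0.5, 0.5]))
```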

2.3. Color Measurement, Image Capture and Pairing

In the color control of textile products, current fabric color measurement methods are classified according to how chromaticity values are obtained: visual comparison, photoelectric integration, spectrophotometry, and digital color measurement. Spectrophotometric colorimetry has gradually replaced the traditional visual comparison method. Although desktop spectrophotometers have extremely high testing accuracy, their testing efficiency is low [15]. Digital color measurement is a color testing technology developed in recent years on the basis of digital imaging [16]. It can measure color on irregular and rough object surfaces with high throughput. Its acceptance has gradually increased, and it has great potential for application.
Digital color measurement is conducted under a stable standard light source environment, with a high-precision digital camera installed at a fixed position. The camera is white-balanced and color-corrected using a whiteboard and a standard color card. With suitable, fixed shooting parameters (such as shooting distance, focal length, shooting mode, aperture size, exposure speed, and sensitivity), high-resolution digital photos are taken. The pixel color values of the samples at specified positions and ranges in the photos are then computed by a program to obtain the comprehensive color values of the samples. This non-contact color testing method preserves the natural state of the sample surface and normal lighting effects, reproducing the true visual color of the sample, which makes it very suitable for color testing of textured textiles. In 2003, the UK-based company VeriVide introduced the DigiEye Digital Imaging System, a typical representative of digital color measurement. Since DigiEye preserves the true surface state of the fabric by simulating a real lighting environment, its results are closer to those of visual observation [17].
The purpose of this study is to provide color management references for silk printing practitioners, thus requiring images to be taken under standard conditions. In the textile field, the DigiEye system is commonly used for color measurement and evaluation. The DigiEye system used in this paper to capture images of digitally printed silk fabrics is equipped with a Nikon D7000 camera, a lightbox, standard D65 illumination, and a dedicated capture head to maintain standard acquisition conditions.
Using the DigiEye image capture system, digital inkjet samples of silk twill and silk crepe satin were collected, generating a total of 5252 sets of digital printing color data and corresponding fabric images. To reduce edge distortion and fit the input size for the Pix2Pix model, the collected images were uniformly processed to 256 × 256 pixels.

2.4. Database Construction

The PANTONE TCX color blocks and the printed silk twill and silk crepe satin color blocks were matched by PANTONE code to create a database for subsequent method development. A total of 5252 sets of matched data were archived, each with corresponding CIELAB values and images from the design PANTONE TCX color blocks and the actual produced textiles. Data cleaning was performed by removing irrelevant, duplicate, erroneous, and missing data. All images were uniformly processed to 256 × 256 pixels, and pixel values were mapped to the [0, 1] interval (normalization). Data augmentation techniques such as rotation and cropping were used to expand the dataset and increase data diversity and robustness. Finally, the dataset was split into training and testing sets: for each of the two fabrics, the 2626 pairs of corresponding images were divided into approximately 90% for training and 10% for testing.
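The pairing and preprocessing steps above can be expressed as a PyTorch-style dataset, as in the sketch below; the directory layout, file naming by PANTONE code, and the exact split logic are illustrative assumptions, not details from the paper.

```python
import random
from pathlib import Path

from PIL import Image
import torchvision.transforms as T
from torch.utils.data import Dataset

class PairedColorDataset(Dataset):
    """Pairs each PANTONE TCX color-block image with its printed-fabric photo by PANTONE code."""

    def __init__(self, card_dir, fabric_dir, train=True, split=0.9, seed=0):
        cards = {p.stem: p for p in Path(card_dir).glob("*.png")}     # e.g. "19-4052.png"
        fabrics = {p.stem: p for p in Path(fabric_dir).glob("*.png")}
        codes = sorted(cards.keys() & fabrics.keys())                 # keep only matched pairs
        random.Random(seed).shuffle(codes)                            # reproducible split
        cut = int(len(codes) * split)
        self.codes = codes[:cut] if train else codes[cut:]
        self.cards, self.fabrics = cards, fabrics
        self.transform = T.Compose([
            T.Resize((256, 256)),  # uniform 256 x 256 input for Pix2Pix
            T.ToTensor(),          # maps pixel values to the [0, 1] interval
        ])

    def __len__(self):
        return len(self.codes)

    def __getitem__(self, i):
        code = self.codes[i]
        card = self.transform(Image.open(self.cards[code]).convert("RGB"))
        fabric = self.transform(Image.open(self.fabrics[code]).convert("RGB"))
        return card, fabric  # (input, target) pair for supervised Pix2Pix training
```

Augmentation such as random rotation and cropping, mentioned above, would be added to the training transform only.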

2.5. Deep Learning Method Pix2Pix GAN

Pix2Pix GAN is an image-to-image translation model based on the Conditional Generative Adversarial Network (CGAN), and it can be used for various tasks, including colorization. Compared with other methods, Pix2Pix GAN has clear advantages in colorization: it generates high-quality color images, captures more detail, and produces more realistic images that better restore fabric surface style features [18]. Because it can handle non-linear relationships, it can also learn complex color mappings. Among image-to-image translation models, Pix2Pix builds on the CGAN, with conditioning constraints guiding the appearance transfer; it comprises two modules, a Generator G and a Discriminator D, implemented as a U-Net [19] generator and a PatchGAN discriminator. The conditional adversarial loss is defined as follows. Suppose x is the input of G (here, the PANTONE color blocks), y is the actual printed silk fabric image, and z is random noise; x and z are input into G to generate the image G(x). Then, G(x) and x are concatenated along the channel dimension as input to D to obtain the predicted probability, which indicates whether the input is a pair of real images. The predicted probability ranges from 0 to 1, with 0 indicating fake images and 1 indicating real images; when x and y are input into D, the predicted probability should be close or equal to 1. G's training goal is to generate images that the adversarially trained D cannot distinguish from real ones, while D is trained to detect the falseness of the generated images as reliably as possible. For the CGAN, the objective function of the conditional adversarial loss is given in Equation (4).
$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right] \tag{4}$$
where:
$\min_G$: the minimization operation over the generator, aiming to make the generator's output closer to real samples.
$\max_D$: the maximization operation over the discriminator, aiming to enable the discriminator to accurately distinguish generated samples from real samples.
$V(D, G)$: the adversarial loss function, expressing the competition between the generator and the discriminator, where $D$ and $G$ denote the parameters of the discriminator and the generator, respectively.
$\mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)]$: the expectation of the log-probability of the discriminator $D$ correctly classifying a sample $x$ drawn from the real data distribution.
$\mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$: the expectation of the log-probability of the discriminator $D$ incorrectly classifying a sample generated by the generator $G$ from a noise vector $z$ sampled from the latent space.
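A minimal PyTorch sketch of one adversarial training step under this objective, including the channel-wise concatenation of the condition x with y or G(x) described above, is shown below. The network definitions are assumed to exist elsewhere; the λ-weighted L1 term anticipates the generator loss listed in the training steps below, and λ = 100 is the Pix2Pix paper's default rather than a value reported in this study.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()  # adversarial loss on PatchGAN logits
l1 = nn.L1Loss()              # pixel-level reconstruction loss
LAMBDA = 100.0                # L1 weight; default from the Pix2Pix paper (assumption here)

def training_step(G, D, opt_g, opt_d, x, y):
    """One Pix2Pix step: x = color-card batch, y = printed-fabric batch."""
    # --- Discriminator: label real pairs (x, y) as 1, generated pairs (x, G(x)) as 0 ---
    fake = G(x)
    real_logits = D(torch.cat([x, y], dim=1))              # concatenate along channels
    fake_logits = D(torch.cat([x, fake.detach()], dim=1))  # detach: D step must not update G
    d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
             bce(fake_logits, torch.zeros_like(fake_logits))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- Generator: fool D while staying close to the target in L1 ---
    fake_logits = D(torch.cat([x, fake], dim=1))
    g_loss = bce(fake_logits, torch.ones_like(fake_logits)) + LAMBDA * l1(fake, y)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

In line with the original Pix2Pix design, the noise z can be realized implicitly through dropout in the generator rather than as an explicit input.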
The model was built and trained using PyTorch Lightning on two RTX 3090 graphics cards; training took about one day for 1000 epochs.
  • Defining the loss function: The generator uses a loss function composed of two parts, L1 loss and adversarial loss, while the discriminator uses a loss function composed of two parts, real image loss and generated image loss.
  • Compiling the model: Compile the model based on the defined loss function and optimizer (Adam optimizer is used in this experiment) and set hyperparameters such as learning rate and batch size.
  • Training the model: Use the prepared training dataset for multiple iterations of training, continuously updating the parameters of the generator and discriminator, so that the generator can produce more realistic and conditionally constrained target images.
  • Evaluating the model: Use the prepared testing dataset to evaluate the trained model, measuring the difference between generated and real images with metrics such as PSNR and SSIM (see the snippet below).
  • Adjusting hyperparameters: Adjust hyperparameters such as learning rate, batch size, and training rounds to optimize the model’s performance.
  • Generating new images: Use the trained model to convert new original images into new target images.
Figure 3 shows the model training process of the color prediction method.
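For the evaluation step listed above, the sketch below computes PSNR and SSIM between a generated image and its ground truth. It assumes scikit-image (the channel_axis argument requires version 0.19 or later; older releases use multichannel=True) and 8-bit RGB arrays.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(generated, real):
    """generated, real: H x W x 3 uint8 arrays (predicted and actual fabric images)."""
    psnr = peak_signal_noise_ratio(real, generated, data_range=255)
    ssim = structural_similarity(real, generated, channel_axis=2, data_range=255)
    return psnr, ssim
```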

2.6. Chromaticity Evaluation Method for Color Difference Calculations

Color evaluation methods are used for the quantitative measurement and analysis of color attributes. The color quality evaluation of digital silk printing products involves assessing a sample's chromaticity by computing its color difference from a standard color block. A color difference formula measures the difference between two colors to assess their similarity. The result of a color difference formula is a numerical value, typically denoted as ΔE; the smaller the value, the more similar the two colors.
Color scientists have developed four commonly used color difference formulas (CIELAB, CMC (l:c), CIE94, and CIEDE2000) for evaluating color differences in different fields. Among them, CIEDE2000 is currently the formula that theoretically agrees best with human vision [20]. This paper uses CIEDE2000, as it is currently the best choice for textile color control and evaluation [21]. The CIEDE2000 formula is as follows:
$$\Delta E_{00} = \left[ \left( \frac{\Delta L'}{k_L S_L} \right)^2 + \left( \frac{\Delta C'}{k_C S_C} \right)^2 + \left( \frac{\Delta H'}{k_H S_H} \right)^2 + R_T \left( \frac{\Delta C'}{k_C S_C} \right) \left( \frac{\Delta H'}{k_H S_H} \right) \right]^{1/2}$$
where:
$k_L$, $k_C$, and $k_H$ are the parametric weighting factors for lightness, chroma, and hue, respectively, chosen according to their perceptual relevance in practical applications; $S_L$, $S_C$, and $S_H$ are the weighting functions for lightness, chroma, and hue, respectively; and $R_T$ is the interaction (rotation) term.
$k_L = 2$ and $k_C = k_H = 1$, as these values are recommended for textiles.
The CIEDE2000 formula was used to calculate color differences between the PANTONE TCX color blocks and the printed silk twill and silk crepe satin color blocks, as well as between the Pix2Pix-predicted silk twill and silk crepe satin color blocks and the corresponding printed color blocks. The contributions of the lightness (ΔL00), chroma (ΔC00), and hue (ΔH00) differences to ΔE00 were calculated and expressed as percentages [22].
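As a reference for the calculations in this section, below is a from-scratch Python sketch of the CIEDE2000 computation with the textile factors kL = 2, kC = kH = 1 as defaults. It follows the standard published formulation; before production use it should be validated against published test pairs (e.g., those of Sharma et al.).

```python
import math

def ciede2000(lab1, lab2, kL=2.0, kC=1.0, kH=1.0):
    """CIEDE2000 color difference; kL = 2, kC = kH = 1 are the textile recommendations."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    # Chroma values and the a'-rescaling factor G
    C1, C2 = math.hypot(a1, b1), math.hypot(a2, b2)
    Cbar7 = ((C1 + C2) / 2.0) ** 7
    G = 0.5 * (1.0 - math.sqrt(Cbar7 / (Cbar7 + 25.0 ** 7)))
    a1p, a2p = (1.0 + G) * a1, (1.0 + G) * a2
    C1p, C2p = math.hypot(a1p, b1), math.hypot(a2p, b2)
    h1p = math.degrees(math.atan2(b1, a1p)) % 360.0 if C1p else 0.0
    h2p = math.degrees(math.atan2(b2, a2p)) % 360.0 if C2p else 0.0
    # Lightness, chroma, and hue differences (dHp is the DeltaH' term of the formula)
    dLp, dCp = L2 - L1, C2p - C1p
    dh = h2p - h1p
    if C1p * C2p == 0.0:
        dh = 0.0
    elif dh > 180.0:
        dh -= 360.0
    elif dh < -180.0:
        dh += 360.0
    dHp = 2.0 * math.sqrt(C1p * C2p) * math.sin(math.radians(dh) / 2.0)
    # Mean lightness, chroma, and hue
    Lbar, Cbar = (L1 + L2) / 2.0, (C1p + C2p) / 2.0
    if C1p * C2p == 0.0:
        hbar = h1p + h2p
    elif abs(h1p - h2p) <= 180.0:
        hbar = (h1p + h2p) / 2.0
    elif h1p + h2p < 360.0:
        hbar = (h1p + h2p + 360.0) / 2.0
    else:
        hbar = (h1p + h2p - 360.0) / 2.0
    T = (1.0 - 0.17 * math.cos(math.radians(hbar - 30.0))
             + 0.24 * math.cos(math.radians(2.0 * hbar))
             + 0.32 * math.cos(math.radians(3.0 * hbar + 6.0))
             - 0.20 * math.cos(math.radians(4.0 * hbar - 63.0)))
    # Weighting functions S_L, S_C, S_H and the rotation (interaction) term R_T
    SL = 1.0 + 0.015 * (Lbar - 50.0) ** 2 / math.sqrt(20.0 + (Lbar - 50.0) ** 2)
    SC = 1.0 + 0.045 * Cbar
    SH = 1.0 + 0.015 * Cbar * T
    Cbarp7 = Cbar ** 7
    RT = (-2.0 * math.sqrt(Cbarp7 / (Cbarp7 + 25.0 ** 7))
          * math.sin(math.radians(60.0 * math.exp(-(((hbar - 275.0) / 25.0) ** 2)))))
    tL, tC, tH = dLp / (kL * SL), dCp / (kC * SC), dHp / (kH * SH)
    return math.sqrt(tL * tL + tC * tC + tH * tH + RT * tC * tH)
```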

3. Results

3.1. The Silk Twill and Silk Crepe Satin Fabrics Dataset

Using the DigiEye image capture system, digital inkjet-printed samples of silk twill and silk crepe satin were collected. Figure 4 compares the color distribution of the PANTONE TCX color card in the CIELAB space with that of digital inkjet printing on silk twill and silk crepe satin fabrics. As expected, gross color differences are observed between the digital color cards and the printed fabrics. As Figure 4 shows, the same color card has different color expressions after printing on the two different fabrics, and the color gamut in the a*b* and L*hab planes on silk twill is similar to that on silk crepe satin: both are smaller than that of the color card. The data again demonstrate that digital inkjet printed fabrics capture only an achievable subspace of the design color card's gamut. Figure 5 displays representative images showing that the color difference of the same design color on different fabrics is visually perceivable. Visual color thresholds for human observers with normal color vision are in the range of 0.4–0.7 CIEDE2000 units [23], roughly in agreement with performance test results from threshold color differences (TCD) visual datasets, where the average size of the visual color threshold for observers with normal color vision is 1.1 CIELAB units [24]. Overall color differences are summarized in Table 1. The average ΔE00 between silk crepe satin and the PANTONE TCX color card is 4.80, while the ΔE00 between silk twill and the PANTONE TCX color card is 4.83. The ΔE00 between silk crepe satin and silk twill is 1.78, indicating that even with the same printer set-up, the texture of the fabric also influences color. The percentage split of ΔE00 into lightness, chroma, and hue showed that the single biggest contributor to color differences in the samples is lightness (68.83%, 67.00%, and 97.29%, respectively). After selection, 2626 pairs of images from each fabric category (silk crepe satin and silk twill) were used to build the dataset and train the Pix2Pix GAN.

3.2. Pix2Pix GAN Based Color Prediction Model Results

A Pix2Pix GAN-based color prediction model was constructed from the two types of digitally printed images, on silk crepe satin and silk twill. For each fabric, the 2626 paired images were divided into a training set (2326 images) and a test set (300 images). Figure 6 shows the comparison between predicted images from the test set and the original printed fabric images, as well as the color distributions in the test sets. The color gamut of the printed fabrics was preserved in the predicted images.
After calculation with the CIEDE2000 color difference formula, the color data of the measured actual printed fabrics were compared with the color data of the predicted digital printing images. The color differences and the contributions of lightness, chroma, and hue differences are summarized in Table 2. The method predicts colored images with high accuracy, with an average ΔE00 of 2.37 for silk crepe satin and 1.85 for silk twill. The standard deviations of ΔE00 are 1.50 and 1.04, respectively, indicating the method's robustness. Chroma differences are the biggest contributor to the observed differences between predictions and actual printed fabrics (averages of 43.35% and 42.30%, respectively). Hue differences are the second biggest contributor, at 36.96% and 38.96%. Unlike the percentage contributions seen between the PANTONE TCX color card and the printed fabrics, lightness differences do not contribute significantly to the color differences. Figure 7 shows the individual color differences from the two fabric prediction models. Binning the CIEDE2000 results of predicted versus actual fabric images with a 0.7 CIEDE2000 unit bin size, the resulting distribution plot shows that the proportion of predicted images below the 0.7 visual threshold for human observers with normal color vision is 7% and 8%, respectively [23,24]. Predictions with a ΔE00 below 2.1 are in the majority for both silk crepe satin and silk twill; about 40% of the silk twill predictions fall within a color difference of 1.4 CIEDE2000 units, whereas only 25% of the silk crepe satin predictions fall below 1.4 CIEDE2000 units, indicating that the model performs better on silk twill.
The representative predicted results in two types of silk fabrics are shown in Figure 8. As shown in the figure, the predicted images (Generated Print Fabric) are visually similar to the corresponding real samples (Ground Truth) in color and style, with most of the measured ΔE00 values below 0.7. The colors of the fabrics are visually similar to the color card; the texture details are clear and realistic. Experimental results show that the proposed Pix2Pix GAN-based method for predicting colors in digital silk printing is effective and feasible.

3.3. Prediction Results’ Relationship to Various Factors of the Fabrics

To understand the model performance in more detail, the CIELAB color coordinates contributing to color differences were analyzed. Figure 9 shows the linear fitting of the L*a*b* values from the predicted images against those of the actual samples from the two fabrics. For silk twill, the individual L*a*b* values predicted by this method show excellent linear fits for lightness L* (R2 = 0.987) and the chromaticity coordinates a* (R2 = 0.965) and b* (R2 = 0.966). For silk crepe satin, the predicted values also show excellent linear fits for L* (R2 = 0.984), a* (R2 = 0.984), and b* (R2 = 0.982).
The linear fitting results of the predicted colors for the two fabrics in terms of chroma and hue angle were also evaluated (Figure 10). For both silk twill and silk crepe satin, the colors predicted by this method show high-quality linear fits for chroma (C*ab) (R2 = 0.960 and R2 = 0.978, respectively), but the fit is relatively poor for hue angle (hab) (R2 = 0.745 and R2 = 0.816, respectively).
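Per-coordinate agreement of this kind can be quantified with an ordinary least-squares fit. A small SciPy sketch is shown below; the function name and array variables are illustrative, and the squared rvalue from linregress corresponds to the R2 values reported above.

```python
import numpy as np
from scipy.stats import linregress

def fit_r2(predicted, measured):
    """Linear fit of predicted vs. measured coordinate values; returns slope and R^2."""
    res = linregress(np.asarray(measured), np.asarray(predicted))
    return res.slope, res.rvalue ** 2

# Usage (hypothetical arrays of L* values):
# slope, r2 = fit_r2(pred_L, meas_L)
```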
To determine whether the model's performance varies across CIELAB color coordinates, color differences were plotted against L* and C*ab. Figure 11 shows that the color prediction method has the largest errors for colors with low CIE chroma and high lightness. The method appears to perform worst in the region where the human eye perceives the brightest and most neutral colors, while predicted ΔE00 values in other regions are less than 5. In the region of low C*ab and high lightness, the color prediction performs worse than in the region of high C*ab and low lightness. Predictions in regions with higher chroma values are more accurate and show fewer variations, with individual ΔE00 values below 4.

4. Discussion

Color differences between the PANTONE TCX color card and the actual printed silk twill and silk crepe satin are common, owing to problems in color transmission through different devices, ink color deviations, machine parameterization, and other factors. Additionally, the style of a digital inkjet printed fabric is determined by the style of the substrate fabric, so the same design colors may have different visual effects on different substrates. Silk twill has a distinct oblique texture, while silk crepe satin has a strong gloss without an obvious texture, so the same design colors may appear different on the two fabrics. Our results thus showed that color reproduction from design to actual prints in manufacturing settings leads to noticeable color differences. Using an image-based color prediction method, designers and manufacturers can potentially "see" what their particular manufacturing setting will ultimately produce, and can thereby decide on achievable color choices, reduce sampling and waste, and even select a color on the computer screen that will yield the desired effect.
Our experimental results show that Pix2Pix GAN is a robust method for predicting the color outcomes of digital inkjet printing on silk twill and silk crepe satin. The perceptible threshold of color difference for average human observers with normal color vision can be defined as ΔE00 ≤ 0.7 [23,24]; when ΔE00 ≤ 0.7, the human eye usually has difficulty perceiving the difference between two colors. According to the experimental results (Table 2), the average color differences for predicting silk crepe satin and silk twill are 2.372 and 1.846, respectively. About 40% of the silk twill predictions fall within a color difference of 1.4 CIEDE2000 units, while 25% of the silk crepe satin predictions fall below 1.4 CIEDE2000 units. We suspect that, since twill fabric has a uniform oblique surface texture and reflects light more uniformly [25], its predicted color differences are smaller and the results more accurate.
From a method development point of view, deep-learning-based color prediction methods may use deep neural networks to implement image colorization. Such methods can be trained on large amounts of color image data, enabling them to automatically learn how to add appropriate colors to grayscale images. The U-Net and PatchGAN components of Pix2Pix may improve the quality and realism of the generated images in image-to-image translation tasks in our setting: U-Net effectively handles segmentation-like structure and retains important information about the image content, while PatchGAN evaluates the difference between generated and target images at a finer granularity. Further optimization of the Pix2Pix model is necessary to enhance prediction accuracy in the areas where the current version performs suboptimally. Moreover, to offer more cost-effective application scenarios for the industry, additional optimization of the model training process is needed to ascertain the minimal input dataset required to attain both high accuracy and robustness. Future work will also include generating datasets using diverse methodologies and algorithm implementations to take in diverse sources of data.
This study is an attempt to apply a modified Pix2Pix model to silk digital print color prediction with different types of fabrics. The proposed methodology demonstrates a high degree of effectiveness in accurately forecasting the color outcomes of digital printing on silk, thereby offering valuable guidance to designers in optimizing resource utilization and streamlining the sampling process. Subsequent endeavors will also encompass the exploration of alternative materials and textile processes, including but not limited to jacquard fabrics and embroidered fabrics, to expand the repertoire of styles and establish a solid foundation towards the development of a silk textile color prediction system.

Author Contributions

Conceptualization, W.Z. and C.Z.; methodology, W.Z. and Q.L.; software, Z.W.; validation, W.Z. and Z.W.; formal analysis, W.Z. and Q.L.; investigation, W.Z.; resources, C.Z.; data curation, Z.W.; writing—original draft preparation, W.Z.; writing—review and editing, Q.L. and C.Z.; supervision, Q.L. and C.Z.; funding acquisition, C.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy concerns.

Acknowledgments

We would like to thank Xiaoke Jin and Yan Xia for their help in setting up the experiments and providing advice during the study.

Conflicts of Interest

Author Zhe Wang is employed by the company Huadong Medicine Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Javoršek, D.; Javoršek, A. Colour management in digital textile printing. Color. Technol. 2011, 127, 235–239. [Google Scholar] [CrossRef]
  2. Hajipour, A.; Shams-Nateri, A. Expanding the color gamut of inkjet textile printing during color matching. Color Res. Appl. 2021, 46, 1218–1226. [Google Scholar] [CrossRef]
  3. Luo, L.; Tsang, K.M.; Shen, H.-L.; Shao, S.-J.; Xin, J.H. An investigation of how the texture surface of a fabric influences its instrumental color. Color Res. Appl. 2015, 40, 472–482. [Google Scholar] [CrossRef]
  4. Zhang, N.; Ma, B.; Lan, W. Next generation color management process modeling for digital printing. In Proceedings of the 4th IEEE International Conference on Industrial Informatics, Singapore, 16–18 August 2006; pp. 932–937. [Google Scholar]
  5. Gorji Kandi, S.; Amani Tehran, M.; Rahmati, M. Colour dependency of textile samples on the surface texture. Color. Technol. 2008, 124, 348–354. [Google Scholar] [CrossRef]
  6. Buckley, R. The History of Device Independent Color-10 Years Later. In Proceedings of the IS&T/SID, Tenth Color Imaging Conference, Scottsdale, AZ, USA, 12 November 2002; pp. 41–46. [Google Scholar]
  7. Tan, D.S.; Lin, Y.X.; Hua, K. Incremental learning of multidomain image-to-image translations. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 1526–1539. [Google Scholar] [CrossRef]
  8. Messaoud, S.; Forsyth, D.; Schwing, A.G. Structural consistency and controllability for diverse colorization. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 1–15. [Google Scholar]
  9. Huang, X.; Belongie, S. Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1510–1519. [Google Scholar]
  10. Eslami, M.; Tabarestani, S.; Albarqouni, S.; Adeli, E.; Navab, N.; Adjouadi, M. Image-to-images translation for multi-task organ segmentation and bone suppression in chest X-ray radiography. IEEE Trans. Med. Imaging 2020, 39, 2553–2565. [Google Scholar] [CrossRef] [PubMed]
  11. Su, Z.; Yang, J.; Li, P.; Zhang, H.; Jing, J. Colour space conversion model from CMYK to CIELab based on CS-WNN. Color. Technol. 2021, 137, 272–279. [Google Scholar] [CrossRef]
  12. Nin, S.I. Printing CIELAB images on a CMYK printer using tri-linear interpolation. In Color Hard Copy and Graphic Arts; Int. Soc. Opt. Photonics: 1992; Volume 1670, p. 316. [Google Scholar]
  13. Carter, E.C.; Schanda, J.D.; Hirschler, R.; Jost, S.; Luo, M.R.; Melgosa, M.; Ohno, Y.; Pointer, M.R.; Rich, D.C.; Viénot, F.; et al. Colorimetry, 4th ed.; International Commission on Illumination: Vienna, Austria, 2018; ISBN 978-3-902842-13-8. [Google Scholar]
  14. Ding, Y.; Parrillo-Chapman, L.; Freeman, H.S. Developing the methodology of colour gamut analysis and print quality evaluation for textile ink-jet printing: Delphi method. Color. Technol. 2018, 134, 135–147. [Google Scholar] [CrossRef]
  15. Li, Q.Z.; Jin, X.K.; Zhang, S.C. Classification and Development of the Fabric Color Measurement Methods. China Text. Lead. 2012, 9, 103–105. [Google Scholar]
  16. Strgar Kurecic, M.; Agic, D.; Mandic, L. A digital imaging system for color accurate reproduction of various materials. Tekstil 2008, 57, 623–631. [Google Scholar]
  17. Li, Q.Z.; Jin, X.K.; Zhang, S.C.; Zhu, C.Y. Application of digital color measuring methods to color evaluation of textile. Dying Finish. 2014, 17, 120–124. [Google Scholar]
  18. Zhang, N.; Xiang, J.; Wang, J.; Pan, R.; Gao, W. Appearance generation for colored spun yarn fabric based on conditional image-to-image translation. Color Res. Appl. 2022, 47, 1023–1034. [Google Scholar] [CrossRef]
  19. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 1–8. [Google Scholar]
  20. Alman, D.H.; Berns, R.S.; Komatsubara, H.; Li, W.; Luo, M.R.; Melgosa, M.; Nobbs, J.H.; Rigg, B.; Robertson, A.R.; Witt, K. Improvement to Industrial Colour-Difference Evaluation; International Commission on Illumination: Vienna, Austria, 2001; ISBN 978-3-901906-08-4. [Google Scholar]
  21. Melgosa, M.; Nobbs, J.; Alman, D.H.; Berns, R.S.; Carter, E.C.; Cui, G.; Hirschler, R.; Li, C.; Luo, M.R.; Oleari, C.; et al. Recommended Method for Evaluating the Performance of Colour-Difference Formulae; International Commission on Illumination: Vienna, Austria, 2016; ISBN 978-3-902842-57-2. [Google Scholar]
  22. Nobbs, J.H. A lightness, chroma and hue splitting approach to CIEDE2000 colour differences. Adv. Col. Sci. Tech. 2002, 5, 46–53. [Google Scholar]
  23. Melgosa, M.; Cui, G.; Oleari, C.; Pardo, P.J.; Huang, M.; Li, C.; Luo, M.R. Revisiting the weighting function for lightness in the CIEDE2000 colour-difference formula. Color. Technol. 2017, 133, 273–282. [Google Scholar] [CrossRef]
  24. Huang, M.; Cui, G.; Melgosa, M.; Sánchez-Marañón, M.; Li, C.; Luo, M.R.; Liu, H. Power functions improving the performance of color-difference formulas. Opt. Express 2015, 23, 597–610. [Google Scholar] [CrossRef]
  25. Huertas, R.; Melgosa, M.; Hita, E. Influence of random-dot textures on perception of suprathreshold color differences. J. Opt. Soc. Am. A 2006, 23, 2067–2076. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Flowchart of Color Prediction for Digital Silk Printings.
Figure 2. PANTONE TCX color card file.
Figure 3. Pix2Pix model structure for color prediction method.
Figure 4. Color gamut of Pantone TCX color cards and printed samples. Each dot in the figure represents one sample datum. The left panel displays PANTONE TCX color card data and actual printed silk crepe satin data comparison in terms of a*, b* values and L*, hab values; the center panel displays PANTONE TCX color card data and actual printed silk twill data comparison; the right panel displays actual printed silk crepe satin data and actual printed silk twill data comparison.
Figure 5. Illustration of printed silk fabrics and color cards comparison.
Figure 6. Color gamut of comparison between predicted and original printed fabric images. The left panel presents a*, b* values and L*, hab values of the predicted color data of the silk crepe satin in red squares and actual color data of the printed silk samples in blue circles; the right panel presents a*, b* values and L*, hab values of the predicted color data of the silk twill in red squares and actual color data of the printed silk samples in blue circles.
Figure 7. Individual color difference distribution of silk crepe satin and silk twill predictions. The upper plots show individual dots as color differences between predicted image vs. actual printed sample. The lower plots show the ΔE00 value distribution binned by 0.7 CIEDE2000 unit, showing the percentage of predictions that fall into each bin.
Figure 8. The representative generated results in two fabrics. ΔE00 is calculated between predicted digital printing image (generated print fabric) and actual printed fabrics (ground truth).
Figure 9. L*a*b* values from the predicted images and those of actual samples of silk twill and silk crepe satin.
Figure 10. C*ab and hab values from the predicted images and those of actual samples of silk twill and silk crepe satin.
Figure 11. Scatter plot of ΔE00 between predicted and actual image versus L* and C*ab of actual image. Samples with large L* value and small C*ab value showed higher deviation with large ΔE00.
Table 1. CIEDE2000 color differences and their components in percentages (%ΔL00, %ΔC00, %ΔH00) between 2626 pairs of color cards and corresponding printed silk fabrics of silk crepe satin and silk twill, and 2626 color pairs of silk crepe satin and silk twill.

| Comparison | Average ΔE00 | Maximum ΔE00 | Std. Dev. ΔE00 | Average %ΔL00 | Average %ΔC00 | Average %ΔH00 |
|---|---|---|---|---|---|---|
| Silk crepe satin vs. Pantone TCX color card | 4.80 | 32.68 | 2.12 | 68.83% | 15.65% | 15.52% |
| Silk twill vs. Pantone TCX color card | 4.83 | 31.41 | 2.04 | 67.00% | 16.73% | 16.27% |
| Silk crepe satin vs. silk twill | 1.78 | 28.23 | 0.84 | 97.29% | 1.76% | 0.95% |
Table 2. CIEDE2000 color differences and their components in percentages (%ΔL00, %ΔC00, %ΔH00) between 300 pairs of predicted images and actual samples of silk crepe satin and 300 pairs of predicted images and actual samples of silk twill.

| Comparison | Average ΔE00 | Maximum ΔE00 | Std. Dev. ΔE00 | Average %ΔL00 | Average %ΔC00 | Average %ΔH00 |
|---|---|---|---|---|---|---|
| Predicted image vs. silk crepe satin | 2.37 | 11.84 | 1.50 | 19.69% | 43.35% | 36.96% |
| Predicted image vs. silk twill | 1.85 | 7.37 | 1.04 | 18.74% | 42.30% | 38.96% |