Article

The Fermentation Degree Prediction Model for Tieguanyin Oolong Tea Based on Visual and Sensing Technologies

1 Institute of Digital Agriculture, Fujian Academy of Agricultural Sciences, Fuzhou 350003, China
2 Fujian Zhi Cha Intelligent Technology Co., Anxi 362400, China
* Author to whom correspondence should be addressed.
Foods 2025, 14(6), 983; https://doi.org/10.3390/foods14060983
Submission received: 17 February 2025 / Revised: 6 March 2025 / Accepted: 12 March 2025 / Published: 13 March 2025
(This article belongs to the Section Food Engineering and Technology)

Abstract

The fermentation of oolong tea is a critical process that determines its quality and flavor. Current fermentation control relies on tea makers’ sensory experience, which is labor-intensive and time-consuming. In this study, using Tieguanyin oolong tea as the research object, features including the tea water loss rate, aroma, image color, and texture were obtained using weight sensors, a tin oxide-type gas sensor, and a visual acquisition system. Support vector regression (SVR), random forest (RF) machine learning, and long short-term memory (LSTM) deep learning algorithms were employed to establish models for assessing the fermentation degree based on both single features and fused multi-source features. The results showed that in the test set of the fermentation degree models based on single features, the mean absolute error (MAE) ranged from 4.537 to 6.732, the root mean square error (RMSE) ranged from 5.980 to 9.416, and the coefficient of determination (R2) values varied between 0.898 and 0.959. In contrast, the data fusion models demonstrated superior performance, with the MAE reduced to 2.232–2.783, the RMSE reduced to 2.693–3.969, and R2 increased to 0.982–0.991, confirming that feature fusion enhanced characterization accuracy. Finally, the Sparrow Search Algorithm (SSA) was applied to optimize the data fusion models. After optimization, the models exhibited an MAE ranging from 1.703 to 2.078, an RMSE from 2.258 to 3.230, and R2 values between 0.988 and 0.994 on the test set. The application of the SSA further enhanced model accuracy, with the Fusion-SSA-LSTM model demonstrating the best performance. The research results enable online real-time monitoring of the fermentation degree of Tieguanyin oolong tea, which contributes to the automated production of Tieguanyin oolong tea.

1. Introduction

Tieguanyin tea is a type of semi-fermented oolong tea that originates from Anxi County, Fujian Province, China [1]. Due to its unique flavor profile, which includes floral notes, a refreshing quality, and a rich and mellow taste, it has become one of the most prevalent oolong teas [2]. The primary processing steps involved in the initial production of Tieguanyin tea include plucking, withering, fermentation (also referred to as ZuoQing, green making, or oxidation), pan firing, shaping, and drying [3,4,5,6]. Mastering the processing of Tieguanyin oolong tea is a profound art, particularly when it comes to fermentation, which is a highly technical process and one of the key steps in creating the distinctive flavor of Tieguanyin tea [7,8].
Fermentation typically involves three to four alternating cycles of tumbling and aeration. Tumbling, accomplished through manual or mechanical means, involves gently shaking the tea leaves and causing them to collide and rub against one another. This action results in slight damage to the leaf edges, facilitating the interaction of enzymes within the leaves with oxygen [9]. Aeration involves spreading the tumbled tea leaves out for a period of time to allow the redistribution of moisture and the release of enzyme substances from damaged cells. Additionally, the exposure of the leaves to air further promotes the fermentation process. Incorrect timing in either tumbling or aeration can lead to over-fermentation or insufficient fermentation, significantly diminishing the quality of the tea [10,11].
Currently, control of the fermentation process mainly relies on the manual sensory experience of tea makers through methods of “touching, smelling, and observing”. “Touching” refers to feeling the softness and hardness of the tea leaves to judge the changes in water content; “smelling” means sniffing the aroma of the tea leaves and perceiving the peak of the characteristic aroma; and “observing” involves monitoring the physical changes on the surface of the tea leaves. During the fermentation process, the edges of tea leaves gradually turn reddish-brown and eventually form the characteristic “green leaves with red edges” [9,12]. However, such methods are subjective and dependent on the individual’s sensory experience, making them labor-intensive and susceptible to external factors such as weather, lighting, and mood. They stand in stark contrast to the standardization, mechanization, and cleanliness demanded by modern agriculture [11,13]. Therefore, it is of great significance to obtain features of tea fermentation in real time, simulate tea makers’ judgment on the degree of fermentation, and achieve online real-time monitoring of the degree of oolong tea fermentation. Currently, based on the judgment and experience of tea makers, single features such as a tea’s water content, aroma, and image have been used to study the fermentation processes of various types of tea.
Moisture content remains one of the most critical parameters in the fermentation process, directly influencing the physical state and chemical reactions of tea, which in turn significantly affect tea quality [14,15]. Chen et al. showed that fresh leaf weight loss rate is the dominant factor affecting aroma changes in tea fermentation [7]. Liu et al. pointed out that the variation pattern of the moisture content in tea leaves during the oolong tea fermentation process is closely related to the quality of the finished tea [16]. Therefore, if changes in moisture content can be measured rapidly, it will be of great significance for guiding the fermentation process.
Tea makers rely on their olfactory experience to judge the degree of fermentation, with the core criterion being the peak appearance of the characteristic aroma of Tieguanyin when the fermentation reaches an optimal level. The essence of this sensory evaluation lies in the dynamic evolution of volatile organic compounds (VOCs). Among these, key aroma components (such as linalool and nerolidol) are produced in large quantities during the fermentation stage, directly influencing sensory perception and the overall quality of tea [8,17,18]. Research has shown that electronic nose technology can effectively simulate human sensory evaluation. For instance, Tseng et al. used an electronic nose to monitor changes in the odor of oolong tea during fermentation and found that the results aligned with the sensory perception trends of tea makers, confirming the feasibility of using electronic noses to replace human sensory evaluation [10]. Han et al. developed a gas sensor detection system capable of effectively distinguishing between four different stages of fermentation in Tieguanyin tea; the prediction accuracy of the BPANN model reached 94.11%, further demonstrating the feasibility of online monitoring of the fermentation process using gas sensors [19].
During fermentation, the polyphenol oxidase in tea leaves reacts with oxygen in the air, leading to changes in color and brightness, which can be effectively captured according to image features [20]. This physical change can be quantitatively captured by machine vision technology, providing an objective alternative to traditional manual observation. Dong et al. demonstrated that the color features of images can quantitatively evaluate the quality indicators and pigment changes during the fermentation process of black tea [21]. Singh et al. classified black tea fermentation via images, achieving an accuracy of 0.895 and a prediction rate of 0.973 in deep learning [22].
The fermentation process of oolong tea involves numerous physicochemical changes [23]. It is difficult to comprehensively and accurately evaluate the fermentation degree of oolong tea by measuring a single characteristic such as moisture, aroma, or image. By fusing data from different sensors, the accuracy of the model can be improved, showing superior performance compared to a single technology. In the case of Longjing tea, Xu et al. utilized an electronic nose and a computer vision system to detect tea aroma and appearance, achieving a maximum classification accuracy of 100% [24,25]. In the research on black tea, Zhou et al. used a computer vision system and an electronic nose to analyze the image and odor characteristic values of black tea, demonstrating that the model effect of multi-source sensors is superior to that of a single sensor, and the classification accuracy of the test set can reach up to 95.6% [26]. However, currently, there is relatively little research on the application of data fusion technology in the real-time monitoring of the oolong tea fermentation process [20].
Therefore, the focus of our research is to simulate the sensory experience of tea makers during actual operations. By collecting multi-source data, such as the water loss rate, aroma, and image changes, we aim to establish a model for the automated and intelligent fermentation process of Tieguanyin tea. This research consists of the following parts: (1) selecting sensors and visual imaging systems capable of online measurement to automatically acquire data on water loss rate, aroma, and image variations during the fermentation of Tieguanyin tea; (2) establishing prediction models of the fermentation degree based on single-factor influences such as water loss rate, aroma, and image features, respectively, as well as models based on data fusion; and (3) comparing the models and further optimizing them using the Sparrow Search Algorithm.

2. Materials and Methods

2.1. Experimental Design and Sample Selection

Fresh tea leaves were sourced from the Longjuan Township in Anxi County, characterized by a medium-open leaf structure. The tea was harvested on a sunny day between 12:00 and 15:00, with the leaves primarily consisting of one bud and three to four leaves, exhibiting a medium opening. Following harvesting, the fresh leaves underwent sun withering prior to fermentation. The fermentation of Tieguanyin involved three cycles of tumbling and three cycles of aeration, alternating throughout the process. Tumbling was performed using a drum-type tumbling machine with a length of 300 cm and a diameter of 110 cm, at a rotation speed of 22 revolutions per minute. For aeration, the fresh leaves were spread to a thickness of 6 cm on bamboo trays and placed on a drying rack in a fermentation chamber maintained at 21 °C.
Experiments on the fermentation process were conducted in three batches on 28 April, 1 May, and 2 May 2023. The first tumbling lasted 3 min, followed by an aeration period of 50 min; the second tumbling lasted 5 to 7 min, with the subsequent aeration lasting 110 to 140 min; and the third tumbling lasted 12 to 16 min, followed by a 5 to 7 h aeration period. Due to the short intervals between the first two tumbling and aeration cycles, the changes in aroma and appearance features were not significant. Therefore, this study focused on monitoring moisture loss, total aroma, and image data only after the completion of the third tumbling and the commencement of the third aeration. This phase is critical for assessing whether the fermentation level is appropriate [19]. Three tea masters were present on-site to jointly evaluate the fermentation level of the tea. Once the fermentation level was deemed suitable, the process proceeded to the pan-firing stage.
In the experiment, the fermentation level was quantified on a scale from 0% to 100%, where 0% indicated the beginning of fermentation and 100% signified its completion. Support vector regression (SVR), random forest (RF) machine learning, and long short-term memory (LSTM) deep learning algorithms were employed to develop prediction models for the fermentation level based on both single factors and fused data. The accuracy of these models was compared, and the Sparrow Search Algorithm (SSA) was utilized to further enhance model performance. All model development and performance evaluations were conducted using MATLAB R2023b. Figure 1 presents the experimental flowchart for this study.

2.2. Data Collection Equipment

Figure 2 illustrates a schematic of the data acquisition equipment, which measures the water loss rate (WLR), aroma, and image data of the tea leaves. The WLR is measured using three weight sensors, each with a capacity of 0–100 kg, mounted on the three wheels of the aeration rack. These sensors continuously collect the weight of the rack in real time, from which the WLR is calculated. To facilitate mobility, the weight collection system is powered by a lithium battery, and data transmission is accomplished using a LoRa wireless transmission module, which sends the weight data to a host computer for processing and storage. The formula for calculating the WLR is given in Equation (1). In the experiment, three identical racks were employed to independently collect water loss data for the tea on each rack.
The aroma data of the tea leaves were collected using a portable odor detector, the XP-329M (COSMOS, Osaka, Japan). The detector, with a built-in high-sensitivity tin oxide gas sensor, measures the total concentration of volatile organic compounds emitted by the tea, which serves as the total aroma value. During measurements, the instrument was positioned 3 cm above the tea leaves. To minimize measurement errors caused by air movement, the detector was placed in a box measuring 40 cm in length, 30 cm in width, and 25 cm in height, creating a relatively enclosed environment for collecting aroma data.
Real-time image data of the tea leaves were captured using three industrial cameras mounted on the ceiling above the three aeration racks. The cameras were positioned 25 cm above the tea leaves to monitor the entire fermentation process. The lighting for the photographs was provided by a flat visual light source measuring 20 cm × 20 cm (Juhua Vision Technology Co., Ltd., Dongguan, China).
The industrial camera model used was the T-GE1000C-T-CL (Hua Teng Vision, Shenzhen, China), which has a maximum resolution of 3664 × 2748 pixels.
WLR (%) = (Wo − WT)/(Wo − WS) × 100%
Here, WLR is the water loss rate of the tea leaves, %; Wo is the initial mass of the tea leaves plus the mass of the aeration rack; WT is the real-time mass of the tea leaves plus the mass of the aeration rack; and WS is the mass of the aeration rack. WS, Wo, and WT are all obtained using the weight sensors.
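As a concrete illustration, Equation (1) translates directly into code; the sensor readings in the usage example below are hypothetical values, not measurements from this study.

```python
def water_loss_rate(w_initial, w_current, w_rack):
    """Water loss rate (%) per Equation (1).
    w_initial: initial mass of leaves plus rack (Wo)
    w_current: real-time mass of leaves plus rack (WT)
    w_rack:    mass of the empty aeration rack (WS)"""
    leaf_mass = w_initial - w_rack
    if leaf_mass <= 0:
        raise ValueError("initial leaf mass must be positive")
    return (w_initial - w_current) / leaf_mass * 100.0
```

For example, a rack of 10 kg carrying leaves that drop from 50 kg to 49 kg of leaf mass gives a WLR of 2%.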

2.3. Image Features Extraction

From the original RGB images of the tea leaves, six color feature variables were extracted: red channel average (R), green channel average (G), blue channel average (B), standard deviation of red (Rδ), standard deviation of green (Gδ), and standard deviation of blue (Bδ). The images were then converted from the RGB color space to the HSI color space, from which another six color feature variables were derived: mean hue (H), mean intensity (I), mean saturation (S), standard deviation of hue (Hδ), standard deviation of intensity (Iδ), and standard deviation of saturation (Sδ). Additionally, the RGB images were transformed into grayscale images to extract six texture feature parameters based on a gray-level co-occurrence matrix analysis. These parameters included average gray level (m), standard deviation (δ), smoothness (r), third moment (µ3), uniformity (U), and entropy (e) [14,15,27]. The image processing software was developed in Python 3.8 (software copyright number: 2023SR1650911) and was utilized to obtain a total of 18 color and texture features.
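A sketch of such a feature extractor is given below. The feature names mirror the paper's; the HSI conversion is one standard formulation, and the six gray-level statistics are computed here from the gray-level histogram (the textbook definitions of these six descriptors), whereas the paper derives its texture parameters via a co-occurrence analysis — so treat this as an approximation, not the authors' copyrighted implementation.

```python
import numpy as np

def color_texture_features(rgb):
    """18 color/texture features from an RGB image array of shape (H, W, 3),
    values in [0, 255]."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    feats = {"R": r.mean(), "G": g.mean(), "B": b.mean(),
             "Rd": r.std(), "Gd": g.std(), "Bd": b.std()}
    # RGB -> HSI (one common formulation; H is scaled to [0, 1])
    inten = (r + g + b) / 3.0
    sat = 1.0 - np.min(rgb, axis=-1) / np.maximum(inten, 1e-12)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    hue = np.arccos(np.clip(num / den, -1.0, 1.0))
    hue = np.where(b > g, 2 * np.pi - hue, hue) / (2 * np.pi)
    feats.update({"H": hue.mean(), "I": inten.mean(), "S": sat.mean(),
                  "Hd": hue.std(), "Id": inten.std(), "Sd": sat.std()})
    # Gray-level statistics: mean, std, smoothness, third moment,
    # uniformity, entropy
    gray = (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)
    p = np.bincount(gray.ravel(), minlength=256) / gray.size
    z = np.arange(256, dtype=np.float64)
    m = (z * p).sum()
    var = ((z - m) ** 2 * p).sum()
    feats["m"] = m
    feats["delta"] = np.sqrt(var)
    feats["r_smooth"] = 1.0 - 1.0 / (1.0 + var / 255.0 ** 2)
    feats["mu3"] = (((z - m) / 255.0) ** 3 * p).sum()
    feats["U"] = (p ** 2).sum()
    feats["e"] = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return feats
```

Applied to each of the fifteen images collected per 5 min interval, the resulting feature vectors can simply be averaged, as described in Section 3.1.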

2.4. Feature Selection via Tree Models

Feature selection using tree models helps identify the most predictive features, effectively reducing data dimensionality, eliminating irrelevant data, and enhancing both the model’s performance and interpretability [28]. The core principle of tree model feature selection is to evaluate the importance of each feature using decision trees or random forest algorithms. During the construction of a decision tree, the algorithm partitions the dataset based on the features in such a way that each resulting subset is as pure as possible. This process allows us to quantify the contribution of each feature, facilitating an assessment of its importance.
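The procedure can be sketched with scikit-learn's impurity-based importances. Note one assumption: sklearn importances sum to 1, so they are rescaled here by the feature count (a score of 1.0 then means "average importance", roughly matching the scale of the scores reported in Sections 3.4.2 and 3.5.1); the paper does not state its exact scaling.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def select_features(X, y, names, threshold, n_trees=200, seed=0):
    """Rank features by random-forest impurity importance and keep those
    whose rescaled score exceeds `threshold`."""
    rf = RandomForestRegressor(n_estimators=n_trees, random_state=seed)
    rf.fit(X, y)
    scores = rf.feature_importances_ * len(names)  # mean-normalised scores
    ranked = sorted(zip(names, scores), key=lambda t: -t[1])
    kept = [name for name, score in ranked if score > threshold]
    return ranked, kept
```

With the fused data of Section 3.5.1, `names` would be the 20 feature labels (Aroma, WLR, and the 18 image features) and `threshold` would be 0.4.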

2.5. Model and Performance Evaluation

2.5.1. SVM

SVM regression aims to identify an optimal hyperplane that best fits the data. By incorporating kernel functions, the SVM can handle both linear and nonlinear relationships, and its strengths lie in its robust generalization capability, as well as its ability to avoid overfitting [29].
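A minimal SVR example follows: an RBF kernel fitted to a noisy nonlinear curve, where C caps the sample weights and epsilon sets the insensitive band. The data and hyperparameter values are illustrative only, not those used in the study.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic nonlinear regression problem: y = sin(x) plus small noise
rng = np.random.default_rng(42)
X = np.sort(rng.uniform(0.0, 6.0, 150)).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * rng.normal(size=150)

# RBF kernel handles the nonlinearity that a linear fit could not
svr = SVR(kernel="rbf", C=10.0, epsilon=0.01)
svr.fit(X, y)
r2 = svr.score(X, y)  # coefficient of determination on the training data
```

Section 3.7 reports the SSA-optimized values of exactly these hyperparameters (KernelScale, BoxConstraint, Epsilon) for the fusion model.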

2.5.2. RF

RF regression is an ensemble learning method. During the training phase, RF utilizes bootstrap sampling to generate multiple subsets from the original dataset, each of which is used to train an individual decision tree [30]. Ultimately, the predictions from all decision trees are averaged, which can reduce overfitting and improve prediction accuracy [31].

2.5.3. LSTM

LSTM networks are a specialized type of recurrent neural network (RNN). The core of LSTM lies in its three gating mechanisms: the forget gate, input gate, and output gate. By incorporating these gating structures, LSTM networks selectively retain or forget information during the learning process, thereby effectively capturing long-term dependencies [32].
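The three gates can be made concrete with a single time step in numpy. This is a minimal sketch of the mechanism just described, not the paper's MATLAB implementation; the weight shapes are one common packing convention.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. W: (4H, D), U: (4H, H), b: (4H,),
    where D is the input size and H the number of hidden units."""
    H = h.shape[0]
    z = W @ x + U @ h + b                  # all four gate pre-activations
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    f = sig(z[:H])                         # forget gate: what to discard from c
    i = sig(z[H:2 * H])                    # input gate: what new info to admit
    o = sig(z[2 * H:3 * H])                # output gate: what to expose as h
    g = np.tanh(z[3 * H:])                 # candidate cell update
    c_new = f * c + i * g                  # selectively retain and add information
    h_new = o * np.tanh(c_new)             # hidden state for the next step
    return h_new, c_new
```

Iterating this step over a feature sequence is what lets the cell state carry long-term fermentation trends forward in time.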

2.5.4. SSA

SSA is an emerging population-based optimization technique. The SSA mimics the dynamic behaviors of sparrow groups and utilizes their unique social hierarchies and behavioral patterns. Through specific computational rules and formulas, the SSA facilitates efficient global and local searches for optimal solutions in complex optimization problems [33].
In a sparrow population, individuals are categorized as discoverers and joiners. Discoverers are responsible for exploring food sources and determining foraging areas and directions, while joiners rely on the information provided by discoverers to locate food. The position update formula for discoverers is defined as follows:
$$X_{i,j}^{t+1} = \begin{cases} X_{i,j}^{t} \cdot \exp\left(\dfrac{-i}{\alpha \cdot iter_{\max}}\right), & \text{if } R_2 < ST \\ X_{i,j}^{t} + Q \cdot L, & \text{if } R_2 \ge ST \end{cases} \tag{2}$$
where t is the number of iterations; $X_{i,j}^{t+1}$ is the position of the i-th sparrow in the j-th dimension at iteration t + 1; $\alpha \in (0, 1]$ is a random number; $iter_{\max}$ is the maximum number of iterations; $R_2$ and $ST$ are the warning (alarm) value and the safety threshold, respectively; Q is a random number following a normal distribution; and L is a 1 × d matrix in which every element is 1.
For joiners, the position update formula is the following:
$$X_{i,j}^{t+1} = \begin{cases} Q \cdot \exp\left(\dfrac{X_{worst}^{t} - X_{i,j}^{t}}{\alpha \cdot iter_{\max}}\right), & \text{if } i > n/2 \\ X_{P}^{t+1} + \left|X_{i,j}^{t} - X_{P}^{t+1}\right| \cdot A^{+} \cdot L, & \text{otherwise} \end{cases} \tag{3}$$
where n is the number of sparrows; $X_{P}$ is the optimal position currently occupied by the discoverer and $X_{worst}$ is the current global worst position; A is a 1 × d matrix in which each element is randomly assigned a value of 1 or −1, and $A^{+} = A^{T}(AA^{T})^{-1}$. When i > n/2, the i-th joiner, having a lower fitness value, has failed to obtain food and must fly elsewhere in search of food sources.
When the population detects danger, sparrows will adopt anti-predator measures. In this case, the position update formula is given by
$$X_{i,j}^{t+1} = \begin{cases} X_{best}^{t} + \beta \cdot \left|X_{i,j}^{t} - X_{best}^{t}\right|, & \text{if } f_i > f_g \\ X_{i,j}^{t} + K \cdot \left(\dfrac{\left|X_{i,j}^{t} - X_{worst}^{t}\right|}{(f_i - f_w) + \varepsilon}\right), & \text{if } f_i = f_g \end{cases} \tag{4}$$
where $X_{best}$ is the current global optimal position; β is a step-length control parameter, drawn from a normal distribution with mean 0 and variance 1; $K \in [-1, 1]$ is a random number; $f_i$, $f_g$, and $f_w$ are the fitness values of the current sparrow, the global best, and the global worst, respectively; and ε is a small constant that avoids division by zero.
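Putting the three update rules together, the loop below is a minimal numpy sketch of the SSA applied to a toy minimization problem. The population size, thresholds, boundary clipping, and the elitism step that remembers the best-so-far solution are illustrative choices, not the configuration tuned in this paper.

```python
import numpy as np

def ssa_minimize(f, dim, n=30, iters=100, lb=-5.0, ub=5.0,
                 pd=0.2, sd=0.1, st=0.8, seed=0):
    """Minimal Sparrow Search Algorithm following Equations (2)-(4).
    pd/sd are the discoverer and danger-aware fractions; st is ST."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))
    fit = np.apply_along_axis(f, 1, X)
    gbest, gbest_val = X[fit.argmin()].copy(), fit.min()
    n_disc = max(1, int(pd * n))
    for _ in range(iters):
        order = np.argsort(fit)               # sort by fitness (ascending)
        X, fit = X[order], fit[order]
        best, worst = X[0].copy(), X[-1].copy()
        fg, fw = fit[0], fit[-1]
        R2 = rng.random()                     # alarm value
        for i in range(n_disc):               # discoverers, Eq. (2)
            if R2 < st:
                a = rng.random() + 1e-12      # alpha in (0, 1]
                X[i] = X[i] * np.exp(-(i + 1) / (a * iters))
            else:
                X[i] = X[i] + rng.normal() * np.ones(dim)   # Q * L
        for i in range(n_disc, n):            # joiners, Eq. (3)
            if i > n / 2:
                a = rng.random() + 1e-12
                arg = np.clip((worst - X[i]) / (a * iters), -50.0, 50.0)
                X[i] = rng.normal() * np.exp(arg)
            else:
                A = rng.choice([-1.0, 1.0], size=dim)
                # A+ = A^T (A A^T)^-1 reduces to A^T / d for a +-1 row vector
                step = (np.abs(X[i] - best) * A).sum() / dim
                X[i] = best + step * np.ones(dim)
        scouts = rng.choice(n, size=max(1, int(sd * n)), replace=False)
        for i in scouts:                      # danger-aware sparrows, Eq. (4)
            if fit[i] > fg:
                X[i] = best + rng.normal() * np.abs(X[i] - best)
            else:
                K = rng.uniform(-1.0, 1.0)
                X[i] = X[i] + K * np.abs(X[i] - worst) / (fit[i] - fw + 1e-50)
        X = np.clip(X, lb, ub)
        fit = np.apply_along_axis(f, 1, X)
        if fit.min() < gbest_val:             # elitism: remember best-so-far
            gbest, gbest_val = X[fit.argmin()].copy(), fit.min()
    return gbest, gbest_val
```

In Section 3.7 the fitness function f is the validation error of an SVR, RF, or LSTM model evaluated at a candidate hyperparameter vector, rather than the toy objective used here.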

2.5.5. Model Evaluation

The mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R2) are used to evaluate the regression prediction results. These metrics are defined by Equations (5)–(7). Smaller values of the MAE and RMSE indicate higher predictive accuracy, while a value of R2 closer to 1 indicates a stronger explanatory capability of the model [34].
$$MAE = \frac{1}{m}\sum_{i=1}^{m}\left|y_i - \hat{y}_i\right| \tag{5}$$
$$RMSE = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\left(y_i - \hat{y}_i\right)^2} \tag{6}$$
$$R^2 = 1 - \frac{\sum_{i=1}^{m}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{m}\left(y_i - \bar{y}\right)^2} \tag{7}$$
Here, $y_i$ is the true value, $\hat{y}_i$ is the predicted value, $\bar{y}$ is the mean of the true values, and m is the number of samples.
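Equations (5)–(7) translate directly into a short helper, shown here with hypothetical values rather than the study's data:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, RMSE, and R^2 as defined in Equations (5)-(7)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    r2 = 1.0 - (err ** 2).sum() / ((y_true - y_true.mean()) ** 2).sum()
    return mae, rmse, r2
```

For example, true values [1, 2, 3, 4] against predictions [2, 2, 3, 4] give MAE = 0.25, RMSE = 0.5, and R² = 0.8.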

3. Results and Discussion

3.1. Data Acquisition

Aroma data were recorded every 5 min. The industrial cameras captured one image of the tea leaves every minute, resulting in a total of fifteen images collected over 5 min from three cameras. The average image features from these images were utilized as the final dataset. Similarly, WLR data were also recorded every 5 min, and the average WLR derived from the three racks served as the final WLR. Therefore, characteristic data such as the WLR, aroma, and image of tea leaves were obtained for the tea fermentation process, resulting in a total of 220 sets of data collected over three days. The data were randomly divided into training and test sets in a 7:3 ratio, yielding 154 sets of data for training and 66 for testing. The training set was employed for the initial learning and parameter tuning of the model, while the test set was used to evaluate the model’s performance on new data. Because the features differ in units and scales, Z-score normalization was applied to each dataset prior to modeling.
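The split-and-normalize step can be sketched as follows. One assumption worth flagging: here the Z-score parameters (mean and standard deviation) are fitted on the training set only and then applied to the test set, which avoids information leakage; the paper does not state whether it normalized each split separately.

```python
import numpy as np

def split_and_normalize(X, y, train_frac=0.7, seed=0):
    """Random 7:3 split followed by Z-score normalization; the scaler is
    fit on the training rows and reused for the test rows."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_train = int(round(train_frac * len(X)))
    tr, te = idx[:n_train], idx[n_train:]
    mu = X[tr].mean(axis=0)
    sigma = X[tr].std(axis=0) + 1e-12      # guard against zero variance
    return (X[tr] - mu) / sigma, y[tr], (X[te] - mu) / sigma, y[te]
```

With 220 samples this yields exactly the 154/66 partition used in the study.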

3.2. Fermentation Degree Model Based on WLR Data

WLR data were collected over a three-day fermentation period. Based on the collected data, a scatter plot illustrating the relationship between WLR and fermentation degree was generated, as shown in Figure 3a. Observations indicate a gradual increase in the WLR throughout the fermentation process. Specifically, the total WLR was 4.064% on the first day, 3.827% on the second day, and 3.950% on the third day, resulting in an average total WLR of 3.947% for the three days.
Using the SVR, RF, and LSTM algorithms, prediction models relating the WLR to the fermentation degree were developed. To observe the training progression and convergence of the WLR-LSTM model, its loss curve was plotted, as shown in Figure 3b. In deep learning, the loss curve is an intuitive measure of the difference between the model’s predictions and the true values. As illustrated in Figure 3b, the loss value was relatively high in the initial training stage, indicating a large deviation between the model’s predictions and the actual values. As the number of iterations increased, the model continuously adjusted its parameters and learned the patterns in the training data, and the loss value decreased accordingly. After 220 iterations, the loss value stabilized with no further obvious change, indicating that the model had converged and could make relatively stable predictions. The performance of the models based on the water loss rate is presented in Table 1. On the test set, the MAE of the three models ranged from 4.537 to 4.986, the RMSE fell between 5.980 and 6.463, and the R2 values were between 0.952 and 0.959, indicating comparable performance. Figure 4 shows the relationship between the predicted and true data of the test set based on the WLR data.

3.3. Fermentation Degree Model Based on Aroma Data

Using the aroma data collected over three days, a scatter plot illustrating the relationship between aroma and the fermentation degree was generated, as shown in Figure 5a. The aroma of the tea leaves showed a sharp increase at the beginning of the data collection, primarily due to the unstable collection environment. Subsequently, the aroma tended to stabilize and exhibited a gradual upward trend. The maximum aroma value recorded on the first day, indicating the end of fermentation, was 603; on the second day, it was 645; and on the third day, it was 636, resulting in an average maximum aroma value of 628 over the three days.
Machine learning algorithms such as SVR and RF, along with the LSTM deep learning algorithm, were utilized to establish prediction models relating aroma to the degree of fermentation. The loss curve of the Aroma-LSTM model is depicted in Figure 5b; the model converged after approximately 200 iterations, at which point the loss value had stabilized and the model could make relatively stable predictions. The performance of the models based on aroma data is shown in Table 2. The MAE for the three models on the test set ranged from 6.015 to 6.732, the RMSE varied between 8.109 and 9.416, and the R2 values fell within the range of 0.898 to 0.924. Notably, the Aroma-RF and Aroma-LSTM models demonstrated better performance than the Aroma-SVR model. The relationship between the predicted and true data of the test set based on aroma data is illustrated in Figure 6.

3.4. Fermentation Degree Prediction Model Based on Image Features

3.4.1. Changes in Surface Color During Fermentation

To reflect the changes in surface color during the fermentation of tea leaves more intuitively, one original image was selected from various time points, and the average color image for that time point was extracted, as shown in Figure 7a,b. To enhance the visibility of differences, we increased the saturation, brightness, and contrast to better show the color variations throughout the fermentation process, as depicted in Figure 7c. As can be seen from Figure 7c, the images exhibited noticeable differences in color and brightness across different stages of fermentation. As time progressed, the edges of the leaves turned red, and the overall color of the leaves changed from an initial bright green to darker green, with some leaves even displaying traces of reddish-brown. This phenomenon is consistent with the typical color changes observed in tea leaves during fermentation. These changes are attributed to the oxidation of polyphenolic compounds in the tea leaves, forming substances such as theaflavins, thearubigins, and theabrownins [20].

3.4.2. Feature Selection Results from Tree Models Based on Image Features

For feature selection, a random forest was employed to construct a supervised model, allowing for the assessment of feature importance during the training process. The results of the feature selection for image features are illustrated in Figure 8, where the x-axis represents each feature, and the y-axis represents the estimated importance value. A higher value indicates a greater influence on the output and greater importance of the feature. The importance scores for the image features are as follows: B = 1.047, H = 0.970, S = 0.895, U = 0.872, Sδ = 0.865, µ3 = 0.678, e = 0.662, Bδ = 0.567, Hδ = 0.560, I = 0.518, δ = 0.507, Gδ = 0.426, m = 0.400, r = 0.390, G = 0.387, R = 0.384, Iδ = 0.296, and Rδ = 0.269. From these importance metrics, it is evident that the blue channel average (B), mean hue (H), and mean saturation (S) are the three most critical features among the image characteristics. During the feature selection process, a threshold of 0.5 was set based on the model’s testing performance, which meant that only features with an importance score exceeding this threshold were selected. In total, 11 image features were chosen for modeling.

3.4.3. Model Results Based on Image Features

Predictive models for the relationship between image features and fermentation progress were developed using machine learning algorithms including SVR, RF, and LSTM. The performance metrics of these models are listed in Table 3. The testing results of the three models were similar, with MAE values ranging from 5.378 to 6.211, RMSE values between 7.392 and 7.953, and R2 values ranging from 0.927 to 0.937. The relationship between predicted data and true data of the test set based on the image features data is depicted in Figure 9. Additionally, the loss function curve of the Image-LSTM model, varying with the number of iterations, is shown in Figure 10. The model converged after approximately 230 iterations. At this time, the loss value calculated by the loss function tended to be stable, and the model could already make relatively stable predictions.

3.5. Fermentation Degree Prediction Model Based on Fusion Feature Data

3.5.1. Feature Selection Results from Tree Models Based on Fusion Feature Data

Feature selection results from the tree models were performed using fusion feature data, including the WLR, aroma, and image features. The results are shown in Figure 11. The features are ranked according to their importance from highest to lowest as follows: Aroma = 1.153, WLR = 1.130, Sδ = 0.665, B = 0.599, S = 0.581, U = 0.478, H = 0.477, r = 0.441, e = 0.438, Iδ = 0.394, Hδ = 0.386, Bδ = 0.385, Rδ = 0.326, µ3 = 0.320, I = 0.275, Gδ = 0.259, δ = 0.242, G = 0.239, R = 0.208, and m = 0.198. The results indicate that the aroma, WLR, standard deviation of saturation (Sδ), blue channel average (B), and mean saturation (S) are the five most important features. Based on the model testing performance, a threshold of 0.4 was established for feature selection, meaning that only features with scores exceeding this threshold were included. In total, nine features were selected for modeling.

3.5.2. Model Results Based on Fusion Feature Data

We utilized SVR, RF, and LSTM algorithms to develop predictive models for the relationship between fusion feature data and the fermentation degree. The performance of these models is reported in Table 4, yielding a MAE of 2.232–2.783, RMSE of 2.693–3.969, and R2 values of 0.982–0.991 on the test set. The relationship between predicted data and true data of the test set based on fusion feature data is depicted in Figure 12.

3.6. Model Comparison

Figure 13 illustrates the modeling performance of the SVR, RF, and LSTM models across different datasets. For the SVR models, including WLR-SVR, Aroma-SVR, Image-SVR, and Fusion-SVR, the MAE values were 4.537, 6.732, 6.211, and 2.232, respectively. The RMSE values were 6.214, 9.416, 7.392, and 2.693, and the R2 values were 0.955, 0.898, 0.937, and 0.991, respectively. In the RF models, which include WLR-RF, Aroma-RF, Image-RF, and Fusion-RF, the MAE values were 4.956, 6.082, 5.378, and 2.249, respectively; the RMSE values were 6.463, 8.109, 7.953, and 3.447; and the R2 values were 0.952, 0.924, 0.927, and 0.986. For the LSTM models—WLR-LSTM, Aroma-LSTM, Image-LSTM, and Fusion-LSTM—the MAE values were 4.986, 6.015, 5.675, and 2.783, respectively; the RMSE values were 5.980, 8.123, 7.408, and 3.969; and the R2 values were 0.959, 0.924, 0.937, and 0.982. The results indicate that compared to models based on single feature data, models based on fusion feature data can significantly reduce MAE and RMSE values while increasing the R2 value, demonstrating a notable improvement in model accuracy.

3.7. Model Optimization

Based on the fusion feature data, the Sparrow Search Algorithm (SSA) was applied to optimize the hyperparameters of the SVR, RF, and LSTM models, thereby establishing predictive models for the relationship between the fused features and the fermentation degree. The performance metrics of these models are detailed in Table 5, and the relationship between the predicted and true data is illustrated in Figure 14. For the Fusion-SSA-SVR model, the key optimized hyperparameters were the kernel scaling factor (KernelScale = 2.449), the upper limit on sample weights (BoxConstraint = 19.785), and the insensitivity band width (Epsilon = 0.001). For the Fusion-SSA-RF model, the number of decision trees was optimized to Tree num = 68 and the minimum leaf node size to MinLeafSize = 1. For the Fusion-SSA-LSTM model, the numbers of hidden units in the two LSTM layers were optimized to 76 and 47, respectively, and the dropout rate was optimized to 0.258.
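The SSA is a population-based swarm optimizer in which "producers" explore the search space and "scroungers" follow the current best individual. The sketch below is a greatly simplified variant minimizing a toy sphere function; the population split, update rules, and parameters are illustrative assumptions and do not reproduce the authors' implementation, but the same loop could score hyperparameter candidates by validation error.

```python
import numpy as np

def ssa_minimize(f, dim, n=20, iters=100, lb=-5.0, ub=5.0, seed=0):
    """Greatly simplified SSA sketch: producers explore, scroungers
    move toward the current best, and the global best is kept greedily."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))
    fit = np.array([f(x) for x in X])
    best, best_fit = X[fit.argmin()].copy(), float(fit.min())
    n_prod = max(1, n // 5)  # assumed producer proportion
    for _ in range(iters):
        order = fit.argsort()
        X, fit = X[order], fit[order]
        for i in range(n_prod):  # producers: shrink toward origin or take a Gaussian step
            if rng.random() < 0.8:
                X[i] = X[i] * np.exp(-(i + 1) / (rng.random() * iters + 1.0))
            else:
                X[i] = X[i] + rng.normal(size=dim)
        for i in range(n_prod, n):  # scroungers: move toward the best producer
            X[i] = X[i] + rng.random() * (X[0] - X[i])
        X = np.clip(X, lb, ub)
        fit = np.array([f(x) for x in X])
        if float(fit.min()) < best_fit:
            best, best_fit = X[fit.argmin()].copy(), float(fit.min())
    return best, best_fit

sphere = lambda x: float(np.sum(x ** 2))
best, best_fit = ssa_minimize(sphere, dim=2)
print(best_fit)
```

In hyperparameter tuning, `f` would map a candidate vector (e.g., KernelScale and BoxConstraint, or hidden-unit counts) to a cross-validation error, and the returned `best` would give the optimized settings.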
The comparison between the Fusion-SVM and Fusion-SSA-SVM models reveals a notable improvement on the test set, with the MAE decreasing from 2.232 to 1.834, the RMSE declining from 2.693 to 2.600, and the R2 value increasing from 0.991 to 0.992. Similar trends were observed between the Fusion-RF and Fusion-SSA-RF models, with the MAE decreasing from 2.249 to 2.078, the RMSE dropping from 3.447 to 3.230, and the R2 value rising from 0.986 to 0.988. For the Fusion-LSTM model compared with the Fusion-SSA-LSTM model, the MAE dropped from 2.783 to 1.703, the RMSE declined from 3.969 to 2.258, and the R2 value increased from 0.982 to 0.994. The radar chart comparing model performance before and after SSA optimization is presented in Figure 15. The results indicate that the SSA consistently enhanced model accuracy, with the Fusion-SSA-LSTM model performing best on the test set. The loss curves for the Fusion-LSTM and Fusion-SSA-LSTM models are presented in Figure 16. As Figure 16a shows, the loss of the Fusion-LSTM model stabilized after about 150 iterations, with no further significant change, indicating that the model had converged. Figure 16b shows that the Fusion-SSA-LSTM model converged even faster, after only about 100 iterations. The SSA-optimized Fusion-SSA-LSTM model therefore learns the features in the data more efficiently, accelerating the decrease in the loss value; this advantage in convergence speed is significant for improving both training efficiency and practical application.
To evaluate the running speed of the Fusion-SSA-LSTM model, five rounds of training and testing were carried out on a computer with an Intel(R) Core(TM) i5-10400F CPU @ 2.90 GHz, an NVIDIA GeForce GTX 1660 SUPER graphics card, and 16 GB of memory. After the model was generated, a new set of feature data was predicted. On average, one round of training and testing took 101.64 s, and predicting one new set of feature data took 0.012 s. This speed is sufficient to meet the system requirements for real-time monitoring on a tea fermentation production line.
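A per-prediction latency figure like the 0.012 s reported above can be measured with a wall-clock timer. The sketch below uses a trivial stand-in predictor; in the study this would be the trained Fusion-SSA-LSTM model.

```python
import time

def average_latency(predict, sample, rounds=5):
    """Average wall-clock time of predict(sample) over several rounds."""
    times = []
    for _ in range(rounds):
        start = time.perf_counter()
        predict(sample)
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)

# Hypothetical stand-in predictor over a nine-feature input vector.
dummy_predict = lambda features: sum(features) / len(features)
latency = average_latency(dummy_predict, [0.5] * 9)
print(f"average prediction time: {latency:.6f} s")
```

Averaging over several rounds, as the authors did with five, smooths out scheduling jitter in the individual measurements.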

4. Conclusions

Fermentation is one of the key processes in Tieguanyin oolong tea production, significantly influencing its flavor and aroma, so rapid assessment of the fermentation degree is urgently needed in industrial production. This study modeled the fermentation process of Tieguanyin tea, aiming to enhance the intelligence of fermentation assessment through the fusion of multi-source information, including the water loss rate (WLR), aroma, and image features. The results indicate that, compared with single-feature monitoring methods, the multi-source approach offers richer information, stronger comprehensive characterization ability, higher fault tolerance, and closer resemblance to human sensory cognition, all of which contribute to increased model accuracy. Additionally, the introduction of the Sparrow Search Algorithm (SSA) further enhances model accuracy and convergence speed, with the Fusion-SSA-LSTM model achieving an R2 value of 0.994 on the test set. This model enables precise control over the fermentation process, supports improved tea quality, and provides a scientific basis for the automated production of Tieguanyin oolong tea.

Author Contributions

Conceptualization, Y.H. and L.X.; methodology, Y.H. and Y.C.; software, C.Z. and C.L.; validation, J.Z.; formal analysis, Y.H., J.Z., Y.C. and C.Z.; investigation, Y.H.; resources, Y.C. and J.Z.; data curation, J.Z. and L.X.; writing—original draft preparation, Y.H.; writing—review and editing, J.Z., C.Z., C.L., T.W. and Y.C.; visualization, Y.H.; supervision, Y.C.; project administration, Y.C.; funding acquisition, Y.H. and J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the Fujian Natural Science Foundation (2022J01484) and Key Technology for Digitization of Characteristic Agricultural Industries in Fujian Province (XTCXGC2021015).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to thank the editor and reviewers for their assistance and valuable comments.

Conflicts of Interest

Author Liangde Xiao was employed by the company Fujian Zhi Cha Intelligent Technology Co., Ltd. He participated in conceptualization and data curation in the study. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figure 1. Experimental flowchart.
Figure 2. Schematic of the data acquisition equipment.
Figure 3. Fermentation degree and modeling monitoring based on WLR data: (a) scatter plot of WLR and fermentation degree and (b) the loss curve of the WLR-LSTM model.
Figure 4. Relationship between predicted data and true data of the test set based on WLR data: (a) WLR-SVM; (b) WLR-RF; and (c) WLR-LSTM.
Figure 5. Fermentation degree and modeling monitoring based on aroma data: (a) scatter plot of aroma and fermentation degree and (b) the loss curve of the Aroma-LSTM model.
Figure 6. Relationship between predicted data and true data of the test set based on aroma data: (a) Aroma-SVM; (b) Aroma-RF; and (c) Aroma-LSTM.
Figure 7. Surface color change during fermentation: (a) original image; (b) average color image; and (c) enhanced average color image.
Figure 8. The results of the feature selection for image features.
Figure 9. The relationship between predicted data and true data of the test set based on the image features data: (a) Image-SVM; (b) Image-RF; and (c) Image-LSTM.
Figure 10. The loss curve of the Image-LSTM model.
Figure 11. Feature selection results from tree model based on fusion feature data.
Figure 12. The relationship between predicted data and true data based on fusion feature data: (a) Fusion-SVM; (b) Fusion-RF; and (c) Fusion-LSTM.
Figure 13. Comparative effect of models in different datasets: (a) SVR; (b) RF; and (c) LSTM.
Figure 14. Relationship between predicted data and true data based on fusion feature data: (a) Fusion-SSA-SVM; (b) Fusion-SSA-RF; and (c) Fusion-SSA-LSTM.
Figure 15. The radar chart comparing model performance before and after SSA optimization: (a) SVR; (b) RF; and (c) LSTM.
Figure 16. Model loss curve chart: (a) Fusion-LSTM and (b) Fusion-SSA-LSTM.
Table 1. Performance of models based on WLR data.
| Model | Training MAE | Training RMSE | Training R2 | Test MAE | Test RMSE | Test R2 |
|---|---|---|---|---|---|---|
| WLR-SVM | 4.056 | 5.691 | 0.961 | 4.537 | 6.214 | 0.955 |
| WLR-RF | 3.430 | 4.595 | 0.975 | 4.956 | 6.463 | 0.952 |
| WLR-LSTM | 4.552 | 5.624 | 0.962 | 4.986 | 5.980 | 0.959 |
Table 2. Performance of models based on aroma data.
| Model | Training MAE | Training RMSE | Training R2 | Test MAE | Test RMSE | Test R2 |
|---|---|---|---|---|---|---|
| Aroma-SVM | 6.643 | 9.485 | 0.892 | 6.732 | 9.416 | 0.898 |
| Aroma-RF | 4.368 | 5.688 | 0.961 | 6.082 | 8.109 | 0.924 |
| Aroma-LSTM | 5.662 | 7.097 | 0.940 | 6.015 | 8.123 | 0.924 |
Table 3. Performance of models based on image features.
| Model | Training MAE | Training RMSE | Training R2 | Test MAE | Test RMSE | Test R2 |
|---|---|---|---|---|---|---|
| Image-SVM | 5.815 | 7.621 | 0.930 | 6.211 | 7.392 | 0.937 |
| Image-RF | 3.829 | 5.550 | 0.963 | 5.378 | 7.953 | 0.927 |
| Image-LSTM | 5.621 | 7.441 | 0.934 | 5.675 | 7.408 | 0.937 |
Table 4. Performance of models based on fusion feature data.
| Model | Training MAE | Training RMSE | Training R2 | Test MAE | Test RMSE | Test R2 |
|---|---|---|---|---|---|---|
| Fusion-SVM | 2.835 | 3.635 | 0.984 | 2.232 | 2.693 | 0.991 |
| Fusion-RF | 1.582 | 2.335 | 0.993 | 2.249 | 3.447 | 0.986 |
| Fusion-LSTM | 2.517 | 3.668 | 0.984 | 2.783 | 3.969 | 0.982 |
Table 5. Performance of optimization models based on fusion feature data.
| Model | Training MAE | Training RMSE | Training R2 | Test MAE | Test RMSE | Test R2 |
|---|---|---|---|---|---|---|
| Fusion-SSA-SVM | 2.335 | 3.775 | 0.985 | 1.834 | 2.600 | 0.992 |
| Fusion-SSA-RF | 0.958 | 1.386 | 0.998 | 2.078 | 3.230 | 0.988 |
| Fusion-SSA-LSTM | 1.689 | 2.257 | 0.994 | 1.703 | 2.258 | 0.994 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Huang, Y.; Zhao, J.; Zheng, C.; Li, C.; Wang, T.; Xiao, L.; Chen, Y. The Fermentation Degree Prediction Model for Tieguanyin Oolong Tea Based on Visual and Sensing Technologies. Foods 2025, 14, 983. https://doi.org/10.3390/foods14060983