An Evaluation of Large Language Models for Supplementing a Food Extrusion Dataset
Abstract
1. Introduction
- RQ1: What are the key challenges of manually curating high-quality structured datasets for food extrusion research, and how can LLMs address these challenges? Manually curating high-quality structured datasets for food extrusion research is time-consuming, resource-intensive, and limited by human capacity, making it difficult to build comprehensive datasets that keep pace with the growing volume of research. LLMs address these challenges by automating data extraction, increasing the dataset size well beyond what is feasible with manual effort alone. We present an approach that uses LLMs to extract information by learning from the human-annotated data (see Section 2). By automating the process, LLMs reduce the time and effort required for dataset curation, enabling the creation of dynamic, scalable datasets. The domain expert reported that automation can reduce manual effort by up to 50%, requiring only verification of the extracted information for correctness, which aligns with the findings of Wan et al. [24].
- RQ2: How do LLMs perform in automating the extraction of domain-specific information from the scientific literature compared to manual human annotation? Our experiments showed that LLMs effectively identify key details about product information, process and formulation details, and variables and characterisation methods. Using a few-shot learning approach informed by domain expertise improved the accuracy and reliability of LLM-generated data. LLMs also significantly reduce the time and effort required, potentially cutting manual effort by 50%. They performed well in extracting short-answer information and, in some cases, even outperformed human annotation (see Section 3.6), highlighting their potential to enhance research synthesis and dataset curation in food extrusion. The approach can be extended to other domains, within and beyond food science, that require structured data extraction from the scientific literature.
2. Materials and Methods
2.1. Food Extrusion Literature Database
2.1.1. Literature Search and Filtering
2.1.2. Manual Data Extraction
- Publication details: Includes metadata such as study ID within the dataset, study title, publication year, journal, journal impact factor, corresponding author country, article keywords, and the availability of quantitative data. These details provide context on the credibility and accessibility of each study and allow users to track bibliometric trends in food extrusion research.
- Product information: Specifies the product type(s) investigated in the study (e.g., meat analogue, breakfast cereal, snack), which is valuable for identifying application-specific trends and evaluating the data in the appropriate context, as extrusion processing conditions and characterisation methods often vary significantly depending on the type of product.
- Process details: Captures key details of the extruder process, including the extruder model, manufacturer, manufacturer country, extrusion screw type (i.e., single or twin screw), screw diameter (mm), length-to-diameter ratio, die dimensions (mm), and the scale of the extrusion setup (i.e., lab, pilot, or commercial scale). These parameters define the operational constraints of the extrusion process and help with comparing equipment setups across different studies.
- Formulation details: Describes the ingredients used within the product formulation and the in-barrel moisture content (%), both of which significantly influence the structural, textural, nutritional, and sensorial properties of extruded products. This category is important for understanding how raw material composition affects extrusion outcomes.
- Variables and characterisation: Categorises input variables (i.e., factors selected as independent variables in the experimental design), response variables (i.e., system dynamics of the extrusion process such as residence time and specific mechanical energy), feed characterisation metrics (i.e., characterisation performed on the raw feed ingredients), and extrudate characterisation metrics, which includes both quantitative (e.g., numerical measurements) and qualitative (e.g., image-based) methods.
- Overall study information: Includes a subjective rating (out of 10) from the domain expert and a paragraph summary of the study. These additional insights contextualise each study, helping researchers filter studies by relevance and identify overarching themes in food extrusion research.
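As a concrete illustration, the six attribute categories above can be represented as a single structured record per study. The sketch below is hypothetical: field names and types are illustrative choices, not the dataset's exact attribute labels, and only a subset of attributes is shown.

```python
# Hypothetical sketch of the annotation schema as a typed record.
# Field names are illustrative; the actual dataset's attribute labels
# and full field list may differ.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ExtrusionStudyRecord:
    # Publication details
    study_id: str
    title: str
    year: int
    journal: str
    corresponding_author_country: str
    keywords: List[str] = field(default_factory=list)
    # Product information
    product_types: List[str] = field(default_factory=list)  # e.g. "meat analogue"
    # Process details
    extruder_model: str = ""
    screw_type: str = ""                      # "single" or "twin"
    screw_diameter_mm: Optional[float] = None
    length_to_diameter_ratio: Optional[float] = None
    scale: str = ""                           # "lab", "pilot", or "commercial"
    # Formulation details
    ingredients: List[str] = field(default_factory=list)
    in_barrel_moisture_pct: Optional[float] = None
    # Variables and characterisation
    input_variables: List[str] = field(default_factory=list)
    response_variables: List[str] = field(default_factory=list)
    # Overall study information
    expert_rating: Optional[int] = None       # subjective rating out of 10
    summary: str = ""
```

A record like this doubles as the target output format for the LLM extraction step: the model is asked to fill the same fields that the human annotator completed.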
2.1.3. Study Exclusion
2.2. LLM-Powered Information Extraction Pipeline
2.2.1. Parsing Source PDFs
2.2.2. Information Extraction Using LLMs
- Zero-shot learning: The LLM extracts information from a set of articles without benefiting from the existing human-annotated data (the dataset in Section 2.1). The LLM is expected to generate the correct structured output based solely on the provided document and any inherent ability to understand the schema.
- Few-shot learning: In the few-shot setting, we use in-context learning (ICL) [9,28], which involves providing the LLM with a set of examples as part of the prompt, in addition to the data to be analysed. Here, the examples are drawn from the dataset in Section 2.1 and help the model understand how to map schema data attributes to the desired outputs. We use 1-shot prompting following Ghosh et al. [6]. As ICL has been shown to be sensitive to the provided examples [29,30], selecting appropriate examples is crucial for improving performance. Rather than relying on random sampling, various selection strategies exist, including semantic-based methods such as KATE [15] and vocabulary-based approaches such as BM25 [31,32]. In this study, we use BM25 [33] due to its simplicity and computational efficiency. Originally developed as a bag-of-words retrieval model, BM25 retrieves articles from the human-corrected dataset (the training examples) by scoring their textual similarity to the article being analysed. By selecting contextually similar articles, the intuition is that the accompanying examples of extracted data will be more relevant to the article at hand, thereby improving the LLM's performance in extracting information.
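The BM25-based example selection described above can be sketched in a few lines. The following is a minimal illustration using the standard Okapi BM25 scoring formula with whitespace tokenisation; function names and parameter defaults (k1 = 1.5, b = 0.75) are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of BM25-based 1-shot example selection (illustrative,
# not the paper's codebase). Implements the classic Okapi BM25 formula.
import math
from collections import Counter

def bm25_scores(query_tokens, corpus_tokens, k1=1.5, b=0.75):
    """Score every tokenised document in `corpus_tokens` against the query."""
    n_docs = len(corpus_tokens)
    avgdl = sum(len(d) for d in corpus_tokens) / n_docs
    # Document frequency of each term across the corpus.
    df = Counter()
    for doc in corpus_tokens:
        df.update(set(doc))
    scores = []
    for doc in corpus_tokens:
        tf = Counter(doc)
        score = 0.0
        for term in query_tokens:
            if term not in tf:
                continue
            idf = math.log((n_docs - df[term] + 0.5) / (df[term] + 0.5) + 1)
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(doc) / avgdl)
            )
        scores.append(score)
    return scores

def select_one_shot_example(new_article_text, annotated_article_texts):
    """Return the index of the annotated article most similar to the new one."""
    tokenise = lambda text: text.lower().split()
    corpus = [tokenise(a) for a in annotated_article_texts]
    scores = bm25_scores(tokenise(new_article_text), corpus)
    return max(range(len(scores)), key=scores.__getitem__)
```

In a pipeline like the one described here, the human-corrected record of the top-ranked article would then be serialised into the prompt as the 1-shot example, followed by the parsed text of the article to be analysed.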
2.2.3. Postprocessing
2.2.4. Large Language Models
3. Results
3.1. Dataset
3.2. Evaluation Metrics
3.3. Automatic Evaluation
3.4. Impact of Training Set Size
3.5. Results per Data Attribute Class
3.6. Additional Content Verification Annotations
4. Discussion
4.1. Outcomes and Challenges
4.2. LLM-Powered Literature Review
4.3. Cost and Energy Consumption of LLM-Powered IE
4.4. Automatic Evaluation of LLM-Powered IE
5. Advantages and Limitations
5.1. Advantages
5.2. Limitations
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Harper, J.M.; Clark, J.P. Food extrusion. Crit. Rev. Food Sci. Nutr. 1979, 11, 155–215. [Google Scholar] [CrossRef] [PubMed]
- Singh, S.; Gamlath, S.; Wakeling, L. Nutritional aspects of food extrusion: A review. Int. J. Food Sci. Technol. 2007, 42, 916–929. [Google Scholar] [CrossRef]
- Choton, S.; Gupta, N.; Bandral, J.D.; Anjum, N.; Choudary, A. Extrusion technology and its application in food processing: A review. Pharma Innov. J. 2020, 9, 162–168. [Google Scholar] [CrossRef]
- Dalbhagat, C.G.; Mahato, D.K.; Mishra, H.N. Effect of extrusion processing on physicochemical, functional and nutritional characteristics of rice and rice-based products: A review. Trends Food Sci. Technol. 2019, 85, 226–240. [Google Scholar] [CrossRef]
- Borg, C.K.; Frey, C.; Moh, J.; Pollock, T.M.; Gorsse, S.; Miracle, D.B.; Senkov, O.N.; Meredig, B.; Saal, J.E. Expanded dataset of mechanical properties and observed phases of multi-principal element alloys. Sci. Data 2020, 7, 430. [Google Scholar] [CrossRef]
- Ghosh, S.; Brodnik, N.; Frey, C.; Holgate, C.; Pollock, T.; Daly, S.; Carton, S. Toward Reliable Ad-hoc Scientific Information Extraction: A Case Study on Two Materials Dataset. In Proceedings of the Findings of the Association for Computational Linguistics: ACL 2024, Bangkok, Thailand, 11–16 August 2024; pp. 15109–15123. [Google Scholar] [CrossRef]
- Fukagawa, N.K.; McKillop, K.; Pehrsson, P.R.; Moshfegh, A.; Harnly, J.; Finley, J. USDA’s FoodData Central: What is it and why is it needed today? Am. J. Clin. Nutr. 2022, 115, 619–624. [Google Scholar] [CrossRef]
- Caracciolo, C.; Stellato, A.; Morshed, A.; Johannsen, G.; Rajbhandari, S.; Jaques, Y.; Keizer, J. The AGROVOC linked dataset. Semant. Web 2013, 4, 341–348. [Google Scholar] [CrossRef]
- Brown, T.B. Language models are few-shot learners. arXiv 2020, arXiv:2005.14165. [Google Scholar]
- Touvron, H.; Martin, L.; Stone, K.; Albert, P.; Almahairi, A.; Babaei, Y.; Bashlykov, N.; Batra, S.; Bhargava, P.; Bhosale, S.; et al. Llama 2: Open foundation and fine-tuned chat models. arXiv 2023, arXiv:2307.09288. [Google Scholar]
- Achiam, J.; Adler, S.; Agarwal, S.; Ahmad, L.; Akkaya, I.; Aleman, F.L.; Almeida, D.; Altenschmidt, J.; Altman, S.; Anadkat, S.; et al. Gpt-4 technical report. arXiv 2023, arXiv:2303.08774. [Google Scholar]
- Anthropic. The Claude 3 Model Family: Opus, Sonnet, Haiku. 2024. Available online: https://www.anthropic.com/news/claude-3-family (accessed on 13 August 2024).
- Team, G.; Anil, R.; Borgeaud, S.; Alayrac, J.B.; Yu, J.; Soricut, R.; Schalkwyk, J.; Dai, A.M.; Hauth, A.; Millican, K.; et al. Gemini: A family of highly capable multimodal models. arXiv 2023, arXiv:2312.11805. [Google Scholar]
- Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Chi, E.; Le, Q.; Zhou, D. Chain of thought prompting elicits reasoning in large language models. arXiv 2022, arXiv:2201.11903. [Google Scholar]
- Liu, J.; Shen, D.; Zhang, Y.; Dolan, W.B.; Carin, L.; Chen, W. What Makes Good In-Context Examples for GPT-3? In Proceedings of the Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, Dublin, Ireland, 22–27 May 2022; pp. 100–114. [Google Scholar]
- Hegselmann, S.; Buendia, A.; Lang, H.; Agrawal, M.; Jiang, X.; Sontag, D. Tabllm: Few-shot classification of tabular data with large language models. In Proceedings of the International Conference on Artificial Intelligence and Statistics, Valencia, Spain, 25–27 April 2023; pp. 5549–5581. [Google Scholar]
- Morris, M.R. Scientists’ Perspectives on the Potential for Generative AI in their Fields. arXiv 2023, arXiv:2304.01420. [Google Scholar]
- Hope, T.; Downey, D.; Weld, D.S.; Etzioni, O.; Horvitz, E. A computational inflection for scientific discovery. Commun. ACM 2023, 66, 62–73. [Google Scholar] [CrossRef]
- Bolanos, F.; Salatino, A.; Osborne, F.; Motta, E. Artificial intelligence for literature reviews: Opportunities and challenges. Artif. Intell. Rev. 2024, 57, 259. [Google Scholar] [CrossRef]
- Olivetti, E.A.; Cole, J.M.; Kim, E.; Kononova, O.; Ceder, G.; Han, T.Y.J.; Hiszpanski, A.M. Data-driven materials research enabled by natural language processing and information extraction. Appl. Phys. Rev. 2020, 7, 041317. [Google Scholar] [CrossRef]
- Şahinuç, F.; Tran, T.; Grishina, Y.; Hou, Y.; Chen, B.; Gurevych, I. Efficient Performance Tracking: Leveraging Large Language Models for Automated Construction of Scientific Leaderboards. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, Miami, FL, USA, 12–16 November 2024; pp. 7963–7977. [Google Scholar]
- Peng, R.; Liu, K.; Yang, P.; Yuan, Z.; Li, S. Embedding-based retrieval with llm for effective agriculture information extracting from unstructured data. arXiv 2023, arXiv:2308.03107. [Google Scholar]
- Zheng, Z.; Zhang, O.; Borgs, C.; Chayes, J.T.; Yaghi, O.M. ChatGPT chemistry assistant for text mining and the prediction of MOF synthesis. J. Am. Chem. Soc. 2023, 145, 18048–18062. [Google Scholar] [CrossRef]
- Wan, S.; Bölücü, N.; Duenser, A.; Irons, J.; Lee, C.; Jin, B.; Rybinski, M.; Walker, S.; Yang, H. A Case for User-Centred NLP Methodology: Developing a Science Literature AI Assistant CSIRO Technical Report. Id:EP2025-1250. CSIRO. 2025. Available online: https://publications.csiro.au/publications/publication/PIcsiro:EP2025-1250 (accessed on 27 March 2025).
- Bouvier, J. Twin screw versus single screw in feed extrusion processing. In Proceedings of the Extrusion Technology in Feed and Food Processing, Novi Sad, Serbia, 19–21 October 2010; Institut for Food Technology: Novi Sad, Serbia, 2010. [Google Scholar]
- Lopez, P. GROBID: Combining automatic bibliographic data recognition and term extraction for scholarship publications. In Proceedings of the Research and Advanced Technology for Digital Libraries: 13th European Conference, ECDL 2009, Corfu, Greece, 27 September–2 October 2009; Proceedings 13. Springer: Berlin/Heidelberg, Germany, 2009; pp. 473–474. [Google Scholar]
- Meuschke, N.; Jagdale, A.; Spinde, T.; Mitrović, J.; Gipp, B. A benchmark of pdf information extraction tools using a multi-task and multi-domain evaluation framework for academic documents. In Proceedings of the International Conference on Information, Copenhagen, Denmark, 13–17 March 2023; Springer Nature: Cham, Switzerland, 2023; pp. 383–405. [Google Scholar]
- Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; Sutskever, I. Language models are unsupervised multitask learners. OpenAI Blog 2019, 1, 9. [Google Scholar]
- Lu, Y.; Bartolo, M.; Moore, A.; Riedel, S.; Stenetorp, P. Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland, 22–27 May 2022; pp. 8086–8098. [Google Scholar]
- Agrawal, S.; Zhou, C.; Lewis, M.; Zettlemoyer, L.; Ghazvininejad, M. In-context Examples Selection for Machine Translation. In Proceedings of the Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, 9–14 July 2023; pp. 8857–8873. [Google Scholar]
- Xu, Y.; Zhu, C.; Wang, S.; Sun, S.; Cheng, H.; Liu, X.; Gao, J.; He, P.; Zeng, M.; Huang, X. Human parity on commonsenseqa: Augmenting self-attention with external attention. arXiv 2021, arXiv:2112.03254. [Google Scholar]
- Wang, S.; Xu, Y.; Fang, Y.; Liu, Y.; Sun, S.; Xu, R.; Zhu, C.; Zeng, M. Training Data is More Valuable than You Think: A Simple and Effective Method by Retrieving from Training Data. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland, 22–27 May 2022; pp. 3170–3179. [Google Scholar]
- Robertson, S.; Zaragoza, H. The probabilistic relevance framework: BM25 and beyond. Found. Trends® Inf. Retr. 2009, 3, 333–389. [Google Scholar] [CrossRef]
- Xu, D.; Chen, W.; Peng, W.; Zhang, C.; Xu, T.; Zhao, X.; Wu, X.; Zheng, Y.; Wang, Y.; Chen, E. Large language models for generative information extraction: A survey. Front. Comput. Sci. 2024, 18, 186357. [Google Scholar] [CrossRef]
- He, K.; Mao, R.; Lin, Q.; Ruan, Y.; Lan, X.; Feng, M.; Cambria, E. A survey of large language models for healthcare: From data, technology, and applications to accountability and ethics. Inf. Fusion 2025, 118, 102963. [Google Scholar] [CrossRef]
- Gutiérrez, B.J.; McNeal, N.; Washington, C.; Chen, Y.; Li, L.; Sun, H.; Su, Y. Thinking about GPT-3 In-Context Learning for Biomedical IE? Think Again. In Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, 7–11 December 2022; pp. 4497–4512. [Google Scholar]
- Bölücü, N.; Rybinski, M.; Wan, S. Impact of sample selection on in-context learning for entity extraction from scientific writing. In Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, 6–10 December 2023; pp. 5090–5107. [Google Scholar]
- Team, G.G. Gemini 1.5: Unlocking Multimodal Understanding Across Millions of Tokens of Context. 2024. Available online: https://goo.gle/GeminiV1-5 (accessed on 14 February 2025).
- Meta. Llama3 Model Card. 2024. Available online: https://huggingface.co/collections/meta-llama/ (accessed on 13 August 2024).
- Jiang, A.Q.; Sablayrolles, A.; Mensch, A.; Bamford, C.; Chaplot, D.S.; de las Casas, D.; Bressand, F.; Lengyel, G.; Lample, G.; Saulnier, L.; et al. Mistral 7B. arXiv 2023, arXiv:2310.06825. [Google Scholar]
- NVIDIA. Llama-3_1-Nemotron-70B-Instruct | NVIDIA NIM. 2024. Available online: https://build.nvidia.com/nvidia/llama-3_1-nemotron-70b-instruct/modelcard (accessed on 7 January 2025).
- Gilardi, F.; Alizadeh, M.; Kubli, M. ChatGPT outperforms crowd workers for text-annotation tasks. Proc. Natl. Acad. Sci. USA 2023, 120, e2305016120. [Google Scholar] [CrossRef]
- He, X.; Lin, Z.; Gong, Y.; Zhang, H.; Lin, C.; Jiao, J.; Yiu, S.M.; Duan, N.; Chen, W. Annollm: Making large language models to be better crowdsourced annotators. arXiv 2023, arXiv:2303.16854. [Google Scholar]
- Aldeen, M.; Luo, J.; Lian, A.; Zheng, V.; Hong, A.; Yetukuri, P.; Cheng, L. ChatGPT vs. Human Annotators: A Comprehensive Analysis of ChatGPT for Text Annotation. In Proceedings of the 2023 International Conference on Machine Learning and Applications (ICMLA), Jacksonville Riverfront, FL, USA, 15–17 December 2023; pp. 602–609. [Google Scholar]
- Törnberg, P. Chatgpt-4 outperforms experts and crowd workers in annotating political twitter messages with zero-shot learning. arXiv 2023, arXiv:2304.06588. [Google Scholar]
- Zhu, Y.; Zhang, P.; Haq, E.U.; Hui, P.; Tyson, G. Can chatgpt reproduce human-generated labels? a study of social computing tasks. arXiv 2023, arXiv:2304.10145. [Google Scholar]
- Ziems, C.; Held, W.; Shaikh, O.; Chen, J.; Zhang, Z.; Yang, D. Can large language models transform computational social science? Comput. Linguist. 2024, 50, 237–291. [Google Scholar] [CrossRef]
- Schmidt, V.; Goyal, K.; Joshi, A.; Feld, B.; Conell, L.; Laskaris, N.; Blank, D.; Wilson, J.; Friedler, S.; Luccioni, S. CodeCarbon: Estimate and Track Carbon Emissions from Machine Learning Computing. 2021, 4658424. Available online: https://zenodo.org/records/11171501 (accessed on 27 March 2025).
- Banerjee, S.; Lavie, A. METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, Ann Arbor, MI, USA, 29 June 2005; pp. 65–72. [Google Scholar]
- Xu, W.; Napoles, C.; Pavlick, E.; Chen, Q.; Callison-Burch, C. Optimizing statistical machine translation for text simplification. Trans. Assoc. Comput. Linguist. 2016, 4, 401–415. [Google Scholar] [CrossRef]
Category | LLM | Context Length | Response Time (s) | Cost (USD/M Tokens)
---|---|---|---|---
Proprietary | GPT-4o | 128k | 0.40 |
Proprietary | o1-mini | 128k | 14.29 |
Proprietary | Claude v1 | 200k | 0.78 |
Proprietary | Claude 3 Sonnet | 200k | 0.78 |
Proprietary | Gemini 1.5 Pro | 2m | 0.74 |
Open-source | Llama 3.1 8B | 128k | 0.37 |
Open-source | Llama 3.1 70B | 128k | 0.43 |
Open-source | Llama 3.1 405B | 128k | 0.69 |
Open-source | Mistral-7B | 8k | 0.28 |
Open-source | Llama 3.1 Nemotron 70B | 128k | 0.56 |
Model | Zero-Shot Precision | Zero-Shot Recall | Zero-Shot F1 | 1-Shot Precision | 1-Shot Recall | 1-Shot F1
---|---|---|---|---|---|---
GPT-4o | 14.47 | 14.26 | 14.36 | 29.30 | 28.46 | 28.87
o1-mini | 14.38 | 14.00 | 14.19 | 19.76 | 18.75 | 19.24
Claude 3 Sonnet | 11.18 | 10.96 | 11.07 | 26.59 | 26.06 | 26.33
Claude v1 | 11.15 | 10.08 | 10.59 | 30.42 | 29.32 | 29.86
Llama 3.1 8B | 12.66 | 12.33 | 12.49 | 30.55 | 29.62 | 30.08
Llama 3.1 70B | 14.27 | 14.24 | 14.26 | 30.46 | 29.76 | 30.11
Llama 3.1 405B | 14.45 | 14.32 | 14.38 | 31.17 | 30.11 | 30.64
Mistral-7B | 10.48 | 10.10 | 10.19 | 28.17 | 27.12 | 27.23
Nemotron 70B | 13.75 | 13.14 | 13.24 | 29.52 | 28.45 | 28.78
Size | Precision | Recall | F1 Score
---|---|---|---
Zero-shot | 14.27 | 14.24 | 14.26
Small (10 articles) | 25.12 | 23.56 | 24.34
Small (10 articles) | 19.57 | 18.56 | 19.06
Small (10 articles) | 19.61 | 19.71 | 19.66
Medium (50 articles) | 30.84 | 29.31 | 30.08
Medium (50 articles) | 31.11 | 29.88 | 29.99
Medium (50 articles) | 29.73 | 27.56 | 28.65
Large (84 articles) | 30.46 | 29.76 | 30.11
Schema | Zero-Shot Precision | Zero-Shot Recall | Zero-Shot F1 | 1-Shot Precision | 1-Shot Recall | 1-Shot F1
---|---|---|---|---|---|---
Extruder product type | 2.78 | 2.67 | 2.72 | 52.35 | 52.35 | 52.35
Extruder manufacturer company | 36.51 | 30.67 | 33.33 | 49.65 | 46.98 | 48.28
Extruder model name | 41.67 | 36.67 | 39.01 | 55.00 | 51.68 | 53.29
Extruder manufacturer country | 74.60 | 62.67 | 68.12 | 77.70 | 72.48 | 75.00
Scale | 1.33 | 1.33 | 1.33 | 21.48 | 21.48 | 21.48
Screw type | 0.00 | 0.00 | 0.00 | 91.72 | 89.26 | 90.47
Screw dimension | 0.00 | 0.00 | 0.00 | 24.24 | 21.48 | 22.78
Length to diameter ratio | 31.25 | 20.00 | 24.39 | 24.60 | 20.81 | 22.55
Die dimensions | 0.00 | 0.00 | 0.00 | 27.86 | 26.17 | 26.99
Ingredients | 6.67 | 6.67 | 6.67 | 16.78 | 16.78 | 16.78
Moisture in the extrusion trials | 0.00 | 0.00 | 0.00 | 29.25 | 28.86 | 29.05
Input variables | 0.00 | 0.00 | 0.00 | 12.75 | 12.75 | 12.75
Response variables | 0.00 | 0.00 | 0.00 | 4.03 | 4.03 | 4.03
Feed characterisation | 0.00 | 0.00 | 0.00 | 0.69 | 0.67 | 0.68
Quality characterisation | 0.00 | 0.00 | 0.00 | 0.67 | 0.67 | 0.67
Other characterisation | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Bölücü, N.; Pennells, J.; Yang, H.; Rybinski, M.; Wan, S. An Evaluation of Large Language Models for Supplementing a Food Extrusion Dataset. Foods 2025, 14, 1355. https://doi.org/10.3390/foods14081355