Article

Advancing Multimodal Large Language Models: Optimizing Prompt Engineering Strategies for Enhanced Performance

1 Department of Metabiohealth, Sungkyunkwan University, Suwon 16419, Republic of Korea
2 Department of Smart Automotive, Soonchunhyang University, Asan 31538, Republic of Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(7), 3992; https://doi.org/10.3390/app15073992
Submission received: 1 March 2025 / Revised: 30 March 2025 / Accepted: 1 April 2025 / Published: 4 April 2025

Abstract

This study investigates prompt engineering (PE) strategies to mitigate hallucination, a key limitation of multimodal large language models (MLLMs). To address this issue, we explore five prominent multimodal PE techniques: in-context learning (ICL), chain of thought (CoT), step-by-step reasoning (SSR), tree of thought (ToT), and retrieval-augmented generation (RAG). These techniques are systematically applied across multiple datasets with distinct domains and characteristics. Based on the empirical findings, we propose the greedy prompt engineering strategy (Greedy PES), a methodology for optimizing PE application across different datasets and MLLM models. To evaluate user satisfaction with MLLM-generated responses, we adopt a comprehensive set of evaluation metrics, including BLEU, ROUGE, METEOR, S-BERT, MoverScore, and CIDEr. A weighted aggregate evaluation score is introduced to provide a holistic assessment of model performance under varying conditions. Experimental results demonstrate that the optimal prompt engineering strategy varies significantly depending on both dataset properties and the MLLM model used. Specifically, datasets categorized as general benefit the most from ICL, ToT, and RAG, whereas mathematical datasets perform optimally with ICL, SSR, and ToT. In scientific reasoning tasks, RAG and SSR emerge as the most effective strategies. Applying Greedy PES leads to a substantial improvement in performance across different multimodal tasks, achieving an average evaluation score enhancement of 184.3% for general image captioning, 90.3% for mathematical visual question answering (VQA), and 49.1% for science visual question answering (VQA) compared to conventional approaches. These findings highlight the effectiveness of structured PE strategies in optimizing MLLM performance and provide a robust framework for PE-driven model enhancement across diverse multimodal applications.

1. Introduction

1.1. Multimodal Large Language Models: Foundations and Architectures

With the recent advancements in artificial intelligence (AI), large language models (LLMs) have demonstrated remarkable performance across various natural language processing (NLP) tasks. However, human language comprehension extends beyond mere text-based processing; it integrates multiple sensory modalities, including vision, hearing, and contextual reasoning, to achieve a more holistic understanding. To overcome this limitation, multimodal large language models (MLLMs) have emerged as a new paradigm. These models are designed to process and interpret not only textual data but also diverse input modalities such as images, audio, and video, thereby enabling a more comprehensive and context-aware understanding of information.
MLLMs are designed to process not only textual data but also various modalities such as images, videos, and audio. While conventional LLMs are trained exclusively on textual data, MLLMs integrate visual and auditory information, allowing them to leverage richer contextual cues. This multimodal capability extends beyond traditional language comprehension, enabling more sophisticated decision making and reasoning by combining linguistic and perceptual information. However, the incorporation of multimodal data introduces new technical challenges, particularly concerning the architecture and training methodologies of MLLMs. These challenges arise from the need to effectively align, fuse, and interpret multiple modalities within a unified framework, necessitating advancements in model design and optimization strategies.
The architecture of MLLMs is primarily composed of three key components: (1) pre-trained modality encoder, (2) pre-trained large language model (LLM), and (3) cross-modality transformer [1].
The pre-trained modality encoder is responsible for processing and extracting features from non-textual data, such as images, audio, and video. A prominent example of such an encoder is CLIP (Contrastive Language–Image Pretraining) [2], which plays a crucial role in learning the relationships between vision and language. These encoders enable MLLMs to bridge the gap between different modalities by effectively mapping non-textual inputs into a representational space that aligns with linguistic information.
The pre-trained LLM serves as the core text processing component of MLLMs. It utilizes existing LLM architectures, such as GPT [3,4,5], Llama [6,7], Gemini [8], and Mistral [9], to generate refined responses by integrating textual information with extracted multimodal features. These LLMs act as the reasoning engine of MLLMs, enabling context-aware and semantically coherent responses by leveraging both linguistic and non-linguistic information.
The cross-modality transformer facilitates the effective fusion of features extracted from non-textual data with the linguistic representations processed by the LLM. This component is essential for aligning, integrating, and contextualizing multimodal information, allowing MLLMs to learn semantic relationships across different modalities. By incorporating multimodal reasoning capabilities, the cross-modality transformer enables MLLMs to generate more accurate, context-aware, and semantically enriched outputs across diverse multimodal tasks.
To optimize the performance of MLLMs, various training strategies, including pretraining, instruction tuning, and alignment tuning, are employed [1].
Pretraining serves as the foundational phase, where the model learns fundamental representation learning by leveraging large-scale multimodal datasets. During this stage, MLLMs are trained on image–text, audio–text, and other modality–text combinations, allowing them to understand and capture the relationships between different modalities. This step is crucial for enabling MLLMs to process and integrate information from diverse sources effectively. Following pretraining, instruction tuning is applied to enhance the model’s ability to generate task-specific responses. This process fine-tunes the MLLM to align with user prompts, ensuring that the model can produce outputs that are more coherent, relevant, and tailored to specific tasks. By learning from structured instructions, MLLMs become more adept at following user queries and delivering accurate and context-aware responses. To further refine the quality, reliability, and trustworthiness of the model’s outputs, alignment tuning is incorporated. This involves techniques such as reinforcement learning from human feedback (RLHF) [10], which adjusts the model’s responses to better reflect human preferences and ethical considerations. In particular, RLHF plays a critical role in reducing hallucination in large language models (LLMs) [11]. Alignment tuning plays a vital role in mitigating hallucinations and biases, ensuring that MLLMs produce factually accurate and contextually appropriate outputs. By integrating these training methods, MLLMs can achieve improved multimodal understanding and enhanced user interaction capabilities.

1.2. Technical Challenges and Hallucination in MLLMs

Despite the powerful capabilities of MLLMs enabled by their architectural design and training methodologies, several performance limitations remain. One of the most critical challenges is hallucination, which refers to instances where the model generates responses that do not accurately correspond to the actual visual information [12]. This phenomenon occurs when MLLMs produce information that is not present in the training data or misinterpret visual content, leading to inaccurate or misleading outputs. Hallucination is particularly problematic in tasks such as image captioning, object recognition, and scene understanding, where precise alignment between textual descriptions and visual data is crucial. Recent studies [13] have highlighted the risks associated with semantic gaps and misalignment between different modalities in MLLMs. These issues arise when the textual and visual components of the model fail to integrate effectively, leading to inconsistencies in generated responses. To address this, it is essential to develop effective modality alignment techniques that ensure a coherent and accurate representation of multimodal data. Furthermore, improper alignment strategies can lead to unnecessary increases in model parameters without guaranteeing performance improvements, underscoring the need for careful selection of alignment methods to optimize both efficiency and accuracy in MLLMs.

1.3. Prompt Engineering for Enhancing MLLM Performance

To mitigate the hallucination problem and enhance the performance of MLLMs, various prompt engineering (PE) techniques have been proposed, similar to those developed for LLMs [14,15,16]. However, unlike LLMs, which rely solely on textual inputs, MLLMs process visual content in addition to text. As a result, strategic prompt design must go beyond simple text-based prompting and consider alignment with visual information to ensure coherence and accuracy in multimodal reasoning.
First, in-context learning (ICL) [17] requires providing relevant examples within a given multimodal image–text pair context to enable the model to generate appropriate responses. Chain of thought (CoT) [18] should guide the model to solve complex reasoning tasks by leveraging sequential textual explanations based on image analysis. Similarly, step-by-step reasoning (SSR) [19] encourages the model to perform spatial and stepwise visual analysis, ensuring a structured reasoning process. Tree of thought (ToT) [19] extends this concept by considering multiple cognitive pathways derived from the image, allowing the model to select the most reliable response based on different analytical perspectives. On the other hand, retrieval-augmented generation (RAG) [20] enhances multimodal understanding by retrieving external knowledge related to the given image, enabling the model to generate evidence-based responses even when dealing with previously unseen information. In summary, prompt engineering in MLLMs must evolve beyond simple text-based design to strategically integrate visual information, ensuring that the model effectively utilizes multimodal inputs to improve response accuracy and reliability.
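To make these strategies concrete, the following minimal sketch shows how each technique might be phrased as a reusable prompt template for an MLLM. The wording and the helper function `build_prompt` are illustrative assumptions only; the prompts actually used in our experiments are listed in Table 17.

```python
# Illustrative prompt templates for the five multimodal PE strategies.
# The wording is an assumption for exposition; see Table 17 for the
# prompts used in the experiments.

PE_TEMPLATES = {
    "ICL": (
        "Here is an example image with its caption: {example_caption}\n"
        "Now describe the new image in the same style."
    ),
    "CoT": (
        "Look at the image and answer the question: {question}\n"
        "Explain your reasoning step by step before giving the final answer."
    ),
    "SSR": (
        "Analyze the image step by step: first list the objects, then their "
        "spatial relationships, and finally answer: {question}"
    ),
    "ToT": (
        "Consider three different interpretations of the image, evaluate each "
        "one, and answer {question} using the most reliable interpretation."
    ),
    "RAG": (
        "Reference caption retrieved from the database: {retrieved_caption}\n"
        "Using this context, describe the target image."
    ),
}

def build_prompt(strategy: str, **fields: str) -> str:
    """Fill the chosen template with task-specific fields."""
    return PE_TEMPLATES[strategy].format(**fields)
```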
In fact, existing prompt engineering research aimed at mitigating hallucination has predominantly focused on LLMs, while the systematic optimization of PE strategies for multimodal data remains underdeveloped. However, hallucination phenomena arising specifically from multimodal inputs present challenges that cannot be fully addressed by conventional approaches alone. Therefore, the development of prompt engineering techniques tailored to multimodal data is essential for generating accurate and contextually grounded responses.
This study aims to develop an optimal prompt engineering strategy that maximizes user satisfaction and response accuracy in practical MLLM service deployment while minimizing computational resource requirements. Specifically, instead of performing additional fine-tuning on pre-trained modality encoders, pre-trained LLMs, or cross-modality transformers, we explore how multimodal-specific prompt engineering techniques alone can enhance MLLM performance.
To achieve this, we systematically investigate the application of RAG, CoT, ICL, SSR, and ToT as effective prompt engineering strategies. In particular, we evaluate the impact of these techniques on state-of-the-art MLLMs, including Phi [21,22], Llama [23], Pixtral [24], and Qwen [25,26], providing empirical insights into their effectiveness across different architectures.
To ensure that our evaluation closely aligns with user satisfaction, we employ a diverse set of performance metrics, including bilingual evaluation understudy (BLEU), recall-oriented understudy for gisting evaluation (ROUGE), metric for evaluation of translation with explicit ordering (METEOR), sentence-bidirectional encoder representations from transformers (S-BERT), MoverScore, and consensus-based image description evaluation (CIDEr). These metrics collectively assess the quality, fluency, and relevance of multimodal-generated responses.
For benchmark datasets, we utilize MathVista [27], CVBench [28], ScienceQA [29], nocaps [30], MSCOCO [31], and Flickr30k [32]. These datasets span a variety of domains and multimodal tasks, allowing for a comprehensive analysis of prompt engineering strategies in multimodal natural language generation. Based on the results, we propose a greedy prompt engineering strategy (Greedy PES) that optimally selects the most effective prompt engineering technique for each dataset and MLLM model, maximizing response quality and reliability.
The proposed Greedy PES method enables the identification of the optimal MLLM model and the most effective PE combination for each dataset, based on the exhaustive evaluation of all possible PE configurations. Furthermore, by employing a weighted metric computation scheme that adaptively reflects the characteristics of each dataset and user preferences, this approach achieves a closer alignment with user satisfaction compared to conventional methods.

2. Related Works

MLLMs [21,22,23,24,25,26], which aim to generate text by processing diverse multimodal inputs through LLMs, have been the focus of extensive research in terms of architectural advancements [1,15] and training methodologies [1]. Despite continuous improvements in performance, MLLMs still face significant challenges, particularly in regard to generating hallucinated responses that fail to accurately reflect the provided visual or contextual inputs [12]. To mitigate these limitations, prompt engineering (PE) strategies have emerged as a promising solution to enhance response quality and reliability [14,15,16]. Additionally, recent research has explored user experience optimization methods specifically tailored for MLLMs, employing diverse evaluation frameworks to assess model effectiveness [11,33,34,35].
In terms of architectural studies, Fu et al. [1] categorized MLLM architectures into three core components: pre-trained modality encoders, pre-trained LLMs, and cross-modality transformers. This structure allows MLLMs to integrate multimodal information efficiently while leveraging LLMs’ textual reasoning capabilities. In contrast, Zhang et al. [15] proposed a more fine-grained MM-LLM framework, decomposing it into modality encoders, input projectors, LLM backbones, output projectors, and modality generators. This expanded framework extends beyond text generation, encompassing multimodal content generation, including images and audio, thus broadening the application scope of MLLMs.
In contrast to architectural research, studies on MLLM training methodologies have primarily focused on pretraining, instruction tuning, and alignment tuning [1]. Pretraining aims to align different modalities while embedding multimodal world knowledge into the model [1]. Instruction tuning is designed to teach MLLMs how to follow user instructions and effectively perform assigned tasks. Meanwhile, alignment tuning ensures that MLLMs are aligned with specific human preferences, improving their ability to generate responses that are both reliable and contextually appropriate.
Despite these advanced training strategies, achieving perfect alignment between different modality encoders remains a fundamental challenge. This misalignment issue often leads to multimodal hallucination, where the content generated by an MLLM does not accurately correspond to the provided visual input. When combined with an LLM, multimodal hallucination can manifest in three distinct types [12]: existence hallucination, attribute hallucination, and relationship hallucination. Existence hallucination is the most fundamental form of hallucination, where the model incorrectly asserts the presence of objects that do not actually exist in the image. Attribute hallucination occurs when the model misdescribes the attributes of an object, such as failing to correctly identify the color of a dog. This type of hallucination is often correlated with existence hallucination, as attribute descriptions should be grounded in the actual objects present in the image. Relationship hallucination is a more complex phenomenon that extends beyond the existence of objects. It refers to incorrect descriptions of relationships between objects, such as relative positioning or interactions, leading to misinterpretations of the scene’s contextual meaning. These hallucination challenges highlight the inherent difficulties in aligning multimodal representations, necessitating effective prompt engineering strategies to mitigate the issue and improve response reliability.
To help address the hallucination problem in LLMs, various prompt engineering techniques such as ICL [17], CoT [18], SSR [19], ToT [19], and RAG [20] have been introduced. These approaches aim to guide the model in structured reasoning, contextual retrieval, and incremental step-wise reasoning, thereby improving response accuracy. However, applying these techniques directly to MLLMs presents performance limitations, as MLLMs require strategic prompt design that accounts for alignment with visual information, rather than relying solely on text-based prompts. To overcome these challenges, recent studies have proposed multimodal-specific prompt engineering techniques. Yin et al. [14,36,37] introduced multimodal ICL (M-ICL), multimodal CoT (M-CoT), and LLM-aided visual reasoning (LAVR) to mitigate multimodal hallucination by integrating visual and textual reasoning. Similarly, He et al. [38] proposed prompt optimization for enhancing multimodal reasoning (POEM), a visual analysis system that optimizes prompts to enhance multimodal reasoning capabilities in large language models. Expanding on this, Zhang et al. [15] introduced Multimodal-CoT, a framework that extends chain-of-thought reasoning to process multimodal inputs (text and images), thereby improving joint linguistic and visual inference. Additionally, Wu et al. [16] explored visual prompting techniques for MLLMs, categorizing different types of visual prompts and investigating their impact on compositional reasoning, visual grounding, and object reference within multimodal contexts.
In parallel, MLLM evaluation methodologies have also been a subject of extensive research. Xu et al. [33] provided a comprehensive review of MM-LLM efficiency improvement techniques, introducing various benchmarks for measuring multimodal effectiveness. Li et al. [34] highlighted the limitations of existing evaluation methods, noting that most benchmarks require fixed answers, which constrains the evaluation of creative responses. Additionally, they emphasized the lack of effective hallucination assessment, the inadequate evaluation of multimodal knowledge learning, and the absence of causality understanding metrics. To address these shortcomings, they proposed adopting user-centric evaluation, multimodal expansion evaluation, and interactive and dynamic evaluation methods. Similarly, Huang et al. [11] systematized MLLM evaluation concepts, categorizing evaluation approaches based on what to evaluate (evaluation objectives), how to evaluate (evaluation methodologies), and where to evaluate (evaluation scope). Furthermore, Xie et al. [35] proposed a standardized evaluation framework that incorporates accuracy-based metrics (BLEU, ROUGE, CIDEr, and MoverScore based on Wasserstein distance), as well as human evaluation, to ensure a more holistic assessment of MLLM performance.
The key contributions of this paper are as follows:
  • Comprehensive Performance Analysis: We present an extensive performance evaluation of various MLLMs, along with prompt engineering techniques designed to enhance their capabilities. This analysis is conducted across multiple datasets and assessed using a diverse set of performance metrics.
  • Weighted Aggregate Performance Metric: We introduce a weighted aggregate performance metric that integrates multiple evaluation metrics, such as BLEU, ROUGE, METEOR, S-BERT, MoverScore, and CIDEr, to provide a holistic assessment of prompt engineering strategies.
  • Optimization Strategy for Prompt Engineering in MLLMs: We investigate the impact of prompt engineering strategies on different datasets and MLLM architectures, leading to the formulation of an optimized strategy tailored to dataset characteristics and MLLM models. Additionally, we propose a greedy prompt engineering strategy (Greedy PES) to further refine the application of prompt engineering for improved model performance.
The structure of this paper is as follows: Section 3 provides a detailed explanation of the MLLM models used in this study. Section 4 introduces the evaluation metrics employed for performance measurement, while Section 5 describes the benchmark datasets used for experimentation. Section 6 presents the proposed Greedy PES for optimizing MLLM performance. Section 7 discusses the experimental results, including the impact of different parameters, the effect of Greedy PES, the derived MLLM optimization strategies, and further insights. Section 8 highlights featured applications and deployment considerations. Finally, Section 9 concludes the paper.

3. System Models

In this study, we conduct experiments using four state-of-the-art MLLM models: Phi-3.5, Llama-3.2, Pixtral, and Qwen-2.5. These models have been recently introduced, are widely adopted, and exhibit strong performance while maintaining a parameter size of approximately 10 billion. A summary of the technical specifications of these models is presented in Table 1.

3.1. Phi

Phi-3.5-Vision-Instruct is a lightweight multimodal model developed by Microsoft in October 2024. It is designed to perform a wide range of vision–language tasks, including general image understanding, optical character recognition (OCR), chart and table comprehension, multi-image comparison, and video clip summarization [39].
The model architecture consists of a CLIP ViT-L/14-based image encoder and a Phi-3.5-mini-based pre-trained LLM. It employs rotary position embedding (RoPE) for positional encoding and utilizes the SwiGLU activation function [40]. The model supports a maximum context length of 128K tokens and has been pre-trained on approximately 0.5T tokens from image–text datasets. Additionally, supervised fine-tuning (SFT) and direct preference optimization (DPO) were applied for post-training.

3.2. Llama

Llama-3.2-11B-Vision-Instruct is an 11-billion-parameter multimodal language model developed by Meta Platforms in September 2024. It is designed to process both text and images simultaneously, enabling multimodal conversations and visual reasoning tasks. Built upon the Llama 3.1 architecture, this model integrates visual information to support various applications across different domains [7].
The architecture consists of a CLIP-based image encoder and a Llama 3.1-based pre-trained LLM. It employs RoPE for positional encoding and utilizes the SwiGLU activation function [40]. The model supports a maximum context length of 128K tokens and has been pre-trained on approximately 15.6T tokens comprising image–text datasets. For post-training, it has undergone SFT and DPO.

3.3. Pixtral

Pixtral-12B is a 12-billion-parameter multimodal language model developed and released by Mistral AI in October 2024. It is designed to understand both images and text simultaneously, enabling advanced multimodal reasoning and language generation [24].
The architecture comprises a CLIPA-based image encoder [41], a Mistral Nemo 12B-based pre-trained LLM, and a Mistral Nemo 12B-based multimodal decoder. It employs RoPE-2D for positional encoding and utilizes the SwiGLU activation function [40]. The model supports a maximum context length of 128K tokens and has been pre-trained on billions of image–text pairs. For post-training, SFT and DPO were applied.

3.4. Qwen

Qwen2-VL-7B-Instruct is a 7-billion-parameter multimodal language model developed by Alibaba Group in 2024. It is designed to function as a visual agent, enabling advanced multimodal understanding and reasoning [42].
The architecture consists of a 600-million-parameter ViT-based encoder [43] and a Qwen2-7B-based pre-trained LLM. It employs multimodal rotary position embedding (M-RoPE) for positional encoding and utilizes the SwiGLU activation function [40]. The model supports a maximum context length of 128K tokens and has been pre-trained on approximately 7 trillion tokens of image–text data.
For post-training, SFT and DPO were applied to refine the model’s output. Additionally, to extend the context window, yet another RoPE extension method (YARN) [44] and dual chunk attention (DCA) [45] techniques were employed.

4. Performance Metric

This paper aims to analyze the performance of MLLMs from multiple perspectives by considering various evaluation metrics that effectively reflect user satisfaction with model responses. To achieve this, we employ a diverse set of widely used and well-established evaluation metrics, each with its own unique characteristics. Specifically, we utilize BLEU, ROUGE, METEOR, S-BERT, MoverScore, and CIDEr as our primary evaluation criteria.

4.1. BLEU

BLEU [46] is one of the most widely used metrics for evaluating the performance of machine translation and natural language generation models. It measures the similarity between a generated sentence and a reference sentence based on n-gram overlap. BLEU typically calculates precision from unigrams (1-grams) up to 4-grams and applies a brevity penalty (BP) to address the issue of shorter sentences receiving disproportionately high scores. The BLEU score is computed using the following formula:
$$\mathrm{BLEU} = \mathrm{BP} \cdot \exp\left( \sum_{n=1}^{N} w_n \log p_n \right)$$
where $p_n$ represents the n-gram precision value, and $w_n$ denotes the weight, which is typically set as $w_n = \frac{1}{N}$. The brevity penalty (BP) is introduced to penalize excessively short generated sequences and is formally defined by the following equation:
$$\mathrm{BP} = \begin{cases} 1, & \text{if } c > r \\ e^{(1 - r/c)}, & \text{if } c \le r \end{cases}$$
where c represents the length of the generated sentence, while r denotes the length of the reference sentence.
BLEU allows for quantitative performance comparison. However, it has limitations in capturing contextual meaning, as it does not account for synonyms or the flexibility of sentence structures.
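As a worked illustration of the formula above, the minimal sketch below computes single-reference BLEU from scratch. Production evaluation would normally rely on an established implementation (e.g., the Hugging Face Evaluate package), which additionally handles smoothing and multiple references; the example sentences are hypothetical.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Return all contiguous n-grams of a token list as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Single-reference BLEU following the formula above; candidate and
    reference are token lists. Smoothing and multi-reference support are
    intentionally omitted in this sketch."""
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # clipped matches give the modified n-gram precision p_n
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = sum(cand_counts.values())
        p_n = overlap / total if total else 0.0
        if p_n == 0:                 # a zero precision collapses the geometric mean
            return 0.0
        log_precisions.append(math.log(p_n))
    c, r = len(candidate), len(reference)
    bp = 1.0 if c > r else math.exp(1 - r / c)          # brevity penalty
    return bp * math.exp(sum(log_precisions) / max_n)   # uniform weights w_n = 1/N

print(bleu("a dog is running in a field".split(),
           "a dog is running in the park".split()))
```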

4.2. ROUGE

ROUGE score [47] is primarily used to measure the similarity between generated text and reference text in summarization tasks. It includes several variants, such as ROUGE-N, ROUGE-L, and ROUGE-W. ROUGE-N is based on n-gram overlap, while ROUGE-L relies on the longest common subsequence (LCS). ROUGE-W is a weighted version of ROUGE-L, assigning greater importance to longer common subsequences.
In this study, we utilized the Evaluate library’s ROUGE module to compute the ROUGE scores, specifically using ROUGE-1 as the primary evaluation metric.
The formula for ROUGE-N is as follows:
$$\text{ROUGE-N} = \frac{\sum_{\text{n-gram} \in \mathrm{Ref}} \min\big(\mathrm{Count}_{\mathrm{Ref}}(\text{n-gram}),\, \mathrm{Count}_{\mathrm{Can}}(\text{n-gram})\big)}{\sum_{\text{n-gram} \in \mathrm{Ref}} \mathrm{Count}_{\mathrm{Ref}}(\text{n-gram})}$$
where an n-gram is a sequence of n consecutive words, $\mathrm{Count}_{\mathrm{Ref}}(\text{n-gram})$ is the frequency of the n-gram in the reference text, and $\mathrm{Count}_{\mathrm{Can}}(\text{n-gram})$ is the frequency of the n-gram in the candidate text.
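Since the ROUGE scores in this study were obtained with the Evaluate library, the snippet below shows a minimal usage sketch; the example sentences are hypothetical, and the exact configuration used in the experiments may differ. By default the module reports F-measures, from which ROUGE-1 is read. METEOR and BLEU can be loaded analogously via evaluate.load("meteor") and evaluate.load("bleu").

```python
# Minimal ROUGE-1 computation with the Hugging Face Evaluate library.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["a brown dog is running across a grassy field"]   # model output (hypothetical)
references = ["a dog runs through the grass"]                    # ground-truth caption (hypothetical)

scores = rouge.compute(predictions=predictions, references=references)
print(scores["rouge1"])   # ROUGE-1, the primary variant used in this study
```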

4.3. METEOR

The METEOR metric differs from BLEU in that it considers not only simple n-gram matches but also synonymy, stemming, and word order alignment. METEOR utilizes the harmonic mean of unigram precision and recall, making it more closely correlated with human evaluation compared to BLEU [48]. The corresponding formula is as follows:
$$F_{\mathrm{mean}} = \frac{P \cdot R}{\alpha P + (1 - \alpha) R}$$
where $P$ is precision, $R$ is recall, and $\alpha$ is a weighting factor, normally set to 0.9.

4.4. Sentence-BERT

S-BERT is a model designed to measure semantic similarity at the sentence level. In this study, sentence embeddings for both reference and generated sentences are obtained using a BERT-based model, and the semantic similarity between sentences is computed based on these embeddings [49].
$$E_X = \text{S-BERT}(X), \quad E_Y = \text{S-BERT}(Y),$$
where $E_X$ and $E_Y$ are dense vector representations of the reference sentence $X$ and the generated sentence $Y$, respectively. S-BERT is a fine-tuned BERT-based model that outputs sentence-level embeddings.
The similarity between sentences is calculated using the following cosine similarity formula:
$$\mathrm{sim}(X, Y) = \cos(E_X, E_Y) = \frac{E_X \cdot E_Y}{\lVert E_X \rVert\, \lVert E_Y \rVert}.$$
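The S-BERT similarity can be computed with the sentence-transformers package, as in the minimal sketch below. The specific checkpoint ("all-MiniLM-L6-v2") and the example sentences are illustrative assumptions rather than the exact configuration used in our experiments.

```python
# Sentence-level cosine similarity with an S-BERT model.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed checkpoint

reference = "a dog runs through the grass"
generated = "a brown dog is running across a grassy field"

# Encode both sentences and compute cosine similarity of the embeddings.
emb_x, emb_y = model.encode([reference, generated], convert_to_tensor=True)
similarity = util.cos_sim(emb_x, emb_y).item()    # value in [-1, 1]
print(f"S-BERT similarity: {similarity:.3f}")
```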

4.5. MoverScore

MoverScore [50] is a metric designed to measure semantic similarity between sentences more accurately by combining word mover’s distance (WMD) with word embeddings. The corresponding formula is as follows:
$$\mathrm{MoverScore} = 1 - \frac{\sum_{i=1}^{n} \sum_{j=1}^{m} T_{i,j} \cdot \big(1 - \cos(w_i, w_j)\big)}{\sum_{i=1}^{n} \sum_{j=1}^{m} T_{i,j}}$$
where $w_i$ and $w_j$ denote the constituent words of the reference sentence $X$ and the generated sentence $Y$, respectively; $\cos(w_i, w_j)$ is the cosine similarity between the word embeddings of $w_i$ and $w_j$; and $T_{i,j}$ is the optimal transport matrix determining how much of the embedding mass of $w_i$ should be moved to $w_j$.

4.6. CIDEr

CIDEr [51] measures the similarity between the generated sentence and reference sentence using a term frequency-inverse document frequency (TF-IDF) weighted n-gram matching approach. It emphasizes informative content (rare but meaningful words) while reducing the influence of common, less informative words. The CIDEr formula is as follows:
$$\mathrm{sim}(X, Y) = \cos\big(\mathbf{w}(X), \mathbf{w}(Y)\big) = \frac{\sum_{g} w_g(X) \cdot w_g(Y)}{\lVert \mathbf{w}(X) \rVert\, \lVert \mathbf{w}(Y) \rVert}$$
where the TF-IDF weight of an n-gram g for each reference sentence X and generated sentence Y is defined as follows:
$$w_g(X) = h_g(X) \cdot \mathrm{IDF}(g), \quad w_g(Y) = h_g(Y) \cdot \mathrm{IDF}(g)$$
where $h_g(X)$ and $h_g(Y)$ represent the term frequency (TF) of the n-gram $g$ in the reference and generated sentences, respectively. $\mathrm{IDF}(g)$ is the inverse document frequency of $g$, computed as:
$$\mathrm{IDF}(g) = \log\left( \frac{N}{\sum_{d \in D} \mathbb{I}[g \in d]} \right)$$
where $N$ is the total number of captions in the corpus, $D$ represents the set of all reference captions, and $\mathbb{I}[g \in d]$ is an indicator function equal to 1 if $g$ appears in document $d$, and 0 otherwise.
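The TF-IDF weighting at the heart of CIDEr can be illustrated with the minimal sketch below, which computes the cosine similarity of TF-IDF weighted unigram vectors against a single reference. The full CIDEr metric additionally averages over n-gram orders n = 1–4 and over all references and applies further scaling (and, in the CIDEr-D variant, a length-based penalty), which this sketch omits; the corpus and sentences are hypothetical.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def idf_table(reference_corpus, n=1):
    """IDF(g) = log(N / number of reference captions containing g)."""
    N = len(reference_corpus)
    doc_freq = Counter()
    for caption in reference_corpus:
        doc_freq.update(set(ngrams(caption.split(), n)))
    return {g: math.log(N / df) for g, df in doc_freq.items()}

def tfidf(sentence, idf, n=1):
    """TF-IDF weighted n-gram vector: w_g = h_g * IDF(g)."""
    tf = Counter(ngrams(sentence.split(), n))
    return {g: count * idf.get(g, 0.0) for g, count in tf.items()}

def cider_n(candidate, reference, idf, n=1):
    """Cosine similarity of TF-IDF n-gram vectors for one n and one reference."""
    wx, wy = tfidf(reference, idf, n), tfidf(candidate, idf, n)
    dot = sum(wx[g] * wy.get(g, 0.0) for g in wx)
    norm = math.sqrt(sum(v * v for v in wx.values())) * \
           math.sqrt(sum(v * v for v in wy.values()))
    return dot / norm if norm else 0.0

corpus = ["a dog runs through the grass", "two people ride bikes on a street"]
idf = idf_table(corpus)
print(cider_n("a brown dog runs on the grass", corpus[0], idf))
```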

5. Dataset

In this study, we selected MSCOCO, Flickr30k, nocaps, and CVBench as benchmark datasets for the general natural language understanding category; ScienceQA for the scientific reasoning category; and MathVista for the mathematical reasoning category. These datasets were chosen to evaluate a wide range of language capabilities, ensuring a balanced representation across general natural language understanding, scientific reasoning, and mathematical reasoning. A comparative analysis of these datasets is presented in Table 2.
The MSCOCO 2014 5K Test Image-Text Retrieval dataset is utilized for evaluating image–text retrieval and matching performance. It consists of a total of 5000 test samples and is used to assess a model’s ability to retrieve appropriate textual descriptions for a given image or to find images corresponding to a given text query [31].
The Flickr30k dataset is designed for image captioning research, focusing on learning and evaluating the relationship between images and textual descriptions. It comprises 31,800 test samples and is widely used to evaluate models that generate natural language descriptions of images [32].
The nocaps dataset is specifically constructed for image captioning performance evaluation, particularly in scenarios where the images contain objects or scenes that are challenging for conventional captioning models. The dataset includes 4500 samples in the validation set and 10,600 samples in the test set [30].
The ScienceQA dataset is designed for evaluating scientific question answering (QA) models and contains a diverse set of scientific questions. The dataset consists of 12,700 samples in the training set, 4240 samples in the validation set, and 4240 samples in the test set. It focuses on assessing a model’s scientific knowledge and reasoning capabilities [29].
The MathVista dataset is constructed to evaluate mathematical visual question answering (Math-VQA) models by integrating mathematical concepts with visual information. It is used to test a model’s ability to perform mathematical reasoning and visual interpretation. The dataset includes 5140 test samples and provides an additional 1000-sample Test Mini set for smaller-scale evaluations [27].
The CVBench dataset serves as a computer vision benchmark (CV-Bench) for visual question answering (VQA) tasks, measuring model performance in visual understanding and question answering accuracy. The dataset includes 2640 test samples and is used to evaluate a model’s ability to process visual information and generate correct responses to image-based questions [28].

6. Greedy Prompt Engineering Strategy

This section describes the greedy prompt engineering strategy (Greedy PES), which is designed to identify and apply optimal prompt engineering techniques for different MLLM deployment environments, including the various MLLM models and benchmark datasets discussed in the previous sections.
In addition, the RAG approach was extended by integrating it with CoT, ToT, and SSR, whereby external information is retrieved and reformulated based on each respective reasoning strategy. These variants are denoted as R(C), R(T), and R(S), respectively.
The greedy prompt engineering strategy (Greedy PES) aims to determine the optimal combination of MLLM models and prompt engineering techniques for each dataset by identifying the highest achievable performance across all possible prompt engineering (PE) combinations. To formalize this, let $d$ represent a dataset, $p$ a prompt engineering technique, $e$ an evaluation metric, and $m$ an MLLM model. The evaluation score derived from these parameters is denoted as $S_{d,e}(m, p)$. Furthermore, the weight assigned to each evaluation metric is defined as $w_{d,e}$, which accounts for the varying dynamic ranges of different evaluation metrics to prevent imbalance when aggregating scores. Additionally, these weights reflect the relative importance of each metric in assessing model performance.
The applied prompt engineering techniques are represented using the following abbreviations:
B: base, I: ICL, C: CoT, S: SSR, T: ToT; R(B): basic RAG, R(C): CoT-based RAG, R(S): SSR-based RAG, R(T): ToT-based RAG.
Then, the objective is to identify the MLLM model $m$ and prompt engineering technique $p$ that maximize the weighted aggregate of the evaluation scores $S_{d,e}(m, p)$ across multiple evaluation metrics. This can be formulated as the following optimization problem:
$$m^{*}, p^{*} = \arg\max_{m,\, p} \left\{ \sum_{e \in E} w_{d,e} \cdot S_{d,e}(m, p) \right\},$$
subject to
$$d \in \{\text{MSCOCO}, \text{Flickr30k}, \text{nocaps}, \text{ScienceQA}, \text{MathVista}, \text{CVBench}\},$$
$$p \in \{B, I, C, S, T, R(B), R(C), R(S), R(T)\},$$
$$m \in \{\text{Llama}, \text{Pixtral}, \text{Phi}, \text{Qwen}\},$$
$$E = \{\text{BLEU}, \text{ROUGE}, \text{METEOR}, \text{S-BERT}, \text{Mover}, \text{CIDEr}\}.$$
The optimal MLLM model $m^{*}$ and the optimal prompt engineering technique $p^{*}$ may vary depending on the dataset $d$.
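A minimal sketch of this exhaustive search is given below, assuming the per-metric scores $S_{d,e}(m, p)$ and weights $w_{d,e}$ have already been computed and stored; the container names and data layout are illustrative assumptions.

```python
# Greedy PES: exhaustive search over (model, PE) pairs for one dataset.
from itertools import product

MODELS = ["Llama", "Pixtral", "Phi", "Qwen"]
PE = ["B", "I", "C", "S", "T", "R(B)", "R(C)", "R(S)", "R(T)"]
METRICS = ["BLEU", "ROUGE", "METEOR", "S-BERT", "Mover", "CIDEr"]

def greedy_pes(dataset, scores, weights):
    """Return the (model, PE) pair maximizing the weighted aggregate score.

    scores[(dataset, metric, model, pe)] -> float   # S_{d,e}(m, p)
    weights[(dataset, metric)]           -> float   # w_{d,e}
    """
    best_pair, best_value = None, float("-inf")
    for m, p in product(MODELS, PE):
        aggregate = sum(
            weights[(dataset, e)] * scores[(dataset, e, m, p)] for e in METRICS
        )
        if aggregate > best_value:
            best_pair, best_value = (m, p), aggregate
    return best_pair, best_value

# Example call (hypothetical, once `scores` and `weights` are populated):
# best_pair, best_value = greedy_pes("MSCOCO", scores, weights)
```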

7. Simulation Result

This section presents the experimental setup designed to validate the effectiveness of the proposed Greedy PES algorithm for optimizing MLLM performance, along with a detailed performance analysis across different benchmark datasets.
Table 3 presents the experimental setup used in this study. The selected MLLM models include Llama-3.2-11B, Phi-3.5-4.2B, Pixtral-12B, and Qwen2-VL-7B, and the performance of various PE strategies, including ICL, CoT, RAG, ToT, SSR, and their hybrid combinations, was analyzed. In Sections 7.1–7.6, performance is evaluated, compared, and analyzed across the six datasets based on the experimental setup in Table 3.
The responses for performance evaluation were generated using prompt formats derived from the corresponding PE strategies, with temperature = 0.1 and top-P = 0.9 applied as decoding parameters. For RAG, the prompt is automatically augmented with an image that exhibits high cosine similarity to the input image. Specifically, RAG employs a retrieval-augmented strategy to enhance multimodal reasoning. A subset of the dataset is pre-embedded using the CLIP [2] model to construct a retrieval database via ChromaDB. When the original image is given, it is encoded into a vector using the same CLIP model, and the most semantically similar image is retrieved based on cosine similarity. Prior to presenting the target image, the retrieved image and its caption are shown to provide relevant contextual knowledge and assist the model in generating more accurate responses.
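This retrieval step can be sketched as follows, assuming a CLIP checkpoint exposed through sentence-transformers and an in-memory ChromaDB collection configured for cosine similarity; the file paths, captions, and the exact CLIP variant are illustrative assumptions.

```python
import chromadb
from PIL import Image
from sentence_transformers import SentenceTransformer

# Hypothetical dataset subset and query image; replace with real paths/captions.
dataset_subset = [
    ("ref_images/0001.jpg", "A man riding a bicycle down a city street."),
    ("ref_images/0002.jpg", "A plate of pasta with tomato sauce on a table."),
]
query_image_path = "query.jpg"

clip = SentenceTransformer("clip-ViT-B-32")            # assumed CLIP variant
client = chromadb.Client()                             # in-memory retrieval database
collection = client.create_collection(
    "retrieval_db", metadata={"hnsw:space": "cosine"}  # cosine-similarity index
)

# Index the (image, caption) pairs with CLIP image embeddings.
for idx, (img_path, caption) in enumerate(dataset_subset):
    emb = clip.encode(Image.open(img_path)).tolist()
    collection.add(ids=[str(idx)], embeddings=[emb], documents=[caption])

# Retrieve the caption of the most similar reference image for the query image.
query_emb = clip.encode(Image.open(query_image_path)).tolist()
result = collection.query(query_embeddings=[query_emb], n_results=1)
retrieved_caption = result["documents"][0][0]
print(retrieved_caption)   # prepended to the prompt as contextual knowledge
```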
For performance analysis, inference was conducted by applying various prompt engineering techniques to the pretrained models, including Llama-3.2-11B, Phi-3.5-4.2B, Pixtral-12B, and Qwen2-VL-7B, utilizing NVIDIA H100 Tensor Core GPU computing resources.
Finally, Section 7.7 analyzes the best-performing PE strategy for each dataset and MLLM model to derive a PE optimization strategy and discuss insights for performance enhancement.

7.1. MSCOCO

Analyzing the baseline performance (B) in Table 4 and Table 5, it is evident that Qwen-2 achieves the highest performance across most evaluation metrics. This is followed by Pixtral, Llama 3.2, and Phi-3.5 in descending order of performance. Notably, Llama 3.2 exhibits the best results in semantic similarity metrics (S-BERT, MoverScore), suggesting that the generated captions are likely to be more semantically appropriate. Meanwhile, Phi-3.5 achieves a higher CIDEr score than Pixtral and Llama 3.2, although it records the lowest performance in other metrics.
However, when applying the Greedy PES, the optimal model m and optimal prompt engineering strategy p are found to be Phi-3.5 with the base RAG technique. This indicates that although Phi-3.5 initially exhibited the lowest baseline performance, it outperforms all other models when Greedy PES is applied. This underscores the significant impact of PE on MLLM performance. Additionally, it is noteworthy that Phi-3.5, despite being the smallest model at 4.2B parameters, achieves superior performance compared to larger models when optimized using Greedy PES. Furthermore, following Phi-3.5, the models rank in performance as Qwen-2, Pixtral, and Llama 3.2. Interestingly, Qwen-2, which had the lowest baseline performance, demonstrates the second-best performance under Greedy PES. The BLEU score improvement for Qwen-2 through Greedy PES is nearly tenfold, highlighting the effectiveness of prompt engineering optimization. On the other hand, for the CIDEr metric, the ToT technique proves to be the most effective, with Qwen-2 emerging as the best-performing model.

7.2. Flickr30k

Analyzing the baseline performance (B) in Table 6 and Table 7, it is evident that Qwen-2 achieves the highest performance across most evaluation metrics. In particular, Qwen-2 records the highest scores in BLEU, ROUGE, and CIDEr, indicating its superior baseline performance in caption generation. Meanwhile, Pixtral achieves the highest performance in METEOR and also records a high MoverScore, suggesting strong semantic similarity between generated and reference captions. Phi-3.5 demonstrates a higher CIDEr score than Pixtral and Llama 3.2, but it records the lowest performance in most other metrics. When applying the greedy prompt engineering strategy (Greedy PES), the optimal model m and optimal PE strategy p are found to be Qwen-2 with the ToT technique. Notably, this combination achieves the highest performance across METEOR, S-BERT, MoverScore, and CIDEr, further confirming its effectiveness in enhancing captioning performance. Additionally, Phi-3.5, despite being the smallest model with only 4.2B parameters, demonstrates comparable performance. This suggests that Phi-3.5 could be a resource-efficient alternative for general captioning tasks, particularly in hardware-constrained environments where computational efficiency is a key requirement.

7.3. nocaps

As observed in Table 8 and Table 9, the baseline performance (B) analysis reveals that Qwen-2 achieves the highest scores in BLEU, ROUGE, and METEOR, indicating that it possesses the strongest baseline performance in caption generation. In contrast, Pixtral outperforms the other models in MoverScore and CIDEr, while Llama 3.2 achieves the highest score in S-BERT, demonstrating its strength in semantic similarity evaluation.
When applying the Greedy PES, the optimal model m and optimal PE strategy p are found to be Qwen-2 with the ToT technique. However, it is noteworthy that Phi-3.5, despite being the smallest model, achieves the best performance in BLEU and ROUGE. Additionally, Phi-3.5 also demonstrates performance comparable to Qwen-2 across METEOR, S-BERT, and MoverScore, indicating its efficiency in multimodal captioning tasks despite its lower parameter count.

7.4. ScienceQA

As observed in Table 10 and Table 11, the baseline performance (B) analysis demonstrates that Qwen-2 achieves the highest performance across all evaluation metrics. Following Qwen-2, Phi-3.5, Llama 3.2, and Pixtral exhibit strong performance in descending order.
Upon applying the greedy prompt engineering strategy (Greedy PES), the optimal model m and optimal prompt engineering strategy p are identified as Phi-3.5 with the base RAG technique. Additionally, Phi-3.5 with the ICL and SSR combination also demonstrates strong performance, albeit with a marginal difference. This result highlights the importance of knowledge expansion and step-by-step reasoning techniques, such as base RAG, ICL, and SSR, in scientific question-answering tasks, where structured reasoning and contextual information retrieval are crucial for generating accurate responses.
Furthermore, after applying Greedy PES, the combination of Qwen-2 with SSR follows Phi-3.5 in terms of performance. Notably, Qwen-2 also exhibited strong performance in the baseline results, reinforcing its effectiveness in scientific-domain-specific response generation.

7.5. MathVista

As observed in Table 12 and Table 13, the baseline performance (B) analysis demonstrates that Phi-3.5 outperforms all models across all evaluation metrics, followed by Qwen-2, Llama 3.2, and Pixtral in descending order. This result indicates that Phi-3.5 and Qwen-2 exhibit strong mathematical problem-solving and reasoning capabilities, whereas Pixtral demonstrates relatively lower performance in this domain.
Upon applying the greedy prompt engineering strategy (Greedy PES), the optimal model m and optimal prompt engineering strategy p are identified as Qwen-2 with the ToT approach. However, it is also notable that Phi-3.5 with the ICL approach achieves the best performance in the S-BERT and MoverScore metrics. These findings confirm that even after applying prompt engineering techniques, Phi-3.5 and Qwen-2 maintain a performance advantage over other models in mathematical reasoning and problem-solving tasks. Additionally, ToT and ICL emerge as the most effective prompt engineering strategies for optimizing MLLM performance in mathematical domains.

7.6. CVBench

As observed in Table 14 and Table 15, the base performance (B) indicates that Qwen-2 demonstrates the highest overall performance, with Phi-3.5 also exhibiting comparable proficiency in understanding multimodal data. Specifically, Phi-3.5 achieves the highest scores in BLEU, ROUGE, and METEOR, while Qwen-2 records the best performance in S-BERT, MoverScore, and CIDEr. This suggests that responses generated by Qwen-2 are semantically more appropriate and natural compared to other models.
When applying the Greedy PES, the optimal model m and prompt engineering strategy p are determined to be Phi-3.5 combined with the ICL technique. Furthermore, since ICL consistently emerges as the most effective prompt engineering method across various evaluation metrics for other MLLM models, this indicates that ICL is particularly advantageous for datasets such as CVBench, which require a fundamental yet comprehensive understanding of both text and image-based inputs.

7.7. Performance Analysis and Discussion

This section provides a quantitative analysis of the best-performing MLLM, the best-performing MLLM with PES, and the degree of performance enhancement, based on the previously presented results across datasets, MLLM models, and evaluation metrics.
Table 16 presents the optimal prompt engineering strategies (PES) and MLLM models across various datasets and evaluation metrics, derived from the results in Tables 4–15. As observed in the results, the optimal PES varies significantly depending on the dataset and the chosen MLLM model. Generally, for general category datasets, ICL, ToT, and RAG are predominantly utilized. This trend can be attributed to the characteristics of multimodal data, where generating captions from recognized objects and input text requires in-context reasoning, multi-path inference, and knowledge expansion to deepen the relationship between objects and textual context. In contrast, for math-related datasets, ICL, SSR, and ToT are the primary techniques, while for science-related datasets, RAG and SSR are more frequently employed. The emphasis on SSR in the math and science domains compared to the general domain is notable, as solving mathematical and scientific problems inherently demands step-by-step reasoning, which is crucial for handling complex problem-solving tasks.
Additionally, while Qwen-2 consistently achieves the highest performance across most cases when no PES is applied, it is noteworthy that Phi-3.5 also emerges as a strong contender when Greedy PES is applied. More significantly, despite being the smallest model, Phi-3.5 exhibits substantial performance improvement when PES is applied, demonstrating the effectiveness of PE in enhancing MLLM performance. These findings suggest that Greedy PES has strong potential for MLLM model optimization, highlighting its applicability for further expansion and future advancements in multimodal AI research.
A more detailed analysis of each dataset is now presented to examine the optimal MLLM and PES combinations for different multimodal tasks.
The MSCOCO dataset is designed for image captioning, encompassing diverse scenes and objects. The optimal MLLM–PES combinations identified through Greedy PES are Phi-3.5 with RAG and Qwen-2 with ToT. The results indicate that Qwen-2 exhibits strong image captioning capabilities even without additional prompt engineering, suggesting that it is inherently well trained for general multimodal image–text alignment. In contrast, Phi-3.5, when integrated with RAG, demonstrates a more effective retrieval-based approach, allowing the model to extract relevant information from the image and generate high-quality captions.
Flickr30k focuses on understanding relationships between people and objects within an image to generate relevant captions. The optimal MLLM–PES combination is Qwen-2 with ToT, reinforcing the finding that Qwen-2 is a strong candidate for text generation in general multimodal datasets. The results further suggest that the ToT-based approach facilitates enhanced logical reasoning, allowing the model to establish deeper semantic connections between elements in the image, ultimately producing more contextually relevant captions.
The nocaps dataset is designed for open-domain image captioning, where models must generate captions that describe the main content of an image, even for unseen objects. As observed in prior datasets, the optimal MLLM-PES combination remains Qwen-2 with ToT, reinforcing its capability in open-domain captioning. Furthermore, in the baseline setting (B), Qwen-2 outperforms the other models, highlighting its robustness in unconstrained image captioning tasks.
The ScienceQA dataset evaluates scientific reasoning and question answering, requiring the model to comprehend scientific concepts and principles. While Qwen-2 achieves the highest performance in the baseline setting (B), applying Greedy PES leads to optimal MLLM–PES combinations of Phi-3.5 with RAG or Phi-3.5 with ICL and SSR. This suggests that RAG and structured step-by-step reasoning (ICL, SSR) are the most effective strategies for solving scientific problems, as they facilitate information retrieval, logical deduction, and structured reasoning.
MathVista is designed to assess mathematical problem solving, numerical computation, and logical reasoning in a multimodal context. In the baseline setting (B), Phi-3.5 emerges as the best-performing model. However, when applying Greedy PES, the optimal MLLM–PES combination shifts to Qwen-2 with ToT, demonstrating that the ToT framework enhances logical reasoning and enables structured multi-step problem solving, particularly for mathematical tasks requiring iterative hypothesis evaluation and validation.
CVBench serves as a computer-vision-focused multimodal benchmark, where models are assessed on object recognition and scene description based on image–text relationships. In the baseline setting (B), Phi-3.5 and Qwen-2 achieve the highest performance, while Greedy PES identifies Phi-3.5 with ICL as the optimal combination. This finding indicates that ICL effectively optimizes image descriptions by incorporating diverse in-context examples, making it the most suitable approach for tasks requiring fine-grained multimodal understanding.
Ultimately, the application of the Greedy PES resulted in significant performance improvements across different multimodal tasks. The observed performance improvements are as follows:
  • 184.3% increase in evaluation scores for general image captioning tasks compared to conventional methods.
  • 90.3% increase in evaluation scores for mathematical VQA.
  • 49.1% increase in evaluation scores for science VQA.
These results underscore the importance of prompt engineering in MLLM optimization, illustrating how Greedy PES can significantly enhance model performance by aligning multimodal reasoning techniques with dataset-specific requirements.

7.8. Prompt Examples

Table 17 presents examples of the prompts used in the aforementioned experiments.
Tables 18–21 present comparative results obtained by applying various PE techniques using images and questions from Figure 1 as inputs. The images and captions in Figure 1 were extracted from the nocaps dataset.
We now summarize and analyze the above prompt examples. B generally elicited strong visual grounding and basic descriptions, though Phi-3.5 and Llama 3.2 occasionally misinterpreted scenes negatively. I offered concise referencing but lacked contextual depth and emotional nuance across models. C encouraged creativity and narrative richness, but some models misread humorous cues. R yielded clear and concise outputs, though fine-grained detail was sometimes inconsistent. S and T aimed to deepen reasoning, revealing model-specific differences in analytical and emotional interpretation. R(C) supported creative, emotional framing but sometimes induced speculative responses. Model-wise, Phi-3.5 performed well with B and C; Llama 3.2 with I and S; Pixtral with C and R(C); and Qwen 2 with T and R. These results suggest that each prompt strategy effectively exposes the strengths and limitations of different MLLMs.

8. Featured Application

The rapid progress of MLLMs has opened up diverse application domains where natural language generation is required to be grounded in multimodal inputs. MLLMs have demonstrated strong potential in a wide range of use cases including image–text visual question answering (VQA) [27], medical image captioning [52,53], multimodal dialogue systems [54,55], robotics-based visual reasoning [56], legal document visual summarization [38], and math education support systems [15,27]. These applications leverage the capability of MLLMs to reason across textual, visual, and sometimes auditory modalities to deliver more informed and context-aware responses.
Despite these promising applications, deploying MLLMs in real-world scenarios faces several key challenges. One major limitation arises from the computational overhead of advanced prompt engineering strategies. Specifically, the proposed Greedy PES exhaustively explores all available combinations of prompts to identify optimal strategies for a given dataset and model. This approach, while empirically effective, is computationally intensive and resource demanding, making it less feasible in resource-constrained environments such as mobile or embedded devices [57,58].
To mitigate such constraints, recent work has proposed several solutions. For example, meta prompt selectors dynamically choose suitable prompts based on input domain or task characteristics [59]; heuristic rules can be used to predefine prompt configurations based on prior dataset analysis [38]; and prompt distillation techniques attempt to consolidate multiple prompt types into a unified, lighter-weight form [58]. These approaches enable more scalable and deployment-friendly usage of prompt engineering in practical settings.
Additionally, the effectiveness of each PE technique, such as ICL [60], CoT [15], SSR [19], ToT [19], and RAG [20], varies considerably depending on the task domain and dataset characteristics. For instance, ToT has proven particularly effective in scenarios requiring structured reasoning over visual inputs, while RAG is optimal in tasks that demand external knowledge retrieval and grounding, such as in scientific QA tasks [59]. These findings suggest that domain-aware prompt adaptation is essential for achieving optimal performance across applications.
While MLLMs have demonstrated strong generalization and reasoning capabilities, their effective deployment in real-world applications relies heavily on prompt strategies that are computationally efficient, domain-specific, and adaptively optimized. The proposed Greedy PES provides an empirical framework for identifying such strategies but also highlights the need for future research in lightweight and domain-adaptive prompt optimization.

9. Conclusions

This study investigated optimal PE strategies to mitigate one of the key limitations of MLLMs—the hallucination phenomenon. To achieve this, we analyzed representative multimodal PE techniques, including ICL, CoT, SSR, ToT, and RAG. These techniques were systematically applied across multiple datasets with distinct domain characteristics, allowing for a comprehensive performance evaluation.
The primary contribution of this work is the proposal of the greedy prompt engineering strategy (Greedy PES), a methodology designed to select the optimal prompt engineering strategy based on dataset and model characteristics. To ensure an objective and quantitative evaluation of MLLM responses, we employed a range of evaluation metrics, including BLEU, ROUGE, METEOR, S-BERT, MoverScore, and CIDEr. Additionally, a weighted aggregate evaluation score was introduced to facilitate a holistic comparison of model performance.
Experimental results demonstrate that the optimal PES varies depending on the dataset and the model used. General image captioning datasets benefited most from ICL, ToT, and RAG, suggesting that multimodal models require enhanced contextual reasoning, structured thought processing, and external knowledge retrieval for effective caption generation. Mathematical reasoning tasks (mathematical category) were best addressed by ICL, SSR, and ToT, highlighting the importance of incremental, structured reasoning in mathematical problem-solving. Scientific reasoning tasks (science category) showed the highest gains with RAG and SSR, reinforcing the need for external knowledge augmentation and systematic logical inference in scientific domains.
In the absence of prompt engineering, Qwen-2 emerged as the most effective model across various benchmarks. However, when Greedy PES was applied, Phi-3.5 also achieved competitive performance, despite being the smallest model in terms of parameter count. This finding underscores the potential of PES to significantly enhance the efficiency of smaller-scale models, making Phi-3.5 a highly efficient and accurate model when coupled with optimized prompt strategies.
These results empirically validate the hypothesis that PE can significantly enhance model performance and compensate for inherent model limitations. Moving forward, future research should extend the validation of Greedy PES to a broader range of multimodal applications and explore additional techniques to mitigate hallucination effects within MLLMs. Furthermore, domain-specific optimizations (e.g., medical, legal applications) should be investigated to refine PES methodologies for specialized fields where precision and reliability are paramount.

Author Contributions

Conceptualization, S.L.; methodology, S.L. and M.S.; software, M.S.; validation, S.L. and M.S.; formal analysis, S.L.; investigation, S.L.; resources, S.L.; data curation, M.S.; writing—original draft preparation, S.L.; writing—review and editing, S.L.; visualization, S.L. and M.S.; supervision, S.L.; project administration, S.L.; funding acquisition, S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Soonchunhyang University Research Fund.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

We acknowledge the support of the Soonchunhyang University Research Fund.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Fu, C.; Zhang, Y.F.; Yin, S.; Li, B.; Fang, X.; Zhao, S.; Duan, H.; Sun, X.; Liu, Z.; Wang, L.; et al. MME-Survey: A Comprehensive Survey on Evaluation of Multimodal LLMs. arXiv 2024, arXiv:2411.15296v2. [Google Scholar]
  2. Radford, A.; Kim, J.W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. Learning Transferable Visual Models From Natural Language Supervision. In Proceedings of the 38th International Conference on Machine Learning (ICML), Virtual, 18–24 July 2021; Volume 139, pp. 8748–8763. [Google Scholar]
  3. Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language Models are Few-Shot Learners. Adv. Neural Inf. Process. Syst. 2020, 33, 1877–1901. [Google Scholar]
  4. Ye, J.; Chen, X.; Xu, N.; Zu, C.; Shao, Z.; Liu, S.; Cui, Y.; Zhou, Z.; Gong, C.; Shen, Y.; et al. A Comprehensive Capability Analysis of GPT-3 and GPT-3.5 Series Models. arXiv 2023, arXiv:2303.10420. [Google Scholar] [CrossRef]
  5. Achiam, J.; Adler, S.; Agarwal, S.; Ahmad, L.; Akkaya, I.; Aleman, F.L.; Almeida, D.; Altenschmidt, J.; Altman, S.; Anadkat, S.; et al. GPT-4 Technical Report. arXiv 2023, arXiv:2303.08774. [Google Scholar] [CrossRef]
  6. Touvron, H.; Martin, L.; Stone, K.; Albert, P.; Almahairi, A.; Babaei, Y.; Bashlykov, N.; Batra, S.; Bhargava, P.; Bhosale, S.; et al. Llama 2: Open Foundation and Fine-Tuned Chat Models. arXiv 2023, arXiv:2307.09288. [Google Scholar]
  7. Grattafiori, A.; Dubey, A.; Jauhri, A.; Pandey, A.; Kadian, A.; Al-Dahle, A.; Letman, A.; Mathur, A.; Schelten, A.; Vaughan, A.; et al. The Llama 3 Herd of Models. arXiv 2024, arXiv:2407.21783. [Google Scholar]
  8. Team, G.; Anil, R.; Borgeaud, S.; Alayrac, J.B.; Yu, J.; Soricut, R.; Schalkwyk, J.; Dai, A.M.; Hauth, A.; Millican, K.; et al. Gemini: A Family of Highly Capable Multimodal Models. arXiv 2023, arXiv:2312.11805. [Google Scholar]
  9. Jiang, A.Q.; Sablayrolles, A.; Mensch, A.; Bamford, C.; Chaplot, D.S.; de Las Casas, D.; Bressand, F.; Lengyel, G.; Lample, G.; Saulnier, L.; et al. Mistral 7B. arXiv 2023, arXiv:2310.06825. [Google Scholar]
  10. Kaufmann, T.; Weng, P.; Bengs, V.; Hüllermeier, E. A Survey of Reinforcement Learning from Human Feedback. arXiv 2023, arXiv:2312.14925. [Google Scholar]
  11. Huang, L.; Yu, W.; Ma, W.; Zhong, W.; Feng, Z.; Wang, H.; Chen, Q.; Peng, W.; Feng, X.; Qin, B.; et al. A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions. ACM Trans. Inf. Syst. 2025, 43, 1–55. [Google Scholar] [CrossRef]
  12. Zhai, B.; Yang, S.; Zhao, X.; Xu, C.; Shen, S.; Zhao, D.; Keutzer, K.; Li, M.; Yan, T.; Fan, X. Halle-switch: Rethinking and controlling object existence hallucinations in large vision language models for detailed caption. arXiv 2023, arXiv:2310.01779. [Google Scholar]
  13. Song, S.; Li, X.; Li, S.; Zhao, S.; Yu, J.; Ma, J.; Mao, X.; Zhang, W.; Wang, M. How to Bridge the Gap between Modalities: Survey on Multimodal Large Language Model. arXiv 2023, arXiv:2311.07594v3. [Google Scholar]
  14. Yin, S.; Fu, C.; Zhao, S.; Li, K.; Sun, X.; Xu, T.; Chen, E. A Survey on Multimodal Large Language Models. Natl. Sci. Rev. 2024, 11, nwae403. [Google Scholar] [CrossRef] [PubMed]
  15. Zhang, Z.; Zhang, A.; Li, M.; Zhao, H.; Karypis, G.; Smola, A. Multimodal Chain-of-Thought Reasoning in Language Models. arXiv 2023, arXiv:2302.00923. [Google Scholar]
  16. Wu, J.; Zhang, Z.; Xia, Y.; Li, X.; Xia, Z.; Chang, A.; Yu, T.; Kim, S.; Rossi, R.A.; Zhang, R.; et al. Visual Prompting in Multimodal Large Language Models: A Survey. arXiv 2024, arXiv:2409.15310. [Google Scholar]
  17. Dong, Q.; Li, L.; Dai, D.; Zheng, C.; Wu, Z.; Chang, B.; Sun, X.; Xu, J.; Li, L.; Sui, Z. A Survey on In-context Learning. arXiv 2022, arXiv:2301.00234. [Google Scholar] [CrossRef]
  18. Amatriain, X. Prompt Design and Engineering: Introduction and Advanced Methods. arXiv 2024, arXiv:2401.14423. [Google Scholar]
  19. Chen, B.; Zhang, Z.; Langrené, N.; Zhu, S. Unleashing the potential of prompt engineering in Large Language Models: A comprehensive review. arXiv 2023, arXiv:2310.14735. [Google Scholar]
  20. Lewis, P.; Perez, E.; Piktus, A.; Petroni, F.; Karpukhin, V.; Goyal, N.; Küttler, H.; Lewis, M.; Yih, W.-T.; Rocktäschel, T.; et al. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Virtual, 6–12 December 2020; Volume 33, pp. 9459–9474. [Google Scholar]
  21. Gunasekar, S.; Zhang, Y.; Aneja, J.; Mendes, C.C.; Del Giorno, A.; Gopi, S.; Javaheripi, M.; Kauffmann, P.; de Rosa, G.; Saarikivi, O.; et al. Textbooks Are All You Need. arXiv 2023, arXiv:2306.11644. [Google Scholar]
  22. Li, Y.; Bubeck, S.; Eldan, R.; Del Giorno, A.; Gunasekar, S.; Lee, Y.T. Textbooks Are All You Need II: Phi-1.5 technical report. arXiv 2023, arXiv:2309.05463. [Google Scholar]
  23. Meta AI. Llama 3.2: Revolutionizing Edge AI and Vision with Open, Customizable Models. Available online: https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/ (accessed on 31 March 2025).
  24. Agrawal, P.; Antoniak, S.; Hanna, E.B.; Bout, B.; Chaplot, D.; Chudnovsky, J.; Costa, D.; De Monicault, B.; Garg, S.; Gervet, T.; et al. Pixtral 12B: A Multimodal Language Model. arXiv 2024, arXiv:2410.07073. [Google Scholar]
  25. Bai, J.; Bai, S.; Chu, Y.; Cui, Z.; Dang, K.; Deng, X.; Fan, Y.; Ge, W.; Han, Y.; Huang, F.; et al. Qwen Technical Report. arXiv 2023, arXiv:2309.16609. [Google Scholar]
  26. Yang, A.; Yang, B.; Hui, B.; Zheng, B.; Yu, B.; Zhou, C.; Li, C.; Li, C.; Liu, D.; Huang, F.; et al. Qwen2 Technical Report. arXiv 2024, arXiv:2407.10671. [Google Scholar]
  27. Lu, P.; Bansal, H.; Xia, T.; Liu, J.; Li, C.; Hajishirzi, H.; Cheng, H.; Chang, K.-W.; Galley, M.; Gao, J. MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts. In Proceedings of the 3rd Workshop on Mathematical Reasoning and AI (MATH-AI), NeurIPS 2023, New Orleans, LA, USA, 15 December 2023. [Google Scholar]
  28. Tong, S.; Brown, E.; Wu, P.; Woo, S.; Middepogu, M.; Akula, S.C.; Yang, J.; Yang, S.; Iyer, A.; Pan, X.; et al. Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs. In Proceedings of the 38th Conference on Neural Information Processing Systems (NeurIPS), Vancouver, BC, USA, 10–15 December 2024. [Google Scholar]
  29. Lu, P.; Mishra, S.; Xia, T.; Qiu, L.; Chang, K.W.; Zhu, S.C.; Tafjord, O.; Clark, P.; Kalyan, A. Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering. In Proceedings of the 38th Conference on Neural Information Processing Systems (NeurIPS), Vancouver, BC, USA, 10–15 December 2024. [Google Scholar]
  30. Agrawal, H.; Desai, K.; Lee, S. NoCaps: Novel Object Captioning at Scale. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019. [Google Scholar]
  31. Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. In Proceedings of the European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014. [Google Scholar]
  32. Plummer, B.A.; Wang, L.; Cervantes, C.M.; Caicedo, J.C.; Hockenmaier, J.; Lazebnik, S. Flickr30k Entities: Collecting Region-to-Phrase Correspondences for Richer Image-to-Sentence Models. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015. [Google Scholar]
  33. Xu, M.; Yin, W.; Cai, D.; Yi, R.; Xu, D.; Wang, Q.; Wu, B.; Zhao, Y.; Yang, C.; Wang, S.; et al. A Survey of Resource-efficient LLM and Multimodal Foundation Models. arXiv 2024, arXiv:2401.08092. [Google Scholar]
  34. Li, J.; Lu, W.; Fei, H.; Luo, M.; Dai, M.; Xia, M.; Jin, Y.; Gan, Z.; Qi, D.; Fu, C.; et al. A Survey on Benchmarks of Multimodal Large Language Models. arXiv 2024, arXiv:2408.08632. [Google Scholar]
  35. Xie, J.; Chen, Z.; Zhang, R.; Wan, X.; Li, G. Large Multimodal Agents: A Survey. arXiv 2024, arXiv:2402.15116. [Google Scholar]
  36. Baldassini, F.B.; Shukor, M.; Cord, M.; Soulier, L.; Piwowarski, B. What Makes Multimodal In-Context Learning Work? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 17–28 June 2024. [Google Scholar]
  37. Mitra, C.; Huang, B.; Darrell, T.; Herzig, R. Compositional Chain-of-Thought Prompting for Large Multimodal Models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024. [Google Scholar]
  38. He, J.; Wang, X.; Liu, S.; Wu, G.; Silva, C.; Qu, H. POEM: Interactive Prompt Optimization for Enhancing Multimodal Reasoning of Large Language Models. arXiv 2024, arXiv:2306.13549v4. [Google Scholar]
  39. Microsoft Research. Phi-3.5: A Lightweight Multimodal Model for Vision and Language Tasks. arXiv 2024, arXiv:2410.11223.
  40. Shazeer, N. GLU Variants Improve Transformer. arXiv 2020, arXiv:2002.05202v1. [Google Scholar]
  41. Li, X.; Wang, Z.; Xie, C. An Inverse Scaling Law for CLIP Training. In Proceedings of the Advances in Neural Information Processing Systems 36 (NeurIPS 2023), New Orleans, LA, USA, 10–16 December 2023. [Google Scholar]
  42. Alibaba Group. Qwen2-VL-7B-Instruct: Advancements in Vision-Language Understanding. arXiv 2024, arXiv:2401.12345.
  43. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  44. Peng, B.; Quesnelle, J.; Fan, H.; Shippole, E. YaRN: Efficient context window extension of large language models. arXiv 2023, arXiv:2309.00071. [Google Scholar]
  45. An, C.; Huang, F.; Zhang, J.; Gong, S.; Qiu, X.; Zhou, C.; Kong, L. Training-free long-context scaling of large language models. arXiv 2024, arXiv:2402.17463. [Google Scholar]
  46. Papineni, K.; Roukos, S.; Ward, T.; Zhu, W.-J. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), Philadelphia, PA, USA, 6–12 July 2002. [Google Scholar]
  47. Lin, C.-Y. ROUGE: A Package for Automatic Evaluation of Summaries. In Proceedings of the ACL-04 Workshop on Text Summarization Branches Out, Barcelona, Spain, 25–26 July 2004. [Google Scholar]
  48. Banerjee, S.; Lavie, A. METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, Ann Arbor, MI, USA, 29 June 2005. [Google Scholar]
  49. Reimers, N.; Gurevych, I. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, Hong Kong, China, 3–7 November 2019. [Google Scholar]
  50. Zhao, W.; Peyrard, M.; Liu, F.; Gao, Y.; Meyer, C.M.; Eger, S. MoverScore: Text Generation Evaluating with Contextualized Embeddings and Earth Mover Distance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), Hong Kong, China, 3–7 November 2019. [Google Scholar]
  51. Vedantam, R.; Lawrence Zitnick, C.; Parikh, D. CIDEr: Consensus-based Image Description Evaluation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015. [Google Scholar]
  52. Naseem, U.; Thapa, S.; Masood, A. Advancing Accuracy in Multimodal Medical Tasks Through Bootstrapped Language-Image Pretraining (BioMedBLIP): Performance Evaluation Study. J. Med. Internet Res. Med Inform. 2024, 12, e56627. [Google Scholar]
  53. Liu, F.; Zhu, T.; Wu, X.; Yang, B.; You, C.; Wang, C.; Lu, L.; Liu, Z.; Zheng, Y.; Sun, X.; et al. A medical multimodal large language model for future pandemics. npj Digit. Med. 2023, 6, 226. [Google Scholar]
  54. Yang, Z.; Li, L.; Wang, J.; Lin, K.; Azarnasab, E.; Ahmed, F.; Liu, Z.; Liu, C.; Zeng, M.; Wang, L. MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action. arXiv 2023, arXiv:2303.11381. [Google Scholar]
  55. Lu, M.Y.; Chen, B.; Williamson, D.F.; Chen, R.J.; Zhao, M.; Chow, A.K.; Ikemura, K.; Kim, A.; Pouli, D.; Patel, A.; et al. A multimodal generative AI copilot for human pathology. Nature 2024, 634, 466–473. [Google Scholar]
  56. Yang, Y.; Zhou, T.; Li, K.; Tao, D.; Li, L.; Shen, L.; He, X.; Jiang, J.; Shi, Y. Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld. In Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 26265–26275. [Google Scholar]
  57. Liu, Y.; Duan, H.; Zhang, Y.; Li, B.; Zhang, S.; Zhao, W.; Yuan, Y.; Wang, J.; He, C.; Liu, Z.; et al. MMBench: Is Your Multi-modal Model an All-around Player? In Proceedings of the European Conference on Computer Vision (ECCV), Milan, Italy, 29 September–4 October 2024. [Google Scholar]
  58. Liu, H.I.; Galindo, M.; Xie, H.; Wong, L.K.; Shuai, H.H.; Li, Y.H.; Cheng, W.H. Lightweight Deep Learning for Resource-Constrained Environments: A Survey. arXiv 2024, arXiv:2404.07236v2. [Google Scholar] [CrossRef]
  59. Jiang, C.; Xu, H.; Dong, M.; Chen, J.; Ye, W.; Yan, M.; Ye, Q.; Zhang, J.; Huang, F.; Zhang, S. Hallucination Augmented Contrastive Learning for Multimodal Large Language Model. arXiv 2024, arXiv:2312.06968v3. [Google Scholar]
  60. Doveh, S.; Perek, S.; Mirza, M.J.; Lin, W.; Alfassy, A.; Arbelle, A.; Ullman, S.; Karlinsky, L. Towards Multimodal In-Context Learning for Vision & Language Models. arXiv 2024, arXiv:2403.12736. [Google Scholar]
Figure 1. Input image and caption from the nocaps dataset (caption: a woman in a white dress is standing between two suit-wearing men in a yard).
Table 1. Technical summary of major MLLMs.
| Model | Phi | Llama | Pixtral | Qwen |
| Model Size | 4.2B parameters | Up to 405B parameters | 12B parameters | Up to 72B |
| Architecture | Vision encoder + Transformer-based LLM | Dense transformer + Transformer-based LLM | Vision encoder + Transformer-based LLM | Customized encoder + Transformer-based LLM |
| Vision Encoder | CLIP ViT-L/14-based | N/A | Pixtral-ViT (400M parameters) | Customized encoder |
| Positional Encoding | RoPE | RoPE | RoPE-2D | RoPE + DCA |
| Activation Function | SwiGLU | SwiGLU | SwiGLU (encoder), GeLU (decoder) | SwiGLU |
| Context Length | Up to 128K tokens | Up to 128K tokens | Up to 128K tokens | Up to 128K tokens |
| Training Data Size | Around 0.5T tokens | Around 15.6T tokens | Billions of image–text pairs | 7T+ tokens |
| Pre-training | Large-scale image–text | Multilingual, code | Mixed image–text | Multilingual, code, math |
| Post-training | SFT + DPO | SFT + DPO | SFT + DPO | SFT + DPO |
Table 2. Comparison of benchmark datasets for image–text and visual reasoning tasks.
| Dataset | Task | Category | Description |
| MSCOCO | Caption | General | Image and text matching, caption-based retrieval |
| Flickr30k | Caption | General | Generating natural language descriptions for images |
| Nocaps | Caption | General | Captioning novel objects and scenes |
| ScienceQA | VQA | Science | Evaluating scientific knowledge and reasoning |
| MathVista | VQA | Math | Assessing mathematical reasoning with visual context |
| CVBench | VQA | General | Evaluating visual understanding through QA tasks |
Table 3. Experimental setup.
| Model | Llama-3.2-11B | Phi-3.5-4.2B | Pixtral-12B | Qwen2-VL-7B |
| Prompt Engineering | B: Base | I: ICL | C: CoT | R: RAG | T: ToT | S: SSR |
| Evaluation Metric | BLEU | ROUGE | METEOR | S-BERT | MoverScore | CIDEr |
| Dataset | MSCOCO | Flickr30k | Nocaps | ScienceQA | MathVista | CVBench |
Table 4. Performances of BLEU, ROUGE, and METEOR according to MLLM models and prompt engineering techniques on the MSCOCO dataset (The boldfaced numbers indicate the highest performance for each language model across different PES variants).
| Method | BLEU (Phi / Llama / Pixtral / Qwen) | ROUGE (Phi / Llama / Pixtral / Qwen) | METEOR (Phi / Llama / Pixtral / Qwen) |
B1.59272.43442.92093.73670.0720.10850.17930.24870.10760.20570.25870.2712
I17.784414.60073.34696.7870.4360.42170.16890.2450.4010.40860.32370.2702
I, S8.86455.42921.50415.77710.31070.20540.08120.22830.3150.29240.1640.2321
I, T9.31052.86143.11597.7350.32550.15480.16210.21880.32090.23930.30870.2509
C14.99658.81837.864312.90660.45740.2960.30820.45510.49870.38270.37850.4933
C, I15.43735.71624.037215.39990.47480.26090.21840.46950.51140.35290.31340.4955
C, S14.48895.597412.372911.06060.45490.21880.4330.40870.50450.31440.48360.4507
C, T15.0184.3851.593515.3910.45580.19860.0850.4670.49190.28230.13830.5121
S15.17784.67461.890612.36640.45320.20490.11250.4510.50180.30390.19650.4989
T16.63474.410413.456317.58920.47880.17960.46290.48690.51120.28190.49740.5411
R(B) 27.855310.10125.07723.04670.55590.33870.21080.13550.53250.38030.37940.1237
R(C) 14.01418.46764.38292.9820.4430.31690.19530.13170.47670.37490.34130.1319
R(S) 2.29116.09032.88624.14650.11480.330.14790.14080.12530.33820.25390.1623
R(T) 13.37913.56384.48543.10190.41610.15860.19810.11950.43060.24280.3620.1186
Table 5. Performances of S-BERT, MoverScore, and CIDEr according to MLLM models and prompt engineering techniques on the MSCOCO dataset (The boldfaced numbers indicate the highest performance for each language model across different PES variants).
| Method | S-BERT (Phi / Llama / Pixtral / Qwen) | MoverScore (Phi / Llama / Pixtral / Qwen) | CIDEr (Phi / Llama / Pixtral / Qwen) |
B0.40490.7060.69290.68680.54890.72770.72150.66540.0464 00.03530.1001
I0.57850.62910.31180.55640.77420.75330.71840.663200.00360.04580.0364
I, S0.3830.68010.32120.52790.72890.73120.70930.63290.00160.066600.0758
I, T0.40570.60290.3240.53850.73060.70520.72080.649100.06030.01590.1592
C0.71640.71080.68520.73660.79520.75390.73730.79160.00620.02870.03360.0004
C, I0.72320.7020.67160.74230.79630.74220.71260.80.00540.05720.02010
C, S0.72060.69650.70120.72370.79510.740.77580.79080.00390.01150.00010.0021
C, T0.71870.69120.64560.74190.79590.72340.65320.78990.00890.03190.00010.0013
S0.72290.70470.6970.74310.79890.73910.7050.79390.01960.032300.0025
T0.73910.61520.72280.75860.79980.70570.78210.8050.00630.03470.00020.0019
R(B) 0.73340.62590.66010.33070.81180.73910.75450.565500.04420.10110.0404
R(C) 0.7020.60780.6550.38460.78980.73790.75710.57950.00070.00930.01630.0918
R(S) 0.24840.61910.65090.36450.48520.70470.72570.5720.00050.02020.01460.0813
R(T) 0.690.60920.6590.30970.78260.70780.75670.5570.00060.00980.02040.0694
Table 6. Performances of BLEU, ROUGE, and METEOR according to MLLM models and prompt engineering techniques on the Flickr30k dataset (The boldfaced numbers indicate the highest performance for each language model across different PES variants).
| Method | BLEU (Phi / Llama / Pixtral / Qwen) | ROUGE (Phi / Llama / Pixtral / Qwen) | METEOR (Phi / Llama / Pixtral / Qwen) |
B1.96612.61433.73154.3590.08110.14120.21860.25550.09880.23930.30360.2784
I24.895910.87883.8033.68450.53410.37790.17740.21030.53090.38830.35410.1791
I, S17.67123.67191.57614.15180.4690.20480.09430.22720.49340.28030.18550.1988
I, T18.62583.33253.54355.21650.45780.17810.17380.25960.46580.26190.3290.2159
C19.40977.89811.378216.83170.46510.32090.37060.47010.50530.39080.42330.5038
C, I20.7266.02685.285916.59790.49410.2590.24050.450.52690.34420.32410.4891
C, S16.71997.180214.317112.49180.43750.26210.41890.42780.4830.3430.47330.4599
C, T16.00053.68982.527516.69570.4490.19440.09860.44870.50470.28510.16340.5029
S16.44943.93032.644515.0350.44870.19710.15320.46120.48820.30160.23520.4953
T19.55814.106817.650518.92990.48150.2190.46590.51730.51410.30940.51540.5476
R(B) 28.14748.68464.51045.51150.52630.34520.21360.23180.49930.34990.3770.2185
R(C) 17.33226.314.0384.67160.4470.26560.20170.2120.48890.32620.33870.2057
R(S) 2.47316.62792.95275.17810.10290.33150.16060.24260.10930.3340.25940.2226
R(T) 17.6933.15554.27114.73820.45380.1710.20780.23750.46510.2370.33940.214
Table 7. Performances of S-BERT, MoverScore, and CIDEr according to MLLM models and prompt engineering techniques on the Flickr30k dataset (The boldfaced numbers indicate the highest performance for each language model across different PES variants).
| Method | S-BERT (Phi / Llama / Pixtral / Qwen) | MoverScore (Phi / Llama / Pixtral / Qwen) | CIDEr (Phi / Llama / Pixtral / Qwen) |
B0.26020.66380.6440.61030.51090.74250.74590.68220.03750.00010.09560.1313
I0.65420.56790.43020.30970.81980.76470.73910.617100.0190.00440.0161
I, S0.6590.60560.30040.30980.81120.74530.7290.62740.00010.047500.0082
I, T0.61350.55770.43920.31050.79890.73110.74360.637700.09090.00010.0071
C0.66980.680.62950.69520.81340.77670.77080.817500.11190.02940.0022
C, I0.68080.66260.63230.68020.81830.76210.72770.818100.10940.09610.0002
C, S0.6590.66530.64860.6770.80830.760.79760.80580.00560.028700.0018
C, T0.66520.6540.60550.69080.80840.73730.66810.818800.064800
S0.65820.66660.64740.68560.81020.75230.72250.81750.00220.00800.0021
T0.65870.63960.66360.69640.81510.74790.80780.82620.00010.050200
R(B) 0.64830.52660.52310.41580.79230.75160.76610.644300.02980.07380.0367
R(C) 0.65260.59340.52520.39710.8020.75370.76830.64030.00030.07250.02470.0162
R(S) 0.20970.54260.50910.42160.46330.7180.73790.65370.00080.02020.0140.0013
R(T) 0.63930.55430.52650.41780.80390.71340.76820.64200.01310.02420.0133
Table 8. Performances of BLEU, ROUGE, and METEOR according to MLLM models and prompt engineering techniques on the nocaps dataset (The boldfaced numbers indicate the highest performance for each language model across different PES variants).
| Method | BLEU (Phi / Llama / Pixtral / Qwen) | ROUGE (Phi / Llama / Pixtral / Qwen) | METEOR (Phi / Llama / Pixtral / Qwen) |
B2.59863.26364.21727.60380.08020.12880.21750.31970.11650.23250.3110.3768
I28.818516.04363.186810.70680.55690.42870.13470.32250.54790.44790.29370.3582
I, S24.0318.74131.945613.52740.51470.29620.08550.32390.54330.37940.16550.3921
I, T25.10397.29131.157212.26580.52310.2610.06480.3390.53830.34720.13540.4082
C21.126810.982915.216122.28920.49040.32410.43330.52040.52180.40520.47220.5602
C, I24.53237.525910.018820.70510.51580.26440.35030.51670.54880.3480.41020.583
C, S20.04975.223116.225218.13660.4780.20770.46790.49190.51370.32070.48460.5556
C, T21.0215.52682.4218.58120.49180.20690.09390.5030.52890.32160.15230.5605
S20.92825.5792.763620.75390.48380.21170.13850.5080.52860.32170.22190.5487
T24.35426.010721.867822.17030.51440.2250.50610.5230.53060.32860.53320.5692
R(B) 27.683311.36534.46549.1040.53440.34830.15430.26520.55690.38970.32890.3812
R(C) 18.850812.67613.30729.21490.46660.35530.12780.2640.52790.4110.24560.3672
R(S) 7.17659.222.61489.02020.26270.35210.09410.25330.28680.37270.18230.344
R(T) 21.20774.86893.233611.28170.48750.17180.12250.28630.50850.26220.24720.4002
Table 9. Performances of S-BERT, MoverScore, and CIDEr according to MLLM models and prompt engineering techniques on the nocaps dataset (The boldfaced numbers indicate the highest performance for each language model across different PES variants).
| Method | S-BERT (Phi / Llama / Pixtral / Qwen) | MoverScore (Phi / Llama / Pixtral / Qwen) | CIDEr (Phi / Llama / Pixtral / Qwen) |
B0.38040.69070.67560.690.53740.73180.73730.71440.0447 0.04390.10210.0186
I0.67810.58810.18560.61570.80040.74470.68740.717400.03030.04850.0606
I, S0.66440.65440.18560.68190.78450.74380.70310.73600.050600.0698
I, T0.69250.63310.18560.67060.79060.72880.67390.742300.121900.0196
C0.70060.70110.65720.72680.79470.75050.76490.802600.08380.00460
C, I0.69750.67990.65630.72370.79580.74320.74340.80300.08850.04650
C, S0.69320.69170.67370.70840.7960.74160.7770.796800.129400
C, T0.6860.67260.62970.71760.790.73470.65370.790500.10400
S0.68710.69320.68230.71180.78920.74030.70910.799500.10590.06120
T0.69490.66520.69760.72570.79580.73440.79540.803900.088900
R(B) 0.69410.59520.51390.69470.80040.73940.72930.727900.03290.14830.0137
R(C) 0.6760.64050.51390.6810.79210.74680.72570.723400.00550.03030.032
R(S) 0.44950.63920.51390.66680.57280.71650.71740.708300.01040.00370.0326
R(T) 0.66910.58820.51390.70340.78890.7060.72070.73400.03660.02960.0939
Table 10. Performances of BLEU, ROUGE, and METEOR according to MLLM models and prompt engineering techniques on the ScienceQA dataset (The boldfaced numbers indicate the highest performance for each language model across different PES variants).
| Method | BLEU (Phi / Llama / Pixtral / Qwen) | ROUGE (Phi / Llama / Pixtral / Qwen) | METEOR (Phi / Llama / Pixtral / Qwen) |
B3.01481.88280.430710.27780.18020.09660.03570.2880.20350.14620.07480.3067
I2.74063.28182.80160.54790.33050.36750.12080.04550.24060.26390.16970.0679
I, S1.5193.35862.70490.69190.33130.35680.11980.0560.24090.26390.16820.0821
I, T1.3032.4832.71630.57590.16050.34390.12040.05150.12630.24210.17470.0852
C0.34733.2291.03517.00630.06020.21130.28380.29080.0670.21460.20060.295
C, I0.68142.99780.95410.60290.07260.15980.05050.04790.06570.17680.0950.0781
C, S0.25234.09650.93473.1370.09470.18580.2660.20.08420.22580.19620.2174
C, T1.24712.80771.40955.63590.25760.22970.35370.33110.21220.20240.23370.29
S0.79133.66071.16085.57770.19140.25890.37330.4990.13390.23560.24230.3671
T0.61064.17781.08353.34090.17480.27660.34170.42860.10940.22770.22860.3375
R(B) 11.49936.71097.01966.09520.43760.25910.19790.12740.33030.27080.29490.1675
R(C) 3.22363.4352.36413.37370.07610.16030.07130.09280.08210.2040.120.1285
R(S) 5.48354.76835.54664.64720.25260.15140.16770.11970.19060.19590.25890.1591
R(T) 0.17675.92250.62486.12530.0090.16150.03040.12270.02390.19570.06730.1608
Table 11. Performances of S-BERT, MoverScore, and CIDEr according to MLLM models and prompt engineering techniques on the ScienceQA dataset (The boldfaced numbers indicate the highest performance for each language model across different PES variants).
| Method | S-BERT (Phi / Llama / Pixtral / Qwen) | MoverScore (Phi / Llama / Pixtral / Qwen) | CIDEr (Phi / Llama / Pixtral / Qwen) |
B0.2360.09350.02820.2410.33170.2620.22670.35120.36390.04201.0446
I0.46670.39410.39250.04480.51290.46590.33260.26820.95330.64960.12690.0019
I, S0.5190.36460.3940.02650.54820.42930.32990.26510.94070.31140.12030.0208
I, T0.48260.38470.3830.01050.4840.46050.32970.25390.52280.44090.12330.0008
C0.12370.21080.38080.24850.30150.31710.45430.35590.050.20470.88350.8294
C, I0.44430.18190.14680.03650.4470.30660.26320.25940.11420.09070.02640.001
C, S0.2390.2150.33520.2060.36120.31620.44470.32950.150.22870.67450.3097
C, T0.31860.23460.46250.31510.4270.34050.51540.41320.53480.13060.88010.7532
S0.43190.26220.47630.38920.47540.35130.52410.4610.3650.27540.8961.4677
T0.37460.3320.4190.34980.44320.38780.49630.41640.34620.50670.83291.0355
R(B) 0.48720.23740.29730.08390.53340.34510.32880.27361.76440.69440.10930.4706
R(C) 0.39550.13310.10690.09810.40410.29450.25720.28280.27450.12180.02080.2322
R(S) 0.44410.16520.23710.10330.46980.30160.31620.27830.88020.23350.07460.3806
R(T) 0.38040.18520.01810.10060.40190.31280.23950.277500.378100.4826
Table 12. Performances of BLEU, ROUGE, and METEOR according to MLLM models and prompt engineering techniques on the MathVista dataset (The boldfaced numbers indicate the highest performance for each language model across different PES variants).
| Method | BLEU (Phi / Llama / Pixtral / Qwen) | ROUGE (Phi / Llama / Pixtral / Qwen) | METEOR (Phi / Llama / Pixtral / Qwen) |
B0.66120.61430.1260.68630.14080.0330.00960.06180.09670.04060.01490.0782
I0.09180.68211.25920.08690.22760.22520.06930.00720.12560.12550.08370.0129
I, S0.22261.00111.51750.0380.2462 0.26060.08190.00430.13710.15070.10030.0161
I, T0.52030.93221.10250.07620.07240.20990.0660.0050.05130.12450.07980.015
C0.40190.97741.83821.36370.0190.04780.17250.14460.01490.06050.11010.1201
C, I0.50480.92310.5050.06950.02920.04710.03140.00590.02760.05210.04610.0141
C, S0.4060.95211.62711.11720.01890.04660.14740.1570.01670.06310.0890.1145
C, T0.48251.29640.63331.16740.09330.1090.25450.22320.05590.08810.14080.1526
S1.36170.55111.60250.87050.13130.1350.24150.30120.07390.09050.13160.1753
T1.22321.23661.38550.57310.09910.19540.26110.35120.0540.12610.1350.1905
R(B) 0.03060.7291.37260.07790.22550.13970.06350.00550.12290.0910.09130.0141
R(C) 0.37530.70820.43030.08070.02220.03850.02340.00620.02090.04650.03230.0163
R(S) 0.4461.01490.6920.18410.04470.03790.03740.01060.03870.05820.05260.0264
R(T) 0.15960.82130.08250.38450.05270.07030.00840.01510.03280.06580.0130.0164
Table 13. Performances of S-BERT, MoverScore, and CIDEr according to MLLM models and prompt engineering techniques on the MathVista dataset (The boldfaced numbers indicate the highest performance for each language model across different PES variants).
| Method | S-BERT (Phi / Llama / Pixtral / Qwen) | MoverScore (Phi / Llama / Pixtral / Qwen) | CIDEr (Phi / Llama / Pixtral / Qwen) |
B0.38740.16570.1040.3280.40660.25320.22080.35650.25830.015200.0546
I0.69160.52270.38740.16060.71340.49570.37670.25340.50.32970.05310
I, S0.65620.49910.390.16350.62460.48080.37750.25890.550.1780.06530
I, T0.60690.57190.38650.16450.52910.53190.37980.25410.13890.27830.04260
C0.13180.23420.34570.3620.25830.28720.34960.39590.02210.03250.16370.2179
C, I0.38950.23280.14450.15880.37610.29010.25270.24130.02270.00610.00010
C, S0.17110.19020.33190.35510.28480.25940.34280.41340.02210.02590.08690.2587
C, T0.37830.30040.48750.50790.42430.31170.52360.55090.10470.11040.36880.4693
S0.54980.34050.50520.59930.51910.34920.53880.63110.23490.1210.33490.6106
T0.5180.43160.49970.57510.46520.39440.55340.60720.23490.17650.39690.8036
R(B) 0.66650.33180.33330.18890.65280.34270.3140.26170.50.19670.04550
R(C) 0.30920.17390.16030.20680.34080.25240.24640.25470.02270.02480.00380
R(S) 0.24210.20870.22730.21530.30380.2740.27180.26920.07280.01460.00450.0008
R(T) 0.38140.19720.10160.20230.38550.27190.23170.26080.11890.076400.0182
Table 14. Performances of BLEU, ROUGE, and METEOR according to MLLM models and prompt engineering techniques on the CVBench dataset (The boldfaced numbers indicate the highest performance for each language model across different PES variants).
| Method | BLEU (Phi / Llama / Pixtral / Qwen) | ROUGE (Phi / Llama / Pixtral / Qwen) | METEOR (Phi / Llama / Pixtral / Qwen) |
B1.12470.39360.2451.08940.06250.02360.01520.05710.1250.08720.0320.1231
I0.54951.82682.40480.0430.45290.15110.12440.0030.25980.1820.17010.0108
I, S0.82421.59941.8170.05210.27780.11490.11190.00370.190.16950.1570.0087
I, T0.45771.56562.18950.0670.11620.18190.13140.00410.08640.20840.17950.0108
C0.47061.02140.9270.85770.05540.05390.09810.08970.05670.10860.1130.1279
C, I0.16211.29020.6530.01980.01120.06540.03850.00130.02230.13150.09230.0068
C, S0.74641.37610.91311.09150.05610.0720.09090.07920.05980.14060.10460.131
C, T1.12210.91781.09991.47510.16350.05110.17730.1630.1550.11720.16390.1804
S1.05991.25270.90411.71370.08720.06630.20160.27360.10320.13420.1680.2232
T0.62671.51161.00031.42380.06430.07740.19390.20040.07540.14940.17350.177
R(B) 0.92251.46482.33890.73550.38830.07610.11470.03610.24710.15870.15420.0439
R(C) 0.56121.14860.72450.38490.04390.06020.04110.02270.03960.13370.07330.0284
R(S) 0.72741.18861.21550.28370.06740.06250.06780.01690.06820.12140.11820.0296
R(T) 0.1650.83980.11010.29220.01090.04680.00850.01710.02180.12220.01580.0297
Table 15. Performances of S-BERT, MoverScore, and CIDEr according to MLLM models and prompt engineering techniques on the CVBench dataset (The boldfaced numbers indicate the highest performance for each language model across different PES variants).
| Method | S-BERT (Phi / Llama / Pixtral / Qwen) | MoverScore (Phi / Llama / Pixtral / Qwen) | CIDEr (Phi / Llama / Pixtral / Qwen) |
B0.21420.1170.05310.25890.32490.27680.23620.33780.035800.00020.0425
I0.59810.3750.47090.0790.68970.43060.39460.24440.87070.160.10650
I, S0.33260.3360.45370.06710.46650.41190.39440.23830.20810.12590.06760
I, T0.49350.38690.48140.06150.47820.44310.39660.22980.03280.20250.09860
C0.12250.20490.25640.25260.28650.31860.37380.35070.07660.0260.12210.1213
C, I0.31340.2160.20550.06990.35190.31740.32630.23670.00150.04420.00050
C, S0.14920.22550.24490.23730.29840.32940.3530.34250.0320.05140.10220.0933
C, T0.260.21840.33350.32370.39160.32550.43520.40320.1640.02010.28850.2635
S0.16730.21930.40480.39980.33180.3250.48490.46250.04420.030.34120.4056
T0.14720.2920.43740.34270.31270.35740.48770.42170.01150.0650.32720.3967
R(B) 0.48480.25130.37640.14430.58560.33350.36970.28540.64420.05430.09720.0376
R(C) 0.14220.20680.12630.13020.28630.31190.27850.26860.03870.03720.00820.0061
R(S) 0.11970.20380.22390.08210.29320.31690.34080.25620.05190.04060.01390
R(T) 0.12460.20220.01550.11510.27710.30450.23460.2630.00040.009200.0012
Table 16. Comparison of results across MLLM models and datasets using different metrics (The boldfaced numbers indicate the highest performance for each language model across different PES variants).
| Metric | MSCOCO | Flickr30k | nocaps | ScienceQA | MathVista | CVBench |
| Category | General | General | General | Science | Math | General |
Phi
| BLEU | R(B) | R(B) | I | R(B) | S | B |
| ROUGE | R(B) | I | I | R(B) | I, S | I |
| METEOR | R(B) | I | R(B) | R(B) | I, S | I |
| S-BERT | T | C, I | C | I, S | I | I |
| MoverScore | R(B) | I | I | I, S | I | I |
| CIDEr | B | B | B | R(B) | I, S | I |
| Best PES | R(B) | I | I | R(B) | I, S | I |
Llama
| BLEU | I | I | I | R(B) | C, T | I |
| ROUGE | I | I | I | I | I, S | I, T |
| METEOR | I | C | I | R(B) | I, S | I, T |
| S-BERT | C | C | C | I | I, T | I, T |
| MoverScore | C | C | C | I | I, T | I, T |
| CIDEr | I, S | C | C, S | R(B) | I | I, T |
| Best PES | I | C | I | I | I, T | I, T |
Pixtral
| BLEU | T | T | T | R(B) | C | I |
| ROUGE | T | T | T | S | T | S |
| METEOR | T | T | T | R(B) | C, T | I, T |
| S-BERT | T | T | T | S | S | I, T |
| MoverScore | T | T | T | S | T | T |
| CIDEr | R(B) | C, I | R(B) | S | T | S |
| Best PES | T | T | T | S | T | S |
Qwen
| BLEU | T | T | C | B | C | S |
| ROUGE | T | T | T | S | T | S |
| METEOR | T | T | C, I | S | T | S |
| S-BERT | T | T | C | S | S | S |
| MoverScore | T | T | T | S | S | S |
| CIDEr | I, T | B | R(T) | S | T | S |
| Best PES | T | T | T | S | T | S |
Performance Enhancement Summary
| Best MLLM | Qwen | Qwen | Qwen | Qwen | Phi | Qwen |
| Best MLLM with PES | Phi | Qwen | Qwen | Phi | Qwen | Phi |
| Performance Enhancement | 132.3% | 78.1% | 37.3% | 49.1% | 90.3% | 489.7% |
Table 17. Prompt examples for each PE technique (B, I, C, R(B), S, T, R(C)).
PE | Prompt
B | Describe this image.
I | [Example Image]
    [Example Caption]
    [Input Image]
    Describe this image.
C | Describe this image.
    Generate a caption for this image. However, instead of simply outputting a caption, first analyze the element of the image and think logically before deriving the final caption. Follow these steps:
    - Object Identification: List the key objects in the image (people, objects, animals, etc.).
    - Context Analysis: Interpret the scene based on the relationships, actions, and background.
    - Emotion and Atmosphere Analysis: Describe the emotions or mood conveyed by the image.
    - Final Caption Generation: Based on the analysis above, create a meaningful and insightful caption.
R(B) | [Retrieval Image]
    [Retrieval Caption]
    [Input Image]
    Describe this image.
S | Describe this image.
    Analyze and describe the image step by step, following each question in sequence to build a detailed description:
    • Start by identifying the main subject of the image.
    • Describe what is visible in the background or surrounding area.
    • Identify any key details that stand out, such as specific objects, people, or animals.
    • Describe the dominant colors and any notable lighting effects.
    • Observe any actions or movements occurring in the image.
    • Conclude with an interpretation of the image’s mood or theme.
    Answer each question in order, step by step, to create a clear and comprehensive description of the image.
T | Describe this image.
    Imagine a panel of experts from different fields analyzing this image together to reach the most accurate and insightful interpretation. Follow this structured discussion format:
    • Introduction and Initial Hypothesis
    • Contradictory Evidence and Debate
    • Alternative Hypotheses
    • Final Consensus and Comprehensive Description
    The panel collectively decides on the most well-supported interpretation, synthesizing all perspectives into a detailed and insightful description of the image.
R(C) | [Example Image]
    [Example Caption]
    [Input Image]
    Describe second image in one sentence.
    Generate a caption for this image. However, instead of simply outputting a caption, first analyze elements of the image and think logically before deriving the final caption. Follow these steps:
    - Object Identification: List the key objects in the image (people, objects, animals, etc.).
    - Context Analysis: Interpret the scene based on the relationships, actions, and background.
    - Emotion and Atmosphere Analysis: Describe the emotions or mood conveyed by the image.
    - Final Caption Generation: Based on the analysis above, create a meaningful and insightful caption.
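Combined methods reported in Tables 4–16 (e.g., "C, I" or "I, S") merge the corresponding techniques. As an illustration only, the sketch below composes such combined prompts by concatenating simplified versions of the Table 17 templates; the abbreviated template strings and the concatenation order are assumptions, not the exact prompts issued in the experiments.

```python
# Illustrative sketch: composing a combined prompt such as "C, I" (CoT + ICL) from
# per-technique templates. Template wording and joining order are assumptions.
TEMPLATES = {
    "B": "Describe this image.",
    "I": "[Example Image]\n[Example Caption]\n[Input Image]\nDescribe this image.",
    "C": ("Generate a caption for this image. First analyze the elements of the image and "
          "think logically before deriving the final caption: object identification, context "
          "analysis, emotion and atmosphere analysis, then final caption generation."),
    "S": ("Analyze and describe the image step by step: main subject, background, key details, "
          "colors and lighting, actions, and overall mood."),
    "T": ("Imagine a panel of experts analyzing this image together: initial hypothesis, "
          "contradictory evidence and debate, alternative hypotheses, final consensus description."),
}

def compose_prompt(methods: str) -> str:
    """Join the templates of the listed techniques, e.g. compose_prompt('C, I')."""
    parts = [TEMPLATES[m.strip()] for m in methods.split(",")]
    return "\n\n".join(parts)

if __name__ == "__main__":
    print(compose_prompt("C, I"))
```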
Table 18. Image captioning results for Figure 1 (Pixtral).
PE | Answer
B | Three people stand outdoors in a grassy area, with two men holding hands with a woman in a white, feathered dress.
I | A woman in a white dress stands between two men, one holding a gun and the other holding a feathered cloak.
C | In a serene, rustic setting, a bride in a stunning feathered gown is escorted by two unique figures, creating a moment of dramatic elegance.
R(B) | A bride in a white dress with feathers stands between two men, one holding a gun and the other holding a feathered cloak.
S | The main subject of the image is a woman in a white, feathered dress being escorted by two men.
T | A stylized, themed event featuring a bride flanked by two men in distinctive, culturally diverse attire, set in an outdoor environment.
R(C) | A unique and imaginative wedding scene showcases the couple’s individuality with whimsical attire and accessories.
Table 19. Image captioning results for Figure 1 (Llama 3.2).
PE | Answer
B | The image depicts a woman in a white wedding dress being restrained by two individuals, one of whom is wearing a mask.
I | The image depicts a woman in a white wedding dress holding hands with a man in a gray suit and a man in a white shirt and black pants.
C | The image depicts a woman in a white wedding dress being restrained by two individuals, one of whom is wearing a mask and the other is shirtless.
R(B) | The image depicts a bride and groom standing in a field, with the bride holding a bouquet of flowers and the groom wearing a white suit.
S | The image depicts a woman in a white wedding dress, with two individuals on either side of her, one wearing a tan suit and the other dressed in black pants and a gray jacket. The woman is holding hands with both men, creating a sense of unity and connection between them.
T | The image depicts a woman in a white wedding dress, flanked by two individuals, one of whom is wearing a mask and the other is shirtless, standing in a grassy area with trees and a fence in the background.
R(C) | The image depicts a man and woman standing in a grassy area, with the woman holding a bouquet of flowers. The man is wearing a white shirt and black pants, while the woman is dressed in a white dress. The atmosphere of the image suggests a romantic or celebratory occasion, possibly a wedding or anniversary. The presence of the bouquet and the couple’s attire implies a sense of joy and love.
Table 20. Image captioning results for Figure 1 (Phi-3.5).
PE | Answer
B | A person in a wedding dress is being held by two individuals in suits, one of whom is wearing a mask.
I | A wedding scene with a bride and groom in costumes.
C | A whimsical wedding photo with a twist: the groom and a friend playfully ‘steal’ the bride’s dress.
R(B) | A man in a mask and a woman in a wedding dress are being held by two men in suits.
S | The main subject of the image appears to be a person in a wedding dress.
T | The image depicts a scene where two individuals are holding the hands of a person in a wedding dress, with one person wearing a mask and the other in a shirtless, fur-lined jacket, set against an outdoor backdrop with trees and a fence.
R(C) | A whimsical twist on wedding traditions: the bride and groom in their wedding attire are playfully interacting with a person in a feathered costume, adding a touch of humor to the ceremony.
Table 21. Image captioning results for Figure 1 (Qwen2).
PE | Answer
B | The image depicts a woman in a white dress being held by two men, one wearing a mask and the other with a fur stole, in an outdoor setting with a wooden fence and trees in the background.
I | a woman in a white dress is standing between two men
C | A woman in a white dress is being led by two men, one wearing a mask and the other with a fur stole, in a grassy outdoor setting.
R(B) | A woman in a white dress is standing between two men in a field.
S | In the image, a person in a white feathered dress is being held by two individuals, one wearing a mask and the other with a fur stole, in an outdoor setting with a wooden fence and trees in the background.
T | The image depicts a surreal scene with a person in a white dress holding hands with two individuals dressed in unconventional attire, one wearing a mask and the other draped in a fur-like garment, set against a rustic outdoor backdrop.
R(C) | A woman in a white dress is standing between two men in a field.