
Enhancing User Perception of Reliability in Computer Vision: Uncertainty Visualization for Probability Distributions

School of Mechanical Engineering, Southeast University, Nanjing 211189, China
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(8), 986; https://doi.org/10.3390/sym16080986
Submission received: 14 June 2024 / Revised: 13 July 2024 / Accepted: 24 July 2024 / Published: 3 August 2024

Abstract
Non-expert users often find it challenging to accurately perceive the reliability of computer vision systems. In human–computer decision-making applications, users’ perceptions of system reliability may deviate from its actual probabilistic characteristics. Intuitive visualization of system recognition results within probability distributions can enhance interpretability and support cognitive processes, and different visualization formats may affect users’ reliability perception and cognition. This study first compared the mapping between users’ perceived values of system recognition results and the actual probabilistic characteristics of the distribution when density strips, violin plots, and error bars were used to visualize normal distributions. The findings indicate that with density strips, users’ perceptions align most closely with the probability integrals, exhibiting the shortest response times and the highest cognitive arousal. However, users’ perceptions often exceed the actual probability density, by an average factor of 2.53, regardless of the uncertainty visualization format. In contrast, this perceptual bias did not appear for triangular distributions, and perception remained consistent across symmetric and asymmetric subtypes. These results give interaction designers a better understanding of user reliability perception, helping to improve uncertainty visualization and thereby mitigate perceptual biases and potential trust risks.

1. Introduction

The rapid development of computer vision (CV) technology has fundamentally transformed the collaborative relationship between humans and systems. In specific domains, the visual processing capabilities of CV have surpassed human abilities in multiple dimensions, particularly in hyperspectral classification tasks [1] and visual object tracking [2]. As a result, CV system users typically do not need to directly observe images but rather make decisions and judgments based on the recognition results provided by CV.
However, even the most advanced computer systems, including various CV and image-processing systems, cannot entirely eliminate uncertainty in practical applications [3]. Understanding and managing reliability in various real-world applications is crucial [4]. During the collaboration with the system, the presence of uncertainty makes user decision-making a challenging but common task. Users need to rely on the analytical results provided by the system while also considering the potential uncertainties in these results. Even seemingly minor differences can significantly impact human risk perception and decision-making. Even the most carefully considered decisions may lead to adverse or unexpected outcomes [5].
Although extensive research has precisely modeled the reliability of CV systems, actual end-users, especially non-expert users, often find it difficult to intuitively understand reliability due to a lack of statistical knowledge and understanding of CV system principles. Studies have shown that in risk decisions involving probabilistic or uncertain elements, merely providing users with reported statistical figures or mathematical models is often insufficient. More attention should be given to effectively communicating and presenting probability distributions and related probabilistic characteristics [6]. In the design of interactive interfaces for non-expert users, it is not common to list all data details, especially precise data, such as probability integrals and densities, which are not directly related to recognition results and require a statistical background to understand. A better method of expressing uncertainty information is to display potential probability distributions rather than aggregate statistical data that most ordinary users find difficult to comprehend [7]. When users understand the probability distribution of the data they are using, the likelihood of errors in decision-making and estimation is significantly reduced [8]. Research literature strongly indicates that if probability information is appropriately presented, ordinary people can understand and use it, generally improving decision quality [9].
Therefore, the visualization of probability distributions is crucial in describing and presenting the reliability of CV and the associated uncertainties [10]. Uncertainty visualizers often use probability distribution visualizations to express unknown factors in data, statistical analysis, and predictions [11]. This is partly because the underlying phenomena captured have inherent uncertainties modeled as probability distributions, and partly because operations applied to the data (e.g., aggregation) produce distributions [12]. Additionally, many artificial intelligence models use probability distributions and their characteristics as essential information and a basis for recognition and decision-making [13]. Among the various probability distributions, the normal distribution and the triangular distribution, two typical symmetric distributions, are the most widely used and often serve as basic examples of probability models [14,15]. Besides its symmetric form, the triangular distribution also has two common asymmetric subtypes: positively skewed and negatively skewed. Research into the uncertainty visualization of probability distributions may become a breakthrough in enhancing the perception of system reliability and increasing system transparency [16].
To achieve user-perceivable reliability through uncertainty visualization, the most critical issue lies in selecting appropriate visualization formats that ensure users’ perceived values of system results align with the actual probabilistic characteristics of the distributions. The visualization forms of uncertainty distributions can create perceptual biases, as transforming the true distribution of uncertain data or the computational logic of the system into visual elements might differ from users’ prior knowledge, expectations, or cognitive models [17]. Common visualization formats can significantly alter the way users make decisions about uncertain data, leading to inconsistencies between their decisions and statistical inferences [18]. Poor visualization can create a “knowledge gap” between users and the data [19], resulting in overestimation, underestimation, and misunderstanding of the data [20,21]. Despite the rapid advancements in visualization research, conveying and managing uncertainty remains challenging [22,23]. This difficulty poses a challenge to achieving user-perceivable reliability.
Overestimations, underestimations, and misunderstandings of CV system reliability often cause trust issues in human–computer collaborative decision-making, such as disuse or misuse, as illustrated in Figure 1. When users’ perception of system reliability exceeds the actual probabilistic characteristics, they may overlook the inherent uncertainties, leading to a false sense of security regarding system results [6] and resulting in poor decision-making [11]. Conversely, when perceived reliability is too low, users may disregard the system’s recommendations, thus missing out on the potential advantages of CV technology. These biases in perceiving system reliability gradually erode users’ confidence in the technology or system [24], hindering trust and confidence in CV systems [25,26]. Designers and users still view uncertainty as an obstacle rather than a design opportunity [27]. Providing reliability values in interaction design does not necessarily mean they are clear and understandable [28], as designers often overlook whether these values are clear and useful to humans [29]. In fact, many designers initially do not intend to convey uncertainty [30].
To enable non-expert end-users to intuitively and comprehensively understand the reliability of CV systems and the associated uncertainties, this study formulated the following task scenarios based on practical user-oriented visual system application contexts. In this study, multiple recognition results for a specific object form a probability distribution. When the CV system’s detection value for the object falls within a certain position in this distribution, users are required to perceive the system’s reliability. In this context, the study first reveals and measures the mapping relationship between users’ perceptions of system detection values and the actual characteristics of probability distributions when faced with uncertainty visualization formats such as density strips, violin plots, and error bars. The experimental results indicate that when using density strips, users’ perceptual decisions show the highest consistency with probability integrals, exhibiting the shortest response times and highest cognitive arousal. However, users’ perception of the occurrence probability of system results often exceeds the actual probability density by an average factor of 2.53, unaffected by the form of uncertainty visualization. In contrast, this perceptual bias does not appear with triangular distributions and remains consistent across symmetric and asymmetric triangular distributions.
The remainder of this paper is organized as follows. In Section 2, we discuss the main works related to uncertainty visualization and probability distributions. Section 3 compares and determines the most suitable visualization formats for reliability perception through uncertainty visualization experiments. Section 4 explores user probability perception in triangular distributions. Section 5 discusses the obtained results, and finally, Section 6 concludes the paper.

2. Related Works

2.1. Uncertainty Visualization

Despite the rapid advancements in visualization research, there is still a need for more empirical studies to promote the perception of system reliability and the development of uncertainty visualization [11]. Currently, it is not always clear which designs are optimal or how encoding choices influence risk perception and decision-making [31]. The processes of visual data analysis, analytical reasoning, and the derivation of new knowledge by humans lack sufficient empirical research as a foundation [32]. These empirical studies should be an integral part of the discipline of visualization analysis. Without practical experience or in-depth user studies, effectively implementing and utilizing uncertainty visualization remains an unresolved academic issue [33].
On one hand, existing theoretical models aimed at describing various aspects of visual analysis and knowledge construction processes require user-oriented empirical studies for validation [32]. On the other hand, it remains unclear whether current visualization methods are the most suitable for promoting optimal decision-making [10], necessitating the evaluation of uncertainty visualization methods within specific contexts [34]. For example, human subject experiments are needed to test users’ understanding of different forms of uncertainty representation, with variations including different types of training to explain uncertainty representation [35]. Therefore, there is an urgent need for research focused on uncertainty expression and management to evaluate the efficacy of novel visualization interpretive interfaces across various application scenarios [29].

2.2. Visualization of Probability Distributions

The study of probability distributions is a central task in visual data analysis and an increasingly common method across various fields [12]. In uncertainty analysis, probability density plots and probability integral plots are essential [36]. Probability distributions serve as primary explanatory tools for various natural phenomena [37]. For instance, in ocean visible-spectrum remote sensing, probability densities play a critical role in simulating and elucidating Kelvin wake reflectance, aiding the understanding of the imaging process in visible-spectrum remote sensing [38]. In the context of geographic data fusion and domain adaptation, the fusion of heterogeneous data, estimation of distributions, and computation of optimal coupling all consider probability densities [39]. Probability densities in the form of density maps can also be used to estimate object counts in images. By generating density maps, the probability density of objects in different areas of an image can be represented. Compared to baseline models, these density maps are more accurate, clearer, and show stronger localization capabilities [40]. To obtain comprehensive and precise probability densities, some studies use a large number of unlabeled and labeled images to guide generators in producing the probability density space of synthetic images [41]. Descriptive statistics of spectral index thresholds for salinized wasteland and other land types were analyzed and compared using probability density curves, highlighting the mean and standard deviation values for each land type and month [42]. The probability densities derived from remote sensing analysis emphasize the close relationship between land cover and water use across entire watersheds [43]. During data assimilation in remote sensing, the system states, model parameters, or initial conditions of certain processes are assumed to be random variables and defined by probability density functions [44]. Drift correction in overlapping continuous images is performed using probability density function matching to correct for image inconsistencies [45]. The product of low-dimensional Gaussian mixture models is used to approximate the density, and the estimated density function is then used to calculate the maximum likelihood estimate of the target relative fraction of the observed pixel spectrum, yielding the test statistic for target detection [46]. A generative neural-network-based alternative method for hyperspectral image compression identifies the probability distribution of real data derived from random latent codes [47]. These methods have achieved encouraging results in identifying relevant targets in hyperspectral datasets, even in complex backgrounds and subpixel target detection environments [48].
Information about probability distributions helps users interpret the range of uncertainties [49], which is particularly relevant to rational decision-making. Users often need to fully understand this probabilistic information to make informed decisions [50]. Drawing attention to any given probability range can affect how people assess that probability and make decisions based on the provided information [51]. However, designing effective visual metaphors to convey probability distributions remains challenging [12]. Common probability information visualizations in scientific communication (e.g., variants of confidence intervals) can introduce systematic errors [11]. For example, error bars encoding confidence intervals or standard errors are easily misinterpreted, possibly because such frequency statistics are misunderstood as indicating the probability of the estimate rather than the variability in the process generating that estimate. Similarly, the nuances of statistical abstractions make it difficult for many to interpret probability density plots, such as gradient plots and violin plots. While these variations of probability density visualizations theoretically express the underlying distributions adequately, they largely depend on the observer’s statistical literacy [6].

2.3. Visualization Forms

The choice and design of visualization forms can significantly impact the communication of uncertainty information [5] and play a crucial role in users’ perception and decision-making [52]. Generally, graphical representations help people recognize uncertainty and make estimates, but it is important to select the appropriate type of graph to accurately convey uncertainty [49]. Different visualization forms vary in distinguishability and comprehensibility and influence data processing through data suppression as well as visual and cognitive biases [53]. In the current research community on visualization forms, most studies have adopted a comparative approach to examine various visualization forms within specific tasks. This study summarizes the visualization forms compared in several related research works, as shown in Table 1.
Given the numerous visualization methods, it is essential to filter them based on specific criteria. One key criterion is the scalability of the visualization form, which allows for multi-indicator expansion. Most visualizations perform well in single-indicator scenarios, but their presentation limitations prevent them from being arranged in a multi-indicator format. Although this study utilizes single-indicator visualizations, future research plans to adopt parallel coordinates suitable for large datasets and multiple indicators [7]. Therefore, the chosen visualization method must be capable of vertical expansion to accommodate screen display requirements and users’ visual habits. The suitable uncertainty visualization methods identified are as follows (a minimal rendering sketch of all three formats follows the list):
  • Density strips: A density strip is a shaded band in which the depth of shading at any given point indicates the probability density of a continuous variable or an estimated parameter. The visual characteristic of ambiguity inherent in density strips is considered to convey probability more naturally. First, density strips visually emphasize the continuity of probability density, avoiding the potential for misinterpretation of confidence intervals as binary ‘in or out’ thresholds. By examining gradients and color intensities, density strips aid in understanding the regional imagery of probability density functions [36]. Furthermore, they offer considerable flexibility in use, functioning both as descriptive tools and inferential devices for hypothesis testing or confidence intervals [65]. Several studies have employed density strips or similar fuzzy visualizations for expressing uncertainty probabilities, such as visualizing observational data with point estimates sized by sample, providing uncertainty analogies based on user priors, and predictive posterior visualizations [66]. When presenting estimates of expected mileage with diffused color density strips, drivers reported improved driving experiences and reduced anxiety, maintaining trust in the vehicle [67]. Using density strips to provide the overall distribution of clinical measurements offers a more comprehensive view for descriptive visualization and inferential estimation [65].
  • Violin plots: Analogous to bell curves, violin plots represent distributions by correlating the probability of specific values with their corresponding width. These plots provide an instinctive understanding of predictive variance in aspects such as value, central tendency (such as the mean), and the form of the distribution (such as normal or skewed). Additionally, they enhance an intuitive evaluation of probabilities or unexpected outcomes in a way that is more consistent with traditional statistical interpretations [7]. Research indicates that violin plots or similar representations of probability density allow users to understand underlying distributions more accurately than those showing predictions or confidence intervals [49], and are more suitable for visualizing point and probability estimates [68]. They have been shown to improve user intuition, particularly with regards to misconceptions about probability distributions and relationships [69], resulting in higher accuracy [15] and better-quality decision-making [70].
  • Error bars: Resembling box plots, error bars are summary displays that convey distribution information with minimal graphical representation [4]. Typically, error bars represent a range of standard deviations, standard errors, or confidence intervals [7]. Widely used in scientific publications and other domains when representing uncertainty, error bars may be the most recognized method for visualizing a range of potential statistical values [56]. Studies have found that error bars or box plots are powerful tools for presenting distributions, producing more accurate results than histograms, dot plots, and band plots, and are not adversely affected by increased sample sizes [15]. In probability estimation tasks, error bars have consistently outperformed other plot types that depict underlying distribution shapes and are simpler to understand and use [10].
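To make the comparison concrete, the following minimal sketch renders the three formats described above for a single normal distribution. It is an illustration under assumed parameters (μ = 0.5, σ = 0.1 are hypothetical), not the stimulus-generation code used in the experiments:

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

mu, sigma = 0.5, 0.1                     # hypothetical reflection-value distribution
y = np.linspace(mu - 4 * sigma, mu + 4 * sigma, 400)
d = norm.pdf(y, mu, sigma)

fig, axes = plt.subplots(1, 3, figsize=(7.5, 4), sharey=True)

# Density strip: shading depth at each height encodes the probability density.
axes[0].imshow(d[::-1, None], aspect='auto', cmap='Blues',
               extent=[-1, 1, y.min(), y.max()])
axes[0].set_title('Density strip')

# Violin plot: width at each height encodes the probability density.
w = d / d.max()
axes[1].fill_betweenx(y, -w, w, color='steelblue')
axes[1].set_title('Violin plot')

# Error bar: only a range is shown (here, mean ± 2 standard deviations).
axes[2].errorbar(0, mu, yerr=2 * sigma, fmt='o', color='steelblue', capsize=8)
axes[2].set_xlim(-1, 1)
axes[2].set_title('Error bar')

for ax in axes:
    ax.set_xticks([])
axes[0].set_ylabel('Reflection value')
plt.tight_layout()
plt.show()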

3. Experiment 1: Visualization Form Comparison

The primary objective of this experiment was to investigate how different forms of uncertainty visualization impacted participants’ perception and decision-making when interacting with a system that does not provide entirely reliable results. This helps identify the most perceptually effective visualization format. Based on the analysis in Section 2, most studies compare the effectiveness of various visualizations in different application scenarios. Therefore, this experiment compared three probabilistic visualization formats that are suitable and scalable: the density strip, the violin plot, and the error bar. These forms are also considered common methods for conveying cognitive uncertainty [49]. Because the distributions in this experiment were simplified to normal distributions, the characteristic variations of the density strip and violin plot also followed the normal distribution, whereas the error bar represented only the range of the distribution. The three visualization formats are illustrated in Figure 2.

3.1. Procedure

To ensure that participants without a background in remote sensing can understand the experiment, a simplified and easy-to-understand CV recognition scenario was created. In multiple remote sensing measurements of an object, due to factors such as weather, lighting, occlusion, and equipment, the observed reflection values of the object at a specific wavelength are normally distributed, with a mean value of μ. Now, a not entirely reliable remote sensing system measures the object’s reflection value as v. Participants were required to answer the following three questions:
  • How confident are you that the actual reflection value of the object is higher than the red point v?
  • What is the likelihood that the actual reflection value equals the measured value v?
  • Do you think this system is reliable?
The experimental interface, as shown in Figure 3, displays the visualization graphics and the test point on the left side. For Questions 1 and 2, participants responded using sliders. The current value represented by the slider is displayed on the right side in percentage form, precise to two decimal places. Using point estimates and sliders to specify probability values can effectively reflect participants’ perception of probabilities [66]. For Question 3, participants selected “Reliable” or “Unreliable” by clicking with the mouse. The experiment was implemented on the Unity3D platform, which supports slider and click interactions.
Before the formal start of the experiment, the experimenter provided a detailed description of the experiment, including the scenario setup and the significance of the questions. The experimenter explained that each trial is independent, meaning that each red point represents the result from a different system. After sufficient explanation and communication with the participants, and allowing them to practice, the formal experiment began once the participants indicated they fully understood the task and the experiment.
Considering that the primary research subjects in this study were non-expert users, several difficulties were encountered during the experiment. For the experimenter, the most critical task was to ensure that participants understood that the probability distributions reflected by uncertainty visualization represent historical distributions formed from multiple measurements, rather than a probability distribution obtained from a single detection.
For the participants, the main difficulty in understanding centered on responding to Question 2. Mathematically, in a continuous probability distribution, the probability of any specific event occurring is zero. However, once the experimenter explained that the question primarily examines the perception and estimation of the system’s reliability, the participants expressed that they could understand the practical significance of the question. Additionally, it was necessary for participants to clearly understand that the v value corresponding to the red point in each trial is the detection value from systems with different reliabilities, and that the reliability of the system remains independent in each trial.

3.2. Measures and Hypotheses

The evaluation of the perceptual utility of visualization formats includes three aspects: the degree of match with probabilistic characteristics, response time, and pupil diameter. The degree of match with probabilistic characteristics refers to the alignment between the probabilistic features conveyed by the visualization and those perceived by the participants.
For Question 1, participants were asked about their confidence that the actual value exceeds the red point v. This confidence is closely related to the probability that the actual value exceeds the red point v. Here, the red point v serves as a threshold or critical point. Therefore, the most relevant probabilistic characteristic for Question 1 is the true value of the probability integral (TVPI) above point v [18]. Setting participants’ responses to Question 1 as P1, comparing P1 with TVPI can illustrate the discrepancy between the confidence generated through perception and the probability expressed by the visualization. In other words, analyzing Question 1 directly reflects participants’ perception of the probability distribution shape, thereby effectively comparing the utility of the visualization. Based on these analyses, the first hypothesis can be proposed:
H1: 
The visualization format significantly affects the match between P1 and TVPI.
For Question 2, participants were asked about the likelihood that the actual reflection value equals the system-measured value v. Here, the value v represents a measurement from a not entirely reliable system. Setting participants’ responses to Question 2 as P2, P2 directly reflects participants’ perception of the reliability of the system-measured value. The position of v in the probability density distribution illustrates the relative standing of this measurement within historical measurement statistics. Therefore, the most relevant probabilistic characteristic for P2 is the true value of the probability density (TVPD) at point v. Comparing P2 with TVPD further reveals the discrepancy between participants’ reliability perception and the probability expressed by the visualization. Based on these analyses, the second hypothesis can be proposed:
H2: 
The visualization format significantly affects the match between P2 and TVPD.
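To make the two reference characteristics concrete, the sketch below computes the TVPI and TVPD for a hypothetical normal distribution; the parameter values are illustrative assumptions, not taken from the study materials:

from scipy.stats import norm

mu, sigma = 0.5, 0.1   # hypothetical historical distribution of reflection values
v = 0.58               # hypothetical system-measured value (the red point)

# TVPI: true value of the probability integral above point v,
# i.e., P(actual value > v), the reference for Question 1 (P1).
tvpi = norm.sf(v, mu, sigma)           # survival function = 1 - CDF

# TVPD: true value of the probability density at point v,
# the reference for Question 2 (P2).
tvpd = norm.pdf(v, mu, sigma)

print(f"TVPI above v: {tvpi:.3f}")     # ≈ 0.212
print(f"TVPD at v:    {tvpd:.3f}")     # ≈ 2.897; a density is not a probability
                                       # and can exceed 1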
Similar to Question 2, Question 3 measures users’ perception of reliability. However, Question 3 aims to identify a threshold rather than an accurate reliability score. When the value v falls into different intervals of the probability distribution, participants may choose “Reliable” or “Unreliable”, thereby delineating their perceived reliability intervals for the system.
In addition to the measures of matching with probabilistic characteristics, response time and pupil diameter can be directly collected. Eye movement data were obtained using a Tobii eye tracker, and data analysis was performed using Statistical Product and Service Solutions (SPSS) software. Based on the response time and pupil diameter, the following hypotheses can be proposed:
H3: 
The visualization format significantly affects perceptual decision response time.
H4: 
The visualization format significantly affects pupil diameter.

3.3. Stimuli and Trial Generation

Fourteen test points were selected within the distribution range. To avoid heuristic judgments by participants, i.e., spacing probability estimates evenly across the distribution range, this experiment selected test points based on the TVPI. In the vertical normal distribution, the value of the integral above each candidate point was calculated and served as the basis for point selection. Points with true probability integral values of 0, 0.01, 0.05, 0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85, 0.95, 0.99, and 1 were chosen, and the corresponding TVPDs at these points were obtained. The name of each point and the reflection value it represents are shown in Figure 4.
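The point-selection procedure amounts to inverting the cumulative distribution. A minimal sketch, assuming a hypothetical μ and σ and clipping the endpoint targets 0 and 1 to the edge of a ±4σ display range (the paper does not state how these endpoints were placed):

import numpy as np
from scipy.stats import norm

mu, sigma = 0.5, 0.1
tvpi_targets = [0, 0.01, 0.05, 0.15, 0.25, 0.35, 0.45,
                0.55, 0.65, 0.75, 0.85, 0.95, 0.99, 1]

points = []
for p in tvpi_targets:
    # isf inverts the survival function: the point with P(actual > point) = p.
    # p = 0 and p = 1 would map to infinity for a true normal, so they are
    # clipped to the displayed ±4σ range (an assumption of this sketch).
    p_clipped = min(max(p, norm.sf(4)), norm.sf(-4))
    points.append(norm.isf(p_clipped, mu, sigma))

tvpds = norm.pdf(points, mu, sigma)    # corresponding TVPD at each point
for p, v, d in zip(tvpi_targets, points, tvpds):
    print(f"TVPI = {p:<5} point = {v:.3f} TVPD = {d:.3f}")

The full trial set then crosses these 14 points with the 3 visualization formats and shuffles the result, giving the 42 randomized trials described in Section 3.4.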

3.4. Participants

Participants were students majoring in design at Southeast University, with a total of 21 participants (11 males and 10 females) aged between 21 and 29. Each participant received a 30 RMB reward upon completing the experiment. None of the participants had a background in remote sensing, and their major does not include statistics courses. Due to the uniform course curriculum, all students had a similar level of familiarity with probability concepts. This consistency allowed for the maximal control of confounding variables in the experiment. All participants provided informed consent regarding the experiment’s purpose and data collection process.
The experiment followed a within-subjects design, with the independent variables being the 3 visualization encoding methods and the 14 test points corresponding to each visualization method, resulting in 3 × 14 = 42 trials. All trials were randomly presented to each participant, and each trial required the participant to answer 3 questions, totaling 126 questions. Participants took a break after every 14 trials to avoid fatigue, resting twice in total. The entire experiment lasted approximately 20 min, and eye-tracking data were recorded throughout the process.

3.5. Experimental Results

3.5.1. Analysis of Question 1

Participants’ responses to Question 1 were denoted as P1. The P1 values obtained under each visualization method were fitted to the TVPIs. The slope, k, and goodness of fit, R², were used as evaluation criteria: a k value closer to 1 and a higher R² indicate that the visualization method better aligns participants’ perceptions with the probabilistic characteristics. The fitting results are shown in Table 2. As seen in Table 2, the density strip visualization method best matched the true values of the probability integrals.
To further demonstrate the significant effect of visualization forms, a correlation analysis was conducted. First, the differences between P1s and the TVPIs under each visualization method were calculated. The root mean square error, δ, between the test values and the true values is expressed as follows:
$$\delta = \sqrt{\frac{1}{14}\sum_{i=1}^{14}\left(P1_i - \mathrm{TVPI}_i\right)^2}$$
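A sketch of the two quantitative measures used in this subsection, the linear fit (slope k, goodness of fit R²) and δ, applied to hypothetical response data:

import numpy as np
from scipy.stats import linregress

tvpi = np.array([0, 0.01, 0.05, 0.15, 0.25, 0.35, 0.45,
                 0.55, 0.65, 0.75, 0.85, 0.95, 0.99, 1])
# Hypothetical P1 responses for one participant across the 14 test points:
rng = np.random.default_rng(1)
p1 = np.clip(1.1 * tvpi - 0.02 + rng.normal(0, 0.03, 14), 0, 1)

fit = linregress(tvpi, p1)
k, r2 = fit.slope, fit.rvalue ** 2          # k near 1 and high R² indicate a good match
delta = np.sqrt(np.mean((p1 - tvpi) ** 2))  # root mean square error, as defined above

print(f"k = {k:.2f}, R² = {r2:.3f}, δ = {delta:.3f}")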
The distribution of δ values under different visualization methods is shown in Figure 5. A subsequent analysis of variance indicated a significant effect of the visualization method on δ (F = 5.708, p = 0.001). Combining the results from the figure and this analysis, the previous conclusion was further validated: the density strip was the optimal visualization method for matching, followed by the violin plot and the error bar.

3.5.2. Analysis of Question 2

Participants’ responses to Question 2 were denoted as P2. The P2 values were fitted to the TVPDs. Since the TVPD is completely symmetrical in distribution, the values appeared in pairs from smallest to largest and were fitted using a linear function. The fitting results are shown in Table 3.
Regression analysis indicated a strong relationship between the P2s and the TVPDs (F = 2409.109, p < 0.001). However, the visualization method did not have a significant effect on probability judgment (F = 0.895, p = 0.443).

3.5.3. Analysis of Question 3

The threshold for participants’ judgment of system reliability was analyzed. For the distribution range constituted by the 14 test points on the vertical axis, each participant’s choices sequentially formed 3 intervals: “unreliable”, “reliable”, and “unreliable”. The test points dividing these three intervals were designated as the upper threshold point and the lower threshold point. Subsequent analysis indicated that the position of the threshold points was independent of the visualization form (lower threshold point position: F = 1.315, p = 0.275; upper threshold point position: F = 0.741, p = 0.531).

3.5.4. Response Time

The average time taken by participants to complete the test tasks (14 trials) under the three visualization forms is shown in Table 4. The table reveals that the density strip visualization resulted in the shortest average response time, and the visualization form had a significant effect on response time (F = 3.564, p = 0.047). Furthermore, to investigate the impact of test point positions on response times, Figure 6 was generated. In Figure 6, with test point positions on the horizontal axis and response times on the vertical axis, green dots represent the average response times of participants across the three visualizations for a single trial. Further analysis indicated that the position of the test points significantly affected the response time (F = 4.599, p < 0.001). According to Figure 6, the response time was longest when the test points were at ±4.

3.5.5. Pupil Diameter

Pupil diameter, a measure of cognitive arousal during task completion, was the primary focus of the eye-tracking data analysis. The average pupil diameter under different visualization forms is reported in Table 4. The visualization form had a significant effect on pupil diameter (F = 6.001, p = 0.001). The findings demonstrate that participants exhibited the largest pupil diameter and, by implication, the highest level of cognitive arousal while engaging with the density strip visualization of uncertainty.

4. Experiment 2: Effects of Distribution Types

4.1. Purpose of the Experiment

In the comparative visualization experiment, density strips were found to have better visualization efficacy when the probability distribution was normal. However, the perception of P2 was significantly higher than the actual probability density values, and this bias was not influenced by the form of visualization. Although, mathematically, the probability of a point occurring is not equal to its probability density, in practical applications, this bias may lead non-expert users to overestimate the reliability of the system. Therefore, this experiment aimed to investigate whether this overestimation persisted in triangular distributions. The triangular distribution, another widely used distribution [69], includes symmetric, positively skewed, and negatively skewed subtypes. Thus, the purpose of this experiment was to explore whether different distribution subtypes affected participants’ probability perception.

4.2. Stimuli and Trial Generation

First, distributions and test points were constructed based on the symmetrical distribution. The mean of the symmetrical distribution is denoted as μ. Two distributions with peaks at μ + 0.4 and μ − 0.4 relative to the symmetrical distribution’s coordinate system were named the positively skewed and negatively skewed distributions, respectively. Each distribution had 13 test points selected based on the true probability density values, specifically at 0.1, 0.2, 0.3, 0.5, 0.7, 0.9, 1, 0.9, 0.7, 0.5, 0.3, 0.2, and 0.1. The final visualization forms and test points are shown in Figure 7.
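Because the triangular pdf is piecewise linear, such test points can be found in closed form. A minimal sketch, assuming a hypothetical support [a, b] and reading the listed density values as peak-normalized (the paper specifies only the ±0.4 peak offsets):

import numpy as np

mu, half_width = 0.5, 1.0              # hypothetical mean and half support width
a, b = mu - half_width, mu + half_width

subtypes = {
    'symmetric': mu,                   # peak at the mean
    'positively skewed': mu + 0.4,     # peak shifted by +0.4
    'negatively skewed': mu - 0.4,     # peak shifted by -0.4
}

targets = [0.1, 0.2, 0.3, 0.5, 0.7, 0.9]   # paired densities on either side of the peak

for name, c in subtypes.items():
    # The pdf rises linearly from a to the peak c and falls linearly to b, so a
    # peak-normalized density t is reached at a + t*(c - a) and at b - t*(b - c).
    rising = [a + t * (c - a) for t in targets]
    falling = [b - t * (b - c) for t in reversed(targets)]
    points = rising + [c] + falling        # 6 + 1 + 6 = 13 test points
    print(name, np.round(points, 3))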

4.3. Procedure

The experimental setup was basically the same as in Section 3, but participants only needed to answer one question: “How likely do you think it is that the actual reflectance value is the system-measured value v?”
The experiment followed a within-subjects design, with the independent variable being the 3 subtypes of the triangular distribution, each corresponding to 13 test points, totaling 39 trials. All trials were randomly presented to each participant. The entire experiment lasted approximately 10 min. The experimental interface is depicted in Figure 8.
Participants were students majoring in design at Southeast University, with a total of 40 participants (26 males and 14 females) aged between 22 and 29. Each participant received a 30 RMB reward upon completing the experiment. None of the participants had a background in remote sensing, and their major does not include statistics courses. All participants provided informed consent regarding the experiment’s purpose and data collection process. The experiment was implemented using the Unity3D platform for slider interaction.

4.4. Experimental Results

Participants’ responses to the question were denoted as P3. The P3 values were fitted to the TVPDs. Although the distribution shapes were not all symmetrical, the TVPD values were symmetrical. Therefore, the TVPDs were paired from smallest to largest and fitted using linear regression. The fitting results for each distribution shape are shown in Table 5, and the visualization of the linear fitting results is shown in Figure 9.
The ANOVA was conducted to examine the influence of the variable Subtype on the linear relationship between TVPDs and P3s. The ANOVA results indicated no significant interaction effect between TVPDs and Subtype on P3s (F(2,1554) = 0.422, p = 0.656), suggesting that the slope of the relationship between TVPDs and P3s did not significantly differ across subtypes. Furthermore, the main effect of Subtype on P3s was also not significant (F(2,1554) = 0.293, p = 0.746), indicating that the intercepts of the regression lines were not significantly different among subtypes.
In terms of the primary predictor, TVPDs displayed a significant main effect (F(1,1554) = 14,454, p < 0.001), confirming that TVPD was a robust predictor of P3s across all subtypes. The large F statistic for TVPDs suggested a strong linear relationship with P3s.
In summary, these results revealed a high degree of explanatory power for the linear models, suggesting that the TVPDs were a strong predictor of the P3s across the different subtypes examined. The consistency of the high R² values across subtypes reinforces the ANOVA findings, which indicated that subtype did not significantly influence the effect of the TVPDs on the P3s. The homogeneity in the predictive strength of the TVPDs across subtypes supports the conclusion that the subtype classification did not significantly modulate the linear relationship between the TVPDs and the P3s.
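For reference, the reported slope and intercept comparison corresponds to an ordinary least squares model with a TVPD × Subtype interaction. The sketch below reproduces the analysis structure on simulated data (the column names and data frame are assumptions, not the authors’ analysis script):

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1560  # 40 participants × 39 trials
df = pd.DataFrame({
    'tvpd': rng.uniform(0.1, 1.0, n),
    'subtype': rng.choice(['symmetric', 'positive_skew', 'negative_skew'], n),
})
# Simulated responses with a common slope across subtypes, mirroring the finding:
df['p3'] = 1.08 * df['tvpd'] + rng.normal(0, 0.1, n)

model = smf.ols('p3 ~ tvpd * C(subtype)', data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
# Non-significant C(subtype) and tvpd:C(subtype) rows correspond to equal
# intercepts and equal slopes across subtypes; the significant tvpd row
# corresponds to the strong main effect of TVPD reported above.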

5. Discussion

This study primarily compared the consistency between users’ perceived reliability of CV systems, represented by remote sensing recognition, and the actual values of the probability distributions under different visualization formats. The experiment detailed in Section 3 investigated the impact of density strip, violin plot, and error bar on users’ perceived reliability within a normal distribution, ultimately determining that the density strip provided the highest perceptual utility. Building on this, Section 4 further explored the influence of probability distribution shapes on the perceptual utility of the density strip, concluding that participants’ perceptual results remained consistent across symmetric and asymmetric sub-shapes of the triangular distribution.
In the context of the development of CV technology and new trends in human–computer collaboration, the findings of this study can enhance interaction designers’ understanding of how end-users, particularly non-expert users, estimate reliability. This understanding aids in forming users’ intuitive perceptions of system reliability. By researching the visualization of uncertainty in probability distributions, user misunderstandings of actual probability distributions can be reduced, and the inherent uncertainty of system recognition results can be better understood. This helps users comprehend probabilities and perceive risks [71], thereby enabling more effective utilization of system recognition results.

5.1. Visualization Methods

The experiment in Section 3 demonstrated that the visualization format significantly impacted the P1–TVPI match, response time, and pupil diameter, thereby supporting hypotheses H1, H3, and H4. The P1–TVPI match reflects the alignment between the confidence generated by the participants through visualization and the true probability integral value intended to be conveyed by the visualization. Therefore, the P1–TVPI match directly reflects participants’ perception of the probability distribution shape, effectively comparing the utility of different visualizations. The comparison revealed that the density strip exhibited the highest match, followed by the violin plot and the error bar. Additionally, the density strip induced the shortest response time and the largest pupil diameter, indicating the highest perceptual decision efficiency and cognitive arousal when participants engaged with this visualization format.
Unexpectedly, the visualization format did not significantly affect the P2–TVPD match, so H2 was not supported. The P2–TVPD match reflects the degree of alignment between the “participants’ perceived reliability of the system-measured value” and the “actual probability density conveyed by the visualization”. The lack of a significant effect of the visualization form on the P2–TVPD match might be because Question 2 is simpler than Question 1: estimating the reliability of the system-measured value is easier than estimating the likelihood that the actual value exceeds the measured value, thereby reducing the impact of the visualization form.

5.1.1. Density Strips

When using density strips, participants exhibited optimal performance in terms of P1 matching accuracy, response time, and cognitive arousal. This aligns with [49], which suggested that density strips are the most accurate representation of the underlying probability distribution around point estimates. This could be due to the variation in shading intensity, which makes it easy for participants to map and associate it with changes in probability gradients.
However, high arousal also implies fatigue. Since this study emphasized the need for users to focus on uncertainty to avoid misinterpretations of probability information leading to erroneous decisions, pupil diameter was used as a positive evaluation metric. Yet, pupil diameter is also commonly used to assess cognitive load, whereby high cognitive load indicates consumption of cognitive resources and potential fatigue. Addressing this contradiction, we posit that in the current trend of human–computer collaboration, most image processing and recognition tasks are performed by CV systems. Users need to engage with probability information primarily when uncertainty is high or there is significant inconsistency between user perception and decision support systems. In such scenarios, it is crucial to direct users’ attention and cognitive resources toward understanding uncertainty, allowing them to make cognitive adjustments based on their experience and domain knowledge. Hence, density strips can be particularly effective in these contexts.

5.1.2. Violin Plots

Violin plots were found to be slightly inferior to density strips in terms of P1, response time, and arousal levels. Violin plots have shown commendable efficacy in numerous studies and designs. For instance, the authors of [69] advocated for violin plots as a means to improve user understanding of probability distribution shapes and confidence intervals. Furthermore, the authors of [68] employed violin plots to provide a visual reference for point estimates to participants. Nevertheless, the performance of violin plots was not as pronounced in this experiment. This may be attributed to the violin plots’ representation of probability trends through curves, which are not conducive to an intuitive understanding of area and thereby impact the calculation of integral values. As noted in [36], users find it challenging to intuitively explore probability distribution plots, establishing a mapping between curves and areas, and obtaining the area of a given region without supplementary information.

5.1.3. Error Bars

Error bars are considered one of the most accurate and commonly used powerful tools in data analysis and distribution presentation, as stated in [15,56], and consistently outperform other visualization methods in probability estimation tasks across various studies, as reviewed in [10], where the participants subjectively found them easier to interpret and use. However, error bars still exhibit certain limitations in specific tasks. Their visual discontinuity can impede users’ perception of underlying probability distributions, potentially leading to misinterpretations. Error bars are often not perceived as continuous distributions but rather as discrete ranges, as suggested in [7]. Additionally, due to their simplistic form, error bars struggle to engage users with probabilistic information and uncertainty. This may cause users to rely on visual reasoning strategies, such as direct approximation based on visual distance, rather than a more careful perception and consideration of probabilities and uncertainties, as discussed in [58]. Thus, error bars may not be ideally suited for conveying information for probability estimation and could foster misunderstandings of the potential probability distribution within the uncertainty range, according to the authors of [10].

5.2. Reliability Perception and Risk Management

According to the experiments in Section 3, there was a significant deviation between participants’ perceived P2s and the TVPDs. Consistent with other studies that report a severe overestimation of probabilities [72], the mean value of k obtained in this study was 2.53. However, in the experiments in Section 4, participants’ perceived P3 for the same question showed a good fit to the TVPDs, with a mean k value of 1.08. Additionally, the three subtypes of the triangular distribution did not significantly impact the fitting effect. This suggests that within the same category of symmetrical distributions (e.g., different shapes of triangular distributions, such as symmetric, positively skewed, and negatively skewed), the mapping to cognitive space had similar effects. However, different categories of distributions (e.g., normal distribution vs. triangular distribution) showed varying effects on probability perception mapping.
This discrepancy and misunderstanding should be taken seriously. Misunderstanding probability information may nullify the benefits of widely recommended shifts to estimation using effect sizes and confidence intervals [69]. Overestimating the reliability of a system could lead users to overlook potential uncertainties, giving them a false sense of security regarding system results [6], which could result in poor decisions and increased risk [11]. Therefore, interaction designers need to implement subjective probability corrections in uncertainty representation [73] to manage user-centric risk. For example, when visualizing the uncertainty of a normal distribution, if the reliability estimation of system results by users is a key indicator, the gradient of changes in density strips should be reduced by a factor of 2.53 to achieve more accurate cognitive mapping.
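As a sketch of what such a correction could look like for a density strip, under the reading that “reduced by a factor of 2.53” means linearly compressing the shading ramp (one possible implementation, not prescribed by the data):

import numpy as np
from scipy.stats import norm

K = 2.53                             # mean overestimation factor from Experiment 1
y = np.linspace(-4, 4, 400)
density = norm.pdf(y)

alpha = density / density.max()      # conventional shading ramp, peak = 1
alpha_corrected = alpha / K          # compressed ramp as a perceptual correction

# A viewer whose perception scales the encoded gradient by roughly K would then
# read off approximately the true normalized density:
print(np.allclose(K * alpha_corrected, alpha))   # True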

5.3. Interval Division

The most important aspect of dividing the intervals of “reliable”, “to be examined”, and “unreliable” is the threshold values that define these intervals. First, consider Question 3 from the experiment in Section 3, which asked participants whether they considered the CV system reliable. By converting the participants’ choices into threshold points marked on the probability distribution, clear segment intervals emerged. Within the range of the −2 and +2 points, where the probability density was greater than 0.37, participants consistently considered the system reliable. Outside the range of the −5 and +5 points, where the probability density was less than 0.12, participants consistently considered the system unreliable. However, between the −3 and −4 points and the +3 and +4 points, where the probability density ranged from 0.32 to 0.24, participants’ choices diverged. Additionally, an examination of response times revealed that participants took significantly longer to respond when test points were located between the −5 and −3 points and the +3 and +5 points.
Based on these results, we can further divide the intervals for human–computer collaboration in recognition decision-making, adopting different interaction collaboration methods for different intervals. In the “reliable” interval, end-users are likely to directly adopt the system’s recommendations, and there is no need to waste cognitive resources on evaluating the system’s reliability, or the explanation of uncertainty can be provided as an optional feature. In the “unreliable” interval, the reliability of users’ knowledge and experience should be considered, as well as adjusting the system’s reliability. Between the “reliable” and “unreliable” intervals, a “to be examined” interval should be established. In this interval, blindly adopting the system’s recommendations or ignoring uncertainties should be avoided. A better strategy is to guide users to pay attention to uncertainties. By adopting this recognition decision-making method based on matching intervals, the advantages of both humans and computers can be balanced, avoiding the abandonment of technology, and making users more vigilant, thus improving overall decision quality. Therefore, the “to be examined” interval is an important scenario for utilizing visualization for reliability estimation and uncertainty management.

5.4. Innovative Discussion

Currently, there exists extensive research on uncertainty visualization and CV systems. In comparison to these findings, this study offers the following significance and innovations.

5.4.1. CV Systems and Perceived Reliability

In contrast to existing research on users’ perceived reliability of CV systems, the primary innovation of this study lies in extending the application trends of CV systems and utilizing uncertainty visualization to enhance users’ perceived reliability.
Firstly, regarding the application trends of CV and user collaboration methods, this study posits that the development of CV technology will not be confined to image processing. The broader concept encompasses all tasks that involve acquiring, processing, analyzing, and understanding visual data through computer algorithms and techniques. Therefore, unlike existing studies that directly present recognition images, this study suggested that the traditional method of human visual recognition will be phased out. The new trend in collaboration with CV should involve reliability perception based on probability distributions provided by CV. These probability distributions can represent various uncertainties in CV results, such as confidence scores, intersection-over-union ratios, or simple measurements of a specific metric. Although the background of remote sensing recognition was used, it served merely to provide participants with a concrete scenario for better understanding the experiment, without being restricted to the field of remote sensing recognition. This approach is similar to the exploratory thinking found in studies such as [74], which attempt to address uncertainty issues through case studies, offering a comparative perspective for more extensive analytical research.
Secondly, concerning reliability perception, current human–computer collaboration still views uncertainty as an obstacle, rather than a design opportunity [27]. Many designers initially do not intend to convey uncertainty [30]. In contrast, we consider ambiguity and uncertainty as materials and opportunities for design, rather than noise interference [74], echoing the call to “actively use uncertainty rather than merely cope with it” [75]. This study argues that uncertainty visualization is a crucial step in achieving perceived reliability. While visualization alone cannot entirely eliminate uncertainty, visualizing potential probability distribution shapes can promote more accurate probability estimation and appropriate interpretation of potential probability distributions. Without such visualization to convey uncertainty, users cannot calibrate their internal sense of reliability for patterns or claims in cognitive space. Therefore, uncertainty visualization, as an important complementary perspective, should be integrated into the entire process of communicating and handling reliability [76]. This integration can attract users’ attention and reflection, enabling them to make decisions based on domain knowledge and experience [77], and ultimately provide optimal human–computer collaboration performance.

5.4.2. Uncertainty Visualization Research

Compared to existing research on uncertainty visualization, this study’s innovation lies in targeting non-expert users, identifying the visualization method with the optimal reliability perception utility, and employing different utility measurement methods.
Firstly, this study focused on non-expert users. Existing research predominantly targets expert users, with all details of the probability distribution annotated on the visualizations [15]. However, with the evolving trends in CV systems, an increasing number of end-users will be non-experts. For non-expert users, visualizations with extensive details often result in overplotting, which can impair their data perception, leading to a “cannot see the forest for the trees” scenario. Under time pressure, non-expert users are often overwhelmed by data details, hindering their ability to make beneficial decisions based on the overall data context. Simplified uncertainty visualizations, which omit excessive annotations, can better represent the overall data distribution for non-expert users, creating semantic matches and resonances that naturally interpret uncertainty. This holistic perception is crucial for establishing system reliability and trust, which is a key factor in applying CV technology in high-risk decision-making. In some situations, enabling users to perceive and understand the data is far more important than having high-performance computing systems [78].
Secondly, in existing uncertainty visualization research, violin plots are widely used for representing estimates [68], confidence intervals [69], and comparing model uncertainties [79,80]. Similarly, error bars are extensively employed to summarize [81] and describe uncertainties [56]. However, providing system reliability values in interaction design does not necessarily imply that they are clear and comprehensible [28], as designers often overlook whether these values are clear and useful to humans [29]. Distinguishing from these studies, our research proposed the density strip as an uncertainty visualization method with the optimal reliability perception utility.
Thirdly, existing methods for evaluating uncertainty visualizations typically rely on subjective indicators, such as Likert scales or subjective perception evaluations [62,82], or by assessing user interaction logs, cognitive walkthroughs, or internal expert reviews [83]. In contrast, this study utilized the match between perceived and actual values, along with response time and eye-tracking metrics, to comprehensively evaluate the utility of the visualizations.

5.5. Limitations

The primary limitation of this study is the small number of participants. Although the sample size met the requirements calculated using the G*Power software, more data could yield more accurate statistical results. Secondly, the conclusion that density strips have good visualization effects was primarily based on the “to be examined” interval in practical applications, where users should focus on uncertainty and, therefore, exhibit higher arousal. However, the issue of fatigue was not considered. If users are required to maintain high arousal over long periods, their performance may decline.
Another limitation is that all participants were recruited from the field of design studies. The rationale for choosing design students was twofold. Firstly, it allowed maximal control of confounding variables: none of the participants had taken probability and statistics courses, so they were as unfamiliar with probability concepts as typical non-expert users. Secondly, design students have taken courses in user research and are therefore well-versed in the needs and contexts of non-expert end-users. This understanding facilitated communication during the experiment, as participants could readily grasp its objectives and significance. However, it also limited the diversity of the sample. To address this limitation, we plan follow-up studies involving non-student participants and professionals from various fields; by comparing results across domains, we aim to derive more generalizable conclusions.

6. Conclusions

This study explored the difficulties non-expert users face in estimating the reliability of CV systems, particularly how users' perceptions of system reliability may deviate from the probabilistic characteristics of the relevant distributions in practical human–computer decision-making. The study first compared the effectiveness of density strips, violin plots, and error bars for displaying normal distributions. The results showed that with density strips, users' perceptions of system results most closely matched the probability integrals, with the shortest response times and the highest cognitive arousal. However, users' perceptions of the occurrence probability of system results were generally higher than the actual probability density, by an average coefficient of 2.53, and this phenomenon was unaffected by the form of uncertainty visualization. In contrast, this perceptual bias did not occur with triangular distributions, a result that was consistent across symmetric and asymmetric distributions. These findings provide empirical evidence and theoretical support for the user-oriented application of CV systems under new trends. Understanding users' reliability estimation mechanisms helps interaction designers improve uncertainty visualization designs, mitigating perceptual biases and potential trust risks.
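For readers who wish to reproduce the distinction underlying this bias, the following sketch (illustrative values only, not the study's stimuli) contrasts the probability density at a test point with the probability integral over a small interval around it for a normal distribution.

from scipy import stats

# Hypothetical distribution, test point, and interval half-width.
mu, sigma, v = 0.0, 1.0, 0.5
half_width = 0.1

density = stats.norm.pdf(v, mu, sigma)                    # height of the curve at v
integral = (stats.norm.cdf(v + half_width, mu, sigma)
            - stats.norm.cdf(v - half_width, mu, sigma))  # probability mass near v

print(f"density at v:       {density:.4f}")   # ~0.3521
print(f"integral around v:  {integral:.4f}")  # ~0.0703

# If users' perceived probabilities run about 2.53 times the density regardless
# of chart type, communicating interval probabilities (integrals) rather than
# raw densities may better match their intuitions.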

Author Contributions

Conceptualization, X.W.; methodology, X.W. and R.H.; software, X.W. and R.H.; validation, X.W.; formal analysis, X.W. and R.H.; investigation, X.W.; resources, X.W. and R.H.; data curation, R.H.; writing—original draft preparation, X.W.; writing—review and editing, X.W. and C.X.; visualization, X.W. and R.H.; supervision, C.X.; project administration, C.X.; funding acquisition, C.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 72271053 and 71871056.

Data Availability Statement

Data are contained within the article.

Acknowledgments

We would like to thank the anonymous reviewers of this paper for their constructive suggestions and comments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Shao, Y.; Sang, N.; Gao, C.; Ma, L. Spatial and Class Structure Regularized Sparse Representation Graph for Semi-Supervised Hyperspectral Image Classification. Pattern Recognit. 2018, 81, 81–94. [Google Scholar] [CrossRef]
  2. Pi, Z.; Shao, Y.; Gao, C.; Sang, N. Instance-Based Feature Pyramid for Visual Object Tracking. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 3774–3787. [Google Scholar] [CrossRef]
  3. Jiang, J.; Karran, A.J.; Coursaris, C.K.; Léger, P.-M.; Beringer, J. A Situation Awareness Perspective on Human-AI Interaction: Tensions and Opportunities. Int. J. Hum.–Comput. Interact. 2023, 39, 1789–1806. [Google Scholar] [CrossRef]
  4. Kusumastuti, S.A.; Pollard, K.A.; Oiknine, A.H.; Dalangin, B.; Raber, T.R.; Files, B.T. Practice Improves Performance of a 2D Uncertainty Integration Task Within and Across Visualizations. IEEE Trans. Vis. Comput. Graph. 2023, 29, 3949–3960. [Google Scholar] [CrossRef] [PubMed]
  5. Matzen, L.E.; Howell, B.C.; Trumbo, M.C.S.; Divis, K.M. Numerical and Visual Representations of Uncertainty Lead to Different Patterns of Decision Making. IEEE Comput. Graph. Appl. 2023, 43, 72–82. [Google Scholar] [CrossRef] [PubMed]
  6. Kale, A.; Nguyen, F.; Kay, M.; Hullman, J. Hypothetical Outcome Plots Help Untrained Observers Judge Trends in Ambiguous Data. IEEE Trans. Vis. Comput. Graph. 2019, 25, 892–902. [Google Scholar] [CrossRef] [PubMed]
  7. Franconeri, S.L.; Padilla, L.M.; Shah, P.; Zacks, J.M.; Hullman, J. The Science of Visual Data Communication: What Works. Psychol. Sci. Public Interest 2021, 22, 110–161. [Google Scholar] [CrossRef] [PubMed]
  8. Ślusarski, M.; Jurkiewicz, M. Visualisation of Spatial Data Uncertainty. A Case Study of a Database of Topographic Objects. ISPRS Int. J. Geo-Inf. 2020, 9, 16. [Google Scholar] [CrossRef]
  9. Ripberger, J.; Bell, A.; Fox, A.; Forney, A.; Livingston, W.; Gaddie, C.; Silva, C.; Jenkins-Smith, H. Communicating Probability Information in Weather Forecasts: Findings and Recommendations from a Living Systematic Review of the Research Literature. Weather Clim. Soc. 2022, 14, 481–498. [Google Scholar] [CrossRef]
  10. Heltne, A.; Frans, N.; Hummelen, B.; Falkum, E.; Germans Selvik, S.; Paap, M.C.S. A Systematic Review of Measurement Uncertainty Visualizations in the Context of Standardized Assessments. Scand. J. Psychol. 2023, 64, 595–608. [Google Scholar] [CrossRef]
  11. Padilla, L.M.K.; Castro, S.C.; Hosseinpour, H. Chapter Seven—A Review of Uncertainty Visualization Errors: Working Memory as an Explanatory Theory. In Psychology of Learning and Motivation; Federmeier, K.D., Ed.; The Psychology of Learning and Motivation; Academic Press: Cambridge, MA, USA, 2021; Volume 74, pp. 275–315. [Google Scholar]
  12. Srabanti, S.; Veiga, C.; Silva, E.; Lage, M.; Ferreira, N.; Miranda, F. A Comparative Study of Methods for the Visualization of Probability Distributions of Geographical Data. Multimodal Technol. Interact. 2022, 6, 53. [Google Scholar] [CrossRef]
  13. Zhang, J.; Yuan, L.; Ran, T.; Peng, S.; Tao, Q.; Xiao, W.; Cui, J. A Dynamic Detection and Data Association Method Based on Probabilistic Models for Visual SLAM. Displays 2024, 82, 102663. [Google Scholar] [CrossRef]
  14. Nguyen, H.D.; McLachlan, G.J. Maximum Likelihood Estimation of Triangular and Polygonal Distributions. Comput. Stat. Data Anal. 2016, 102, 23–36. [Google Scholar] [CrossRef]
  15. Newburger, E.; Elmqvist, N. Comparing Overlapping Data Distributions Using Visualization. Inf. Vis. 2023, 22, 291–306. [Google Scholar] [CrossRef]
  16. Brasse, J.; Broder, H.R.; Förster, M.; Klier, M.; Sigler, I. Explainable Artificial Intelligence in Information Systems: A Review of the Status Quo and Future Research Directions. Electron. Mark. 2023, 33, 26. [Google Scholar] [CrossRef]
  17. Jean, V.; Boucher, M.-A.; Frini, A.; Roussel, D. Uncertainty in Three Dimensions: The Challenges of Communicating Probabilistic Flood Forecast Maps. Hydrol. Earth Syst. Sci. 2023, 27, 3351–3373. [Google Scholar] [CrossRef]
  18. Correll, M.; Gleicher, M. Error Bars Considered Harmful: Exploring Alternate Encodings for Mean and Error. IEEE Trans. Vis. Comput. Graph. 2014, 20, 2142–2151. [Google Scholar] [CrossRef]
  19. Pérez-Messina, I.; Ceneda, D.; El-Assady, M.; Miksch, S.; Sperrle, F. A Typology of Guidance Tasks in Mixed-Initiative Visual Analytics Environments. Comput. Graph. Forum 2022, 41, 465–476. [Google Scholar] [CrossRef]
  20. Xiong, C.; Setlur, V.; Bach, B.; Koh, E.; Lin, K.; Franconeri, S. Visual Arrangements of Bar Charts Influence Comparisons in Viewer Takeaways. IEEE Trans. Vis. Comput. Graph. 2022, 28, 955–965. [Google Scholar] [CrossRef]
  21. Ceneda, D.; Andrienko, N.; Andrienko, G.; Gschwandtner, T.; Miksch, S.; Piccolotto, N.; Schreck, T.; Streit, M.; Suschnigg, J.; Tominski, C. Guide Me in Analysis: A Framework for Guidance Designers. Comput. Graph. Forum 2020, 39, 269–288. [Google Scholar] [CrossRef]
  22. Woelmer, W.M.; Moore, T.N.; Lofton, M.E.; Thomas, R.Q.; Carey, C.C. Embedding Communication Concepts in Forecasting Training Increases Students’ Understanding of Ecological Uncertainty. Ecosphere 2023, 14, e4628. [Google Scholar] [CrossRef]
  23. Spiegelhalter, D.; Pearson, M.; Short, I. Visualizing Uncertainty About the Future. Science 2011, 333, 1393–1400. [Google Scholar] [CrossRef]
  24. Cila, N. Designing Human-Agent Collaborations: Commitment, Responsiveness, and Support. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April–5 May 2022; Association for Computing Machinery: New York, NY, USA, 2022; pp. 1–18. [Google Scholar]
  25. Carr, R.H.; Semmens, K.; Montz, B.; Maxfield, K. Improving the Use of Hydrologic Probabilistic and Deterministic Information in Decision-Making. Bull. Am. Meteorol. Soc. 2021, 102, E1878–E1896. [Google Scholar] [CrossRef]
  26. Shulner-Tal, A.; Kuflik, T.; Kliger, D. Enhancing Fairness Perception—Towards Human-Centred AI and Personalized Explanations Understanding the Factors Influencing Laypeople’s Fairness Perceptions of Algorithmic Decisions. Int. J. Hum.–Comput. Interact. 2023, 39, 1455–1482. [Google Scholar] [CrossRef]
  27. Benjamin, J.J.; Berger, A.; Merrill, N.; Pierce, J. Machine Learning Uncertainty as a Design Material: A Post-Phenomenological Inquiry. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 1–14. [Google Scholar]
  28. Stephanidis, C.; Salvendy, G.; Antona, M.; Chen, J.Y.C.; Dong, J.; Duffy, V.G.; Fang, X.; Fidopiastis, C.; Fragomeni, G.; Fu, L.P.; et al. Seven HCI Grand Challenges. Int. J. Hum.–Comput. Interact. 2019, 35, 1229–1269. [Google Scholar] [CrossRef]
  29. Abdul, A.; Vermeulen, J.; Wang, D.; Lim, B.Y.; Kankanhalli, M. Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 1–18. [Google Scholar]
  30. Theron, R.; Padilla, L.M. Editorial: Uncertainty Visualization and Decision Making. Front. Comput. Sci. 2021, 3, 758406. [Google Scholar] [CrossRef]
  31. Bancilhon, M.; Liu, Z.; Ottley, A. Let’s Gamble: How a Poor Visualization Can Elicit Risky Behavior. In Proceedings of the 2020 IEEE Visualization Conference (VIS), Salt Lake City, UT, USA, 25–30 October 2020; pp. 196–200. [Google Scholar]
  32. Andrienko, N.; Andrienko, G.; Chen, S.; Fisher, B. Seeking Patterns of Visual Pattern Discovery for Knowledge Building. Comput. Graph. Forum 2022, 41, 124–148. [Google Scholar] [CrossRef]
  33. McNutt, A. What Are Table Cartograms Good for Anyway? An Algebraic Analysis. Comput. Graph. Forum 2021, 40, 61–73. [Google Scholar] [CrossRef]
  34. Korporaal, M.; Ruginski, I.T.; Fabrikant, S.I. Effects of Uncertainty Visualization on Map-Based Decision Making Under Time Pressure. Front. Comput. Sci. 2020, 2, 32. [Google Scholar] [CrossRef]
  35. Cassenti, D.N.; Kaplan, L.M. Robust Uncertainty Representation in Human-AI Collaboration. In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications III; SPIE: Bellingham, WA, USA, 2021; Volume 11746, pp. 249–262. [Google Scholar]
  36. Zhao, W.; Wang, G.; Wang, Z.; Liu, L.; Wei, X.; Wu, Y. An Uncertainty Visual Analytics Approach for Bus Travel Time. Vis. Inform. 2022, 6, 1–11. [Google Scholar] [CrossRef]
  37. Guk, A.P.; Khlebnikova, E.P.; Shlyakhova, M.M. Technology of Regional and Global Water Monitoring Objects According to Remote Sensing Data. In Proceedings of the 25th International Symposium on Atmospheric and Ocean Optics: Atmospheric Physics, Novosibirsk, Russia, 1–5 July 2019; SPIE: Bellingham, WA, USA, 2019; Volume 11208, pp. 1117–1121. [Google Scholar]
  38. Song, M.; Wang, S.; Zhao, P.; Chen, Y.; Wang, J. Modeling Kelvin Wake Imaging Mechanism of Visible Spectral Remote Sensing. Appl. Ocean Res. 2021, 113, 102712. [Google Scholar] [CrossRef]
  39. Liu, Z.; Qiu, Q.; Li, J.; Wang, L.; Plaza, A. Geographic Optimal Transport for Heterogeneous Data: Fusing Remote Sensing and Social Media. IEEE Trans. Geosci. Remote Sens. 2021, 59, 6935–6945. [Google Scholar] [CrossRef]
  40. Gao, G.; Liu, Q.; Hu, Z.; Li, L.; Wen, Q.; Wang, Y. PSGCNet: A Pyramidal Scale and Global Context Guided Network for Dense Object Counting in Remote-Sensing Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5619412. [Google Scholar] [CrossRef]
  41. Li, J.; Liao, Y.; Zhang, J.; Zeng, D.; Qian, X. Semi-Supervised DEGAN for Optical High-Resolution Remote Sensing Image Scene Classification. Remote Sens. 2022, 14, 4418. [Google Scholar] [CrossRef]
  42. Sun, Y.; Li, X.; Shi, H.; Cui, J.; Wang, W.; Ma, H.; Chen, N. Modeling Salinized Wasteland Using Remote Sensing with the Integration of Decision Tree and Multiple Validation Approaches in Hetao Irrigation District of China. CATENA 2022, 209, 105854. [Google Scholar] [CrossRef]
  43. Bawa, A.; Senay, G.B.; Kumar, S. Satellite Remote Sensing of Crop Water Use across the Missouri River Basin for 1986–2018 Period. Agric. Water Manag. 2022, 271, 107792. [Google Scholar] [CrossRef]
  44. Huang, J.; Gómez-Dans, J.L.; Huang, H.; Ma, H.; Wu, Q.; Lewis, P.E.; Liang, S.; Chen, Z.; Xue, J.-H.; Wu, Y.; et al. Assimilation of Remote Sensing into Crop Growth Models: Current Status and Perspectives. Agric. For. Meteorol. 2019, 276–277, 107609. [Google Scholar] [CrossRef]
  45. Irani Rahaghi, A.; Lemmin, U.; Sage, D.; Barry, D.A. Achieving High-Resolution Thermal Imagery in Low-Contrast Lake Surface Waters by Aerial Remote Sensing and Image Registration. Remote Sens. Environ. 2019, 221, 773–783. [Google Scholar] [CrossRef]
  46. Levin, I.; Hershkovitz, T.; Rotman, S. Hyperspectral Target Detection Using Cluster-Based Probability Models Implemented in a Generalized Likelihood Ratio Test. In Proceedings of the Image and Signal Processing for Remote Sensing XXV, Strasbourg, France, 9–12 September 2019; SPIE: Bellingham, WA, USA, 2019; Volume 11155, pp. 174–185. [Google Scholar]
  47. Deng, C.; Cen, Y.; Zhang, L. Learning-Based Hyperspectral Imagery Compression through Generative Neural Networks. Remote Sens. 2020, 12, 3657. [Google Scholar] [CrossRef]
  48. Bajić, M. Modeling and Simulation of Very High Spatial Resolution UXOs and Landmines in a Hyperspectral Scene for UAV Survey. Remote Sens. 2021, 13, 837. [Google Scholar] [CrossRef]
  49. van der Bles, A.M.; van der Linden, S.; Freeman, A.L.J.; Mitchell, J.; Galvao, A.B.; Zaval, L.; Spiegelhalter, D.J. Communicating Uncertainty about Facts, Numbers and Science. R. Soc. Open Sci. 2019, 6, 181870. [Google Scholar] [CrossRef]
  50. Ben-Moshe, N.; Levinstein, B.A.; Livengood, J. Probability and Informed Consent. Theor. Med. Bioeth. 2023, 44, 545–566. [Google Scholar] [CrossRef] [PubMed]
  51. Klockow-McClain, K.E.; McPherson, R.A.; Thomas, R.P. Cartographic Design for Improved Decision Making: Trade-Offs in Uncertainty Visualization for Tornado Threats. Ann. Am. Assoc. Geogr. 2020, 110, 314–333. [Google Scholar] [CrossRef]
  52. Preston, A.; Ma, K.-L. Communicating Uncertainty and Risk in Air Quality Maps. IEEE Trans. Vis. Comput. Graph. 2023, 29, 3746–3757. [Google Scholar] [CrossRef]
  53. Glaser, M.; Lengyel, D.; Toulouse, C.; Schwan, S. How Do We Deal with Uncertain Information? Effects of Verbal and Visual Expressions of Uncertainty on Learning. Educ. Psychol. Rev. 2022, 34, 1097–1131. [Google Scholar] [CrossRef]
  54. Dimara, E.; Bezerianos, A.; Dragicevic, P. Conceptual and Methodological Issues in Evaluating Multidimensional Visualizations for Decision Support. IEEE Trans. Vis. Comput. Graph. 2018, 24, 749–759. [Google Scholar] [CrossRef]
  55. Hopster-den Otter, D.; Muilenburg, S.N.; Wools, S.; Veldkamp, B.P.; Eggen, T.J.H.M. Comparing the Influence of Various Measurement Error Presentations in Test Score Reports on Educational Decision-Making. Assess. Educ. Princ. Policy Pract. 2019, 26, 123–142. [Google Scholar] [CrossRef]
  56. Hofman, J.M.; Goldstein, D.G.; Hullman, J. How Visualizing Inferential Uncertainty Can Mislead Readers About Treatment Effects in Scientific Results. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 1–12. [Google Scholar]
  57. Mulder, K.J.; Lickiss, M.; Black, A.; Charlton-Perez, A.J.; McCloy, R.; Young, J.S. Designing Environmental Uncertainty Information for Experts and Non-Experts: Does Data Presentation Affect Users’ Decisions and Interpretations? Meteorol. Appl. 2020, 27, e1821. [Google Scholar] [CrossRef]
  58. Kale, A.; Kay, M.; Hullman, J. Visual Reasoning Strategies for Effect Size Judgments and Decisions. IEEE Trans. Vis. Comput. Graph. 2021, 27, 272–282. [Google Scholar] [CrossRef]
  59. Dy, B.; Ibrahim, N.; Poorthuis, A.; Joyce, S. Improving Visualization Design for Effective Multi-Objective Decision Making. IEEE Trans. Vis. Comput. Graph. 2022, 28, 3405–3416. [Google Scholar] [CrossRef]
  60. Millet, B.; Majumdar, S.J.; Cairo, A.; McNoldy, B.D.; Evans, S.D.; Broad, K. Exploring the Impact of Visualization Design on Non-Expert Interpretation of Hurricane Forecast Path. Int. J. Hum.–Comput. Interact. 2022, 40, 425–440. [Google Scholar] [CrossRef]
  61. Taieb-Maimon, M.; Ya’akobi, E.; Itzhak, N.; Zaltsman, Y. Comparing Visual Encodings for the Task of Anomaly Detection. Int. J. Hum.–Comput. Interact. 2022, 40, 357–375. [Google Scholar] [CrossRef]
  62. Kale, A.; Wu, Y.; Hullman, J. Causal Support: Modeling Causal Inferences with Visualizations. IEEE Trans. Vis. Comput. Graph. 2022, 28, 1150–1160. [Google Scholar] [CrossRef] [PubMed]
  63. Alves, T.; Delgado, T.; Henriques-Calado, J.; Gonçalves, D.; Gama, S. Exploring the Role of Conscientiousness on Visualization-Supported Decision-Making. Comput. Graph. 2023, 111, 47–62. [Google Scholar] [CrossRef]
  64. Yang, L.; Xiong, C.; Wong, J.K.; Wu, A.; Qu, H. Explaining with Examples: Lessons Learned from Crowdsourced Introductory Description of Information Visualizations. IEEE Trans. Vis. Comput. Graph. 2023, 29, 1638–1650. [Google Scholar] [CrossRef] [PubMed]
  65. Weir, C.J.; Bowman, A.W. Density Strips: Visualisation of Uncertainty in Clinical Data Summaries and Research Findings. BMJ Evid.-Based Med. 2022, 27, 373–377. [Google Scholar] [CrossRef] [PubMed]
  66. Kim, Y.-S.; Kayongo, P.; Grunde-McLaughlin, M.; Hullman, J. Bayesian-Assisted Inference from Visualized Data. IEEE Trans. Vis. Comput. Graph. 2021, 27, 989–999. [Google Scholar] [CrossRef] [PubMed]
  67. Jung, M.F.; Sirkin, D.; Gür, T.M.; Steinert, M. Displayed Uncertainty Improves Driving Experience and Behavior: The Case of Range Anxiety in an Electric Car. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Republic of Korea, 18–23 April 2015; Association for Computing Machinery: New York, NY, USA, 2015; pp. 2201–2210. [Google Scholar]
  68. Zhao, W.; Jiang, H.; Tang, K.; Pei, W.; Wu, Y.; Qayoom, A. Knotted-Line: A Visual Explorer for Uncertainty in Transportation System. J. Comput. Lang. 2019, 53, 1–8. [Google Scholar] [CrossRef]
  69. Kalinowski, P.; Lai, J.; Cumming, G. A Cross-Sectional Analysis of Students’ Intuitions When Interpreting CIs. Front. Psychol. 2018, 9, 112. [Google Scholar] [CrossRef]
  70. Fernandes, M.; Walls, L.; Munson, S.; Hullman, J.; Kay, M. Uncertainty Displays Using Quantile Dotplots or CDFs Improve Transit Decision-Making. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 1–12. [Google Scholar]
  71. Qin, C.; Joslyn, S.; Savelli, S.; Demuth, J.; Morss, R.; Ash, K. The Impact of Probabilistic Tornado Warnings on Risk Perceptions and Responses. J. Exp. Psychol.-Appl. 2023, 30, 206–239. [Google Scholar] [CrossRef]
  72. Toet, A.; van Erp, J.B.; Boertjes, E.M.; van Buuren, S. Graphical Uncertainty Representations for Ensemble Predictions. Inf. Vis. 2019, 18, 373–383. [Google Scholar] [CrossRef]
  73. Yang, F.; Hedayati, M.; Kay, M. Subjective Probability Correction for Uncertainty Representations. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23–28 April 2023; Association for Computing Machinery: New York, NY, USA, 2023; pp. 1–17. [Google Scholar]
  74. Panagiotidou, G.; Vandam, R.; Poblome, J.; Moere, A.V. Implicit Error, Uncertainty and Confidence in Visualization: An Archaeological Case Study. IEEE Trans. Vis. Comput. Graph. 2022, 28, 4389–4402. [Google Scholar] [CrossRef] [PubMed]
  75. Boukhelifa, N.; Perrin, M.-E.; Huron, S.; Eagan, J. How Data Workers Cope with Uncertainty: A Task Characterisation Study. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 3645–3656. [Google Scholar]
  76. Lin, H.; Akbaba, D.; Meyer, M.; Lex, A. Data Hunches: Incorporating Personal Knowledge into Visualizations. IEEE Trans. Vis. Comput. Graph. 2023, 29, 504–514. [Google Scholar] [CrossRef] [PubMed]
  77. Blastland, M.; Freeman, A.L.J.; van der Linden, S.; Marteau, T.M.; Spiegelhalter, D. Five Rules for Evidence Communication. Nature 2020, 587, 362–364. [Google Scholar] [CrossRef] [PubMed]
  78. Purificato, E.; Lorenzo, F.; Fallucchi, F.; De Luca, E.W. The Use of Responsible Artificial Intelligence Techniques in the Context of Loan Approval Processes. Int. J. Hum.–Comput. Interact. 2023, 39, 1543–1562. [Google Scholar] [CrossRef]
  79. Najafzadeh, M.; Basirian, S.; Li, Z. Vulnerability of the Rip Current Phenomenon in Marine Environments Using Machine Learning Models. Results Eng. 2024, 21, 101704. [Google Scholar] [CrossRef]
  80. Kumar, M.; Samui, P.; Kumar, D.R.; Asteris, P.G. State-of-the-Art XGBoost, RF and DNN Based Soft-Computing Models for PGPN Piles. Geomech. Geoengin. 2024, 1–16. [Google Scholar] [CrossRef]
  81. Cousineau, D.; Goulet, M.-A.; Harding, B. Summary Plots with Adjusted Error Bars: The Superb Framework with an Implementation in R. Adv. Methods Pract. Psychol. Sci. 2021, 4, 25152459211035109. [Google Scholar] [CrossRef]
  82. Hullman, J. Why Authors Don’t Visualize Uncertainty. IEEE Trans. Vis. Comput. Graph. 2020, 26, 130–139. [Google Scholar] [CrossRef]
  83. Han, W.; Schulz, H.-J. Providing Visual Analytics Guidance through Decision Support. Inf. Vis. 2023, 22, 140–165. [Google Scholar] [CrossRef]
Figure 1. User perception of CV system reliability deviating from actual probabilistic characteristics leads to trust issues in human–computer collaboration.
Figure 2. The three visualization forms. The yellow line represents the shape of the normal distribution.
Figure 3. Experimental interface for the comparison of visualization formats. The violin plot represents the probability distribution derived from historical measurements.
Figure 4. The names, reflection values, TVPIs, and TVPDs corresponding to the 14 test points. The yellow line represents the shape of the normal distribution.
Figure 5. Distribution of δ across various visualization forms.
Figure 6. Test point positions are plotted on the horizontal axis and response times on the vertical axis; green dots represent participants' average response times for a single trial at each test point position.
Figure 7. The visual representation and the positioning of test points. The red dot represents all possible positions where the test point may appear.
Figure 8. The experimental interface of experiment 2. The density strip represents the probability distribution derived from historical measurements. The red dot signifies the system-measured value v.
Figure 9. Linear fit efficacy diagram. Green scatter points represent the participants' P3s; the yellow line represents the linear fit.
Table 1. Visualization forms in relevant studies in recent years.

References | Visualization Forms
Dimara et al. (2018) [54] | Parallel Coordinated Plot; Scatterplot Matrix; Tabular Chart
Hopster-den Otter et al. (2019) [55] | Error Bar; Color Value; Blur; Omitting Error
Jurkiewicz (2020) [8] | Color Hue; Glyphs; Contour; Grain Density
Hofman et al. (2020) [56] | Error Bar—95% Prediction Intervals; Error Bar—95% Confidence Intervals; Error Bar—Rescaled 95% Confidence Intervals; Hypothetical Outcome Plots
Mulder et al. (2020) [57] | Worded Probability; Spaghetti Plot; Fan Plot; Box Plot
Kale et al. (2021) [58] | Bell Curves; Error Bar; Hypothetical Outcome Plots; Quantile Dot Plots
Srabanti et al. (2022) [12] | Distribution Dot Map; Hypothetical Outcome Map; Distribution Interaction Map
Dy et al. (2022) [59] | Parallel Coordinates Plots; Scatter Plot Matrices; Heat Maps; Radar Charts
Millet et al. (2022) [60] | Probability Density Shading; Uniform, Diffuse Gray Shading; Cone of Uncertainty
Taieb-Maimon et al. (2022) [61] | Color Saturation; Position; Size; Tabular
Kale et al. (2022) [62] | Bar Charts; Icon Arrays; Text Tables
Xiong et al. (2022) [20] | Vertically Juxtaposed Bar; Horizontally Juxtaposed Bar; Overlaid Bar; Stacked Bar
Preston and Ma (2023) [52] | Dot Map; Small Multiple; Standard Contour; Sensor-Based Map
Alves et al. (2023) [63] | Parallel Coordinated Plot; Scatterplot Matrix
Kusumastuti et al. (2023) [4] | Interlace; Scatter; Ellipse
Yang et al. (2023) [64] | Parallel Coordinated Plot; Connected Scatter Plot; Chord Diagram; Mekko Chart
Table 2. Linear fitting efficacy of P1s and TVPIs across various visualization forms.

Form | Slope, k | Intercept | Coefficient (R²)
Density strips | 0.9785 | 0 | 0.9879
Violin plots | 0.9588 | 0 | 0.9846
Error bars | 0.9382 | 0 | 0.9737
Table 3. Linear fitting efficacy of P2s and TVPDs across various visualization forms.

Form | Slope, k | Intercept | Coefficient (R²)
Density strips | 2.6100 | −0.0953 | 0.9920
Violin plots | 2.5662 | −0.0738 | 0.9948
Error bars | 2.4256 | −0.0077 | 0.9904
Table 4. Mean response time and pupil diameter of participants across different visualization forms.

Form | Response Time (s) | Pupil Diameter (mm)
Density strips | 225.4 | 3.453
Violin plots | 232.6 | 3.402
Error bars | 248.5 | 3.366
Table 5. Linear fitting efficacy of P3s and TVPDs across various subtypes.

Form | Slope, k | Intercept | Coefficient (R²)
Symmetric | 1.0834 | −0.0867 | 0.9127
Positive Skew | 1.0699 | −0.0754 | 0.8852
Negative Skew | 1.0898 | −0.0856 | 0.9108
