Review

Gaze-Based Human–Computer Interaction for Museums and Exhibitions: Technologies, Applications and Future Perspectives

Department of Electrical, Computer and Biomedical Engineering, University of Pavia, Via A. Ferrata 5, 27100 Pavia, Italy
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Electronics 2023, 12(14), 3064; https://doi.org/10.3390/electronics12143064
Submission received: 30 May 2023 / Revised: 3 July 2023 / Accepted: 11 July 2023 / Published: 13 July 2023

Abstract

Eye tracking technology is now mature enough to be exploited in various areas of human–computer interaction. In this paper, we consider the use of gaze-based communication in museums and exhibitions, to make the visitor experience more engaging and attractive. While immersive and interactive technologies are now relatively widespread in museums, the use of gaze interaction is still in its infancy—despite the benefits it could provide, for example, to visitors with motor disabilities. Apart from some pioneering early works, only the last few years have seen an increase in gaze-based museum applications. This literature review aims to discuss the state of the art on this topic, highlighting advantages, limitations and current and future trends.

1. Introduction

Immersive and interactive technologies are increasingly being used to attract more visitors to museums and exhibitions. Renowned museums all over the world (such as the National Gallery in London or the Smithsonian Institution in Washington D.C.) frequently use digital installations to display cultural and scientific knowledge in a more engaging way.
The scientific literature has extensively examined the possibilities of augmented and virtual reality (AR/VR) technologies applied to Cultural Heritage [1,2,3,4], as well as the use of Serious Gaming [5,6], which can be very effective for teaching young visitors. Virtual museums (i.e., museums or collections accessible online) have also been widely studied [7], with research highlighting their importance especially during the COVID-19 pandemic [8,9].
In this paper, we focus on a particular, and little-studied, type of digital installation, namely gaze-based interactive applications. Traditionally, interactive installations for museums and exhibitions have mainly used gestural communication, especially exploiting the capabilities of the Microsoft Kinect [10,11,12]. Although this kind of application is now well established, museums are always interested in new approaches and new ways to effectively present their collections. Gaze-based interaction is relatively new in this field, but has great potential: it can provide natural and intuitive interaction in which visitors retrieve information by simply looking at the items on display; it can be a hygienic alternative to touchscreens (a very important aspect after the COVID-19 pandemic); and it can also enhance the accessibility of a museum or exhibition, since eye tracking technology is commonly used by motor-impaired people to communicate.
After some pioneering studies that date back to the late 1980s [13], the first attempts to use eye tracking technology in museums were focused on studying visitors’ behavior [14,15,16], their cognitive processes while observing artwork [17,18], and their emotional reactions [19]. More recently, various museums have used gaze-based solutions to both collect data and engage visitors—for instance, by showing them, after the visit, how they observed the items on display and/or highlighting possible similarities of their eye paths with those of other visitors. In this regard, notable examples include the Cleveland Museum of Art (https://mw18.mwconf.org/glami/gaze-tracker/ (accessed on 20 April 2023)) (United States), the ARoS Art Museum (https://userexperienceawards.com/2017-submissions/aros-art-museum-aros-public/ (accessed on 20 April 2023)) (Denmark) and the M-Museum Leuven (https://www.mleuven.be/en/research-support/research/eye-tracking-research-how-do-we-look-art (accessed on 20 April 2023)) (Belgium). These can mostly be considered “passive” uses of eye tracking technology, since there is not an actual, explicit interaction—gaze input is used for some kind of “a posteriori” analysis. There are also cases, such as the Science Museum of Trento (https://www.srlabs.it/en/project/muse/ (accessed on 20 April 2023)) (Italy), in which the interaction based on eye tracking per se (therefore not necessarily connected with the exhibited works) is simply included in scientific museums for informational purposes, to make the existence of this technology known to the general public.
A particular case involves robot-guides, a relatively new but active research field [20,21,22]. Gaze is sometimes used in this context, but mainly in an indirect way. For example, a robot may detect a visitor looking at a painting and start describing it [21]. In these applications, however, gaze direction is only roughly estimated by considering the visitor’s body and head position and orientation, and gaze input does not have an active role in the interaction, apart from triggering the activation of the robot.
For the present literature review, we analyzed the state of the art of gaze-based interaction for museums and exhibitions, with the purpose of identifying current and future trends, as well as highlighting the advantages and limitations of this technology. We considered only active gaze input, and only those works where real eye tracking devices were used (not simple estimates of head direction).
The paper is structured as follows: Section 2 highlights the purposes and goals of this review; Section 3 provides an overview of eye tracking technology; Section 4 presents the methodology and choices we adopted for paper selection; Section 5 and Section 6 illustrate and discuss the results obtained; Section 7, lastly, draws some conclusions.

2. Objectives

This review has two main goals. First, we summarize a relatively new application field of eye tracking not yet analyzed in other scientific literature reviews. We think that our work can be useful for researchers dealing with interactive applications in the Cultural Heritage field, since it discusses a possible alternative to the most common interactive approaches typically used in museums, namely, gestural and touch interfaces. Secondly, museums and exhibitions are among the few public contexts where gaze-based interaction has been used “in the wild”. Most gaze-based interactive applications have been developed as assistive solutions, to allow people with severe motor disabilities to interact with the computer, or for mere research purposes, to conduct experiments in controlled laboratory settings. However, very few general and practical applications of gaze-based communication are reported in the literature. Museums are a very good test field for eye tracking technology, since they are both public places visited by many people and relatively quiet environments.
We think that a review of gaze-based applications for museums and exhibitions can also be useful for those who need to develop gaze-based interactive applications for use in public places in general, which is a recent trend in eye tracking technology.

3. Eye Tracking Technology

Eye tracking is a technique for detecting and measuring eye movements and characteristics [23]. An eye tracker samples a person’s gaze position at a certain frequency. Knowing the gaze position over time allows the identification of fixations and saccades. Fixations, which typically last between 100 and 600 ms [24,25], are time periods during which the eyes are almost still, with the gaze focused on a specific element of the scene. Saccades, on the other hand, which normally last less than 100 ms [25], are very fast eye movements occurring between consecutive pairs of fixations, whose purpose is to relocate the gaze on a different element of the visual scene.
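To make the distinction concrete, the following minimal Python sketch identifies fixations in a raw gaze stream with a dispersion-threshold (I-DT) approach; the sampling rate, duration and dispersion thresholds, and the (x, y) sample format are illustrative assumptions, not values taken from the reviewed works.

```python
# Minimal dispersion-threshold (I-DT) fixation detector.
# Assumptions: gaze samples are (x, y) tuples in pixels, recorded at a known
# sampling rate; the thresholds below are illustrative only.

def dispersion(window):
    """Dispersion = (max_x - min_x) + (max_y - min_y) over a window of samples."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, rate_hz=60, min_duration_ms=100, max_dispersion_px=50):
    """Return a list of (start_index, end_index, centroid) fixations."""
    min_len = int(rate_hz * min_duration_ms / 1000)  # samples in a minimal fixation
    fixations = []
    i = 0
    while i + min_len <= len(samples):
        if dispersion(samples[i:i + min_len]) <= max_dispersion_px:
            # Grow the window while the points stay tightly clustered.
            j = i + min_len
            while j < len(samples) and dispersion(samples[i:j + 1]) <= max_dispersion_px:
                j += 1
            xs = [p[0] for p in samples[i:j]]
            ys = [p[1] for p in samples[i:j]]
            fixations.append((i, j - 1, (sum(xs) / len(xs), sum(ys) / len(ys))))
            i = j  # samples between fixations are treated as saccades
        else:
            i += 1
    return fixations
```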
Electro-oculography, scleral contact lens/search coil, photo-oculography, video-oculography and pupil center-corneal reflection are some of the eye-tracking technologies that have been developed over time [23].
Electro-oculography (EOG), one of the oldest methods to record eye movements, measures the skin’s electrical potential differences through small electrodes placed around the eyes [26]. This solution allows recording of eye movements even when the eyes are closed, but is generally more invasive and less accurate and precise than other approaches.
Scleral contact lens/search coil is another old method, consisting of small coils of wire inserted in special contact lenses. The user’s head is then placed inside a magnetic field to generate an electrical potential that allows estimation of eye position [27]. While this technique has very high spatial and temporal resolution, it is also extremely invasive and uncomfortable, and is used practically only for physiological studies.
Photo- and video-oculography (POG and VOG) are generally video-based methods in which small cameras, incorporated in head-mounted devices, measure eye features such as pupil size, iris–sclera boundaries and possible corneal reflections. The assessment of these characteristics can be performed either automatically or manually. However, these systems tend to be inaccurate and are mainly used for medical purposes [23].
Pupil center-corneal reflection (PCCR) is the most widely used eye tracking technique nowadays. Its basic principle consists of using infrared (or near-infrared) light sources to illuminate the eyes and detect reflections on their surface (Figure 1); this allows determination of the gaze direction [23]. Infrared light is employed because it is invisible and also produces a better contrast between pupil and iris. The prices of these eye trackers range from a few hundred to tens of thousands of euros, depending on their accuracy and gaze sampling frequency. All the works analyzed in the present review employ this technology.
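As an illustration of the PCCR principle, a common approach is to map the vector between the pupil center and the corneal reflection (glint), measured in the eye camera image, to screen coordinates through a low-order polynomial fitted during calibration. The sketch below shows this idea under that assumption; the second-order polynomial form and all names are illustrative and do not describe the internals of any specific commercial device.

```python
import numpy as np

# Toy PCCR gaze mapping: fit a second-order polynomial from the pupil-glint
# vector (dx, dy), measured in the eye camera image, to screen coordinates.
# The polynomial form and the calibration data are illustrative assumptions.

def design_matrix(d):
    dx, dy = d[:, 0], d[:, 1]
    return np.column_stack([np.ones_like(dx), dx, dy, dx * dy, dx**2, dy**2])

def calibrate(pg_vectors, screen_points):
    """Least-squares fit of the mapping coefficients from calibration data."""
    A = design_matrix(np.asarray(pg_vectors, dtype=float))
    B = np.asarray(screen_points, dtype=float)      # shape (n_points, 2)
    coeffs, *_ = np.linalg.lstsq(A, B, rcond=None)  # shape (6, 2)
    return coeffs

def estimate_gaze(pg_vector, coeffs):
    """Map a single pupil-glint vector to an (x, y) screen position."""
    A = design_matrix(np.asarray([pg_vector], dtype=float))
    return (A @ coeffs)[0]
```

In this scheme, the user looks at a few known on-screen targets during calibration while the corresponding pupil-glint vectors are recorded; calibrate then returns the coefficients used at runtime by estimate_gaze.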
There are two main kinds of eye trackers, namely, remote and wearable. Remote eye trackers (Figure 2, left) are normally non-intrusive devices (often little “bars”) that are positioned at the bottom of standard displays. They are currently the most prevalent kind of eye trackers. Wearable eye trackers (Figure 2, right), on the other hand, are frequently used to study viewing behavior in real-world settings. Recent wearable eye trackers look more and more like glasses, making them much more comfortable than in the past.
Psychology [28], neuroscience [29], marketing [30], education [31], usability [32,33] and biometrics [34,35] are all fields in which eye tracking technology has been applied, for instance, to determine the user’s gaze path while looking at something (e.g., an image or a web page) or to obtain information about the screen regions that are most frequently inspected. When using an eye tracker as an input tool (i.e., for interactive purposes, in an explicit way), gaze data must be evaluated in real-time, so that the computer can respond to specific gaze behaviors [36]. Gaze input is also extremely beneficial as an assistive technology for people who are unable to use their hands. Several assistive solutions have been devised to date, including those for writing [37,38,39], surfing the Web [40,41] and playing music [42,43].
Two common ways to provide gaze input are dwell time and gaze gestures. Dwell time, which is the most used approach, consists of fixating a target element (e.g., a button) for a certain time (the dwell time), after which an action connected to that element is triggered. The duration of the dwell time can vary depending on the application, but it should be chosen so as to avoid the so-called “Midas touch problem” [44], i.e., involuntary selections occurring when simply looking at the elements of an interface.
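A minimal sketch of the dwell-time principle is shown below: an action fires only after the gaze has remained on the same target for the whole dwell interval. The 800 ms default, the update-loop structure and the callback are illustrative assumptions, not parameters taken from the reviewed works.

```python
import time

# Minimal dwell-time selector: an action fires only after the gaze has stayed
# on the same target for the whole dwell interval (here 800 ms, an illustrative
# value; real applications tune it to mitigate the Midas touch problem).

class DwellSelector:
    def __init__(self, dwell_ms=800):
        self.dwell_s = dwell_ms / 1000.0
        self.current_target = None
        self.enter_time = None

    def update(self, target, on_select):
        """Call on every gaze sample with the element under the gaze (or None)."""
        now = time.monotonic()
        if target is not self.current_target:
            # Gaze moved to a different element: restart the dwell timer.
            self.current_target = target
            self.enter_time = now
        elif target is not None and now - self.enter_time >= self.dwell_s:
            on_select(target)      # trigger the action bound to the target
            self.enter_time = now  # require a full new dwell before firing again
```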
Gaze gestures consist of gaze paths performed by the user to trigger specific actions. This approach can be fast and is immune to the Midas touch problem, but it is also generally less intuitive than dwell time (since the user needs to memorize a set of gaze gestures, the learning curve may be steep). For this reason, gaze gestures are recommended only for applications meant to be used multiple times, such as writing systems (e.g., [45,46]).
Hybrid approaches that mix dwell time and gaze gestures (e.g., for interacting with video games [47]) have also been proposed, while other gaze input methods (such as those based on blinks [48] or smooth pursuit [39]) are currently less common.

4. Materials and Methods

4.1. Searching Methodology

The literature review was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [49]. We considered two large scientific databases: Scopus and Web of Science. We checked for updates until the end of April 2023. The search query was defined as follows: (“gaze” OR “eye tracking”) AND (“interaction” OR “interactive”) AND (“museum” OR “exhibition”). We only took into account works from the year 2000 onwards—previously, eye tracking technology was at such a primitive level that it was practically impossible to conceive of real interactive applications.

4.2. Selection Criteria

In our review, we only included peer-reviewed papers, published in journals or conference proceedings and written in English, presenting interactive gaze-based applications for real or virtual museums or exhibitions. We considered both remote and wearable eye trackers. We focused on direct interaction, namely, applications in which the user is conscious of the interaction and receives some feedback after providing a gaze input (e.g., a descriptive audio track starts after a visitor looks at a painting for a certain amount of time). Passive interaction, in which the gaze behavior of the user is analyzed at the end of the interaction (e.g., to visualize a visitor’s gaze map), was not considered. Similarly, we discarded studies in which gaze data were collected only for a posteriori analysis (e.g., for behavioral or psychological studies), in which gaze was used only as an auxiliary input (e.g., to improve the visual perception of an AR/VR simulation, not to actively interact), or in which gaze direction was only approximately estimated by detecting head/body position and orientation.
We also excluded those interactive applications in which museums were cited as one of the possible application scenarios, but no real contextualized experiments were conducted. Finally, for robotic museum guides, we discarded those studies in which the visitors’ gaze was simply used to trigger the attention of the robot and had no active role in the interaction.

4.3. Selected Records

As summarized in Figure 3, 272 papers were retrieved from the two considered databases. A total of 65 duplicates were detected and removed, as well as 23 editorials/introductions. Of the remaining 184 papers, 130 were excluded through the analysis of titles and abstracts, since they did not match the selection criteria or were out of scope (e.g., they were artistic studies). This left 54 eligible papers in all. We then carefully analyzed the full text of each and excluded 40 papers that did not match our selection criteria or were out of scope (for example, one paper was excluded because it concerned the evaluation of a new eye tracker, two because they described the creation of a dataset of gaze data acquired while visitors looked at artwork, and one because it was only a basic mock-up). In conclusion, 14 articles were selected for the review.

5. Results

5.1. Initial Analysis

We started our analysis by considering the temporal distribution and other categorizations of the selected papers.
The temporal distribution (Figure 4) shows that no work prior to 2012 matched our criteria, which is consistent with the evolution of eye tracking technology. Before 2012, eye trackers were mainly used for studies in controlled environments [23], such as laboratories or medical centers, or as an assistive technology for people with disabilities [50]. The last 10 years have seen a significant reduction in the prices and sizes of eye trackers, which have now become more affordable and viable, even for common applications (for example, in public places [39,51]).
Analyzing the content of the selected works, we categorized them in four different ways. A first category considered the scenario for which each application was designed, i.e., either a virtual or real museum/exhibition. For the latter case, we further subdivided the category into applications actually tested in a real museum and applications designed for a museum but only tested in a simulated setting, such as a research laboratory. We labeled these three categories as virtual museum, real museum and simulated museum, respectively. As summarized in Table 1, we can see that only two papers related to virtual museums, while most applications were designed for (and often actually tested in) real museums.
A second categorization considered Augmented Reality (AR) and Virtual Reality (VR) applications—always non-immersive VR, as we did not find any immersive VR cases where gaze was actively used for interaction. From Table 2, we can see that the papers were almost evenly distributed in the two categories.
A third categorization took into account the type of eye tracker employed, either wearable or remote. As can be seen from Table 3, the distribution between the two categories was exactly the same as that in Table 2 (which was easily predictable, as all AR applications also need a head-mounted display).
Finally, we considered the possible coexistence of additional input methods, other than gaze. From Table 4, we can see that there were only a few applications that also employed gestures or voice, while the majority only used gaze. All the gaze-only applications used the dwell time principle to make selections.
Figure 5 shows the four categories and their intersections using a Venn diagram.
Given these initial results, we decided to continue the analysis using the categorizations shown in Table 2 and Table 3, that divided the papers almost equally.

5.2. Wearable Eye Trackers/AR Applications

In 2012, Toyama et al. [60] proposed Museum Guide 2.0, an AR application that exploits the user’s gaze and traditional computer vision techniques to identify objects of interest and provide audio information about them. The core idea is that the user moves around the museum wearing the eye tracking device and obtains audio information about the observed objects. The object recognition algorithm is based on SIFT (scale-invariant feature transform), while fixations are used to focus the attention of the algorithm. Tests conducted in a laboratory setting showed good performance in recognizing the reference objects and positive feedback from users.
In 2012 as well, Schuchert et al. [61] developed an AR prototype that employed a bidirectional head-mounted display (HMD), called BiMi, as part of the ARtSENSE project. The prototype used gaze movements to pan and zoom on augmented images displayed on the HMD.
Between 2016 and 2018, Mokatren et al. [55,56,57] investigated the feasibility of incorporating a mobile eye tracker into an audio guide system for museum visitors. To this end, an image-matching-based indoor positioning approach and an eye-gaze recognition technique were combined to determine the user’s focus of attention, and were integrated into two separate versions of a mobile audio guide, namely, “proactive” and “reactive”. In the proactive version, a “beep” sound is played once the visitor’s position and the point or item of interest are determined, and audio information about the exhibit is then supplied shortly after. In the reactive version, a “beep” sound is played when the visitor’s position and the point or item of interest are determined, and the system waits for a mid-air gesture (a “stop sign”) to be made; the audio information is delivered once the user performs this gesture. A traditional museum visitor mobile guide system was created that employed a smartphone and low-energy Bluetooth beacons for positioning, connected with a commercial mobile eye tracker (the Pupil-Dev eye tracker by Pupil Labs). If the visitor looked at an exhibit for three seconds, the image-based positioning procedure began, identified the visitor’s location and point of interest and played audio information about the desired exhibit.
In 2019, Yang et al. [62] proposed a project aimed at enhancing landscape and genre paintings by virtually spatializing relevant sounds for the depicted items and scenes (for example, cattle mooing). Visitors’ auditory impression is tailored by tracking their eyes during the viewing process. The visitor is considered to be looking at an object of interest if at least 80% of their gaze falls on it; the corresponding sound is then amplified while the other sounds are attenuated. When the user looks at other parts of the picture, all virtual sounds are played at a balanced level, without highlighting any object or scenario. The developed prototype employs a wearable eye tracker (Pupil Labs Core by Pupil Labs) for gaze tracking, standard headphones for audio perception and a laptop in a backpack for gaze computation, viewing pose estimation and virtual sound spatialization. A user study with 14 participants indicated that the gaze-based audio augmentation helped them better focus on the areas of interest, and that the entire pipeline improved their experience with paintings.
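The gaze-driven mixing behavior described above can be sketched as follows. The 80% criterion comes from the paper, while the gain values, the hypothetical region and mixer interfaces (region.contains, mixer.set_gain) and the use of a window of recent gaze samples are assumptions added purely for illustration.

```python
# Sketch of gaze-driven sound emphasis inspired by the behavior described above:
# if at least 80% of recent gaze samples fall on one painted object, its sound is
# amplified and the others are attenuated; otherwise all sounds play at a balanced
# level. The region/mixer objects and gain values are illustrative assumptions.

def gaze_ratio(gaze_samples, region):
    """Fraction of recent gaze samples falling inside an object's region."""
    hits = sum(1 for g in gaze_samples if region.contains(g))
    return hits / len(gaze_samples) if gaze_samples else 0.0

def update_mix(gaze_samples, sound_regions, mixer,
               focus_gain=1.0, background_gain=0.2, balanced_gain=0.6):
    focused = None
    for name, region in sound_regions.items():
        if gaze_ratio(gaze_samples, region) >= 0.8:  # threshold from the paper
            focused = name
            break
    for name in sound_regions:
        if focused is None:
            mixer.set_gain(name, balanced_gain)      # no object currently in focus
        else:
            mixer.set_gain(name, focus_gain if name == focused else background_gain)
```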
More recently, in 2021, Piening et al. [58] presented empirical research comparing a gaze-adaptive interface to an always-on interface in activities that require focusing on both real and virtual information. Users can look at relevant spots in the real world and, when they indicate their interest by staring at a specific point, the system provides them with extra information. A prototype was created for the Deutsches Museum in Munich, Germany, to supplement a music studio display spanning 4 × 2 m and containing 15 pieces; the studio included historic sound generators and transformers. Experiments were conducted over the course of three days during normal museum hours, and fifty people were recruited while visiting the museum. Using Vive base stations, participants’ eye movements were tracked in front of the exhibition in an area of 3.5 × 3 m, and static “hit boxes” were placed over each object. The interaction was controlled by gaze, and each AR panel featured two to four layers of information containing more content and media (activated after looking at an object for 300 ms).
Finally, in 2022, Giariskanis et al. [59] described an ongoing multimodal AR project that used a head-worn AR display to combine speech, music and sound with gesture- and gaze-based interaction. Visitors move in an actual museum setting where real and digital cultural artifacts are mixed. The Microsoft HoloLens 2 head-worn AR display is used to build the AR experience, originally conceived for an exhibition at the Archaeological Museum of Chania in Crete (Greece). A multimodal story underpinning the AR experience directs visitors as they look at artifacts. Voice commands are used to engage with the narrative, while hand- and gaze-based interactions are exploited to identify objects using instructions and musical sounds. The AR system was built with the Unity game engine and the Mixed Reality Toolkit (MRTK).

5.3. Remote Eye Trackers/Non-Immersive VR Applications

In 2014, Nagamatsu et al. [52] presented a prototype of a gaze-controlled public display at the Maritime Museum of Kobe (Japan). The device was composed of two monochrome GigE cameras (with a resolution of 2048 × 1088 pixels), eight infrared light sources, a projector and a PC. The cameras were placed in front of a showcase. Visitors could stop by the exhibit and look at four pictures on display, and their gaze was projected (in the form of a circle) onto the observed image. While this may not be considered true interaction, displaying visual feedback on the items being watched can be useful to attract visitors’ attention.
In 2015, Cantoni et al. [53,54] developed a gaze-based interactive application for the exhibition “1525–2015. Pavia, the Battle, the Future. Nothing was the same again,” held at the Visconti Castle in Pavia (Italy). The application allows visitors to interact, using only their gaze, with images (in the case of the Pavia exhibition, photos of seven famous tapestries depicting the main phases of the Battle of Pavia). After a short tutorial, visitors can select one of the pictures, pan and zoom over it and retrieve information about the depicted characters. Each image contains various “sensitive areas” that, when observed, are highlighted with a semi-transparent yellow rectangle. A short descriptive text then appears near the rectangle to provide contextual information. Three eye tracking workstations were available at the Pavia exhibition, each equipped with an Eye Tribe eye tracker with a sampling frequency of 30 Hz. The application was generally well received by visitors, despite some problems due to the limited calibration performance of the eye tracker. More than 3000 people tried it, though only about 1000 continued the interaction beyond the tutorial.
In 2018, Al-Thani et al. [64] designed an application for virtual museums to investigate two main research questions: (1) “Do users prefer gaze-based interaction over mouse-based interaction in the context of digital heritage artifacts?” and (2) “Do users perceive gaze-based interaction more natural than mouse-based interaction when dealing with digital heritage artifacts?”. In a preliminary study, carried out in five museums in Qatar with paintings portraying pearl diving, four visitors per museum were observed while interacting with some items, and all their actions/events were noted. The main research was then conducted in a university laboratory with 60 participants. The experiment was carried out in three phases. Phase 1 involved mouse or gaze interaction, randomly assigned, followed by questionnaires. Phase 2 involved interacting with the opposite mode of the prior phase, again followed by a questionnaire. Finally, in phase 3, the participants could choose between the two interaction modes for free exploration. Users who engaged with the virtual gallery via gaze had a higher mean “affective response” than those who employed the mouse, while the results for ease of navigation were ambiguous. In addition, 88% of the participants preferred gaze-based interaction (judged as “more natural” by 92% of them).
In 2021, Raptis et al. [65] presented an interactive system for virtual museums aimed at assisting visitors in gaining better knowledge of art through visual and audio interactions. The MuMIA (Multi-Modal Interactions in Art) technology enables visitors to engage with art exhibitions in a variety of ways. Each exhibit is divided into Areas of Interest (AOIs), and each AOI contains cultural information connected to the features of the exhibits (e.g., theme, creator, era, etc.). When a visitor stands in front of an art exhibit, they visually investigate it and receive essential audio information. They then identify specific AOIs by looking at them and asking the system, using voice commands, for more information. The system then searches for and plays the relevant audio file. The visit scenario was simulated with “The School of Athens” painting, using an Eye Tribe eye tracker.
Finally, in 2022, Dondi et al. [63] proposed an evolution of the work presented by Cantoni et al. [53,54]. The previously developed system was entirely revised, improving the interface according to the feedback provided by the visitors to the 2015 exhibition. This new application, called Gaze-based Artworks Explorer, allows users to both select and navigate generic images of paintings. The user can pan and zoom the chosen image and trigger some “active areas” that provide textual, image and/or video information related to those elements of the painting. The design of the application followed gamification principles, to encourage visitors to find all the available active areas. A “back-end” tool called ActiveArea Selector was also developed, which allowed art experts or museum curators to design and modify the active areas of each painting. The application was tested only in a laboratory setting with 33 participants (unfortunately, the planned trial in a real museum could not take place due to the COVID-19 pandemic). A Tobii 4C eye tracker, with a sampling rate of 90 Hz, was employed. The participants were asked to complete a series of tasks involving all the main functions of the application (selecting an image, panning, zooming, finding all the active areas and activating one of the areas). A questionnaire was used to collect subjective feedback about the application. The user study showed good performance, both in terms of the time necessary to complete the given tasks and in terms of subjective perception, with scores equal to or greater than four (on a Likert scale from 1 to 5) for all questions.

6. Discussion

Although the first gaze-based interactive applications for museums date back more than a decade, the use of eye tracking as a technology to attract and engage visitors can still be considered relatively new and uncommon today. The few applications tested in real museums have only been in use for short periods, mainly as prototypes exploited for scientific purposes. However, our analysis has highlighted important aspects that we believe will characterize the future of eye tracking in museums and exhibitions.

6.1. Summary on the Use of Wearable Eye Trackers/AR Applications

To improve the visiting experience in museums, wearable eye trackers are being integrated into AR applications. By tracking the user’s gaze, these systems can identify elements of interest and deliver appropriate auditory information or augmentations, potentially resulting in a more informative, personalized and interactive experience. Mobile eye trackers and audio guide systems are being combined to create proactive and reactive versions of traditional mobile audio guides.
Eye tracking technology is also used to spatialize appropriate sounds in paintings or exhibits based on the visitor’s gaze. When a visitor focuses on a specific object or area, the related sound can be amplified while other sounds are reduced. Such gaze-based audio augmentation can increase the visitor’s focus and enhance their overall experience with the artwork.
The evolution of augmented reality applications in museums now includes multimodal interactions that exploit gaze-based engagement together with gestures, voice, music and general sounds. Voice commands allow visitors to interact with the narrative, and hand and eye gestures can be used to identify items and play music or instructions. Such a multimodal strategy can produce an attractive, immersive and dynamic AR experience.

6.2. Summary on the Use of Remote Eye Trackers/Non-Immersive VR Applications

Like wearable devices, remote eye trackers can capture visitors’ attention by providing them with appropriate output based on their gaze; even a simple circle shown on an observed image can act as a visual indicator of visitors’ focus and involvement. Visitors can engage with exhibits using their gaze, picking and examining particular photos or items, panning and zooming over them and obtaining contextual information on specific characters or objects. Studies comparing gaze-based interaction to traditional mouse-based interaction have shown that eye input can be better from an “emotional” point of view, and may also be perceived as more natural.
Multimodal interaction also plays an important role, mainly as a combination of gaze and voice input. For example, visitors can visually explore exhibits, gaze-select specific areas of interest and use voice commands to ask for further information. Moreover, visitor interaction with artwork can be facilitated using gamification principles. For instance, visitors can use their gaze to move around paintings by panning, zooming and triggering “active areas”, which then provide text, images and/or videos about specific elements. Through enhanced interaction and learning opportunities, this strategy can encourage visitors to view the complete piece of art.
Due to the COVID-19 pandemic, several of the experiments presented in the papers analyzed in this review were carried out in laboratories or in very specific exhibitions. Nevertheless, the results of user studies involving task completion and participant feedback show that gaze-based applications are generally appreciated by users.
Overall, the integration of non-immersive VR applications and remote eye trackers into museum settings opens new opportunities for engaging visitors, personalizing interactions and improving their comprehension and enjoyment of exhibits. For a more engaging museum experience, gaze-based interaction and visual feedback can draw interest, encourage exploration and provide pertinent information.

6.3. General Considerations

All eye trackers, whether wearable or remote, have both advantages and disadvantages; Table 5 summarizes the most distinctive features of the two types of devices, especially when used in museums.
Regarding wearable eye trackers, one problem is certainly their generally higher cost compared to remote devices, but this issue will probably be overcome in the coming years, due to constant improvements in technology. However, the “hygiene” problem should be considered for wearable tools—especially after the COVID-19 pandemic. Sharing them among different visitors, without appropriate sanitation procedures, may represent both an actual risk and a psychological barrier. Of course, similar considerations apply to any wearable device, including those for augmented and virtual reality. In contrast, remote eye trackers are generally cheaper and more hygienic, but lack the freedom of movement of wearable solutions.
A well-known limitation of eye tracking technology (common to both wearable and remote devices) is the need for an initial calibration procedure when good precision is required, which may make the interaction less natural and, in some cases, may force the visitor to remain relatively still. Non-calibrated gaze-based applications have appeared in recent years (e.g., for writing [39,51] or for basic interaction in public places [66,67]), but they are still significantly slower than their calibrated counterparts.
Despite these limitations, eye tracking technology certainly has its advantages as well. For example, remote devices are a more hygienic solution than the much more popular touchscreens, and can also help improve accessibility by enabling people with mobility impairments to interact with the exhibit using their gaze only. In general, eye tracking can be used to provide hands-free input by simply focusing on places of interest, thus implementing a more natural way of interacting with exhibits in a museum (for example, showing visitors information about what they are looking at in a painting).
The review of the scientific literature has also highlighted that some research paths are not yet fully explored. In particular, gaze-based interactive applications for virtual museums are clearly a less-studied topic. Similarly, we did not find any immersive VR applications for museums that actively use gaze to interact with virtual objects; gaze is mainly exploited to simply “align” the simulation or, more rarely, for implicit interaction (e.g., for the creation of virtual artwork from visitors’ gaze paths [68]).

6.4. Future Perspectives

The above analysis has shown the potential of eye tracking technology for museums and exhibitions, but it has also highlighted some limitations of current approaches. Here, we suggest possible future research paths, also taking into account the current trends in related research fields. Table 6 provides a brief summary of our analysis.
The first problem to address is surely the need for initial calibration. While calibration is a standard procedure in a laboratory setting, it may be problematic in a museum (and, more generally, in public places). Since calibration depends on the user’s position, a museum visitor may move too much (for example, because they are distracted by other people) and thus need to repeat the procedure. Calibration problems may occur not only with remote eye trackers, but also with wearable devices: for example, a visitor may adjust the position of eye tracking glasses (to make them more comfortable), losing the initial calibration in the process. Repeated calibrations may be perceived as an annoyance, especially compared to other interactive devices, such as touch screens. The need for calibration is also the main reason why remote devices may require the user to stay relatively still during use; removing this constraint would ensure greater freedom of movement for visitors when dealing with applications based on remote eye trackers. For all these reasons, researchers should focus on developing applications that do not need calibration (for example, because the gaze-based interaction occurs through large graphical target elements, which allow pointing errors to be tolerated). A sort of “hidden” calibration can also be considered. For example, the initial screen of an application may contain a large central element to be looked at to start the interaction. This element can be used to perform a so-called one-point calibration, without the user being aware of it. While less precise than standard calibration procedures, this quick calibration method provides a rough initial reference that can then be used to adjust the user’s estimated gaze positions. Finally, Artificial Intelligence (AI) can also help. New algorithms for gaze estimation that exploit deep learning have recently been proposed [69]. With the rapid evolution of AI, it is probable that robust solutions that do not require calibration (or allow it to be achieved through very fast procedures) will soon be developed, thus guaranteeing more immediate and natural interaction.
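As a concrete illustration of the “hidden” one-point calibration idea, the minimal sketch below estimates a constant offset between the reported gaze and the known center of the start element while the visitor looks at it, and then subtracts that offset from subsequent samples; the class structure and variable names are illustrative assumptions.

```python
# Minimal "hidden" one-point calibration sketch: while the visitor looks at a
# large start element whose center is known, the average difference between the
# reported gaze and that center is stored as a constant offset and subtracted
# from all later gaze samples. Names and usage are illustrative assumptions.

class OnePointCalibrator:
    def __init__(self):
        self.offset = (0.0, 0.0)

    def calibrate(self, gaze_samples, target_center):
        """gaze_samples: (x, y) points recorded while the user fixates the target."""
        mean_x = sum(p[0] for p in gaze_samples) / len(gaze_samples)
        mean_y = sum(p[1] for p in gaze_samples) / len(gaze_samples)
        self.offset = (mean_x - target_center[0], mean_y - target_center[1])

    def correct(self, gaze_point):
        """Apply the stored offset to a raw gaze sample."""
        return (gaze_point[0] - self.offset[0], gaze_point[1] - self.offset[1])
```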
Eye tracking technology paired with AI could also offer real-time interpretation of exhibits or artwork. As visitors look at an exhibit, deep learning models could deliver rapid translations, audio descriptions or contextual information. This would enable a more engaging and instructive experience for a wider spectrum of visitors. Moreover, data on visitors’ gaze patterns and interests could be processed in real time to create custom experiences based on the preferences of each visitor. This would allow museums to provide specialized tours, recommend different itineraries or offer supplemental material tailored to a specific visitor’s attention patterns. Based on individual preferences, AI algorithms could evaluate gaze patterns and provide real-time guidance or suggestions on the most efficient routes or exhibits to explore. This “intelligent wayfinding” could help visitors save time and avoid missing important features.
Immersive VR can be another interesting path to explore for future research, especially for virtual museums. The COVID-19 pandemic has shown the need to accelerate the digitization of artworks to make them available to people all around the world, and immersive VR is surely the best option for this scenario. Explicit gaze interaction in VR is a well-studied topic (e.g., [70]), whose solutions could easily be extended to the exploration of virtual museums. As for gaze-based AR applications, there are already various examples in which gaze is used alone or in combination with voice and gestures (see Table 2 and Table 4). Future improvements will involve the consolidation of this technology, with more robust implementations of existing solutions, better responsiveness and a more realistic “merging” of virtual and real content. The availability of powerful HMDs with eye-tracking capabilities (such as the Microsoft HoloLens or the recently announced Apple Vision Pro) will surely enable, in the near future, more convenient gaze-based interaction for both AR and VR applications.
Finally, it should be noted that, while there are many gaze-based solutions designed to help people with mobility impairments, no trials have been proposed in museums until now. This is probably due to the fact that all the applications we have presented in our review have only been installed for a relatively short time, for experimental purposes or for specific events. However, museums are generally very interested in improving their accessibility and we think that eye tracking technology can be effectively used for this purpose. Research in this direction should include extensive testing with people with motor disabilities, to better understand how to properly design and contextualize gaze-based assistive applications in museums or temporary exhibitions.

7. Conclusions

In this literature review, we have analyzed the state of the art of gaze-based interactive applications for museums and exhibitions. Eye tracking technology can allow these cultural settings to improve visitor engagement, personalize experiences, optimize exhibit layouts and create more inclusive environments. By skillfully employing this technology, museums can attract and fascinate visitors, offering them a memorable and interactive experience.
The results of our analysis have shown that, in the last decade, eye tracking technology has been applied in museums in various ways to design AR and non-immersive VR applications, using both wearable and remote devices. However, there is still room for improvement, with several interesting research paths, including non-calibrated algorithms, guided and personalized content, interactive VR for virtual museums and assistive installations for motor-impaired visitors.
Of course, it is important to stress that, as with any technology, ethical considerations and visitor privacy should be prioritized when using eye tracking devices in museums and exhibitions. To preserve trust and provide a positive user experience for all visitors, clear communication, consent and data anonymization should be ensured.
In conclusion, even if gaze-based interactive solutions for museums and exhibitions are still few, we think this is a promising research field, especially with the emergence of new AI algorithms that may further improve the precision and robustness of eye tracking devices and help in the creation of personalized visitor experiences.

Author Contributions

Conceptualization, P.D. and M.P.; methodology, P.D. and M.P.; formal analysis, P.D. and M.P.; investigation, P.D. and M.P.; writing—original draft preparation, P.D.; writing—review and editing, M.P.; visualization, P.D.; supervision, M.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pedersen, I.; Gale, N.; Mirza-Babaei, P.; Reid, S. More than Meets the Eye: The Benefits of Augmented Reality and Holographic Displays for Digital Cultural Heritage. J. Comput. Cult. Herit. 2017, 10, 11. [Google Scholar] [CrossRef]
  2. Ibrahim, N.; Ali, N.M. A Conceptual Framework for Designing Virtual Heritage Environment for Cultural Learning. J. Comput. Cult. Herit. 2018, 11, 1–27. [Google Scholar] [CrossRef]
  3. Bekele, M.K.; Pierdicca, R.; Frontoni, E.; Malinverni, E.S.; Gain, J. A Survey of Augmented, Virtual, and Mixed Reality for Cultural Heritage. J. Comput. Cult. Herit. 2018, 11, 1–36. [Google Scholar] [CrossRef]
  4. Wang, C.; Zhu, Y. A Survey of Museum Applied Research Based on Mobile Augmented Reality. Comput. Intell. Neurosci. 2022, 2022, 2926241. [Google Scholar] [CrossRef]
  5. Mortara, M.; Catalano, C.E.; Bellotti, F.; Fiucci, G.; Houry-Panchetti, M.; Petridis, P. Learning cultural heritage by serious games. J. Cult. Herit. 2014, 15, 318–325. [Google Scholar] [CrossRef] [Green Version]
  6. DaCosta, B.; Kinsell, C. Serious Games in Cultural Heritage: A Review of Practices and Considerations in the Design of Location-Based Games. Educ. Sci. 2023, 13, 47. [Google Scholar] [CrossRef]
  7. Styliani, S.; Fotis, L.; Kostas, K.; Petros, P. Virtual museums, a survey and some issues for consideration. J. Cult. Herit. 2009, 10, 520–528. [Google Scholar] [CrossRef]
  8. Choi, B.; Kim, J. Changes and Challenges in Museum Management after the COVID-19 Pandemic. J. Open Innov. Technol. Mark. Complex. 2021, 7, 148. [Google Scholar] [CrossRef]
  9. Giannini, T.; Bowen, J.P. Museums and Digital Culture: From Reality to Digitality in the Age of COVID-19. Heritage 2022, 5, 192–214. [Google Scholar] [CrossRef]
  10. Fanini, B.; d’Annibale, E.; Demetrescu, E.; Ferdani, D.; Pagano, A. Engaging and shared gesture-based interaction for museums the case study of K2R international expo in Rome. In Proceedings of the 2015 Digital Heritage, Granada, Spain, 28 September–2 October 2015; Volume 1, pp. 263–270. [Google Scholar] [CrossRef]
  11. Yoshida, R.; Tamaki, H.; Sakai, T.; Nakadai, T.; Ogitsu, T.; Takemura, H.; Mizoguchi, H.; Namatame, M.; Saito, M.; Kusunoki, F.; et al. Novel application of Kinect sensor to support immersive learning within museum for children. In Proceedings of the 2015 9th International Conference on Sensing Technology (ICST), Auckland, New Zealand, 8–10 December 2015; pp. 834–837. [Google Scholar] [CrossRef]
  12. Dondi, P.; Lombardi, L.; Rocca, I.; Malagodi, M.; Licchelli, M. Multimodal workflow for the creation of interactive presentations of 360 spin images of historical violins. Multimed. Tools Appl. 2018, 77, 28309–28332. [Google Scholar] [CrossRef]
  13. Buquet, C.; Charlier, J.; Paris, V. Museum application of an eye tracker. Med. Biol. Eng. Comput. 1988, 26, 277–281. [Google Scholar] [CrossRef]
  14. Wooding, D.S.; Mugglestone, M.D.; Purdy, K.J.; Gale, A.G. Eye movements of large populations: I. Implementation and performance of an autonomous public eye tracker. Behav. Res. Methods Instrum. Comput. 2002, 34, 509–517. [Google Scholar] [CrossRef] [Green Version]
  15. Wooding, D.S. Eye movements of large populations: II. Deriving regions of interest, coverage, and similarity using fixation maps. Behav. Res. Methods Instrum. Comput. 2002, 34, 518–528. [Google Scholar] [CrossRef] [Green Version]
  16. Milekic, S. Gaze-tracking and museums: Current research and implications. In Museums and the Web 2010: Proceedings; Archives & Museum Informatics: Toronto, ON, Canada, 2010; pp. 61–70. [Google Scholar]
  17. Eghbal-Azar, K.; Widlok, T. Potentials and Limitations of Mobile Eye Tracking in Visitor Studies. Soc. Sci. Comput. Rev. 2013, 31, 103–118. [Google Scholar] [CrossRef]
  18. Villani, D.; Morganti, F.; Cipresso, P.; Ruggi, S.; Riva, G.; Gilli, G. Visual exploration patterns of human figures in action: An eye tracker study with art paintings. Front. Psychol. 2015, 6, 1636. [Google Scholar] [CrossRef] [Green Version]
  19. Calandra, D.M.; Di Mauro, D.; D’Auria, D.; Cutugno, F. E.Y.E. C.U.: An Emotional eYe trackEr for Cultural heritage sUpport. In Empowering Organizations: Enabling Platforms and Artefacts; Springer International Publishing: Cham, Switzerland, 2016; pp. 161–172. [Google Scholar] [CrossRef]
  20. Das, D.; Rashed, M.G.; Kobayashi, Y.; Kuno, Y. Supporting Human–Robot Interaction Based on the Level of Visual Focus of Attention. IEEE Trans. Hum. Mach. Syst. 2015, 45, 664–675. [Google Scholar] [CrossRef]
  21. Rashed, M.G.; Suzuki, R.; Lam, A.; Kobayashi, Y.; Kuno, Y. A vision based guide robot system: Initiating proactive social human robot interaction in museum scenarios. In Proceedings of the 2015 International Conference on Computer and Information Engineering (ICCIE), Rajshahi, Bangladesh, 26–27 November 2015; pp. 5–8. [Google Scholar] [CrossRef]
  22. Iio, T.; Satake, S.; Kanda, T.; Hayashi, K.; Ferreri, F.; Hagita, N. Human-like guide robot that proactively explains exhibits. Int. J. Soc. Robot. 2020, 12, 549–566. [Google Scholar] [CrossRef] [Green Version]
  23. Duchowski, A.T. Eye Tracking Methodology: Theory and Practice, 3rd ed.; Springer International Publishing AG: Cham, Switzerland, 2017. [Google Scholar]
  24. Velichkovsky, B.M.; Dornhoefer, S.M.; Pannasch, S.; Unema, P.J. Visual Fixations and Level of Attentional Processing. In Proceedings of the ETRA 2000 Symposium on Eye Tracking Research & Applications, Palm Beach Gardens, FL, USA, 6–8 November 2000; ACM: New York, NY, USA, 2000; pp. 79–85. [Google Scholar] [CrossRef] [Green Version]
  25. Robinson, D.A. The mechanics of human saccadic eye movement. J. Physiol. 1964, 174, 245–264. [Google Scholar] [CrossRef]
  26. Shackel, B. Pilot study in electro-oculography. Br. J. Ophthalmol. 1960, 44, 89. [Google Scholar] [CrossRef] [Green Version]
  27. Robinson, D.A. A Method of Measuring Eye Movement Using a Scleral Search Coil in a Magnetic Field. IEEE Trans. Bio-Med. Electron. 1963, 10, 137–145. [Google Scholar] [CrossRef]
  28. Mele, M.L.; Federici, S. Gaze and eye-tracking solutions for psychological research. Cogn. Process. 2012, 13, 261–265. [Google Scholar] [CrossRef] [PubMed]
  29. Popa, L.; Selejan, O.; Scott, A.; Mureşanu, D.F.; Balea, M.; Rafila, A. Reading beyond the glance: Eye tracking in neurosciences. Neurol. Sci. 2015, 36, 683–688. [Google Scholar] [CrossRef] [PubMed]
  30. Wedel, M.; Pieters, R. Eye tracking for visual marketing. Found. Trends Mark. 2008, 1, 231–320. [Google Scholar] [CrossRef] [Green Version]
  31. Cantoni, V.; Perez, C.J.; Porta, M.; Ricotti, S. Exploiting Eye Tracking in Advanced E-Learning Systems. In Proceedings of the CompSysTech ’12: 13th International Conference on Computer Systems and Technologies, Ruse, Bulgaria, 22–23 June 2012; Association for Computing Machinery: New York, NY, USA, 2012; pp. 376–383. [Google Scholar] [CrossRef]
  32. Nielsen, J.; Pernice, K. Eyetracking Web Usability; New Riders Press: Thousand Oaks, CA, USA, 2009. [Google Scholar]
  33. Mosconi, M.; Porta, M.; Ravarelli, A. On-Line Newspapers and Multimedia Content: An Eye Tracking Study. In Proceedings of the SIGDOC ’08: 26th Annual ACM International Conference on Design of Communication, Lisbon, Portugal, 22–24 September 2008; Association for Computing Machinery: New York, NY, USA, 2008; pp. 55–64. [Google Scholar] [CrossRef]
  34. Kasprowski, P.; Ober, J. Eye Movements in Biometrics. In Biometric Authentication; Maltoni, D., Jain, A.K., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; pp. 248–258. [Google Scholar]
  35. Porta, M.; Dondi, P.; Zangrandi, N.; Lombardi, L. Gaze-Based Biometrics From Free Observation of Moving Elements. IEEE Trans. Biom. Behav. Identity Sci. 2022, 4, 85–96. [Google Scholar] [CrossRef]
  36. Duchowski, A.T. Gaze-based interaction: A 30 year retrospective. Comput. Graph. 2018, 73, 59–69. [Google Scholar] [CrossRef]
  37. Majaranta, P.; Räihä, K.J. Text entry by gaze: Utilizing eye-tracking. In Text Entry Systems: Mobility, Accessibility, Universality; Morgan Kaufmann: San Francisco, CA, USA, 2007; pp. 175–187. [Google Scholar]
  38. Porta, M. A study on text entry methods based on eye gestures. J. Assist. Technol. 2015, 9, 48–67. [Google Scholar] [CrossRef]
  39. Porta, M.; Dondi, P.; Pianetta, A.; Cantoni, V. SPEye: A Calibration-Free Gaze-Driven Text Entry Technique Based on Smooth Pursuit. IEEE Trans. Hum.-Mach. Syst. 2022, 52, 312–323. [Google Scholar] [CrossRef]
  40. Kumar, C.; Menges, R.; Müller, D.; Staab, S. Chromium based framework to include gaze interaction in web browser. In Proceedings of the 26th International Conference on World Wide Web Companion, Geneva, Switzerland, 3–7 April 2017; pp. 219–223. [Google Scholar]
  41. Casarini, M.; Porta, M.; Dondi, P. A Gaze-Based Web Browser with Multiple Methods for Link Selection. In Proceedings of the ETRA ’20 Adjunct: ACM Symposium on Eye Tracking Research and Applications, Stuttgart, Germany, 2–5 June 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 1–8. [Google Scholar] [CrossRef]
  42. Davanzo, N.; Dondi, P.; Mosconi, M.; Porta, M. Playing Music with the Eyes through an Isomorphic Interface. In Proceedings of the COGAIN ’18: Workshop on Communication by Gaze Interaction, Warsaw, Poland, 15 June 2018; ACM: New York, NY, USA, 2018; pp. 5:1–5:5. [Google Scholar] [CrossRef]
  43. Valencia, S.; Lamb, D.; Williams, S.; Kulkarni, H.S.; Paradiso, A.; Ringel Morris, M. Dueto: Accessible, Gaze-Operated Musical Expression. In Proceedings of the ASSETS ’19: 21st International ACM SIGACCESS Conference on Computers and Accessibility, Pittsburgh, PA, USA, 28–30 October 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 513–515. [Google Scholar] [CrossRef] [Green Version]
  44. Jacob, R.J. Eye movement-based human-computer interaction techniques: Toward non-command interfaces. Adv. Hum.-Comput. Interact. 1993, 4, 151–190. [Google Scholar]
  45. Wobbrock, J.O.; Rubinstein, J.; Sawyer, M.W.; Duchowski, A.T. Longitudinal Evaluation of Discrete Consecutive Gaze Gestures for Text Entry. In Proceedings of the ETRA ’08: 2008 Symposium on Eye Tracking Research & Applications, Savannah, Georgia, 26–28 March 2008; ACM: New York, NY, USA, 2008; pp. 11–18. [Google Scholar] [CrossRef] [Green Version]
  46. Porta, M.; Turina, M. Eye-S: A Full-Screen Input Modality for Pure Eye-Based Communication. In Proceedings of the ETRA ’08: 2008 Symposium on Eye Tracking Research & Applications, Savannah, Georgia, 26–28 March 2008; ACM: New York, NY, USA, 2008; pp. 27–34. [Google Scholar] [CrossRef]
  47. Istance, H.; Bates, R.; Hyrskykari, A.; Vickers, S. Snap clutch, a moded approach to solving the Midas touch problem. In Proceedings of the 2008 Symposium on Eye Tracking Research & Applications, Savannah, Georgia, 26–28 March 2008; pp. 221–228. [Google Scholar] [CrossRef]
  48. Królak, A.; Strumiłło, P. Eye-blink detection system for human–computer interaction. Univ. Access Inf. Soc. 2012, 11, 409–419. [Google Scholar] [CrossRef] [Green Version]
  49. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
  50. Majaranta, P.; Aoki, H.; Donegan, M.; Witzner Hansen, D.; Hansen, J.P. Gaze Interaction and Applications of Eye Tracking: Advances in Assistive Technologies; IGI Global: Hershey, PA, USA, 2011. [Google Scholar]
  51. Zeng, Z.; Neuer, E.S.; Roetting, M.; Siebert, F.W. A One-Point Calibration Design for Hybrid Eye Typing Interface. Int. J. Hum. Comput. Interact. 2022, 1–14. [Google Scholar] [CrossRef]
  52. Nagamatsu, T.; Fukuda, K.; Yamamoto, M. Development of Corneal Reflection-Based Gaze Tracking System for Public Use. In Proceedings of the PerDis ’14: International Symposium on Pervasive Displays, Copenhagen, Denmark, 3–4 June 2014; Association for Computing Machinery: New York, NY, USA, 2014; pp. 194–195. [Google Scholar] [CrossRef]
  53. Cantoni, V.; Merlano, L.; Nugrahaningsih, N.; Porta, M. Eye Tracking for Cultural Heritage: A Gaze-Controlled System for Handless Interaction with Artworks. In Proceedings of the CompSysTech ’16: 17th International Conference on Computer Systems and Technologies 2016, Palermo, Italy, 23–24 June 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 307–314. [Google Scholar] [CrossRef]
  54. Cantoni, V.; Dondi, P.; Lombardi, L.; Nugrahaningsih, N.; Porta, M.; Setti, A. A Multi-Sensory Approach to Cultural Heritage: The Battle of Pavia Exhibition. IOP Conf. Ser. Mater. Sci. Eng. 2018, 364, 012039. [Google Scholar] [CrossRef]
  55. Mokatren, M.; Kuflik, T.; Shimshoni, I. A Novel Image Based Positioning Technique Using Mobile Eye Tracker for a Museum Visit. In Proceedings of the MobileHCI ’16: 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct, Florence, Italy, 6–9 September 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 984–991. [Google Scholar] [CrossRef]
  56. Mokatren, M.; Kuflik, T.; Shimshoni, I. Exploring the Potential Contribution of Mobile Eye-Tracking Technology in Enhancing the Museum Visit Experience. In Proceedings of the AVI*CH, Bari, Italy, 7–10 June 2016; pp. 23–31. [Google Scholar]
  57. Mokatren, M.; Kuflik, T.; Shimshoni, I. Exploring the potential of a mobile eye tracker as an intuitive indoor pointing device: A case study in cultural heritage. Future Gener. Comput. Syst. 2018, 81, 528–541. [Google Scholar] [CrossRef]
  58. Piening, R.; Pfeuffer, K.; Esteves, A.; Mittermeier, T.; Prange, S.; Schröder, P.; Alt, F. Looking for Info: Evaluation of Gaze Based Information Retrieval in Augmented Reality. In Human-Computer Interaction—INTERACT 2021; Ardito, C., Lanzilotti, R., Malizia, A., Petrie, H., Piccinno, A., Desolda, G., Inkpen, K., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 544–565. [Google Scholar] [CrossRef]
  59. Giariskanis, F.; Kritikos, Y.; Protopapadaki, E.; Papanastasiou, A.; Papadopoulou, E.; Mania, K. The Augmented Museum: A Multimodal, Game-Based, Augmented Reality Narrative for Cultural Heritage. In Proceedings of the IMX ’22: ACM International Conference on Interactive Media Experiences, Aveiro, Portugal, 22–24 June 2022; Association for Computing Machinery: New York, NY, USA, 2022; pp. 281–286. [Google Scholar] [CrossRef]
  60. Toyama, T.; Kieninger, T.; Shafait, F.; Dengel, A. Gaze Guided Object Recognition Using a Head-Mounted Eye Tracker. In Proceedings of the ETRA ’12: Symposium on Eye Tracking Research and Applications, Santa Barbara, CA, USA, 28–30 March 2012; Association for Computing Machinery: New York, NY, USA, 2012; pp. 91–98. [Google Scholar] [CrossRef] [Green Version]
  61. Schuchert, T.; Voth, S.; Baumgarten, J. Sensing Visual Attention Using an Interactive Bidirectional HMD. In Proceedings of the Gaze-In ’12: 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction, Santa Monica, CA, USA, 26 October 2012; Association for Computing Machinery: New York, NY, USA, 2012. [Google Scholar] [CrossRef]
  62. Yang, J.; Chan, C.Y. Audio-Augmented Museum Experiences with Gaze Tracking. In Proceedings of the MUM ’19: 18th International Conference on Mobile and Ubiquitous Multimedia, Pisa, Italy, 26–29 November 2019; Association for Computing Machinery: New York, NY, USA, 2019. [Google Scholar] [CrossRef]
  63. Dondi, P.; Porta, M.; Donvito, A.; Volpe, G. A gaze-based interactive system to explore artwork imagery. J. Multimodal User Interfaces 2022, 16, 55–67. [Google Scholar] [CrossRef]
  64. Al-Thani, L.K.; Liginlal, D. A Study of Natural Interactions with Digital Heritage Artifacts. In Proceedings of the 2018 3rd Digital Heritage International Congress (DigitalHERITAGE) Held Jointly with 2018 24th International Conference on Virtual Systems & Multimedia (VSMM 2018), San Francisco, CA, USA, 26–30 October 2018; pp. 1–4. [Google Scholar] [CrossRef]
  65. Raptis, G.E.; Kavvetsos, G.; Katsini, C. MuMIA: Multimodal Interactions to Better Understand Art Contexts. Appl. Sci. 2021, 11, 2695. [Google Scholar] [CrossRef]
  66. Porta, M.; Caminiti, A.; Dondi, P. GazeScale: Towards General Gaze-Based Interaction in Public Places. In Proceedings of the ICMI ’22: 2022 International Conference on Multimodal Interaction, Bengaluru, India, 7–11 November 2022; Association for Computing Machinery: New York, NY, USA, 2022; pp. 591–596. [Google Scholar] [CrossRef]
  67. Zeng, Z.; Liu, S.; Cheng, H.; Liu, H.; Li, Y.; Feng, Y.; Siebert, F. GaVe: A webcam-based gaze vending interface using one-point calibration. J. Eye Mov. Res. 2023, 16. [Google Scholar] [CrossRef]
  68. Mu, M.; Dohan, M. Community Generated VR Painting Using Eye Gaze. In Proceedings of the MM ’21: 29th ACM International Conference on Multimedia, Virtual Event, China, 20–24 October 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 2765–2767. [Google Scholar] [CrossRef]
  69. Pathirana, P.; Senarath, S.; Meedeniya, D.; Jayarathna, S. Eye gaze estimation: A survey on deep learning-based approaches. Expert Syst. Appl. 2022, 199, 116894. [Google Scholar] [CrossRef]
  70. Plopski, A.; Hirzle, T.; Norouzi, N.; Qian, L.; Bruder, G.; Langlotz, T. The Eye in Extended Reality: A Survey on Gaze Interaction and Eye Tracking in Head-Worn Extended Reality. ACM Comput. Surv. 2022, 55, 1–39. [Google Scholar] [CrossRef]
Figure 1. Example of eye detection with the Gazepoint GP3 HD eye tracker: above, the eyes detected within the face; below, pupil/corneal reflections.
Figure 2. Examples of remote and wearable eye trackers. On the left, highlighted in red, a Tobii 4C (by Tobii) remote device; on the right, a Pupil Core (by Pupil Labs) wearable tool (photos taken in our laboratory).
Figure 3. PRISMA flow diagram.
Figure 4. Temporal distribution of the selected studies.
Figure 5. Venn diagram of the considered categories [52,53,54,55,56,57,58,59,60,61,62,63,64,65].
Table 1. Categorization of papers according to the considered scenario, ordered by year.
Category              Studies
Real Museum           [52,53,54,55,56,57,58,59]
Simulated Museum      [60,61,62,63]
Virtual Museum        [64,65]
Table 2. Categorization of papers according to the type of application, ordered by year.
Category              Studies
AR                    [55,56,57,58,59,60,61,62]
VR (non-immersive)    [52,53,54,63,64,65]
Table 3. Categorization of papers according to the type of eye tracker employed, ordered by year.
Category              Studies
Wearable Eye Tracker  [55,56,57,58,59,60,61,62]
Remote Eye Tracker    [52,53,54,63,64,65]
Table 4. Categorization of papers according to the type(s) of input used, ordered by year.
Category                 Studies
Gaze only                [52,53,54,58,60,61,62,63,64]
Gaze and Voice           [65]
Gaze and Gesture         [55,56,57]
Gaze, Voice and Gesture  [59]
Table 5. Summary of the distinctive pros and cons of wearable and remote eye trackers.
Type of ET   Pros                  Cons
Wearable     Freedom of movement   Hygienic risk
                                   May be uncomfortable to wear
                                   Usually more expensive
Remote       Hygienic solution     May require user to stay relatively still
             Usually cheaper
Table 6. Summary of possible future trends for gaze-based museum applications.
Goal                                 How the Goal Can Be Achieved
Removing/reducing calibration        Designing interfaces with large target elements (see the sketch after this table)
                                     Implementing quick "rough" calibration (e.g., one-point calibration)
                                     Exploiting new AI-based gaze estimation methods
Creating immersive VR applications   Applying existing immersive VR gaze-based interaction methods (e.g., those employed in video games)
                                     Promoting the digitization of artworks (with the consequent creation of more virtual museums)
Consolidating AR applications        Extensively testing and enhancing existing AR approaches in museums (e.g., regarding robustness and responsiveness)
Implementing personalized tours      Performing real-time processing of visitors' gaze patterns
                                     Providing recommendations based on visitors' interest (e.g., suggesting artwork similar to those previously observed)
Improving museum accessibility       Carrying out tests with motor-impaired people
                                     Contextualizing existing gaze-based assistive applications to the museum environment
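
To make two of the rows in Table 6 more concrete, the minimal Python sketch below shows how an interface built from a few large on-screen targets can use dwell-time selection (which tolerates the residual error of a rough, e.g., one-point, calibration) while also accumulating per-artwork attention that a personalized tour or recommendation step could later exploit. The region layout, dwell threshold and simulated gaze stream are illustrative assumptions of ours, not parameters or code taken from any of the reviewed systems.

```python
# Minimal sketch (our illustration, not code from any reviewed system):
# dwell-based selection over a few large screen regions, plus per-artwork
# attention accumulation that could feed a simple recommender.
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Region:
    """A rectangular target in normalized screen coordinates [0, 1]."""
    name: str
    x: float   # top-left corner
    y: float
    w: float
    h: float

    def contains(self, gx: float, gy: float) -> bool:
        return self.x <= gx <= self.x + self.w and self.y <= gy <= self.y + self.h


@dataclass
class DwellSelector:
    """Selects a region when gaze stays inside it for `dwell_ms` milliseconds.

    Large regions make the interaction tolerant to calibration error, which is
    why interfaces with big targets can work with a quick one-point calibration.
    """
    regions: list
    dwell_ms: float = 800.0                            # assumed dwell threshold
    attention_ms: dict = field(default_factory=dict)   # accumulated looking time per region
    _current: Region | None = None
    _entered_at: float | None = None
    _last_t: float | None = None

    def update(self, gx: float, gy: float, t_ms: float) -> Region | None:
        """Feed one gaze sample; return the selected Region, or None."""
        hit = next((r for r in self.regions if r.contains(gx, gy)), None)
        if hit is not self._current:                   # entered another region (or left all regions)
            self._current, self._entered_at, self._last_t = hit, t_ms, t_ms
            return None
        if hit is None:
            return None
        # Accumulate looking time for this region (useful for later recommendations).
        self.attention_ms[hit.name] = self.attention_ms.get(hit.name, 0.0) + (t_ms - self._last_t)
        self._last_t = t_ms
        if t_ms - self._entered_at >= self.dwell_ms:
            self._entered_at = t_ms                    # re-arm so one dwell yields a single selection
            return hit
        return None


if __name__ == "__main__":
    # Two hypothetical artworks, each covering half of the screen.
    selector = DwellSelector(regions=[
        Region("Artwork A", 0.0, 0.0, 0.5, 1.0),
        Region("Artwork B", 0.5, 0.0, 0.5, 1.0),
    ])
    # Simulated ~60 Hz gaze stream: about 1 s on Artwork A, then 0.5 s on Artwork B.
    samples = [(0.25, 0.5, i * 16.7) for i in range(60)] + \
              [(0.75, 0.5, 1000 + i * 16.7) for i in range(30)]
    for gx, gy, t in samples:
        selected = selector.update(gx, gy, t)
        if selected:
            print(f"Dwell selection on {selected.name} at t = {t:.0f} ms")
    print("Attention (ms):", {k: round(v) for k, v in selector.attention_ms.items()})
```

In a real installation, the simulated samples would be replaced by the gaze stream delivered by the eye tracker's SDK, and the accumulated attention values could be matched against artwork metadata to suggest items similar to those the visitor observed the longest.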