Article

A Touchscreen-Based, Multiple-Choice Approach to Cognitive Enrichment of Captive Rhesus Macaques (Macaca mulatta)

1 Cognitive Neuroscience Laboratory, German Primate Center, 37077 Goettingen, Germany
2 Leibniz-Science Campus Primate Cognition, 37077 Goettingen, Germany
3 Population and Behavioral Health Services, California National Primate Research Center, University of California, Davis, CA 95817, USA
4 Faculty for Biology and Psychology, Goettingen University, 37073 Goettingen, Germany
* Author to whom correspondence should be addressed.
Author to whom correspondence should be addressed.
Animals 2023, 13(17), 2702; https://doi.org/10.3390/ani13172702
Submission received: 28 July 2023 / Revised: 18 August 2023 / Accepted: 22 August 2023 / Published: 24 August 2023
(This article belongs to the Special Issue Care Strategies of Non-Human Primates in Captivity)

Simple Summary

Over the last decades, animal welfare science has established that regular access to sensory, motor, and cognitive stimulation significantly improves captive animals’ well-being. In primates in particular, cognitive enrichment protocols have been crucial in alleviating boredom and, more generally, several symptoms of compromised well-being. Despite this, cognitive enrichment practices have not received the same level of attention as structural and social enrichment. Consequently, captive animals are usually given ample climbing or grooming opportunities but are less frequently provided with intellectual challenges. This is especially problematic for primates and other species with high cognitive abilities and demands. To build on recent scientific and technological progress, we developed a multiple-choice interface for touchscreen devices tailored to rhesus macaques housed at the facility of the Cognitive Neuroscience Laboratory of the German Primate Center. The interface allows the animals to flexibly choose between three tasks on a trial-by-trial basis, letting them switch activities as desired. Our animals showed consistent task preferences across time of day and weekly sessions, and performed their chosen tasks proficiently. We believe that a multiple-choice approach can increase animal well-being by giving captive animals more opportunities to control their own environment, while simultaneously providing researchers with a reliable and scalable method for cognitive assessment and animal training.

Abstract

Research on the psychological and physiological well-being of captive animals has focused on investigating different types of social and structural enrichment. Consequently, cognitive enrichment has been understudied, despite its promising external validity, comparability, and applicability. To fill this gap, we developed an interactive, multiple-choice interface for cage-mounted touchscreen devices that rhesus monkeys (Macaca mulatta) can freely interact with from within their home enclosure at the Cognitive Neuroscience Laboratory of the German Primate Center. The multiple-choice interface offers interchangeable activities that animals can choose and switch between. We found that all 16 captive rhesus macaques tested consistently engaged with the multiple-choice interface across six weekly sessions, with 12 of them exhibiting clear task preferences and displaying proficiency in performing the selected tasks. Our approach does not require social separation or dietary restriction and is intended to increase animals’ sense of competence and agency by providing them with more control over their environment. Thanks to its high level of automation, our multiple-choice interface can easily be incorporated as a standard cognitive enrichment practice across different facilities and institutes working with captive animals, particularly non-human primates. We believe that the multiple-choice interface is a sustainable, scalable, and pragmatic protocol for enhancing cognitive well-being and animal welfare in captivity.

1. Introduction

Over the last three decades, considerable technological, experimental, and theoretical advancements have contributed to the refinement of cognitive assessment and enrichment of non-human primates (NHPs) in captivity. There is now a substantial scientific literature at the intersection of neuroscience, cognitive science, and experimental psychology from which autonomous, cage-based, computerized testing and training protocols have emerged. Such protocols can conduct cognitive assessments of NHPs and provide them with cognitive enrichment simultaneously [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]. NHPs, who possess a rich set of cognitive abilities, often encounter only simple intellectual challenges in captivity. Notably, this mismatch between the level of skill possessed by an individual and the level of challenge provided has been proposed as the main factor underlying boredom [16,17]. In turn, boredom is associated with elevated levels of self-directed behaviors such as hair-plucking, stereotypies, and several other hallmarks of impaired well-being [18,19]. Computerized, cage-based, interactive systems can alleviate boredom by providing dynamic, autonomous, adaptable, and scalable cognitive enrichment [20]. Furthermore, these cage-based interactive systems have several other welfare-related and scientific advantages for NHPs. (1) NHPs can interact with such devices from their home enclosure while in auditory and olfactory (and sometimes even physical) contact with their group mates [3,4,5,15,21], and (2) they can do so at their own pace, with either limited or no physical constraint [22]. Such spontaneous, self-regulated engagement has been suggested to enhance the animal’s sense of agency and competence, resulting in improved psychological well-being [1,9,23,24].
(3) A wide range of paradigms can be applied and improve data quality, whereby issues such as test-retest reliability (i.e., inconsistent results across multiple repetitions) and construct validity (i.e., appropriateness of the assessment with respect to the construct of interest) can be remediated (see [25,26] for comprehensive description on general statistical concerns regarding traditional cognitive testing in NHPs). (4) Testing and training protocols can be quickly evaluated for efficacy and fine-tuned if needed. (5) The advent of animal identification [3,5,27] allows for automated adjustments of cognitive assessment and/or enrichment protocols with respect to changes in the individual’s level of engagement and proficiency. Therefore, such flexible protocols can facilitate highly individualized autonomous training [5,28], as well as cognitive assessment and enrichment.
Accumulating evidence supports the idea that animals, including NHPs, are able to choose between tasks and/or prefer having the opportunity to do so [4,29,30,31,32,33,34,35,36,37,38,39]. There is indeed ample evidence that being able to control and choose how to engage enhances the sense of agency and competence, an aspect often lacking in captive environments. For example, Perdue and colleagues (2014) presented capuchin monkeys (Cebus imitator) and rhesus macaques (Macaca mulatta) with the option of choosing the order in which they would subsequently conduct a series of tasks (i.e., free choice) or conducting the same tasks in a predetermined order (i.e., forced choice). Two sets of predetermined orders were created: a randomized order and an order mimicking the animal’s previous order selection. While reward probability and task demands were independent of the animals’ choice, all animals tested (six capuchin and five rhesus monkeys) preferred the free-choice over the forced-choice option, although with high inter-individual variability. The authors suggested that the NHPs preferred to choose because having and making a choice is an act of control over one’s environment. This interpretation is in line with existing literature linking control over one’s environment with psychological well-being [34,40,41,42,43]. Beyond welfare-related benefits, choice paradigms with non-human primates have proven experimentally advantageous, allowing researchers, for example, to assess chromatic preferences in rhesus macaques [31] and food preferences in apes and monkeys [32], as well as to uncover contrafreeloading in Japanese macaques, a behavior in which animals “work to obtain food even though identical food is freely available” [33].
Building on the literature describing the beneficial effect of choice availability on animal welfare, we argue that integrating choice into cage-based, computerized protocols aligns cognitive enrichment and cognitive assessment even further. To this end, we designed a multiple-choice interface (MCI) and tested captive rhesus macaques with our autonomous, cage-based touchscreen system, the eXperimental Behavioral Instrument [4,28]. Using the MCI, animals could select between three tasks on a trial-by-trial basis: a reach task in which a fixed circle needed to be touched (static reach); a dynamic version of the reach task in which the circle bounced around the screen until touched (dynamic reach); and a task in which one picture (from a pool of 126 pictures of NHPs) was shown to the animal (picture viewing). While the first two tasks delivered a fluid reward upon a successful touch of the circle, we considered picture viewing to be rewarding on its own and therefore did not program this task to deliver a fluid reward. We quantified (1) the level of engagement and proficiency of 16 male rhesus macaques with the MCI protocol, and (2) the animals’ frequency of choice for each task depending on the position of the task stimulus on the MCI, the time of day, and the session. Ultimately, by quantifying the animals’ level and style of engagement, we assessed the feasibility of the MCI protocol as a cognitive enrichment tool. Animals operating the device purposefully and proficiently would suggest that a multiple-choice interface is indeed a suitable tool for providing cognitive enrichment and cognitive assessment to captive NHPs.

2. Materials and Methods

Research with non-human primates represents a small but indispensable component of neuroscience research. The scientists in this study are aware of, and committed to, the great responsibility they bear in ensuring the best possible science with the least possible harm to the animals [44,45]. This study is part of our corresponding efforts [46,47,48].

2.1. Animals and Housing

The study was conducted on 16 adult male rhesus macaques (Macaca mulatta, 5–17 years of age, mean 10 years) belonging to seven social groups (two to four individuals per group, see Table 1), all already capable of performing various cognitive training paradigms. Data collection occurred on days on which the animals were not involved in their main experimental routine. All animals were housed, in accordance with all applicable German and European regulations, in the facility of the Cognitive Neuroscience Laboratory at the German Primate Center (DPZ) in Goettingen, Germany. The facility provides the animals with an enriched environment, including a multitude of toys and wooden structures and access to outdoor and indoor space with both natural and artificial light, and exceeds the size requirements of the European regulations. The light cycle in the facility is automatically controlled to achieve a daily 12 h light/dark cycle (lights on from 7:00 to 19:00). During data collection, animals had free access to water and monkey chow and were provided with fresh food (e.g., fruits, vegetables, nuts) between 12:30 and 14:00. All animals had participated in previous cognitive experiments using the same cage-mounted device described in this study.

2.2. Apparatus

The eXperimental Behavioral Instrument (XBI) [4,5,28,49] is mounted directly on the animals’ enclosure and can autonomously run cognitive experiments as well as various enrichment protocols (Figure 1). The device can operate stand-alone for several hours, requires little to no human supervision, needs minimal weekly maintenance, provides video monitoring and recording of each session, and is fully integrated into the DPZ’s local area network. The device comprises a centralized computational unit (MacBook Air, macOS Catalina 10.15), a microcontroller (Teensy 3.5) to acquire touchscreen input and operate the reward systems, a touchscreen (15 inches; 30.4 cm by 22.8 cm), two peristaltic pumps, and a custom-made mouthpiece (placed 24 cm from the touchscreen) for reward delivery. In this study, pictures of the animals were taken at the beginning of every trial by a camera placed above the touchscreen, in order to associate each trial outcome with the correct animal using a custom-made Matlab script (Matlab 2020b©, The MathWorks, Inc., Natick, MA, USA).

2.3. Experimental Paradigm

During each session, the XBIs ran an interactive multiple-choice interface, developed as an extension of a proof of concept exploring multiple-choice behavior in one adult male rhesus macaque [4]. The MCI comprises multiple tasks from which the animal can freely choose on each trial throughout the session. Importantly, the type and nature of each task in the MCI can be tailored to address specific experimental questions or provide specific enrichment activities. In the current study, we investigated the feasibility of using a multiple-choice interface as cognitive enrichment by providing three tasks to the animals: two versions of a motor task that were rewarded with diluted juice for correct responses, and a picture viewing task that delivered no fluid reward. Each trial was initiated by the animal selecting one of three stimuli (from here on referred to as ‘buttons’) at the bottom of the screen (i.e., the task bar; see Figure 1). The buttons (5 degrees of visual angle in diameter) differed in shape and were arranged horizontally. Each button shape was associated with one task and, if touched, triggered the start of the corresponding task. Regardless of the task selected, a picture of the animal was taken after a button was touched and before the corresponding trial was loaded. These pictures were used for manual animal identification after the session was complete. In a pilot version of the experiment, in which two groups comprising four animals participated, the arrangement of the buttons was randomized on every trial. All other animals underwent a version of the experiment in which the buttons’ arrangement was pseudorandomized every 60 min instead of on every trial. This modification was meant to expose the animals to each configuration for longer, promoting learning and helping disentangle task preferences from side biases (potentially due to handedness).
The MCI was programmed and run with the open-source software MWorks (http://mworks-project.org, version 0.10, 1 July 2023). The two motor tasks (static reach and dynamic reach) were structured in the same way. Upon task selection and trial initiation, the unselected buttons disappeared and a red circle (target) of variable size (5 to 10 degrees in diameter, with corresponding chance levels of 7.33% to 30.24%, respectively) appeared at a random location on the screen. Animals were required to touch the target to correctly complete the trial (hit), which triggered acoustic feedback (‘ding’) and 0.37 mL of fluid reward. Touching anywhere outside the target and inside the gray background (excluding the task bar) resulted in a failure, which triggered different acoustic feedback (‘error’) but no fluid reward. If neither the target nor the background was touched within 5 s of target onset (timeout), the trial was considered ignored and was aborted without acoustic feedback or fluid reward. Regardless of the trial outcome, the home screen and task bar appeared again after an inter-trial interval (randomized between 1.5 and 2.5 s) so that the animal could choose again. In the static reach task (the circle, i.e., Button 1), the target was stationary; in the dynamic reach task (the square, i.e., Button 2), the target bounced around the screen at a variable speed (randomized between 10 and 30 degrees/second on a trial-by-trial basis) until the end of the trial. If the picture viewing task was selected (the rhombus, i.e., Button 3), a random NHP picture (from a pool of 126) was presented in the middle of the touchscreen for 5 s. This task did not require any additional action by the animal and did not deliver any fluid reward. All pictures displayed a variety of NHP species performing a wide range of behaviors in naturalistic and semi-naturalistic environments at the DPZ or its field stations.
All pictures were shown on a gray background, directly above the task bar (Figure 1).
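To make the trial logic concrete, here is a minimal Python sketch of the flow described above. All names, constants, and the simplified control structure are ours (hypothetical), not the authors’ actual MWorks implementation:

```python
import random

# Button shapes and their associated tasks, as described in the text.
TASKS = {"circle": "static_reach", "square": "dynamic_reach", "rhombus": "picture_viewing"}
REWARD_ML = 0.37          # fluid reward per correct touch (hit)
TIMEOUT_S = 5.0           # response window after target onset
ITI_RANGE_S = (1.5, 2.5)  # randomized inter-trial interval

def run_trial(selected_button, touch_result):
    """Resolve one trial given the button the animal touched and a simulated
    touch outcome ('hit', 'failure', or None for a timeout)."""
    task = TASKS[selected_button]
    if task == "picture_viewing":
        return "picture_shown", 0.0   # picture shown for 5 s; no fluid reward
    if touch_result == "hit":
        return "hit", REWARD_ML       # 'ding' feedback plus fluid reward
    if touch_result == "failure":
        return "failure", 0.0         # 'error' feedback, no reward
    return "ignored", 0.0             # timeout: no feedback, no reward

def shuffle_buttons(buttons, last_shuffle_s, now_s, period_s=3600):
    """Pseudorandomize the task-bar arrangement every 60 min (non-pilot design)."""
    if now_s - last_shuffle_s >= period_s:
        return random.sample(buttons, k=len(buttons)), now_s
    return buttons, last_shuffle_s
```

The sketch captures the two reward contingencies (rewarded motor tasks, unrewarded picture viewing) and the hourly button reshuffle used to dissociate task preference from side bias.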

2.4. Experimental Sessions

Experimental sessions occurred on Saturdays, when the animals’ main experimental routine did not take place, in parallel with standard weekend husbandry routines. Sessions started between 8:00 and 9:30 and stopped between 16:30 and 18:00, lasting 8 h on average (minimum 7 h 39 min; maximum 10 h 2 min), for six consecutive Saturdays. All animals were fed between 12:30 and 14:00. For other experimental reasons, one group was instead tested over six consecutive days (Saturday to Friday, with a break on Sunday). Most sessions occurred in parallel, with three to five XBI devices running simultaneously, autonomously, and unsupervised.

2.5. Manual Animal Identification for Data Curation

A picture from the front camera was taken every time a button was selected by an animal. Pictures were then used offline to identify which animal initiated the trial. This information was ultimately appended to the curated data frame so that data analysis could be conducted by animal identity. A custom-made Matlab app (Matlab 2020b©, The MathWorks, Inc., Natick, MA, USA) was programmed ad hoc for the offline labeling of each picture, at a speed of ~50 pictures per minute.

2.6. Data Analysis

Besides the analysis on task preference described below, all data were curated in Matlab (Matlab 2020b©, The MathWorks, Inc., Natick, MA, USA) and the analysis of performance, as well as visualization, was conducted in Python 3.9.

2.6.1. General Engagement Analysis

To estimate the uniformity of engagement within each session, across animals and sessions, we (1) normalized the time of each trial initiation to the time of the session end (Figure 2b) and (2) computed the median of each of the resulting distributions to (3) ultimately identify the proportion of each session at which animals had performed half of their trials (Figure 2c; see also [5,49,50]). A distribution centered at 0.5 would indicate that animals performed as many trials before as after the session midpoint. The resulting distribution is centered at 0.47 (Q1 = 0.14; Q3 = 0.76), which is significantly different from 0.5 (two-sided one-sample t-test; p-value = 0.0115, t = −2.58, N = 92), indicating that animals engaged more in the first half of the session than in the second half (Figure 2c).
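This analysis can be sketched in a few lines of Python. The function and toy values below are illustrative (the study's actual data comprise 92 animal-session medians); the test itself matches the two-sided one-sample t-test described above:

```python
import numpy as np
from scipy import stats

def session_median_position(trial_times_s, session_end_s):
    """Normalize trial-initiation times to the session end and return the
    median; 0.5 means as many trials before as after the session midpoint."""
    return float(np.median(np.asarray(trial_times_s) / session_end_s))

# One median per animal-session (92 in the study); toy values for illustration.
medians = np.array([0.47, 0.30, 0.60, 0.41, 0.52, 0.38, 0.45, 0.50])

# Two-sided one-sample t-test against 0.5 (uniform engagement).
t_stat, p_value = stats.ttest_1samp(medians, popmean=0.5)
```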

2.6.2. Task Preference Analysis

To investigate whether the animals exhibited a preference for one of the three tasks, we fitted a Bayesian Generalized Linear Mixed Model (GLMM) with the family specified as ‘categorical’ (picture, static, or dynamic task; the picture task was the reference category) and a logit link function. We included the position of the button (left, middle, or right), time of day, and session as predictors to account for potential hand preferences, variation in engagement throughout the day, and the development of preferences over time, respectively. Additionally, we evaluated correlations between all variables (all Pearson correlation coefficients were below 0.5), and covariates were z-transformed to a mean of 0 and a standard deviation of 1 to provide more comparable estimates [51,52]. As random effects, we included animal identity, group identity, and day in the model, with all possible random slopes, to keep type I error rates at the nominal level of 0.05 [52,53]. Data were analyzed with the ‘brms’ package (version 2.16.3 [54]) in R (version 4.1.2; R Core Team, 2021), which in turn makes use of ‘Stan’, a reference computational framework for fitting Bayesian models [54]. Each model was run using four Markov Chain Monte Carlo (MCMC) chains of 2500 iterations, including 1000 warm-up iterations per chain. We checked the convergence diagnostics of the model and found no divergent transitions; all R-hat values were equal to 1.00, and visual inspection of the plotted chains confirmed convergence. We used weakly informative priors to improve convergence, avoid overfitting, and regularize parameter estimates [55]. The prior for each intercept was a normal distribution with a mean of 0 and a standard deviation of 1. For the beta coefficients, we used a normal prior with a mean of 0 and a standard deviation of 0.5.
For the standard deviations of the group-level (random) effects, we used an exponential prior with rate parameter 1. Lastly, we used an LKJ Cholesky prior with shape parameter 2 for the correlations between random slopes.
As we were interested in individual differences in task preference, we examined the estimates and credible intervals of the random effects of animal identity in the Bayesian categorical GLMM (see Figure 3). We considered that there was evidence of individual task preference when at least one of the three tasks had an estimate and credible interval above those of at least one other task. Model estimates are reported as the mean of the posterior distribution with 95% credible intervals.
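The preference criterion (one task's estimate and credible interval lying above those of another task) can be illustrated with a small sketch on simulated posterior samples. The data and names below are toy examples of ours; in the study, the posterior samples come from the brms fit:

```python
import numpy as np

rng = np.random.default_rng(0)

def credible_interval(samples, width=0.95):
    """Equal-tailed credible interval from posterior samples."""
    tail = (1.0 - width) / 2.0 * 100.0
    lo, hi = np.percentile(samples, [tail, 100.0 - tail])
    return float(lo), float(hi)

def shows_preference(posteriors):
    """True if at least one task's posterior mean AND credible interval lie
    above those of at least one other task (the criterion described above)."""
    summary = {t: (s.mean(), *credible_interval(s)) for t, s in posteriors.items()}
    for a, (mean_a, lo_a, _) in summary.items():
        for b, (mean_b, _, hi_b) in summary.items():
            if a != b and mean_a > mean_b and lo_a > hi_b:
                return True
    return False

# Toy posterior samples for each task's selection tendency (log-odds scale).
posteriors = {
    "static": rng.normal(1.5, 0.2, 4000),
    "dynamic": rng.normal(0.0, 0.2, 4000),
    "picture": rng.normal(0.1, 0.2, 4000),
}
```

Here the simulated "static" posterior sits clearly above the other two, so the criterion would register a task preference; overlapping posteriors would not.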

2.6.3. Task Proficiency Analysis

To investigate animals’ performance in the static and dynamic tasks, a series of Pearson correlation tests was run with the Python package statsmodels (version 0.13.0) between the average hit rate (adjusted for the size-dependent change in chance level) of each animal at each speed value. Across the stimulus sizes of 5, 6, 7, 8, 9, and 10 degrees of visual angle, chance levels were estimated to be 7.33%, 10.53%, 14.41%, 18.81%, 23.99%, and 30.24%, respectively, and interpreted as the likelihood of touching the stimulus by chance, based on the stimulus (hit) to background (failure) surface ratio. As the picture viewing task required no interaction from the animal and could not be terminated before 5 s had passed, throughout the data analysis we only quantified the number of times this task was selected, to assess animal choice behavior in the task preference analysis described above.
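The chance-level adjustment and the per-animal speed correlation can be sketched as follows. This is an illustration with toy hit rates, using SciPy for brevity rather than the statsmodels package used in the paper; the chance levels are those reported above:

```python
import numpy as np
from scipy import stats

# Reported chance levels for stimulus sizes of 5-10 degrees of visual angle
# (probability of touching the stimulus by chance, derived from the
# stimulus-to-background surface ratio).
CHANCE = {5: 0.0733, 6: 0.1053, 7: 0.1441, 8: 0.1881, 9: 0.2399, 10: 0.3024}

def adjusted_hit_rate(hit_rate, size_deg):
    """Subtract the size-dependent chance level from the raw hit rate."""
    return hit_rate - CHANCE[size_deg]

# Per-animal proficiency in the dynamic task: correlate the average adjusted
# hit rate with target speed (toy averages at a 5-degree stimulus).
speeds = np.array([10, 15, 20, 25, 30])                  # degrees/second
avg_hit_rates = np.array([0.90, 0.80, 0.72, 0.60, 0.55])
adjusted = np.array([adjusted_hit_rate(h, 5) for h in avg_hit_rates])
r, p = stats.pearsonr(speeds, adjusted)   # negative r: slower targets are easier
```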

3. Results

3.1. General Engagement

We first evaluated whether rhesus macaques’ engagement remained stable across sessions or declined as novelty effects faded (fluid/food control and social separation were not applied in this study). On average, animals conducted 418 trials per session (Q1 = 239; Q3 = 806; Figure 2a), with no evidence of a significant decrease between the first and the last session (assessed through an animal-wise partial correlation that accounted for variations in session duration; Table 1). Within sessions, however, we observed a decrease in trial numbers (two-sided t-test; p-value = 0.0115, t = −2.58, N = 92), indicating that animals engaged more in the first half of the session than in the second half (Figure 2c; see Methods Section 2.6.1 as well as previous publications [5,49,50]).

3.2. Task Preference

After establishing that the animals consistently interacted with the MCI, we assessed task preference across all animals and button positions by quantifying which task was selected most often (see Methods Section 2.6.2). We found evidence that 12 out of 16 animals (75%) showed a consistent task preference across button positions on the screen, time of day, and session. Of those indicating a preference, 10 animals (~83%) preferred the static task, while two animals (ni and sa) preferred the dynamic task. Further results of the Bayesian model can be found in the Supplementary Materials.

3.3. Task Proficiency

Having established that most animals exhibited a task preference with the multiple-choice interface, we evaluated whether such engagement was purposeful or casual [18,56], and by extension, whether animals engaged with the challenge using their own competence and could potentially develop mastery. We found that the hit rate in the static task across all animals was already above chance at the smallest stimulus size we tested (5 degrees of visual angle), and the hit rate increased above chance level with increasing stimulus size (Figure 4a). We used the adjusted hit rate (subtracting the chance level from the hit rate for each size independently) in all subsequent analyses to obtain a measure of performance unaffected by relative changes in chance level due to stimulus size. We then evaluated modulations of the adjusted hit rate in the dynamic task by the different speed values we used (10 to 30 degrees of visual angle per second) and across consecutive sessions (Figure 4b). We found a significant modulation of the adjusted hit rate by speed for 8 out of 16 animals, no modulation for three animals, and an insufficient number of trials for the remaining five animals (see Table 1). Nonetheless, across all animals, small to medium stimuli paired with low speeds elicited the highest adjusted hit rates in the dynamic task (Figure 4c). From a psychophysical point of view, the optimal stimulus parameters for our animals were a size of 5 degrees of visual angle and a speed of 10 degrees of visual angle per second.
Figure 4. Proficiency in the static task (a) and dynamic task (b–d). (a) Absolute hit rate as a function of stimulus size in the static task across all animals (error bars are percentile intervals at 50% width, representing the range of half the distribution). The dashed line represents the theoretical chance level of the static task as a function of stimulus size, as the likelihood of touching the stimulus is a function of the ratio between stimulus and background surface. (b) Hit rate adjusted to the theoretical chance level (see panel (a)) as a function of stimulus speed and stimulus size (line thickness) in the left panel, and as a function of consecutive sessions and speed (line thickness) in the right panel. (c) Heat map of the average adjusted hit rate as a function of both speed and size, across all animals. (d) Animal count of the size-speed combination with the highest adjusted hit rate.
Table 1. Left: descriptive statistics of the animals’ general level of engagement (Figure 2a), with Pearson correlation values and respective p-values for trials per session across consecutive sessions. Right: summary statistics of the Pearson correlation between speed and hit rate (adjusted to the size-dependent chance level; see Methods) in the dynamic task. To account for multiple comparisons, the adjusted alpha level is 0.003 (16 comparisons per analysis). p-values in bold mark significance for the respective animal.
| Group | Animal | Age | Avg. Trials per Session | Sessions | Pearson Corr. (Engagement; Figure 2a) | p-Value (α = 0.003) | Total Trials | Pearson Corr. (Speed vs. Adj. Hit Rate; Dynamic Task Only, Figure 4b) | 95% CI | No. of Speed Values | Trials Performed | p-Value (α = 0.003) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | al | 13 | 2206 | 6 | −0.65 | 0.22 | 13,068 | −0.91 | −0.97, −0.82 | 11 | 622 | >0.001 |
| 1 | cl | 7 | 342 | 6 | 0.60 | 0.27 | 2229 | −0.71 | −0.87, −0.42 | 11 | 179 | 0.0141 |
| 2 | ba | 17 | 785 | 5 | −0.69 | 0.3 | 3364 | −0.89 | −0.95, −0.74 | 21 | 1025 | 0.0003 |
| 2 | ni | 7 | 468 | 5 | −0.38 | 0.61 | 1997 | −0.53 | −0.78, −0.12 | 21 | 426 | 0.0097 |
| 3 | ca | 12 | 532 | 6 | 0.14 | 0.8 | 3538 | −0.27 | −0.65, 0.22 | 18 | 46 | >0.001 |
| 3 | ea | 11 | 649 | 6 | −0.93 | 0.01 | 3937 | −0.67 | −0.85, −0.33 | 21 | 630 | 0.0085 |
| 4 | cu | 16 | 50 | 5 | −0.20 | 0.77 | 491 | −0.56 | −0.8, −0.17 | 21 | 963 | 0.2771 |
| 4 | pi | 12 | 247 | 5 | −0.32 | 0.69 | 1119 | −0.66 | −0.85, −0.32 | 21 | 379 | 0.5265 |
| 5 | de | 11 | 1059 | 6 | −0.005 | 0.99 | 6922 | −0.62 | −0.83, −0.25 | 21 | 499 | 0.0009 |
| 5 | el | 11 | 1561 | 6 | −0.39 | 0.5 | 8992 | −0.78 | −0.91, −0.52 | 21 | 1169 | 0.0011 |
| 6 | he | 6 | 792 | 6 | 0.5 | 0.39 | 4203 | −0.59 | −0.82, −0.18 | 19 | 76 | 0.0030 |
| 6 | lo | 5 | 717 | 6 | −0.39 | 0.51 | 4057 | −0.55 | −0.79, −0.16 | 21 | 663 | >0.001 |
| 6 | pa | 9 | 228 | 6 | −0.15 | 0.8 | 1380 | −0.56 | −0.8, −0.18 | 21 | 208 | 0.0077 |
| 6 | sa | 9 | 283 | 6 | 0.49 | 0.4 | 1697 | −0.15 | −0.54, 0.3 | 21 | 148 | 0.0001 |
| 7 | na | 6 | 666 | 6 | 0.05 | 0.9 | 3436 | −0.74 | −0.89, −0.46 | 21 | 984 | 0.0079 |
| 7 | vi | 9 | 277 | 6 | −0.37 | 0.53 | 1588 | −0.13 | −0.54, 0.33 | 20 | 130 | 0.5895 |

4. Discussion

The present study aimed to investigate the feasibility of using a touchscreen-based, multiple-choice interface (MCI) as a cognitive enrichment tool for captive rhesus macaques. We developed an interactive interface, optimized for touchscreen interaction, which allowed the animals to freely choose between three tasks: a static reach task, a dynamic reach task, and a picture viewing task. Our findings demonstrated that the rhesus macaques consistently engaged with the multiple-choice interface, exhibited task preferences, and displayed proficiency in performing the selected tasks.

4.1. Engagement and Sustainability of Interaction

One important aspect of cognitive enrichment is providing animals with engaging activities that promote well-being. Our results showed that rhesus macaques maintained a sustained level of interaction with the multiple-choice interface across multiple sessions. Despite the absence of food/fluid control or social separation, the animals actively participated in the tasks, indicating intrinsic motivation and interest in the paradigm. This finding is consistent with previous studies showing that NHPs can engage with computerized systems autonomously and for extended periods [5,18,27]. We argue that such a propensity, together with the dynamic nature of touchscreen paradigms, could be leveraged to develop adaptive training and testing paradigms that consistently trigger animals’ curiosity.
The sustained engagement observed in our study also suggests that the novelty effect did not account for the animals’ interaction, as engagement remained stable throughout the sessions. Furthermore, our analysis revealed that animals tended to perform more trials in the first half of the session compared to the second half. This pattern suggests that the animals may have experienced some degree of habituation or reduced motivation over time within each session. It is worth noting that the decline in trials within a session could be influenced by factors such as satiety or fatigue, which might have affected the animals’ motivation to continue interacting with the tasks.
Finally, in contrast to the study conducted by Perdue and colleagues [35], in our experiment the selected button remained on the screen after each selection instead of disappearing. Although this choice reduced the screen space available for task-relevant stimuli (e.g., the red circle or the pictures), we adopted it to help the animals learn the button-to-task association. We believe this was instrumental in reducing the animals' cognitive load, especially since the buttons changed position on the bar every hour.

4.2. Task Preference and Individual Differences

The multiple-choice interface provides the animals with the opportunity to select and switch between different tasks at any moment. Interestingly, our analysis revealed that the animals exhibited individual preferences for specific tasks. The static reach task was the most preferred, followed by the dynamic reach task; only two animals preferred the dynamic over the static reach task. These individual differences in task preference highlight the importance of considering the animals' individual needs and preferences when designing cognitive enrichment programs: tailoring enrichment activities to individual preferences can enhance engagement and maximize the benefits to the animals' psychological well-being. The preference for the static reach task could be attributed to its high value-to-effort ratio: the animals might have found this task more rewarding and easier to perform than the dynamic reach task, which involved a moving target and therefore required more precise timing and coordination. This preference aligns with previous studies showing that non-human primates often choose tasks that provide the highest rewards with minimal effort [35]. Interestingly, the two animals preferring the higher-effort (dynamic) task could be interpreted as showing contrafreeloading [33], a behavior in which animals are willing to work for food that is otherwise freely available. As this study was not designed to directly probe contrafreeloading, we advise caution with this interpretation, which we consider anecdotal and which requires further controlled experiments.
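As a toy illustration of how such a preference could be quantified: the study's actual analysis used a Bayesian generalized linear mixed model (Table S1, Figure 3), whereas the much simpler Beta-binomial estimate below is only a stand-in, with all counts invented for the example.

```python
# Hypothetical sketch (NOT the paper's model): estimate one animal's
# probability of choosing a given task from its selection counts,
# under a flat Beta(1, 1) prior on that probability.
from math import sqrt

def preference_estimate(chosen, total):
    """Posterior mean and an approximate 95% interval for the
    probability of selecting one task out of `total` trials."""
    a, b = 1 + chosen, 1 + (total - chosen)
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    half = 1.96 * sqrt(var)  # normal approximation to the Beta posterior
    return mean, max(0.0, mean - half), min(1.0, mean + half)

# Invented example: an animal selects the static reach task in 70 of 100 trials.
mean, lo, hi = preference_estimate(70, 100)

# With three equally available tasks, chance level is 1/3; a preference is
# suggested when the whole interval lies above it.
prefers_static = lo > 1 / 3
```

The real model additionally accounts for button position and repeated measures per animal, which this sketch ignores entirely.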

4.3. Proficiency and Mastery Development

While assessing the animals' performance, we found that our monkeys were proficient in both the static and the dynamic task. In the static reach task, animals achieved hit rates above chance level across different stimulus sizes, demonstrating accurate target selection. Proficiency in the dynamic task was only marginally influenced by the combination of stimulus speed and size; surprisingly, medium to small stimuli paired with low speed elicited the highest adjusted hit rate. Albeit anecdotal, this finding might reflect a propensity for a flow-like state among our captive rhesus macaques: the animals appeared to engage best (and perhaps even enjoy the task most) when it provided a moderate level of challenge, akin to the flow experienced by humans during immersive activities. The animals might also have reached higher levels of eye-hand coordination in more challenging trials of the dynamic task. This observation aligns with the idea that animals, like humans, are motivated by the opportunity to achieve a state of optimal arousal and skill utilization [57].
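The per-condition tabulation underlying such a comparison can be sketched as follows. The data layout and the plain (unadjusted) hit rate are assumptions for illustration only; the paper's "adjusted" hit rate is not defined in this section, and all trials below are invented.

```python
# Hypothetical sketch: tabulate hit rate per combination of stimulus
# size and speed in a dynamic reach task.
from collections import defaultdict

def hit_rate_by_condition(trials):
    """trials: iterable of (size, speed, hit) tuples, hit being True/False.
    Returns {(size, speed): hit_rate} for every observed condition."""
    hits = defaultdict(int)
    counts = defaultdict(int)
    for size, speed, hit in trials:
        counts[(size, speed)] += 1
        hits[(size, speed)] += bool(hit)
    return {cond: hits[cond] / counts[cond] for cond in counts}

# Made-up trials in which small, slow targets are hit most often,
# mirroring the moderate-challenge pattern described in the text.
trials = [("small", "slow", True), ("small", "slow", True),
          ("small", "fast", False), ("small", "fast", True),
          ("large", "slow", True), ("large", "slow", False)]
rates = hit_rate_by_condition(trials)
```

Comparing such per-condition rates against the task's chance level (which, for a reach task, depends on target size relative to the touchable area) is what allows statements like "above chance across stimulus sizes".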

4.4. Practical Implications and Future Directions

The touchscreen-based, multiple-choice approach presented in this study has several practical implications for providing cognitive enrichment to captive rhesus macaques. The multiple-choice interface, implemented in the eXperimental Behavioral Instrument (XBI), can be integrated as a standard cognitive enrichment practice in facilities and institutes working with captive animals, particularly non-human primates. The high level of automation and the potential to tailor the tasks to individual preferences (when real-time animal identification is available) make the approach scalable, sustainable, and pragmatic for enhancing cognitive well-being and animal welfare. Future research could explore further modifications and refinements to the multiple-choice interface and the tasks offered. The development of additional task options and the inclusion of more complex cognitive challenges could provide even greater enrichment opportunities for the animals. Moreover, investigating the long-term effects of the multiple-choice approach on the animals’ well-being, cognitive abilities, and social dynamics would provide valuable insights into the efficacy and potential benefits of this cognitive enrichment tool.
We believe that the time has come to consider cognitive enrichment practices as essential as other, more established types of environmental enrichment. Cognitive enrichment can consistently exercise species-specific capacities that otherwise have little opportunity to be expressed in captivity [6,17,24,58]. Consistently exercising cognitive capabilities, in turn, creates long-term benefits, such as reduced distress and associated stereotypical behaviors; an enhanced sense of competence, agency, and problem-solving ability; and increased neuroplasticity protecting against cognitive decline and impairment [20,34,59,60,61,62,63]. Moreover, with the type of cognitive enrichment proposed here, we believe it will be possible to assess the role of genetic and environmental factors in cognitive development and decline [64,65,66], with crucial ethical implications for both humans and animals.
In conclusion, our study demonstrated the feasibility and effectiveness of a touchscreen-based, multiple-choice approach as a cognitive enrichment tool for captive rhesus macaques. The animals consistently engaged with the tasks, exhibited individual task preferences, and demonstrated proficiency in performing the selected tasks. Our approach has the potential to improve the psychological well-being of captive animals while simultaneously providing researchers with a reliable and scalable method for conducting scientific research on animal cognition and welfare [56,67].

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ani13172702/s1. Table S1: Output of the Bayesian generalized linear mixed model investigating task preference during the Multiple Choice Interface experiment. Figure S1: Probability of task selection in the Multiple-Choice Interface experiment. Figure S2: The effect of session on task selection behavior in the Multiple-Choice Interface experiment. Figure S3: The effect of the screen button position on task selection behavior in the Multiple-Choice Interface experiment.

Author Contributions

Conceptualization, A.C., D.P., L.C.C. and S.T.; Data curation, A.C., D.P., L.C.C., A.N. and P.Y.; Formal analysis, A.C. and L.C.C.; Funding acquisition, S.T.; Investigation, A.C., D.P., A.N. and P.Y.; Methodology, A.C., D.P., L.C.C. and R.R.B.; Project administration, A.C. and D.P.; Resources, R.R.B. and S.T.; Software, A.C. and R.R.B.; Supervision, S.T.; Visualization, A.C. and L.C.C.; Writing—original draft, A.C.; Writing—review and editing, D.P., L.C.C., A.N., P.Y. and S.T. All authors have read and agreed to the published version of the manuscript.

Funding

The study was supported by a grant to ST from the Deutsche Forschungsgemeinschaft (DFG): Research Unit 2591 “Severity assessment in animal-based research” (Project 14) and Research Unit 1847 (GA1475-C1). We acknowledge support by the Leibniz Association through funding for the Leibniz ScienceCampus Primate Cognition.

Institutional Review Board Statement

All animal procedures of this study were in accordance with all applicable German and European regulations on husbandry procedures and conditions. As the interactions of the animals with the XBI system in the context of this study were entirely self-initiated and happened in their social housing setting, they constitute environmental enrichment, for which no Institutional Review Board (or Ethics Committee) approval is needed.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to being currently under further analysis.

Acknowledgments

The authors would like to thank the animal caretakers and the technical staff of the lab for the professional help and support in accommodating data collection.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Andrews, M.W. Video-task paradigm extended to Saimiri. Percept. Mot. Ski. 1993, 76, 183–191. [Google Scholar] [CrossRef]
  2. Brannon, E.M.; Andrews, M.W.; Rosenblum, L.A. Effectiveness of Video of Conspecifics as a Reward for Socially Housed Bonnet Macaques (Macaca radiata). Percept. Mot. Ski. 2016, 98, 849–858. [Google Scholar] [CrossRef]
  3. Butler, J.L.; Kennerley, S.W. Mymou: A low-cost, wireless touchscreen system for automated training of nonhuman primates. Behav. Res. Methods 2019, 51, 2559–2572. [Google Scholar] [CrossRef]
  4. Calapai, A.; Berger, M.; Niessing, M.; Heisig, K.; Brockhausen, R.; Treue, S.; Gail, A. A cage-based training, cognitive testing and enrichment system optimized for rhesus macaques in neuroscience research. Behav. Res. 2017, 49, 35–45. [Google Scholar] [CrossRef]
  5. Calapai, A.; Cabrera-Moreno, J.; Moser, T.; Jeschke, M. Flexible auditory training, psychophysics, and enrichment of common marmosets with an automated, touchscreen-based system. Nat. Commun. 2022, 13, 1648. [Google Scholar] [CrossRef] [PubMed]
  6. Clark, F.E. Great ape cognition and captive care: Can cognitive challenges enhance well-being? Appl. Anim. Behav. Sci. 2011, 135, 1–12. [Google Scholar] [CrossRef]
  7. Drea, C.M. Studying primate learning in group contexts: Tests of social foraging, response to novelty, and cooperative problem solving. Methods 2006, 38, 162–177. [Google Scholar] [CrossRef]
  8. Fagot, J.; Paleressompoulle, D. Automatic testing of cognitive performance in baboons maintained in social groups. Behav. Res. Methods 2009, 41, 396–404. [Google Scholar] [CrossRef] [PubMed]
  9. Gazes, R.P.; Brown, E.K.; Basile, B.M.; Hampton, R.R. Automated cognitive testing of monkeys in social groups yields results comparable to individual laboratory-based testing. Anim. Cogn. 2013, 16, 445–458. [Google Scholar] [CrossRef]
  10. O’Leary, J.D.; O’Leary, O.F.; Cryan, J.F.; Nolan, Y.M. A low-cost touchscreen operant chamber using a Raspberry PiTM. Behav. Res. 2018, 50, 2523–2530. [Google Scholar] [CrossRef]
  11. Richardson, W.K.; Washburn, D.A.; Hopkins, W.D.; Savage-rumbaugh, E.S.; Rumbaugh, D.M. The NASA/LRC computerized test system. Behav. Res. Methods Instrum. Comput. 1990, 22, 127–131. [Google Scholar] [CrossRef] [PubMed]
  12. Schmitt, V. Implementing portable touchscreen-setups to enhance cognitive research and enrich zoo-housed animals. J. Zoo Aquar. Res. 2019, 7, 50–58. [Google Scholar] [CrossRef]
  13. Takemoto, A.; Izumi, A.; Miwa, M.; Nakamura, K. Development of a compact and general-purpose experimental apparatus with a touch-sensitive screen for use in evaluating cognitive functions in common marmosets. J. Neurosci. Methods 2011, 199, 82–86. [Google Scholar] [CrossRef]
  14. Truppa, V.; Garofoli, D.; Castorina, G.; Piano Mortari, E.; Natale, F.; Visalberghi, E. Identity concept learning in matching-to-sample tasks by tufted capuchin monkeys (Cebus apella). Anim. Cogn. 2010, 13, 835–848. [Google Scholar] [CrossRef]
  15. Walker, J.D.; Pirschel, F.; Gidmark, N.; MacLean, J.N.; Hatsopoulos, N.G. A platform for semiautomated voluntary training of common marmosets for behavioral neuroscience. J. Neurophysiol. 2020, 123, 1420–1426. [Google Scholar] [CrossRef]
  16. Csikszentmihalyi, M.; Csikszentmihalyi, I. Beyond Boredom and Anxiety; Jossey-Bass Publishers, University of Michigan: Ann Arbor, MI, USA, 1975. [Google Scholar]
  17. Meehan, C.L.; Mench, J.A. The challenge of challenge: Can problem solving opportunities enhance animal welfare? Appl. Anim. Behav. Sci. 2007, 102, 246–261. [Google Scholar] [CrossRef]
  18. Clark, F. Cognitive enrichment and welfare: Current approaches and future directions. Anim. Behav. Cogn. 2017, 4, 52–71. [Google Scholar] [CrossRef]
  19. Yeates, J.W.; Main, D.C.J. Assessment of positive welfare: A review. Vet. J. 2008, 175, 293–300. [Google Scholar] [CrossRef]
  20. Bennett, A.J.; Perkins, C.M.; Tenpas, P.D.; Reinebach, A.L.; Pierre, P.J. Moving evidence into practice: Cost analysis and assessment of macaques’ sustained behavioral engagement with videogames and foraging devices. Am. J. Primatol. 2016, 78, 1250–1264. [Google Scholar] [CrossRef]
  21. Fagot, J.; Gullstrand, J.; Kemp, C.; Defilles, C.; Mekaouche, M. Effects of freely accessible computerized test systems on the spontaneous behaviors and stress level of Guinea baboons (Papio papio). Am. J. Primatol. 2014, 76, 56–64. [Google Scholar] [CrossRef]
  22. Hansmeyer, L.; Yurt, P.; Naubahar, A.; Trunk, A.; Berger, M.; Calapai, A.; Treue, S.; Gail, A. Home-enclosure based behavioral and wireless neural recording setup for unrestrained rhesus macaques. eNeuro 2022, 10. [Google Scholar] [CrossRef]
  23. Evans, T.A.; Beran, M.J.; Chan, B.; Klein, E.D.; Menzel, C.R. An efficient computerized testing method for the capuchin monkey (Cebus apella): Adaptation of the LRC-CTS to a socially housed nonhuman primate species. Behav. Res. Methods 2008, 40, 590–596. [Google Scholar] [CrossRef] [PubMed]
  24. Tarou, L.R.; Bashaw, M.J. Maximizing the effectiveness of environmental enrichment: Suggestions from the experimental analysis of behavior. Appl. Anim. Behav. Sci. 2007, 102, 189–204. [Google Scholar] [CrossRef]
  25. Farrar, B.; Altschul, D.; Fischer, J.; van der Mescht, J.; Placì, S.; Troisi, C.A.; Ostojic, L. Trialling Meta-Research in Comparative Cognition: Claims and Statistical Inference in Animal Physical Cognition. Anim. Behav. Cogn. 2020, 7, 1–26. [Google Scholar] [CrossRef] [PubMed]
  26. Schubiger, M.N.; Fichtel, C.; Burkart, J.M. Validity of Cognitive Tests for Non-human Animals: Pitfalls and Prospects. Front. Psychol. 2020, 11, 1835. [Google Scholar] [CrossRef]
  27. Fagot, J.; Bonté, E. Automated testing of cognitive performance in monkeys: Use of a battery of computerized test systems by a troop of semi-free-ranging baboons (Papio papio). Behav. Res. Methods 2010, 42, 507–516. [Google Scholar] [CrossRef]
  28. Berger, M.; Calapai, A.; Stephan, V.; Niessing, M.; Burchardt, L.; Gail, A.; Treue, S. Standardized automated training of rhesus monkeys for neuroscience research in their housing environment. J. Neurophysiol. 2018, 119, 796–807. [Google Scholar] [CrossRef]
  29. Catania, A.C. Freedom and knowledge: An experimental analysis of preference in pigeons. J. Exp. Anal. Behav. 1975, 24, 89–106. [Google Scholar] [CrossRef]
  30. Catania, A.C.; Sagvolden, T. Preference for free choice over forced choice in pigeons. J. Exp. Anal. Behav. 1980, 34, 77–86. [Google Scholar] [CrossRef]
  31. Humphrey, N. Colour and Brightness Preferences in Monkeys. Nature 1971, 229, 615–617. [Google Scholar] [CrossRef]
  32. Huskisson, S.M.; Jacobson, S.L.; Egelkamp, C.L.; Ross, S.R.; Hopper, L.M. Using a Touchscreen Paradigm to Evaluate Food Preferences and Response to Novel Photographic Stimuli of Food in Three Primate Species (Gorilla gorilla gorilla, Pan troglodytes, and Macaca fuscata). Int. J. Primatol. 2020, 41, 5–23. [Google Scholar] [CrossRef]
  33. Ogura, T. Contrafreeloading and the value of control over visual stimuli in Japanese macaques (Macaca fuscata). Anim. Cogn. 2011, 14, 427–431. [Google Scholar] [CrossRef] [PubMed]
  34. Owen, M.A.; Swaisgood, R.R.; Czekala, N.M.; Lindburg, D.G. Enclosure choice and well-being in giant pandas: Is it all about control? Zoo Biol. 2005, 24, 475–481. [Google Scholar] [CrossRef]
  35. Perdue, B.M.; Evans, T.A.; Washburn, D.A.; Rumbaugh, D.M.; Beran, M.J. Do monkeys choose to choose? Learn. Behav. 2014, 42, 164–175. [Google Scholar] [CrossRef] [PubMed]
  36. Suzuki, S. Selection of Forced- and Free-Choice by Monkeys (Macaca Fascicularis). Percept. Mot. Ski. 1999, 88, 242–250. [Google Scholar] [CrossRef]
  37. Suzuki, S.; Matsuzawa, T. Choice Between Two Discrimination Tasks in Chimpanzees (Pan troglodytes). Jpn. Psychol. Res. 1997, 39, 226–235. [Google Scholar] [CrossRef]
  38. Voss, S.C.; Homzie, M.J. Choice as a Value. Psychol. Rep. 1970, 26, 912–914. [Google Scholar] [CrossRef]
  39. Washburn, D.A.; Hopkins, W.D.; Rumbaugh, D.M. Perceived control in rhesus monkeys (Macaca mulatta): Enhanced video-task performance. J. Exp. Psychol. Anim. Behav. Process 1991, 17, 123–129. [Google Scholar] [CrossRef]
  40. Kitchen, A.M.; Martin, A.A. The effects of cage size and complexity on the behaviour of captive common marmosets, Callithrix jacchus jacchus. Lab. Anim. 1996, 30, 317–326. [Google Scholar] [CrossRef] [PubMed]
  41. Sambrook, T.D.; Buchanan-Smith, H.M. Control and Complexity in Novel Object Enrichment. Anim. Welf. 1997, 6, 207–216. [Google Scholar] [CrossRef]
  42. Bekinschtein, P.; Oomen, C.A.; Saksida, L.M.; Bussey, T.J. Effects of environmental enrichment and voluntary exercise on neurogenesis, learning and memory, and pattern separation: BDNF as a critical variable? Semin. Cell Dev. Biol. 2011, 22, 536–542. [Google Scholar] [CrossRef] [PubMed]
  43. Ritvo, S.E.; MacDonald, S.E. Preference for free or forced choice in Sumatran orangutans (Pongo abelii). J. Exp. Anal. Behav. 2020, 113, 419–434. [Google Scholar] [CrossRef]
  44. Robinson, L.M.; Weiss, A. Nonhuman Primate Welfare: From History, Science, and Ethics to Practice; Springer Nature: Cham, Switzerland, 2023; ISBN 978-3-030-82707-6. [Google Scholar]
  45. Roelfsema, P.R.; Treue, S. Basic Neuroscience Research with Nonhuman Primates: A Small but Indispensable Component of Biomedical Research. Neuron 2014, 82, 1200–1204. [Google Scholar] [CrossRef] [PubMed]
  46. Cassidy, L.C.; Bethell, E.J.; Brockhausen, R.R.; Boretius, S.; Treue, S.; Pfefferle, D. The Dot-Probe Attention Bias Task as a Method to Assess Psychological Well-Being after Anesthesia: A Study with Adult Female Long-Tailed Macaques (Macaca fascicularis). Eur. Surg. Res. 2023, 64, 37–53. [Google Scholar] [CrossRef]
  47. Pfefferle, D.; Plümer, S.; Burchardt, L.; Treue, S.; Gail, A. Assessment of stress responses in rhesus macaques (Macaca mulatta) to daily routine procedures in system neuroscience based on salivary cortisol concentrations. PLoS ONE 2018, 13, e0190190. [Google Scholar] [CrossRef]
  48. Unakafov, A.M.; Möller, S.; Kagan, I.; Gail, A.; Treue, S.; Wolf, F. Using imaging photoplethysmography for heart rate estimation in non-human primates. PLoS ONE 2018, 13, e0202581. [Google Scholar] [CrossRef] [PubMed]
  49. Yurt, P.; Calapai, A.; Mundry, R.; Treue, S. Assessing cognitive flexibility in humans and rhesus macaques with visual motion and neutral distractors. Front. Psychol. 2022, 13, 1047292. [Google Scholar] [CrossRef]
  50. Cabrera-Moreno, J.; Jeanson, L.; Calapai, A. Group-based, autonomous, individualized training and testing of long-tailed macaques (Macaca fascicularis) in their home enclosure to a visuo-acoustic discrimination task. Front. Psychol. 2022, 13, 1047242. [Google Scholar] [CrossRef] [PubMed]
  51. Aiken, L.S.; West, S.G.; Reno, R.R. Multiple Regression: Testing and Interpreting Interactions; SAGE: Newbury Park, CA, USA, 1991; ISBN 978-0-7619-0712-1. [Google Scholar]
  52. Schielzeth, H. Simple means to improve the interpretability of regression coefficients. Methods Ecol. Evol. 2010, 1, 103–113. [Google Scholar] [CrossRef]
  53. Barr, D.J.; Levy, R.; Scheepers, C.; Tily, H.J. Random effects structure for confirmatory hypothesis testing: Keep it maximal. J. Mem. Lang. 2013, 68, 255–278. [Google Scholar] [CrossRef]
  54. Bürkner, P.-C. brms: An R Package for Bayesian Multilevel Models Using Stan. J. Stat. Softw. 2017, 80, 1–28. [Google Scholar] [CrossRef]
  55. McElreath, R. Statistical Rethinking: A Bayesian Course with Examples in R and STAN, 2nd ed.; Chapman and Hall/CRC: New York, NY, USA, 2020; ISBN 978-0-429-02960-8. [Google Scholar]
  56. Clark, F.E. Bridging pure cognitive research and cognitive enrichment. Anim. Cogn. 2022, 25, 1671–1678. [Google Scholar] [CrossRef] [PubMed]
  57. Clark, F.E. In the Zone: Towards a Comparative Study of Flow State in Primates. Anim. Behav. Cogn. 2023, 10, 62–88. [Google Scholar] [CrossRef]
  58. Brydges, N.M.; Braithwaite, V.A. Measuring Animal Welfare: What Can Cognition Contribute? Annu. Rev. Biomed. Sci. 2008, 10, T91–T103. [Google Scholar] [CrossRef]
  59. Seligman, M.E.P. Helplessness: On Depression, Development, and Death; W H Freeman/Times Books/Henry Holt & Co: New York, NY, USA, 1975; 250p, ISBN 978-0-7167-0752-3. [Google Scholar]
  60. Rosenzweig, M.R.; Bennett, E.L. Psychobiology of plasticity: Effects of training and experience on brain and behavior. Behav. Brain Res. 1996, 78, 57–65. [Google Scholar] [CrossRef] [PubMed]
  61. Scarmeas, N.; Stern, Y. Cognitive reserve and lifestyle. J. Clin. Exp. Neuropsychol. 2003, 25, 625–633. [Google Scholar] [CrossRef] [PubMed]
  62. Leotti, L.A.; Iyengar, S.S.; Ochsner, K.N. Born to Choose: The Origins and Value of the Need for Control. Trends Cogn. Sci. 2010, 14, 457–463. [Google Scholar] [CrossRef]
  63. Perdue, B. Mechanisms underlying cognitive bias in nonhuman primates. Anim. Behav. Cogn. 2017, 4, 105–118. [Google Scholar] [CrossRef]
  64. Harrison, R.A.; Mohr, T.; van de Waal, E. Lab cognition going wild: Implementing a new portable touchscreen system in vervet monkeys. J. Anim. Ecol. 2023, 92, 1545–1559. [Google Scholar] [CrossRef]
  65. Joly, M.; Ammersdörfer, S.; Schmidtke, D.; Zimmermann, E. Touchscreen-Based Cognitive Tasks Reveal Age-Related Impairment in a Primate Aging Model, the Grey Mouse Lemur (Microcebus murinus). PLoS ONE 2014, 9, e109393. [Google Scholar] [CrossRef]
  66. Lacreuse, A.; Raz, N.; Schmidtke, D.; Hopkins, W.D.; Herndon, J.G. Age-related decline in executive function as a hallmark of cognitive ageing in primates: An overview of cognitive and neurobiological studies. Philos. Trans. R. Soc. Lond. B Biol. Sci. 2020, 375, 20190618. [Google Scholar] [CrossRef] [PubMed]
  67. Nawroth, C.; Langbein, J.; Coulon, M.; Gabor, V.; Oesterwind, S.; Benz-Schwarzburg, J.; von Borell, E. Farm Animal Cognition—Linking Behavior, Welfare and Ethics. Front. Vet. Sci. 2019, 6, 24. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The eXperimental Behavioral Instrument XBI. (a) Front view of the XBI with the reward mouthpiece, the monitor and touchscreen, and the two cameras; (b) the multiple-choice interface (MCI) comprising the Home Screen, the Task Bar, and the Buttons leading to the three tasks; (c) flow chart of the transition between tasks and Home Screen depending on trial outcome.
Figure 2. Level of engagement with the MCI. (a) On the left: number of trials of each animal at each session (dot size); on the right: median trials per session across all animals (green letter-value plot). Animals on the x axis are shown next to their respective partners, with the solid black line highlighting animals belonging to the same group. (b) Distribution of all trial times (normalized to the end of each session, session proportion) for each animal and each session. (c) Distribution of all median trial times across all animals and sessions. Dashed green line and green shaded area represent the median of all median trial times as well as the area between the median minimum and median maximum.
Figure 3. Task preference across all animals. Across all panels, each dot represents the proportion of trials in which the monkeys selected the three tasks, with the dots' color representing the position of the button on the screen for each given task and dot size reflecting the number of trials performed by each animal for each task and button position. Black dots and their whiskers represent the Bayesian model probability estimates and the 95% credible intervals, respectively (calculated for the "middle" button position, as we considered this position to be neutral with respect to potential differences in hand preference). The star symbol next to the animal names indicates the animals for which there is evidence of task preference.

Share and Cite

MDPI and ACS Style

Calapai, A.; Pfefferle, D.; Cassidy, L.C.; Nazari, A.; Yurt, P.; Brockhausen, R.R.; Treue, S. A Touchscreen-Based, Multiple-Choice Approach to Cognitive Enrichment of Captive Rhesus Macaques (Macaca mulatta). Animals 2023, 13, 2702. https://doi.org/10.3390/ani13172702

