Article

Computational Modeling for Neuropsychological Assessment of Bradyphrenia in Parkinson’s Disease

1 Department of Neurology, Hannover Medical School, Carl-Neuberg-Straße 1, 30625 Hannover, Germany
2 Behavioral Engineering Research Group, KU Leuven, Naamsestraat 69, 3000 Leuven, Belgium
3 Movement Control & Neuroplasticity Research Group, Department of Movement Sciences, KU Leuven, Tervuursevest 101, 3001 Leuven, Belgium
4 LBI - KU Leuven Brain Institute, KU Leuven, 3000 Leuven, Belgium
* Author to whom correspondence should be addressed.
J. Clin. Med. 2020, 9(4), 1158; https://doi.org/10.3390/jcm9041158
Submission received: 9 March 2020 / Revised: 9 April 2020 / Accepted: 16 April 2020 / Published: 18 April 2020

Abstract

The neural mechanisms of cognitive dysfunctions in neurological diseases remain poorly understood. Here, we conjecture that this unsatisfactory state of affairs is partly due to the non-specificity of the typical behavioral indicators of cognitive dysfunctions. Our study addresses the topic by advancing the assessment of cognitive dysfunctions through computational modeling. We investigate bradyphrenia in Parkinson’s disease (PD) as an exemplary case of cognitive dysfunctions in neurological diseases. Our computational model conceptualizes trial-by-trial behavioral data as resulting from parallel cognitive and sensorimotor reinforcement learning. We assessed PD patients ‘on’ and ‘off’ their dopaminergic medication and matched healthy control (HC) participants on a computerized version of the Wisconsin Card Sorting Test. PD patients showed increased retention of learned cognitive information and decreased retention of learned sensorimotor information from previous trials in comparison to HC participants. Systemic dopamine replacement therapy did not remedy these cognitive dysfunctions in PD patients but incurred undesirable side effects such as decreasing cognitive learning from positive feedback. Our results reveal novel insights into facets of bradyphrenia that cannot be discerned from observable behavioral indicators of cognitive dysfunctions. We discuss how computational modeling may contribute to the advancement of future research on brain–behavior relationships and neuropsychological assessment.


1. Introduction

Neuropsychological impairments are well-documented in idiopathic Parkinson’s disease (PD; reviewed in [1]). The main pathological characteristic of PD is the decline of dopaminergic cells in the substantia nigra pars compacta (SNpc) [2,3]. Braak’s theory [2,4] characterizes disease progression in six stages. Early stages of PD are characterized by non-motor symptoms, including a number of neuropsychological impairments (reviewed in [1,5,6]). At later stages, the death of dopaminergic cells in the SNpc correlates with the emergence of motor symptoms, whose presence typically leads to the diagnosis of the disease. According to Braak’s theory, dementia arises as the last Braak stages are reached. PD is treated by dopamine (DA) replacement therapy, which is titrated to alleviate motor symptoms and achieve the best possible motility. Thus, systemic DA replacement aims at substituting the missing DA in the dorsal striatum (where the dopaminergic cells of the SNpc have their axonal terminals). However, titrating systemic DA replacement solely toward the best possible motility may incur side effects. That is, optimal DA replacement in the nigro-striatal DA system may be associated with DA overdosing in meso-limbic and/or meso-cortical DA systems, thereby potentially inducing medication-related neuropsychological impairments [1,7,8,9,10,11,12].
A cardinal motor symptom of many PD patients is bradykinesia, which might be best summarized as ‘slowness of movement’ [13,14]. The neuropsychological equivalent of bradykinesia in PD is bradyphrenia, i.e., ‘slowness of thought’ [15,16,17,18,19,20,21,22]. However, bradyphrenia is not synonymous with slowed behavioral indicators of processing speed (e.g., response times in laboratory tasks), which would necessarily originate from both bradyphrenia and bradykinesia. Bradyphrenia has previously been described as cognitive akinesia [23], and we refer to it here as ‘inflexibility of thought’. As such, PD-related bradyphrenia is typically studied with a number of neuropsychological tests that target covert aspects of attentional flexibility [24,25,26]. Of major interest for the current study are the behavioral abnormalities on the Wisconsin Card Sorting Test (WCST) [5,6,27], which are typically considered behavioral evidence of attentional inflexibility in patients with PD [28]. Although these observable WCST indicators may provide well-documented, sensitive markers of PD-related bradyphrenia, they lack specificity because similarly impaired WCST indicators occur in other neurological diseases [29,30,31,32,33,34] and also in psychiatric disorders [35,36,37,38].
On the WCST [27,39,40,41], participants are required to sort stimulus cards to key cards according to a periodically changing category, which is one of three rival categories (see Figure 1). Behavioral WCST indicators are typically the number of perseveration errors (i.e., erroneous category repetitions following negative feedback) and the number of set-loss errors (i.e., erroneous category switches following positive feedback) [27]. Performance on the WCST can be conceptualized as feedback-driven learning [42,43,44] because in order to identify the prevailing category, participants have to rely on feedback about the correctness of their current sorts: Positive feedback indicates that the executed sort was correct, whereas negative feedback indicates that the executed sort was incorrect. We showed in an earlier study [43] that WCST feedback affects both the cognitive selection of sorting categories and the sensorimotor selection of a particular response. We thus conceptualized feedback-driven learning on the WCST as occurring at two distinguishable levels in parallel, i.e., as sensorimotor learning at the lower level (i.e., which response to execute) and as cognitive learning at the higher level (i.e., which category to apply).
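To make the sorting and feedback logic concrete, the following minimal Python sketch illustrates how the correctness of a single sort is determined by the prevailing category. It is purely illustrative: the card representation and function names are ours, and the actual task was programmed in Presentation® (see Section 2.3).

```python
# Illustrative sketch (not the authors' code): a sort is correct iff the chosen
# key card matches the stimulus card on the currently prevailing category.

KEY_CARDS = [
    {"color": "red",    "form": "triangle", "number": 1},
    {"color": "green",  "form": "star",     "number": 2},
    {"color": "yellow", "form": "cross",    "number": 3},
    {"color": "blue",   "form": "ball",     "number": 4},
]

def feedback(stimulus, chosen_key, prevailing_category):
    """Return 'positive' if stimulus and chosen key card share the attribute named
    by the prevailing category ('color', 'form', or 'number'), else 'negative'."""
    correct = stimulus[prevailing_category] == chosen_key[prevailing_category]
    return "positive" if correct else "negative"

# Example: 'two red crosses' sorted to the first key card under the category 'color'
stimulus = {"color": "red", "form": "cross", "number": 2}
print(feedback(stimulus, KEY_CARDS[0], "color"))   # positive
print(feedback(stimulus, KEY_CARDS[0], "number"))  # negative
```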
Putatively related to their lack of specificity, the underlying neural mechanisms of these behavioral WCST indicators remain poorly understood [30,34]. Here, we suggest that progress with regard to WCST-derived brain–behavior relationships does not primarily depend on further improvements at the neuroanatomical (or neurofunctional) level. In our opinion, such progress primarily depends on further improvements at the hitherto much neglected behavioral level. These improvements include (though are not limited to) applying more elaborate data analysis methods such as computational modeling techniques [42,48,49,50,51,52].
The computational approach allows specifying learning processes that are assumed to underlie observable behavior. These learning processes can be conceptualized mechanistically, and they act together in explicitly defined ways. Computational models allow decomposing the finally expressed overt behavior on neuropsychological tests such as the WCST into interacting covert components. Hence, computational modeling of WCST performance allows quantifying some of the contributing learning mechanisms. The pursuance of the computational approach promises to allow investigating associations between explicitly defined learning processes and neural mechanisms [49,53].
Here, we applied the parallel reinforcement-learning (RL) [54,55,56,57,58,59,60] model of trial-by-trial behavior on the WCST, which we introduced in a recent publication [44]. Since learning on the WCST can be conceptualized as feedback-driven, RL represents a natural approach for modeling dynamic changes in behavior on the WCST. Our RL model comprises distinct learning from positive feedback and from negative feedback. The reason for the distinction between positive and negative learning is that DA-midbrain signaling may code positive and negative learning in different ways due to the potential reward-quality of positive feedback [61,62,63]. We also incorporated a simple retention mechanism [64,65], which ensured that what was learned from feedback stimuli on a particular WCST trial remained available in mind for short periods of time. Furthermore, we conceptualized WCST performance at two parallel, yet distinct levels of learning [43,44]: The higher-level cognitive learning (which may also be referred to as cortical, declarative, or goal-directed) is complemented by lower-level sensorimotor learning (which may also be referred to as striatal, procedural, or automatic). In the context of the WCST, cognitive learning considers objects of thought (i.e., which category to apply on a particular trial) that guide the selection of task-appropriate responses. Sensorimotor learning bypasses these objects of thought as it is solely concerned with selecting responses (see Figure 1): Responses that were followed by positive feedback tend to be repeated, whereas responses that were followed by negative feedback tend to be avoided on upcoming trials.
Our study was directed toward two main goals. The first goal was advancing neuropsychological assessment of bradyphrenia/attentional inflexibility in PD by expanding purely observable indicators, such as perseveration and set-loss errors on the WCST, through a computational approach. The second goal was characterizing the specific learning dysfunctions that are associated with disease pathology and/or DA replacement therapy in patients with PD. In order to achieve these goals, we studied patients with PD and matched healthy control (HC) participants twice on a computerized version of the WCST (cWCST) [27,45,46,47,48]. Patients with PD were assessed ‘on’ DA medication and ‘off’ DA medication (i.e., after withdrawal of DA medication).

2. Materials and Methods

2.1. Procedure

The relationship between PD pathology and cWCST performance was studied in a between-subjects design, comparing PD patients and HC participants. The effect of DA replacement therapy on cWCST performance was studied in a within-subjects design. PD patients were assessed during two testing sessions; once with their usual DA replacement therapy (‘on’ medication) and once after withdrawal of DA replacement therapy (‘off’ medication). To account for a potential effect of the repeated assessment on cWCST performance, we also assessed HC participants at two testing sessions, and we included the testing session as an additional within-subjects factor in the analyses. Note that analyses of first testing sessions were reported by [48,66].
Performance on the cWCST was first analyzed by means of conditional error probabilities. Second, we analyzed cWCST performance by means of computational modeling, i.e., we implemented the parallel reinforcement-learning model [44].

2.2. Participants

The initial sample of PD patients comprised 44 inpatients and outpatients with idiopathic PD referred to the department of Neurology at Hannover Medical School. Patients were diagnosed by experienced neurologists in the field of movement disorders. Patients with any severe neurological or psychiatric condition other than PD, or a history of neurosurgical therapy were not considered for this study. One patient was excluded due to antidepressant medication. We excluded patients who were assessed only once (n = 22) and patients who completed less than half of the cWCST (i.e., less than 20 switches of the correct category) in at least one of the testing sessions (n = 5), resulting in a final sample of N = 16 (11 female) PD patients. Table A1 displays PD patients’ demographic, clinical, and psychological data. Table A2 provides information about PD patients’ DA replacement therapy. None of the PD patients received infusional therapy at the time of testing. PD patients’ median Hoehn and Yahr stage was 2 (range: 1–3). Ten patients were tested first ‘off’ medication and six patients were tested first ‘on’ medication. The median time between the two testing sessions of PD patients was 2 days (range: 1–29). The median time of withdrawal from usual DA replacement therapy at the start of the testing session ‘off’ medication was 14 h (range: 4–177). The severity of clinical motor symptoms ‘off’ and ‘on’ medication was assessed by means of the Unified Parkinson’s Disease Rating Scale-part III (UPDRS III). PD patients’ mean UPDRS III scores were higher ‘off’ than ‘on’ medication, indicating an increase of motor symptoms after the withdrawal from DA replacement therapy (see Table A1).
Thirty-six participants without neurological or psychiatric diseases served as an HC group. One of these participants had to be excluded due to the inability to perform the cWCST. One additional participant was excluded for completing less than half of the cWCST, resulting in a final sample of N = 34 (16 female) HC participants. Table A1 gives information about demographic and psychological data of HC participants. The median time between the two sessions for HC participants was 4 days (range: 1–16).
The two testing sessions were scheduled at the same time of the day, with the exception of two HC participants and three PD patients, for whom the time of day differed by more than 4 h between testing sessions. All participants in this study scored 21 or higher on the Montreal Cognitive Assessment [67]. All participants were offered a compensation of €25 per testing session. The study was approved by the ethics committee of Hannover Medical School (vote number: 6589). All participants gave informed consent in accordance with the Declaration of Helsinki. For further details, see [48,66].

2.3. Computerized Wisconsin Card Sorting Test

Participants were required to match stimulus cards to one of four key cards W = {one red triangle, two green stars, three yellow crosses, and four blue balls} according to one of three viable sorting categories U = {color, form, number} by pressing one of four keys V = {response 1, response 2, response 3, response 4}. Keys were spatially mapped to the position of key cards on the computer screen (see Figure 1). Stimulus cards varied on the three dimensions color, form, and number but never shared more than one attribute with any of the key cards. The target display presented a stimulus card and the key cards, which appeared invariantly above the stimulus card. Target displays remained on screen until a response was detected. Feedback cues were presented 800 ms after response detection and remained on screen for another 400 ms. The German words for repeat (i.e., ‘bleiben’) and shift (i.e., ‘wechseln’) served as positive and negative feedback cues, respectively. Feedback cues indicated whether the previous response was correct or incorrect and whether the applied sorting category should be repeated or shifted on the upcoming trial [68]. The next target display appeared 800 ms after feedback-cue offset. The correct sorting category switched randomly [69] after a minimum of two correct category repetitions. The average number of correct card sorts required to trigger a switch of the correct category was 3.5 trials. The cWCST was terminated after 39 switches of the correct sorting category or upon the participant’s request. Prior to the experimental session, participants completed a short practice session including four category switches. Participants were explicitly informed about the viable sorting categories and about the fact that the correct sorting category changes occasionally. The cWCST was programmed using Presentation® and responses were collected on a Cedrus® response pad.
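The category-switching rule can be illustrated by the following Python sketch. Note that the exact sampling distribution of the required run length is our assumption (a uniform draw from {2, 3, 4, 5} yields the reported average of 3.5 correct sorts per switch); the original Presentation® script may implement the randomization differently.

```python
import random

# Illustrative sketch of the switching rule (assumption: required run of correct
# sorts drawn uniformly from {2, 3, 4, 5}, consistent with the reported 3.5 average).

CATEGORIES = ["color", "form", "number"]

def switch_schedule(n_switches=39, seed=1):
    """Return, for each of n_switches category blocks, the correct category and
    the number of correct sorts required before the category switches."""
    rng = random.Random(seed)
    category = rng.choice(CATEGORIES)
    schedule = []
    for _ in range(n_switches):
        required_correct = rng.randint(2, 5)   # at least two correct repetitions
        schedule.append((category, required_correct))
        category = rng.choice([c for c in CATEGORIES if c != category])  # random switch
    return schedule

schedule = switch_schedule()
print(schedule[:3])
print(round(sum(req for _, req in schedule) / len(schedule), 2))  # close to 3.5
```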

2.4. Error Analysis

We analyzed perseveration errors (an erroneous repetition of the applied category following negative feedback) and set-loss errors (an erroneous switch of the applied category following positive feedback). We computed conditional error probabilities by dividing the number of committed errors by the number of trials on which a respective error type was possible (e.g., if six perseveration errors are committed on a total of 60 trials following negative feedback, the conditional perseveration error probability is 6/60 = 0.1). Conditional error probabilities were entered into Bayesian repeated measures analyses of variance (ANOVA) using JASP version 11.1 [70]. First, we analyzed the effect of PD pathology on conditional error probabilities by means of a Bayesian ANOVA including the within-subjects factor error type (perseveration error vs. set-loss error) and the between-subjects factor disease (HC vs. PD). For this analysis, we pooled individual conditional error probabilities across testing sessions. That is, we computed individual mean conditional error probabilities across the first and the second testing sessions. Second, we analyzed the effect of DA replacement therapy on conditional error probabilities of PD patients. We conducted a Bayesian ANOVA including the within-subjects factors error type (perseveration error vs. set-loss error) and medication (‘on’ vs. ‘off’ medication). For an analysis of effect of the testing session on conditional error probabilities, see Appendix B.
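The computation of conditional error probabilities can be summarized by the short Python sketch below. It follows the definitions given above; the data structure and field names are illustrative and not taken from the study materials.

```python
def conditional_error_probabilities(trials):
    """Compute conditional perseveration and set-loss error probabilities.

    `trials` is a list of dicts with keys 'prev_feedback' ('positive'/'negative')
    and 'category_repeated' (True if the previously applied category was applied
    again on the current trial). Field names are illustrative, not the authors'.
    """
    persev_opportunities = persev_errors = 0
    setloss_opportunities = setloss_errors = 0
    for t in trials:
        if t["prev_feedback"] == "negative":       # perseveration errors possible
            persev_opportunities += 1
            persev_errors += t["category_repeated"]
        elif t["prev_feedback"] == "positive":     # set-loss errors possible
            setloss_opportunities += 1
            setloss_errors += not t["category_repeated"]
    return (persev_errors / persev_opportunities if persev_opportunities else float("nan"),
            setloss_errors / setloss_opportunities if setloss_opportunities else float("nan"))

# Example from the text: 6 perseveration errors on 60 trials following negative feedback
trials = ([{"prev_feedback": "negative", "category_repeated": True}] * 6
          + [{"prev_feedback": "negative", "category_repeated": False}] * 54)
print(conditional_error_probabilities(trials))  # (0.1, nan)
```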
Results of Bayesian ANOVAs were reported as analyses of effects [71]. Evidence for an effect (i.e., a main effect of a factor or the interaction of factors) in the data was quantified by means of inclusion Bayes factors (BFinclusion). Inclusion Bayes factors give the change from prior probability odds to posterior probability odds for the inclusion of an effect. Prior probabilities for the inclusion of an effect p(inclusion) were computed as the sum of prior probabilities of all ANOVA models that included the effect of interest. Likewise, posterior probabilities for the inclusion of an effect p(inclusion|data) were computed as the sum of all posterior probabilities of these ANOVA models. For example, the Bayesian ANOVA including the factors error type and disease considers five ANOVA models (i.e., a null model including no effects, two models including only the main effect of error type or the main effect of disease, a model including both main effects, and the full model including both main effects and the interaction effect of error type and disease). Each ANOVA model had a prior probability of 1/5 = 0.2. Thus, the prior probability for the inclusion of the main effect disease, which is included in three ANOVA models, is p(inclusion) = 3 × 0.2 = 0.6, giving prior odds of 0.6/(1 − 0.6) = 1.5. The sum of posterior probabilities for the three ANOVA models including the main effect disease may be p(inclusion|data) = 0.9, giving posterior odds of 0.9/(1 − 0.9) = 9. Accordingly, the resulting inclusion Bayes factor is BFinclusion = 9/1.5 = 6. For all Bayesian ANOVAs, default settings of JASP were used. We implemented uniform prior probabilities for all ANOVA models under consideration. For descriptive statistics, we reported mean conditional error probabilities together with 95% credibility intervals. 95% credibility intervals were computed as 1.96 standard errors of the mean around the mean.
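The worked example above can be reproduced with a few lines of Python. This is an illustrative helper mirroring the arithmetic in the text (the model posteriors below are hypothetical values chosen so that the disease-containing models jointly reach 0.9); in practice, JASP performs this computation internally.

```python
def inclusion_bayes_factor(model_table, effect):
    """Compute BF_inclusion for an effect from a table of ANOVA models.

    `model_table` maps each model (a frozenset of effects) to its prior and
    posterior probability."""
    prior = sum(p["prior"] for m, p in model_table.items() if effect in m)
    posterior = sum(p["posterior"] for m, p in model_table.items() if effect in m)
    prior_odds = prior / (1 - prior)
    posterior_odds = posterior / (1 - posterior)
    return posterior_odds / prior_odds

# Worked example from the text: five models with uniform priors (0.2 each);
# the three models containing 'disease' are assumed to reach a joint posterior of 0.9.
models = {
    frozenset():                                          {"prior": 0.2, "posterior": 0.02},
    frozenset({"error_type"}):                            {"prior": 0.2, "posterior": 0.08},
    frozenset({"disease"}):                               {"prior": 0.2, "posterior": 0.30},
    frozenset({"error_type", "disease"}):                 {"prior": 0.2, "posterior": 0.40},
    frozenset({"error_type", "disease", "interaction"}):  {"prior": 0.2, "posterior": 0.20},
}
print(inclusion_bayes_factor(models, "disease"))  # ≈ 6 (posterior odds 9 / prior odds 1.5)
```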

2.5. Computational Modeling

For computational modeling of cWCST performance, we implemented the parallel RL model [44]. The parallel RL model conceptualizes cWCST performance as cognitive and sensorimotor learning, which occur in parallel, by means of Q-learning algorithms [53,54,72,73,74]. Q-learning algorithms operate on feedback prediction values, which quantify how strongly a positive or negative feedback is predicted following the application of a category or the execution of a response. Feedback prediction values are updated trial-wise in response to an observed feedback. The strength of the updating of feedback prediction values is modulated by prediction errors. Prediction errors are defined as the difference between the received feedback and the feedback prediction value. Higher prediction errors indicate stronger updating of feedback prediction values.
Individual parameters of the parallel RL model are learning rate parameters for cognitive and sensorimotor learning. Learning rates give the extent to which prediction errors are integrated into feedback prediction values of the applied category (for cognitive learning) or the executed response (for sensorimotor learning). In order to account for different strengths of learning from positive and negative feedback, learning rate parameters for cognitive and sensorimotor learning were further separated for trials following positive and negative feedback [61,73,75,76]. With the highest possible learning rate (i.e., 1), a prediction error is added to the feedback prediction value of the applied category or the executed response without attenuation. In contrast, with the lowest possible learning rate (i.e., 0), no updating of the feedback prediction value of the applied category or the executed response occurs. The parallel RL model also incorporates cognitive and sensorimotor retention rates, which quantify how much information from previous trials will be retained for the current trial [64,77]. With the highest possible retention rate (i.e., 1), feedback prediction values from the previous trial transfer to the current trial without mitigation. In contrast, with the lowest possible retention rate (i.e., 0), feedback prediction values are not transferred to the current trial. In such cases, responding depends entirely on the last received feedback. Lastly, an individual inverse temperature parameter gives the extent to which responding accords with integrated feedback prediction values. More precisely, the inverse temperature parameter indicates whether differences in integrated feedback prediction values are attenuated (inverse temperature values higher than 1) or emphasized (inverse temperature values less than 1). For a detailed description of the parallel RL model and further information about parameter estimation, see Appendix C.
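To illustrate how a retention rate and a learning rate act on a single feedback prediction value, consider the following minimal numerical sketch. The values are arbitrary and serve illustration only; the full vector-valued model is given in Appendix C.

```python
# Minimal numerical illustration (arbitrary values): how retention rate gamma and
# learning rate alpha shape a single feedback prediction value Q across one trial.
gamma, alpha = 0.8, 0.5
Q = 0.4                      # feedback prediction value carried over from earlier trials
Q = gamma * Q                # retention: only part of the learned information is kept
r = 1.0                      # positive feedback (negative feedback would be -1.0)
delta = r - Q                # prediction error
Q = Q + alpha * delta        # update attenuated by the learning rate
print(round(Q, 3))           # 0.66; with alpha = 1 the prediction error is added in full,
                             # with alpha = 0 no updating occurs
```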
We used Bayesian tests for direction to quantify evidence for effects of disease, medication, and session on model parameters [53,78]. In contrast to inclusion Bayes Factors, which we interpreted only if they were larger than 1, Bayes factors from Bayesian tests for direction were also interpreted if they were smaller than 1. As such, Bayes factors from Bayesian tests for direction indicate evidence for a decrease of a model parameter (BF < 1) or for an increase of a model parameter (BF > 1). For interpretation of evidential strength of Bayes factors, we followed [79]; Bayes factors larger than 3 (or less than 1/3) were interpreted as substantial evidence for the presence of an effect, Bayes factors larger than 10 (or less than 1/10) were interpreted as strong evidence for the presence of an effect, and Bayes factors larger than 100 (or less than 1/100) were interpreted as extreme evidence for the presence of an effect.
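One common way to obtain such a directional Bayes factor from posterior samples is the ratio of posterior mass supporting an increase to posterior mass supporting a decrease of the effect parameter. The sketch below illustrates this idea with synthetic samples; it is an assumption about the general form of the computation, and the exact implementation used in this study is specified in the code available at https://osf.io/nwrca.

```python
import numpy as np

def directional_bayes_factor(effect_samples):
    """Ratio of posterior mass supporting an increase (effect > 0) to posterior
    mass supporting a decrease (effect < 0). Sketch of one common way to compute
    a Bayesian test for direction from MCMC samples; details may differ from the
    study's implementation."""
    p_increase = np.mean(np.asarray(effect_samples) > 0)
    p_decrease = 1.0 - p_increase
    return p_increase / p_decrease

# Example with synthetic posterior samples of a group-difference parameter
rng = np.random.default_rng(0)
samples = rng.normal(loc=-0.3, scale=0.2, size=4000)   # mostly negative -> BF << 1
print(directional_bayes_factor(samples))               # e.g., ~0.07, evidence for a decrease
```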
Inspection of individual conditional error probabilities and estimates of model parameters did not indicate any considerable effects of gender, time between testing sessions, and time of withdrawal from DA replacement therapy on conditional error probabilities and parameter estimates or on any comparisons involving these measures (see Appendix D). The implemented code can be downloaded from https://osf.io/nwrca, which also provides further specifications of hierarchical Bayesian analysis.

3. Results

3.1. Error Analysis

Mean conditional error probabilities are shown in Figure 2. First, we analyzed the effects of PD pathology on conditional error probabilities. Results of the Bayesian ANOVA including the between-subjects factor disease are reported in Table 1. There was extreme evidence for an effect of error type on conditional error probabilities (BFinclusion > 1000). Conditional perseveration error probabilities were generally higher than conditional set-loss error probabilities. There was neither evidence for a main effect of disease on conditional error probabilities (BFinclusion = 0.452) nor for the interaction effect of error type and disease on conditional error probabilities (BFinclusion = 0.465).
Next, we analyzed the effects of DA replacement therapy in PD patients on conditional error probabilities. Results of the Bayesian ANOVA including the within-subjects factor medication are reported in Table 2. Again, we found extreme evidence for an effect of error type on conditional error probabilities (BFinclusion > 1000). However, there was no evidence for a main effect of medication on conditional error probabilities (BFinclusion = 0.247) and there was no evidence for an interaction effect of error type and medication on conditional error probabilities (BFinclusion = 0.351).

3.2. Computational Modeling

Descriptive statistics of group-level model parameters for cognitive and sensorimotor learning are presented in Figure 3. Learning rate parameters were overall higher for cognitive learning than for sensorimotor learning, indicating a stronger impact of cognitive learning when compared to sensorimotor learning. Learning rate parameters for cognitive learning were higher after positive than after negative feedback. Thus, participants showed stronger cognitive learning from positive feedback than from negative feedback. In contrast, learning rate parameters for sensorimotor learning were smaller after positive feedback than after negative feedback. In fact, the sensorimotor learning rate after positive feedback was close to zero. Hence, participants showed sensorimotor learning from negative feedback, but they barely showed sensorimotor learning from positive feedback. The sensorimotor retention rate was higher than the cognitive retention rate. Participants retained more sensorimotor-learning information from previous trials than cognitive-learning information from previous trials. The inverse temperature parameter was less than 1 (HC: median = 0.150, Q0.25 = 0.142, Q0.75 = 0.158; PD ‘off’: median = 0.157, Q0.25 = 0.145, Q0.75 = 0.168; PD ‘on’: median = 0.152, Q0.25 = 0.138, Q0.75 = 0.165), indicating that differences in integrated feedback prediction values were emphasized.
Computational modeling analysis revealed three effects of PD pathology on model parameters: First, there was strong evidence for a decreased cognitive retention rate in HC participants (BF = 0.095; see Table 3). Second, there was substantial evidence for an increased sensorimotor retention rate in HC participants (BF = 4.725). PD patients retained more cognitive-learning information from previous trials when compared to HC participants. In contrast, PD patients retained less sensorimotor-learning information from previous trials than HC participants did. Third, there was strong evidence for a decrease of the sensorimotor learning rate parameter after positive feedback in HC participants (BF = 0.073). With regard to effects of DA replacement therapy in PD patients on model parameters, we found substantial evidence for a decreased cognitive learning rate after positive feedback for PD patients ‘on’ medication (BF = 0.282; see Table 3). Thus, PD patients ‘on’ medication showed reduced cognitive learning from positive feedback. This finding was mirrored by the sensorimotor learning rate after positive feedback, as there was strong evidence for a decrease of this model parameter when PD patients were tested ‘on’ medication (BF = 0.077). There was also substantial evidence for an increase of the cognitive retention rate when PD patients were tested ‘on’ medication (BF = 3.323). Thus, DA replacement therapy further increased the heightened cognitive retention rate of PD patients.

4. Discussion

The present data show how the construct of PD-related bradyphrenia [15,16,17,18,19,20,21,22,23], which is difficult to study with purely behavioral methods, can be investigated through applying computational techniques. Computational modeling of trial-by-trial reinforcement learning on the WCST [44] revealed that PD patients and HC participants learned similarly from trial-by-trial feedback on the cWCST. The two groups differed with regard to retention rates, with PD patients retaining (a) learned cognitive information from previous trials better, and (b) learned sensorimotor information from previous trials worse than HC participants did. We also found that systemic DA replacement therapy, which was titrated in individual patients for the best possible motility, incurred bradyphrenic side effects by (a) further increasing cognitive retention rates, and (b) decreasing cognitive learning from positive feedback. In the following sections, we discuss implications for neuropsychological sequelae of PD and DA replacement therapy, as well as implications for the investigation of brain–behavior relationships and for neuropsychological assessment. Furthermore, we outline study limitations and future research directions.

4.1. Implications for Neuropsychological Sequelae of PD

Computational modeling revealed that PD patients retained learned cognitive information from previous trials better than HC participants did (see Figure 3). Cognitive retention rates represent one of the latent variables in our computational RL model [44]. Higher cognitive retention rates indicate better retention (i.e., higher levels of activation) of objects of thought (i.e., categories in the cWCST context) through time. In this case, mental shifting between objects of thought will be hampered due to stronger proactive interference that is exerted from the retained categories. We conclude from our data that PD patients are characterized by a reduced flexibility of cognitive learning compared to HC participants (see Figure 4a,b for illustration). Thus, PD pathophysiology [2,3,4] is associated with a cognitive symptom, which can probably be best described as ‘inflexibility of thought’ (i.e., bradyphrenia; [19,20,21,22]). As a result, PD should not merely be considered as a ‘movement disorder’ because cognitive symptoms such as attentional inflexibility represent an integral manifestation of the disease around its mid-stage [2,3,4].
Traditional behavioral indicators of bradyphrenia are confounded by the presence of bradykinesia in PD [20]. For example, response times on more or less complex sensorimotor tasks represent a mixture of mental and motor slowing, and disentangling mental slowing from prolonged response times has proven difficult to achieve [20]. Computational modeling provides a technique for isolating less impure (latent) measures of bradyphrenia, which seem to be less contaminated by bradykinesia than response times are. Errors might likewise originate from a mixture of a variety of different mental processes, one of which is bradyphrenia. Our study provides a good example of the mixture problem: We found that cognitive retention rates, but not perseveration errors, were sensitive to group membership (see Figure 2 and Figure 3). Behaviorally manifested perseverative tendencies may occur as sequelae of bradyphrenia under certain circumstances, but they are the result of a variety of different mental processes, some of which may not be affected by the PD pathophysiology. As a result, manifest behavioral expressions of bradyphrenia seem to offer less-sensitive indicators of PD-related bradyphrenia than computationally derived latent variables do.
PD patients also retained learned sensorimotor information from previous trials worse than HC participants (see Figure 3). We defined sensorimotor learning as being concerned with response selection. Noticeable sensorimotor learning happened exclusively after the reception of negative feedback, indicating that participants tended to avoid responses that were followed by negative feedback. Reduced sensorimotor retention rates, such as those seen in PD patients when compared to HC participants, show that learned sensorimotor information (i.e., which response to avoid) dissipated more rapidly through time (see Figure 4a,c for illustration). These data reveal that PD pathophysiology [2,3,4,80] is associated with another mental symptom, which can probably be best described as impaired stimulus-response learning (or, in terms of sensorimotor learning, selecting a key card by executing a response). Impaired stimulus-response learning has been repeatedly reported in PD patients, most frequently studied by means of probabilistic classification tasks [81,82,83].

4.2. Implications for Neuropsychological Sequelae of DA Replacement Therapy

We found that DA replacement therapy did not relieve PD-related cognitive symptoms. On the contrary, our results indicate that DA replacement therapy induces additional cognitive symptoms. That is, PD patients ‘on’ medication showed increased cognitive retention rates, indicating worsening of the already bradyphrenic attentional inflexibility compared to PD patients tested ‘off’ medication. In addition, PD patients ‘on’ medication showed decreased cognitive learning from positive feedback (see Figure 3). Reduced cognitive learning from positive feedback indicates that the activation of objects of thought (i.e., categories) achieves attenuated peaks, thereby inducing attentional instability (see Figure 4a,d for illustration). Thus, our results are in line with previous research, demonstrating that DA replacement therapy may induce iatrogenic cognitive impairments [1,5,7,8,10,11,12].
It is well-recognized that performance on cognitive tasks depends on optimal levels of DA, implying that both an insufficient and an excessive level of DA impairs performance on such tasks [1,5,7,8,10,11,12]. In early PD, DA depletion is most severe in the dorsal striatum, which is part of the nigro-striatal DA system. Other DA systems appear to be relatively spared from DA depletion in early PD, such as the meso-limbic and the meso-cortical DA systems [10]. DA replacement therapy of PD is titrated to ameliorate motor symptoms, aiming to restore the missing DA in the dorsal striatum. An optimal DA replacement therapy may relieve motor symptoms, but it can cause an overdosing of the less DA-depleted meso-limbic and meso-cortical DA systems. Consequently, the cognitive impairments that were induced by DA replacement therapy (see above) in this study might occur as a corollary of excessive DA levels in the meso-limbic and/or the meso-cortical DA system [1,5,7,8].
This study found that DA replacement therapy incurred two iatrogenic side effects, i.e., attentional inflexibility, indicated by increased cognitive retention rates, and attentional instability, indicated by decreased cognitive learning from positive feedback. The meso-cortical DA system plays a crucial role in attentional flexibility [84,85,86], whereas the meso-limbic DA system is associated with anticipation of reward or positive feedback [87,88]. Thus, the iatrogenic cognitive impairments induced by DA replacement therapy could possibly be subserved by distinct DA systems [89]; the overstimulation of the meso-cortical DA system might cause attentional inflexibility, as indicated by an increased cognitive retention rate, whereas an overstimulation of the meso-limbic DA system might induce attentional instability via reduced cognitive learning from positive feedback.

4.3. Implications for Brain–Behavior Relationships

Computational models specify explicit cognitive architectures, i.e., cognitive components and their exactly defined ways of interaction. We considered interactions between explicitly defined reinforcement learning processes (Equations (A2), (A3), (A6) and (A7)), plus simple mechanisms of short-term retention (Equations (A1) and (A5)), as the cognitive architecture, which was finally expressed as overt behavior on the cWCST. The particular cognitive architecture of our computational model may be best described as parallel RL, and the question how well competing models fit behavioral cWCST data has been addressed elsewhere [44]. The results from that study led to the conclusion that parallel RL provides a better conceptualization of behavioral cWCST data than competing models do, thereby lending initial credibility to the potential adequacy of parallel RL as a suitable descriptor of some of the covert cognitive processes that underlie overt behavior on the WCST.
The issue of how computationally derived cognitive processes are mapped to neural mechanisms deserves further inquiry. In particular, details of the underlying neural mechanisms that are associated with the latent variables that can be gained from applying our parallel RL model, such as individual learning rate or retention rate parameters or trial-wise feedback prediction values or prediction errors, should be addressed by appropriate brain imaging studies. The approach to combine computational modeling and brain imaging represents a promising avenue for the advancement of brain–behavior relationships [49,53]. The main reason why we consider this approach as a promising technique is that computationally derived latent variables may provide less impure indicators of covert cognitive processes than behavioral indicators do. As such, the parallel RL model conceptualizes the final behavioral outcome on any trial of the cWCST as emerging from a mixture of many interacting, but isolable, componential processes (Equations (A8) and (A9)).

4.4. Implications for Neuropsychological Assessment

To date, clinical neuropsychological assessment of cognitive dysfunctions relies almost exclusively on the results that can be obtained from behavioral observations. The WCST is just one of the many examples of how clinical neuropsychological assessment usually works: Test authors and examiners draw ad-hoc conclusions about cognitive dysfunctions, which are based on counts of the occurrence of particular behavioral events, such as—in the example of the WCST—the number of perseveration and/or set-loss errors committed. Clinical neuropsychological assessment refers to cognitive assessment, although these covert cognitive processes remain unobservable. Thus, clinical neuropsychological assessment involves inferences that clearly exceed the behavioral observations. With regard to the WCST, typical inferences would be that a patient who showed corresponding behavioral signs suffers from impaired abstraction, or from cognitive inflexibility and/or distractibility. Cognitive constructs utilized for clinical neuropsychological assessment are often ill-defined, bearing the problem that they appear as arbitrary re-descriptions of the behavioral observations that were made.
Computational modeling bears the potential for clinical neuropsychological assessment to improve its inferential capability. Specifically, latent variables that represent computationally derived reflections of presumed cognitive processes may replace the traditional verbal constructs of clinical neuropsychological assessment. As noted above, behavioral indicators are better conceived as resulting from a mixture of many contributing covert processes. Selecting just one ‘main’ process may constitute an over-simplified inference. Computationally derived latent variables may be less susceptible to this shortcoming of the traditional manner in which clinical neuropsychological assessment is practiced. In that regard, it should be noted that our parallel RL model yields—for each individual—a set of latent variables. We found that some of these latent variables, but not directly observed behavioral indicators (i.e., conditional error probabilities; see Figure 2), were sensitive to disease/medication status. This sensitivity gradient may occur because the latent variables are less impure than behavioral indicators are. We also showed that the variability of the latent variables is unique, i.e., that it is non-redundant with regard to the conventional measures that are available from non-computational investigation (Table A7). However, it remains a possibility that decomposing observable behavioral indicators into less impure error scores provides another route towards an assessment of more specific cognitive dysfunctions (e.g., [27,79,90]). Yet, until now, this route has not led to sufficiently pure behavioral measures of cognitive dysfunctions.
There are some relatively straightforward clinical implications of computational cognitive neuropsychology. Considering the present study as an exemplary forerunner of that approach, the pursuit of computational cognitive neuropsychology may change our way of caring for patients with chronic neurological diseases such as PD in several ways. First, our general diagnostic work-up may change as outlined in the previous paragraphs, shifting our focus away from behavioral observations toward latent cognitive variables. Latent variables may provide more detailed and less impure information about cognitive sequelae of chronic neurological diseases and their progression. This could enable us to trace, and hopefully predict the long-term course of individual patients. Second, given that some of the latent cognitive variables were sensitive to adverse effects of DA treatment, such computational diagnostic work-ups may guide titrating DA medication dosage in individual PD patients in such a way that desired treatment effects are at their optimum while non-desirable treatment effects are minimized.

4.5. Study Limitations and Directions for Future Research

PD was shown to be robustly associated with impaired behavioral WCST indicators (for a meta-analysis, see [28]). In this study, mean conditional set-loss error probabilities—and to a lesser extent conditional perseveration error probabilities—were increased in PD patients when compared to HC participants (see Figure 2). However, we did not find evidence for an effect of disease on conditional error probabilities (see Table 1), most likely due to the relatively small sample size, which constitutes a limitation of this study. Thus, a replication study with larger sample size might reveal more nuanced effects of PD pathophysiology and/or DA replacement therapy. Furthermore, results of Bayesian repeated measures ANOVAs should be interpreted with caution, since conditional error probabilities appeared to be skewed towards zero, which might indicate a violation of the assumption of normally distributed conditional error probabilities.
Furthermore, DA replacement therapy was not fully balanced across first and second testing sessions, which might limit the interpretability of our results. However, in the computational modeling analysis, we countered this problem by explicitly accounting for the repeated cWCST administration, allowing us to disentangle effects of DA replacement therapy from session effects on model parameters (see Appendix E). Nevertheless, our results need to be consolidated using a fully balanced study design. Furthermore, effects of DA medication, particularly L-Dopa, can last for several days after withdrawal [91]. Thus, the time of withdrawal from DA replacement therapy in many PD patients of this study (median = 14 h) might have been too short for a complete DA depletion. L-Dopa and dopamine agonists were shown to have differentiable cognitive effects [5]. However, the majority of PD patients in this study received a mixture of L-Dopa and DA agonists (see Table A2). Hence, cognitive effects of L-Dopa and DA agonists were not dissociable in this study; their specific effects should be addressed by future research. When investigating the cognitive effects of DA medication, it is also important to consider the DA receptors that are targeted by the administered DA medication [11,92], as DA receptor subtypes may be related to distinct cognitive processes [61,73,93,94].
Results of the Montreal Cognitive Assessment did not indicate that any PD patient was affected by mild cognitive impairment (MCI) [95]. However, the Montreal Cognitive Assessment represents a fast assessment tool for screening potential MCI in PD. Thus, a reliable diagnosis based on a comprehensive assessment is missing [96], which constitutes a further limitation of this study.
In our earlier study, we concluded that parallel RL provides a better conceptualization of behavioral cWCST data than competing models do [44]. It remains an open question whether this conclusion, which was based on the analysis of a large sample of young volunteers (N = 375), is generalizable to other populations, such as PD patients. In this study, we did not consider competing computational models because the relatively small sample size limited the validity of such model comparison efforts. Future studies should—whenever appropriate—consider all candidate computational models and base further analyses on the computational model that provides the best conceptualization of behavioral data [97].
Estimates of the sensorimotor learning rate following positive feedback were close to zero (see Figure 3), which is in line with results of our previous study [44]. Thus, sensorimotor learning virtually did not occur following positive feedback (see Figure 4a,d for illustration; for a detailed discussion, see [44]). As even the highest estimates of the sensorimotor learning rate following positive feedback were negligible in size and variance when compared to estimates of other learning rate parameters, we refrained from any interpretation of disease and medication effects on this close-to-zero model parameter.
Our parallel RL model provides a route towards the advanced assessment of cognitive dysfunctions. In this exemplary study, we analyzed data obtained from a computerized variant of the WCST, i.e., the cWCST. However, the parallel RL model is not restricted to the analysis of cWCST data but is applicable to data obtained with any available WCST variant [41,98,99,100]. The sole requirement for implementing the parallel RL model is that data must be provided in a trial-by-trial format, rendering our computational modeling approach a promising method for (re)analyzing clinical WCST data. In order to increase the power of computational modeling analysis, such data sets might be merged across several studies and/or research centers.
There are two major statistical advantages of the implemented computational approach over the conducted error analysis. First, we estimated model parameters under consideration of individual trial-by-trial dynamics (as defined by the parallel-RL model), whereas conditional error probabilities were computed by simply aggregating the number of committed perseveration or set-loss errors. Second, we used hierarchical Bayesian analysis for parameter estimation [74,101]. Hierarchical Bayesian analysis provides high statistical power by considering individual differences in model parameters while pooling information across all individuals by means of group-level parameters. In contrast, group-level statistics of error analysis were computed as mean conditional error probabilities.

5. Conclusions

PD patients showed (a) bradyphrenic attentional inflexibility, and (b) reduced sensorimotor retention. DA medication did not remedy either of these cognitive symptoms. On the contrary, DA medication decreased cognitive learning from positive feedback in PD patients, thereby inducing attentional instability. We discussed the iatrogenic effects of DA medication as probably originating from overdosing meso-limbic/cortical DA systems. In conclusion, apart from reduced sensorimotor retention, PD patients who are under DA medication are prone to (a) bradyphrenic attentional inflexibility, probably due to the characteristic brain pathophysiology, and (b) attentional instability, probably due to iatrogenic effects of DA medication. These insights were made possible through the application of a computational RL model [44], which served to decompose WCST performance into covert constituent parts and provided latent variables that may be less impure than behavioral indicators of cognitive impairments. These computationally derived latent variables should be utilized in future research for investigating brain–behavior relationships. The latent variables may also be utilized to derive novel neuropsychological assessment techniques for bradyphrenia/attentional flexibility in PD and in other neurological diseases and psychiatric disorders.

Author Contributions

Conceptualization, A.S. and B.K.; methodology, A.S. and B.K.; software, A.S.; validation, A.S.; formal analysis, A.S. and M.K.H.; investigation, A.S., F.L., C.S. and B.K.; resources, B.K.; data curation, A.S., M.K.H., F.L. and C.S.; writing—original draft preparation, A.S. and B.K.; writing—review and editing, A.S., F.L., C.S., M.K.H. and B.K.; visualization, A.S. and B.K.; supervision, B.K.; project administration, F.L., C.S. and B.K.; funding acquisition, F.L., C.S. and B.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Petermax-Müller Stiftung, Hannover, Germany. Florian Lange received funding from the German National Academic Foundation and the FWO and European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 665501.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Demographic, clinical, and psychological characteristics.
| | Healthy Control Participants (N = 34) | | | Parkinson’s Disease Patients (N = 16) | | |
| --- | --- | --- | --- | --- | --- | --- |
| | Mean | SD | n | Mean | SD | n |
| Age (years) | 63.18 | 9.71 | 34 | 59.94 | 8.84 | 16 |
| Education (years) | 13.49 | 4.09 | 34 | 14.40 | 2.88 | 15 |
| Disease duration (years) | - | - | - | 6.75 | 4.04 | 16 |
| UPDRS III ‘on’ | - | - | - | 17.75 | 8.05 | 16 |
| UPDRS III ‘off’ | - | - | - | 27.92 | 12.68 | 13 |
| LEDD | - | - | - | 932.63 | 437.86 | 16 |
| MoCA (cognitive status) | 28.35 | 1.97 | 34 | 27.47 | 1.08 | 16 |
| WST (premorbid intelligence) | 30.71 | 3.79 | 34 | 23.00 | 32.71 | 16 |
| AES (apathy) | 10.77 | 6.81 | 34 | 15.31 | 8.85 | 16 |
| BDI-II (depression) | 6.59 | 8.28 | 34 | 6.56 | 4.43 | 16 |
| BSI-18 (psychiatric status) | 6.16 | 7.99 | 33 | 7.07 | 4.89 | 14 |
| Anxiety | 1.82 | 2.31 | 33 | 2.79 | 1.37 | 14 |
| Depression | 1.91 | 3.14 | 33 | 1.29 | 1.33 | 14 |
| Somatization | 2.44 | 3.26 | 33 | 3.00 | 2.96 | 14 |
| SF-36 (health status) | 74.21 | 20.12 | 33 | 59.07 | 16.66 | 14 |
| Physical functioning | 79.55 | 22.65 | 33 | 56.79 | 24.15 | 14 |
| Physical role functioning | 71.97 | 40.87 | 33 | 35.71 | 41.27 | 14 |
| Bodily pain | 71.58 | 26.20 | 33 | 61.71 | 26.67 | 14 |
| General health perception | 60.33 | 18.35 | 33 | 51.29 | 20.08 | 14 |
| Vitality | 63.79 | 19.61 | 33 | 54.29 | 17.85 | 14 |
| Social role functioning | 88.26 | 16.52 | 33 | 65.18 | 24.60 | 14 |
| Emotional role functioning | 84.91 | 30.11 | 33 | 83.36 | 36.39 | 14 |
| Mental health | 77.58 | 15.84 | 33 | 67.71 | 12.10 | 14 |
| BIS-brief (impulsiveness) | 15.45 | 4.17 | 34 | 14.84 | 3.74 | 16 |
| DII (impulsivity) | | | | | | |
| Functional | 5.85 | 2.95 | 34 | 6.00 | 2.45 | 16 |
| Dysfunctional | 2.38 | 2.59 | 34 | 3.31 | 3.95 | 16 |
| QUIP-RS (impulse control) | 0.61 | 1.42 | 28 | 6.00 | 9.78 | 15 |
| SPQ (schizotypal traits) | 4.46 | 3.67 | 33 | 4.57 | 3.67 | 14 |
| Interpersonal | 2.52 | 2.41 | 33 | 2.36 | 1.91 | 14 |
| Cognitive-perceptual | 1.15 | 1.18 | 33 | 1.07 | 1.39 | 14 |
| Disorganized | 0.79 | 1.11 | 33 | 1.14 | 1.51 | 14 |

Disease duration was defined as the difference between the year of testing and the year of Parkinson’s disease symptom onset in the medical records; UPDRS III = Unified Parkinson’s Disease Rating Scale-part III; LEDD = Levodopa Equivalent Daily Dosage (mg per day) [102]; MoCA = Montreal Cognitive Assessment [67]; WST = Wortschatztest [103]; AES = Apathy Evaluation Scale [104]; BDI-II = Beck Depression Inventory-II [105]; BSI-18 = Brief Symptom Inventory (18-item version) [106]; SF-36 = Short Form Health Survey [107]; BIS-Brief = Barratt Impulsiveness Scale (8-item version) [108]; DII = Dickman Impulsivity Inventory [109]; QUIP-RS = Questionnaire for Impulsive-Compulsive Disorders in Parkinson’s Disease—Rating Scale [110]; SPQ = Schizotypal Personality Questionnaire [111].
Table A2. Parkinson’s disease patients’ medication and LEDD scores.
| Patient Number | Medication (mg) | LEDD |
| --- | --- | --- |
| 1 | Pramipexole 3.15 | 450 |
| 2 | Pramipexole 0.52, Rasagiline 1 | 175 |
| 3 | Pramipexole 3.15, L-Dopa 300, C-L-Dopa 100 | 825 |
| 4 | L-Dopa 400, Amantadine 200, Rasagiline 1, Rotigotine 12 | 1060 |
| 5 | L-Dopa 1000, Entacapone 1000, Pramipexole 1.75, Oral Selegiline 10 | 1680 |
| 6 | L-Dopa 400 | 400 |
| 7 | L-Dopa 600, Entacapone 600, Pramipexole 1.04 | 948 |
| 8 | L-Dopa 350, C-L-Dopa 500, Pramipexole 2.1 | 1025 |
| 9 | L-Dopa 400, Rotigotine 2 | 460 |
| 10 | L-Dopa 550, Rotigotine 16, Rasagiline 1, Amantadine 200 | 1330 |
| 11 | L-Dopa 600, Amantadine 200, Rotigotine 6, Cabergoline 6 | 1380 |
| 12 | L-Dopa 600, C-L-Dopa 100, Entacapone 1200, Pramipexole 1.57 | 1098 |
| 13 | L-Dopa 600, C-L-Dopa 300, Entacapone 800, Cabergoline 6 | 1497 |
| 14 | L-Dopa 700, C-L-Dopa 100, Entacapone 600, Rotigotine 8 | 1246 |
| 15 | L-Dopa 500, Piribedil 50 | 550 |
| 16 | L-Dopa 600, Entacapone 800 | 798 |

LEDD = Levodopa Equivalent Daily Dosage (mg per day) [102]; C-L-Dopa = Controlled-Release L-Dopa; Pramipexole was reported as the base form dose. For computation of LEDD scores, the salt form was used (Pramipexole dihydrochloride 1 H2O) [102].

Appendix B

Descriptive statistics of conditional error probabilities for the first and second testing sessions are reported in Table A3. A Bayesian ANOVA with the factors error type (perseveration error vs. set-loss error) and session (first vs. second session; see Table A4) revealed extreme evidence for an effect of error type on conditional error probabilities (BFinclusion > 1000). There was also extreme evidence for a main effect of session on conditional error probabilities (BFinclusion = 319.214), indicating that conditional error probabilities were generally reduced on the second testing session. There was no evidence for the interaction effect of error type and session on conditional error probabilities (BFinclusion = 1.962).
Table A3. Descriptive statistics of conditional error probabilities for first and second testing sessions (mean with 95% credibility interval in parentheses).
| Error Type | First Session | Second Session |
| --- | --- | --- |
| Perseveration Error | 0.145 (0.121, 0.169) | 0.110 (0.086, 0.133) |
| Set-Loss Error | 0.052 (0.039, 0.064) | 0.034 (0.023, 0.044) |
Table A4. Analysis of session effects on conditional error probabilities.
| Effect | p(Inclusion) | p(Inclusion\|Data) | BFinclusion |
| --- | --- | --- | --- |
| Error Type | 0.600 | >0.999 | >1000 *** |
| Session | 0.600 | 0.998 | 319.214 *** |
| Error Type × Session | 0.200 | 0.329 | 1.962 |
*** extreme evidence.

Appendix C

The cognitive Q-learning algorithm operates on feedback prediction values of sorting categories. More precisely, $Q_C(t)$ is a 3 (categories) × 1 vector that gives feedback prediction values for the application of any category on trial $t$. For trial-wise updating of feedback prediction values, the strength of transfer of feedback prediction values from trial $t$ to trial $t+1$ is modeled as:

$$Q_C(t) = \gamma_C \, Q_C(t) \qquad \text{(A1)}$$

where $\gamma_C \in [0,1]$ is an individual cognitive retention rate. High values of $\gamma_C$ represent a strong impact of previous feedback prediction values on current responding. Next, trial-wise prediction errors $\delta_C(t)$ are computed with regard to the applied category $u \in U$ on trial $t$ as:

$$\delta_C(t) = r(t) - Q_{C,u}(t) \qquad \text{(A2)}$$

where $r(t)$ is 1 or −1 for a received positive or negative feedback, respectively. Feedback prediction values for the application of categories are updated following a delta-learning rule:

$$Q_C(t+1) = Q_C(t) + Y_C(t) \, \alpha_C \, \delta_C(t) \qquad \text{(A3)}$$

$Y_C(t)$ is a 3 × 1 dummy vector, which is 1 for the applied category $u$ and 0 for all other categories on trial $t$. $Y_C(t)$ ensures that only the feedback prediction value of the applied category $u$ on trial $t$ is updated by the prediction error $\delta_C(t)$. Independent individual learning rate parameters for positive and negative feedback, $\alpha_C^+ \in [0,1]$ and $\alpha_C^- \in [0,1]$, respectively, quantify the strength of learning from the presence of a prediction error. Feedback prediction values for the application of categories $Q_C(t+1)$ are matched to responses, which is represented by a 4 (responses) × 1 vector $Q_{RC}(t+1)$. For response $v \in V$ on trial $t+1$, $Q_{RC,v}(t+1)$ is computed as:

$$Q_{RC,v}(t+1) = X_v^{T}(t+1) \, Q_C(t+1) \qquad \text{(A4)}$$

where $X_v(t+1)$ is a 3 (categories) × 1 vector that represents the match between a stimulus card and response $v$ with regard to sorting categories. Let $X_v(t+1)$ be 1 if a stimulus card matches response $v$ by category $u$ and let $X_v(t+1)$ be 0 otherwise. $X_v^{T}(t+1)$ denotes the transpose of $X_v(t+1)$. To account for responses that match no viable sorting category (i.e., responses that certainly yield a negative feedback), feedback prediction values of such responses are assigned a value of $Q_{RC,v}(t+1) = -1$.
Sensorimotor Q-learning parallels cognitive Q-learning. However, sensorimotor Q-learning directly operates on feedback prediction values for the execution of responses. Hence, a 4 (responses) × 1 vector Q_S(t) gives feedback prediction values for the execution of any response on trial t. Again, the strength of the transfer of feedback prediction values from trial t to trial t+1 is modeled as:
$Q_S(t) = \gamma_S \, Q_S(t)$
with the individual sensorimotor retention rate γ_S ∈ (0,1). Trial-wise prediction errors δ_S(t) with regard to the executed response v on trial t are computed as:
$\delta_S(t) = r(t) - Q_{S,v}(t)$
Feedback prediction values for the execution of responses are then updated as:
$Q_S(t+1) = Q_S(t) + Y_S(t) \, \alpha_S \, \delta_S(t)$
Again, Y_S(t) is a 4 × 1 dummy vector that is 1 for the executed response v and 0 for all other responses on trial t, which ensures that only the feedback prediction value of the executed response is updated by the prediction error. For sensorimotor learning, we assumed independent learning rate parameters for positive and negative feedback, α_S^+ ∈ [0,1] and α_S^- ∈ [0,1], respectively.
Response probabilities are computed by linearly integrating the feedback prediction values of cognitive and sensorimotor Q-learning. Integrated feedback prediction values Q_sum(t+1) on trial t+1 are computed as:
$Q_{sum}(t+1) = Q_{RC}(t+1) + Q_S(t+1)$
The probability of executing response v on trial t+1 is computed by applying a “softmax” logistic function to the integrated feedback prediction values:
$P_v(t+1) = \frac{e^{Q_{sum,v}(t+1)/\tau}}{\sum_{j=1}^{4} e^{Q_{sum,j}(t+1)/\tau}}$
where τ ∈ [0, ∞) is an individual inverse temperature parameter, indicating whether differences in integrated feedback prediction values are attenuated (τ > 1) or emphasized (0 < τ < 1).
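For completeness, the sketch below continues the same illustrative Python implementation: the sensorimotor update, the linear integration of both value streams, and the softmax choice rule. The placement of τ as a divisor in the exponent follows our reading of the description above (τ > 1 attenuates, τ < 1 emphasizes value differences); the names are again ours, not the authors'.

```python
import numpy as np

def sensorimotor_update(q_s, executed_response, reward, gamma_s, alpha_s_pos, alpha_s_neg):
    """One trial of sensorimotor Q-learning over the four responses."""
    q_s = gamma_s * np.asarray(q_s, dtype=float)   # retention of previous response values
    delta = reward - q_s[executed_response]        # prediction error for the executed response
    alpha = alpha_s_pos if reward > 0 else alpha_s_neg
    q_s[executed_response] += alpha * delta
    return q_s

def response_probabilities(q_rc, q_s, tau):
    """Integrate cognitive and sensorimotor values and apply the softmax rule."""
    q_sum = np.asarray(q_rc, dtype=float) + np.asarray(q_s, dtype=float)  # linear integration
    scaled = q_sum / tau                            # tau > 1 flattens, tau < 1 sharpens differences
    scaled -= scaled.max()                          # subtract the maximum for numerical stability
    exp_q = np.exp(scaled)
    return exp_q / exp_q.sum()
```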
Parameter estimation of the parallel RL model was carried out by means of hierarchical Bayesian analysis [74,76,101,112,113,114,115,116] using RStan [117]. Hierarchical Bayesian analysis assumes that individual-level model parameters are nested within group-level model parameters, which allows parameter estimation with high statistical power [65,101,112]. More precisely, the parallel RL model comprised seven model parameters, Z = [α_C^+, α_C^-, γ_C, α_S^+, α_S^-, γ_S, τ]. Each individual-level model parameter was drawn from a group-level normal distribution with mean μ_z and standard deviation σ_z. In order to facilitate hierarchical Bayesian analysis, we used a non-centered parameterization [74,118]. That is, we sampled individual-level model parameters from a standard normal distribution that was multiplied by σ_z (i.e., the group-level scale parameter) and shifted by μ_z (i.e., the group-level location parameter). The individual deviation of participant i from μ_z was introduced by the individual-level deviation parameter z′_i. Thus, model parameter z_i was parameterized as:
$z_i = \mu_z + \sigma_z \, z'_i$
In order to research the relationships of PD pathology and DA replacement therapy with model parameters, we extended the hierarchical Bayesian analysis following the procedure outlined by [119] (see also [53]). First, to account for associations of PD pathology with model parameters, we introduced the between-subjects effect disease. Disease-related shifts of model parameters were accounted for by adding the parameter μ_disease,z to the group-level location parameter μ_z exclusively for HC participants. Second, in order to account for medication-related shifts of model parameters, we introduced the within-subjects effect medication. For all testing sessions of PD patients ‘on’ medication, we added the variable medication_z,i to the group-level location parameter μ_z. As medication was a within-subjects effect, it was a participant-specific random variable that was drawn from a group-level normal distribution with location parameter μ_medication,z, standard deviation σ_medication,z, and individual-level deviation parameter z′_medication,i. Again, we used a non-centered parameterization for medication_z,i:
$medication_{z,i} = \mu_{medication,z} + \sigma_{medication,z} \, z'_{medication,i}$
In order to account for variance in model parameters that was introduced by the repeated administration of the cWCST, we introduced the within-subjects effect session. The variable session_z,i was added to the group-level location parameter μ_z for all second testing sessions. session_z,i was again a participant-specific random variable that was drawn from a group-level normal distribution with location parameter μ_session,z, standard deviation σ_session,z, and individual-level deviation parameter z′_session,i:
$session_{z,i} = \mu_{session,z} + \sigma_{session,z} \, z'_{session,i}$
Note that session effects were not of primary interest for this study. For analyses of session effects, see Appendix E.
To further facilitate hierarchical Bayesian analysis, we conducted parameter estimation in an unconstrained space [74,76,118], i.e., location and scale parameters were free to vary without constraints. As model parameters had upper and lower boundaries, the linear integration of location and scale parameters (Equation (A13)) was mapped to the interval [0, 1] using the Probit function. The Probit is the inverse-cumulative standard normal distribution. Note that this procedure is also appropriate for the inverse temperature parameter, which did not exceed estimates of 1 [44].
In summary, hierarchical Bayesian analysis included data from J = 100 cWCST administrations that were obtained from 34 HC participants and 16 PD patients, who were all administered the cWCST twice. For PD patients, testing sessions were conducted either ‘on’ or ‘off’ dopaminergic medication. Thus, model parameter z j of cWCST administration j was defined as:
$z_j = \mathrm{Probit}\left(\mu_z + \sigma_z \, z'_i + I_{d,j} \, \mu_{disease,z} + I_{m,j} \, medication_{z,i} + I_{s,j} \, session_{z,i}\right)$
In order to account for the potential role of disease, medication, and session in cWCST administration j, we included a set of dummy variables I. Disease-related shifts of model parameters were modeled by including the parameter μ_disease,z exclusively for HC participants. Thus, I_d,j was coded 1 if cWCST administration j was obtained from an HC participant and 0 otherwise. Shifts of model parameters for cWCST administrations ‘on’ medication were modeled by the variable medication_z,i. Thus, I_m,j was coded 1 if cWCST administration j was obtained from a PD patient ‘on’ medication and 0 otherwise. The variable session_z,i was exclusively added for second testing sessions. Thus, I_s,j was coded 1 if cWCST administration j was a second testing session and 0 otherwise.
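The sketch below illustrates, under our reading of the equation for z_j above, how a bounded individual-level parameter for a single cWCST administration would be assembled from the group-level location, the individual deviation, and the dummy-coded disease, medication, and session effects. The mapping to [0, 1] is implemented here as the standard normal cumulative distribution function, i.e., the inverse of the probit link; this is an assumption about the intended mapping, and all names are illustrative rather than taken from the authors' Stan model.

```python
import numpy as np
from scipy.stats import norm

def individual_parameter(mu_z, sigma_z, z_dev_i,
                         mu_disease_z, medication_zi, session_zi,
                         is_hc, on_medication, second_session):
    """Assemble a bounded model parameter for one cWCST administration.

    mu_z, sigma_z   : group-level location and scale
    z_dev_i         : participant's standard-normal deviation (non-centered parameterization)
    mu_disease_z    : shift added for healthy control participants only
    medication_zi   : participant-specific shift for sessions 'on' medication
    session_zi      : participant-specific shift for second testing sessions
    is_hc, on_medication, second_session : dummy indicators (0 or 1)
    """
    unconstrained = (mu_z + sigma_z * z_dev_i
                     + is_hc * mu_disease_z
                     + on_medication * medication_zi
                     + second_session * session_zi)
    # Map the unconstrained value to [0, 1] via the standard normal CDF
    # (the inverse of the probit link; an assumption about the intended mapping).
    return norm.cdf(unconstrained)
```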
In line with previous studies [44,74], all location parameters had normal prior distributions (μ = 0, σ = 1) and all scale parameters had half-Cauchy prior distributions (μ = 0, σ = 5). Sampling was done using three chains of 1000 iterations, including 500 warm-up iterations per chain. Convergence of chains was checked visually by trace plots and quantitatively by the R̂ statistic [120].
Bayes factors (BF) for effects of disease, medication, and session were computed by dividing the posterior density of the parameters μ_disease,z, μ_medication,z, or μ_session,z above zero by the respective posterior density below zero. This method is appropriate as the prior distributions of μ_disease,z, μ_medication,z, and μ_session,z were symmetric around zero [121].
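A minimal sketch of this directional Bayes factor computation from posterior samples is shown below. It simply contrasts the posterior mass above and below zero, which approximates the Bayes factor under the symmetric priors used here; the example values are hypothetical.

```python
import numpy as np

def directional_bayes_factor(posterior_samples):
    """Bayes factor for a positive vs. negative effect from MCMC samples.

    With a prior that is symmetric around zero, the ratio of posterior mass
    above zero to posterior mass below zero gives the directional Bayes factor
    (cf. the one-sided Bayesian p-value logic of Marsman & Wagenmakers).
    """
    samples = np.asarray(posterior_samples, dtype=float)
    p_above = np.mean(samples > 0)
    p_below = np.mean(samples < 0)
    return p_above / p_below

# Usage example with hypothetical posterior samples of a group-level effect:
rng = np.random.default_rng(0)
mu_effect_samples = rng.normal(loc=0.3, scale=0.2, size=1500)
print(directional_bayes_factor(mu_effect_samples))
```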
For descriptive statistics of model parameters, we analyzed posterior distributions of group-level location parameters. The group-level posterior distribution of model parameter z was computed as:
$z = \mathrm{Probit}\left(\mu_z + I_d \, \mu_{disease,z} + I_m \, \mu_{medication,z} + I_s \, \mu_{session,z}\right)$
Dummy variables I were used to compute group-level model parameter posterior distributions for HC participants (I_d = 1, I_m = 0) and for PD patients ‘off’ (I_d = 0, I_m = 0) and ‘on’ (I_d = 0, I_m = 1) medication. As we were not interested in session effects on model parameters, we averaged posterior distributions of model parameters across estimates of first (I_s = 0) and second testing sessions (I_s = 1).

Appendix D

Figure A1. Individual conditional error probabilities separated by (a) gender, (b) median time between testing sessions, and (c) median time of withdrawal from dopaminergic medication.
Figure A2. Medians of individual-level posterior distributions (derived by Equation (A13)) separated by (a) gender, (b) median time between testing sessions, and (c) median time of withdrawal from dopaminergic medication.

Appendix E

For descriptive statistics of model parameters for the first and second testing session, we averaged posterior distributions of model parameters across HC participants and PD patients (Table A5).
Table A5. Descriptive statistics of group-level posterior distributions of model parameters for first and second testing sessions (median with lower and upper quartiles in parentheses).
Parameter    First Session             Second Session
α_C^+        0.517 (0.477, 0.558)      0.612 (0.573, 0.652)
α_C^-        0.219 (0.191, 0.249)      0.342 (0.307, 0.378)
γ_C          0.121 (0.096, 0.148)      0.188 (0.154, 0.225)
α_S^+        0.004 (0.002, 0.009)      <0.001 (<0.001, 0.001)
α_S^-        0.054 (0.046, 0.065)      0.031 (0.024, 0.039)
γ_S          0.345 (0.274, 0.406)      0.401 (0.307, 0.480)
τ            0.149 (0.139, 0.158)      0.157 (0.148, 0.167)
There was strong and extreme evidence for an increase of cognitive learning rate parameters from the first to the second testing session (BF = 15.854 and BF = 1499.000 following positive and negative feedback, respectively; see Table A6). There was also strong evidence for a decrease of sensorimotor learning rate parameters from the first to the second testing session (BF = 0.046 and BF = 0.016 following positive and negative feedback, respectively). Cognitive learning from feedback was stronger on the second testing session when compared to the first testing session. In contrast, sensorimotor learning was weaker on the second testing session when compared to the first testing session. Additionally, there was strong evidence for an increase of the cognitive retention rate from the first to the second testing session (BF = 47.387), indicating that more cognitive information from previous trials was retained on the second testing session.
Table A6. Bayes factors for effects of session on model parameters.
Parameter    Bayes Factor
α_C^+        15.854 **
α_C^-        1499.000 ***
γ_C          47.387 **
α_S^+        0.046 **
α_S^-        0.016 **
γ_S          2.282
τ            2.676
** strong evidence; *** extreme evidence.

Appendix F

Table A7. Pearson correlation coefficients between model parameters and conditional error probabilities as well as participants’ demographic, clinical, and psychological characteristics.
Columns give Pearson correlation coefficients separately for Healthy Control Participants (left block) and Parkinson’s Disease Patients (right block); within each block, the seven columns correspond to α_C^+, α_C^-, γ_C, α_S^+, α_S^-, γ_S, and τ, in that order.
Perseveration errors−0.55 **−0.92 ***−0.66 ***−0.320.340.080.52 **−0.63 *−0.91 ***−0.440.340.540.300.63 *
Set-loss Errors−0.94 ***−0.70 ***−0.61 ***−0.210.07−0.040.80 ***−0.96 ***−0.69 **−0.57 *0.67 *0.220.300.88 ***
Age (years)−0.40 *−0.30−0.35−0.06−0.060.150.42 *−0.50−0.42−0.460.39−0.020.320.42
Education (years)0.160.07−0.03−0.050.10−0.21−0.050.62 *0.340.40−0.54−0.18−0.05−0.55
Disease duration (years) −0.47−0.10−0.050.51−0.030.090.33
UPDRS III ‘on’ −0.180.200.020.14−0.41−0.200.24
UPDRS III ‘off’ −0.200.470.320.14−0.14−0.060.19
LEDD −0.29−0.010.060.23−0.23−0.240.19
MoCA (cognitive status)0.42 *0.340.23−0.210.170.12−0.390.030.450.160.00−0.05−0.11−0.16
WST (premorbid intelligence)0.330.280.210.070.01−0.08−0.400.360.310.150.020.150.09−0.43
AES (apathy)−0.14−0.14−0.03−0.210.040.100.02−0.500.080.050.10−0.32−0.020.48
BDI-II (depression)−0.14−0.21−0.21−0.040.18−0.050.09−0.020.410.19−0.37−0.38−0.170.04
BSI-18 (psychiatric status)−0.23−0.22−0.26−0.130.07−0.020.16−0.38−0.11−0.250.200.150.520.27
Anxiety−0.24−0.17−0.23−0.150.110.110.18−0.020.05−0.14−0.140.080.35−0.07
Depression−0.18−0.22−0.23−0.090.05−0.200.08−0.50−0.20−0.320.030.030.230.40
Somatization−0.21−0.20−0.25−0.110.060.070.17−0.39−0.11−0.200.390.190.60 *0.29
SF-36 (health status)0.230.310.260.06−0.04−0.03−0.260.28−0.110.22−0.130.24−0.41−0.19
Physical functioning0.210.200.160.190.020.01−0.220.14−0.32−0.03−0.310.00−0.48−0.06
Physical role functioning0.230.41 *0.290.11−0.07−0.03−0.29−0.02−0.40−0.02−0.110.45−0.280.07
Bodily pain0.250.290.230.26−0.17−0.20−0.230.270.150.47−0.29−0.12−0.44−0.13
General health perception0.240.300.270.16−0.10−0.21−0.310.23−0.050.15−0.230.18−0.58−0.20
Vitality0.030.000.03−0.06−0.040.01−0.060.18−0.24−0.03−0.020.30−0.30−0.18
Social role functioning0.290.310.180.04−0.050.00−0.240.22−0.18−0.220.270.470.25−0.25
Emotional role functioning0.100.220.21−0.140.050.07−0.100.350.250.340.190.010.14−0.32
Mental health0.140.170.21−0.150.070.05−0.160.190.120.250.170.27−0.13−0.12
BIS-brief (impulsiveness)−0.17−0.17−0.07−0.170.060.220.08−0.120.45−0.02−0.05−0.24−0.03−0.01
DII (impulsivity)
Functional0.030.060.090.04−0.070.060.13−0.06−0.31−0.260.180.06−0.210.04
Dysfunctional−0.23−0.11−0.010.06−0.19−0.040.30−0.190.28−0.03−0.22−0.27−0.320.10
QUIP-RS (impulse control)−0.030.230.270.02−0.14−0.23−0.040.130.450.37−0.11−0.080.07−0.17
SPQ (schizotypal traits)−0.12−0.27−0.12−0.330.30−0.040.03−0.280.040.00−0.260.01−0.210.26
Interpersonal−0.14−0.33−0.15−0.250.160.000.09−0.310.070.10−0.110.110.160.29
Cognitive-perceptual0.100.170.12−0.150.49 **0.00−0.240.050.110.09−0.330.10−0.49−0.05
Disorganized−0.19−0.35−0.20−0.380.11−0.130.13−0.34−0.10−0.21−0.19−0.19−0.270.33
Individual-level model parameters were computed as modes of posterior distributions obtained from Equation (A13). Posterior distributions of individual-level model parameters and conditional error probabilities were pooled across session (and medication) effects. Disease duration was defined as the difference between the year of testing and the year of Parkinson’s disease symptom onset in the medical records; UPDRS III = Unified Parkinson’s Disease Rating Scale-part III; LEDD = Levodopa Equivalent Daily Dosage (mg per day) [102]; MoCA = Montreal Cognitive Assessment [67]; WST = Wortschatztest [103]; AES = Apathy Evaluation Scale [104]; BDI-II = Beck Depression Inventory-II [105]; BSI-18 = Brief Symptom Inventory (18-item version) [106]; SF-36 = Short Form Health Survey [107]; BIS-Brief = Barratt Impulsiveness Scale (8-item version) [108]; DII = Dickman Impulsivity Inventory [109]; QUIP-RS = Questionnaire for Impulsive-Compulsive Disorders in Parkinson’s Disease—Rating Scale [110]; SPQ = Schizotypal Personality Questionnaire [111]; * substantial evidence; ** strong evidence; *** extreme evidence.

References

  1. Seer, C.; Lange, F.; Georgiev, D.; Jahanshahi, M.; Kopp, B. Event-related potentials and cognition in Parkinson’s disease: An integrative review. Neurosci. Biobehav. Rev. 2016, 71, 691–714. [Google Scholar] [CrossRef] [PubMed]
  2. Hawkes, C.H.; Del Tredici, K.; Braak, H. A timeline for Parkinson’s disease. Parkinsonism Relat. Disord. 2010, 16, 79–84. [Google Scholar] [CrossRef] [PubMed]
  3. Braak, H.; Del Tredici, K. Nervous system pathology in sporadic Parkinson disease. Neurology 2008, 70, 1916–1925. [Google Scholar] [CrossRef]
  4. Braak, H.; Tredici, K.D.; Rüb, U.; de Vos, R.A.I.; Jansen Steur, E.N.H.; Braak, E. Staging of brain pathology related to sporadic Parkinson’s disease. Neurobiol. Aging 2003, 24, 197–211. [Google Scholar] [CrossRef]
  5. Dirnberger, G.; Jahanshahi, M. Executive dysfunction in Parkinson’s disease: A review. J. Neuropsychol. 2013, 7, 193–224. [Google Scholar] [CrossRef]
  6. Kudlicka, A.; Clare, L.; Hindle, J.V. Executive functions in Parkinson’s disease: Systematic review and meta-analysis. Mov. Disord. 2011, 26, 2305–2315. [Google Scholar] [CrossRef]
  7. Cools, R. Dopaminergic modulation of cognitive function-implications for L-DOPA treatment in Parkinson’s disease. Neurosci. Biobehav. Rev. 2006, 30, 1–23. [Google Scholar] [CrossRef]
  8. Gotham, A.M.; Brown, R.G.; Marsden, C.D. ‘Frontal’ cognitive function in patients with Parkinson’s disease “on” and “off” Levodopa. Brain 1988, 111, 299–321. [Google Scholar] [CrossRef]
  9. Thurm, F.; Schuck, N.W.; Fauser, M.; Doeller, C.F.; Stankevich, Y.; Evens, R.; Riedel, O.; Storch, A.; Lueken, U.; Li, S.-C. Dopamine modulation of spatial navigation memory in Parkinson’s disease. Neurobiol. Aging 2016, 38, 93–103. [Google Scholar] [CrossRef]
  10. Vaillancourt, D.E.; Schonfeld, D.; Kwak, Y.; Bohnen, N.I.; Seidler, R. Dopamine overdose hypothesis: Evidence and clinical implications. Mov. Disord. 2013, 28, 1920–1929. [Google Scholar] [CrossRef] [Green Version]
  11. Cools, R.; D’Esposito, M. Inverted-U–shaped dopamine actions on human working memory and cognitive control. Biol. Psychiatry 2011, 69, 113–125. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Li, S.-C.; Lindenberger, U.; Bäckman, L. Dopaminergic modulation of cognition across the life span. Neurosci. Biobehav. Rev. 2010, 34, 625–630. [Google Scholar] [CrossRef] [PubMed]
  13. Postuma, R.B.; Berg, D.; Stern, M.; Poewe, W.; Olanow, C.W.; Oertel, W.; Obeso, J.; Marek, K.; Litvan, I.; Lang, A.E.; et al. MDS clinical diagnostic criteria for Parkinson’s disease. Mov. Disord. 2015, 30, 1591–1601. [Google Scholar] [CrossRef] [PubMed]
  14. Bologna, M.; Paparella, G.; Fasano, A.; Hallett, M.; Berardelli, A. Evolving concepts on bradykinesia. Brain 2020, 143, 727–750. [Google Scholar] [CrossRef]
  15. Peavy, G.M. Mild cognitive deficits in Parkinson disease: Where there is bradykinesia, there is bradyphrenia. Neurology 2010, 75, 1038–1039. [Google Scholar] [CrossRef]
  16. Pate, D.S.; Margolin, D.I. Cognitive slowing in Parkinson’s and Alzheimer’s patients: Distinguishing bradyphrenia from dementia. Neurology 1994, 44, 669. [Google Scholar] [CrossRef]
  17. Weiss, H.D.; Pontone, G.M. “Pseudo-syndromes” associated with Parkinson disease, dementia, apathy, anxiety, and depression. Neurol. Clin. Pract. 2019, 9, 354–359. [Google Scholar] [CrossRef]
  18. Low, K.A.; Miller, J.; Vierck, E. Response slowing in Parkinson’s disease: A psychophysiological analysis of premotor and motor processes. Brain 2002, 125, 1980–1994. [Google Scholar] [CrossRef] [Green Version]
  19. Rogers, D.; Lees, A.J.; Smith, E.; Trimble, M.; Stern, G.M. Bradyphrenia in Parkinson’s disease and psychomotor retardation in depressive illness: An experimental study. Brain 1987, 110, 761–776. [Google Scholar] [CrossRef]
  20. Vlagsma, T.T.; Koerts, J.; Tucha, O.; Dijkstra, H.T.; Duits, A.A.; van Laar, T.; Spikman, J.M. Mental slowness in patients with Parkinson’s disease: Associations with cognitive functions? J. Clin. Exp. Neuropsychol. 2016, 38, 844–852. [Google Scholar] [CrossRef] [Green Version]
  21. Revonsuo, A.; Portin, R.; Koivikko, L.; Rinne, J.O.; Rinne, U.K. Slowing of information processing in Parkinson’s disease. Brain Cogn. 1993, 21, 87–110. [Google Scholar] [CrossRef] [PubMed]
  22. Kehagia, A.A.; Barker, R.A.; Robbins, T.W. Neuropsychological and clinical heterogeneity of cognitive impairment and dementia in patients with Parkinson’s disease. Lancet Neurol. 2010, 9, 1200–1213. [Google Scholar] [CrossRef]
  23. Rogers, D. Bradyphrenia in parkinsonism: A historical review. Psychol. Med. 1986, 16, 257–265. [Google Scholar] [CrossRef] [PubMed]
  24. Rustamov, N.; Rodriguez-Raecke, R.; Timm, L.; Agrawal, D.; Dressler, D.; Schrader, C.; Tacik, P.; Wegner, F.; Dengler, R.; Wittfoth, M.; et al. Absence of congruency sequence effects reveals neurocognitive inflexibility in Parkinson’s disease. Neuropsychologia 2013, 51, 2976–2987. [Google Scholar] [CrossRef] [PubMed]
  25. Rustamov, N.; Rodriguez-Raecke, R.; Timm, L.; Agrawal, D.; Dressler, D.; Schrader, C.; Tacik, P.; Wegner, F.; Dengler, R.; Wittfoth, M.; et al. Attention shifting in Parkinson’s disease: An analysis of behavioral and cortical responses. Neuropsychology 2014, 28, 929–944. [Google Scholar] [CrossRef] [PubMed]
  26. Aarsland, D.; Bronnick, K.; Williams-Gray, C.; Weintraub, D.; Marder, K.; Kulisevsky, J.; Burn, D.; Barone, P.; Pagonabarraga, J.; Allcock, L.; et al. Mild cognitive impairment in Parkinson disease: A multicenter pooled analysis. Neurology 2010, 75, 1062–1069. [Google Scholar] [CrossRef]
  27. Lange, F.; Seer, C.; Kopp, B. Cognitive flexibility in neurological disorders: Cognitive components and event-related potentials. Neurosci. Biobehav. Rev. 2017, 83, 496–507. [Google Scholar] [CrossRef]
  28. Lange, F.; Brückner, C.; Knebel, A.; Seer, C.; Kopp, B. Executive dysfunction in Parkinson’s disease: A meta-analysis on the Wisconsin Card Sorting Test literature. Neurosci. Biobehav. Rev. 2018, 93, 38–56. [Google Scholar] [CrossRef]
  29. Beeldman, E.; Raaphorst, J.; Twennaar, M.; de Visser, M.; Schmand, B.A.; de Haan, R.J. The cognitive profile of ALS: A systematic review and meta-analysis update. J. Neurol. Neurosurg. Psychiatry 2016, 87, 611–619. [Google Scholar] [CrossRef] [Green Version]
  30. Demakis, G.J. A meta-analytic review of the sensitivity of the Wisconsin Card Sorting Test to frontal and lateralized frontal brain damage. Neuropsychology 2003, 17, 255–264. [Google Scholar] [CrossRef]
  31. Lange, F.; Seer, C.; Müller-Vahl, K.; Kopp, B. Cognitive flexibility and its electrophysiological correlates in Gilles de la Tourette syndrome. Dev. Cogn. Neurosci. 2017, 27, 78–90. [Google Scholar] [CrossRef] [PubMed]
  32. Lange, F.; Seer, C.; Salchow, C.; Dengler, R.; Dressler, D.; Kopp, B. Meta-analytical and electrophysiological evidence for executive dysfunction in primary dystonia. Cortex 2016, 82, 133–146. [Google Scholar] [CrossRef] [PubMed]
  33. Lange, F.; Vogts, M.-B.; Seer, C.; Fürkötter, S.; Abdulla, S.; Dengler, R.; Kopp, B.; Petri, S. Impaired set-shifting in amyotrophic lateral sclerosis: An event-related potential study of executive function. Neuropsychology 2016, 30, 120–134. [Google Scholar] [CrossRef] [PubMed]
  34. Nyhus, E.; Barceló, F. The Wisconsin Card Sorting Test and the cognitive assessment of prefrontal executive functions: A critical update. Brain Cogn. 2009, 71, 437–451. [Google Scholar] [CrossRef]
  35. Roberts, M.E.; Tchanturia, K.; Stahl, D.; Southgate, L.; Treasure, J. A systematic review and meta-analysis of set-shifting ability in eating disorders. Psychol. Med. 2007, 37, 1075–1084. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Romine, C. Wisconsin Card Sorting Test with children: A meta-analytic study of sensitivity and specificity. Arch. Clin. Neuropsychol. 2004, 19, 1027–1041. [Google Scholar] [CrossRef]
  37. Shin, N.Y.; Lee, T.Y.; Kim, E.; Kwon, J.S. Cognitive functioning in obsessive-compulsive disorder: A meta-analysis. Psychol. Med. 2014, 44, 1121–1130. [Google Scholar] [CrossRef]
  38. Snyder, H.R. Major depressive disorder is associated with broad impairments on neuropsychological measures of executive function: A meta-analysis and review. Psychol. Bull. 2013, 139, 81–132. [Google Scholar] [CrossRef] [Green Version]
  39. Grant, D.A.; Berg, E.A. A behavioral analysis of degree of reinforcement and ease of shifting to new responses in a Weigl-type card-sorting problem. J. Exp. Psychol. 1948, 38, 404–411. [Google Scholar] [CrossRef]
  40. Berg, E.A. A simple objective technique for measuring flexibility in thinking. J. Gen. Psychol. 1948, 39, 15–22. [Google Scholar] [CrossRef]
  41. Heaton, R.K.; Chelune, G.J.; Talley, J.L.; Kay, G.G.; Curtiss, G. Wisconsin Card Sorting Test Manual: Revised and Expanded; Psychological Assessment Resources Inc.: Odessa, FL, USA, 1993. [Google Scholar]
  42. Bishara, A.J.; Kruschke, J.K.; Stout, J.C.; Bechara, A.; McCabe, D.P.; Busemeyer, J.R. Sequential learning models for the Wisconsin card sort task: Assessing processes in substance dependent individuals. J. Math. Psychol. 2010, 54, 5–13. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Kopp, B.; Steinke, A.; Bertram, M.; Skripuletz, T.; Lange, F. Multiple levels of control processes for Wisconsin Card Sorts: An observational study. Brain Sci. 2019, 9, 141. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  44. Steinke, A.; Lange, F.; Kopp, B. Parallel model-based and model-free reinforcement learning for card sorting performance. 2020. manuscript submitted for publication. [Google Scholar]
  45. Barceló, F. The Madrid card sorting test (MCST): A task switching paradigm to study executive attention with event-related potentials. Brain Res. Protoc. 2003, 11, 27–37. [Google Scholar] [CrossRef]
  46. Lange, F.; Dewitte, S. Cognitive flexibility and pro-environmental behaviour: A multimethod approach. Eur. J. Pers. 2019, 56, 46–54. [Google Scholar] [CrossRef]
  47. Lange, F.; Kröger, B.; Steinke, A.; Seer, C.; Dengler, R.; Kopp, B. Decomposing card-sorting performance: Effects of working memory load and age-related changes. Neuropsychology 2016, 30, 579–590. [Google Scholar] [CrossRef]
  48. Steinke, A.; Lange, F.; Seer, C.; Kopp, B. Toward a computational cognitive neuropsychology of Wisconsin card sorts: A showcase study in Parkinson’s disease. Comput. Brain Behav. 2018, 1, 137–150. [Google Scholar] [CrossRef]
  49. Gläscher, J.; Adolphs, R.; Tranel, D. Model-based lesion mapping of cognitive control using the Wisconsin Card Sorting Test. Nat. Commun. 2019, 10, 1–2. [Google Scholar] [CrossRef] [Green Version]
  50. Beste, C.; Adelhöfer, N.; Gohil, K.; Passow, S.; Roessner, V.; Li, S.-C. Dopamine modulates the efficiency of sensory evidence accumulation during perceptual decision making. Int. J. Neuropsychopharmacol. 2018, 21, 649–655. [Google Scholar] [CrossRef]
  51. Caso, A.; Cooper, R.P. A neurally plausible schema-theoretic approach to modelling cognitive dysfunction and neurophysiological markers in Parkinson’s disease. Neuropsychologia 2020, 140, 107359. [Google Scholar] [CrossRef]
  52. Browning, M.; Carter, C.; Chatham, C.H.; Ouden, H.D.; Gillan, C.; Baker, J.T.; Paulus, M.P. Realizing the clinical potential of computational psychiatry: Report from the Banbury Center Meeting, February 2019. 2019. Available online: https://www.doi.org/10.31234/osf.io/5qbxp (accessed on 1 March 2020).
  53. McCoy, B.; Jahfari, S.; Engels, G.; Knapen, T.; Theeuwes, J. Dopaminergic medication reduces striatal sensitivity to negative outcomes in Parkinson’s disease. Brain 2019, 142, 3605–3620. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  54. Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction; MIT Press: Cambridge, MA, USA, 1998. [Google Scholar]
  55. Niv, Y. Reinforcement learning in the brain. J. Math. Psychol. 2009, 53, 139–154. [Google Scholar] [CrossRef] [Green Version]
  56. Silvetti, M.; Verguts, T. Reinforcement learning, high-level cognition, and the human brain. In Neuroimaging—Cognitive and Clinical Neuroscience; Bright, P., Ed.; InTech: Rijeka, Croatia, 2012; pp. 283–296. [Google Scholar]
  57. Gerraty, R.T.; Davidow, J.Y.; Foerde, K.; Galvan, A.; Bassett, D.S.; Shohamy, D. Dynamic flexibility in striatal-cortical circuits supports reinforcement learning. J. Neurosci. 2018, 38, 2442–2453. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  58. Fontanesi, L.; Gluth, S.; Spektor, M.S.; Rieskamp, J. A reinforcement learning diffusion decision model for value-based decisions. Psychon. Bull. Rev. 2019, 26, 1099–1121. [Google Scholar] [CrossRef]
  59. Fontanesi, L.; Palminteri, S.; Lebreton, M. Decomposing the effects of context valence and feedback information on speed and accuracy during reinforcement learning: A meta-analytical approach using diffusion decision modeling. Cogn. Affect. Behav. Neurosci. 2019, 19, 490–502. [Google Scholar] [CrossRef]
  60. Caligiore, D.; Arbib, M.A.; Miall, R.C.; Baldassarre, G. The super-learning hypothesis: Integrating learning processes across cortex, cerebellum and basal ganglia. Neurosci. Biobehav. Rev. 2019, 100, 19–34. [Google Scholar] [CrossRef]
  61. Frank, M.J.; Seeberger, L.C.; O’Reilly, R.C. By carrot or by stick: Cognitive reinforcement learning in Parkinsonism. Science 2004, 306, 1940–1943. [Google Scholar] [CrossRef] [Green Version]
  62. Schultz, W.; Dayan, P.; Montague, P.R. A neural substrate of prediction and reward. Science 1997, 275, 1593–1599. [Google Scholar] [CrossRef] [Green Version]
  63. Schultz, W. Reward prediction error. Curr. Biol. 2017, 27, 369–371. [Google Scholar] [CrossRef] [Green Version]
  64. Erev, I.; Roth, A.E. Predicting how people play games: Reinforcement learning in experimental games with unique, mixed strategy equilibria. Am. Econ. Rev. 1998, 88, 848–881. [Google Scholar]
  65. Steingroever, H.; Wetzels, R.; Wagenmakers, E.J. Validating the PVL-Delta model for the Iowa gambling task. Front. Psychol. 2013, 4, 1–17. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  66. Lange, F.; Seer, C.; Loens, S.; Wegner, F.; Schrader, C.; Dressler, D.; Dengler, R.; Kopp, B. Neural mechanisms underlying cognitive inflexibility in Parkinson’s disease. Neuropsychologia 2016, 93, 142–150. [Google Scholar] [CrossRef] [PubMed]
  67. Nasreddine, Z.S.; Phillips, N.A.; Bédirian, V.; Charbonneau, S.; Whitehead, V.; Collin, I.; Cummings, J.L.; Chertkow, H. The Montreal Cognitive Assessment, MoCA: A brief screening tool for mild cognitive impairment. J. Am. Geriatr. Soc. 2005, 53, 695–699. [Google Scholar] [CrossRef] [PubMed]
  68. Kopp, B.; Lange, F. Electrophysiological indicators of surprise and entropy in dynamic task-switching environments. Front. Hum. Neurosci. 2013, 7, 300. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  69. Altmann, E.M. Advance preparation in task switching. Psychol. Sci. 2004, 15, 616–622. [Google Scholar] [CrossRef] [PubMed]
  70. JASP Team. JASP, Version 0.11.1. 2019. Available online: https://www.jasp-stats.org (accessed on 7 January 2020).
  71. Van den Bergh, D.; van Doorn, J.; Marsman, M.; Draws, T.; van Kesteren, E.-J.; Derks, K.; Dablander, F.; Gronau, Q.F.; Kucharský, Š.; Raj, A.; et al. A Tutorial on Conducting and Interpreting a Bayesian ANOVA in JASP. 2019. Available online: https://www.psyarxiv.com/spreb (accessed on 1 February 2020).
  72. Daw, N.D.; Gershman, S.J.; Seymour, B.; Dayan, P.; Dolan, R.J. Model-based influences on humans’ choices and striatal prediction errors. Neuron 2011, 69, 1204–1215. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  73. Frank, M.J.; Moustafa, A.A.; Haughey, H.M.; Curran, T.; Hutchison, K.E. Genetic triple dissociation reveals multiple roles for dopamine in reinforcement learning. Proc. Natl. Acad. Sci. USA 2007, 104, 16311–16316. [Google Scholar] [CrossRef] [Green Version]
  74. Ahn, W.-Y.; Haines, N.; Zhang, L. Revealing neurocomputational mechanisms of reinforcement learning and decision-making with the hBayesDM package. Comput. Psychiatry 2017, 1, 24–57. [Google Scholar] [CrossRef]
  75. Palminteri, S.; Lebreton, M.; Worbe, Y.; Grabli, D.; Hartmann, A.; Pessiglione, M. Pharmacological modulation of subliminal learning in Parkinson’s and Tourette’s syndromes. Proc. Natl. Acad. Sci. USA 2009, 106, 19179–19184. [Google Scholar] [CrossRef] [Green Version]
  76. Haines, N.; Vassileva, J.; Ahn, W.-Y. The Outcome-Representation Learning model: A novel reinforcement learning model of the Iowa Gambling Task. Cogn. Sci. 2018, 42, 2534–2561. [Google Scholar] [CrossRef] [Green Version]
  77. Steingroever, H.; Wetzels, R.; Wagenmakers, E.J. Absolute performance of reinforcement-learning models for the Iowa Gambling Task. Decision 2014, 1, 161–183. [Google Scholar] [CrossRef]
  78. Pedersen, M.L.; Frank, M.J.; Biele, G. The drift diffusion model as the choice rule in reinforcement learning. Psychon. Bull. Rev. 2017, 24, 1234–1251. [Google Scholar] [CrossRef] [PubMed]
  79. Wagenmakers, E.-J.; Wetzels, R.; Borsboom, D.; van der Maas, H.L.J. Why psychologists must change the way they analyze their data: The case of psi: Comment on Bem (2011). J. Pers. Soc. Psychol. 2011, 100, 426–432. [Google Scholar] [CrossRef] [PubMed]
  80. Willemssen, R.; Falkenstein, M.; Schwarz, M.; Müller, T.; Beste, C. Effects of aging, Parkinson’s disease, and dopaminergic medication on response selection and control. Neurobiol. Aging 2011, 32, 327–335. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  81. Knowlton, B.J.; Mangels, J.A.; Squire, L.R. A neostriatal habit learning system in humans. Science 1996, 273, 1399–1402. [Google Scholar] [CrossRef] [Green Version]
  82. Yin, H.H.; Knowlton, B.J. The role of the basal ganglia in habit formation. Nat. Rev. Neurosci. 2006, 7, 464–476. [Google Scholar] [CrossRef]
  83. Shohamy, D.; Myers, C.E.; Kalanithi, J.; Gluck, M.A. Basal ganglia and dopamine contributions to probabilistic category learning. Neurosci. Biobehav. Rev. 2008, 32, 219–236. [Google Scholar] [CrossRef] [Green Version]
  84. Floresco, S.B.; Magyar, O. Mesocortical dopamine modulation of executive functions: Beyond working memory. Psychopharmacology 2006, 188, 567–585. [Google Scholar] [CrossRef]
  85. Müller, J.; Dreisbach, G.; Goschke, T.; Hensch, T.; Lesch, K.-P.; Brocke, B. Dopamine and cognitive control: The prospect of monetary gains influences the balance between flexibility and stability in a set-shifting paradigm. Eur. J. Neurosci. 2007, 26, 3661–3668. [Google Scholar] [CrossRef]
  86. Goschke, T.; Bolte, A. Emotional modulation of control dilemmas: The role of positive affect, reward, and dopamine in cognitive stability and flexibility. Neuropsychologia 2014, 62, 403–423. [Google Scholar] [CrossRef]
  87. Shohamy, D.; Adcock, R.A. Dopamine and adaptive memory. Trends Cogn. Sci. 2010, 14, 464–472. [Google Scholar] [CrossRef] [PubMed]
  88. Aarts, E.; Nusselein, A.A.M.; Smittenaar, P.; Helmich, R.C.; Bloem, B.R.; Cools, R. Greater striatal responses to medication in Parkinson׳s disease are associated with better task-switching but worse reward performance. Neuropsychologia 2014, 62, 390–397. [Google Scholar] [CrossRef] [PubMed]
  89. Beste, C.; Willemssen, R.; Saft, C.; Falkenstein, M. Response inhibition subprocesses and dopaminergic pathways: Basal ganglia disease effects. Neuropsychologia 2010, 48, 366–373. [Google Scholar] [CrossRef] [PubMed]
  90. Barceló, F.; Knight, R.T. Both random and perseverative errors underlie WCST deficits in prefrontal patients. Neuropsychologia 2002, 40, 349–356. [Google Scholar] [CrossRef]
  91. Albin, R.L.; Leventhal, D.K. The missing, the short, and the long: Levodopa responses and dopamine actions. Ann. Neurol. 2017, 82, 4–19. [Google Scholar] [CrossRef]
  92. Beste, C.; Stock, A.-K.; Epplen, J.T.; Arning, L. Dissociable electrophysiological subprocesses during response inhibition are differentially modulated by dopamine D1 and D2 receptors. Eur. Neuropsychopharmacol. 2016, 26, 1029–1036. [Google Scholar] [CrossRef]
  93. Eisenegger, C.; Naef, M.; Linssen, A.; Clark, L.; Gandamaneni, P.K.; Müller, U.; Robbins, T.W. Role of dopamine D2 receptors in human reinforcement learning. Neuropsychopharmacology 2014, 39, 2366–2375. [Google Scholar] [CrossRef]
  94. Bensmann, W.; Zink, N.; Arning, L.; Beste, C.; Stock, A.-K. Dopamine D1, but not D2, signaling protects mental representations from distracting bottom-up influences. Neuroimage 2020, 204, 116243. [Google Scholar] [CrossRef]
  95. Freitas, S.; Simões, M.R.; Alves, L.; Santana, I. Montreal Cognitive Assessment: Validation study for Mild Cognitive Impairment and Alzheimer Disease. Alzheimer Dis. Assoc. Disord. 2013, 27, 37–43. [Google Scholar] [CrossRef]
  96. Litvan, I.; Goldman, J.G.; Tröster, A.I.; Schmand, B.A.; Weintraub, D.; Petersen, R.C.; Mollenhauer, B.; Adler, C.H.; Marder, K.; Williams-Gray, C.H.; et al. Diagnostic criteria for mild cognitive impairment in Parkinson’s disease: Movement Disorder Society Task Force guidelines. Mov. Disord. 2012, 27, 349–356. [Google Scholar] [CrossRef] [Green Version]
  97. Palminteri, S.; Wyart, V.; Koechlin, E. The importance of falsification in computational cognitive modeling. Trends Cogn. Sci. 2017, 21, 425–433. [Google Scholar] [CrossRef] [PubMed]
  98. Schretlen, D.J. Modified Wisconsin Card Sorting Test (M-WCST): Professional Manual; Psychological Assessment Resources Inc.: Lutz, FL, USA, 2010. [Google Scholar]
  99. Kongs, S.K.; Thompson, L.L.; Iverson, G.L.; Heaton, R.K. WCST-64: Wisconsin Card Sorting Test-64 Card Version: Professional Manual; Psychological Assessment Resources Inc.: Lutz, FL, USA, 2000. [Google Scholar]
  100. Nelson, H.E. A modified card sorting test sensitive to frontal lobe defects. Cortex 1976, 12, 313–324. [Google Scholar] [CrossRef]
  101. Kruschke, J.K. Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan; Academic Press: New York, NY, USA, 2015. [Google Scholar]
  102. Tomlinson, C.L.; Stowe, R.; Patel, S.; Rick, C.; Gray, R.; Clarke, C.E. Systematic review of levodopa dose equivalency reporting in Parkinson’s disease. Mov. Disord. 2010, 25, 2649–2653. [Google Scholar] [CrossRef] [PubMed]
  103. Schmidt, K.-H.; Metzler, P. Wortschatztest: WST.; Beltz Test: Göttingen, Germany, 1992. [Google Scholar]
  104. Marin, R.S.; Biedrzycki, R.C.; Firinciogullari, S. Reliability and validity of the apathy evaluation scale. Psychiatry Res. 1991, 38, 143–162. [Google Scholar] [CrossRef]
  105. Beck, A.T.; Steer, R.A.; Ball, R.; Ranieri, W.F. Comparison of Beck Depression Inventories-IA and-II in psychiatric outpatients. J. Pers. Assess. 1996, 67, 588–597. [Google Scholar] [CrossRef]
  106. Derogatis, L.R. Brief Symptom Inventory 18: Administration, Scoring and Procedures Manual; NCS Pearson Inc.: Minneapolis, MN, USA, 2001. [Google Scholar]
  107. Ware, J.E.; Sherbourne, C.D. The MOS 36-item short-form health survey (SF-36). I. Conceptual framework and item selection. Med. Care 1992, 30, 473–483. [Google Scholar] [CrossRef]
  108. Steinberg, L.; Sharp, C.; Stanford, M.S.; Tharp, A.T. New tricks for an old measure: The development of the Barratt Impulsiveness Scale–Brief (BIS-Brief). Psychol. Assess. 2013, 25, 216–226. [Google Scholar] [CrossRef]
  109. Dickman, S.J. Functional and dysfunctional impulsivity: Personality and cognitive correlates. J. Pers. Soc. Psychol. 1990, 58, 95–102. [Google Scholar] [CrossRef]
  110. Weintraub, D.; Mamikonyan, E.; Papay, K.; Shea, J.A.; Xie, S.X.; Siderowf, A. Questionnaire for impulsive-compulsive disorders in Parkinson’s Disease-Rating Scale. Mov. Disord. 2012, 27, 242–247. [Google Scholar] [CrossRef] [Green Version]
  111. Raine, A.; Benishay, D. The SPQ-B: A brief screening instrument for schizotypal personality disorder. J. Pers. Disord. 1995, 9, 346–355. [Google Scholar] [CrossRef]
  112. Ahn, W.-Y.; Krawitz, A.; Kim, W.; Busemeyer, J.R.; Brown, J.W. A model-based fMRI analysis with hierarchical Bayesian parameter estimation. J. Neurosci. Psychol. Econ. 2011, 4, 95–110. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  113. Lee, M.D. How cognitive modeling can benefit from hierarchical Bayesian models. J. Math. Psychol. 2011, 55, 1–7. [Google Scholar] [CrossRef]
  114. Lee, M.D.; Wagenmakers, E.-J. Bayesian Cognitive Modeling: A Practical Course; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
  115. Rouder, J.N.; Lu, J. An introduction to Bayesian hierarchical models with an application in the theory of signal detection. Psychon. Bull. Rev. 2005, 12, 573–604. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  116. Shiffrin, R.; Lee, M.D.; Kim, W.; Wagenmakers, E.-J. A survey of model evaluation approaches with a tutorial on hierarchical Bayesian methods. Cogn. Sci. 2008, 32, 1248–1284. [Google Scholar] [CrossRef]
  117. Stan Development Team. RStan: The R interface to Stan 2018. Available online: https://www.mc-stan.org (accessed on 1 November 2019).
  118. Betancourt, M.J.; Girolami, M. Hamiltonian Monte Carlo for hierarchical models. In Current Trends in Bayesian Methodology with Applications; Upadhyay, S.K., Umesh, S., Dey, D.K., Loganathan, A., Eds.; CRC Press: Boca Raton, FL, USA, 2013; pp. 79–97. [Google Scholar]
  119. Sharp, M.E.; Foerde, K.; Daw, N.D.; Shohamy, D. Dopamine selectively remediates “model-based” reward learning: A computational approach. Brain 2016, 139, 355–364. [Google Scholar] [CrossRef] [Green Version]
  120. Gelman, A.; Rubin, D.B. Inference from iterative simulation using multiple sequences. Stat. Sci. 1992, 7, 457–472. [Google Scholar] [CrossRef]
  121. Marsman, M.; Wagenmakers, E.-J. Three insights from a Bayesian interpretation of the one-sided P value. Educ. Psychol. Meas. 2017, 77, 529–539. [Google Scholar] [CrossRef]
Figure 1. An exemplary trial sequence on the computerized Wisconsin Card Sorting Test (WCST) [27,45,46,47,48]. On Trial t, a stimulus card (one green cross) could be sorted by the color category (inner left key card, response 2), the number category (far left key card, response 1), or the shape category (inner right key card, response 3). In the current example, the shape category was applied, as indicated by observing response 3. A positive feedback (i.e., the visually presented word “REPEAT”) indicated that this sort was correct, implying that the shape category should be repeated on the upcoming trials. However, on Trial t+1, the number category was applied, as indicated by observing response 3. Erroneous switches of the applied category after positive feedback are referred to as set-loss errors. A subsequently presented negative feedback stimulus (i.e., the visually presented word “SWITCH”) indicated that this sort was incorrect, implying that a category switch was required. On Trial t+2 of the current example, the number category was repeated, as indicated by observing response 2. Erroneous repetitions of categories following negative feedback are referred to as perseveration errors.
Figure 2. Conditional error probabilities. Large circles indicate mean conditional error probabilities. Error bars indicate the 95% credibility interval. Small circles indicate individual conditional error probabilities.
Figure 3. Model parameters for cognitive and sensorimotor learning. Large circles indicate medians of group-level posterior distributions (derived by Equation (A14)). Error bars indicate lower and upper quartiles of group-level posterior distributions. Small circles indicate medians of individual-level posterior distributions (derived by Equation (A13)); a.u. = arbitrary units; * substantial evidence for the presence of an effect; ** strong evidence for the presence of an effect.
Figure 4. Exemplary effects of between-group variations of model parameters on trial-to-trial dynamics of feedback prediction values. (a) The representative trial sequence on the cWCST, as depicted in Figure 1. Panels (b), (c), and (d) give feedback prediction values across seven trials, the first three of which are shown in (a). Note that on Trials 4 to 7, neither the shape category was applied nor was response 3 pressed. Panel (b) shows cognitive-learning feedback prediction values for the application of the shape category. The received positive feedback on Trial 1 causes an increase in feedback prediction values. With high configurations of the cognitive retention rate (i.e., γ_C), such as seen in Parkinson’s Disease (PD) patients, more information from previous trials (i.e., feedback prediction values) was retained. Panel (c) shows sensorimotor-learning feedback prediction values for the execution of response 3. The execution of response 3 was followed by a positive feedback on Trial 1. However, as the sensorimotor learning rate for positive feedback was close to zero, no updating of feedback prediction values occurred. On Trial 2, a negative feedback followed the execution of response 3, causing a decrease in feedback prediction values. With low values of the sensorimotor retention rate (i.e., γ_S), such as seen in PD patients, less sensorimotor-learning information from previous trials was retained. Panel (d) shows the effect of a low configuration of the cognitive learning rate for positive feedback (i.e., α_C^+), such as seen in PD patients ‘on’ dopaminergic medication. Low learning rate configurations decrease learning from positive feedback, which is indicated by a reduced amplitude of feedback prediction values. All presented effects of model parameters on trial-to-trial dynamics of feedback prediction values were computed by varying exclusively the parameter of interest at arbitrary values while holding all other model parameters constant.
Table 1. Analysis of effects of error type and disease on conditional error probabilities.
Effect                 p(Inclusion)    p(Inclusion|Data)    BFinclusion
Error Type             0.600           >0.999               >1000 ***
Disease                0.600           0.404                0.452
Error Type × Disease   0.200           0.104                0.465
*** extreme evidence.
Table 2. Analysis of effects of error type and medication on conditional error probabilities.
Effect                    p(Inclusion)    p(Inclusion|Data)    BFinclusion
Error Type                0.600           >0.999               >1000 ***
Medication                0.600           0.270                0.247
Error Type × Medication   0.200           0.081                0.351
*** extreme evidence.
Table 3. Bayes factors for effects of disease and medication on model parameters.
Parameter    Definition                                                 Disease Effect (BF)    Medication Effect (BF)
α_C^+        cognitive learning rate following positive feedback        1.519                  0.282 *
α_C^-        cognitive learning rate following negative feedback        0.940                  2.676
γ_C          cognitive retention rate                                   0.095 **               3.323 *
α_S^+        sensorimotor learning rate following positive feedback     0.073 **               0.077 **
α_S^-        sensorimotor learning rate following negative feedback     1.137                  0.521
γ_S          sensorimotor retention rate                                4.725 *                1.075
τ            inverse temperature parameter                              0.551                  0.720
* substantial evidence; ** strong evidence.
