Veracity Judgments Based on Complications: A Training Experiment
Abstract
1. Introduction
1.1. Complications as a Verbal Veracity Cue
1.2. The Present Experiment
1.3. Hypotheses
2. Materials and Methods
2.1. Participants
2.2. Design
2.3. Stimuli
2.4. Procedure
2.5. Coding
3. Results
3.1. Sensitivity
3.2. Response Bias
3.3. Highlighting Task
3.3.1. Total Number of Highlighted Complications
3.3.2. Complications Discriminability
3.4. End-of-Study Questionnaire
4. Discussion
Limitations and Future Directions
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
| | Untrained M, SD [95% CI] | Trained M, SD [95% CI] | Statistics | p | BF10 | Cohen's d [95% CI] |
|---|---|---|---|---|---|---|
| Sensitivity | −0.01, 1.26 [−0.37, 0.35] | 0.02, 1.53 [−0.49, 0.52] | −0.09 | 0.930 | 0.23 | −0.02 [−0.44, 0.41] |
| Response bias | −0.04, 0.84 [−0.28, 0.20] | 0.05, 0.56 [−0.13, 0.24] | −0.64 | 0.521 | 0.27 | −0.14 [−0.56, 0.29] |
| Total highlighted complications | 9.12, 7.27 [7.04, 11.21] | 39.26, 27.73 [30.15, 48.38] | 53.21 | <0.001 | 1.572 × 10^7 | −1.58 [−2.06, −1.09] |
| Average complications hit rate | 0.21, 0.14 [0.17, 0.25] | 0.29, 0.18 [0.23, 0.35] | 5.48 | 0.022 | 2.48 | −0.51 [−0.94, −0.07] |
| | Untrained M, SD [95% CI] | Trained M, SD [95% CI] |
|---|---|---|
| Veracity Judgments | | |
| Hits | 2.35, 1.36 [1.96, 2.74] | 2.24, 1.28 [1.82, 2.66] |
| False alarms | 2.10, 1.25 [1.74, 2.46] | 1.97, 1.08 [1.62, 2.33] |
| Misses | 2.65, 1.36 [2.26, 3.04] | 2.76, 1.28 [2.34, 3.19] |
| Correct rejections | 2.90, 1.25 [2.54, 3.26] | 3.03, 1.08 [2.67, 3.38] |
| Complications | | |
| Hits | 9.12, 7.27 [7.04, 11.21] | 14.37, 9.16 [11.36, 17.38] |
| False alarms | | 24.74, 21.78 [17.58, 31.89] |
| Misses: Complications that are not coded | 27.65, 21.45 [21.49, 33.81] | 32.79, 15.54 [27.68, 37.90] |
| Misses: Complications that are coded as details | | 3.18, 3.43 [2.06, 4.31] |
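The sensitivity and response bias values reported in the first table are signal detection measures derived from each judge's veracity-judgment hits, misses, false alarms, and correct rejections (second table). The snippet below is a minimal illustrative sketch, not the authors' analysis code: it computes d′ (sensitivity) and c (response bias) under the standard equal-variance Gaussian model with a log-linear correction. The helper name `sdt_measures` and the choice of correction are assumptions, and the published values may have been obtained with different software or a different correction.

```python
# Minimal sketch: signal detection measures (sensitivity d' and response bias c)
# from hit/miss/false-alarm/correct-rejection counts, as in the tables above.
# Assumes the equal-variance Gaussian model and a log-linear correction;
# the published analysis may have used a different correction or software.
from scipy.stats import norm


def sdt_measures(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction: add 0.5 to each cell so hit or false-alarm
    # rates of exactly 0 or 1 do not produce infinite z-scores.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)         # sensitivity
    c = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))      # response bias
    return d_prime, c


# Illustrative call using the untrained-group mean counts from the table above
# (the cell means sum to five truthful and five deceptive transcripts per judge).
print(sdt_measures(hits=2.35, misses=2.65, false_alarms=2.10, correct_rejections=2.90))
```

Note that applying the formulas to group-mean counts, as in the example call, is only illustrative; the reported means were presumably computed per participant and then averaged.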
| | Truth M, SD [95% CI] | Lie M, SD [95% CI] | F | p | BF10 | Cohen's d [95% CI] |
|---|---|---|---|---|---|---|
| Total highlighted complications | 14.61, 19.09 [10.54, 18.68] | 7.70, 8.03 [5.99, 9.41] | 19.96 | <0.001 | 132.32 | 0.47 [0.17, 0.77] |
| Complications hit rate | 0.24, 0.17 [0.20, 0.27] | 0.26, 0.21 [0.21, 0.30] | 0.58 | 0.450 | 0.23 | −0.10 [−0.40, 0.19] |
| | Truth M, SD [95% CI] | Lie M, SD [95% CI] | F | p | BF10 | Cohen's d [95% CI] |
|---|---|---|---|---|---|---|
| Trained | | | | | | |
| Total highlighted complications | 26.08, 23.84 [18.24, 33.91] | 13.18, 9.26 [10.14, 16.23] | 11.72 | 0.002 | 45.09 | 0.71 [0.24, 1.18] |
| Complications hit rate | 0.30, 0.17 [0.24, 0.35] | 0.28, 0.26 [0.20, 0.37] | 0.12 | 0.731 | 0.25 | 0.09 [−0.36, 0.55] |
| Untrained | | | | | | |
| Total highlighted complications | 5.71, 5.45 [4.15, 7.28] | 3.45, 2.68 [2.68, 4.22] | 11.98 | 0.001 | 28.67 | 0.53 [0.12, 0.93] |
| Complications hit rate | 0.19, 0.15 [0.14, 0.23] | 0.23, 0.17 [0.19, 0.28] | 4.65 | 0.036 | 1.34 | −0.25 [−0.65, 0.15] |
| Questionnaire Item | Untrained M (SD) | Untrained 95% CI | Trained M (SD) | Trained 95% CI | t | p | BF10 | Cohen's d [95% CI] |
|---|---|---|---|---|---|---|---|---|
| To what extent do you feel confident you judged the transcripts accurately? | 4.41 (1.26) | 4.05, 4.77 | 4.21 (1.36) | 3.76, 4.66 | 0.70 | 0.489 | 0.26 | 0.15 [−0.27, 0.58] |
| The highlighting task: | | | | | | | | |
| Was distracting | 2.67 (1.75) | 2.17, 3.18 | 3.16 (1.97) | 2.51, 3.80 | −1.20 | 0.236 | 0.81 | −0.26 [−0.69, 0.16] |
| Was difficult | 3.90 (1.62) | 3.43, 4.36 | 4.24 (1.28) | 3.82, 4.66 | −1.09 | 0.280 | 0.34 | −0.23 [−0.65, 0.20] |
| Required mental effort | 4.69 (1.72) | 4.20, 5.19 | 5.74 (1.18) | 5.35, 6.12 | −3.35 | 0.001 | 4.26 | −0.69 [−1.13, −0.25] |
| Did you feel rushed when judging the transcripts? | 1.84 (1.20) | 1.49, 2.18 | 2.16 (1.67) | 1.61, 2.71 | −1.00 | 0.320 | 0.47 | −0.23 [−0.65, 0.20] |
| Did you feel any sort of anxiety while judging the transcripts? | 2.39 (1.50) | 1.96, 2.82 | 2.79 (1.86) | 2.18, 3.40 | −1.09 | 0.282 | 2.53 | −0.24 [−0.67, 0.19] |
| Was there any kind of distractor around while you were judging the transcripts? | 1.33 (0.85) | 1.08, 1.57 | 1.92 (1.28) | 1.50, 2.34 | −2.47 | 0.016 | 0.62 | −0.56 [−0.99, −0.13] |
| To what extent did you look at complications when making your judgments? | | | 4.89 (1.53) | 4.37, 5.41 | 19.22 | <0.001 | 2.121 × 10^16 | |
| To what extent do you think that looking at complications enhanced your judgment accuracy? | | | 4.72 (1.30) | 4.28, 5.16 | 21.78 | <0.001 | 6.230 × 10^19 | |
| To what extent did you find it difficult to look for complications in the transcripts? | | | 4.44 (1.30) | 4.01, 4.88 | 20.56 | <0.001 | 6.920 × 10^17 | |
| To what extent do you understand what complications mean? | | | 5.08 (1.05) | 4.73, 5.44 | 28.99 | <0.001 | 1.602 × 10^22 | |
Cues | Trained | Untrained |
---|---|---|
Detailedness | 55% | 41% |
Logical/Common knowledge details | 24% | 37% |
Language | 13% | 33% |
Keywords or specific details | 13% | 29% |
Hesitation | 21% | 20% |
Consistency | 3% | 20% |
Emotions | 3% | 16% |
Complications | 21% | 0% |
Over-explanations | 16% | 4% |
Verbatim account | 3% | 12% |
Inconsistency | 13% | 10% |
Tone | 13% | 4% |
Pauses | 3% | 10% |
Other | 5% | 8% |