1. Introduction
Debate about the psychological impacts of screen time on young people remains a regular feature of societal conversation. Specifically, concerns that screen time negatively impacts psychosocial functioning, health and behaviours remain widespread, and are routinely voiced in academic, policy and media circles. Whilst some evidence shows negative correlations between screen time and well-being [1,2], recent reviews and meta-analyses repeatedly conclude that the evidence base is difficult to interpret and presents mixed findings [3,4]. In recent years, a growing consensus has emerged that the observed associations between screen time and psychological or health outcomes are very small in size, and that clear longitudinal evidence that they reflect a pure causal effect of screen media is lacking [5,6,7].
Strong evidence concerning the positive or negative impacts of screen time is, therefore, surprisingly sparse. At the same time, other factors tend to explain far more variability when researchers adopt more theoretically aligned approaches to the link between screen time and psychosocial well-being [3,7]. For example, social support garnered through screen-based activities [8], types of social interactions [9], pre-existing levels of social anxiety [10], and fear of missing out [11] may be more influential in predicting well-being outcomes than screen time itself. Further, when quantifying negative outcomes, especially for young people, many other factors remain far more influential. For example, adversity in childhood consistently predicts poor psychosocial functioning and long-term health outcomes [12]. Screen time therefore sits within a complex mesh of individual and social factors, many of which are already known to have substantial and clear ramifications for everyday functioning, with non-uniform outcomes.
While debate, therefore, continues regarding both the size and importance of screen time associations with health and well-being, basic methodological and philosophical issues surrounding screen time are routinely overlooked. These issues include poor conceptualisation and the routine use of non-standardised, predominantly self-report measures. In addition, many research designs rely on cross-sectional data, which cannot begin to unpick the causal nature of observed associations between screen time and well-being. This paper considers each of these issues in turn, examines the evidence, provides additional insights, and proposes recommendations to improve the quality of future research and discussion in this area.
2. Issue 1—Conceptual Chaos—What Is Screen Time?
Definitions of screen time vary, which poses a myriad of issues relating to harmonisation, measurement and comparison. The Oxford English Dictionary [13] defines screen time as “time spent using a device such as a computer, television, or games console”, whereas the World Health Organisation’s [14] (p. 5) most recent guidelines focus on sedentary screen time and define it as “Time spent passively watching screen-based entertainment (TV, computer, mobile devices). This does not include active screen-based games where physical activity or movement is required.” The WHO definition therefore excludes screen activities such as exergaming, which has been credited with upending the stereotype of gaming as a sedentary activity [15]. Indeed, it has been claimed that current paediatric recommendations specifically surrounding (sedentary) screen use may be unachievable because screen-based media are central to people’s everyday lives [16]. This highlights the multitude of purposes that screens serve in everyday life, supporting behaviours such as communication, information seeking and entertainment.
Despite this, there is a tendency to assume that screen time is a largely unidimensional or homogeneous construct, with little work to unify definitions or ensure that conceptualisations are agreed upon. This has led to many different approaches to conceptualising and measuring screen time. A selection of these conceptualisations is presented in Table 1.
This diverse nomenclature could be due to the area of study being new, with a unified vocabulary yet to be established by the academic community. However, such diversity is also a clear indication that a concept remains relatively undefined even within expert circles. It is, therefore, unsurprising that there is wide variation in the self-report measures used to capture screen time. Whereas some studies have asked about overall ‘total screen time’ within a certain time-frame (e.g., in the last week), others ask participants to estimate their screen time on a typical day, sometimes split by weekday and weekend day, or school day and non-school day. Other studies take different approaches, either adding supplementary measures of usage by type of device or activity (e.g., passive use/TV viewing, social media use) or focusing more exclusively on one specific technology, platform or application. This may include questions about attitudes towards technology use that are then treated as a proxy for screen time, such as the Facebook Intensity Scale [26]. It is unclear whether taking a sum or average of all these different measures equates to asking a participant to estimate their overarching screen time, and there has been little work examining how these different types of measurement relate to each other.
One noteworthy study, focusing on Facebook use, has explored variations in how participants respond depending on the time-frame reference points of screen time questions, and whether these responses correspond to actual Facebook use [27]. The time-frames included minutes per day, time per day, daily time in the past week, and number of times per day. Ernala et al.’s [27] findings showed that accuracy varies with the time-frame reference point: participants were most accurate (albeit only at a rate of 34%) when reporting the number of times per day they visited Facebook, but very inaccurate when gauging how much “time”, “minutes” or “hours” they had spent on average per day on Facebook. As such, even simple screen time measures can be expected to include a high degree of measurement error, which is likely to contribute toward contradictory and inconsistent empirical results. (Additional discussion on measurement is provided in Issue 2.)
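To illustrate the consequence for empirical results, the short simulation below (our own illustration; the effect size and error magnitudes are arbitrary assumptions, not estimates from the cited studies) shows how random error in self-reported screen time attenuates a small underlying association with well-being:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: a small "true" association (r of about -.10) between
# daily screen minutes and a well-being score, plus increasingly noisy self-reports.
n = 500
true_screen = rng.normal(180, 60, n)                      # true minutes per day
wellbeing = -0.10 * (true_screen - 180) / 60 + rng.normal(0, 1, n)

for noise_sd in (0, 30, 60, 120):                         # self-report error (minutes)
    reported = true_screen + rng.normal(0, noise_sd, n)
    r = np.corrcoef(reported, wellbeing)[0, 1]
    print(f"report error SD = {noise_sd:>3} min -> observed r = {r:+.3f}")
```

The larger the reporting error, the closer the observed correlation drifts toward zero, which is one plausible route to the inconsistent findings noted above.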
Theoretically valid and practically useful conceptualisations of screen time are only likely to be achieved if users themselves are included as active research participants. Such research is likely to be qualitative in nature and will ideally include users as fundamental partners (e.g., using participatory research methods). This can also inform our understanding of how specific populations engage with screens. For example, vulnerable populations may have unique uses for screens, as suggested by work showing that adolescents with neurodevelopmental disorders have higher rates of social media use [28] and may use screens to support social and communication skills [29]. Such advantages include the fact that these experiences can be highly motivating, with integrated rewards, close control of learning rate and content, and personalisation to the individual’s own preferences [30]. These nuances should be integrated into the academic understanding of the concept to make it truly meaningful. Such enhanced conceptual work may well conclude that “screen time” is simply not a useful concept for research or for enhancing public understanding. We therefore believe it is fundamentally important for scholars to engage meaningfully with issues surrounding the conceptualisation and measurement of screen time.
Recommendations:
Employ a wider range of user-focused research methodologies to inform a clear and unambiguous conceptualisation of screen time.
Include in conceptualisations a differentiation between screen time as a numerical measurement (e.g., minutes per day) and “screen use” as goal-directed behaviour (e.g., to fulfil different motivations such as social use and educational use).
3. Issue 2—Estimates Are Not Good Enough
Studies investigating screen time (under any of its various labels) most often rely on users’ estimated reports, gathered either retrospectively or throughout the day. Participants may be asked questions such as “How many hours of screen time did you use in a typical day last week?”, yet it is far from clear whether participants can accurately estimate the amount of screen time they engage in, or whether this estimate is systematically biased by screen use itself. Biases may include underestimation by high users, overestimation by low users, and developmental differences in the accuracy of time estimation itself [31]. Indeed, accumulating evidence indicates that timing functions are impaired in a variety of developmental disorders, including ADHD [32,33], dyslexia [34], and autism [35,36].
Whilst self-report measures are commonplace in psychological research, evidence suggests that they can be particularly ill-suited to gaining accurate data on people’s technology-related use [27,37,38]. When smartphone users’ estimates of their own smartphone use and checking behaviours are compared with their actual use (garnered from smartphone data), users tend to underestimate these behaviours [39]. Correspondingly, for social networking sites such as Facebook, compared with actual use garnered from log data, people tend to overreport how much time they spend on Facebook but underreport how many times they visit per day [27]. This appears to become even more problematic when estimating short, habitual behaviours like smartphone checking [39], as highlighted in recent work showing that iPhone users’ estimates of daily social media use are less accurate than their reports of overall daily iPhone use [38]. In this context, developments such as Apple’s Screen Time app allow researchers to obtain objective measurements of behaviour surrounding smartphone use [38,40]. However, challenges remain when wishing to record screen time on other platforms such as laptops or TVs, or on shared devices (e.g., family consoles or a university PC). This issue is discussed further in Issue 3.
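As a minimal sketch of the kind of validation analysis reported in this literature (the paired values below are hypothetical and purely for illustration), self-reported daily minutes can be compared against logged minutes to quantify both the direction of bias and the strength of agreement:

```python
import numpy as np
from scipy import stats

# Hypothetical paired data: self-reported vs. logged daily use (minutes), one pair
# per participant. Real studies would obtain the second column from device logs.
reported = np.array([120, 60, 240, 90, 300, 45, 180, 150])
logged = np.array([185, 95, 260, 160, 310, 80, 250, 190])

bias = reported - logged                     # negative values indicate underestimation
r, p = stats.pearsonr(reported, logged)      # linear agreement between reports and logs

print(f"mean signed error: {bias.mean():+.1f} min (SD {bias.std(ddof=1):.1f})")
print(f"correlation between report and log: r = {r:.2f} (p = {p:.3f})")
```

A negative mean signed error corresponds to the underestimation pattern described above; a high correlation alongside a large signed error would indicate consistent but biased reporting.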
It is perhaps unsurprising that there has been little movement in the field to develop high-quality screen time measures, because doing so involves a variety of methodological, security and privacy challenges. For example, the Human Screenome Project [41] aims to unobtrusively collect screen recordings which show moment-by-moment changes on users’ screens. However, notwithstanding security and privacy concerns, the resources that would allow other researchers to take advantage of these methods are not freely available. These developments do not diminish the importance of subjective experiences, but measurements should be aligned with the specific research question, which, in the case of screen time, typically concerns usage alone.
Recommendations:
Make use of objective screen data such as data logs, screen time apps which record the use of apps of interest, and server data where possible (e.g., Apple Screen Time, which records the use of apps based on categories such as “entertainment” and “health and fitness”).
Encourage and contribute to transdisciplinary programs of work around privacy, security and methodological challenges relevant to the adoption of objective measurement.
Work on screen time measurement should seek to produce scales and assessments that are as brief as possible, especially when these are designed to assess children and young people’s screen time.
4. Issue 3—Screen Use Varies across Contexts
While it is now possible to record screen time on individual devices such as a smartphone, or within a specific smartphone app, even these approaches will struggle to address how much time is spent on forms of screen-based engagement that take place across multiple screens. For example, social media can be accessed via a smartphone, tablet, TV, games console, and/or PC. It seems unlikely that data can be harvested from all of these concurrently, especially when at least some will be shared with people other than a study participant (e.g., a family TV). In addition, people engage with screens that are not necessarily their own (e.g., a friend’s phone or a school laptop), and this further limits the utility of approaches based on device-level records of actual use. There are, of course, economic aspects to consider in relation to this issue, including the extent to which different users will have the means and access to varied types of technology in different places.
The issue of multitasking overlaps with that of screen proliferation. Imagine a researcher wants to record how much screen time a person engages in between 19:00 and 20:00. The participant has the TV on for the whole hour, uses a laptop for 25 minutes to complete some work, and checks their social media whenever an alert arrives on their smartphone. How should this person report their screen time? Media multitasking (switching across devices) and task switching (changing channels or tasks within the same device) have been explored quite extensively in the literature, but largely in respect of their impact on cognitive or attentional capacity [42]. In general, these studies have tended to define media multitasking as simultaneous media usage via several devices [43,44]. However, they have failed to reach a reasonable consensus on how best to measure the proportionality of the use of specific devices or channels during these instances.
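To make the ambiguity concrete, the sketch below (purely illustrative; the session intervals simply restate the 19:00–20:00 example above) contrasts two defensible answers, summing time per device versus counting unique “screen-on” minutes:

```python
# Illustrative device sessions from the 19:00-20:00 example, expressed as
# (start, end) minutes past 19:00. Smartphone checks are short, alert-driven bursts.
sessions = {
    "tv": [(0, 60)],
    "laptop": [(10, 35)],
    "smartphone": [(5, 7), (22, 24), (48, 51)],
}

# Answer 1: sum usage across devices (double-counts concurrent use).
per_device_total = sum(end - start
                       for spans in sessions.values()
                       for start, end in spans)

# Answer 2: unique screen-on time (each minute of the hour counted at most once).
covered = set()
for spans in sessions.values():
    for start, end in spans:
        covered.update(range(start, end))

print(f"summed across devices: {per_device_total} min")   # 92 min
print(f"unique screen-on time: {len(covered)} min")       # 60 min
```

Both totals are defensible under some definition of screen time, which is precisely why a shared convention for handling concurrent use is needed.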
As previously noted, the available literature on screen time tends to use either a single total “screen time” score or a number of such scores broken down by type of device or activity. Whilst the latter approach helps to ascertain more clearly how certain types of screen activity may impact differentially on outcomes, what it currently omits is how the various combinations of screen time activities (which may occur across contexts) function in this regard. That is, someone with a high proportion of entertainment screen use (e.g., watching TV) relative to other screen-based activities (e.g., social media use) may be affected differently from someone with a more equally weighted “screen time profile”. Any such latent measure would also need to be cognizant of other important socio-cultural confounds, including economic status. However, no research to date has used a profiling or cluster-based approach to explore how these various screen time proportionalities map onto psychological or behavioural outcomes. This would be a helpful advancement within the field, and potentially presents a more holistic approach to the issue than current practice.
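As a hedged illustration of what such a profile- or cluster-based approach could look like (the activity categories, proportions and number of clusters below are invented for demonstration, not drawn from any dataset), participants could be grouped by the share of screen time devoted to each activity:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Hypothetical per-participant proportions of weekly screen time spent on
# [entertainment, social, education/work, informational]; each row sums to 1.
proportions = rng.dirichlet(alpha=[4, 3, 2, 1], size=200)

# Group participants into illustrative "screen time profiles".
kmeans = KMeans(n_clusters=3, n_init=10, random_state=1).fit(proportions)

labels = ["entertainment", "social", "education/work", "informational"]
for k, centre in enumerate(kmeans.cluster_centers_):
    profile = ", ".join(f"{lab} {p:.0%}" for lab, p in zip(labels, centre))
    print(f"profile {k}: {profile}")
```

Cluster membership could then be related to psychological or behavioural outcomes, alongside socio-cultural covariates such as economic status.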
Recommendations:
Explore profile- or cluster-based approaches that capture how screen time is proportioned across activities, devices and contexts, while remaining cognizant of socio-cultural confounds such as economic status.
5. Issue 4—Screen Use Changes over Time
The majority of research on screen time has used cross-sectional data to understand its relationships with psychological and behavioural outcomes. While such cross-sectional research is important when examining a fast-moving target, and where there might not be enough time to collect longitudinal data, it inherently limits the scope of conclusions. The handful of studies that have approached questions around screen time using longitudinal methods observe either no effects or very small effects of screen time on experiences such as depressive symptoms [21,45]. This holds when exploring specific forms of screen activity such as video games [46], social media [5,6,47,48], and passive screen use [21].
While longitudinal research is invaluable for beginning to establish causal relationships, it brings with it a set of unique concerns. Longitudinal research, by its nature, requires data to be gathered at multiple time-points, which can extend over a period of months or often years. There are three major challenges facing researchers who wish to conduct such studies. First, it is unclear how applicable any given measure of “screen time” will be for the duration of a given piece of research. For example, if screen time is recorded as a platform- or device-specific activity (e.g., Facebook use or iPhone use), then the measurement is dependent upon the success and durability of those platforms or devices. Second, the degree to which different cohorts use comparable platforms or devices in comparable ways is far from clear. For example, the way in which a 12-year-old uses the most up-to-date iPhone in 2021 will not necessarily be directly comparable to how a 12-year-old uses the most up-to-date iPhone in 2024; the capabilities and affordances of the 2024 model may make it a fundamentally different device from the 2021 model. In this rapidly evolving environment, documenting cohort-specific screen use is of paramount importance. Finally, work with children and young people often involves data collection within schools and during class time. As such, repeated data collection can disrupt participants’ education, and this makes it more likely that researchers will opt for measurements that are quick to administer but psychometrically too weak to support strong conclusions.
One solution is to move away from measuring the platform use itself, and instead measure affordances/activities obtained via platforms. For example, video-chatting is a social activity which is mediated by a number of different platforms such as Facebook Messenger, Zoom and FaceTime. An affordance-based approach avoids relying on the commercial success of platforms or devices and instead establishes measurements which should theoretically remain more stable over time. This can support researchers to ensure that responses are comparable. Failure to attend to this fundamental measurement issue is equivalent to asking participants how they interact with novels at the start of a study and then asking them how they interact with newspapers later.
In addition to comparable longitudinal measurements, better use of high-quality experimental studies is needed to probe causal links between screen time and outcomes of interest. However, studies of this nature necessitate control groups in which subsets of participants are asked to refrain from using screens (or a certain subset of them). Such studies are lacking because ensuring participant compliance is challenging [4,49], and those published fail to show consistent results [50,51,52]. A more viable way forward may be to form control groups based on restrictions to certain features of technology (e.g., social media sites or apps) for a given time-frame, to compare with those using these features as normal. Despite control groups being considered part of the gold-standard approach to effective scientific design, there is scant attention to this issue in the current screen time literature. To our knowledge, even the obvious question of who should operate as a control group in this type of research has not yet been fully appreciated. A true control group would comprise people who abstain from all screens for a predetermined time-frame, yet this would be almost impossible for large sections of society, as screens serve a multitude of everyday functions and are embedded within our schools and workplaces. Within the literature, these questions have simply not been asked, and they therefore present additional methodological uncertainty for those working in this area. Although removing screens entirely from people’s lives is not viable as a form of control group, some work has measured the effects of smartphone abstinence (for up to 24 hours) on short-term outcomes such as mood, anxiety and craving [53]. Alternatively, researchers may find new opportunities to tease out causal relationships between screen time and psychological impacts from observational data alone [54].
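As a rough sketch of the logic behind such observational approaches (the simulated data, variable names and coefficients below are our own illustrative assumptions, not results from the cited work), adjusting for a confounder specified in an explicit causal model can shrink a seemingly robust screen time association to near zero:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data under an assumed causal diagram: childhood adversity raises
# screen time and lowers well-being; screen time itself has no direct effect.
n = 2000
adversity = rng.normal(size=n)
screen_time = 0.5 * adversity + rng.normal(size=n)
wellbeing = -0.6 * adversity + rng.normal(size=n)

def ols(y, *predictors):
    """Least-squares coefficients for y regressed on an intercept plus predictors."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive_slope = ols(wellbeing, screen_time)[1]
adjusted_slope = ols(wellbeing, screen_time, adversity)[1]

print(f"naive screen time coefficient: {naive_slope:+.2f}")     # about -0.24
print(f"adjusted for adversity:        {adjusted_slope:+.2f}")  # about  0.00
```

The point is not that adversity explains all observed associations, but that credible causal claims from observational data depend on an adjustment set justified by an explicit causal model [54].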
Recommendations:
As per Recommendation 2, define screen time in terms of more specific, targeted behaviours (e.g., social screen use), focusing on the behaviour itself rather than the technological affordances available at certain time-points. If measurements are developed around the behaviours screens afford rather than the “functions and features” people use, the reference point is less volatile and less likely to produce comparability issues across time-points.
Control groups should feature more prominently in research paradigms, for example by including people who are restricted from certain screen activities, or from using screens for certain behaviours, for a designated time-period.
6. Conclusions
There are a number of complex and nuanced issues in work addressing screen time and its outcomes. These include the difficulties of conceptualising screen time, the limitations of self-report data, and the challenges of measuring screen time over time and across contexts. Some of these issues may be rectified, at least in part, by using more objective approaches rather than estimates of behaviour. This is certainly an important avenue for advancing the field, and a key recommendation for improving scientific insight into these issues. However, it is only a partial solution to a broader conceptual debate. As a first step, we reiterate what others have already recommended [55,56]: that the term “screen time” be retired. Instead, alongside our recommendations outlined above, we propose an affordance-based approach as a logical and useful step forward. Here, “screen use” (irrespective of platform or feature) is defined by the behaviour(s) screens are facilitating, which may relate to entertainment, social, education/work, and informational behaviours. To structure such an approach, the needs of users should be the central concern, and users themselves should be co-creators, through qualitative work which listens to the voices of these various users. This should be the next step in furthering academic and public debate around conceptualising and measuring screen use, so that we may be better equipped to explore its psychological and social correlates.
Author Contributions
Conceptualization, L.K.K., A.O., D.A.E., S.C.H., and S.H.; methodology, N/A.; software, N/A.; validation, N/A; formal analysis, N/A.; investigation, N/A.; resources, L.K.K., A.O., D.A.E., S.C.H., and S.H.; data curation, N/A.; writing—original draft preparation, L.K.K.; writing—review and editing, L.K.K., A.O., D.A.E., S.C.H. and S.H.; visualization, N/A.; supervision, N/A.; project administration, L.K.K.; funding acquisition, N/A. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Conflicts of Interest
The authors declare no conflict of interest.
References
1. Twenge, J.M. More time on technology, less happiness? Associations between digital-media use and psychological well-being. Curr. Dir. Psychol. Sci. 2019, 28, 372–379.
2. Twenge, J.M.; Campbell, W.K. Associations between screen time and lower psychological well-being among children and adolescents: Evidence from a population-based study. Prev. Med. Rep. 2018, 12, 271–283.
3. Odgers, C.L.; Jensen, M.R. Annual Research Review: Adolescent Mental Health in the Digital Age: Facts, Fears, and Future Directions. J. Child Psychol. Psychiatry 2020, 61, 336–348.
4. Orben, A. Teenagers, Screens and Social Media: A Narrative Review of Reviews and Key Studies. Soc. Psychiatry Psychiatr. Epidemiol. 2020, 55, 407–414.
5. Jensen, M.; George, M.J.; Russell, M.R.; Odgers, C.L. Young adolescents’ digital technology use and mental health symptoms: Little evidence of longitudinal or daily linkages. Clin. Psychol. Sci. 2019, 7, 1416–1433.
6. Orben, A.; Dienlin, T.; Przybylski, A.K. Social Media’s Enduring Effect on Adolescent Life Satisfaction. Proc. Natl. Acad. Sci. USA 2019, 116, 21.
7. Orben, A.; Przybylski, A.K. The association between adolescent well-being and digital technology use. Nat. Hum. Behav. 2019, 3, 173–182.
8. Wang, P.; Lei, L.; Wang, X.; Nie, J.; Chu, X.; Jin, S. The exacerbating role of perceived social support and the “buffering” role of depression in the relation between sensation seeking and adolescent smartphone addiction. Pers. Individ. Dif. 2018, 130, 129.
9. Twenge, J.M.; Spitzberg, B.H.; Campbell, W.K. Less in-person social interaction with peers among US adolescents in the 21st century and links to loneliness. J. Soc. Pers. Relat. 2019, 36, 1892–1913.
10. Hong, W.; Liu, R.; Oei, T.; Zhen, R.; Jiang, S.; Sheng, X. The mediating and moderating roles of social anxiety and relatedness need satisfaction on the relationship between shyness and problematic mobile phone use among adolescents. Comput. Hum. Behav. 2019, 93, 301–308.
11. Elhai, J.D.; Levine, J.C.; Dvorak, R.D.; Hall, B.J. Fear of missing out, need for touch, anxiety and depression are related to problematic smartphone use. Comput. Hum. Behav. 2016, 63, 509–516.
12. Bellis, M.A.; Lowey, H.; Leckenby, N.; Hughes, K.; Harrison, D. Adverse childhood experiences: Retrospective study to determine their impact on adult health behaviours and health outcomes in a UK population. J. Public Health 2014, 36, 81–91.
13. Oxford University Press. Oxford English Dictionary (2020); Oxford University Press: Oxford, UK, 2020.
14. World Health Organization. Guidelines on Physical Activity, Sedentary Behaviour and Sleep for Children under 5 Years of Age. Available online: https://apps.who.int/iris/handle/10665/311664 (accessed on 17 April 2020).
15. Kaye, L.K.; Levy, A.R. Reconceptualising the link between screen-time when gaming with physical activity and sedentary behaviour. Cyberpsychol. Behav. Soc. Netw. 2017, 20, 769–773.
16. Houghton, S.; Hunter, S.C.; Rosenberg, M.; Wood, L.; Zadow, C.; Martin, K.; Shilton, T. Virtually impossible: Limiting Australian children and adolescents daily screen based media use. BMC Public Health 2015, 15, 5.
17. Suchert, V.; Hanewinkel, R.; Isensee, B.; läuft Study Group. Sedentary behavior, depressed affect, and indicators of mental well-being in adolescence: Does the screen only matter for girls? J. Adolesc. 2015, 42, 50–58.
18. Stockdale, L.A.; Coyne, S.M.; Padilla-Walker, L.M. Parent and child technoference and socioemotional behavioral outcomes: A nationally representative study of 10- to 20-year-old adolescents. Comput. Hum. Behav. 2018, 88, 219–226.
19. Twenge, J.M.; Campbell, W.K. Media use is linked to lower psychological well-being: Evidence from three datasets. Psychiatr. Q. 2019, 90, 311–331.
20. Ferguson, C.J. Everything in moderation: Moderate use of screens is unassociated with child behavior problems. Psychiatr. Q. 2017, 88, 797–805.
21. Houghton, S.; Lawrence, D.; Hunter, S.C.; Zadow, C.; Rosenberg, M.; Wood, L.; Shilton, T. Reciprocal relationships between trajectories of depressive symptoms and screen media use during adolescence. J. Youth Adolesc. 2018, 47, 2453–2467.
22. Wu, X.; Tao, S.; Zhang, S.; Zhang, Y.; Chen, K.; Yang, Y.; Hao, J.; Tao, F. Impact of screen time on mental health problems progression in youth: A 1-year follow-up study. BMJ Open 2016, 6, e011533.
23. Twenge, J.M.; Joiner, T.E.; Rogers, M.L.; Martin, G.N. Increases in depressive symptoms, suicide-related outcomes and suicide rates among US adolescents after 2010 and links to increased new media screen time. Clin. Psychol. Sci. 2018, 6, 3–17.
24. Nesi, J.; Prinstein, M.J. Using social media for social comparison and feedback-seeking: Gender and popularity moderate associations with depressive symptoms. J. Abnorm. Child Psychol. 2015, 43, 1427–1438.
25. Orben, A.; Przybylski, A.K. Teenage Sleep and Technology Engagement across the Week. PeerJ 2020, 8, e8427.
26. Orosz, G.; Tóth-Király, I.; Bőthe, B. Four facets of Facebook intensity—The development of the Multidimensional Facebook Intensity Scale. Pers. Individ. Differ. 2016, 100, 95–104.
27. Ernala, S.K.; Burke, M.; Leavitt, A.; Ellison, N.B. How well do people report time spent on Facebook? An evaluation of established survey questions with recommendations. In Proceedings of the ACM CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020.
28. Nereim, C.; Bickham, D.; Rich, M. A primary care pediatrician’s guide to assessing problematic interactive media use. Curr. Opin. Pediatr. 2019, 31, 435–441.
29. Fletcher-Watson, S.; Petrou, A.; Scott-Barrett, J.; Dicks, P.; Graham, C.; O’Hare, A.; Pain, H.; McConachie, H. A trial of an iPad™ intervention targeting social communication skills in children with autism. Autism 2016, 20, 771–782.
30. Fletcher-Watson, S. A Targeted Review of Computer-Assisted Learning for People with Autism Spectrum Disorder: Towards a Consistent Methodology. Rev. J. Autism Dev. Disord. 2014, 1, 87–100.
31. Block, R.A.; Zakay, D.; Hancock, P.A. Developmental changes in human duration judgments: A meta-analytic review. Dev. Rev. 1999, 19, 183–211.
32. Marx, I.; Weirich, S.; Berger, C.; Herpertz, S.C.; Cohrs, S.; Wandschneider, R.; Höppner, J.; Häßler, F. Living in the fast lane: Evidence for a global perceptual timing deficit in childhood ADHD caused by distinct but partially overlapping task-dependent cognitive mechanisms. Front. Hum. Neurosci. 2017, 11, 122.
33. Noreika, V.; Falter, C.M.; Rubia, K. Timing deficits in attention-deficit/hyperactivity disorder (ADHD): Evidence from neurocognitive and neuroimaging studies. Neuropsychologia 2013, 51, 235–266.
34. Gooch, D.; Snowling, M.; Hulme, C. Time perception, phonological skills and executive function in children with dyslexia and/or ADHD symptoms. J. Child Psychol. Psychiatry 2011, 52, 195–203.
35. Isaksson, S.; Salomäki, S.; Tuominen, J.; Arstila, V.; Falter-Wagner, C.M.; Noreika, V. Is there a generalized timing impairment in autism spectrum disorders across time scales and paradigms? J. Psychiatr. Res. 2018, 99, 111–121.
36. Karaminis, T.; Cicchini, G.M.; Neil, L.; Cappagli, G.; Aagten-Murphy, D.; Burr, D.C.; Pellicano, E. Central tendency effects in time interval reproduction in autism. Sci. Rep. 2016, 6, 28570.
37. Ellis, D.A. Are smartphones really that bad? Improving the psychological measurement of technology-related behaviors. Comput. Hum. Behav. 2019, 97, 60–66.
38. Sewall, C.J.R.; Bear, T.B.; Merranko, J.; Rosen, D. How psychosocial well-being and usage amount predict inaccuracies in retrospective estimates of digital technology use. Mob. Media Commun. 2020.
39. Andrews, S.; Ellis, D.A.; Shaw, H.; Piwek, L. Beyond self report: Tools to compare estimated and real-world Smartphone use. PLoS ONE 2015, 10, e0139004.
40. Ellis, D.; Davidson, B.; Shaw, H.; Geyer, K. Do smartphone usage scales predict behavior? Int. J. Hum. Comput. Stud. 2019, 130, 86–92.
41. Reeves, B.; Robinson, T.; Ram, N. Time for the Human Screenome Project. Nature 2020, 577, 314–317.
42. Yeykelis, L.; Cummings, J.J.; Reeves, B. Multitasking on a Single Device: Arousal and the Frequency, Anticipation, and Prediction of Switching Between Media Content on a Computer. J. Commun. 2014, 64, 167–192.
43. Lin, T.T.C. Why Do People Watch Multiscreen Videos and Use Dual Screening? Investigating Users’ Polychronicity, Media Multitasking Motivation, and Media Repertoire. Int. J. Hum. Comput. Interact. 2019, 35, 1672–1680.
44. Lin, T.T.C.; Kononova, A.; Chiang, Y.-H. Screen Addiction and Media Multitasking among American and Taiwanese Users. J. Comput. Inf. Syst. 2019.
45. Gunnell, K.E.; Flament, M.F.; Buchholz, A.; Henderson, K.A.; Obeid, N.; Schubert, N.; Goldfield, G.S. Examining the bidirectional relationship between physical activity, screen time, and symptoms of anxiety and depression over time during adolescence. Prev. Med. 2016, 88, 147–152.
46. Zink, J.; Belcher, B.R.; Kechter, A.; Stone, M.D.; Leventhal, A.M. Reciprocal associations between screen time and emotional disorder symptoms during adolescence. Prev. Med. Rep. 2019, 13, 281–288.
47. Coyne, S.M.; Rogers, A.A.; Zurcher, J.D.; Stockdale, L.; Booth, M. Does time spent using social media impact mental health? An eight year longitudinal study. Comput. Hum. Behav. 2020, 104, 106160.
48. Raudsepp, L.; Kais, K. Longitudinal associations between problematic social media use and depressive symptoms in adolescent girls. Prev. Med. Rep. 2019, 15, 100925.
49. Allcott, H.; Braghieri, L.; Eichmeyer, S.; Gentzkow, M. The welfare effects of social media. Am. Econ. Rev. 2019, 110, 629–676.
50. Vanman, E.J.; Baker, R.; Tobin, S.J. The burden of online friends: The effects of giving up Facebook on stress and well-being. J. Soc. Psychol. 2018, 158, 496–507.
51. Tromholt, M. The Facebook Experiment: Quitting Facebook Leads to Higher Levels of Well-Being. Cyberpsychol. Behav. Soc. Netw. 2016, 19, 661–666.
52. Hunt, M.G.; Marx, R.; Lison, C.; Young, J. No more FOMO: Limiting social media decreases loneliness and depression. J. Soc. Clin. Psychol. 2018, 37, 751–768.
53. Wilcockson, T.D.W.; Osbourne, A.M.; Ellis, D.A. Digital detox: The effect of smartphone abstinence on mood, anxiety, and craving. Addict. Behav. 2019, 99, 106013.
54. Rohrer, J.M. Thinking clearly about correlations and causation: Graphical causal models for observational data. Adv. Methods Pract. Psychol. Sci. 2018, 1, 27–42.
55. London School of Economics and Political Science. What and How Should Parents be Advised about ‘Screen Time’? 2016. Available online: https://blogs.lse.ac.uk/parenting4digitalfuture/2016/07/06/what-and-how-should-parents-be-advised-about-screen-time/ (accessed on 16 April 2020).
56. London School of Economics and Political Science. The Trouble with ‘Screen Time’ Rules. 2017. Available online: https://blogs.lse.ac.uk/parenting4digitalfuture/2017/06/08/the-trouble-with-screen-time-rules/ (accessed on 16 April 2020).
Table 1. Different terminology used in screen time work.

| Term | Examples of Authors |
|---|---|
| Sedentary screen-based behaviour | Suchert et al. [17] |
| Media time | Stockdale et al. [18]; Twenge and Campbell [19] |
| Screen time | Ferguson [20]; Houghton et al. [21]; Wu et al. [22] |
| Screen use | Ferguson [20]; Houghton et al. [21]; Wu et al. [22] |
| New media screen time | Twenge et al. [23] |
| Digital media time | Twenge and Campbell [19] |
| Technology use | Nesi and Prinstein [24] |
| Digital engagement | Orben and Przybylski [25] |
| Digital technology use | Orben and Przybylski [7] |