1. Introduction
Two decades ago, it was unimaginable that children would own and use mobile devices much like adults. The landscape has changed dramatically: 35.6% of children aged between 4 and 14 years now spend between one and two hours on their mobile devices, while 15.1% spend more than four hours [
1]. Even more critical is the age at which children receive their first smartphones; statistics suggest that most children get theirs between the ages of 11 and 12 [
1]. At this age, most are still in their prepubescent years. Such early exposure to these devices concerns parents, who feel that children cannot handle the nature of some online material. Their specific fear is that their children will encounter age-inappropriate content that harms the sustainable development of their mental health [
2].
In Saudi Arabia, children now get their first mobile device as early as age seven. Parents are becoming more comfortable with their children owning these devices: 60% of Saudi parents find them helpful in fostering their children’s problem-solving skills [
3]. Another 64% find the devices helpful in instilling a sense of responsibility. These statistics notwithstanding, about 53% of parents report that the devices have caused sleep difficulties and other attention-related problems for their children [
3]. The sustainability of children’s continued exposure to digital devices concerns local and international child-welfare bodies. The United Nations Children’s Fund (UNICEF) warns that children’s Internet access is becoming an increasingly personal and private matter, as most prefer to use these devices in their bedrooms with limited supervision [
4].
The subject of sustainable child development and behavior in a digital world has attracted mixed opinions. Proponents of children’s digital integration contend that early exposure to technology helps develop their academic and technological curiosity [
5,
6]. Some even argue that this technology facilitates educational progress by exposing children to volumes of valuable material. As the findings above suggest, scholarly opinion is split between allowing children limited access to mobile devices and disallowing access altogether. The issue with disallowing access is that children would find other means of reaching the material anyway, which is far more dangerous because parents would have no control over that access and the material could be even more inappropriate. Children often model their behavior on the material they interact with online, and unlimited access to online material is an unsustainable practice [
7].
This issue has been a subject of debate in technology circles, where content, game, and social media providers increasingly acknowledge that children consume their products. Controlling this access is a major challenge because it is difficult to tell whether the user is a child or an adult. However, video-streaming services such as YouTube and Netflix offer dedicated kids’ platforms. YouTube runs a separate service known as YouTube Kids, with content specifically audited for children [
8]. Such platforms rely largely on artificial intelligence (AI), a branch of computer science that builds models to emulate and predict human behavior [
9]. Apart from these AI-based controls, parental control also plays a role, in the form of parents paying close attention to the websites and content their children interact with online. Unfortunately, parents’ active inspection of children’s online activities is minimal: an estimated three in four parents either do not find time to involve themselves in their children’s online activities or are unaware that their children have active online lives [
10]. Such unsupervised access to limitless online materials, games, and content may undermine the sustainable development of children’s mental and psychological health [
11].
The general objective of the paper is to establish the effect of various digital media interfaces on child development and how artificial intelligence control and parental controls moderate this effect. Specifically, the paper seeks to achieve the following objectives:
To determine the effect of game apps on the sustainability of children’s behavior in Saudi Arabia;
To examine the effect of social media apps on the sustainability of children’s behavior in Saudi Arabia;
To analyze the effect of video-streaming apps on the sustainability of children’s behavior in Saudi Arabia;
To establish the moderating effect of artificial intelligence control and parental controls on the effect of digital media interfaces on the sustainability of children’s behavior in Saudi Arabia.
This paper is significant because of its implications for children’s growth. Several studies have investigated the effect of digital media interfaces on children’s behavior, but most fail to appreciate the role played by artificial intelligence control and parental controls in alleviating the negative effects of technological exposure. The research is innovative in that it seeks to provide AI-based solutions to the growing problem of children’s exposure to inappropriate online content. The researcher argues that physical controls are highly limited in their moderating role, creating the need for more automated measures.
The paper is organized into six sections.
Section 1 introduces the study by providing a detailed background, objectives, and research contribution. Next, the paper provides a comprehensive review of the literature to determine the state of research on the subject. In the theoretical background section, the researcher examines two relevant theories, draws up the conceptual framework, and develops hypotheses. The paper then formulates the study’s methodology before presenting and analyzing the findings. Finally, the researcher discusses these findings in light of the reviewed literature, concludes, and makes recommendations on how best to leverage artificial intelligence (AI) and parental controls.
2. Literature Review
The use of game apps among children helps shape the sustainability of their behavior in the real world. Some teachers use serious games to cultivate attentiveness among young students [
12]. These games enhance children’s gaze durations, which results in better-sustained attention. Other scholars, however, argue that violent game applications undermine sustainable child behavior.
Fortnite, for example, has drawn criticism for exposing children to violent behavior. The game depicts cartoonish characters shooting at one another [
13]. The goal is for the player to be the last one standing. Research, however, has found little evidence that such games meaningfully shape child development and behavior [
14]. When played in a multiplayer setting, such games may even help develop prosocial behavior. Social qualities children gain from playing Fortnite in this setting include cooperation, resilience, self-expression, and the appreciation of others’ successes [
14]. Through gaming applications, children satisfy key social needs such as positive emotions, enjoyment, and relatedness. All these factors contribute positively to sustainable development during early childhood.
Children’s use of social media may make some parents uncomfortable because of the unregulated nature of much of the content posted on these platforms. Nevertheless, adolescents keep flocking to these sites, and some even use accounts that are not their own [
15]. In other cases, parents open Facebook and Instagram accounts for their children while they are still young, and the current TikTok craze has drawn many children to that platform. The effects of interacting with these sites are well documented in scholarly research, yet opinions are divided on how the platforms affect children’s sustainable development, with some studies indicating that social media use causes depression [
16]. Others have shown that children who interact with other users on social media sites are at risk of developing deviant behavior [
17]. This adverse behavior often results from children becoming curious and engaging with people unknown to them. Some users on these sites are adults masquerading as teens or youths, many with sinister motives such as child trafficking or pedophilia. Other sources claim that social media engagement is important to a child’s sustainable development because it provides an environment in which to exercise social skills [
18]. Children on social media also learn a great deal from peers from around the globe, and this networking aspect of social media helps children embrace the global village.
Children find video content highly attractive, especially animation, whose simplicity appeals to them. Unfortunately, not all online video content is appropriate for children, which threatens their sustainable behavioral development [
19]. A site such as YouTube offers both a regular and a kids’ version. Some children are too curious to stay within the kids’ zone and explore the regular site instead. YouTube’s content moderation efforts are commendable, as it is difficult to come across inappropriate content unless one searches for it [
20]. However, children are extremely curious, and in an unsupervised environment they may search for content inappropriate for their age. From this angle, children’s use of video-streaming services may seem detrimental to their behavioral development. Yet these sites also play critical developmental roles by allowing children to view content that promotes their social well-being. Much of the content hosted on YouTube Kids and Netflix Kids teaches children moral lessons, which is good for their sustainable development [
21]. Some children may even extrapolate the lessons learned in these videos to their real-life experiences.
To navigate the murky waters of children’s online experiences, some have turned to algorithmic solutions in the form of artificial intelligence, a computer science discipline that trains and builds models to emulate human behavior [
9]. Normally, it would take a human being to determine whether content is appropriate for a given viewer or user. AI is most commonly applied to video content recommendation: using the predictive power of machine learning and data obtained from a user’s watch history, an algorithm can determine how to populate search results and recommended videos [
22]. Because children tend to watch content appropriate to their age, the system can infer their age with a small margin of error, profile the user as a child, and recommend the most relevant content. If a device is shared by adults and children, however, the mixed history may confuse the algorithm and result in less accurate predictions [
23]. Some social media sites have also integrated algorithms that detect unusual usage of an account; when this happens, the system may lock the user out and prompt them to re-enter their security information. This mitigation technique is useful when a child logs in to an adult’s social media account.
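To make the idea concrete, the following is a minimal sketch of how watch-history-based age profiling could work. The category labels, weights, decision threshold, and profile names are illustrative assumptions, not the recommendation logic of any actual platform:

```python
# Hypothetical sketch of watch-history-based age profiling.
# Category labels, weights, and the decision threshold are illustrative
# assumptions, not the algorithm of any real platform.

CHILD_CATEGORIES = {"cartoons": 1.0, "nursery_rhymes": 1.0, "toys": 0.8}
ADULT_CATEGORIES = {"news": 1.0, "finance": 1.0, "late_night_talk": 0.9}

def child_likelihood(watch_history):
    """Return a 0..1 score estimating that the viewer is a child."""
    child_score = sum(CHILD_CATEGORIES.get(c, 0.0) for c in watch_history)
    adult_score = sum(ADULT_CATEGORIES.get(c, 0.0) for c in watch_history)
    total = child_score + adult_score
    return 0.5 if total == 0 else child_score / total

def choose_profile(watch_history, threshold=0.8):
    """Switch to a restricted kids' profile only when the evidence is strong."""
    score = child_likelihood(watch_history)
    if score >= threshold:
        return "kids_profile"
    if score <= 1 - threshold:
        return "regular_profile"
    # A device shared by adults and children mixes both category groups,
    # pulling the score toward the middle and weakening the prediction.
    return "ambiguous"

print(choose_profile(["cartoons", "toys", "nursery_rhymes"]))   # kids_profile
print(choose_profile(["cartoons", "toys", "news", "finance"]))  # ambiguous
```

Real platforms train such classifiers on far richer signals, but even this toy scoring rule illustrates why a shared device yields weaker predictions, as noted above.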
Machine learning (ML) technologies classify content based on features and user feedback. Whenever a user uploads an image or video to a social media site such as Facebook or a video-streaming platform such as YouTube, a bot scans the image or individual video frames using ML tools such as OpenCV to determine whether the content is graphic or otherwise inappropriate. As a result, it is extremely hard to find inappropriate material on these sites [
24]. The bot also scans posted videos and blurs the thumbnails of flagged content. Some social media platforms, such as Twitter, further warn users that the content is graphic and may be inappropriate for some audiences [
25]. For text-based classification, social media sites maintain a large dictionary of offensive phrases; whenever a post contains any such phrase, the bot replaces the offending part with asterisks.
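The frame-scanning step described above can be sketched roughly as follows. The snippet only shows how OpenCV might be used to sample frames from an uploaded video; looks_graphic() is a placeholder for a trained classifier, and the sampling interval is an assumed parameter, so this is not any platform’s actual moderation pipeline:

```python
# Sketch of sampling uploaded video frames for automated review with OpenCV.
# looks_graphic() stands in for a trained image classifier; the sampling
# interval and flagging rule are illustrative assumptions.
import cv2

def looks_graphic(frame) -> bool:
    """Placeholder: a real system would run a trained model on the frame."""
    return False

def scan_video(path, sample_every_n_frames=30):
    """Return True if any sampled frame is flagged as potentially graphic."""
    capture = cv2.VideoCapture(path)
    frame_index = 0
    flagged = False
    while True:
        ok, frame = capture.read()
        if not ok:
            break  # end of video or unreadable file
        if frame_index % sample_every_n_frames == 0 and looks_graphic(frame):
            flagged = True
            break
        frame_index += 1
    capture.release()
    return flagged

if __name__ == "__main__":
    # A flagged upload could then have its thumbnail blurred or be routed
    # to human review, as described above.
    print(scan_video("upload.mp4"))
```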
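Similarly, the dictionary-based text filter can be approximated in a few lines; the phrase list below is a stand-in for the much larger dictionaries platforms maintain:

```python
# Sketch of dictionary-based masking of offensive phrases.
# The phrase list is a placeholder for illustration only.
import re

OFFENSIVE_PHRASES = ["badword", "another bad phrase"]

def mask_offensive(text: str) -> str:
    """Replace each listed phrase with asterisks of the same length."""
    for phrase in OFFENSIVE_PHRASES:
        pattern = re.compile(re.escape(phrase), re.IGNORECASE)
        text = pattern.sub(lambda match: "*" * len(match.group(0)), text)
    return text

print(mask_offensive("This post contains a badword."))
# -> "This post contains a *******."
```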
Even with artificial intelligence available to control children’s exposure to inappropriate content, some still consider parental controls a more suitable approach. Such controls may be manual or embedded in the systems themselves. Manual controls involve parents physically supervising the content with which their children interact [
19]. Some parents download what they find appropriate and then switch off the Internet so that children consume only what has been vetted. Others stay close to their children while they play games, browse social media sites, and watch online videos to ensure they remain within their limits [
26]. Despite the manual nature of most parental controls, a significant proportion of parents still implement them [
26]. Such parents consider their physical presence a sufficient deterrent to shape their children’s online conduct.