Review

Ethics and Transparency Issues in Digital Platforms: An Overview

Leilasadat Mirghaderi, Monika Sziron and Elisabeth Hildt

Department of Humanities, Illinois Institute of Technology, Chicago, IL 60616, USA
* Author to whom correspondence should be addressed.
AI 2023, 4(4), 831-843; https://doi.org/10.3390/ai4040042
Submission received: 31 August 2023 / Revised: 25 September 2023 / Accepted: 26 September 2023 / Published: 28 September 2023
(This article belongs to the Special Issue Standards and Ethics in AI)

Abstract

There is an ever-increasing application of digital platforms that utilize artificial intelligence (AI) in our daily lives. In this context, the matters of transparency and accountability remain major concerns that are yet to be effectively addressed. The aim of this paper is to identify the zones of non-transparency in the context of digital platforms and to provide recommendations for addressing them. First, by surveying the literature, reflecting on the concept of platformization, choosing an AI definition that can be adopted by different stakeholders, and utilizing AI ethics, we identify zones of non-transparency in the context of digital platforms. Second, after identifying the zones of non-transparency, we go beyond a mere summary of the existing literature and provide our perspective on how to address the concerns raised. Based on our survey of the literature, we find that three major zones of non-transparency exist in digital platforms: a lack of transparency with regard to who contributes to platforms; a lack of transparency with regard to who is working behind platforms, the contributions of those workers, and the working conditions of digital workers; and a lack of transparency with regard to how algorithms are developed and governed. Considering the abundance of high-level principles in the literature that cannot be easily operationalized, this paper is an attempt to bridge the gap between principles and operationalization.

1. Introduction

In this paper, we address the topic of transparency in digital platforms that utilize artificial intelligence (AI). Due to their ease of use and access, digital platforms have become a ubiquitous part of our daily lives. These platforms act as intermediaries that mediate and shape users’ experience of accessing different types of content on the internet. For instance, digital platforms can be used for social interactions on social media websites, media sharing and streaming, gaining knowledge, and providing and receiving an array of services, from ridesharing to food and grocery delivery. Moreover, with advancements in the field of AI, this technology can optimize and automate processes and has become an inseparable component of these platforms. The importance of platforms as internet mediators has resulted in the development of the “platformization” theoretical framework, which explores the impact of digital platforms on the web along three dimensions: economic, infrastructural, and governance [1]. However, with all their benefits, there are ethical concerns related to these digital platforms that remain to be addressed. One of these concerns, and the focus of this paper, is the matter of transparency.
Since the advent of digital platforms, scholars have studied their positive and negative impacts on society. Regarding the positive impacts, the facilitating role of digital platforms in social and political movements has been studied extensively. For instance, the Occupy movements [2,3] and the Arab Spring [4,5] are prominent examples of digitally networked movements. However, there are also troubling cases that have roots in the transparency and governance of these platforms, such as the 2016 U.S. election [6], Brexit [7], the genocide in Myanmar [8], and the 2019 terrorist attack on the Al Noor Mosque in Christchurch, New Zealand [9]. Therefore, scholars such as Gorwa and Ash [7] and Leone de Castris [10] raise concerns about the transparency and governance of digital platforms and deem it necessary to develop effective transparency policies.
From a technological point of view, these platforms, and the way artificial intelligence is developed and used in them, are highly complex. Such a high level of complexity and the proprietary nature of these platforms can contribute to non-transparency. Furthermore, due to the proprietary nature of AI algorithms and the lack of visibility into their decision-making, algorithmic and dataset biases cannot be easily identified and addressed [11,12]. Similarly, since platforms are mainly owned by for-profit organizations, their strategic decisions can add to non-transparency concerns. We recognize that “…despite technical and organizational efforts to improve explainability, transparency and accountability, massive zones of non-transparency remain, caused by both the sheer complexity of technological systems and by strategic organizational decisions” [13] (p. 110). In addition to the technological and organizational dimensions, the lack of effective government regulation and the self-regulatory approach of organizations add to the complexity of these ethical concerns. Currently, public policies cannot effectively moderate how platforms are managed, and giant technology companies regulate themselves through self-developed ethics codes and guidelines.
To address the transparency issues in these three dimensions (i.e., technological, organizational, and regulatory), the first steps include reflecting on platforms and platformization, identifying a definition of AI that can be agreed upon in the context of platformization, and considering how AI ethics can be utilized to uncover zones of non-transparency. A definition of AI in this context should clearly specify the objectives, scope, and limitations of AI within AI ethics and help bridge the gap between principles and operationalization. As identified by Crawford et al. [14] and Mittelstadt [15], the lack of an agreed-upon and clear definition of AI is hindering attempts to bridge the gap between principles and the operationalization of AI ethics. Similarly, Chouldechova and Roth [11] contend that while the literature on algorithmic fairness is extensive, there is a lack of clear definitions in AI ethics, and there are many open questions that need to be addressed.
While, in general, AI ethics can be characterized as reflection on the ethical and social implications of AI, in this paper we apply AI ethics in line with the view that it deals with “…ways of deviation or distancing oneself from problematic routines of action, with uncovering blind spots in knowledge, and of gaining individual self-responsibility” [13] (p. 115). In what follows, we analyze platforms through this lens. Using this AI ethics approach, we argue that the ubiquitous use and development of platforms are the ‘problematic routines of action’ we need to distance ourselves from. We utilize David B. Nieborg and Thomas Poell’s [1] concept of “the platformization of cultural production” to uncover blind spots in knowledge and, hopefully, fill some zones of non-transparency in platform development and use. We stress a growing need to increase awareness of transparency issues that arise from using the powerful platforms operated by Google, Apple, Facebook, Amazon, and Microsoft (GAFAM). We examine three zones of non-transparency in platforms:
  • Lack of transparency with regard to who contributes to platforms.
  • Lack of transparency with regard to who is working behind platforms, the contributions of those workers, and the working conditions of digital workers.
  • Lack of transparency with regard to how algorithms are developed and governed.
In order to address the topic of transparency in digital platforms that utilize AI, in the next section, we first elaborate on what we mean by digital platforms, AI and AI ethics, and transparency.

2. Reflecting on the Concept

2.1. Digital Platforms

The concept of platforms in the context of business development is not new. Platforms have been used throughout history to efficiently connect suppliers with consumers. For instance, Van Alstyne, Parker, and Choudary [16] cite shopping malls and newspapers as examples, connecting brands with buyers and subscribers with advertisers, respectively. According to Nishikawa and Orsato [17], historical local markets also followed the same principles. However, the digitalization of platforms in recent decades is an important milestone in the history of platforms. With advancements in the field of information and communication technology (ICT), it has become easier and more cost-effective than ever before to expand platforms and increase their reach [16].
Chen, Richter, and Patel [18] characterize digital platforms as “digital systems that facilitate communications, interactions, and innovations to support economic transactions and social activities” [18] (p. 1306). While the focus of this definition is on the interactions between the different parties that use digital platforms, Asadullah, Faik, and Kankanhalli [19] stress the technical aspects of digital platforms. In the technical definition, the focus is on the capability of digital platforms to serve as an infrastructure for providing an array of different services [19].
Given their widespread use in various contexts, digital platforms now have far-reaching cultural implications, and this has resulted in the development of the “platformization” theoretical framework [1]. Nieborg and Poell [1] define platformization “as the penetration of economic, governmental, and infrastructural extensions of digital platforms into the web and app ecosystems, fundamentally affecting the operations of the cultural industries” (p. 4276). The economic, infrastructural, and governance lenses each provide a unique perspective on the matters of transparency and AI ethics on platforms.
Through the economic lens, while it appears that users access platforms for free, there is very little information and transparency about how their personal information and behavior are used for behavioral mining and advertising purposes. Furthermore, users are no longer just consumers of content: users also produce content, and the word ‘prosumption’ has been used to describe this blending of production and consumption that users now take part in [20]. It is unclear how users are compensated for the labor they provide. Therefore, the issue of transparency needs to be investigated in more detail in this dimension.
How platform infrastructures are developed, maintained, and transformed raises transparency issues. Consumers/end-users and complementors, i.e., institutional actors such as advertisers or content producers, depend strongly on the platform. Nieborg and Poell [1] consider a platform’s infrastructure as “a sociotechnical system that is widely shared and increasingly perceived as essential. Infrastructural access to Application Programming Interfaces (APIs) and Software Development Kits (SDKs) is among the primary ways in which platforms control complementors” (p. 4281). These infrastructures, which can be conceived of in the form of APIs and SDKs, define the user experience in terms of the type of content, the methods of content creation, and the circulation of content. Furthermore, given the unidirectional power relation between platform developers and users, users have no option except to adapt to the infrastructure [1,21].
As for the governance lens, platform developers have the power to set policies and standards regarding how their platforms can be used. These standards and policies manifest as end-user license agreements and terms of service, which are not transparent and fall short of providing protection against “targeting, harassment, and marginalization” [21] (p. 3). From another perspective, the governance lens allows us to investigate how effectively the policy makers are performing with regard to platformization. Due to the short history and rapid change of digital platforms, regulatory systems are yet to catch up with the policy-related needs of platforms. For instance, Duguay [21] asserts that “a lot of tech companies make their home between the moment some new way to make money is discovered and the moment some government entity gets around to deciding if it’s actually legal” (p. 8).
A broad spectrum of platforms exists, depending on their user and ecosystem context, and goals. Consequently, platform governance, i.e., standards and policies that refer to platforms, varies from case to case. Gillespie [22] distinguishes between two related aspects of platform governance: governance of platforms and governance by platforms. Governance of platforms refers to policies that specify platform liabilities for the user content and activity, whereas governance by platforms refers to the ways platforms curate the content and police the activities of their users.

2.2. Governance of Platforms

Regulatory systems, such as governments, have not been able to adapt to the needs of cyberspace, and digital platforms are no exception. There are not enough regulations specifically enacted to address the challenges of cyberspace, and the ones that have been enacted have major flaws. An example of such regulation is Section 230 of the Communications Decency Act of 1996 in the United States.
Section 230 of the Communications Decency Act states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” [23]. While this act has its proponents, who claim that it has resulted in the prosperity of the internet, it also has opponents, who criticize it on the grounds that it fails to prevent illegal activity on the internet [24]. Daub [25] (p. 39) further describes the limitation of the act by explaining how platforms differ from publishers: publishers bear “editorial responsibility,” while platforms do not accept such a level of responsibility.
Although Congress’s Allow States and Victims to Fight Online Sex Trafficking Act of 2017 (FOSTA) and the Senate’s Stop Enabling Sex Trafficking Act of 2017 (SESTA) were enacted to tackle some of the challenges associated with Section 230 and hold platforms more responsible for their content, they also have flaws, such as the use of vague and non-transparent terminology, which leaves room for interpretations that can result in narrow or broad liability and under- or over-policing [24]. After all, as Daub [25] mentions, “regulation is supposed to be slow-moving, deliberate, a little bit after-the-fact” (p. 8). In that sense, it is hard to expect regulatory systems to be fast enough to identify and prevent challenges before they happen, or at least to properly address them shortly after they are identified.
The shield from liability that Section 230 provides for digital platforms raises the following question: given that platforms have proven to be good at tracking users and mining their data for behavior analysis and advertising purposes, why do they claim that it is hard, or even unachievable, to track all the content on their platforms and take preventive measures against its misuse for unethical purposes? Aside from the governance of platforms and its associated concerns, another concerning matter is how platforms govern their own ecosystems.

2.3. Governance by Platforms

While the focus of governance of platforms is on regulations created by entities that are independent of platforms, governance by platforms refers to the platforms’ ability to moderate content and regulate interactions between different stakeholders. In their literature review on crowdwork platform governance, Gol, Stein, and Avital [26] give an overview of the two key aspects of governance by platforms: control and coordination. They describe the various aspects of control and coordination and how the two overlap. In the context of governance, control refers to the capability of surveilling all interactions and processes that take place on platforms, while coordination refers to the ability to manage “dependencies among tasks and resources that exist in the process” [26] (p. 9). It should be noted that their findings hold not only for crowdwork platforms but also for other forms of platforms.
Centralized and decentralized modes of (crowdwork) platform governance use different forms of control and coordination mechanisms. Each of these modes of platform governance has advantages and disadvantages that, for the sake of brevity, cannot be discussed here [26]. In general, centralized forms of platform governance are less transparent, whereas decentralized forms are more transparent (see [27]). In centralized governance forms, decision-making power is concentrated, the platform exerts a high degree of control, and the governing processes lack transparency; decentralized governance forms, by contrast, are characterized by the distribution of decision-making power among stakeholders and a higher degree of transparency [26,27].

2.4. AI and Ethics in the Context of Digital Platforms

The lack of an agreed-upon definition of AI that clearly specifies its objectives, scope, and limitations can be regarded as one of the major reasons for the lack of transparency in platforms that use AI. Scholars agree that the vagueness and generality of AI definitions, and the incongruent definitions used by different stakeholders (e.g., developers, policy makers, and users), hinder the development of practical ethics codes and guidelines that can address current accountability concerns [14,28,29,30]. We want to address two possible reasons why there is no clear definition of AI. First, historically, there seems to be no standard definition of intelligence [31]. In this regard, artificial intelligence is no exception. Matters get even more complicated in the case of artificial intelligence, as the term does not apply to a single application with a narrow scope: artificial intelligence is currently used in numerous businesses and advanced technologies, from advertisement placement to predictive policing.
Although there is no standard definition of intelligence, Legg and Hutter [31] were able to identify similarities and strong relations between definitions based on a survey of around 70 definitions of intelligence. The three common features of intelligence they identified are as follows: 1. being able to interact with the environment; 2. succeeding in achieving goals; and 3. being adaptable to different objectives and environments. Considering these common characteristics, Legg and Hutter [31] adopted an informal definition of intelligence: “Intelligence measures an agent’s ability to achieve goals in a wide range of environments” (p. 9).
Second, when it comes to AI and AI ethics, there are various stakeholders, such as developers and users, whose definitions of AI and its ethics can differ or even contradict each other. For instance, based on a compilation of AI definitions in a technical paper, Samoili et al. [32] identified three different perspectives from which AI definitions are developed: policy and institutional organizations, research, and the market. The policy and institutional perspective is used to understand the impact of AI on society and to frame AI as a means of technological advancement. As the name implies, the research perspective is focused on understanding AI as a research field. Finally, the market perspective is used to understand AI through the lens of economic value.
Furthermore, based on the analysis of their collection of AI definitions, Samoili et al. [32] identified four common characteristics of AI: “1. Perception of the environment, including the consideration of the real world complexity, 2. Information processing, 3. Decision making, and 4. Achievement of specific goals” (p. 8). Interestingly, we can observe significant overlap between these characteristics of AI and the characteristics of intelligence identified by Legg and Hutter [31]. Aside from the definition of AI, Crawford et al. [14] point out that, thus far, AI ethics guidelines have primarily been generated by entities in the Global North, which are dominated by white males. The viewpoints of entities in the Global South are less present in this field, although the Global South may be more impacted by the inequities reinforced by the AI industry.
Similarly, the lack of a clear definition of AI in the context of digital platforms hinders the implementation of effective AI ethics that can be quantified and enforced. A clear definition is a requirement in any discipline for developing ethical codes. In this regard, Mittelstadt [15] compares the field of medical ethics with AI ethics on the four grounds of “(1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms”. First, while medicine has the common aim of promoting health and wellness, in AI there are contradictory aims among developers, users, and other stakeholders. While users of AI might be more aligned with improving welfare, developers of AI may be driven by commercialization, and other stakeholders such as governments may want to use AI to increase their control and monitoring capabilities. Second, unlike medicine, AI is a relatively new field and does not have an established history.
While medical ethics has improved over the span of several centuries, AI ethics can only be traced back to the 1950s [33]. This lack of history has prevented the field of AI from benefiting from lessons learned through the passage of time. Third, the same established history in the field of medicine has resulted in effective ways of translating principles into practice; such translations are missing in the field of AI. Finally, while the field of medicine is governed by laws and regulations, very limited regulations apply to the field of AI.
As a starting point, the definition of AI that we use and envision in this paper is aligned with the baseline definition adopted by Samoili et al. [32], which was originally formulated by the High-Level Expert Group on Artificial Intelligence (HLEG):
“Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behavior by analyzing how the environment is affected by their previous actions” [34] (p. 6).
In the context of digital platforms, we understand that AI plays a pivotal role in their day-to-day operation. Initially, AI could only perform limited tasks on digital platforms; however, with continuous advancements in the field, it has become an inseparable and vital part of platforms [35]. Currently, AI is used to perform tasks that are essential for digital platforms, such as advertisement placement, search engine ranking, image recognition, translation, demand forecasting, and algorithmic content moderation [9,35].

2.5. AI Ethics and Transparency

Over the past few decades, digital platforms powered by AI have played important roles in several major political and social events and incidents. While the scale of their impact is still debated, their influence remains unquestioned. With regard to platform transparency, examples include the 2016 U.S. election, Brexit, the genocide in Myanmar [7], and the 2019 terrorist attack on the Al Noor Mosque in Christchurch, New Zealand [9]. Therefore, there has been considerable effort by all stakeholders, from scholars and policy makers to developers, to improve transparency on digital platforms. Scholars and policy makers have introduced new research concepts and bills, while developers have tried to use transparency as an accountability mechanism to regain the trust they have lost among their stakeholders [7,10].
We focus on the concept of transparency because it has proven to be the most frequently appearing principle in ethics codes and guidelines pertaining to AI [36]. In the context of platforms, Suzor, Van Geelen, and Myers West [37] consider transparency to incorporate “the right of users to know when, why, and by whom decisions that affect them are being made” (p. 392). However, while at first glance the concept of transparency appears self-explanatory and straightforward, Gorwa and Ash [7] assert that transparency is a widely used form of accountability precisely because of its “flexibility and ambiguity”. Similarly, Leone de Castris [10] considers recent efforts toward more transparency and accountability on digital platforms “vague and superficial”.

3. Zones of Non-Transparency in Platforms

In the context of platforms, transparency is an important concept. For instance, a study by Deng et al. [38] shows that transparency is among the key values of crowdworkers; the values identified by Deng et al. [38] are access, autonomy, fairness, transparency, communication, security, accountability, making an impact, and dignity. To uncover blind spots in knowledge, and hopefully fill some zones of non-transparency in platform development and use, we apply AI ethics to three zones of non-transparency that we believe are overlooked. The first zone of non-transparency in platforms we focus on is ‘who contributes to platforms?’. The second is ‘who is working behind platforms and what are their work conditions?’. The third is ‘how are algorithms developed and governed?’.

3.1. Non-Transparency on Who Contributes to Platforms

The digital platform ecosystem involves various groups of contributors, including owners, technology developers, and end-users/consumers; “…a digital platform may need its platform owners to manage its core technology and governance systems, third-party developers to build innovative applications on top of its core technology, and end-users to utilize its core and complementary services” [18] (p. 1308). Compared with the well-known contributors to platforms (e.g., developers and users), platform workers are a group of contributors who have remained anonymous for a long time. Platform workers contribute to the success of digital platforms; digital workers can be defined as “those who have ever gained income from providing services via online platforms, where the match between provider and client is made digitally, payment is conducted digitally via the platform, and work is performed either (location-independent) web-based or on-location” [39] (p. 3). Brancati et al. [39] break down platform workers into three levels based on contribution (a simple classification sketch follows the list):
  • Main platform workers work more than 20 h per week or earn 50% or more of their income via digital platform work.
  • Secondary platform workers work more than 10 h per week or earn 25–50% of their income via digital platform work.
  • Marginal platform workers work less than 10 h per week or earn 25% or less of their income via digital platform work.
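To illustrate how these thresholds could be operationalized, the following minimal Python sketch (our own illustration, not part of the COLLEEM survey instrument) classifies a worker by checking the most intensive category first; the handling of boundary values where the levels overlap is an assumption on our part.

```python
from dataclasses import dataclass

@dataclass
class PlatformWorker:
    hours_per_week: float   # hours of platform work per week
    income_share: float     # share of total income earned via platforms, 0.0-1.0

def classify(worker: PlatformWorker) -> str:
    """Map a worker onto the three levels quoted above.

    The criteria overlap as written (e.g., 25 h/week satisfies both the
    'main' and 'secondary' hour thresholds), so we test the most intensive
    category first; this ordering is our assumption.
    """
    if worker.hours_per_week > 20 or worker.income_share >= 0.50:
        return "main"
    if worker.hours_per_week > 10 or worker.income_share >= 0.25:
        return "secondary"
    return "marginal"

if __name__ == "__main__":
    print(classify(PlatformWorker(hours_per_week=25, income_share=0.30)))  # main
    print(classify(PlatformWorker(hours_per_week=12, income_share=0.10)))  # secondary
    print(classify(PlatformWorker(hours_per_week=4, income_share=0.05)))   # marginal
```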
In Europe, the number of platform workers is steadily increasing, along with the number of women working as platform workers. The typical platform worker is a young male, and “platform workers tend to be younger, more educated, and more likely to live in a larger household and have dependent children” [39] (p. 4). However, gathering exact numbers of platform workers is notoriously difficult for several reasons. One reason is the lack of clear definitions of what platform workers are. In a 2018 study, the data on platform workers were skewed because “many respondents poorly understood the definition, such as answering yes if they merely made use of a computer or mobile app in their job” [40] (p. 18). The definitions of platform workers also vary internationally [40] (p. 19). Another reason is that not enough administrative data are curated on behalf of the platforms themselves; due to ambiguities in the regulation of digital work platforms, they may be omitted from some datasets. For example, ride-hailing apps blur the lines between on-street hailing of a cab and pre-booking a chauffeur, and many apps take advantage of loopholes in existing labor market regulation [40] (p. 20).
These regulation problems will be discussed further later in the text. The lack of information on the specific numbers of platform workers who contribute to digital platforms, together with international discrepancies in the definition of platform work, constitutes the first zone of non-transparency that we believe influences the ethics of digital platforms.

3.2. Non-Transparency on Contributions of Workers and Their Work Conditions

The second zone of non-transparency in digital platforms concerns the contributions and working conditions of digital platform workers. The power imbalance between workers and companies plays a role here in how platforms employ AI-based techniques to manage assignments, assess productivity, and assign wages in order to maximize profit [14]. For instance, the productivity of workers in Amazon warehouses is assessed through an automatic process, and even the system for handling disciplinary actions is automated and insensitive to circumstances that may have affected a worker’s performance [14]. In this automated performance-tracking process, warehouse workers are evaluated by an algorithm, and if they fall behind a productivity rate that is also determined automatically, they face automatic disciplinary actions that can range from warnings to job termination [14,41]. The decision-making metrics in these AI-enabled platforms are not available to the public, and this is one of the areas that requires more transparency.
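To make the shape of such a pipeline concrete, here is a deliberately simplified, hypothetical sketch of an automated performance-tracking and disciplinary process; it is our own illustration of the pattern described above, not Amazon’s actual system, whose metrics and thresholds are not public.

```python
import statistics
from typing import List

def automatic_threshold(team_rates: List[float]) -> float:
    """One plausible way a system might 'determine the productivity rate
    automatically': take the first quartile of the team's own rates, so the
    cutoff moves with overall team performance. (Assumption for illustration.)"""
    return statistics.quantiles(team_rates, n=4)[0]

def automatic_action(worker_rate: float, threshold: float, prior_warnings: int) -> str:
    """Escalate purely on numbers, with no field for mitigating circumstances --
    exactly the insensitivity criticized in the text."""
    if worker_rate >= threshold:
        return "none"
    return "termination" if prior_warnings >= 2 else "warning"

if __name__ == "__main__":
    team = [105.0, 98.0, 120.0, 87.0, 110.0, 95.0, 101.0]  # e.g., units handled per hour
    cutoff = automatic_threshold(team)
    print("automatic cutoff:", cutoff)
    print(automatic_action(worker_rate=85.0, threshold=cutoff, prior_warnings=0))  # warning
    print(automatic_action(worker_rate=85.0, threshold=cutoff, prior_warnings=2))  # termination
```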
An important example of such a power imbalance is the case of commercial content moderators. Commercial content moderators (CCMs) are the individuals in charge of monitoring user-generated content on platforms. Content moderators play an important role in digital platforms, as deciding whether to accept or reject content is still beyond the current capabilities of AI. Roberts [42] identifies four types of work situations for moderators: in-house, boutiques, call centers, and microlabor websites.
In-house CCMs are mostly hired through third-party contracting companies. These CCMs were not given health insurance, did not receive the benefits that full-time employees would receive, and were paid low hourly wages instead of salaries, with no guarantee of future employment [42]. In the case of call-center CCMs, Roberts [42] mentions that CCM workers in Manila, Philippines, earned roughly $400 per month. In the case of microlaborers, the wage is usually “pay-per-task” and starts as low as $0.01 per task [43]. In 2020, the average rate of pay per hour ranged “between approximately EUR 7 for microtasking and EUR 23 for software development” [39].
Regardless of the type of work situation, the pattern that emerges resembles the external, low-skill labor market defined by Huws [44]. In this type of liberal labor market, workers tend to be temporary or part-time with low job security. The situation is even worse for microlabor workers, whose labor marketplace is even less regulated than those of the other three categories [45].

3.3. Non-Transparency on How Algorithms Are Developed

Finally, the third zone of non-transparency that we highlight is the lack of governance in digital platforms, which ultimately leads to a lack of transparency in the use and development of platform algorithms. Artificial intelligence and machine learning algorithms have become inseparable parts of our daily lives. They have penetrated numerous tangible and intangible aspects of decision-making processes, from advertisement placement and music and video recommendations to loan application processing and even predictive policing and bail decisions [11,12,46]. Despite their ubiquitous applications, artificial intelligence and machine learning algorithms are currently developed as black boxes, and their models can be proprietary and confidential. In this regard, an example worth noting is the large language model (LLM).
The ethical use of LLMs needs to be subject to active critical assessment by their users, and this critical assessment will vary by industry. A framework and table for critically assessing LLMs in health care has been proposed [47] (p. 185). In Ilicki’s framework [47], the first step in determining the ethical use of an LLM is to identify the main source of data that the LLM will be, or is, using. The second step is to determine the intended recipient(s) of the LLM. Once these have been identified, it is imperative to address the limitations of the LLM through questions such as: What are the limitations of the dataset for the intended recipient(s)? Was the dataset curated in an ethical way? Will the output accurately and efficiently address the needs of the recipient(s)? Are there any ethical concerns with the output that the recipient(s) will receive? Ilicki [47] notes that this type of critical assessment is not meant to replace exhaustive and comprehensive assessment before deployment and implementation in the intended industry (p. 186). However, for our purposes, it is useful for addressing and understanding the limitations of LLMs and, we might add, offers a chance to consider the ethical nature of the LLM.
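As a simple illustration of how such a checklist could be made explicit and auditable, the sketch below records the assessment steps as structured data; the field names and structure are our own paraphrase of the questions above, not a schema from Ilicki [47].

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LLMCriticalAssessment:
    """Our own illustration of the assessment steps discussed above (after Ilicki [47])."""
    main_data_source: str                       # step 1: where does the training data come from?
    intended_recipients: str                    # step 2: who receives the output?
    dataset_limitations: List[str] = field(default_factory=list)
    dataset_curated_ethically: Optional[bool] = None     # None = not yet assessed
    output_meets_recipient_needs: Optional[bool] = None  # None = not yet assessed
    output_ethical_concerns: List[str] = field(default_factory=list)

    def open_questions(self) -> List[str]:
        """Return the assessment questions that remain unanswered."""
        pending = []
        if not self.dataset_limitations:
            pending.append("What are the limitations of the dataset for the intended recipients?")
        if self.dataset_curated_ethically is None:
            pending.append("Was the dataset curated in an ethical way?")
        if self.output_meets_recipient_needs is None:
            pending.append("Will the output accurately and efficiently address the recipients' needs?")
        return pending

if __name__ == "__main__":
    review = LLMCriticalAssessment(
        main_data_source="public web text (snapshot date unknown)",   # hypothetical example values
        intended_recipients="customer-support chatbot users",
    )
    for question in review.open_questions():
        print("-", question)
```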
Moreover, the data used to train these algorithms often contain intrinsic bias, a bias that will affect their future results. For instance, a systematic review by Khalil, Ahmed, Khattak, and Al-Qirim [12] has documented several instances in which automatic facial analysis systems are biased “against a specific group or category”. While such biases have become widely known, the use of machine learning algorithms in life-altering decision-making processes such as criminal risk assessment is concerning. For instance, nine U.S. states (Arizona, Colorado, Delaware, Kentucky, Louisiana, Oklahoma, Virginia, Washington, and Wisconsin) have adopted machine learning algorithms that provide risk scores to judges to assist with sentencing decisions [48,49]. Now that AI has become a vital component of digital platforms, it is more important than ever to focus efforts on increasing transparency with regard to how algorithms are developed.
Furthermore, a study by Nguyen, Yosinski, and Clune [50] has shown that machine learning methods such as deep neural networks (DNNs) can be easily deceived and that their recognition and perception differ substantially from those of human beings. In their study, Nguyen, Yosinski, and Clune [50] demonstrated that “discriminative DNN models are easily fooled in that they classify many unrecognizable images with near-certainty as members of a recognizable class” (p. 434). This raises a major concern, considering the ever-increasing application of artificial intelligence in decision-making processes. In the context of platforms, tasks related to the operation and maintenance of these platforms are increasingly performed with AI. With that in mind, it is crucial to address the transparency concerns related to the development of algorithms. At the current levels of transparency, artificial intelligence and machine learning algorithms are prone to external and internal exploitation, and it is hard to trace the sources and causes of bias in them.
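The phenomenon can be reproduced in miniature even without deep networks. The sketch below is our own illustration, not the setup of Nguyen, Yosinski, and Clune [50] (who used deep networks and evolutionary algorithms): simple random hill climbing pushes a noise image toward near-certain classification by a linear classifier trained on handwritten digits.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

# Train an ordinary classifier on handwritten digits.
digits = load_digits()
clf = LogisticRegression(max_iter=2000).fit(digits.data, digits.target)

rng = np.random.default_rng(0)
image = rng.uniform(0, 16, size=64)  # random noise, unrecognizable as any digit
target_class = 3

# Random hill climbing: keep any pixel perturbation that raises the model's
# confidence that this noise image is a '3'.
for _ in range(3000):
    candidate = np.clip(image + rng.normal(0, 0.5, size=64), 0, 16)
    if (clf.predict_proba([candidate])[0, target_class]
            > clf.predict_proba([image])[0, target_class]):
        image = candidate

print("Confidence that the noise image is a '3':",
      round(clf.predict_proba([image])[0, target_class], 3))
```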

4. Conclusions

The aim of this paper is two-fold. First, based on a survey of the literature and by reflecting on the concept of platformization, choosing an AI definition that can be adopted by different stakeholders, and utilizing AI ethics, we identified the zones of non-transparency in the context of digital platforms. The second aim was to provide new perspectives on the concerns raised under the first aim. While the previous sections focused on the former, these concluding remarks focus on the latter.

4.1. Identify Contributors

Platforms should clearly and transparently identify who, or what, contributes to the success of the platform and the extent of each contribution. Since there is currently no internationally agreed definition of what platform workers are, defining what a platform worker is will help create awareness of platform workers and their needs. Such a definition should take into consideration the variety of platform jobs in order to be well-rounded. A clear definition may also help platform workers better self-identify and lessen the culture of anonymity in platform work. There should also be more regulations in place for platform workers, both internally within organizations and via government intervention. The lack of regulation has led to minimal reporting and administrative data curation for platform workers; with better reporting and administrative data, worker conditions and compensation will become more transparent.

4.2. Identify Governance/Self-Governance

Given the monopoly that giant technology companies currently possess, the lack of effective policies that can oversee how platforms are managed can be regarded as a major concern. As Golumbia [51] states, this situation gets even worse as giant technology companies try to regulate themselves in a centralized manner, without the pressure of public policies, which results in even more opaque information handling. In centralized governance structures, power most often lies with the platform owners, whereas other stakeholders have little power. Chen et al. [18] argue that platforms with semi-decentralized governance structures not only provide developers and other stakeholders with more rights and control over platform governance but are also more likely to be associated with better platform performance. Accordingly, semi-decentralization may help attract developers, foster collective action, and facilitate rapid technical development. Overall, Chen et al. [18] characterize shared governance with semi-decentralization as preferable to both fully centralized and fully decentralized governance structures.
There are a number of ways to increase transparency in a broader sense, several of which involve policy making. For example, the Global Platform Governance Network’s Transparency Working Group discusses issues of transparency and accountability of digital platforms and formulates transparency recommendations for policy makers (MacCarthy [52]). Other ways include sharing information, attempting to provide an overview of the various aspects of the ‘platform ecosystem’, and developing ethics codes and ethical guidelines. For instance, in the context of crowdworking, where regulation is lacking, one way of increasing transparency is to provide information: the webpage http://faircrowd.work/ (accessed on 25 September 2023) collects information on crowdworking platforms and provides reviews.
Ethics codes and codes of conduct that address platform-related aspects are another way of shaping transparency with regard to who contributes to platforms; who is working behind platforms, and the contributions and working conditions of these workers; and how algorithms are developed and governed. A disadvantage of ethics codes and guidelines is that they tend to cover only broad topics, are often relatively general, and are difficult to enforce. Similarly, identifying metrics to evaluate or gauge ethics in AI is extremely difficult. There are many contradictory voices in the AI ethics field: there is criticism of principle-based, human-rights-based, and even design-oriented ethical development of AI, and these ethical approaches all still fall short in real-world operational contexts [53] (p. 1429). Joris Krijger [53] notes that “Ethical decisions regarding model development and deployment are ultimately made within contexts of organizations that have to align the ethical principles with vested interests such as the organizational culture, mission and goals” (p. 1429). Thus, while there is currently no agreed-upon metric for evaluating ethics in AI, there is a burgeoning recognition that AI ethics and the metrics used to evaluate its efficacy vary by industry.
Policies can be defined by government agencies, independent agencies within an industry, or platform owners. In this paper, we explained how some government policies, such as Section 230, need to be improved and how self-regulation gives platforms a high degree of control. For independent self-regulatory agencies, there are cases from other industries that can serve as examples. An example of an effective self-regulatory organization is the Entertainment Software Rating Board (ESRB), a self-regulatory rating organization in the gaming industry formed in 1994 in response to U.S. Senate hearings concerning the link between video games and violent behavior [54]. In their study, Laczniak, Carlson, Walker, and Brocato [55] investigated the efficacy of the ESRB and found that its guidelines are effective in helping parents regulate their children’s exposure to violent games. Such self-regulatory entities from other industries can serve as models for improving the governance of digital platforms.

4.3. Identify Datasets and Explanation of Decisions

As we suggested earlier, the datasets used to train machine learning algorithms may have intrinsic biases. Furthermore, these algorithms currently resemble black boxes that do not provide an explanation of their outputs and results. As AI has become an inseparable part of digital platforms, it is important to address the transparency and fairness issues in this field. While there are some open-source datasets available in the field of AI, misrepresentation in these datasets is a “well-known problem” [12]. The matter of data availability and data analysis transparency is not limited to the field of AI ethics; other fields with longer histories have tackled the challenges associated with such matters. In scientific publishing, scholars are encouraged more than ever to be transparent about their datasets, methods, funding organizations, and conflicts of interest.
Concerning datasets, several journals encourage or even require scholars to make their analyzed data available in a publicly accessible repository [56,57,58]. The reason behind these efforts is to allow other scholars to reproduce the results for validation purposes or to reuse the data in new research studies. For instance, in the field of psychology, Hardwicke et al. [58] attempted to reproduce 1324 target values from 35 articles with reusable data and were unable to reproduce 64 of these values “within a 10% margin of error”. Such studies show the importance of data availability policies, and we recommend that similar policies be considered for the field of AI ethics. If the datasets used to train algorithms become openly available, are evaluated by supervisory entities for biases, and are shared among developers, their biases will be exposed, and these datasets can be improved over time.
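As a concrete, if minimal, sketch of what such supervisory evaluation could look like at its most basic, the code below (our own illustration; the column names are hypothetical) summarizes how each demographic group is represented in an openly shared training dataset and how labels are distributed across groups.

```python
import pandas as pd

def audit_dataset(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
    """Basic representation/label audit of a labeled training dataset."""
    summary = df.groupby(group_col)[label_col].agg(
        n="size",              # how many examples per group
        positive_rate="mean",  # share of positive labels per group (labels assumed 0/1)
    )
    summary["share_of_dataset"] = summary["n"] / summary["n"].sum()
    return summary

if __name__ == "__main__":
    # Hypothetical toy data for illustration only.
    data = pd.DataFrame({
        "group": ["A", "A", "A", "B", "B", "C"],
        "label": [1, 1, 0, 0, 0, 1],
    })
    print(audit_dataset(data, group_col="group", label_col="label"))
```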
Aside from the bias of the training dataset, the bias of the algorithm itself is a concern that needs to be addressed. It is currently infeasible to determine why an algorithm reaches a particular decision, and research suggests that these algorithms can be easily deceived [50]. With advancements in the field of AI and growing recognition of the need for more transparency and fairness, new concepts such as Explainable AI (XAI) have been suggested [46,59]. The aim of XAI is to provide insight into the decision-making process of algorithms. While this concept is yet to be standardized, the XAI taxonomy by Gilpin et al. [46] focuses on “whether the method tries to explain the processing of the data by a network, explain the representation of data inside a network or to be a self-explaining architecture to gain additional meta predictions and insights about the method” (p. 86). It is our recommendation that, when applicable, the insights that XAI provides should become publicly available and not be limited to developers.
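As one small, hedged example of the kind of insight such explanation methods can provide, the sketch below uses permutation importance from scikit-learn, a simple model-agnostic technique, to surface which input features a trained model's decisions rely on most; this is our own illustration, not the taxonomy or methods of Gilpin et al. [46].

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# large drops indicate features the model's decisions rely on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranking = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)
for name, importance in ranking[:5]:
    print(f"{name:30s} {importance:.3f}")
```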

Author Contributions

Conceptualization, L.M., M.S. and E.H.; methodology, L.M., M.S. and E.H.; investigation, L.M., M.S. and E.H.; writing—original draft preparation, L.M., M.S. and E.H.; writing—review and editing, L.M., M.S. and E.H.; supervision, E.H.; project administration, E.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nieborg, D.B.; Poell, T. The platformization of cultural production: Theorizing the contingent cultural commodity. New Media Soc. 2018, 20, 4275–4292. [Google Scholar] [CrossRef]
  2. Juris, J.S. Reflections on #Occupy Everywhere: Social media, public space, and emerging logics of aggregation. Am. Ethnol. 2012, 39, 259–279. [Google Scholar]
  3. Tremayne, M. Anatomy of protest in the digital era: A network analysis of Twitter and Occupy Wall Street. Soc. Mov. Stud. 2014, 13, 110–126. [Google Scholar] [CrossRef]
  4. Rane, H.; Salem, S. Social media, social movements and the diffusion of ideas in the Arab uprisings. J. Int. Commun. 2012, 18, 97–111. [Google Scholar] [CrossRef]
  5. Ghannam, J. Social Media in the Arab World: Leading Up to the Uprisings of 2011; Center for International Media Assistance: Washington, DC, USA, 2011; Volume 3, pp. 1–44. [Google Scholar]
  6. Allcott, H.; Gentzkow, M. Social media and fake news in the 2016 election. J. Econ. Perspect. 2017, 31, 211–236. [Google Scholar] [CrossRef]
  7. Gorwa, R.; Ash, T.G. Democratic transparency in the platform society. In Social Media and Democracy: The State of the Field, Prospects for Reform; Tucker, J.A., Persily, N., Eds.; Cambridge University Press: Cambridge, UK, 2020; pp. 286–312. [Google Scholar]
  8. Stevenson, A. Facebook Admits It Was Used to Incite Violence in Myanmar. 2018. Available online: https://www.nytimes.com/2018/11/06/technology/myanmar-facebook.html (accessed on 25 September 2023).
  9. Gorwa, R.; Binns, R.; Katzenbach, C. Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data Soc. 2020, 7, 2053951719897945. [Google Scholar] [CrossRef]
  10. Leone de Castris, A. Types of Platform Transparency: An Analysis of Digital Platforms and Policymakers Discourse on Big Tech Governance and Transparency; University of Chicago: Chicago, IL, USA, 2022. [Google Scholar]
  11. Chouldechova, A.; Roth, A. A snapshot of the frontiers of fairness in machine learning. Commun. ACM 2020, 63, 82–89. [Google Scholar] [CrossRef]
  12. Khalil, A.; Ahmed, S.G.; Khattak, A.M.; Al-Qirim, N. Investigating Bias in Facial Analysis Systems: A Systematic Review. IEEE Access 2020, 8, 130751–130761. [Google Scholar] [CrossRef]
  13. Hagendorff, T. The Ethics of AI Ethics: An Evaluation of Guidelines. Minds Mach. 2020, 30, 99–120. [Google Scholar] [CrossRef]
  14. Crawford, K.; Dobbe, R.; Dryer, T.; Fried, G.; Green, B.; Kaziunas, E.; Kak, A.; Mathur, V.; McElroy, E.; Sánchez, A.N. AI Now 2019 Report; AI Now Institute: New York, NY, USA, 2019. [Google Scholar]
  15. Mittelstadt, B. Principles alone cannot guarantee ethical AI. Nat. Mach. Intell. 2019, 1, 501–507. [Google Scholar] [CrossRef]
  16. Van Alstyne, M.W.; Parker, G.G.; Choudary, S.P. Pipelines, platforms, and the new rules of strategy. Harv. Bus. Rev. 2016, 94, 54–62. [Google Scholar]
  17. Nishikawa, B.T.; Orsato, R.J. Professional services in the age of platforms: Towards an analytical framework. Technol. Forecast. Soc. Chang. 2021, 173, 121131. [Google Scholar] [CrossRef]
  18. Chen, Y.; Richter, J.I.; Patel, P.C. Decentralized Governance of Digital Platforms. J. Manag. 2020, 47, 1305–1337. [Google Scholar] [CrossRef]
  19. Asadullah, A.; Faik, I.; Kankanhalli, A. Digital Platforms: A Review and Future Directions. In Proceedings of the Pacific Asia Conference on Information Systems (PACIS), Yokohama, Japan, 26–30 June 2018; pp. 1–14. [Google Scholar]
  20. Tapscott, D.; Williams, A.D. Wikinomics: How Mass Collaboration Changes Everything; Penguin: London, UK, 2006. [Google Scholar]
  21. Duguay, S. “Running the Numbers”: Modes of Microcelebrity Labor in Queer Women’s Self-Representation on Instagram and Vine. Soc. Media+ Soc. 2019, 5, 1–11. [Google Scholar] [CrossRef]
  22. Gillespie, T. Governance of and by platforms. In The SAGE Handbook of Social Media; SAGE Publications Ltd.: Thousand Oaks, CA, USA, 2017; pp. 254–278. [Google Scholar]
  23. U.S.C. § 230. Communications Decency Act. 1996. Available online: https://libguides.uakron.edu/c.php?g=627783&p=5861337 (accessed on 25 September 2023).
  24. McKnelly, M. Untangling SESTA/FOSTA: How the Internet’s ‘Knowledge’ Threatens Anti-Sex Trafficking Law. Berkeley Technol. Law J. 2019, 34, 1239. [Google Scholar] [CrossRef]
  25. Daub, A. What Tech Calls Thinking: An Inquiry into the Intellectual Bedrock of Silicon Valley; FSG Originals: New York, NY, USA, 2020. [Google Scholar]
  26. Gol, E.S.; Stein, M.-K.; Avital, M. Crowdwork platform governance toward organizational value creation. J. Strateg. Inf. Syst. 2019, 28, 175–195. [Google Scholar]
  27. Hein, A.; Schreieck, M.; Wiesche, M.; Krcmar, H. Multiple-case analysis on governance mechanisms of multi-sided platforms. In Proceedings of Multikonferenz Wirtschaftsinformatik; Universitätsverlag Ilmenau: Ilmenau, Germany, 2016. [Google Scholar]
  28. Greene, D.; Hoffmann, A.L.; Stark, L. Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning. In Proceedings of the 52nd Hawaii International Conference on System Sciences, Grand Wailea, HI, USA, 8–11 January 2019. [Google Scholar]
  29. Larsson, S.; Heintz, F. Transparency in artificial intelligence. Internet Policy Rev. 2020, 9. [Google Scholar] [CrossRef]
  30. Whittlestone, J.; Nyrup, R.; Alexandrova, A.; Cave, S. The role and limits of principles in AI ethics: Towards a focus on tensions. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, Honolulu, HI, USA, 27–28 January 2019. [Google Scholar]
  31. Legg, S.; Hutter, M. A collection of definitions of intelligence. Front. Artif. Intell. Appl. 2007, 157, 17. [Google Scholar]
  32. Samoili, S.; Cobo, M.L.; Gomez, E.; De Prato, G.; Martinez-Plumed, F.; Delipetrev, B. AI Watch. Defining Artificial Intelligence. Towards an Operational Definition and Taxonomy of Artificial Intelligence; Publications Office of the European Union: Luxembourg, 2020. [Google Scholar]
  33. Borenstein, J.; Grodzinsky, F.S.; Howard, A.; Miller, K.W.; Wolf, M.J. AI Ethics: A Long History and a Recent Burst of Attention. Computer 2021, 54, 96–102. [Google Scholar] [CrossRef]
  34. HLEG. A Definition of AI: Main Capabilities and Disciplines; European Commission: Brussels, Belgium, 2019. [Google Scholar]
  35. Mucha, T.; Seppala, T. Artificial Intelligence Platforms—A New Research Agenda for Digital Platform Economy; Elsevier Inc.: Amsterdam, The Netherlands, 2020. [Google Scholar]
  36. Jobin, A.; Ienca, M.; Vayena, E. The global landscape of AI ethics guidelines. Nat. Mach. Intell. 2019, 1, 389–399. [Google Scholar] [CrossRef]
  37. Suzor, N.; Van Geelen, T.; Myers West, S. Evaluating the legitimacy of platform governance: A review of research and a shared research agenda. Int. Commun. Gaz. 2018, 80, 385–400. [Google Scholar] [CrossRef]
  38. Deng, X.; Joshi, K.D.; Galliers, R.D. The Duality of empowerment and marginalization in microtask crowdsourcing. MIS Q. 2016, 40, 279–302. [Google Scholar] [CrossRef]
  39. Urzì Brancati, M.C.; Pesole, A.; Fernandez-Macias, E. New Evidence on Platform Workers in Europe: Results from the Second COLLEEM Survey; Joint Research Centre (Seville site): Sevilla, Spain, 2020. [Google Scholar]
  40. O’Farrell, R.; Montagnier, P. Measuring digital platform-mediated workers. New Technol. Work Employ. 2020, 35, 130–144. [Google Scholar] [CrossRef]
  41. Lecher, C. How Amazon Automatically Tracks and Fires Warehouse Workers for ‘Productivity’. 2019. Available online: https://www.theverge.com/2019/4/25/18516004/amazon-warehouse-fulfillment-centers-productivity-firing-terminations (accessed on 25 September 2023).
  42. Roberts, S.T. Behind the Screen: Content Moderation in the Shadows of Social Media; Yale University Press: New Haven, CT, USA, 2019. [Google Scholar]
  43. Ross, J.; Irani, L.; Silberman, M.S.; Zaldivar, A.; Tomlinson, B. Who are the crowdworkers? Shifting demographics in Mechanical Turk. In CHI’10 Extended Abstracts on Human Factors in Computing Systems; ACM: New York, NY, USA, 2010; pp. 2863–2872. [Google Scholar]
  44. Huws, U. Labor in the Global Digital Economy: The Cybertariat Comes of Age; NYU Press: New York, NY, USA, 2014. [Google Scholar]
  45. Marvit, M.Z. How Crowdworkers Became the Ghosts in the Digital Machine. 2014. Available online: https://www.thenation.com/article/archive/how-crowdworkers-became-ghosts-digital-machine/ (accessed on 25 September 2023).
  46. Gilpin, L.H.; Bau, D.; Yuan, B.Z.; Bajwa, A.; Specter, M.; Kagal, L. Explaining explanations: An overview of interpretability of machine learning. In Proceedings of the 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), Turin, Italy, 1–3 October 2018. [Google Scholar]
  47. Ilicki, J. A Framework for Critically Assessing ChatGPT and Other Large Language Artificial Intelligence Model Applications in Health Care. Mayo Clin. Proc. Digit. Health 2023, 1, 185–188. [Google Scholar] [CrossRef]
  48. Angwin, J.; Larson, J. Bias in Criminal Risk Scores Is Mathematically Inevitable, Researchers Say. 2016. Available online: https://www.propublica.org/article/bias-in-criminal-risk-scores-is-mathematically-inevitable-researchers-say#:~:text=Series%3A%20Machine%20Bias-,Bias%20in%20Criminal%20Risk%20Scores%20Is%20Mathematically%20Inevitable%2C%20Researchers%20Say,on%20the%20fairness%20of%20outcomes (accessed on 24 September 2022).
  49. Yapo, A.; Weiss, J. Ethical Implications of Bias in Machine Learning. In Proceedings of the 51st Hawaii International Conference on System Sciences, Waikoloa Village, HI, USA, 3–6 January 2018. [Google Scholar]
  50. Nguyen, A.; Yosinski, J.; Clune, J. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015. [Google Scholar]
  51. Golumbia, D. Do You Oppose Bad Technology, or Democracy? 2019. Available online: https://medium.com/@davidgolumbia/do-you-oppose-bad-technology-or-democracy-c8bab5e53b32 (accessed on 27 September 2022).
  52. MacCarthy, M. Transparency Recommendations for Regulatory Regimes of Digital Platforms; Centre for International Governance Innovation: Waterloo, ON, Canada, 2022. [Google Scholar]
  53. Krijger, J. Enter the metrics: Critical theory and organizational operationalization of AI ethics. AI Soc. 2022, 37, 1427–1437. [Google Scholar] [CrossRef]
  54. Kocurek, C.A. Night Trap: Moral Panic. In How to Play Video Games; New York University Press: New York, NY, USA, 2019; pp. 309–316. [Google Scholar]
  55. Laczniak, R.N.; Carlson, L.; Walker, D.; Brocato, E.D. Parental restrictive mediation and children’s violent video game play: The effectiveness of the Entertainment Software Rating Board (ESRB) rating system. J. Public Policy Mark. 2017, 36, 70–78. [Google Scholar] [CrossRef]
  56. Federer, L.M.; Belter, C.W.; Joubert, D.J.; Livinski, A.; Lu, Y.-L.; Snyders, L.N.; Thompson, H. Data sharing in PLOS ONE: An analysis of Data Availability Statements. PLoS ONE 2018, 13, e0194768. [Google Scholar] [CrossRef]
  57. Gherghina, S.; Katsanidou, A. Data Availability in Political Science Journals. Eur. Political Sci. 2013, 12, 333–349. [Google Scholar] [CrossRef]
  58. Hardwicke, T.E.; Mathur, M.B.; MacDonald, K.; Nilsonne, G.; Banks, G.C.; Kidwell, M.C.; Hofelich Mohr, A.; Clayton, E.; Yoon, E.J.; Henry Tessler, M. Data availability, reusability, and analytic reproducibility: Evaluating the impact of a mandatory open data policy at the journal Cognition. R. Soc. Open Sci. 2018, 5, 180448. [Google Scholar] [CrossRef]
  59. Gunning, D. Broad Agency Announcement Explainable Artificial Intelligence (XAI); Technical Report; Defense Advanced Research Projects Agency (DARPA): Arlington, VA, USA, 2016. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
