Article

Human–Computer Interaction and Participation in Software Crowdsourcing

1 Department of Accounting and Information Systems, College of Business and Economics, Qatar University, Doha P.O. Box 2713, Qatar
2 Department of Computer Science, Al Ain University, Abu Dhabi P.O. Box 112612, United Arab Emirates
3 Department of Computer Science, University of Swabi, Swabi 23430, Pakistan
4 BK21 Chungbuk Information Technology Education and Research Center, Chungbuk National University, Cheongju 28644, Republic of Korea
5 Department of Electrical Engineering, College of Engineering, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
* Authors to whom correspondence should be addressed.
Electronics 2023, 12(4), 934; https://doi.org/10.3390/electronics12040934
Submission received: 8 January 2023 / Revised: 1 February 2023 / Accepted: 6 February 2023 / Published: 13 February 2023
(This article belongs to the Special Issue New Challenges of Networking Technologies and IoT)

Abstract

Improvements in communication and networking technologies have transformed people’s lives and organizations’ activities. Web 2.0 innovation has provided a variety of hybridized applications and tools that have changed enterprises’ functional and communication processes. People use numerous platforms to broaden their social contacts, select items, execute duties, and learn new things. Context: Crowdsourcing is an internet-enabled problem-solving strategy that utilizes human–computer interaction to leverage the expertise of people to achieve business goals. In crowdsourcing approaches, three main entities work in collaboration to solve various problems. These entities are requestors (job providers), platforms, and online users. Tasks are announced by requestors on crowdsourcing platforms, and online users, after passing initial screening, are allowed to work on these tasks. Crowds participate to earn various rewards. Motivation: Crowdsourcing is gaining importance as an alternative outsourcing approach in the software engineering industry. Crowdsourcing application development involves complicated tasks that vary considerably from the micro-tasks available on platforms such as Amazon Mechanical Turk. To realize the tangible opportunities of crowdsourcing in the realm of software development, corporations should first grasp how this technique works, what problems occur, and what factors might influence community involvement and co-creation. Online communities have become more popular recently with the rise in crowdsourcing platforms. These communities concentrate on specific problems and help people solve and manage them. Objectives: We set three main goals to research crowd interaction: (1) find the appropriate characteristics of social crowds utilized for effective software crowdsourcing, (2) highlight the motivation of a crowd for virtual tasks, and (3) evaluate primary participation reasons by assessing various crowds using the Fuzzy AHP and TOPSIS methods. Conclusion: We developed a decision support system to examine the appropriate reasons for crowd participation in crowdsourcing. Rewards and employment were evaluated as the two primary motives of crowds for accomplishing tasks on crowdsourcing platforms, followed by knowledge sharing (third), ranking (fourth), competency (fifth), socialization (sixth), and source of inspiration (seventh).

1. Introduction

Individuals contribute their time, expertise, and wealth to help the needy and transform the Earth into a better living place [1]. Technologies such as social networking and Web 2.0 are making health and medical care more accessible to businesses, professionals, patients, and laypeople. The innovative tools and applications made publicly available by Web 2.0 innovation have changed how organizations operate and communicate. People utilize a variety of platforms to expand their social networks, purchase items, complete activities, and learn new things. Information retrieval, blogging, tagging, path-finding, text messaging, collaborative online services, and multi-player gaming are some of the activities carried out through Web 2.0 applications [2]. Individuals may now interact and collaborate more easily because of advances in technology, and this engagement of people is referred to as “crowdsourcing” [3]. Crowdsourcing is a task-solving methodology in which human participation is required to solve difficult tasks [4]. Jeff Howe first used the term “crowdsourcing” in Wired magazine. Crowdsourcing, according to him, is “the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call” [5].
Crowdsourcing is a popular strategy for accomplishing a variety of tasks. Several entities and their interactions are involved in the process: the requesters (crowdsourcers) [6], who manage, execute, and supervise crowdsourcing initiatives and may post task requests; the crowd [7], consisting of virtual workers who participate in outsourced activities or events; and the platform [8], which serves as a channel for interaction between the crowd and the crowdsourcers. The people on these platforms are connected by means of social media such as Facebook, WhatsApp, Instagram, Twitter, Y-mail, and Gmail accounts. Social networking improvements have encouraged corporations to collect information from individuals all over the world in order to identify the best solutions to unique challenges [9,10]. Crowdsourcing allows enterprises to hire globalized, low-cost, and talented workers through internet platforms [11,12]. Crowdsourcing is employed for a wide range of tasks, including spelling correction, content creation, coding, pattern recognition, software development, and debugging [13]. A task is advertised on a platform, and the crowd participates in various types of activities [14]. Considering the user interface and users’ intrinsic motivation, crowdsourcing platforms utilize appealing concepts that promote human–computer interaction in the context of open innovation [15]. In some situations, computers may be used to manage crowdsourced tasks, resulting in human-based computation systems. This type of human-based computing is incorporated in many online systems (crowdsourcing platforms) [16]. Crowdsourcing platforms are websites that work as the interface between job requestors and online workers; both entities are registered on these platforms. The platform uses various selection methods, such as skill testing, profiling, previous participation history, and matching, to select an appropriate participant to accomplish requestor tasks [17].
In crowdsourcing, participants from diverse backgrounds have skills, knowledge, abilities, and some expertise in task domains, and they collaborate to tackle various challenges [3,6]. “Crowdsourced Software Engineering is the act of undertaking any external software engineering tasks by an undefined, potentially large group of online workers in an open call format” [18]. Crowdsourcing allows a requestor to tap into a global community of users with various types of expertise and backgrounds to facilitate the completion of a task that would be difficult to complete without a large group of individuals [19]. Crowdsourcing has also been utilized in software engineering to resolve coding, validation, and architectural problems. However, most crowdsourcing approaches for software engineering remain theoretical concepts and are often implemented and assessed on comparatively small crowds of at most 20 participants. Considering the nature of the crowdsourcing sector and the cognitive features of participants, there is a great demand for an integrated resource-sharing crowdsourcing environment for real-world solutions [20].
Modern society depends on sophisticated hardware and software systems, many of which are safety-critical, such as health monitoring software, which is utilized by medical organizations to detect, monitor, and aid elders and patients. Innovations in mobile computing might change how health interventions are delivered. In order to overcome limitations brought on by a scarcity of clinician time, poor patient engagement, and the challenge of ensuring appropriate treatments at the correct time, mobile health (mHealth) treatment may be more effective than occasional in-clinic consultations. The delivery of healthcare is being revolutionized by technology that collects, analyzes, and configures patient data across devices. Intelligent and interconnected healthcare will deliver benefits that are safe, more user-centered, economical, effective, and impartial due to advancements in ubiquitous computing.
Crowdsourcing is being utilized more often as it offers the opportunity to mobilize a wide and heterogeneous group through improved communication and collaboration. In the area of research, crowdsourcing R&D has been successful. Due to “crowdsourcing’s elasticity and mobility”, it is excellent for carrying out research tasks such as data processing, surveying, monitoring, and evaluation. By including the public as innovation partners, projects may be improved in terms of quality, cost, and speed while also generating solutions to important research problems [21].
The key contributions of this article are as follows:
  • Analysis of the appropriate characteristics of social crowds utilized for effective software crowdsourcing.
  • Analysis of the participation reasons of people in carrying out software developmental tasks.
  • Development of a decision support system for evaluating primary participation reasons by assessing various crowds using Fuzzy AHP and TOPSIS techniques.
The paper is organized into five sections. Section 2 describes the existing literature on crowdsourcing, Section 3 describes the overall methodology of our study, Section 4, “Experimental Setup and Results”, provides the description and evaluation of the proposed method, and Section 5, “Conclusions”, concludes and summarizes the objectives achieved in this study.

2. Literature Review

The Internet has made it possible for organizations to attract a significant number of individuals. Cell phones, computers, tablets, and smart gadgets are all ways for crowd and requestor organizations to engage people to carry out various crowdsourcing tasks (health monitoring, question answering, problem solving, recommendations, etc.). Crowds are employed from around the world to undertake various jobs. As the tasks are completed by groups of individuals, quality outcomes may be obtained in less time and with less expenditure [22]. Organizations issue an open call, along with guidelines, for workers who satisfy specific standards; if a competent worker replies and can carry out the assignment within the specified timeline, the organization provides a confirmation to that individual [23]. Crowds are constituted of groups of skilled people who possess some expertise [7,24]. Organizations recruit people who can provide numerous and diverse suggestions for fixing technical concerns [25]. The company pre-assesses the individual’s capacity to engage in complicated activities [26].
Participants are selected based on their backgrounds. Demographic filtering is used to pick persons from relevant countries/localities. If a worker is ready to begin a task, they must supply demographic information. The audience is drawn from a variety of sources and backgrounds. Not every crowd is suitable for every activity, and different activities need different levels of expertise, field knowledge, and so on. Inaccurate workers can reduce job productivity and increase recruitment costs. Choosing the right employee is a difficult task. Workers are recruited with three goals in mind: maximize test specification scope, maximize recruited worker competence in bug identification, and decrease costs. Workers are classified into five belts based on their enrollment—red, green, yellow, blue, and grey—that indicate their skill levels. Worker reliability is evaluated by qualifying and completing the assigned work [27].
To choose appropriate personnel for the sensing task, many task assignment methodologies are used. Mobile crowd-sensing is a technology that enables a group of people to interact and gather data from devices with sensing and processing capabilities in order to measure and visualize phenomena that are of interest to all. Using smartphones, data can be gathered in everyday life and easily compared to other users of the crowd, especially when taking environmental factors or sensor data into account as well. In the context of chronic diseases, mobile technology can particularly help to empower patients in properly coping with their individual health situations. Employee selection in Mobile Crowd Sensing (MCS) is a difficult problem that has an impact on sensing efficiency and quality. Different standards are used to screen for appropriate employees. Participants in the job scheduling system employ sensors to gather or evaluate details about their actual subject [11].
Many software firms are knowledge-intensive; therefore, knowledge management is critical. The design and execution of software systems need information that is frequently spread across many personnel with diverse areas of experience and capabilities [28]. Software engineering is increasingly taking place in companies and communities involving large numbers of individuals, rather than in limited, isolated groups of developers [29]. The popularity of social media has created new methods of distributing knowledge over websites. We live in an environment dominated by social media and user-generated information. Many social relationships, from pleasure to learning and employment, are the result of people engaging actively with one another. It is hardly surprising that social sites have modified the spread of knowledge [30].
Social media is essential for organizations of all sizes because people must engage to perform tasks. As technology develops, it becomes more user-friendly and incorporates a variety of features, such as an operating system based on social factors, software applications primarily geared toward communication, and a medium of engagement through social networks, which are becoming more and more significant. Social media have altered how individuals engage with and share their perspectives on state policies. Four key objectives are achieved by organizations using social media: engaging with citizens, promoting citizen involvement, advancing open government, and analyzing/monitoring public sentiment and activity [31].
Human–computer interaction (HCI) research and practice are based on the principle of human-centered design. The goal of human-centered design is to develop new technologies that are geared toward the requirements and activities of the users. This design philosophy ensures that user needs are considered throughout the whole development process of a technology, from gathering requirements through its final stages. Crowdsourcing utilizes a gamification-based strategy for solving larger problems. The practice of adding features that provide a game-like representation is known as gamification. The method seeks to boost the overall problem-solving strategy by eliciting users’ intrinsic drives through the development of systems that resemble game interfaces. The principles and characteristics of games may be utilized to attract and engage users, lowering anticipated constraints to system usage, such as low motivation and low acceptance rates, and transforming game-based activities into successful outcomes [32].

2.1. Stack Overflow

Social media has become an essential source of information for a diverse range of fields as a means to gain a broader knowledge of information processes and community groups [33]. To aid with application creation, the internet provides a plethora of built-in libraries and tools. Developers typically use pre-existing mobile APIs to save time and money. An Application Programming Interface (API) is a group of elements that provide a mechanism for software-to-software interaction. Stack Overflow (SO) is a renowned question-and-answer platform for software developers, engineers, and beginners. Stack Overflow provides a distributed skill set that enables clients worldwide to improve and broaden their knowledge in coding and their communication capabilities [34]. SO enables a user (the problem presenter or question submitter) to initiate a conversation (question), provide answers, join discussions, rate questions, and accept responses that they believe are beneficial [35].
Choosing the right coding language is usually an important phase in the software development process. Technical factors concerning the coding language’s abilities and flaws in resolving the topic of concern naturally drive this decision. The recent emergence of social networks concerned with technical difficulties has brought professionals into conversations regarding programming languages wherein rigorously technical difficulties usually compete with strongly articulated viewpoints [36]. Practitioners and scholars throughout software engineering are continually focusing on the challenges associated with mobile software development [37]. When coworkers are drawn from various cultures, tackling cultural barriers in software engineering is essential for ensuring appropriate group performance, and the necessity of managing such difficulties has expanded with activities that are reinforced in software development [38].
Some well-known platforms that assign workers to requested tasks are described in the subsequent sections; these platforms and their working procedures are also briefly highlighted in [39].

2.2. Amazon Mechanical Turk

Amazon Mechanical Turk (MTurk) is a crowdsourced recruiting platform that enables requesters to offer online projects, in exchange for completion rewards, to online people who have the required skills, without the limitations of permanent employment [40,41]. Requesters seeking access to a large pool of individuals face minimal entrance requirements for posting human intelligence tasks (HITs). The tasks are assigned by requesters and completed by individuals. HITs vary in complexity from recognizing an image to performing domain-specific tasks such as interpreting source code [42]. Requesters might specify qualifying conditions, such as gender, age, and geography requirements [43]. MTurk is a platform for seeking jobs that may require human intelligence. By browsing the advertised HITs, workers may examine and decide to complete specific tasks [44]. Workers sign up on the platform and, according to their credentials, compete for micro-projects (HITs) advertised by requester organizations that require the accomplishment of such tasks. The MTurk workforce is mostly constituted of individuals from various parts of the globe [45].

2.3. Upwork

Upwork is a platform that connects people selling work with potential employers. Organizations may publish a variety of jobs on Upwork for potential workers to bid on. Accounts are created on Upwork by both job hunters and job posters in order to browse or post jobs and to use the services and functionalities that Upwork delivers. Jobs on Upwork often entail client commitments that last from hours to weeks and necessitate more dynamic interaction than those advertised on micro-tasking websites such as AMT. Jobs advertised on Upwork generally demand a higher degree of implicit understanding and, as a result, a new strategy for planning and coordinating activities between providers and users that goes beyond simple automated monitoring [46]. Upwork enables skilled employees to perform knowledge work ranging from web design to strategic decision-making [47].

2.4. Freelancer

Freelancers are self-employed individuals who have a short-term, task-based relationship with employers and hence are not part of the firm workforce. Their relationship with the firm lasts just until the assigned task is finished satisfactorily; therefore, they do not have the long-term obligations with the corporation that full-time workers have. In exchange for payment, they are obligated to finish the assignment with high quality standards by the agreed-upon day. During the period that the freelancer is carrying out the assigned activity, they are permitted to take on additional freelancing projects with various requesters under different contractual circumstances. In other words, freelancers may work on many projects concurrently. As opposed to full-time employees, freelancers are not bound by restrictive and long contractual work [48]. Engagement of freelancers on these websites enables them to pool their collective intellect in order to execute an assignment in a creative and cost-effective way. The lower processing cost makes freelancers a desirable option for completing tasks efficiently and effectively [49]. FaaT (Freelance as a Team) is an approach for professionals to optimize their internal procedures to fulfill the requirements and capabilities of a single programmer [50].

2.5. Top Coder

Many prominent application developers use online communities to enhance the products or solutions they deliver. TopCoder is a digital platform of over 430,000 creative professionals who compete to build and improve software, websites, and mobile applications for subscribers. TopCoder pioneered competition-based technology innovation and allows developers and producers from across the world to choose and solve the problems and difficulties to which they wish to contribute. TopCoder offers functionality and technology to coordinate and ease solution development and implementation [51]. Every TopCoder application passes through the following phases: application design, architecture, development, assembly, and delivery.
Each step is advertised as a contest on the TopCoder site. Registered platform users can enter any contest and submit the appropriate solutions. The preceding phase’s successful answer is used as input for the following step. The needs of businesses are gathered and specified during the application specification process. Each application is then separated into a collection of components in the architecture stage, and each component passes through the design and development phases. The components are then joined together during the assembly process to develop the application, which is eventually deployed and delivered to corporations [52].
Overall, the registration of social people on crowdsourcing platforms to participate in different tasks is on the rise.

3. Methodology

As the value-creation process of organizations has transformed from centralized to decentralized and from closed to open, various operational constraints are disappearing with advancements in technology. Due to internal resource constraints and external competitiveness, corporations are turning to crowdsourcing initiatives to direct crowdsourcing toward innovative products and services. Crowd involvement is a key concern in crowdsourcing systems, since it has been found to be essential to the variety and success of crowdsourcing competitions. Crowdsourcers announce tasks on crowdsourcing platforms where virtual crowds are present, and many people treat the quantity of participants as a proxy for contest quality when determining the value that can be derived from participating in the contest. Crowdsourcing is one of the numerous digital economy sectors that have emerged as a result of the expansion of internet connectivity and cellular technologies. By linking requesters and workers from all over the world together in a public setting, crowdsourcing tackles the problem of individual employment and advances societal wellbeing [53]. Social and psychological aspects have a significant role in determining how people work, their involvement in their jobs, their well-being, and the sustainability of their employment [54]. Expectancy theory postulates that a task’s accomplishment influences the individual’s decision by transforming his or her mental representation, particularly through perceived expectation and valence. The greatest impact on perceived expectation derives from feedback concerning task completion, whereas the major influence on perceived valence comes from narratives about rewards. It has also been observed that task accomplishment and compensation are commonly emphasized when crowdsourcing organizations post tasks for the general public.
Our research focused on three goals. The first two goals were achieved from a review of the literature, and the third goal was achieved by using a decision support system. The goals are discussed in the Methodology section, and the results are provided in Section 4.

3.1. Analysis of the Appropriate Characteristics of Social Crowds Utilized for Effective Software Crowdsourcing

Corporations are interested in harnessing and learning from individuals. Superior strategies are used to acquire this expertise from outside specialists in order to enhance the effectiveness of diverse processes [25]. Workers are typically divided into two categories: trustworthy and untrustworthy. Trustworthy workers accomplish their jobs honestly; hence, trustworthiness is a favorable characteristic of the crowd. Untrustworthy workers are solely concerned with the incentives linked to duties; thus, they do not truly work and are a destructive presence in a crowd [55]. Workers can solve enormous challenges. These challenges need innovation, practical wisdom, and prior knowledge. Non-experts can also do jobs involving geo-referenced data, maps, and atlases. Workers can be classified in accordance with their contributions and previous participation history, and they are chosen exclusively based on these categories [56]. Crowd workers are either software engineers with programming abilities or software testers who provide multiple analytical services to help the software development process [57].
The crowd consists of qualified professionals with varied talents (Java, Photoshop, accountancy, etc.) [58]. Appropriately diversified crowds engage in the creation of numerous initiatives. Domain specialists are chosen from crowds of inside and outside developer groups in a collaborative setting. Crowd employees are able to work on coding, architectural implementations, unit tests, and debugging [8]. Crowd developers collaborate, exchange, and cooperate with one another to make the software development process more productive. The audience consists of smartphone users who place bids on social sites [59]. Through an open call, developers are invited to engage in various developmental stages of the software life cycle [60]. Individuals have diverse competences and the capacity to coordinate different tasks, adapt to changes in the workplace, and create their own designs [61]. A corporation picks a group of workers of suitable number and heterogeneity with diverse knowledge.
Complex activities are carried out by people with specialized abilities, such as software developers and engineers. The entire software package or a portion of it is outsourced to a vast pool of possible developers. Professionals work on these projects or components in order to deliver a solution. Workers in crowdsourced software development collaborate and manage time to build high-quality software. The crowd understands the objective of the activity and collaborates in English [62]. Experienced software testers provide high-quality products [63]. The crowd is recruited from a pool of qualified testing professionals and is employed to perform operational tests. Experts and software engineers are employed for development, testing, and evaluating the results [64]. Some individuals in the pool have proven competence and a high degree of experience, enabling them to respond to various tasks [65,66]. The overall crowdsourcing phenomenon is represented in Figure 1.

3.2. Analysis of the Participating Reasons of People in Carrying out Software Developmental Task

When analyzing any type of user community, it is crucial to consider the potential effects that the recruiting strategies and distribution channels may have on the research outcome. It is well understood that different user demographics have different expectations, preferences, and cultural backgrounds [67]. Crowdsourcing participants may be divided into paid and unpaid crowd labor, depending on the compensation obtained by workers. The term “paid work” refers to crowdsourcing jobs for which participants are compensated financially, generally using a platform that streamlines payments. Crowdwork, Crowd4U, Wikipedia, Test My Brain, Moral Machine, and Zooniverse are platforms on which users participate without expecting anything in return [68]. Crowd workers on other platforms, such as Fiverr and AMT, perform and complete various tasks because they expect something in return. These expectations may include a reward associated with task completion or various other material or non-material things [6].

3.2.1. Gaining Various Types of Rewards

People perform diverse activities (such as development, design, and debugging) in order to get rewarded [57]. Crowds are compensated according to the quality of their work: workers who create high-quality results are rewarded more. Monetary rewards are the major motivation for crowd involvement. Monetary awards may play an important part in engaging and motivating workers, which can lead to considerable returns at the organizational level [69,70]. These benefits might be cash rewards or reimbursement. Rapid payoffs, as well as earning extra money through perks, encourage the crowd to participate. Non-monetary incentives are also an excellent way to boost engagement and participation. Players complete numerous tasks in order to earn reputation, credit scores, status enhancement [8], and compensation [71] from job providers.

3.2.2. Ranking

Several incentive approaches are used to encourage public participation, just as they are used to motivate internal working groups in corporations. Workers are granted various points based on their degree of work engagement, and these scores are converted into numerous presents and awards [25]. Developers join in order to improve their ranking, pursue self-development, and safeguard the authorship of their unique work [11,63].

3.2.3. Employment Purposes

Employees from the outside world engage in various tasks for professional progress and employment prospects [14,72].

3.2.4. Enhancing or Sharing Knowledge

Crowd involvement is primarily motivated by the desire to obtain information, improve understanding, and further one’s education. Participation is important for skill development. The crowd contributes to numerous crowdsourcing jobs in order to raise their degree of competence [73]. They may take part in order to learn or to share what they have learned. Crowd participation may be used to seek a suitable response to a question [8,57,74]. The crowd may engage by suggesting and brainstorming [9] ideas [75,76] for developmental projects.

3.2.5. Socialization

Some motivators for participation include social comparison (identifying with competitors), social capital (partnership), socialization (making new friends), and connectivity motivations (developing interpersonal or professional relationships) that boost the personality of the engaging crowd. The individual may also carry out various activities in order to acquire specified incentives, through which the worker increases their fame and recognition among peers. Crowd involvement may occur as a result of self-presentation motives, such as exposure, self-advertisement, and identity. Self-efficacy indicates that an individual’s efforts are valued, that they will be rewarded for participating in tasks, and that they are providing the most appropriate solutions. Crowd involvement may also result from determined motivation to achieve personal goals [77]. Crowdfunding may likewise be used to raise and collect finances for initiatives in which the audience wishes to contribute alongside others. Four forms of incentives are present: helping others, encouraging others in completing tasks, providing effective solutions, and ensuring trust [78].

3.2.6. Source of Inspiration

Crowds may be stimulated by expectancy, in which individuals work as volunteers to solve community problems in the belief that their work, like that of other workers, will benefit people. Altruistic commitment (acting without expectations) or pure compassion may be contributory factors [79,80]. Funding and campaigning may also be reasons for crowd participation [71].

3.2.7. Evaluating Competency Level

Individuals may engage in order to assess their own skill and aptitude. By competing in several tournaments, workers can acquire a variety of rewards. Workers may also receive feedback on their efforts after they help complete a job. The audience may engage in activities in order to meet requirements by detecting and correcting different errors [81].

3.3. Development of a Decision Support System for Evaluating Primary Participation Reasons by Assessing Various Crowds Using Fuzzy AHP and TOPSIS Techniques

The Analytic Hierarchy Process (AHP), invented by Saaty in the late 1970s, is one of the methods for making multi-criteria decisions. In this method, a complicated decision problem is divided into several hierarchical levels. The weight of each criterion and alternative is estimated via pairwise comparisons, and the priority is established using the eigenvector method. Fuzzy AHP is an analytic hierarchy process based on fuzzy logic. The AHP technique and the fuzzy AHP approach are interchangeable: the fuzzy AHP approach merely transforms the AHP scaling into a fuzzy triangular scaling that may be obtained in a variety of ways. It is frequently used in situations with ambiguity and uncertainty, and it typically tackles problems involving several criteria. The TOPSIS technique is useful for assessing alternatives based on their proximity to the positive-ideal and negative-ideal solutions in Euclidean space. It is a realistic way of addressing problems that need many decision-making procedures. The choice is made after thoroughly considering all the available options in the scenario and comparing the efficacy of several solutions in a transparent and sensible manner. In our proposed method, we used the TOPSIS algorithm to evaluate the audience based on their motivations for involvement. Details of our method are presented in Section 4.

4. Experimental Setup and Results

In order to cope with ambiguity and imprecision throughout the decision-making process, one of the main AI techniques, known as fuzzy set theory, was applied to evaluate reasons for participation. This section discusses the evaluation of our proposed method.

4.1. Fuzzy AHP Approach for Finding Criterion Weightage

The weightage of criteria was calculated using a fuzzy scale, as presented in Table 1.
Our suggested strategy uses the fuzzy AHP approach to assess a crowd based on their participation motives. This method reliably assesses the selected qualities and determines their percentage relevance. Seven engagement criteria were taken into consideration in the proposed study: rewards, employment, knowledge-sharing, ranking, competency, socialization, and source of inspiration. The results below are ordered by the outcomes of the procedure and the full numerical computation. The steps of the approach are as follows.
Step 1. Draw an n × n pairwise comparison matrix.
C = \begin{bmatrix} C_{11} & C_{12} & \cdots & C_{1n} \\ C_{21} & C_{22} & \cdots & C_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ C_{n1} & C_{n2} & \cdots & C_{nn} \end{bmatrix} \quad (1)
The n × n decision matrix may be created by populating the preceding matrix, assigning each pairwise comparison a value from 1 to 10, as shown in Table 2.
Step 2. Replace each crisp judgement with a triangular fuzzy number. For reciprocals, the equation is
A^{-1} = (l, m, u)^{-1} = (1/u,\ 1/m,\ 1/l), \quad (2)
where l is a lower number, m is the middle number, and u is the upper number.
Equation (2) may be used to replace specific integers with fuzzy numbers, and the resultant fuzzified matrix is shown in Table 3.
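To make the fuzzification step concrete, the following minimal Python sketch implements the reciprocal rule of Equation (2); the triangular fuzzy number (2, 3, 4) in the example is purely illustrative and not taken from Table 3.

```python
# Sketch of Equation (2), assuming triangular fuzzy numbers stored as (l, m, u) tuples.

def fuzzy_reciprocal(tfn):
    """Reciprocal of a triangular fuzzy number: (l, m, u)^-1 = (1/u, 1/m, 1/l)."""
    l, m, u = tfn
    return (1.0 / u, 1.0 / m, 1.0 / l)

# Example: a crisp judgement of 3 fuzzified as (2, 3, 4); its reciprocal fills
# the mirrored cell of the pairwise comparison matrix.
print(fuzzy_reciprocal((2, 3, 4)))  # -> (0.25, 0.333..., 0.5)
```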
Step 3. We compute the fuzzy geometric mean value (FGMV) by implementing the following equation,
\mathrm{FGMV}\ (\tilde{r}_i) = (\tilde{A}_1 \otimes \tilde{A}_2 \otimes \cdots \otimes \tilde{A}_n)^{1/n} = \left( (l_1 l_2 \cdots l_n)^{1/n},\ (m_1 m_2 \cdots m_n)^{1/n},\ (u_1 u_2 \cdots u_n)^{1/n} \right) \quad (3)
where n indicates the number of criteria.
The FGMV values are derived using Equation (3); the results are shown in Table 4.
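As an illustration of Equation (3), the sketch below computes the fuzzy geometric mean of one criterion’s row; the row values are hypothetical and do not reproduce Table 3.

```python
# Sketch of Equation (3): component-wise geometric mean of one row of fuzzy judgements.
import numpy as np

def fuzzy_geometric_mean(row):
    """row: list of (l, m, u) triples for one criterion; returns the FGMV triple."""
    n = len(row)
    ls, ms, us = zip(*row)
    gmean = lambda xs: float(np.prod(xs)) ** (1.0 / n)
    return (gmean(ls), gmean(ms), gmean(us))

# Hypothetical row for one criterion compared against three criteria.
row = [(1, 1, 1), (2, 3, 4), (1/4, 1/3, 1/2)]
print(fuzzy_geometric_mean(row))
```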
Step 4. For computing the fuzzy weights (Wi), the formula is as follows:
W_i = \tilde{r}_i \otimes (\tilde{r}_1 \oplus \tilde{r}_2 \oplus \cdots \oplus \tilde{r}_n)^{-1} \quad (4)
Step 5. Defuzzification: average weights are computed by using the formula given below:
\text{Center of Area}\ (w_i) = \frac{l + m + u}{3} \quad (5)
Using the COA method, we obtain the average weights from fuzzy weights.
Step 6. If the overall sum of the average weights is greater than one, convert them to normalized weights by applying the formula below:
N_i = \frac{w_i}{\sum_i w_i} \quad (6)
Using Equations (4)–(6), the fuzzy weights are first calculated from the FGMVs with Formula (4). Then, Formula (5) is used to calculate the average weights. Lastly, Formula (6) is used to obtain the normalized weights of the criteria. Table 5 displays the overall results.
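The three weight-derivation steps can be sketched in a few lines of Python; the FGMV triples below are placeholders standing in for Table 4, not the study’s actual data.

```python
# Sketch of Steps 4-6 (Equations (4)-(6)): fuzzy weights, centre-of-area
# defuzzification, and normalization. FGMV triples are illustrative only.

fgmv = [(0.9, 1.2, 1.6), (1.4, 1.9, 2.5), (0.5, 0.7, 1.0)]  # hypothetical r_i values

# Step 4: W_i = r_i * (r_1 + ... + r_n)^-1, using the fuzzy reciprocal rule
# (l, m, u)^-1 = (1/u, 1/m, 1/l) applied to the component-wise sum.
total = tuple(map(sum, zip(*fgmv)))                       # (sum l, sum m, sum u)
inv_total = (1 / total[2], 1 / total[1], 1 / total[0])
fuzzy_w = [(l * inv_total[0], m * inv_total[1], u * inv_total[2]) for l, m, u in fgmv]

# Step 5: centre-of-area defuzzification, w_i = (l + m + u) / 3.
avg_w = [(l + m + u) / 3 for l, m, u in fuzzy_w]

# Step 6: normalize so that the weights sum to one.
norm_w = [w / sum(avg_w) for w in avg_w]
print(norm_w)
```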
Figure 2 indicates the overall weights of the criteria (participation reasons). Here, rewards and employment are the primary motives of crowds for task accomplishment, followed by knowledge-sharing, ranking, competency, socialization, and source of inspiration.

4.2. TOPSIS Technique for Evaluating and Ranking Alternatives

To overcome MCDM difficulties, Hwang and Yoon developed TOPSIS, a method for judging order performance by similarity to the ideal solution. According to the primary premise of the technique, the chosen alternative should be the one that is closest to the positive ideal solution and furthest from the negative one. In conventional MCDM techniques, the ratings and weights of criteria are precisely known. The traditional TOPSIS approach also uses real numbers to show the weights of the criteria and the ratings of the options. Several other fields have successfully used the TOPSIS technique. The proposed approach successfully evaluates the options and calculates their percentages. Five alternatives are considered in this study, identified by their titles: crowd-1, crowd-2, crowd-3, crowd-4, and crowd-5. The results below are organized by the procedure’s outcomes and full numerical computation. The steps of the approach are as follows:
Step 1. Draw a decision matrix.
Develop the decision matrix by applying matrix Equation (7):
D = [D_{ij}] = \begin{bmatrix} D_{11} & D_{12} & \cdots & D_{1n} \\ D_{21} & D_{22} & \cdots & D_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ D_{m1} & D_{m2} & \cdots & D_{mn} \end{bmatrix} \quad (7)
Here, i = 1, 2, 3, …, m and j = 1, 2, 3, …, n.
In matrix (7), D_{ij} denotes the rating of the ith alternative on the jth criterion.
Using the crowds and criteria listed in Table 6 as a foundation, the decision matrix can be built for the five crowds, with values assigned between 1 and 10.
Step 2. Draw normalized decision matrix (NDM)
Identify the normalized matrix by using Equation (8):
F_{ij} = \frac{D_{ij}}{\sqrt{\sum_{i=1}^{m} D_{ij}^2}} \quad (8)
Equation (8) is used to normalize the decision matrix provided in Table 6; the results are shown in Table 7.
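A minimal sketch of the vector normalization in Equation (8) is given below; the 3 × 3 rating matrix is hypothetical (the study itself uses five crowds and seven criteria).

```python
# Sketch of Equation (8): vector normalization of the decision matrix.
# Rows are alternatives (crowds), columns are criteria; ratings are illustrative.
import numpy as np

D = np.array([[7, 5, 8],
              [6, 9, 4],
              [8, 6, 7]], dtype=float)

F = D / np.sqrt((D ** 2).sum(axis=0))   # F_ij = D_ij / sqrt(sum_i D_ij^2)
print(F)
```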
Step 3. Calculate the weighted normalized decision matrix (weighted NDM); recognize the weighted NDM via Equation (9):
N = [N_{ij}] = [C_j \times F_{ij}] = \begin{bmatrix} C_1 F_{11} & C_2 F_{12} & \cdots & C_n F_{1n} \\ C_1 F_{21} & C_2 F_{22} & \cdots & C_n F_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ C_1 F_{m1} & C_2 F_{m2} & \cdots & C_n F_{mn} \end{bmatrix} \quad (9)
The normalized matrix shown in Table 7 was used to create the weighted NDM via Equation (9); the outputs of the scaled NDM, together with the criteria weights, are shown in Table 8.
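Equation (9) reduces to a column-wise multiplication, as the sketch below shows; it continues the hypothetical example, with criterion weights assumed to come from the fuzzy AHP stage.

```python
# Sketch of Equation (9): scale each normalized column by its criterion weight.
import numpy as np

F = np.array([[0.57, 0.42, 0.70],
              [0.49, 0.75, 0.35],
              [0.65, 0.50, 0.61]])      # normalized ratings (illustrative)
C = np.array([0.5, 0.3, 0.2])           # fuzzy AHP criterion weights (illustrative)

N = C * F                               # N_ij = C_j * F_ij (broadcast over rows)
print(N)
```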
Step 4. Calculate the positive and negative ideal solutions
The ideal +ive and −ive solutions are determined using the given Formulas (10) and (11),
I_j^+ = \{ (\max_i N_{ij} \mid j \in J);\ (\min_i N_{ij} \mid j \in J') \} \quad (10)
I_j^- = \{ (\min_i N_{ij} \mid j \in J);\ (\max_i N_{ij} \mid j \in J') \} \quad (11)
Here, J denotes the set of beneficial criteria and J′ the set of cost (non-beneficial) criteria. The ideal I_j^- and I_j^+ solutions were determined from the weighted normalized table using Equations (10) and (11), and their results are shown in Table 9 below. The positive ideal I_j^+ solution maximizes the beneficial criteria and minimizes the cost or non-beneficial criteria, whereas the negative ideal I_j^- solution maximizes the cost or non-beneficial criteria and minimizes the beneficial ones. I_j^+ represented the best crowd-motivation reason, while I_j^- was considered the worst.
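Since all seven participation reasons are benefit criteria, the positive ideal reduces to the column maxima and the negative ideal to the column minima, as the hypothetical sketch below shows.

```python
# Sketch of Equations (10) and (11) for benefit-only criteria: the positive ideal
# takes the column maxima and the negative ideal the column minima (with cost
# criteria, min and max would swap). Values are illustrative.
import numpy as np

N = np.array([[0.285, 0.126, 0.140],
              [0.245, 0.225, 0.070],
              [0.325, 0.150, 0.122]])   # weighted NDM (illustrative)

I_pos = N.max(axis=0)                   # I_j+
I_neg = N.min(axis=0)                   # I_j-
print(I_pos, I_neg)
```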
Step 5. Identifying ideal and non-ideal separation
The ideal separation (S_i^+) and the non-ideal separation (S_i^-) were determined using Equations (12) and (13):
S_i^+ = \sqrt{\sum_{j=1}^{n} (N_{ij} - I_j^+)^2} \quad (12)
S_i^- = \sqrt{\sum_{j=1}^{n} (N_{ij} - I_j^-)^2} \quad (13)
Equations (12) and (13) were used to compute S_i^+ and S_i^-, respectively, for the ranking of options, and the overall result is shown in Table 10.
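The separation measures are row-wise Euclidean distances, as in the sketch below, which continues the illustrative weighted matrix from the previous examples.

```python
# Sketch of Equations (12) and (13): distance of each alternative from the
# positive and negative ideal solutions. Values are illustrative.
import numpy as np

N = np.array([[0.285, 0.126, 0.140],
              [0.245, 0.225, 0.070],
              [0.325, 0.150, 0.122]])
I_pos, I_neg = N.max(axis=0), N.min(axis=0)

S_pos = np.sqrt(((N - I_pos) ** 2).sum(axis=1))  # S_i+
S_neg = np.sqrt(((N - I_neg) ** 2).sum(axis=1))  # S_i-
print(S_pos, S_neg)
```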
Step 6. Calculate performance score (Pᵢ) and ranking of alternatives
Pᵢ was determined using Equation (14):
\text{Performance score}\ (P_i) = \frac{S_i^-}{S_i^+ + S_i^-} \quad (14)
Table 11 demonstrates the ordering of options after measuring the ideal and non-ideal separation measures and identifying P_i via Equation (14).
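Finally, Equation (14) and the ranking can be sketched as follows; the separation values are carried over from the illustrative example, not from Table 10.

```python
# Sketch of Equation (14): closeness coefficient and ranking. The alternative
# with the highest P_i is ranked first. Separation values are illustrative.
import numpy as np

S_pos = np.array([0.11, 0.12, 0.05])
S_neg = np.array([0.07, 0.10, 0.11])

P = S_neg / (S_pos + S_neg)              # performance score P_i
order = np.argsort(-P)                   # alternative indices, best first
print(P, order + 1)                      # 1-based alternative numbers
```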
Figure 3 depicts the performance scores and rankings of the evaluated crowds as alternatives.
We concluded from the outcomes that crowd-4 was the best alternative, having the highest performance value at 0.676, and thus, we ranked it 1st among the available crowds.

5. Conclusions

Crowdsourcing is a popular strategy for accomplishing a variety of tasks. Several entities and their interactions are involved in the process: the requesters, who manage, execute, and supervise crowdsourcing initiatives and may post task requests; the crowd, consisting of virtual workers who participate in outsourced activities or events; and the platform, which serves as a channel for interaction between the crowd and the crowdsourcers. The people on these platforms are connected by means of social media such as Facebook, WhatsApp, Instagram, Twitter, Y-mail, and Gmail accounts. By utilizing crowdsourcing, a requestor may access a wide potential audience with a variety of skills and experiences to help with work that would be challenging to do without many people. In the realm of software engineering, crowdsourcing has been used to address coding, validation, and architectural problems. A range of incentive schemes is used to encourage public involvement, just as incentives are used to motivate internal working groups in businesses. Workers receive several types of points, cash awards, and other rewards based on their performance. Our study focused on achieving three goals related to the crowdsourcing paradigm:
(1) Analysis of the appropriate characteristics of social crowds utilized for effective software crowdsourcing.
(2) Analysis of the participation reasons of people carrying out software developmental tasks.
(3) Development of a decision support system for evaluating primary participation reasons by assessing various crowds using Fuzzy AHP and TOPSIS techniques.
A decision support system was developed in this study to analyze the appropriate reasons for crowd participation in software development. Rewards and employment were evaluated as the highest motives of crowds for task accomplishment, followed by knowledge-sharing, ranking, competency, socialization, and source of inspiration. As this study evaluated crowd drives and motivations for task accomplishment, it will assist crowdsourcing organizations in enhancing their operations and productivity by providing incentives according to crowd needs and expectations.

Author Contributions

Conceptualization, F.A. and S.N.; Methodology, H.U.K. and Y.Y.G.; Validation, H.G.M.; Formal analysis, I.U.; Investigation, H.U.K.; Writing—review & editing, F.A. and I.U. All authors have read and agreed to the published version of the manuscript.

Funding

Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023TR140), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Data Availability Statement

Not applicable.

Acknowledgment

Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023TR140), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mazlan, N.; Ahmad, S.S.S.; Kamalrudin, M. Volunteer selection based on crowdsourcing approach. J. Ambient. Intell. Humaniz. Comput. 2018, 9, 743–753. [Google Scholar] [CrossRef]
  2. Raza, M.; Barket, A.R.; Rehman, A.U.; Rehman, A.; Ullah, I. Mobile crowdsensing based architecture for intelligent traffic prediction and quickest path selection. In Proceedings of the 2020 International Conference on UK-China Emerging Technologies (UCET), Glasgow, UK, 20–21 August 2020; pp. 1–4. [Google Scholar]
  3. Lee, J.; Seo, D. Crowdsourcing not all sourced by the crowd: An observation on the behavior of Wikipedia participants. Technovation 2016, 55, 14–21. [Google Scholar] [CrossRef]
  4. Zhai, L.; Wang, H.; Li, X. Optimal Task Partition with Delay Requirement in Mobile Crowdsourcing. Wirel. Commun. Mob. Comput. 2019, 2019, 1–12. [Google Scholar] [CrossRef]
  5. Howe, J. The rise of crowdsourcing. Wired Mag. 2006, 14, 1–4. [Google Scholar]
  6. Assis Neto, F.R.; Santos, C.A.S. Understanding crowdsourcing projects: A systematic review of tendencies, workflow, and quality management. Inf. Process. Manag. 2018, 54, 490–506. [Google Scholar] [CrossRef]
  7. Pongratz, H.J. Of crowds and talents: Discursive constructions of global online labour. New Technol. Work. Employ. 2018, 33, 58–73. [Google Scholar] [CrossRef]
  8. Sarı, A.; Tosun, A.; Alptekin, G.I. A systematic literature review on crowdsourcing in software engineering. J. Syst. Softw. 2019, 153, 200–219. [Google Scholar] [CrossRef]
  9. Wu, G.; Chen, Z.; Liu, J.; Han, D.; Qiao, B. Task assignment for social-oriented crowdsourcing. Front. Comput. Sci. 2021, 15, 1–11. [Google Scholar] [CrossRef]
  10. Boubiche, D.E.; Imran, M.; Maqsood, A.; Shoaib, M. Mobile crowd sensing—Taxonomy, applications, challenges, and solutions. Comput. Hum. Behav. 2019, 101, 352–370. [Google Scholar] [CrossRef]
  11. Stol, K.; Caglayan, B.; Fitzgerald, B. Competition-Based Crowdsourcing Software Development: A Multi-Method Study from a Customer Perspective. IEEE Trans. Softw. Eng. 2019, 45, 237–260. [Google Scholar] [CrossRef]
  12. Alsayyari, M.; Alyahya, S. Supporting Coordination in Crowdsourced Software Testing Services. In Proceedings of the 2018 IEEE Symposium on Service-Oriented System Engineering (SOSE), Bamberg, Germany, 26–29 March 2018; pp. 69–75. [Google Scholar]
  13. Pee, L.G.; Koh, E.; Goh, M. Trait motivations of crowdsourcing and task choice: A distal-proximal perspective. Int. J. Inf. Manag. 2018, 40, 28–41. [Google Scholar] [CrossRef]
  14. Brandtner, P.; Auinger, A.; Helfert, M. Principles of human computer interaction in crowdsourcing to foster motivation in the context of open innovation. In Proceedings of the HCI in Business: First International Conference, HCIB 2014, Held as Part of HCI International 2014, Heraklion, Crete, Greece, 22–27 June 2014; Proceedings 1; pp. 585–596. [Google Scholar]
  15. Wightman, D. Crowdsourcing human-based computation. In Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries, New York, NY, USA, 16–20 October 2010; pp. 551–560. [Google Scholar]
  16. Shang, R.; Ma, Y.; Ali, F.; Hu, C.; Nazir, S.; Wei, H.; Khan, A. Selection of crowd in crowdsourcing for smart intelligent applications: A systematic mapping study. Sci. Program. 2021, 2021, 1–23. [Google Scholar] [CrossRef]
  17. Mao, K.; Capra, L.; Harman, M.; Jia, Y. A survey of the use of crowdsourcing in software engineering. J. Syst. Softw. 2016, 126, 57–84. [Google Scholar] [CrossRef]
  18. Stolee, K.T.; Elbaum, S. Exploring the use of crowdsourcing to support empirical studies in software engineering. In Proceedings of the 2010 ACM-IEEE international symposium on Empirical software engineering and measurement, Bolzano/Bozen, Italy, 16–17 September 2010; pp. 1–4. [Google Scholar]
  19. Xie, T.; Bishop, J.; Horspool, R.N.; Tillmann, N.; De Halleux, J. Crowdsourcing code and process via code hunt. In Proceedings of the 2015 IEEE/ACM 2nd International Workshop on CrowdSourcing in Software Engineering, Florence, Italy, 19 May 2015; pp. 15–16. [Google Scholar]
  20. Vermicelli, S.; Cricelli, L.; Grimaldi, M. How can crowdsourcing help tackle the COVID-19 pandemic? An explorative overview of innovative collaborative practices. RD Manag. 2021, 51, 183–194. [Google Scholar] [CrossRef]
  21. Mourelatos, E.; Tzagarakis, M. An investigation of factors affecting the visits of online crowdsourcing and labor platforms. NETNOMICS Econ. Res. Electron. Netw. 2018, 19, 95–130. [Google Scholar] [CrossRef]
  22. Peng, X.; Gu, J.; Tan, T.H.; Sun, J.; Yu, Y.; Nuseibeh, B.; Zhao, W. CrowdService: Serving the individuals through mobile crowdsourcing and service composition. In Proceedings of the 2016 31st IEEE/ACM International Conference on Automated Software Engineering (ASE), Singapore, 3–7 September 2016; pp. 214–219. [Google Scholar]
  23. Leicht, N.; Blohm, I.; Leimeister, J.M. Leveraging the Power of the Crowd for Software Testing. IEEE Softw. 2017, 34, 62–69. [Google Scholar] [CrossRef]
  24. Jeffcoat, K.L.; Eveleigh, T.J.; Tanju, B. A Conceptual Framework for Increasing Innovation through Improved Selection of Specialized Professionals. Eng. Manag. J. 2019, 31, 22–34. [Google Scholar] [CrossRef]
  25. Kamoun, F.; Alhadidi, D.; Maamar, Z. Weaving Risk Identification into Crowdsourcing Lifecycle. Procedia Comput. Sci. 2015, 56, 41–48. [Google Scholar] [CrossRef]
  26. Saremi, R.L.; Ye, Y.; Ruhe, G.; Messinger, D. Leveraging crowdsourcing for team elasticity: An empirical evaluation at TopCoder. In Proceedings of the 2017 IEEE/ACM 39th International Conference on Software Engineering: Software Engineering in Practice Track (ICSE-SEIP), Buenos Aires, Argentina, 20–28 May 2017; pp. 103–112. [Google Scholar]
  27. Pryss, R. Mobile crowdsensing in healthcare scenarios: Taxonomy, conceptual pillars, smart mobile crowdsensing services. In Digital Phenotyping and Mobile Sensing; Springer: Berlin/Heidelberg, Germany, 2023; pp. 305–320. [Google Scholar]
  28. Barzilay, O.; Treude, C.; Zagalsky, A. Facilitating crowd sourced software engineering via stack overflow. In Finding Source Code on the Web for Remix and Reuse; Springer: Berlin/Heidelberg, Germany, 2013; pp. 289–308. [Google Scholar]
  29. Matei, S.A.; Abu Jabal, A.; Bertino, E. Social-collaborative determinants of content quality in online knowledge production systems: Comparing Wikipedia and Stack Overflow. Soc. Netw. Anal. Min. 2018, 8, 1–16. [Google Scholar] [CrossRef]
  30. Sathish, R.; Manikandan, R.; Priscila, S.S.; Sara, B.V.; Mahaveerakannan, R. A report on the impact of information technology and social media on COVID–19. In Proceedings of the 2020 3rd International Conference on Intelligent Sustainable Systems (ICISS), Thoothukudi, India, 3–5 December 2020; pp. 224–230. [Google Scholar]
  31. Wang, X.; Goh, D.H.-L.; Lim, E.-P. Understanding continuance intention toward crowdsourcing games: A longitudinal investigation. Int. J. Hum. Comput. Interact. 2020, 36, 1168–1177. [Google Scholar] [CrossRef]
  32. Blanco, G.; Pérez-López, R.; Fdez-Riverola, F.; Lourenço, A.M.G. Understanding the social evolution of the Java community in Stack Overflow: A 10-year study of developer interactions. Future Gener. Comput. Syst. 2020, 105, 446–454. [Google Scholar] [CrossRef]
  33. Zhu, W.; Zhang, H.; Hassan, A.E.; Godfrey, M.W. An empirical study of question discussions on Stack Overflow. arXiv 2021, arXiv:2109.13172. [Google Scholar] [CrossRef]
  34. Beddiar, C.; Khelili, I.E.; Bounour, N.; Seriai, A.-D. Classification of Android APIs Posts: An analysis of developer’s discussions on Stack Overflow. In Proceedings of the 2020 International Conference on Advanced Aspects of Software Engineering (ICAASE), Constantine, Algeria, 28–30 November 2020; pp. 1–5. [Google Scholar]
  35. Cagnoni, S.; Cozzini, L.; Lombardo, G.; Mordonini, M.; Poggi, A.; Tomaiuolo, M. Emotion-based analysis of programming languages on Stack Overflow. ICT Express 2020, 6, 238–242. [Google Scholar] [CrossRef]
  36. Rosen, C.; Shihab, E. What are mobile developers asking about? A large scale study using Stack Overflow. Empir. Softw. Eng. 2016, 21, 1192–1223. [Google Scholar] [CrossRef]
  37. Zolduoarrati, E.; Licorish, S.A.; Stanger, N. Impact of individualism and collectivism cultural profiles on the behaviour of software developers: A study of stack overflow. J. Syst. Softw. 2022, 192, 111427. [Google Scholar] [CrossRef]
  38. Zhen, Y.; Khan, A.; Nazir, S.; Huiqi, Z.; Alharbi, A.; Khan, S. Crowdsourcing usage, task assignment methods, and crowdsourcing platforms: A systematic literature review. J. Softw. Evol. Process 2021, 33, e2368. [Google Scholar] [CrossRef]
  39. Layman, L.; Sigurðsson, G. Using Amazon’s Mechanical Turk for User Studies: Eight Things You Need to Know. In Proceedings of the 2013 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, Baltimore, MD, USA, 10–11 October 2013; pp. 275–278. [Google Scholar]
  40. Ritchey, C.M.; Kuroda, T.; Rung, J.M.; Podlesnik, C.A. Evaluating extinction, renewal, and resurgence of operant behavior in humans with Amazon Mechanical Turk. Learn. Motiv. 2021, 74, 101728. [Google Scholar] [CrossRef]
  41. Sun, P.; Stolee, K.T. Exploring crowd consistency in a mechanical turk survey. In Proceedings of the 3rd International Workshop on CrowdSourcing in Software Engineering, Austin, TX, USA, 16 May 2016; pp. 8–14. [Google Scholar]
  42. Binder, C.C. Time-of-day and day-of-week variations in Amazon Mechanical Turk survey responses. J. Macroecon. 2022, 71, 103378. [Google Scholar] [CrossRef]
  43. Hilton, L.G.; Coulter, I.D.; Ryan, G.W.; Hays, R.D. Comparing the Recruitment of Research Participants With Chronic Low Back Pain Using Amazon Mechanical Turk With the Recruitment of Patients From Chiropractic Clinics: A Quasi-Experimental Study. J. Manip. Physiol. Ther. 2021, 44, 601–611. [Google Scholar] [CrossRef]
  44. Schmidt, G.B.; Jettinghoff, W.M. Using Amazon Mechanical Turk and other compensated crowdsourcing sites. Bus. Horiz. 2016, 59, 391–400. [Google Scholar] [CrossRef]
  45. Jarrahi, M.H.; Sutherland, W.; Nelson, S.B.; Sawyer, S. Platformic management, boundary resources for gig work, and worker autonomy. Comput. Support. Coop. Work. (CSCW) 2020, 29, 153–189. [Google Scholar] [CrossRef]
  46. Kinder, E.; Jarrahi, M.H.; Sutherland, W. Gig platforms, tensions, alliances and ecosystems: An actor-network perspective. Proc. ACM Hum. -Comput. Interact. 2019, 3, 1–26. [Google Scholar] [CrossRef]
  47. Gupta, V.; Fernandez-Crehuet, J.M.; Hanne, T. Freelancers in the software development process: A systematic mapping study. Processes 2020, 8, 1215. [Google Scholar] [CrossRef]
  48. Abhinav, K.; Dubey, A. Predicting budget for Crowdsourced and freelance software development projects. In Proceedings of the 10th Innovations in Software Engineering Conference, Jaipur, India, 5–7 February 2017; pp. 165–171. [Google Scholar]
  49. Bernabé, R.B.; Navia, I.Á.; García-Peñalvo, F.J. Faat: Freelance as a team. In Proceedings of the 3rd International Conference on Technological Ecosystems for Enhancing Multiculturality, Porto, Portugal, 7–9 October 2015; pp. 687–694. [Google Scholar]
  50. Begel, A.; Bosch, J.; Storey, M.-A. Social networking meets software development: Perspectives from github, msdn, stack exchange, and topcoder. IEEE Softw. 2013, 30, 52–66. [Google Scholar] [CrossRef]
  51. Li, K.; Xiao, J.; Wang, Y.; Wang, Q. Analysis of the key factors for software quality in crowdsourcing development: An empirical study on topcoder.com. In Proceedings of the 2013 IEEE 37th Annual Computer Software and Applications Conference, Kyoto, Japan, 22–26 July 2013; pp. 812–817. [Google Scholar]
  52. Guo, W.; Fu, Z.-L.; Sun, J.; Wang, L.; Zhang, J. Task navigation panel for Amazon Mechanical Turk. In Proceedings of the 5th International Conference on Computer Science and Software Engineering, Guilin, China, 21–23 October 2022; pp. 574–580. [Google Scholar]
  53. Sun, Y.; Ma, X.; Ye, K.; He, L. Investigating Crowdworkers’ Identity, Perception and Practices in Micro-Task Crowdsourcing. Proc. ACM Hum.-Comput. Interact. 2022, 6, 1–20. [Google Scholar] [CrossRef]
  54. Zhao, Y.; Liu, G.; Zheng, K.; Liu, A.; Li, Z.; Zhou, X. A context-aware approach for trustworthy worker selection in social crowd. World Wide Web 2017, 20, 1211–1235. [Google Scholar] [CrossRef]
  55. Luz, N.; Silva, N.; Novais, P. A survey of task-oriented crowdsourcing. Artif. Intell. Rev. 2015, 44, 187–213. [Google Scholar] [CrossRef]
  56. Folorunso, O.; Mustapha, O.A. A fuzzy expert system to Trust-Based Access Control in crowdsourcing environments. Appl. Comput. Inform. 2015, 11, 116–129. [Google Scholar] [CrossRef]
  57. Christoforaki, M.; Ipeirotis, P.G. A system for scalable and reliable technical-skill testing in online labor markets. Comput. Netw. 2015, 90, 110–120. [Google Scholar] [CrossRef]
  58. Li, M.; Wang, M.; Jin, X.; Guo, C. Affinity-Aware Online Selection Mechanisms in Mobile Crowdsourcing Sensing. In Proceedings of the 2018 IEEE 9th International Conference on Software Engineering and Service Science (ICSESS), Beijing, China, 23–25 November 2018; pp. 1–6. [Google Scholar]
  59. Sharma, S.; Hasteer, N.; Van-Belle, J.P. An exploratory study on perception of Indian crowd towards crowdsourcing software development. In Proceedings of the 2017 International Conference on Computing, Communication and Automation (ICCCA), Greater Noida, India, 5–6 May 2017; pp. 901–905. [Google Scholar]
  60. Tokarchuk, O.; Cuel, R.; Zamarian, M. Analyzing Crowd Labor and Designing Incentives for Humans in the Loop. IEEE Internet Comput. 2012, 16, 45–51. [Google Scholar] [CrossRef]
  61. Zanatta, A.L.; Machado, L.; Steinmacher, I. Competence, Collaboration, and Time Management: Barriers and Recommendations for Crowdworkers. In Proceedings of the 2018 IEEE/ACM 5th International Workshop on Crowd Sourcing in Software Engineering (CSI-SE), Gothenburg, Sweden, 27 May–3 June 2018; pp. 9–16. [Google Scholar]
  62. Zhang, X.; Chen, Z.; Fang, C.; Liu, Z. Guiding the Crowds for Android Testing. In Proceedings of the 2016 IEEE/ACM 38th International Conference on Software Engineering Companion (ICSE-C), Austin, TX, USA, 14–22 May 2016; pp. 752–753. [Google Scholar]
  63. Tran-Thanh, L.; Stein, S.; Rogers, A.; Jennings, N.R. Efficient crowdsourcing of unknown experts using bounded multi-armed bandits. Artif. Intell. 2014, 214, 89–111. [Google Scholar] [CrossRef]
  64. Smirnov, A.; Ponomarev, A.; Shilov, N. Hybrid Crowd-based Decision Support in Business Processes: The Approach and Reference Model. Procedia Technol. 2014, 16, 376–384. [Google Scholar] [CrossRef]
  65. Moayedikia, A.; Yeoh, W.; Ong, K.-L.; Boo, Y.L. Improving accuracy and lowering cost in crowdsourcing through an unsupervised expertise estimation approach. Decis. Support Syst. 2019, 122, 113065. [Google Scholar] [CrossRef]
  66. Tahaei, M.; Vaniea, K. Recruiting participants with programming skills: A comparison of four crowdsourcing platforms and a CS student mailing list. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April–5 May 2022; pp. 1–15. [Google Scholar]
  67. Hettiachchi, D.; Kostakos, V.; Goncalves, J. A survey on task assignment in crowdsourcing. ACM Comput. Surv. (CSUR) 2022, 55, 1–35. [Google Scholar] [CrossRef]
  68. Dissanayake, I.; Mehta, N.; Palvia, P.; Taras, V.; Amoako-Gyampah, K. Competition matters! Self-efficacy, effort, and performance in crowdsourcing teams. Inf. Manag. 2019, 56, 103158. [Google Scholar] [CrossRef]
  69. Aguinis, H.; Joo, H.; Gottfredson, R.K. What monetary rewards can and cannot do: How to show employees the money. Bus. Horiz. 2013, 56, 241–249. [Google Scholar] [CrossRef]
  70. Troll, J.; Blohm, I.; Leimeister, J.M. Why Incorporating a Platform-Intermediary can Increase Crowdsourcees’ Engagement. Bus. Inf. Syst. Eng. 2019, 61, 433–450. [Google Scholar] [CrossRef]
  71. Modaresnezhad, M.; Iyer, L.; Palvia, P.; Taras, V. Information Technology (IT) enabled crowdsourcing: A conceptual framework. Inf. Process. Manag. 2020, 57, 102135. [Google Scholar] [CrossRef]
  72. Micholia, P.; Karaliopoulos, M.; Koutsopoulos, I.; Aiello, L.M.; Morales, G.D.F.; Quercia, D. Incentivizing social media users for mobile crowdsourcing. Int. J. Hum.-Comput. Stud. 2017, 102, 4–13. [Google Scholar] [CrossRef]
  73. LaToza, T.D.; Hoek, A.v.d. A Vision of Crowd Development. In Proceedings of the 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering, Florence, Italy, 16–24 May 2015; pp. 563–566. [Google Scholar]
  74. Dissanayake, I.; Zhang, J.; Gu, B. Virtual Team Performance in Crowdsourcing Contest: A Social Network Perspective. In Proceedings of the 2015 48th Hawaii International Conference on System Sciences, Kauai, HI, USA, 5–8 January 2015; pp. 4894–4897. [Google Scholar]
  75. Franken, S.; Kolvenbach, S.; Prinz, W.; Alvertis, I.; Koussouris, S. CloudTeams: Bridging the Gap Between Developers and Customers During Software Development Processes. Procedia Comput. Sci. 2015, 68, 188–195. [Google Scholar] [CrossRef]
  76. Saxton, G.D.; Oh, O.; Kishore, R. Rules of Crowdsourcing: Models, Issues, and Systems of Control. Inf. Syst. Manag. 2013, 30, 2–20. [Google Scholar] [CrossRef]
  77. Garcia Martinez, M. Inspiring crowdsourcing communities to create novel solutions: Competition design and the mediating role of trust. Technol. Forecast. Soc. Chang. 2017, 117, 296–304. [Google Scholar] [CrossRef]
  78. Moodley, F.; Belle, J.V.; Hasteer, N. Crowdsourced software development: Exploring the motivational and inhibiting factors of the South African crowd. In Proceedings of the 2017 7th International Conference on Cloud Computing, Data Science & Engineering—Confluence, Noida, India, 12–13 January 2017; pp. 656–661. [Google Scholar]
  79. Morschheuser, B.; Hamari, J.; Maedche, A. Cooperation or competition—When do people contribute more? A field experiment on gamification of crowdsourcing. Int. J. Hum.-Comput. Stud. 2019, 127, 7–24. [Google Scholar] [CrossRef]
  80. Zanatta, A.L.; Steinmacher, I.; Machado, L.S.; Souza, C.R.B.d.; Prikladnicki, R. Barriers Faced by Newcomers to Software-Crowdsourcing Projects. IEEE Softw. 2017, 34, 37–43. [Google Scholar] [CrossRef]
  81. Ghezzi, A.; Gabelloni, D.; Martini, A.; Natalicchio, A. Crowdsourcing: A Review and Suggestions for Future Research. Int. J. Manag. Rev. 2018, 20, 343–363. [Google Scholar] [CrossRef]
Figure 1. Crowdsourcing entities and a representation of their task-assignment strategy.
Figure 2. Average and normalized weights of criteria.
Figure 3. Ranking and performance of alternatives.
Table 1. Fuzzy scale.

Linguistic term | Equal   | Moderate | Strong  | Very Strong | Extremely Strong
Crisp scale     | 1       | 3        | 5       | 7           | 9
Fuzzy number    | (1,1,1) | (2,3,4)  | (4,5,6) | (6,7,8)     | (9,9,9)

Intermediate values
Crisp scale     | 2       | 4        | 6       | 8
Fuzzy number    | (1,2,3) | (3,4,5)  | (5,6,7) | (7,8,9)
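For clarity, the scale can also be stated formally; the following restates the standard triangular-fuzzy-number (TFN) convention that Tables 1 and 3 follow (our summary of the scale above, not an additional assumption of the study):

\[
\tilde{a} = (a-1,\ a,\ a+1) \quad (1 < a < 9), \qquad \tilde{1} = (1,1,1), \qquad \tilde{9} = (9,9,9),
\]
\[
\tilde{a}^{-1} = \left(\frac{1}{u},\ \frac{1}{m},\ \frac{1}{l}\right) \quad \text{for } \tilde{a} = (l, m, u),
\]

so, for example, the crisp judgment 1/3 in Table 2 becomes (1/4, 1/3, 1/2) in Table 3.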
Table 2. Pairwise decision matrix.

Criteria              | Competency Purposes | Source of Inspiration | Knowledge Sharing | Socialization | Ranking | Employment | Rewards
Competency purposes   | 1   | 8   | 1/3 | 5   | 1/3 | 2   | 1/7
Source of inspiration | 1/8 | 1   | 1/4 | 2   | 3   | 1/3 | 1/2
Knowledge sharing     | 3   | 4   | 1   | 1/2 | 2   | 1/6 | 3
Socialization         | 1/5 | 1/2 | 2   | 1   | 1/8 | 2   | 3
Ranking               | 3   | 1/3 | 1/2 | 8   | 1   | 1/3 | 2
Employment            | 1/2 | 3   | 6   | 1/2 | 3   | 1   | 1/2
Rewards               | 7   | 2   | 1/3 | 1/3 | 1/2 | 2   | 1
Table 3. Fuzzified decision matrix.

Criteria              | Competency Purposes | Source of Inspiration | Knowledge Sharing | Socialization | Ranking | Employment | Rewards
Competency purposes   | (1,1,1) | (7,8,9) | (1/4,1/3,1/2) | (4,5,6) | (1/4,1/3,1/2) | (1,2,3) | (1/8,1/7,1/6)
Source of inspiration | (1/9,1/8,1/7) | (1,1,1) | (1/5,1/4,1/3) | (1,2,3) | (2,3,4) | (1/4,1/3,1/2) | (1/3,1/2,1/1)
Knowledge sharing     | (2,3,4) | (3,4,5) | (1,1,1) | (1/3,1/2,1/1) | (1,2,3) | (1/7,1/6,1/5) | (2,3,4)
Socialization         | (1/6,1/5,1/4) | (1/3,1/2,1/1) | (1,2,3) | (1,1,1) | (1/9,1/8,1/7) | (1,2,3) | (2,3,4)
Ranking               | (2,3,4) | (1/4,1/3,1/2) | (1/3,1/2,1/1) | (7,8,9) | (1,1,1) | (1/4,1/3,1/2) | (1,2,3)
Employment            | (1/3,1/2,1/1) | (2,3,4) | (5,6,7) | (1/3,1/2,1/1) | (2,3,4) | (1,1,1) | (1/3,1/2,1/1)
Rewards               | (6,7,8) | (1,2,3) | (1/4,1/3,1/2) | (1/4,1/3,1/2) | (1/3,1/2,1/1) | (1,2,3) | (1,1,1)
Table 4. Calculating FGMV.

Criteria              | Competency Purposes | Source of Inspiration | Knowledge Sharing | Socialization | Ranking | Employment | Rewards | FGMV
Competency purposes   | (1,1,1) | (7,8,9) | (1/4,1/3,1/2) | (4,5,6) | (1/4,1/3,1/2) | (1,2,3) | (1/8,1/7,1/6) | (0.805, 1.035, 1.314)
Source of inspiration | (1/9,1/8,1/7) | (1,1,1) | (1/5,1/4,1/3) | (1,2,3) | (2,3,4) | (1/4,1/3,1/2) | (1/3,1/2,1/1) | (0.449, 0.610, 0.836)
Knowledge sharing     | (2,3,4) | (3,4,5) | (1,1,1) | (1/3,1/2,1/1) | (1,2,3) | (1/7,1/6,1/5) | (2,3,4) | (0.923, 1.292, 1.739)
Socialization         | (1/6,1/5,1/4) | (1/3,1/2,1/1) | (1,2,3) | (1,1,1) | (1/9,1/8,1/7) | (1,2,3) | (2,3,4) | (0.534, 0.763, 1.037)
Ranking               | (2,3,4) | (1/4,1/3,1/2) | (1/3,1/2,1/1) | (7,8,9) | (1,1,1) | (1/4,1/3,1/2) | (1,2,3) | (0.839, 1.150, 1.601)
Employment            | (1/3,1/2,1/1) | (2,3,4) | (5,6,7) | (1/3,1/2,1/1) | (2,3,4) | (1,1,1) | (1/3,1/2,1/1) | (0.958, 1.314, 1.962)
Rewards               | (6,7,8) | (1,2,3) | (1/4,1/3,1/2) | (1/4,1/3,1/2) | (1/3,1/2,1/1) | (1,2,3) | (1,1,1) | (0.743, 1.065, 1.511)
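Each FGMV in Table 4 is the element-wise geometric mean of the seven TFNs in its row. The following minimal Python sketch (our illustration, not the authors' code) reproduces the competency-purposes row:

    import math
    from fractions import Fraction as F

    # Competency-purposes row of Table 3: seven triangular fuzzy numbers (l, m, u).
    row = [(1, 1, 1), (7, 8, 9), (F(1, 4), F(1, 3), F(1, 2)), (4, 5, 6),
           (F(1, 4), F(1, 3), F(1, 2)), (1, 2, 3), (F(1, 8), F(1, 7), F(1, 6))]

    n = len(row)
    # FGMV: element-wise geometric mean of the lower, middle, and upper bounds.
    fgmv = tuple(float(math.prod(t[k] for t in row)) ** (1 / n) for k in range(3))
    print([round(v, 3) for v in fgmv])  # [0.805, 1.035, 1.314] -- matches Table 4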
Table 5. Fuzzy weights along with normalized weights of criteria.

Criteria              | Fuzzy Weights         | Average Weights (Mi) | Normalized Weights (Ni) | Ranking
Competency purposes   | (0.080, 0.143, 0.250) | 0.158 | 0.133 | 5
Source of inspiration | (0.045, 0.084, 0.159) | 0.096 | 0.081 | 7
Knowledge sharing     | (0.092, 0.178, 0.330) | 0.200 | 0.169 | 3
Socialization         | (0.053, 0.105, 0.197) | 0.119 | 0.100 | 6
Ranking               | (0.084, 0.159, 0.304) | 0.182 | 0.153 | 4
Employment            | (0.096, 0.181, 0.373) | 0.217 | 0.182 | 2
Rewards               | (0.074, 0.147, 0.287) | 0.217 | 0.183 | 1
Total                 |                       | 1.188 |       |
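Table 5 follows from Table 4 in three steps: each criterion's fuzzy weight is its FGMV multiplied by the inverse of the fuzzy sum of all FGMVs (inverting a TFN reverses its bounds), the average weight Mi defuzzifies that triple by simple averaging, and Ni divides Mi by the column total 1.188. A minimal sketch of our own, shown for the competency-purposes row:

    # FGMVs of the seven criteria, copied from Table 4 as (lower, middle, upper).
    fgmv = [(0.805, 1.035, 1.314),  # competency purposes
            (0.449, 0.610, 0.836),  # source of inspiration
            (0.923, 1.292, 1.739),  # knowledge sharing
            (0.534, 0.763, 1.037),  # socialization
            (0.839, 1.150, 1.601),  # ranking
            (0.958, 1.314, 1.962),  # employment
            (0.743, 1.065, 1.511)]  # rewards

    lo = sum(t[0] for t in fgmv)   # ~5.251
    md = sum(t[1] for t in fgmv)   # ~7.229
    up = sum(t[2] for t in fgmv)   # ~10.000

    # Fuzzy weight: FGMV times the inverse of the fuzzy sum, which
    # reverses the bounds: (l / up, m / md, u / lo).
    l, m, u = fgmv[0]              # competency purposes
    w = (l / up, m / md, u / lo)
    M = sum(w) / 3                 # defuzzified average weight
    print(w, M)  # ~(0.080, 0.143, 0.250) and ~0.158, matching Table 5 up to
                 # rounding; Ni = 0.158 / 1.188 = 0.133, rank 5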
Table 6. Decision matrix.

Alternatives | Operating Cost | Reliability | Computational Efficiency | Detection Accuracy | Quality | Numeric Robustness | Performance
Crowd-1      | 7 | 9 | 2 | 8 | 5 | 6 | 3
Crowd-2      | 3 | 2 | 4 | 5 | 7 | 3 | 8
Crowd-3      | 2 | 5 | 3 | 6 | 9 | 7 | 4
Crowd-4      | 6 | 4 | 5 | 2 | 8 | 9 | 7
Crowd-5      | 8 | 3 | 6 | 7 | 4 | 2 | 5
Table 7. NDM.

Alternatives | Operating Cost | Reliability | Computational Efficiency | Detection Accuracy | Quality | Numeric Robustness | Performance
Crowd-1      | 0.550 | 0.775 | 0.211 | 0.600 | 0.326 | 0.448 | 0.235
Crowd-2      | 0.236 | 0.172 | 0.422 | 0.375 | 0.457 | 0.224 | 0.627
Crowd-3      | 0.157 | 0.430 | 0.316 | 0.450 | 0.587 | 0.523 | 0.313
Crowd-4      | 0.471 | 0.344 | 0.527 | 0.150 | 0.522 | 0.673 | 0.548
Crowd-5      | 0.629 | 0.258 | 0.632 | 0.525 | 0.261 | 0.149 | 0.392
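The normalized decision matrix (NDM) in Table 7 applies vector normalization to Table 6: each score is divided by the Euclidean norm of its criterion column, r_ij = x_ij / sqrt(Σ_i x_ij²). A short illustrative sketch (ours, not the authors' code):

    import math

    # Decision matrix from Table 6: rows = Crowd-1..Crowd-5, columns = the 7 criteria.
    X = [[7, 9, 2, 8, 5, 6, 3],
         [3, 2, 4, 5, 7, 3, 8],
         [2, 5, 3, 6, 9, 7, 4],
         [6, 4, 5, 2, 8, 9, 7],
         [8, 3, 6, 7, 4, 2, 5]]

    # Vector normalization: divide each column by its Euclidean norm.
    norms = [math.sqrt(sum(row[j] ** 2 for row in X)) for j in range(len(X[0]))]
    R = [[round(row[j] / norms[j], 3) for j in range(len(row))] for row in X]
    print([r[0] for r in R])  # [0.55, 0.236, 0.157, 0.471, 0.629] -- Table 7, column 1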
Table 8. Weighted NDM.

Alternatives  | Operating Cost | Reliability | Computational Efficiency | Detection Accuracy | Quality | Numeric Robustness | Performance
Weights (Ni)  | 0.133 | 0.081 | 0.169 | 0.100 | 0.153 | 0.182 | 0.183
Crowd-1       | 0.073 | 0.063 | 0.036 | 0.060 | 0.050 | 0.082 | 0.043
Crowd-2       | 0.031 | 0.014 | 0.071 | 0.037 | 0.070 | 0.041 | 0.115
Crowd-3       | 0.021 | 0.035 | 0.053 | 0.045 | 0.090 | 0.095 | 0.057
Crowd-4       | 0.063 | 0.028 | 0.089 | 0.015 | 0.080 | 0.122 | 0.100
Crowd-5       | 0.084 | 0.021 | 0.107 | 0.052 | 0.040 | 0.027 | 0.072
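Each entry of Table 8 scales the corresponding normalized value from Table 7 by its criterion weight (Ni) from Table 5:

\[
v_{ij} = N_j \, r_{ij}, \qquad \text{e.g. } v_{11} = 0.133 \times 0.550 \approx 0.073 .
\]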
Table 9. Beneficial and non-beneficial parameter identification.

Alternatives | Operating Cost | Reliability | Computational Efficiency | Detection Accuracy | Quality | Numeric Robustness | Performance
Crowd-1      | 0.073 | 0.063 | 0.036 | 0.060 | 0.050 | 0.082 | 0.043
Crowd-2      | 0.031 | 0.014 | 0.071 | 0.037 | 0.070 | 0.041 | 0.115
Crowd-3      | 0.021 | 0.035 | 0.053 | 0.045 | 0.090 | 0.095 | 0.057
Crowd-4      | 0.063 | 0.028 | 0.089 | 0.015 | 0.080 | 0.122 | 0.100
Crowd-5      | 0.084 | 0.021 | 0.107 | 0.052 | 0.040 | 0.027 | 0.072
Ij+          | 0.084 | 0.063 | 0.107 | 0.060 | 0.090 | 0.122 | 0.115
Ij−          | 0.021 | 0.014 | 0.036 | 0.015 | 0.040 | 0.027 | 0.043
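In Table 9, the ideal and non-ideal values are the column-wise extrema of the weighted matrix; note that, as published, the maximum is taken in every column, i.e., all seven criteria (including operating cost) are handled as beneficial:

\[
I_j^{+} = \max_i v_{ij}, \qquad I_j^{-} = \min_i v_{ij} .
\]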
Table 10. Ideal and non-ideal separation measures.

Alternatives | Si+   | Si−   | Si+ + Si−
Crowd-1      | 0.117 | 0.101 | 0.218
Crowd-2      | 0.118 | 0.090 | 0.208
Crowd-3      | 0.109 | 0.095 | 0.203
Crowd-4      | 0.066 | 0.137 | 0.202
Crowd-5      | 0.123 | 0.106 | 0.230
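Tables 9 and 10 connect through the Euclidean separation of each alternative from the ideal and non-ideal points, S_i± = sqrt(Σ_j (v_ij − I_j±)²). The sketch below (ours) recomputes Table 10 from the published, already-rounded values of Table 8; deviations of ±0.001 from the table are rounding artifacts:

    import math

    # Weighted normalized matrix (Table 8), as published.
    V = [[0.073, 0.063, 0.036, 0.060, 0.050, 0.082, 0.043],
         [0.031, 0.014, 0.071, 0.037, 0.070, 0.041, 0.115],
         [0.021, 0.035, 0.053, 0.045, 0.090, 0.095, 0.057],
         [0.063, 0.028, 0.089, 0.015, 0.080, 0.122, 0.100],
         [0.084, 0.021, 0.107, 0.052, 0.040, 0.027, 0.072]]
    I_pos = [max(col) for col in zip(*V)]  # ideal: column maxima (Table 9, Ij+)
    I_neg = [min(col) for col in zip(*V)]  # non-ideal: column minima (Table 9, Ij−)

    def dist(row, ref):
        """Euclidean separation of an alternative from a reference point."""
        return math.sqrt(sum((v - r) ** 2 for v, r in zip(row, ref)))

    s_pos = [round(dist(r, I_pos), 3) for r in V]
    s_neg = [round(dist(r, I_neg), 3) for r in V]
    print(s_pos[3], s_neg[3])  # 0.066 0.137 -- the Crowd-4 row of Table 10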
Table 11. Pi and ranking of alternatives.

Alternatives | Performance Score (Pi) | Ranking
Crowd-1      | 0.464 | 3
Crowd-2      | 0.433 | 5
Crowd-3      | 0.466 | 2
Crowd-4      | 0.676 | 1
Crowd-5      | 0.463 | 4
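The performance score in Table 11 is the standard TOPSIS relative closeness, so the alternative farthest from the non-ideal point (Crowd-4) ranks first:

\[
P_i = \frac{S_i^{-}}{S_i^{+} + S_i^{-}}, \qquad \text{e.g. } P_4 = \frac{0.137}{0.066 + 0.137} \approx 0.675,
\]

which matches the reported 0.676 up to rounding of the inputs.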