Article

A Study on Chatbot Development Using No-Code Platforms by People with Disabilities for Their Peers at a Sheltered Workshop

by Sara Hamideh Kerdar 1,2,*, Britta Marleen Kirchhoff 1, Lars Adolph 1 and Liane Bächler 2

1 Federal Institute of Occupational Safety and Health, 44149 Dortmund, Germany
2 Department of Rehabilitation Sciences, TU Dortmund University, 44227 Dortmund, Germany
* Author to whom correspondence should be addressed.
Technologies 2025, 13(4), 146; https://doi.org/10.3390/technologies13040146
Submission received: 11 February 2025 / Revised: 16 March 2025 / Accepted: 2 April 2025 / Published: 4 April 2025

Abstract
No-code (NC) platforms empower individuals without IT experience to create tailored applications and websites. While these platforms are accessible to a broader audience, their usability for people with disabilities remains underexplored. This study investigated whether, with targeted training, people with disabilities could effectively use NC platforms to develop customized tools for their workplace, and whether these tools would be adopted by their peers. Conducted in collaboration with a sheltered workshop in Germany, the study had three phases. Phase I involved a brainstorming session with employees, which shaped the study design and product development. In Phase II, six participants with disabilities received one week of training to develop chatbots. Phase III implemented the chatbots in the workshop. In Phase II, each participant successfully developed four chatbots, which increased the participants’ skills and motivation. In Phase III, users rated the developed chatbots highly (the System Usability Scale (SUS) questionnaire was delivered in the form of a chatbot), indicating their user-friendliness (M = 88.9, SD = 11.2). This study suggests that with appropriate training, individuals with disabilities can use NC platforms to create impactful, customized tools that are user-friendly and accessible to their peers.

1. Introduction

The development of digital technologies, such as apps and websites, requires substantial time and financial resources. However, low-code and no-code (LCNC) platforms have opened new doors to digitalization, allowing individuals with limited or no IT skills, known as ‘citizen developers’, to develop apps and websites and customize them according to their needs [1,2]. While no-code (NC) platforms offer ready-made templates and functions that can be modified to some degree, low-code (LC) platforms provide more customization options, requiring some coding for greater flexibility [3,4]. Although LCNC platforms produce “lightweight” digital products, they open new doors for business and industry [2,5], offering many opportunities in areas such as e-commerce, internal and external communication, workflow management, education, finance, and healthcare. Given the rapid growth and adoption of these platforms in the marketplace [5], more companies are looking for candidates with LCNC skills [2]. Such skills could help companies reduce their costs and dependency on IT specialists [6] and find faster solutions to their immediate needs and requirements. For example, they enable employees to create their own digital tools based on the needs of their team or even their organization [5], opening up new opportunities for job crafting.
Even though LCNC platforms are attracting attention in the job market, the literature on this subject remains limited [2]. While many articles focus on the concept of LCNC, its potential, and its challenges (e.g., ref. [7]), few studies have examined their actual implementation in the field. Rajaram et al. [8] used Microsoft Power Apps (an LC platform) to develop an educational application in which both learning and examination were possible. Also in education, Mew and Field [9] used an LC platform to further support students in a project management course. The introduction and impact of LCNC platforms have been studied in different industries, such as construction [3] and manufacturing [6], showing their potential to enhance digitalization. These platforms, coupled with artificial intelligence (AI) or augmented reality (AR), could facilitate work-related processes, such as inventory [10] or job status tracking [11]. In a case study, Takahashi et al. [12] explored the impact of LC tools on ‘Diversity, Equity and Inclusion’ within a corporate context. Employees of a company with no existing IT expertise learned the tools and developed several applications for their organization, mostly focused on “knowledge sharing”. They concluded that, while LC platforms have the potential to harness internal skills to reduce workload and costs, creating a supportive environment is essential.
People with disabilities have fewer employment opportunities [13]. In Germany, sheltered workshops for disabled people offer the possibility to learn and perform simple tasks, serving the role of a rehabilitation center. However, the transition rate from sheltered workshops to the primary labor market remains below 1% [14]. Even though it is argued that many of these individuals are unable to make this transition due to the demands of the general labor market and their disabilities [15], learning new (digital) skills could facilitate their transition and broaden their opportunities. Technologies used at work should demonstrate a high level of fit between the requirements of the task, the characteristics of the users, and the technology. According to the Task-Technology Fit (TTF) Model developed by Goodhue and Thompson, technology use and performance are also influenced by social norms and facilitating conditions, among other factors [16]. A positive impact is possible when technology is aligned with the needs of employees and the task at hand. With the TTF Model in mind, LCNC has the potential to increase the opportunities for people with disabilities (e.g., cognitive or multiple disabilities) to transition to other jobs, to increase their job variety and motivation, and to give them the opportunity to transfer knowledge to others (e.g., peers) through the design of applications. People with disabilities are also experts in their own field, and through LCNC they could have the opportunity to develop job aids (e.g., AI-based apps or chatbots) that are highly suited to their tasks. However, the potential of these platforms for employees with disabilities, especially those working in sheltered workshops, has not yet been studied.
Advancements in artificial intelligence have reshaped human–computer interaction, with chatbots playing a central role. These digital assistants engage in natural language conversations, providing instant support, improving productivity, and enhancing user experience [17]. While developing AI solutions such as chatbots can be costly [18], NC platforms provide cost-effective alternatives, offering pre-coded tools that make chatbot development accessible even to individuals without coding expertise [19]. The use of chatbots for people with disabilities has been studied in different contexts. For example, Mateos-Sanchez et al. [20] used a chatbot to teach communication skills to people with intellectual disabilities. In a systematic review on the use of chatbots for people with disabilities, only “15 of 192 papers” included the target group in their studies [21]. In a report on AI and occupational inclusion, Touzet [22] highlights 142 AI-based technologies that could assist people with disabilities in the workplace. She argues that more than 75% of these technologies would not be possible without AI. She further concludes that AI has the potential to save money and can be implemented more easily alongside different types of assistive technologies (AT). In a review, Beudt et al. [23] found that the majority of AI solutions focus on sensory disabilities (i.e., visual and hearing disabilities), while cognitive disabilities are largely overlooked. However, similar to other studies, they conclude that AI increases the chances of vocational participation for people with disabilities. Given the benefits of chatbots for people with disabilities and the advantages of NC platforms in facilitating their development, the question remains whether these platforms can support people with disabilities in both creating and utilizing chatbots.
LCNC platforms are rapidly expanding, becoming crucial tools for future innovation [24]. However, questions remain regarding their accessibility for people with disabilities, their alignment with users’ roles and needs, and the new opportunities they may offer. This study seeks to bridge this gap by exploring the role of people with disabilities in sheltered workshops as citizen developers in their workplace. Specifically, it aims to address the following aspects:
(a)
From the perspective of product development as citizen developers: Are LCNC platforms accessible and usable for people with disabilities working in sheltered workshops? Additionally, what types of training could further support them in utilizing these platforms effectively?
(b)
From the end users’ perspective: Are the products developed using LCNC platforms (chatbots in this study) accessible and user-friendly for individuals with disabilities working in sheltered workshops?
This study seeks to explore how LCNC platforms can be effectively leveraged to enhance digital and vocational inclusion, fostering greater participation in sheltered work environments.
The remainder of this paper is organized as follows: Section 2 provides an overview of the research methods, including details of the questionnaires used, the study setting, the platform and equipment used, and the phases of the study. Section 3 presents both qualitative and quantitative results. The article concludes with Section 4, which discusses the findings, and Section 5, which presents the conclusions of the study.

2. Materials and Methods

This study used a mixed methods approach. Mixed methods studies provide a deeper understanding of research questions by incorporating multiple perspectives [25,26,27]. This approach is especially valuable for detecting new insights and identifying directions for future research [27]. Since no previous studies have explored the possibilities of NC platforms for people with disabilities as citizen developers in the work environment, an exploratory approach was first adopted to assess whether these platforms could be used by people with disabilities and how training would assist them. Additionally, a quantitative approach was employed, in which participants answered a series of questions before and after the training to account for individual differences and investigate the individual impact of training and skill acquisition.
This study was ethically evaluated and approved by the ethics commission of the Federal Institute for Occupational Safety and Health (German: Bundesanstalt für Arbeitsschutz und Arbeitsmedizin (BAuA)).
To clarify the structure of the study, the participants, questionnaires used, and study phases are described in detail below.

2.1. Participants

The participants in this study were recruited from a sheltered workshop in Germany and were divided into two groups:
(1)
Participants who were trained in the use of the NC platform and developed chatbots (hereafter referred to as Developers). These individuals attended the training sessions for a week at the BAuA. A female supervisor (hereafter referred to as Supervisor 1) accompanied the participants for the first three days, and a male supervisor (hereafter referred to as Supervisor 2) accompanied them for the rest of the week. The change in supervisors was due to the fact that Supervisor 1 worked only part-time at the sheltered workshop and could not be available for the whole week.
(2)
Other sheltered workshop employees with disabilities who use the developed chatbots in their daily work at the sheltered workshop (hereafter referred to as End Users).

2.2. Questionnaires (Only for the Developers)

Based on the TTF Model, this study examined three core elements—person, technology, and task characteristics—because the alignment of these factors is believed to influence the effectiveness of technologies such as no-code (NC) platforms. To assess each core element, validated questionnaires in German were used. The questions were translated into simpler language to make them easier for the participants to understand. The questionnaires were printed with each question on a single page to avoid information overload. As all the questionnaires used Likert scales for their answers, emojis were also used for better understanding [24] (see Figure 1 for an example). The following questionnaires were used for each core component of the TTF.
  • For the Component “Task”
To evaluate the characteristics of tasks, the German version of The Work Design Questionnaire (WDQ) [28,29] was used. The WDQ items are divided into four main categories: task characteristics, knowledge characteristics, social characteristics, and work context. Each main category has between four and five subcategories, addressing a wide range of subjects, such as ergonomics, social support, and information processing. For the purposes of this study, only the following items were included, as the remaining ones were not relevant to the participants’ work tasks or the components of the research: Task Variety and Feedback from Job (from category task characteristic), Problem Solving and Skill Variety (from the category knowledge characteristics), Equipment Use (from the category work context). In total, 18 items were included.
  • For the Component “Technology”
The Digital Technology Acceptance Scale (DTAS): DTAS evaluates individuals’ acceptance of digital technology with 13 items on a 5-point Likert scale (1 = not at all applicable, 5 = fully applicable). The questionnaire is divided into four categories: Ease of Use, Perceived Usefulness, Attitude towards Usage, and Behavioral Intention to Use [30,31].
System Usability Scale (SUS): The German version of the System Usability Scale (SUS) was used to evaluate the usability of the NC platform used (Landbot: https://landbot.io/, accessed on 17 June 2024) from the Developers’ point of view [32,33]. The SUS has 10 items with a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree). The total score ranges from 0 to 100, with a benchmark (mean score) of 68 [34]. Scores of 85–100 are considered ‘the best imaginable’, 73–84 ‘excellent’, 53–72 ‘good’, 38–52 ‘ok/fair’, 24–37 ‘poor’, and 0–25 ‘the worst imaginable’ [35,36].
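The 0–100 SUS total is obtained by rescaling the ten item responses according to the standard scoring rule (odd items are positively worded, even items negatively worded). As a minimal illustration of that rule, independent of the study’s own analysis tools:

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The summed contributions are multiplied by 2.5, yielding 0-100.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses, each between 1 and 5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# A fully positive response pattern yields the maximum score:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

A neutral pattern of all 3s scores 50, just below the benchmark of 68, which is why mean scores rather than raw item values are reported.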
  • For the Component “Person”
Scale for measuring professional self-efficacy (German: berufliche Selbstwirksamkeitserwartung (BSW-5-Rev)) [37]: The BSW-5-Rev has five items with a four-point scale (1 = strongly disagree, 4 = strongly agree) and measures motivational factors and skills related to occupational self-efficacy.
Technology commitment (German: Kurzskala Technikbereitschaft (TB)) [38]: This 12-item, 5-point Likert scale (1 = strongly disagree, 5 = strongly agree) measures willingness to use technology and is divided into three categories: technology acceptance, technology competence, and technology control convictions.

2.3. Study Phases

The study took place in three main phases, which are explained in this manuscript as follows:
Phase I: study development and brainstorming (i.e., pre-study to adjust ideas tailored to the needs of the sheltered workshop).
Phase II: product development.
Phase III: implementation of the developed products in the sheltered workshop.
In the following sections, the study’s phases will be explained, detailing the methodology and data collection. For better understanding, Figure 2 provides an overview of the study.

2.3.1. Phase I: Pre-Study

Previous research indicates that successful technology implementation depends on aligning the technology’s functionality with the specific requirements of tasks, rather than forcing tasks to adapt to the technology [16,39]. Therefore, in order to adjust the research idea to the specific needs of the sheltered workshop, several steps were taken.

Step 1: Introduction of NC Platforms to the Sheltered Workshop

First, the researchers introduced the concept of LCNC to the sheltered workshop, explaining its functionalities and providing relevant examples. Following an expressed interest in developing the idea further, two field observations were conducted at different locations. The goal of these visits was to gain a deeper understanding of the variety of tasks and technologies used in the sheltered workshops, which primarily involve assembly of products, laundry, catering, and similar activities. The focus of these visits was to engage with the supervisors about the technologies in use, the tasks performed, the challenges faced, and the areas where additional support is needed.

Step 2: Collaborative Brainstorming with Employees and Supervisors in the Sheltered Workshops—Identifying Requirements and Selecting the Product and Platform

Recognizing the importance of directly asking the target group about their needs and desired support, a meeting was held with the staff of the sheltered workshops. In a large hall, employees with disabilities and their supervisors voluntarily participated in a brainstorming session. Around 20 employees and 3 supervisors were asked to consider areas where they needed support or where technology could be helpful. Employees were grouped into teams based on where they were sitting and exchanged ideas for about 15 min, during which the researchers and supervisors prompted them with questions about their work and what they would like to have to facilitate it. The presence of the supervisors was important to facilitate communication, to encourage employees to share their ideas, and to attend to them when assistance was needed. The teams then presented their ideas, which were all collected on a board. The ideas were read aloud, and participants voted on the most important ones by raising their hands. The ideas with the most votes were selected by the researchers for the study’s tasks.

Step 3: Selection of Chatbots for the Study

The choice of a platform for this study was guided by two key considerations. First, the LCNC platform needed to be user-friendly and accessible for the Developers. Second, the final solution (product developed by the Developers) had to be suitable for the specific needs of sheltered workshops while aligning with the requirements of the End Users. Therefore, based on the results of the brainstorming session that identified the needs of sheltered workshops, along with literature supporting the effectiveness of chatbots for the target group (see Section 1: Introduction), chatbots were selected as the product for development. To determine the most suitable platform, various options were evaluated based on their simplicity, ease of use, privacy policies, and features. Given the sensitivity of the data and feedback from the ethics committee, the selection was narrowed to primarily EU-based platforms. As a result, the EU-based NC platform Landbot was chosen for chatbot development.

Step 4: Recruitment

Landbot was presented to the spokesperson (Supervisor 1) of the sheltered workshop to assist with participant recruitment. The supervisor was informed of the inclusion criteria: (1) participation was voluntary, (2) participants must be cognitively capable of logical reasoning, (3) participants should be able to use a mouse and keyboard, and (4) participants must not have severe visual impairment or blindness (as some of the features, such as drag and drop, would have been inaccessible for them). The supervisor then privately spoke to the employees meeting these criteria, explaining the study’s methods and aims. Ultimately, six participants voluntarily agreed to participate, with their demographic information presented in Table 1. The disability descriptions are provided by the sheltered workshops based on official medical diagnoses. Participants did not have any training or experience in developing programs or using LCNC platforms. However, all of them had experience working with tablets, smartphones, laptops, and search engines. Additionally, the sheltered workshop staff assess participants’ education levels based on their cognitive and physical abilities, determining their eligibility for vocational training at one of the following levels [34]:
  • Task-oriented qualification (German: tätigkeitsorientierte Qualifizierung): Focuses on skills required for various tasks across one or more work areas.
  • Workplace-oriented qualification (German: arbeitsplatzorientierte Qualifizierung): Focuses on skills needed for specific workplaces within a work area.
  • Field-oriented qualification (German: berufsfeldorientierte Qualifizierung): Covers all skills required in a particular work area of the workshop.
  • Profession-oriented qualification (German: berufsbildorientierte Qualifizierung): Follows the content of a recognized professional occupation.

2.3.2. Phase II: Product Development—Developers

Prior to the study, Supervisor 1 read and explained the study information and consent form to the participants. Participants also took home a paper version so that their legal guardians (if applicable) could sign the documents.
For the training week (June 2024), participants met as usual at the sheltered workshop and then used a transfer bus to come together at the study site at the BAuA. Except for one participant who was absent on Day 2 due to illness and another on Day 4 due to missing the bus, all participants attended the training each day.

Setting

The study took place in a training room at the BAuA, designed to accommodate up to 12 participants. For the training session, there were 6 participants, each equipped with an ergonomic table, a chair, and a laptop with a mouse. Given the benefits of pair work in software development (e.g., improved quality and time efficiency [40]) and the advantages of teamwork for people with disabilities [41], the participants were grouped into three pairs. Each pair shared a tablet for accessing training instructions while working independently on their laptops. They assisted each other as needed, and if one participant finished early, they waited for their partner to complete their tasks. In case of difficulties, team members first attempted to help each other; if they could not resolve the issue, the researchers provided indirect assistance by prompting them to identify the problem and possible solutions. If participants still struggled, the researchers demonstrated where the issue lay and how to resolve it.

Training Material (For Three Pre-Defined Chatbots)

Based on the results of the brainstorming conducted during the pre-study phase (see Section 2.3.1, Step 2), three tasks for three independent chatbots were designed (hereafter referred to as pre-defined chatbots): a Vacation chatbot, a Suggestion Box chatbot, and an online Cookbook chatbot. The researchers designed the conversation flow of each chatbot, defining the questions, responses, and logical progression based on user inputs, ensuring that the chatbots dynamically guide users to different questions depending on their specific answers. Instructions for using the platform to develop these three chatbots were prepared and installed on tablets. PowerPoint was used to develop the instructions, with each slide representing a step in the process. Once participants successfully completed a step, they moved on to the next. The instructions began with screenshots of the platform, accompanied by minimal information explaining the tasks (see Figure 3 for an example). Towards the end of the instructions, or where a process was complicated, videos were prepared in which the task was again explained using arrows and minimal text. It is worth noting that, as will be explained later in Section 3, a fourth chatbot (Sickness Registration) was developed spontaneously within the same week of training, without any instructional guidance. To provide a clearer understanding of the training sessions, Figure 4 presents a detailed overview of each training day.
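A button-based conversation flow of this kind is essentially a small state machine: each node asks a question and maps each answer button to a follow-up node. The following Python sketch illustrates that structure only; the node names and texts are illustrative placeholders, not the actual content of the study’s chatbots or Landbot’s internal representation.

```python
# Hypothetical branching flow, loosely modeled on a vacation-request chatbot.
# Each node holds a question and a mapping from answer buttons to next nodes.
FLOW = {
    "start": {
        "question": "Hello! Would you like to request vacation?",
        "answers": {"Yes": "dates", "No": "goodbye"},
    },
    "dates": {
        "question": "Is your request for a single day or several days?",
        "answers": {"Single day": "confirm", "Several days": "confirm"},
    },
    "confirm": {
        "question": "Thank you! Your request will be forwarded.",
        "answers": {"OK": "goodbye"},
    },
    "goodbye": {"question": "Goodbye!", "answers": {}},
}

def run_turn(node, answer):
    """Return the next node for a given answer; stay on unknown input."""
    return FLOW[node]["answers"].get(answer, node)

node = "start"
node = run_turn(node, "Yes")   # advances to the "dates" node
```

Designing this mapping of questions, answer buttons, and transitions in advance is exactly the flow-definition work the researchers carried out before the training, which participants then reproduced visually on the NC platform.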

2.3.3. Phase III: Implementation in the Sheltered Workshop

A meeting was organized with the sheltered workshops to present the chatbots developed by the team during the training week. The development team expressed interest in showcasing their chatbots, demonstrating their functionality and potential use cases. In July 2024, the research team, the development team, and interested employees from the sheltered workshop gathered in a large hall. Slides introducing what a chatbot is and its applications were displayed on a large screen. The development team then presented live demonstrations of the chatbot flows, with both researchers and development team members available to answer questions from the audience. The employees present were also informed that they could use the chatbots at any time on the iPads and tablets available in the sheltered workshop. With the assistance of Supervisor 1, five employees from the sheltered workshop who had attended the live demonstration were recruited to test the chatbots immediately afterward. Three researchers conducted interviews with one or two participants each, using the think-aloud method [42,43], in which participants verbalized their thoughts. Afterward, the participants completed the SUS questionnaire for the chatbots they had tested, with each participant testing two or three chatbots.
At the time of writing, the chatbots are actively being used in the sheltered workshops. The SUS questions are answered at different intervals, based on the availability of Supervisor 1 and participants who have interacted with the chatbots. The SUS questionnaire is presented in a chatbot format, accessible on iPads and tablets available at the workshops. Upon the supervisor’s request, employees independently and anonymously answer the questions, without needing the research team present. This approach is effective because participants are familiar with the chatbot interface.
The sheltered workshops have two iPads, with an additional tablet loaned by the research team for the study’s duration. One tablet is designated for the online Cookbook chatbot, used exclusively by the housekeeping department, which manages cooking and recipes. For the Sickness Registration (SR) chatbot, small business card-sized flyers with QR codes were printed and distributed for employees to take home. The QR code directs users to the SR chatbot and includes a reminder to use it only for real cases, making it easier to report sickness from home. The remaining chatbots are installed on the other iPads and are accessible at familiar locations within the workshop. To ensure continued use, flyers with QR codes were printed and placed at main stations or distributed among different groups, encouraging employees to use the chatbots.

3. Results

The results related to the Developers and the End Users will be reported in separate sections (Section 3.1. Results—Developers and Section 3.2. Results—End Users).

3.1. Results—Developers

3.1.1. Qualitative Results

Day 1: The first day started with introductions: first the research team and the study, followed by the participants. The questionnaires were then distributed, and the supervisor read the questions aloud and explained any that were unclear to the participants. The researchers then introduced LCNC, including basic knowledge such as what coding is, along with examples of the platforms’ functionalities. As none of the participants knew what a chatbot was, further explanations were given using pictures and videos with examples. The pre-defined chatbots were then introduced (see Section 2.3.2). The schedule for the week was explained and questions were answered. After the lunch break, the researchers continued to explain important terms, such as what a block, a search engine, or the drag-and-drop function is.
To practice and clarify a flow to develop a chatbot, an example was given of how a chatbot could be useful for the sheltered workshop’s website. The researchers carried out a brainstorming session with the participants to decide what functions (questions and answers) this chatbot should offer. Based on these ideas, a live demonstration of how to develop a chatbot on Landbot was given. For this purpose, a laptop was connected to a large screen in the room where all participants could watch the demonstration. A researcher demonstrated step by step how to use the platform to develop this chatbot. In this phase, the participants did not practice on their laptops, but only observed how it functioned. Questions were answered and at the end of the day the participants were informed that they would start learning to develop the chatbots the next day.
Day 2: On the second day, the session began with a brief recap of the previous day, without revisiting the platform and its functionalities. Using a flipchart, the flow of the three pre-defined chatbots was outlined, emphasizing the importance of pre-defining key elements during development, such as the questions and corresponding answers. Participants were then invited to provide suggestions, which were incorporated into the chatbot flows, though the core instructions remained unchanged. After presenting each chatbot, participants expressed interest in specific projects and were subsequently divided into three groups based on their preferences.
In order for participants to learn how to use the instructions on the tablet, the first 10 slides were practiced together. Once again, the slides were shown on the room’s large screen and, one slide at a time, the participants repeated the step on the tablet and performed the task on their laptops. For example, the first slide showed how to open Firefox, and the second showed how to search for Landbot on Google. Afterwards, participants were asked to work on their own. Table 2 shows the groups and the chatbots they developed on each day.
P5 had to work alone, as P4 was absent. Participants were allowed to make minor changes in text or choose different emojis and photos. Only P5 transcribed the exact wording from the slides, completing her first chatbot in approximately 40 min. While testing her chatbot, minor spelling and punctuation errors were found, but the overall flow was correct. In contrast, the other participants took more time, creating their own sentences and adding emojis. Due to P5’s speed and proficiency in following the instructions, additional features and improvements were introduced to her. She grasped these quickly, asking the researchers for assistance when needed. During the development of her second chatbot, ‘Suggestion Box’, P5 again followed the instructions on the tablet efficiently, finishing within 30 min. In a discussion, it was agreed to add a new feature, an email notification function, whereby the responsible person receives an email each time the ‘Suggestion Box’ chatbot is used. Although this feature had only been taught during the Vacation chatbot, which she had previously developed, P5 initiated the process independently without needing the tablet for guidance. She required minimal help and successfully implemented the new feature.
P1 and P6 (group 1) worked well together as a team. It is important to mention that once the basic functions of Landbot have been learnt, the steps become repetitive; for example, with the “button” function it is possible to create many types of questions. Therefore, after a while, the prepared slides might look repetitive. P1, for instance, was able to perform tasks like creating buttons and editing text with less reliance on the slides, remembering the steps on her own. While her initial attempts were not perfect, she referred back to the slides to refine her process. Notably, some participants were quicker and accomplished more than what the instructions outlined. For example, P1 finished developing the first recipe of the online Cookbook and asked to continue developing more recipes. She independently googled the required photos, saved them on the laptop, and successfully developed the second recipe. She encountered only one issue, where her flow did not appear in the test version; with the researcher’s assistance, she identified the mistake and resolved it independently.
In contrast to group 1, P2 and P3 (group 2) mostly worked individually. They initially started developing on their own, without using the tablet or talking to each other. The researchers had to spend time guiding them, encouraging them to follow the slides step by step and to wait for each other during the process. This was important: without following the instructions, they became stuck, did not know what to do next, and would not learn the different features of Landbot. P3 was very focused on wording, often questioning what the best question or answer would be. He rewrote sentences multiple times, even when he could simply have copied the text from the slides. This usually took most of his time. However, he had no difficulty creating the flow and was able to do so easily. It was observed that most of the participants would ask questions when they did not know something, had difficulties, or encountered errors. P2 and P3, however, never initiated a question; rather, the researchers or Supervisor 1 had to ask them if they had questions.
Overall, participants seemed to have fun developing chatbots and were very excited. Even though a 15 min break was announced, P3, P1, and P6 resumed their tasks after 10 min. After the break, within one hour, everyone had finished developing their first chatbots. The participants then started to develop their second chatbot, following the instructions on the tablet. By the end of the day, the other participants did not have time to finish their second chatbots; however, they explicitly stated that they wanted to finish them the next morning. Shortly before concluding the second day, participants were asked about the day’s experience. P6 found the work very tiring in the long run. P1 stated that at first it was difficult to focus, but she found the instructions very helpful; she added that if tasks are repeated often, she does not need instructions, and she found the slides with photos better than the ones with videos. P5 found the task quite easy; she did not even notice the videos and did not need them. P3 said he had fun working on the tasks, and P2 said it was difficult to focus but that he had fun and wanted to continue the next day. For him, it did not matter whether the instructions used videos or photos.
Day 3: On the third day, the session began with a brief greeting, and the participants immediately resumed their work from the previous day. Everyone remembered their tasks and continued working independently. At this stage, everyone was developing the Vacation chatbot.
Since P4 was absent the previous day and P5 had already developed two chatbots, P5 was asked to assist P4 in learning to create her first chatbot. The tablet was again used for instructions. P4 followed the slides, with P5 providing guidance only when necessary. It was observed that P5 was able to support P4 independently, without any help from the researchers. P4’s chatbot had one issue: a block was not properly connected to the next, disrupting the flow. Unable to identify the problem, P4 worked with the researcher, who helped her review the flow until she recognized the issue. When asked for a solution, P4 successfully resolved it on her own. After she developed her second chatbot, the researchers asked P4 about her experience so far. She found the chatbot development enjoyable. Regarding the instructions, she preferred the photos, as she felt the videos moved too quickly. While she considered the instructions somewhat overwhelming, she still found them helpful. Overall, she did not find the practice exhausting or demanding.
When testing her chatbots, P1 was usually able to identify problems on her own, but she often needed help to understand why an error had occurred. Typically, the researchers would check with her and identify the problem, while P1 would solve it herself. Testing the chatbot together and thinking aloud about the problem proved a helpful way for the participants to recognize their mistakes and fix them; in this way, participants generally understood where the problem lay.
To develop the email function (mentioned in the Day 2 section), each question and answer needs a variable that defines it. These variables are then placed in the body of the email at the end of the flow. For example, if the chatbot asks where the problem is, a variable such as @Department should be defined and later inserted into the email body. All participants struggled with the concept of variables. The researchers had to assist them in creating several variables they had overlooked, indicating a lack of attention to the slides. To enhance their understanding, the email drafts were displayed on the large screen in the room so that participants could see what their emails looked like and assess their coherence. After this clarification, everyone completed the development of their Vacation chatbots within 90 min. Only P2 and P3 had problems with the variables that they could not solve without the researchers’ help, and they decided to work on their improvements instead of starting a new chatbot. The rest of the participants started a new chatbot (P4—second chatbot (online Cookbook); P5—online Cookbook; P1 and P6—third chatbot (Suggestion Box)). P1 and P6 had finished developing their third and final chatbot by the lunch break. While P6 decided to go on social media on his phone until the lunch break, P1 continued to develop her third recipe (of the online Cookbook). This meant that she could independently find her chatbot on the Landbot dashboard and start a new recipe.
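The mechanism the participants struggled with, capturing each answer in a named @variable and substituting it into the email body, can be illustrated with a minimal Python sketch. The function, variable names, and template below are illustrative assumptions for clarity, not Landbot’s actual API:

```python
# Illustrative sketch only: Landbot stores each answer in a named variable
# (written @Department, @Name, ...) and substitutes it into the email body.
# The helper and the example values below are hypothetical.

def render_email(template: str, answers: dict) -> str:
    """Replace each @Variable placeholder with the captured answer."""
    for name, value in answers.items():
        template = template.replace(f"@{name}", value)
    return template

answers = {
    "Name": "P5",
    "Department": "Housekeeping",
    "Suggestion": "More recipes in the Cookbook chatbot",
}

body = render_email(
    "New suggestion from @Name (@Department): @Suggestion",
    answers,
)
print(body)
# New suggestion from P5 (Housekeeping): More recipes in the Cookbook chatbot
```

A variable that is forgotten during the flow simply leaves the raw @placeholder in the email, which is why displaying the email drafts on the large screen made the missing variables immediately visible.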

Developing a New Chatbot Without Instruction (The 4th Chatbot)

After the lunch break, since the majority of the participants had developed three chatbots, the researcher asked the participants which chatbot was their favorite and why. The Vacation chatbot was mentioned most often, because it was the easiest to create and the topic of vacation interested them. A brainstorming session was then held to design a new chatbot for Sickness Registration (SR), as the participants had already shown interest and thought such a chatbot would be helpful because anyone could use it from home. To start the process, the researcher drew the process flow on a flipchart while the participants shared their ideas. For example, the researcher asked what the first step should be, and the participants suggested a greeting message; they then suggested asking for the person’s name. This brainstorming continued until a concrete flow was defined that everyone found helpful and complete. After a short break, everyone began developing the SR chatbot. Shortly thereafter, P6 announced that he had finished. However, when he attempted to test his chatbot, he discovered a mistake that prevented it from being tested. The researchers guided him in identifying the problem, after which he was able to work independently on resolving it. P1 also had problems with the email variables (how to define them clearly, step by step), and a researcher had to help her; however, she had no problems with the flow or with using the appropriate Landbot functions (e.g., button, message, etc.). P2’s main difficulty was connecting different blocks and following the logic. Without help from the researchers, he could not finish his work or move on.
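A flow of the kind drafted on the flipchart, a greeting, a series of questions, and a closing message, can be sketched as a simple chain of blocks in which each question stores its answer in a variable. This Python sketch is purely illustrative: the block types, wording, and `run_flow` helper are hypothetical and only mimic how connected Landbot blocks behave.

```python
# Hypothetical sketch of a Sickness Registration flow: a chain of blocks,
# each question storing its answer in a variable, mirroring how Landbot
# blocks are connected. Block texts and variable names are illustrative.

FLOW = [
    ("message", "Hello! This is the sickness registration chatbot.", None),
    ("question", "What is your name?", "Name"),
    ("question", "Which group do you work in?", "Group"),
    ("question", "Until when are you on sick leave?", "Until"),
    ("message", "Get well soon! Your group leader will be notified.", None),
]

def run_flow(flow, scripted_answers):
    """Walk the blocks in order, collecting answers into variables."""
    answers, variables = iter(scripted_answers), {}
    for kind, text, var in flow:
        if kind == "question":
            variables[var] = next(answers)  # in the real chatbot: user input
    return variables

print(run_flow(FLOW, ["P6", "Housekeeping", "Friday"]))
# {'Name': 'P6', 'Group': 'Housekeeping', 'Until': 'Friday'}
```

A block that is not connected to the next, the mistake P6 and P4 ran into, corresponds here to a break in the chain: the walk stops early and later variables are never filled, which is why the test run fails.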
It should be noted that the participants developed the 4th chatbot based on the knowledge and experience of the previous days. The tablets were collected, and they did not have access to the instructions. However, once again, the problem with the variables and the email function occurred for all participants. To clarify this better, the researchers explained the concept, step by step, on the large screen of the room. The participants were then asked to continue working. Despite this explanation, some of the participants struggled to implement the function correctly. P6, for example, expressed that he found it easier with the instructions and the slides. Without them, when something is demonstrated or explained, he immediately forgets it.
Once again, the participants were so absorbed in their work that they had to be reminded several times that the workshop was over. When asked if the day had been difficult for them, they all said no and expressed that they had enjoyed it. The participants were usually very involved in their work and did not even ask for a break, commenting that the time went by very quickly compared to the time in the sheltered workshops.
At the end of the third day, after the participants had left, for further planning and performance observation, the researchers reviewed all four developed chatbots of each participant and rated them from 1 (very bad) to 5 (very good). The evaluation criteria included the following: (1) adherence to the steps defined in the flow, (2) correct connection of blocks, (3) coherence and clarity of the text, (4) functionality of the email feature, and (5) overall operational status of the chatbot. These criteria were used to ensure a structured and consistent assessment. Table 3 presents the results of this evaluation.
Day 4: On the fourth day, P5 was absent, and a new supervisor (Supervisor 2) accompanied the participants. The day began by asking the participants about their experience of developing a chatbot without the instructions. They expressed that, overall, it was fun. P2 said that it was a bit difficult for him at the beginning, but after some time and practice he could do it. P6 agreed and added that he would like to have the instructions with him at all times so that he could refer to them when needed, and that he would not mind whether the instructions were on a tablet or on paper. P1 said she had no problems or difficulties. P3 said that everything was easy for him, but the email function was challenging. P4 agreed with these statements.
Since the second goal of the study was to implement the chatbots in the sheltered workshop, chatbots that were fully functional needed to be selected. Four chatbots were developed by each participant, and the best-performing chatbot in each category was selected through voting. By fully functional, it is meant that the chatbot was able to complete its tasks, where questions led to correct answers and the chatbot worked without problems from start to finish. The email function was also an essential part of a fully functional chatbot, ensuring that the person responsible received a suitable email each time the chatbot was used. This was particularly relevant for the Vacation, Sickness Registration, and Suggestion Box chatbots, where sending an email to the person responsible was necessary for their effective use in the sheltered workshop. To select the chatbots for implementation, group testing was conducted for each developed chatbot, with participants actively engaging in the testing process. To ensure that the chatbots could be clearly seen by everyone, testing was carried out on a large screen. The Vacation chatbot was tested first, followed by each participant’s chatbots in their current state. After each chatbot was tested, everyone was asked what the errors or mistakes were, or what could be improved. Many errors, such as issues in the flow or missing blocks, were identified either by the participants themselves or by their peers. However, no one paid attention to spelling and punctuation, so the researchers had to emphasize this issue. As a tool to improve their writing, DeepL Write (https://www.deepl.com/de/write (accessed on 17 June 2024)) was introduced; however, none of the participants used it, and they continued to ask the researchers instead.
During this testing process, the researchers kept emphasizing that mistakes and errors happen during development and that it is important to test the chatbots and ask peers for feedback. Participants were assured that mistakes are not bad and that they happen to researchers as well. Once everyone had received their feedback and the list of needed improvements, time was given to implement them. Most of the problems were again related to the variables and the email function. One last time, one of the researchers explained the concept of a variable in a different format (using the flipchart to draw examples from daily life) and how it would be used in the email. Although improvements were observed, P3 said that he knew what the problem was but did not know what to do or how to fix it. A researcher walked him through his email construction step by step, which helped him understand and implement the changes; in the end, he was able to do it himself without any help. When he sent an email to be checked, he would even suggest improvements in spelling and punctuation. It is worth mentioning that only P1 was able to formulate a perfect email at the end, without any problems or help. When the researcher pointed out her mistakes, she would usually say herself what improvements should be made. Again, participants were so absorbed in their tasks that they did not take a break; when asked, they said they did not need one and continued working.
The initial plan was to view everyone’s chatbot on the large screen and then vote on it after it had been improved. However, due to time constraints, the participants were asked which of their chatbots should be viewed. The participants were given five minutes to decide whether they wanted their chatbot to be tested and later voted on. Initially, P4, P2, and P3 had to be convinced to volunteer because they were too shy or did not trust themselves; after one round of testing the chatbots, however, they also volunteered. Those who volunteered sat in front of their laptops and showed the others how their chatbots worked. Once all of the volunteers’ chatbots had been reviewed, all the participants, including Supervisor 2 and the researchers, voted on which chatbot should be implemented in the sheltered workshop. The names were written on the flipchart, and everyone put a colored dot in front of the name they thought should be selected. The chatbots with the most points were chosen (Table 3).
The day was ended with a few short feedback questions as follows:
  • What was good about this week? Everyone answered “everything” in unison. When asked to elaborate, P3 said “developing chatbots”, P1 said “learning something new”, P2 said “the Vacation chatbot was good”, and P4 and P6 said “everything”.
  • What did I learn? P3 said: “Put yourself in the developers’ shoes and do other tasks for a change”.
  • What could be improved? Everyone unanimously said “nothing”, apart from P2, who said “more Emojis”.
  • What do you take with you? P1 said “new skills” and P3 said “experience to show the others”.
  • What was neglected? P6 said “the breaks” and P3 immediately said “no, the breaks were good, I like it better here than in the workshops, much quieter, many of us go home [from the sheltered workshop] with a headache, I can concentrate well here”. P1 added “[at the sheltered workshop] there is a lot of running around, no concentration”, P2 added “yes, here is nice”, and P1 continued “yes, here you can work well”.
Since P4 did not answer any of the questions, the researcher asked her what she said when she went home. She replied, “I told them what we do, what we develop, that we developed chatbots, that we can write”. P2 added that he also explains the same things when he goes home. P6 said that he is happy that he can do other things instead of annoying everyone. P3 also added that he “got feedback at home that he is capable of doing more and that his parents saw him on the official job market”. Finally, the researcher announced that the work for the day was finished, to which P1 said “sadly”.
Day 5: As this was the last day of the training, the participants first answered the same questionnaires as on the first day. A researcher read the questions aloud, and explanations were offered if the questions were not clear to the participants. Afterward, the participants prepared their selected chatbots for implementation in the sheltered workshop. The participants were again divided into groups to speed up the process and to use each other’s feedback to improve the selected chatbots; P5 and P2 worked together to improve the Suggestion Box chatbot, P4 and P3 worked together on the Vacation chatbot, and P1 and P6 worked together on the Cookbook chatbot. Since the SR chatbot was almost finished, P1 only spent 2–3 min on it, making minor improvements.
P2 and P5 also needed to improve the email function, but because they had difficulty with the steps and variables, they requested to use the instructions on the tablet again. Although the topic of the instructions (Vacation chatbot) was different from their current chatbot, they were still able to complete the task. There were some spelling and grammatical issues, and since the participants were unable to correct them, DeepL Write was reintroduced for assistance. Still, the researchers had to point out the mistakes so that the participants could use DeepL Write to see the correct version; they did not take the initiative to check whether all their sentences were correct. At the end of the day, the participants expressed their willingness to introduce their chatbots to their peers at the sheltered workshop when the implementation appointment took place.
A conversation that took place during the break is worth mentioning. A researcher exchanged ideas with P1 and P6, asking how else a chatbot could be helpful for the sheltered workshops. P5 said that a chatbot could look up internal contact information, such as a phone number, so that they would not have to call and ask the Central Division first; sometimes the Central Division also has difficulty finding the number or the person. The researcher asked whether a chatbot would be helpful for showing task steps (how to do a task step by step). P5 thought it would be especially helpful because people keep asking her these questions and she has to help them, even though she has her own tasks and cannot help them all the time. If a chatbot could show them the steps of the tasks, she could refer them to the chatbot and save time. They were asked if they felt confident developing a new chatbot without any help. They said that if there were instructions on the tablets (as detailed as the ones used in the study), they would not have a problem. When asked about the flipchart and idea planning, they both said it would be helpful for orientation and guidance.

3.1.2. Quantitative Results

Due to the small sample size, only descriptive analysis was performed on the questionnaires. Table 4 shows the descriptive analysis of the scores.

System Usability Scale (SUS)

The developer team’s average SUS score for Landbot was 70.83 (SD = 13.66), with the lowest being 55 and the highest being 87.5.
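For reference, scores such as these follow the standard SUS scoring procedure: each of the ten items is rated 1–5, odd-numbered (positively worded) items contribute the rating minus 1, even-numbered (negatively worded) items contribute 5 minus the rating, and the sum is multiplied by 2.5 to yield a 0–100 score. A minimal sketch of this calculation:

```python
def sus_score(responses):
    """Standard SUS scoring: 10 items rated 1-5. Odd items contribute
    (rating - 1), even items (5 - rating); the sum is scaled by 2.5
    to give a 0-100 score."""
    assert len(responses) == 10
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# A respondent who fully agrees with every positive item and fully
# disagrees with every negative item reaches the maximum score:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Neutral answers (all 3s) yield exactly 50, which makes clear that the team’s mean of 70.83 sits well above the neutral midpoint of the scale.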

3.2. Results—End Users

3.2.1. Qualitative Results

In the first two months, chatbot usage was minimal, primarily due to the summer holidays in Germany. During this time, many participants were either on vacation or engaged in internships. As a result, the Vacation Chatbot was seldom utilized initially, since vacations had already been approved. However, since the end of the summer holidays, the usage of the Vacation Chatbot has increased. Despite this growth, it has not entirely replaced the paper version that was previously used at the sheltered workshops.
The Suggestion Box chatbot was the least used, despite being requested by employees. After discussing this with Supervisor 1, it was concluded that users need more explanation and time to become accustomed to using it: currently, users go directly to their group leaders when they have problems. Although using the chatbots could reduce the group leaders’ workload, developing a culture of using the chatbot takes time and practice. Moreover, according to Supervisor 1, there were not many problems to report at the time of the study.
The Cookbook chatbot started with two recipes. The idea was welcomed by the housekeeping team and their group leader, so P1 was asked to add more recipes to the chatbot. For example, as the group makes waffles every week for all the staff, they found it helpful to have the recipe. With the help of the group leader and Supervisor 1, photos and measurements were taken, and P1 successfully added the new recipe independently (without any instruction or help from the research team). It is important to note that, at this time, Landbot had released an update that changed the position of the features. According to Supervisor 1, P1 noticed this immediately and, after some time, was able to adapt and continue the development. One challenge was a malfunctioning translator plugin, which meant that Supervisor 1 had to translate for P1.
The SR chatbot is the most used of the chatbots, as it has made it easier for employees to report sickness when they are unable to come to work. This chatbot is the most praised by the personnel as it has reduced their workload. The sheltered workshop is considering the possibility of using this chatbot in all its branches, for all employees and members of staff.

3.2.2. Quantitative Results

SUS results for chatbots: As of now, 14 valid answers have been collected regarding the usability experiences of the developed chatbots implemented in the sheltered workshop. Table 5 shows the demographic information and SUS results.

4. Discussion

One criticism of knowledge transfer is that insights gained from research often fail to be effectively applied in practice [44]. However, there are various strategies to bridge this gap. One effective approach is to collaborate directly within real (work) environments alongside the research’s target group, ensuring that findings are seamlessly integrated into practice and continue to have an impact even after the research concludes [45]. The aim of this study was to give people with disabilities working in sheltered workshops the opportunity to acquire new skills and transfer them to their working environment. To this end, six participants with cognitive disabilities underwent a week-long training to learn how to develop chatbots tailored to their needs and job requirements. In this way, the knowledge gained from the training would not be lost and could be transferred to their peers at the sheltered workshop. In parallel with the knowledge transfer, there was also a peer-to-peer transfer: the Developers introduced the chatbots to their colleagues, who now use them on a daily basis. At the time of writing, the chatbots have been successfully implemented and are being used in the sheltered workshops. Furthermore, the Developers continue to develop the chatbots independently during their work and thus do not forget how to use the NC platform.
One of the primary questions in this study was whether NC platforms are accessible to people with disabilities. The answer, however, is not straightforward, as multiple factors need to be considered. On one hand, the NC platform used in this study was intuitive enough for users to recall key functions and work somewhat independently. However, the significance of targeted training for individuals with cognitive disabilities, particularly those without prior IT experience, cannot be overlooked. In this study, a range of training methods were employed to gradually teach participants how to use the platform. The combination of traditional teaching methods, such as live demonstrations, with modern approaches, including the use of tablets and other interactive technologies, proved to be particularly effective. According to previous research, providing immediate feedback during active learning sessions has been shown to be beneficial in teaching new skills to individuals with cognitive impairments [46,47]. In the current study, direct feedback was provided by reviewing the chatbot’s flow with participants and identifying key issues. With this guidance, participants were able to apply their training and resolve the problems effectively. Furthermore, peer learning and guidance from educators have also been found to be highly effective for this group [46]. In this study, it was observed that participants’ performance improved when group members communicated and collaborated more effectively. Additionally, having a representative from the sheltered workshop (Supervisor 1), who had deep knowledge of the employees and their tasks, was important and highly beneficial. She actively encouraged and recruited participants, while also providing valuable feedback and support during close collaboration with them. This finding aligns with the study by Schaap et al. [48], which investigated the effects of training for supervisors of people with disabilities in the workplace.
Their participants emphasized that having a “good” supervisor—someone who values them and communicates clearly—is crucial and highly supportive. Similarly, Frogner et al. [49] demonstrated the critical role supervisors play in sheltered workshops, noting that they understand the specific needs and abilities of sheltered workers and maintain strong daily working relationships with them. However, it is important to consider that introducing new technologies and tasks could lead to supervisor overload, given the substantial responsibilities they already manage in sheltered workshops.
In her report on AI and occupational inclusion, Touzet [22] explained that minimal support, such as a reliable internet connection and access to appropriate tools, is a prerequisite for effectively using technology. In this study, for instance, P1 faced challenges in developing additional recipes for the Cookbook chatbot at the sheltered workshop due to a weak internet connection and laptop performance. According to Supervisor 1, they often had to be patient and reload the webpage to continue their work. This instability also affected the translation plug-in, requiring Supervisor 1 to take over the translation tasks. Consequently, P1 became dependent on Supervisor 1’s availability, despite being capable of managing the tasks independently. Therefore, appropriate funding should be allocated to the sheltered workshop to facilitate the normalization of digitalization within its operations. Additionally, regarding the limitations of LCNC platforms, it has been noted that these platforms often face constraints in terms of customization and the variety of available features. This study acknowledges this limitation. P1 encountered challenges while working in the back-end of Landbot to develop additional recipes for the Cookbook chatbot. As more recipes were added, there was less space available to fit images and steps, which led to increased frustration.
Binzer and Winkler [2] argue that LCNC users should possess a strong professional and academic background, emphasizing that successful use of LCNC tools relies on collaboration, effective communication, and a solid understanding of business functions. This study both agrees and disagrees with this perspective. On one hand, a deep understanding of the workplace was essential, as the chatbots developed needed to meet the specific needs of the sheltered workshop. The Developers’ insights and experiences were crucial for the successful implementation of these chatbots, particularly since they are used daily, demonstrating their alignment with the needs of the sheltered workshop employees. On the other hand, this study showed that individuals with disabilities, despite lacking a strong academic or professional background, can effectively use an NC platform with appropriate training and support. However, it should be noted that cases involving LC or more complex platforms may differ, as they may require more advanced IT skills. This highlights the need for further studies with larger sample sizes and additional LCNC platforms. Touzet further notes that “IT literacy” plays a crucial role in utilizing AI-based technologies [22]. The participants in the current study, despite lacking an IT background, were familiar with working with laptops, allowing them to quickly grasp the new features of Landbot. This may be attributed to the younger generations, who have grown up using these technologies and might find it easier to learn new tools, as they likely develop this literacy from an early age.
The second perspective of this study focused on the viewpoints of End Users, namely the employees at the sheltered workshop who have been using chatbots developed by their peers. According to the SUS scores and occasional feedback from Supervisor 1, employees have expressed satisfaction with their use. The chatbots have been tailored to meet the specific needs of the sheltered workshop, making them well-received by the End Users as well as the supervisors. One notable example is the SR chatbot. Prior to its introduction, employees often relied on their caregivers to notify the workshop’s staff when they could not attend work due to illness, a situation that for some individuals frequently arose because of their disabilities. The chatbot has significantly streamlined this process for both employees and staff, leading to discussions about its implementation across all branches. However, it was observed that certain changes require time and effort. For instance, participants have not been using the Suggestion Box chatbot as frequently as expected, and some are using it for unintended purposes. Implementing these changes will necessitate a shift in the organizational culture, along with additional time and resources. These findings are consistent with the study by Oldman et al. [50], which examined the long-term process of transforming sheltered workshops into evidence-based supported employment programs. Their research highlighted critical factors for successful change, such as motivated leadership, adequate financial resources, and a recognized “need for change” among stakeholders. The literature review by Nettles [51] corroborates and builds upon these findings, underscoring additional factors crucial to successful change, such as the roles of families and supervisors in driving the process.
This study has its own limitations. Firstly, the sample size was relatively small, and participants (both Developers and End Users) were selected based on the judgment of Supervisor 1, which introduces a potential bias. Secondly, the research was conducted using only one NC platform, which was limited in its features and capabilities, focusing exclusively on the development of chatbots. There are numerous other LCNC platforms available that offer more complex functionalities and a wider array of products. For example, during the brainstorming phase of the study, the development of an intranet for sheltered workshop employees was discussed. This intranet would allow participants to stay updated on internal news, submit vacation requests, and manage workflows. While such intranet systems could be developed using LCNC platforms, they may require a deeper understanding of programming or more advanced technical knowledge. By not exploring these alternatives, the study may not fully capture the potential benefits and challenges of various LCNC platforms. Furthermore, studies have shown that LCNC platforms present their own set of challenges. For instance, while there is potential to integrate products developed using LCNC platforms into existing systems and tools within companies [18], they are not universally compatible with all systems [52]. In the current study, the chatbots developed using the NC platform were not integrated into the existing systems used in the sheltered workshop, and it remains unclear whether their implementation would create additional challenges or benefits for the workshop. Future research could explore these integrations to assess their long-term feasibility and impact within organizations. In this context, the policies of individual organizations need to be considered. 
While the implementation of such platforms in the workplace could promote digital inclusion and improve relevant policies, the ethical implications of AI-based LCNC platforms need to be addressed, especially when sensitive personal information is involved. Another limitation of the current study relates to data policies and privacy requirements of LCNC platforms. As previously mentioned, given the sensitivity of the data in the study, platforms were selected based on compliance with EU regulations and the policies of the sheltered workshop. Consequently, the number of platforms available for selection was limited. Future studies would benefit from a larger and more diverse sample, including participants from other sheltered workshops and the exploration of more advanced LCNC platforms for the development of a wider range of products.

5. Conclusions

This study addressed multiple factors that influence the development and implementation of new technologies for people with disabilities in the workplace. By examining tools that are customized to users’ specific needs, designed by peers, and used on a daily basis, the research offers a comprehensive perspective on inclusion, accessibility, and employment. In line with the existing literature [53], acquiring new skills, such as becoming a citizen developer, can open up new career opportunities, and people with disabilities are no exception. The findings suggest that, with proper training and support, individuals with disabilities can gain new skills and become citizen developers within their organizations. These capabilities have the potential to reduce staff workloads, accelerate digital transformation, and facilitate the transition of people with disabilities into the primary labor market. Usability assessments revealed that End Users found the products intuitive and beneficial on multiple levels. While additional customization to meet individual needs is still required, having in-house developers can simplify the process of development and refinement. The results underscore the potential of LCNC platforms and suggest that future research could explore their capabilities with larger samples and more advanced platforms.

Author Contributions

Conceptualization, S.H.K. and B.M.K.; methodology S.H.K. and B.M.K.; validation, S.H.K.; formal analysis, S.H.K.; investigation, S.H.K. and B.M.K.; project administration, S.H.K.; resources, S.H.K.; writing—original draft preparation, S.H.K.; writing—review and editing, S.H.K., B.M.K., L.A. and L.B.; visualization, S.H.K.; supervision, B.M.K., L.B. and L.A. All authors have read and agreed to the published version of the manuscript.

Funding

This study is part of a project of the Federal Institute for Occupational Safety and Health in Germany. No external funding was received.

Institutional Review Board Statement

This study was ethically evaluated and approved by the ethics commission of the Federal Institute for Occupational Safety and Health (German: Bundesanstalt für Arbeitsschutz und Arbeitsmedizin (BAuA)-083_2024); approval date: 23 May 2024.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The datasets generated during and/or analyzed during the current study are not publicly available due to the privacy rights of the participants.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LCNC: low-code and no-code platforms
LC: low-code
NC: no-code
SR: Sickness Registration chatbot
SUS: System Usability Scale

References

  1. El Kamouchi, H.; Kissi, M.; El Beggar, O. Low-code/No-code Development: A systematic literature review. In Proceedings of the 2023 14th International Conference on Intelligent Systems: Theories and Applications (SITA), Casablanca, Morocco, 22–23 November 2023; pp. 1–8. [Google Scholar]
  2. Binzer, B.; Winkler, T. Low-Coders, No-Coders, and Citizen Developers in Demand: Examining Knowledge, Skills, and Abilities Through a Job Market Analysis. In Transforming the Digitally Sustainable Enterprise: Proceedings of the 18th International Conference on Wirtschaftsinformatik, Paderborn, Germany, 18 September 2023; Springer: Cham, Switzerland, 2023. [Google Scholar]
  3. Martinez, E.; Pfister, L. Benefits and limitations of using low-code development to support digitalization in the construction industry. Autom. Constr. 2023, 152, 104909. [Google Scholar] [CrossRef]
  4. Woo, M. The Rise of No/Low Code Software Development—No Experience Needed? Engineering 2020, 6, 960–961. [Google Scholar] [CrossRef] [PubMed]
  5. Lebens, M.; Finnegan, R.; Sorsen, S.; Shah, J. Rise of the Citizen Developer. Muma Bus. Rev. 2021, 5, 101–111. [Google Scholar] [CrossRef] [PubMed]
  6. Sanchis, R.; García-Perales, Ó.; Fraile, F.; Poler, R. Low-Code as Enabler of Digital Transformation in Manufacturing Industry. Appl. Sci. 2020, 10, 12. [Google Scholar] [CrossRef]
  7. McHugh, S.; Carroll, N.; Connolly, C. Low-Code and No-Code in Secondary Education—Empowering Teachers to Embed Citizen Development in Schools. Comput. Sch. 2023, 41, 399–424. [Google Scholar] [CrossRef]
  8. Rajaram, A.; Olory, C.; Leduc, V.; Evaristo, G.; Coté, K.; Isenberg, J.; Isenberg, J.S.; Dai, D.L.; Karamchandani, J.; Chen, M.F.; et al. An integrated virtual pathology education platform developed using Microsoft Power Apps and Microsoft Teams. J. Pathol. Inform. 2022, 13, 100117. [Google Scholar] [CrossRef]
  9. Mew, L.; Field, D. A Case Study on Using the Mendix Low Code Platform to Support a Project Management Course. 2018. Available online: https://www.researchgate.net/publication/329488415_A_Case_Study_on_Using_the_Mendix_Low_Code_Platform_to_support_a_Project_Management_Course (accessed on 3 April 2025).
  10. Jauhar, S.K.; Jani, S.M.; Kamble, S.S.; Pratap, S.; Belhadi, A.; Gupta, S. How to use no-code artificial intelligence to predict and minimize the inventory distortions for resilient supply chains. Int. J. Prod. Res. 2024, 62, 5510–5534. [Google Scholar] [CrossRef]
  11. Konin, A.; Siddiqui, S.; Gilani, H.; Mudassir, M.; Ahmed, M.H.; Shaukat, T.; Naufil, M.; Ahmed, A.; Tran, Q.H.; Zia, M.Z. AI-mediated Job Status Tracking in AR as a No-Code service. In Proceedings of the 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), Singapore, 17–21 October 2022. [Google Scholar]
  12. Takahashi, N.; Javed, A.; Kohda, Y. How Low-Code Tools Contribute to Diversity, Equity, and Inclusion (DEI) in the Workplace: A Case Study of a Large Japanese Corporation. Sustainability 2024, 16, 5327. [Google Scholar] [CrossRef]
  13. Teborg, S.; Hünefeld, L.; Gerdes, T.S. Exploring the working conditions of disabled employees: A scoping review. J. Occup. Med. Toxicol. 2024, 19, 2. [Google Scholar] [CrossRef]
  14. Schachler, V.; Schlummer, W.; Weber, R. Zukunft der Werkstätten. Perspektiven für und von Menschen mit Behinderung zwischen Teilhabe-Auftrag und Mindestlohn; 2023. Available online: https://www.pedocs.de/frontdoor.php?source_opus=26510 (accessed on 3 April 2025).
  15. BAG WfbM. BAG WfbM zu aktuellen Medienberichten über Werkstätten für Behinderte Menschen 2021. Available online: https://www.bagwfbm.de/article/5199 (accessed on 3 April 2025).
  16. Goodhue, D.L.; Thompson, R.L. Task-Technology Fit and Individual Performance. MIS Q. 1995, 19, 213–236. [Google Scholar] [CrossRef]
  17. Matic, R.; Kabiljo, M.; Zivkovic, M.; Cabarkapa, M. Extensible Chatbot Architecture Using Metamodels of Natural Language Understanding. Electronics 2021, 10, 2300. [Google Scholar] [CrossRef]
  18. Nguyen Quoc, C.; Nguyen Hoang, T.; Cha, J. Using No-Code/Low-Code Solutions to Promote Artificial Intelligence Adoption in Vietnamese Businesses. Int. J. Internet Broadcast. Commun. 2024, 16, 370–378. [Google Scholar]
  19. Lorenzo, G.; Elia, G.; Sponziello, A. Artificial Intelligence Platforms Enabling Conversational Chatbots: The Case of Tiledesk.com; Springer: Cham, Switzerland, 2025; pp. 119–137. [Google Scholar]
  20. Mateos-Sanchez, M.; Melo, A.C.; Blanco, L.S.; García, A.M.F. Chatbot, as Educational and Inclusive Tool for People with Intellectual Disabilities. Sustainability 2022, 14, 1520. [Google Scholar] [CrossRef]
  21. de Filippis, M.L.; Federici, S.; Mele, M.L.; Borsci, S.; Bracalenti, M.; Gaudino, G.; Cocco, A.; Amendola, M.; Simonetti, E. Preliminary Results of a Systematic Review: Quality Assessment of Conversational Agents (Chatbots) for People with Disabilities or Special Needs. In Computers Helping People with Special Needs; Springer International Publishing: Cham, Switzerland, 2020. [Google Scholar]
  22. Touzet, C. Using AI to support people with disability in the labour market. In OECD Artificial Intelligence Papers; OECD Publishing: Paris, France, 2023. [Google Scholar] [CrossRef]
  23. Beudt, S.; Blanc, B.; Feichtenbeiner, R.; Kähler, M. Critical reflection of AI applications for persons with disabilities in vocational rehabilitation. In Proceedings of the DELFI Workshops 2020, Online, 14–15 September 2020; Gesellschaft für Informatik e.V.z.: Bonn, Germany, 2020. [Google Scholar]
  24. Simon, P. Low-Code/No-Code: Citizen Developers and the Surprising Future of Business Applications; Racket Publishing: Chicago, IL, USA, 2022. [Google Scholar]
  25. Kajamaa, A.; Mattick, K.; de la Croix, A. How to … do mixed-methods research. Clin. Teach. 2020, 17, 267–271. [Google Scholar] [CrossRef]
  26. Dawadi, S.; Shrestha, S.; Giri, R. Mixed-Methods Research: A Discussion on its Types, Challenges, and Criticisms. J. Stud. Educ. 2021, 2, 25–36. [Google Scholar] [CrossRef]
  27. Creswell, J.W.; Plano Clark, V.L. Designing and Conducting Mixed Methods Research, 3rd ed.; SAGE Publications: Thousand Oaks CA, USA, 2017. [Google Scholar]
  28. Morgeson, F.; Humphrey, S. The Work Design Questionnaire (WDQ): Developing and Validating A Comprehensive Measure for Assessing Job Design and the Nature of Work. J. Appl. Psychol. 2006, 91, 1321–1339. [Google Scholar] [CrossRef]
  29. Stegmann, S.; van Dick, R.; Ullrich, J.; Charalambous, J.; Menzel, B.; Egold, N.; Wu, T.T.-C. Der Work Design Questionnaire—Vorstellung und erste Validierung einer deutschen Version. Z. Arb. Organ. AO 2010, 54, 1–28. [Google Scholar]
  30. Schorr, A. Skala zur Erfassung der Digitalen Technologieakzeptanz—Weiterentwicklung zum testtheoretisch geprüften Instrument. Digitaler Wandel, Digitale Arbeit, Digitaler Mensch? Bericht zum 66. Arbeitswissenschaftlichen Kongress vom 16.—18. März 2020, Berlin. No. 35, B.20.52020; pp. 1–6. Available online: https://gfa2020.gesellschaft-fuer-arbeitswissenschaft.de/inhalt/B.20.5.pdf (accessed on 3 April 2025).
  31. Schorr, A. The Technology Acceptance Model (TAM) and its Importance for Digitalization Research: A Review. Proc. TecPsy 2023, 55–65. [Google Scholar] [CrossRef]
  32. Brooke, J. SUS: A quick and dirty usability scale. Usability Eval. Ind. 1995, 189, 4–7. [Google Scholar]
  33. Rummel, B. System Usability Scale (Translated into German). 2013. Available online: https://community.sap.com/t5/additional-blogs-by-sap/system-usability-scale-jetzt-auch-auf-deutsch/ba-p/13487686 (accessed on 3 April 2025).
  34. Hyzy, M.; Bond, R.; Mulvenna, M.; Bai, L.; Dix, A.; Leigh, S.; Hunt, S. System Usability Scale Benchmarking for Digital Health Apps: Meta-analysis. JMIR mHealth uHealth 2022, 10, e37290. [Google Scholar] [CrossRef]
  35. Gutiérrez, M.M.; Rojano-Cáceres, J.R. Interpretation of the SUS questionnaire in Mexican sign language to evaluate usability an approach. In Proceedings of the 2020 3rd International Conference of Inclusive Technology and Education (CONTIE), Baja California Sur, Mexico, 28–30 October 2020. [Google Scholar]
  36. Bangor, A.; Kortum, P.; Miller, J. Determining what individual SUS scores mean: Adding an adjective rating scale. J. Usability Stud. 2009, 4, 114–123. [Google Scholar]
  37. Knispel, J.W.L.S.V.; Arling, V. Skala zur Messung der Beruflichen Selbstwirksamkeitserwartung (BSW-5-Rev); Zusammenstellung sozialwissenschaftlicher Items und Skalen (ZIS): Mannheim, Germany, 2021. [Google Scholar]
  38. Neyer, F.J.F.J.; Gebhardt, C. Kurzskala Technikbereitschaft (TB, Technology Commitment); Zusammenstellung sozialwissenschaftlicher Items und Skalen (ZIS): Mannheim, Germany, 2016. [Google Scholar]
  39. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User Acceptance of Information Technology: Toward a Unified View. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  40. Hannay, J.; Dybå, T.; Arisholm, E.; Sjøberg, D. The effectiveness of pair programming: A meta-analysis. Inf. Softw. Technol. 2009, 51, 1110–1122. [Google Scholar] [CrossRef]
  41. Roldán-Álvarez, D.; Márquez-Fernández, A.; Rosado-Martín, S.; Martín, E.; Haya, P.A.; García-Herranz, M. Benefits of Combining Multitouch Tabletops and Turn-Based Collaborative Learning Activities for People with Cognitive Disabilities and People with ASD. In Proceedings of the 2014 IEEE 14th International Conference on Advanced Learning Technologies, Athens, Greece, 7–10 July 2014. [Google Scholar]
  42. Noushad, B.; Van Gerven, P.W.M.; de Bruin, A.B.H. Twelve tips for applying the think-aloud method to capture cognitive processes. Med. Teach. 2024, 46, 892–897. [Google Scholar] [CrossRef]
  43. Ericsson, K.A.; Simon, H.A. Verbal Reports as Data. Psychol. Rev. 1980, 87, 215–251. [Google Scholar] [CrossRef]
  44. Lundmark, C.; Nilsson, J.; Krook-Riekkola, A. Taking Stock of Knowledge Transfer Studies: Finding Ways Forward. Environ. Manag. 2023, 72, 1146–1162. [Google Scholar] [CrossRef]
  45. Mohajerzad, H.; Schrader, J. Transfer from research to practice—A scoping review about transfer strategies in the field of research on digital media. Comput. Educ. Open 2022, 3, 100111. [Google Scholar] [CrossRef]
  46. Matallana, J.; Paredes, M. Teaching methodology for people with intellectual disabilities: A case study in learning ballet with mobile devices. Univers. Access Inf. Soc. 2023, 24, 409–423. [Google Scholar] [CrossRef]
  47. Gilson, C.B.; Carter, E.W.; Biggs, E.E. Systematic Review of Instructional Methods to Teach Employment Skills to Secondary Students with Intellectual and Developmental Disabilities. Res. Pract. Pers. Sev. Disabil. 2017, 42, 89–107. [Google Scholar] [CrossRef]
  48. Schaap, R.; Stevels, V.A.; de Wolff, M.S.; Hazelzet, A.; Anema, J.R.; Coenen, P. “I noticed that when I have a good supervisor, it can make a Lot of difference”. A Qualitative Study on Guidance of Employees with a Work Disability to Improve Sustainable Employability. J. Occup. Rehabil. 2023, 33, 201–212. [Google Scholar] [CrossRef]
  49. Frogner, J.; Hanisch, H.M.; Kvam, L.; Witsø, A.E. A Glass House of Care: Sheltered Employment for Persons with Intellectual Disabilities. Scand. J. Disabil. Res. 2023, 25, 282–294. [Google Scholar] [CrossRef]
  50. Oldman, J.; Thomson, L.; Calsaferri, K.; Luke, A.; Bond, G.R. A Case Report of the Conversion of Sheltered Employment to Evidence-Based Supported Employment in Canada. Psychiatr. Serv. 2005, 56, 1436–1440. [Google Scholar] [CrossRef] [PubMed]
  51. Nettles, J.L. From Sheltered Workshops to Integrated Employment: A Long Transition. LC J. Spec. Educ. 2013, 8, 9. [Google Scholar]
  52. Elshan, E.; Siemon, D.; Bruhin, O.; Kedziora, D.; Schmidt, N. Unveiling Challenges and Opportunities in Low Code Development Platforms: A StackOverflow Analysis. In Proceedings of the 57th Hawaii International Conference on System Sciences, Honolulu, HI, USA, 3–6 January 2024. [Google Scholar]
  53. Biedova, O.; Ives, B.; Male, D.; Moore, M. Strategies for Managing Citizen Developers and No-Code Tools. MIS Q. Exec. 2024, 23, 4. [Google Scholar]
Figure 1. An example of use of emojis for questionnaires (the translation of the question is: ‘My work is not boring’. The answers are rated on a scale from ‘Strongly Disagree’ to ‘Strongly Agree’).
Figure 2. Study overview.
Figure 3. An example of using pictograms for the training instructions (the translation of the instruction title is ‘14. Click on the “Button”’. Additionally, the screenshot shows the task of clicking on “Tasten” in German, which means “Button” in English, instructing the user to select certain options).
Figure 4. Overview of training days.
Table 1. Developers’ demographic information.
Participant | Age (Years) | Gender | Education * | Disability | Tasks at the Sheltered Workshop
P1 | 18 | Female | Field-oriented qualification | Physical and developmental disability | Packaging and assembly of products
P2 | 19 | Male | Workplace-oriented qualification | Intellectual and developmental disability | Packaging and assembly of products
P3 | 20 | Male | Workplace-oriented qualification | Intellectual disability | Packaging and assembly of products
P4 | 23 | Female | Workplace-oriented qualification | Cognitive disability | Packaging and assembly of products
P5 | 27 | Female | Field-oriented qualification | Intellectual disability | Packaging and assembly of products
P6 | 20 | Male | Workplace-oriented qualification | Intellectual and developmental disability | Housekeeping Department
* The education categories are detailed in the Recruitment section (Step 4).
Table 2. Order of chatbot developments in each group.
Group | Members | Day 2 | Day 3 | Day 4
Group 1 | P1 and P6 | Chatbot 1: Vacation; Chatbot 2: Suggestion Box | Chatbot 1: Vacation; Chatbot 2: Cookbook | Chatbot 1: Sickness Registration (SR); Chatbot 2: SR
Group 2 | P2 and P3 | Chatbot 1: Cookbook; Chatbot 2: Vacation | Chatbot 1: Vacation; Chatbot 2: Suggestion Box | Chatbot 1: SR; Chatbot 2: SR
Group 3 * | P4 and P5 | Chatbot 1: Suggestion Box; Chatbot 2: Vacation | Chatbot 1: Vacation; Chatbot 2: Vacation and Suggestion Box | Chatbot 1: SR; Chatbot 2: SR
* See the Methods section for more information. As P4 was absent on day 2, some chatbots were repeated.
Table 3. Ratings of the four developed chatbots.
Participant | Suggestion Box | Cookbook | Sickness Registration | Vacation | Voted for Implementation
P1 | 4 | 5 | 5 | 4.5 | Cookbook and SR
P2 | 5 | 4 | 3 | 3.5 | Suggestion Box
P3 | 5 | 5 | 3 | 4 | Vacation
P4 | 5 | 5 | 3 | 4.5 | -
P5 | 4.5 | 5 | 5 | 5 | -
P6 | 5 | 5 | 3 | 3 | -
Table 4. Descriptive analysis of the scores from the questionnaires completed by Developers before and after the training.
Questionnaire | n | Mean | Median | SD
BSW: before training | 6 | 3.06 | 3.00 | 0.48
BSW: after training | 6 | 3.53 | 3.70 | 0.67
TB: before training | 6 | 41.33 | 40.5 | 9.99
TB: after training | 6 | 42.67 | 42.5 | 7.78
WDQ, before training (n = 6):
  1. Task variety | 3.91 | 3.87 | 0.40
  2. Feedback from job | 4.02 | 3.83 | 0.56
  3. Problem solving | 3.75 | 3.62 | 0.31
  4. Skill variety | 4.25 | 4.12 | 0.67
  5. Equipment use | 3.33 | 3.33 | 0.47
WDQ, after training (n = 6):
  1. Task variety | 4.12 | 4.25 | 0.41
  2. Feedback from job | 4.11 | 4.83 | 1.25
  3. Problem solving | 3.37 | 3.50 | 0.41
  4. Skill variety | 4.04 | 4.12 | 1.04
  5. Equipment use | 3.44 | 3.33 | 1.08
DTAS, before training (n = 6):
  1. Perceived usefulness | 17.67 | 18.50 | 2.87
  2. Perceived ease of use | 14.00 | 15.50 | 5.02
  3. Attitudes towards usage | 12.17 | 13.50 | 3.65
  4. Behavioral intention to use | 9.33 | 10.00 | 1.03
DTAS, after training (n = 6):
  1. Perceived usefulness | 14.83 | 16.00 | 6.01
  2. Perceived ease of use | 15.50 | 16.00 | 4.55
  3. Attitudes towards usage | 11.83 | 13.00 | 4.30
  4. Behavioral intention to use | 7.17 | 7.50 | 3.18
Table 5. End Users’ demographic information and SUS results.
Demographics | N (%)
Gender
  Female | 4 (28.6)
  Male | 9 (64.3)
  Missing | 1 (7.1)
Age (years)
  18–24 | 12 (85.7)
  25–34 | 1 (7.1)
  45–54 | 1 (7.1)
  Mean (SD) | 22.93 (6.9)
Disability
  Missing | 8 (28.6)
  Physical disability | 1 (7.1)
  Developmental disability | 1 (7.1)
  Mental illness | 1 (7.1)
  Multiple disability | 3 (21.4)
SUS scores
  85–100 | 12 (85.7)
  Under 85 | 2 (14.3)
  Mean (SD) | 88.9 (11.2)
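For readers who wish to reproduce the usability figures, the SUS scores reported in Table 5 follow Brooke’s standard scoring [32]: each of the ten items is rated 1–5, odd-numbered items contribute (rating - 1), even-numbered items contribute (5 - rating), and the sum is multiplied by 2.5 to yield a 0–100 score. A minimal Python sketch is shown below; the example ratings are illustrative and are not the study’s data.

```python
# Standard SUS scoring (Brooke, 1995): ten items rated 1-5.
# Odd items contribute (rating - 1), even items (5 - rating);
# the sum is scaled by 2.5 to a 0-100 score.
from statistics import mean


def sus_score(ratings):
    """Compute the SUS score for one respondent's ten ratings (each 1-5)."""
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("SUS expects ten ratings between 1 and 5")
    # Items 1, 3, 5, ... sit at even zero-based indices.
    total = sum(r - 1 if i % 2 == 0 else 5 - r for i, r in enumerate(ratings))
    return total * 2.5


# Illustrative (hypothetical) response patterns for three users:
scores = [sus_score(r) for r in (
    [5, 1, 5, 1, 5, 1, 5, 1, 5, 1],  # most favorable pattern -> 100.0
    [4, 2, 4, 2, 4, 2, 4, 2, 4, 2],  # -> 75.0
    [3, 3, 3, 3, 3, 3, 3, 3, 3, 3],  # neutral pattern -> 50.0
)]
print(scores, round(mean(scores), 1))
```

Averaging per-respondent scores in this way is how a sample mean such as the 88.9 in Table 5 would typically be obtained.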

Share and Cite

MDPI and ACS Style

Hamideh Kerdar, S.; Kirchhoff, B.M.; Adolph, L.; Bächler, L. A Study on Chatbot Development Using No-Code Platforms by People with Disabilities for Their Peers at a Sheltered Workshop. Technologies 2025, 13, 146. https://doi.org/10.3390/technologies13040146

