**1. Introduction**

The Internet of Things (IoT) concept has several definitions, as the technologies involved are continually evolving. IoT is defined as "a network that connects uniquely identifiable things to the Internet" [1]. These *things* have sensing and actuating capabilities and can be programmed, such that data can be collected and their state can change. The potential of IoT enables the development of a wide range of applications for improving citizens' lives. Smart homes and buildings, smart cities, mobility and transportation, healthcare, agriculture and industry are some of the main areas of IoT application [1]. For a rapid materialisation of IoT, the symbiosis between the physical world and the cyber world must be harmonious [2]. Interactions between humans and computing-enabled objects must be smart and opportunistic [3]. As with human intelligence [4], the smartness of IoT things relies heavily on their sensory and interactive capabilities. In this vein, smart interactive objects enable creating tangible things for different tasks in different application domains [5].

The development of smart IoT applications usually requires strong programming skills, which most people lack. However, in recent years, several projects, such as Arduino and Raspberry Pi, aimed not only at professionals but also at educators and students, have fuelled the expansion of IoT. These initiatives include both hardware platforms and programming tools, and a user community is growing around them.

Since the notation used in programming languages has a tremendous impact on novices [6], various tools to program IoT microcontrollers and microcomputers have emerged. These tools are based on block-based languages and have proved to be useful for novice programmers. Learners of block-based languages showed greater gains in algorithmic thinking [7] and a higher interest in computer science than those using text-based environments [8]. Differences between block-based and text-based languages often fade after learners transfer their acquired knowledge of computer programming to more professional, text-based languages and environments [9].

Currently, the most commonly used block-based programming tools, namely Scratch and App Inventor, provide capabilities to connect with external hardware devices, such as Arduino. However, they present some limitations when it comes to developing IoT applications, namely: (1) the absence of an easy mechanism for ingesting and processing event data streams and (2) the lack of usable mechanisms for visually representing data.

To facilitate the authoring of IoT mobile apps, several visual components for a custom version of App Inventor, as well as a set of extensions for its block-based programming language, have been developed. With these components and language extensions, users can easily create apps that ingest data streams from available sensors, process them using a map-reduce programming style and then visualise the results of the data processing graphically. The goal of this paper is to investigate how easily non-experts can leverage these improved features to create their own smart IoT applications.

The block-based language approach followed in our research proposal has some limitations, which have been described in the literature. First, it may be applicable only to novice programmers who are learning to create their own smart IoT applications [9]. The research claims and results are not directly transferable to professional, text-based programming languages or even to other non-block-based visual programming paradigms [10]. Second, programming concepts that are relevant to creating smart IoT applications (such as state initialisation [11], parallelism [12], anonymous functions [13] and higher-order functions [14]) were adapted to visual, block-based languages. However, there is no evidence of learning improvements resulting from the use of such end-user development (EUD) approaches. Therefore, the use of block-based languages as an EUD approach for creating smart IoT applications may have limitations, which have to be overcome by more extensive research, as intended in this work.

The rest of the paper is structured as follows: the background and related works are presented in Section 2. Section 3 describes the main contribution. Two case studies are included in Sections 4 and 5. The former presents a usability study conducted with students of a computer programming fundamentals course whereas the latter was targeted at university lecturers. Finally, Section 6 discusses the results and draws the conclusions of this research.

#### **2. Background & Related Works**

IoT solutions are composed of hardware and software elements. Guth et al. [15] propose an IoT reference architecture from a comparison of various open-source (SiteWhere, OpenMTC and FIWARE) and proprietary (Amazon Web Services IoT) IoT platforms. This architecture includes a set of sensors and actuators at the lowest level. On the next level up, a hardware device is connected, by wire or wirelessly, to the sensors and actuators. Data communication protocols are required to manage the constraints of the smart devices, as well as gateways to translate data between different protocols and to forward communications. Middleware [15,16] processes the data received from the connected devices (e.g., by executing condition-action rules) to provide them to connected applications, and it sends commands to be executed by the corresponding actuators. Finally, IoT applications allow device-to-device and human-to-device interactions [17]. In the latter case, mobile app-based smart interactive experiences can be provided for end-users.

Existing initiatives for learning and developing IoT solutions as well as block-based end-user development tools and their applications for creating IoT mobile experiences are described below.

#### *2.1. Initiatives for Learning and Developing IoT Solutions*

Arduino and Raspberry Pi are some of the most popular platforms used for educational purposes [1,18,19], with huge development communities. Arduino is a programmable circuit board, which can be connected with sensors and actuators of many types. Raspberry Pi is a single-board computer that runs programs in a multitasking environment. However, analog-to-digital conversion is not available onboard, and thus additional hardware is required to interface with analog sensors such as photocells, joysticks and potentiometers.

Some initiatives and educational projects have been carried out to teach IoT technologies to undergraduate and university students [20,21]. For example, in a project-based teaching and learning approach conceived for an IoT course [22], Raspberry Pi is used to devise and implement IoT designs. Other project-based learning courses for learning wired and wireless networking techniques have been offered to electrical and computer engineering students [23]. The use of microcontrollers with network connectivity and without complex operating systems provides cost-effective, well-supported and flexible platforms for developing IoT applications.

Moreover, the educational research outcome of teaching IoT device prototyping in a practical, real problem-based setting is presented [24] as a means for teaching computer science and software engineering. An example course outline for planning learning experiences in IoT prototyping is described along with a general assessment framework and best practice recommendations in order to facilitate personalised learning in analogous contexts.

Some educational approaches are based on the pocket labs (PL) concept to stimulate students' initiative and creativity. PLs allow learners to experiment with real equipment in any place and at any time [25]. Although IoT and PLs are not inherently interrelated, the authors present a real case of IoT teaching practice based on Arduino and accompanying shields that include sensors and actuators. PLs are combined with the online Tinkercad software tool to prototype and simulate electronic designs that include Arduino boards.

Other initiatives for integrating IoT technologies into existing teaching-learning case studies have been developed. For example, an IoT-based learning framework that integrates IoT and hardware/software technologies is used as part of a software engineering course for embedded system analysis and design [26]. Specifically, the authors introduced a lab development kit composed of Arduino and Raspberry Pi boards, sensors and XBee modules for providing wireless communication.

Common general-purpose programming languages can be used for developing IoT applications [19]. However, since IoT systems involve a wide variety of hardware and software components, depending on a range of distributed system and communication technologies, developing IoT applications is time-consuming and complex. Hence, a variety of IoT libraries, such as CoAPthon [27], and frameworks [28], such as IDeA, FRASAD, D-LITe, IoTLink, WebRTC based IoT application Framework, Datatweet, IoTSuite and RapIoT, have been developed to manage those complexities.

#### *2.2. End-User Development Tools for IoT*

Modern software programming tools hide much of the complexity of traditional programming languages. Recent *low code* software engineering approaches have been successful both for IoT [29] and for more general mobile application development [30]. Their overall objective is to make application creation easier for people without programming skills. This goal is shared by the research field known as end-user development (EUD). A recent review on this topic differentiates between end-user programming (EUP) and other software engineering activities that span the entire software development lifecycle [31]. That review was recently complemented by another one, focusing on current EUD tools for developing IoT and robot applications [32].

Among EUP tools, block-based programming environments are noteworthy [33], as they enable composing programs without dealing with the syntactic issues of textual languages. Among such block-based languages and environments, Scratch [34,35] is very popular for creating interactive games, stories and animations, as well as for sharing such creations on the Web. Scratch computer programs are built by dragging and dropping blocks that represent common programming elements, such as variables, expressions, conditions and statements. Another block-based EUP approach for robotic applications is Phratch, a Scratch-like live programming environment [36]. In addition, App Inventor [37,38] is an open-source block-based programming tool that enables users without prior programming experience to create apps for smartphones and mobile devices. In particular, it makes mobile app deployment easier for the end-user. Beyond the amenities of other tools, App Inventor allows end-users to perform interface design and software deployment tasks, which belong to the realm of EUD beyond EUP. End-users can drag, drop and arrange various interface and non-visible components through a visual designer and then use a block language editor to program the app logic in order to create and deploy fully functional mobile apps. App Inventor provides event handling as a form of trigger-action programming (TAP), which has proved particularly suitable for defining bespoke behaviours in response to the multiple events that may occur in an IoT context [39]. End-users specify the behaviour of a system as events or triggers and the response actions to take when those events occur [40].
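The trigger-action idea can be sketched in a few lines of Java. This is a minimal illustration of the TAP style, not App Inventor's actual event-handling machinery: the class name, rule shape and the use of numeric sensor values are all illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class TriggerActionRules {
    // A minimal trigger-action rule: when the trigger predicate holds
    // for an incoming sensor value, the associated action runs.
    private static class Rule {
        final Predicate<Double> trigger;
        final Runnable action;
        Rule(Predicate<Double> trigger, Runnable action) {
            this.trigger = trigger;
            this.action = action;
        }
    }

    private final List<Rule> rules = new ArrayList<>();

    // Register a rule ("when <trigger> occurs, do <action>")
    public void when(Predicate<Double> trigger, Runnable action) {
        rules.add(new Rule(trigger, action));
    }

    // Dispatch an incoming event value to every matching rule
    public void onEvent(double value) {
        for (Rule r : rules) {
            if (r.trigger.test(value)) {
                r.action.run();
            }
        }
    }
}
```

A rule such as "when the temperature exceeds 30 °C, sound the buzzer" would then be registered with `when(t -> t > 30.0, buzzer::on)`, mirroring how end-users pair a trigger block with an action block.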

Despite the availability of libraries and frameworks to work with IoT technologies, it is difficult to find EUD solutions that assist non-IT professionals in a particular area or topic in developing their own IoT consumer applications and smart user experiences. For example, ScratchX [41] is an experimental platform that allows people to test experimental functionalities built by developers for the Scratch visual language. These experimental extensions enable apps to integrate with web services and external hardware, such as Arduino or Raspberry Pi.

On the other hand, the MIT IoT App Inventor project [42] allows students, teachers and *makers* to implement IoT projects in the same way as they develop regular mobile apps. This project provides users with components and block extensions to read data from a great variety of sensors (e.g., moisture, pressure, temperature, noise, etc.) and control a multiplicity of actuators (e.g., buzzers, lights, motors, etc.). As the apps run on mobile devices, they can take advantage of all built-in features provided by App Inventor, but end-users can also use the apps to interact with surrounding objects. Besides, UDOO [43] is a combined set of open hardware and software technologies that allows novice makers to create their own digital objects connected to the cloud and to define custom behaviour logic for sensors and actuators. In addition to the physical devices, UDOO includes an App Inventor extension to handle sensors and actuators from within mobile apps. Finally, IoT Inventor [44] is a web-based integration platform, not based on but inspired by App Inventor, with a friendly drag-and-drop composer interface to build personalised and reconfigurable services using smart IoT-enabled things.

All of the described extensions are targeted at handling sensors and actuators, but they do not provide support for easily ingesting, processing and visualising data.

#### **3. Creating IoT Mobile Apps with VEDILS**

VEDILS [45] is a visual environment for designing interactive learning scenarios. It is an authoring tool targeted at users without programming skills who want to create their own mobile apps. The platform is based on App Inventor, a programming tool for building apps for mobile devices. The current version requires Android devices, though an iOS-based version is currently being devised by MIT. The development environment relies on the Blockly library for its block-based visual programming language.

App Inventor provides several components for designing mobile apps' user interfaces as well as other features, including multimedia elements, communication with the device sensors, sharing through social networks, etc. In addition to the built-in components provided by App Inventor, VEDILS features new components to enrich apps with virtual and augmented reality experiences and to support multimodal external Human-Machine Interface (HMI) devices, such as hand gesture sensors or electroencephalography (EEG) headsets, among other features. The platform was also used to conduct a study on the suitability of visual languages for non-expert robot programmers [46]. Regarding IoT computing, several components and blocks were developed for VEDILS to ingest, process and visualise data from a diversity of sensors.

#### *3.1. Ingesting IoT Data Streams*

App Inventor manages the following block types for each component: property getters and setters (green blocks), functions (blue blocks) and event handlers (yellow blocks). VEDILS extends these with a particular type of block (similar to event handlers) for non-visual components that issue a continuous flow of data, as is the case for both internal and external sensors. These kinds of components (red blocks) provide the app developer with a data stream suitable for treatment with the processing blocks described in Section 3.2; such blocks are triggered when data from the sensor are ingested for a predefined time window.

One of the most common ways of receiving data from an IoT sensor and sending commands to an actuator is via a Bluetooth connection. Thus, the built-in *BluetoothClient* component was extended with the new *StreamDataReceived* block (see Figure 1), which provides the data stream as well as a new *SecondsToGetStreamData* property to set the time period to fetch new data from the Bluetooth server.


**Figure 1.** Ingesting stream data from Bluetooth external devices.

In addition, every new VEDILS component that provides communication with internal or external devices can support the streaming blocks. For example, the *BrainwaveSensor* component, which enables detecting brain activity by means of an EEG headset, includes specific blocks for ingesting stream data from regular fast Fourier transform (FFT) bands (i.e., Theta, Alpha, Low Beta, High Beta and Gamma) of EEG channels (see Figure 2). It also includes a *TimeToStreamBandsData* property to set the time window. The current implementation works with the Emotiv Epoc+ and Insight headsets. Thus, this component enables a new range of mobile applications to monitor emotions, track cognitive performance and even control objects through a set of mental activity patterns that can be trained and interpreted as mental commands.

The blocks of Figure 2 were developed as Java class methods that support external and internal sensors. The data flow is internally managed as Java 8 streams. In addition, further classes based on threads and the Java Timer API were required to periodically check data availability.
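The windowed ingestion described above can be sketched as follows. This is a minimal illustration of the pattern (a buffer drained periodically by a `java.util.Timer`), not the actual VEDILS implementation; the class and method names are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Timer;
import java.util.TimerTask;
import java.util.function.Consumer;

public class SensorWindow {
    // Buffer that accumulates sensor readings during the current window
    private final List<Double> window = new ArrayList<>();

    // Called by the sensor driver whenever a new reading arrives
    public synchronized void push(double reading) {
        window.add(reading);
    }

    // Drain the current window, returning the accumulated batch
    public synchronized List<Double> drain() {
        List<Double> batch = new ArrayList<>(window);
        window.clear();
        return batch;
    }

    // Periodically drain the window on a daemon timer, delivering each
    // non-empty batch to the handler (the analogue of firing a
    // stream-ingestion block once per configured time period)
    public void startPolling(long periodMillis, Consumer<List<Double>> handler) {
        new Timer(true).scheduleAtFixedRate(new TimerTask() {
            @Override public void run() {
                List<Double> batch = drain();
                if (!batch.isEmpty()) handler.accept(batch);
            }
        }, periodMillis, periodMillis);
    }
}
```

Each delivered batch can then be turned into a Java 8 stream (`batch.stream()`) and handed to the processing operations of Section 3.2.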

#### *Sensors* **2019**, *19*, 5467


**Figure 2.** Ingesting stream data from electroencephalography (EEG) headsets.

#### *3.2. Processing IoT Data Streams*

In order to address the issue of treating IoT data, several data processing blocks were developed and delivered with VEDILS (see Figure 3).


All of these blocks are intermediary operations, except for the *Reduce* block, which is terminal. The intermediary blocks can be chained in any order, whereas a *Reduce* block must always appear at the left end of the sequence of operations.

**Figure 3.** Visual blocks for processing stream data.
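The chaining rule above matches Java 8 stream pipelines, which is how the blocks are executed internally. The sketch below shows an equivalent pipeline in plain Java; the threshold, item cap and unit conversion are illustrative assumptions, not operations taken from the paper.

```java
import java.util.List;

public class StreamPipeline {
    // A chain of intermediary operations (filter, limit, map) closed by
    // a terminal reduce, mirroring the map-reduce style of the VEDILS
    // Streams blocks applied to one ingested window of readings.
    public static double process(List<Double> celsiusReadings) {
        return celsiusReadings.stream()
                .filter(t -> t > 20.0)      // keep readings above 20 °C
                .limit(5)                   // cap the window at 5 items
                .map(t -> t * 1.8 + 32.0)   // convert to Fahrenheit
                .reduce(0.0, Double::sum);  // terminal: sum the values
    }
}
```

As with the visual blocks, the intermediary operations can be reordered or repeated freely, but the pipeline only produces a value once the terminal `reduce` runs.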

Extending the visual programming language itself requires not only Java code but also other languages. The visual appearance of each block of the *Streams* palette is defined by a JavaScript fragment using the Blockly library API, whereas its run-time behaviour is defined by generating YAIL code. Young Android Intermediate Language (YAIL) is a set of abstractions for Kawa, a Java-based implementation of the Scheme functional language. Figures 4 and 5 show the code required to develop the *Limit* block.

**Figure 5.** JS fragment for generating the Young Android Intermediate Language (YAIL) code of the *Limit* block.

#### *3.3. Visualising IoT Data Streams*

App Inventor does not provide built-in capabilities to include charts or data tables in apps. Thus, two new visible components were integrated into VEDILS to allow developers to equip their apps with those kinds of visualisations (see Figure 6). The *Chart* component enables the creation of simple graphics such as bar, line or pie charts, whereas the *DataTable* component is intended to present data in a tabular format. Both components can be fed with a data stream from any sensor and are customisable, for example, by configuring the category and value axes of charts.


**Figure 6.** Visualising stream data with the *Chart* component.

Both the *Chart* and *DataTable* components were developed as Java classes that inherit from the *AndroidViewComponent* superclass, already included in App Inventor. At run time, these components embed a *WebViewer* in the app screen, which points to an external HTML page. That web page receives a JSON string containing the app data and renders it via the Google Chart API.
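The shape of such a JSON payload can be sketched in Java. The array-of-rows layout below (a header row followed by data rows) is the format that Google Charts' `arrayToDataTable` helper accepts; the class name, column labels and exact serialisation are illustrative assumptions, not the actual VEDILS wire format.

```java
import java.util.List;

public class ChartPayload {
    // Serialise parallel label/value lists into a JSON array-of-rows
    // string: a header row followed by one [label, value] row per point.
    public static String toDataTableJson(List<String> labels, List<Double> values) {
        StringBuilder sb = new StringBuilder("[[\"Category\",\"Value\"]");
        for (int i = 0; i < labels.size(); i++) {
            sb.append(",[\"").append(labels.get(i)).append("\",")
              .append(values.get(i)).append("]");
        }
        return sb.append("]").toString();
    }
}
```

The embedded web page would parse this string with `JSON.parse` and pass the result to the charting library to draw, for example, a bar chart over the category axis.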

#### **4. Evaluating IoT Mobile App Development with Students**

This section presents a usability study of the VEDILS components for IoT computing. The test is aimed at checking whether these IoT components are suitable for learners programming end-user IoT mobile apps. The test was defined and executed following the guidelines provided by Rubin et al. [47].
