Article

FlockAI: A Testing Suite for ML-Driven Drone Applications

Department of Computer Science, School of Sciences and Engineering, University of Nicosia, Nicosia CY-2417, Cyprus
*
Author to whom correspondence should be addressed.
Future Internet 2021, 13(12), 317; https://doi.org/10.3390/fi13120317
Submission received: 10 November 2021 / Revised: 8 December 2021 / Accepted: 14 December 2021 / Published: 16 December 2021
(This article belongs to the Special Issue Accelerating DevOps with Artificial Intelligence Techniques)

Abstract

Machine Learning (ML) is now becoming a key driver empowering the next generation of drone technology and extending its reach to applications never envisioned before. Examples include precision agriculture, crowd detection, and even aerial supply transportation. Testing drone projects before actual deployment is usually performed via robotic simulators. However, extending testing to include the assessment of on-board ML algorithms is a daunting task. ML practitioners are now required to dedicate vast amounts of time to the development and configuration of the benchmarking infrastructure through a mixture of use-cases coded over the simulator to evaluate various key performance indicators. These indicators extend well beyond the accuracy of the ML algorithm and must capture drone-relevant data, including flight performance, resource utilization, communication overhead, and energy consumption. As most ML practitioners are not accustomed to all these demanding requirements, the evaluation of ML-driven drone applications can lead to sub-optimal, costly, and error-prone deployments. In this article, we introduce FlockAI, an open and modular by design framework supporting ML practitioners with the rapid deployment and repeatable testing of ML-driven drone applications over the Webots simulator. To show the wide applicability of rapid testing with FlockAI, we introduce a proof-of-concept use-case encompassing different scenarios, ML algorithms, and KPIs for pinpointing crowded areas in an urban environment.

1. Introduction

We are now witnessing the extensive deployment of drones in a diverse set of applications, such as rescue missions [1], infrastructure monitoring [2], urban sensing [3], and precision farming [4]. In rescue missions, due to rough terrain and possible ruins after a disaster (e.g., forest fire), it is difficult to deploy emergency monitoring facilities in a timely manner. Moreover, missions in such areas are dangerous for first responders and volunteers. Fortunately, with the rapid progression of robotics, artificial intelligence, and edge computing, drones are now equipped with cameras and sensors, and can aid in identifying survivors, pinpointing rescue routes, and detecting environmental hazards by obtaining a favorable viewpoint for situational assessment [5].
However, drone technology comes with notable limitations. As the on-device AI tasks become more demanding (e.g., face detection), drones are severely impacted by limited battery autonomy [6]. This key limitation is exacerbated by the AI algorithmic process, which imposes vast data analysis requirements, a prominent energy drain for drones and mobile systems in general [7,8]. For example, commercial drones can fly for 30–40 min at best; thus, mapping a medium-size area of 5 km² requires multiple battery charges, which severely impacts mission timeliness [9]. To cover more terrain and increase mission timeliness, multiple drones can be deployed to “parallelize the workload”. Recently, drone swarms are being employed, where drones are programmed to achieve a common objective by autonomously adapting their behavior based on shared information communicated between drones [10]. This new technology is based on Machine Learning (ML), where algorithms embedding trained models, enable—seemingly independent—robotic entities to share data and converge online by combining their insights into a single and emergent intelligence unknown to the individuals [11]. Hence, ML is a significant driver in the new era of drone applications, with ML algorithms limiting the reliance on the human-in-the-loop by tackling our inability to recognize objects from viewpoints far in the sky [12]. However, as the number of drones in a swarm increases, so does the complexity of the data-driven decision-making and the exchange of data between both participants and the ground control, which, in turn, critically affects swarm scalability and mission timeliness [13].
This brings us to the focal point of our work. Evaluating on-board ML algorithms for drone technology is inhibited by the lack of realistic testing frameworks that can be used alongside open and popular robotic simulators. Simulators are used heavily during the design and development phase of drone projects. This unfortunately burdens ML practitioners with the task of developing not only the proposed ML algorithms and trained models, but also the benchmarking infrastructure needed to evaluate them in consolidated environments, which requires vast amounts of time. As a result, ML algorithm evaluation for drone technology over key performance indicators is limited, poorly repeatable, and ad-hoc at best. Consequently, the design, deployment, and evaluation of AI-driven drone applications becomes a complex and costly endeavor for researchers and engineers, since it requires the exploration of numerous conditions and parameters through repeatable and controllable experiments on a combination of physical and virtual testbeds that are hard to configure and measure. Most importantly, ML practitioners require frameworks that can reproduce experiment results so that experiment validation can be performed.
Towards this, the main contributions of our work are:
  • A comprehensive programming paradigm tailored to the unique characteristics of ML-driven drone applications. The model expressivity enables developers to design, customize, and configure complex drone deployments, including encompassed sensors, compute modules, resource capabilities, operating flight path, and the emulator world along with simulation settings. Most importantly, users can use the declarative algorithm interface to deploy trained models and enable ML inference tasks that run on the drone while in flight;
  • A configurable energy profiler, based on an ensemble of SOTA energy models for drones, with the profiler supporting the specification of relevant parameters for multi-grain energy consumption of the drone and the application in general. The complexity of the modeling provides users with energy measurements relevant to drone flight, computational processing, and the communication overhead. Users can take advantage of “energy profiles”, with pre-filled parameters, or tweak the parameterization and even completely alter the model implementation as long as it adheres to the energy interface;
  • A monitoring system providing various probing modules for measuring and evaluating the resource utilization of a drone application. Through the monitoring system, users can compose custom metric collectors so that the monitoring system is used to inject application-specific metrics and, finally, compose more complex models, such as QoS and monetary costs;
  • The FlockAI framework (https://unic-ailab.github.io/flockai/), which is an open and modular by design Python framework supporting ML practitioners with the rapid deployment and repeatable testing of ML-driven drone applications over the Webots robotics simulator (https://cyberbotics.com/). A screenshot is provided in Figure 1.
The rest of this article is structured as follows: Section 2 presents technological challenges when testing ML-driven drone applications. Section 3 introduces the FlockAI framework, while Section 4 illustrates the modeling and implementation aspects of our framework. A comprehensive experimentation is illustrated in Section 5. Finally, Section 6 presents the related work, Section 7 provides a qualitative comparison and discussion, and Section 8 concludes the article.

2. Machine Learning for Drones

Machine Learning is the application of Artificial Intelligence capable of training machines with data collected from past experience to learn how to perform data-driven tasks. ML algorithms, in their most simplistic form, feature a training and a testing phase. During training, the ML algorithm receives labeled data as input and the output is a model representing a desired real-world process. During model application, the model is used to infer the outcome of the real-world process based on unlabeled input. The complexity of the ML model, along with the data given for training, significantly impacts the algorithm accuracy for the applied task.
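For readers less familiar with this terminology, the following toy sketch (plain Python with scikit-learn, not part of FlockAI or of any drone pipeline) illustrates the two phases: a training phase that turns labeled data into a model, and an inference phase that applies the model to unlabeled input.

# Toy illustration of the training/inference split described above; the data is
# made up and the classifier choice is arbitrary.
from sklearn.linear_model import LogisticRegression

# Training: labeled data (features X, labels y) -> model
X_train = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]
y_train = [0, 1, 0, 1]
model = LogisticRegression().fit(X_train, y_train)

# Inference: the trained model labels previously unseen, unlabeled input
X_new = [[0.85, 0.75]]
print(model.predict(X_new))  # e.g., [1]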
It comes as no surprise that a recent McKinsey report notes that the largest share of funding for commercial drone technology (in the US) goes to on-board data management and analysis services empowered by ML and AI in general [14]. Hence, ML is now heavily examined by the research community as a driver to improve the level of autonomy of drones. Towards this, we are now witnessing the penetration of ML into drone navigation. This includes drone autopilots [15], mission planning [16], and collision avoidance [12]. The latter is drawing significant attention for two reasons: (i) drone-to-drone collisions due to the vast deployment of drones for several applications; and (ii) the use of drones in small and dense areas to aid in search and rescue missions (e.g., buildings on fire) [17].
On another level, ML is heavily used for both object detection and tracking [18]. Object detection is a specialization of the classification problem, where images from the on-board camera are passed to the ML algorithm to pinpoint the presence of (un-)desired objects. In turn, object tracking is usually enabled after detection, where object movement is tracked so that the object remains in the scope of the drone’s viewpoint. The domains of agriculture, critical infrastructure monitoring, and cinematography have benefited from the ability of drones to perform object detection and tracking. For example, drones are deployed to aid farmers in monitoring the quality of crop growth [4], and to aid energy infrastructure companies in surveying quality degradation problems with powerlines, roads, and even photovoltaic panels [19]. Interestingly, ML-driven drones are recently being examined to detect crowded urban areas when social distancing must be enforced [20].
Despite the great potential of ML algorithms, these are usually not optimal for dealing with drones out-of-the-box. The constrained battery and modest compute capabilities of drones are considered significant inhibitors impacting the deployment of ML-driven applications [21]. At this point, one may argue that upgrading to a more powerful processor is a “quick fix” solution. However, stronger processors require more power, and hence compute still ends up impacting flight time. Another may argue that no on-board processing is needed and that the data should just be streamed to the base station (or cloud) for analysis. However, communication is also a significant energy drain, not to mention that waiting for a remote answer can lead to catastrophic results for the task assigned to the drone (e.g., collision).
Hence, an ML practitioner developing a new drone application is left puzzled with the following questions in mind:
  • Q1. Can my ML algorithm run in time when deployed on the desired resource-constrained drone?
  • Q2. How does the ML algorithm affect the drones’ energy consumption and overall battery autonomy?
  • Q3. If the time and energy constraints permit, what is the trade-off between on-board ML inference vs. using a remote service through an open communication link?
  • Q4. Could a different (less-intensive) algorithm suffice (trade-off between algorithm accuracy and energy)?
These questions, among others, can be answered by exploiting the use of the FlockAI framework in the design and evaluation phase of ML-driven drone applications.

3. The FlockAI Framework

This Section provides a high-level overview of the FlockAI framework requirements and elaborates on its functionality from a user perspective.

3.1. Requirements

Releasing a drone application to production is a time-consuming and costly endeavor due to the high likelihood of both (disastrous) mechanical and software errors emerging during the design phase. Simulation is the process that realizes the behavior of a platform, device, or infrastructure on a host device or host infrastructure, at a fraction of the real deployment cost [22]. To ease this process, simulators have emerged as critical tools utilized during drone application design. Towards this, a handful of simulators enable the coding and evaluation of algorithmic solutions to test—in a controlled realm—the solution feasibility along with the ability to integrate (and “play nice”) with other embedded components. Developers of ML-driven drones, who seek to explore and assess the performance of their applications under various and diverse operational conditions, even when utilizing a drone simulator, are met with several challenges before they can even start thinking of how to answer the aforementioned questions.
These challenges are what drive the requirements definition of the FlockAI framework:
  • Drones come in different flavors, with their resource capabilities featuring diverse resource configurations (e.g., compute, network, disk I/O), while drones can also encompass multiple and different sensing modules (e.g., camera, GPS) depending on the task for which they are utilized. A framework supporting drone application experimentation must support configurable drones, as well as the enablement of different sensing modules;
  • With the increased interest in examining the adoption of drone technology in different domains, a wide range of applications must be supported and these must be realistic. Although a framework does not need to satisfy every type of application, significantly limiting the scope inhibits adoption, as the effort devoted to learning a new framework is only justified when this knowledge can be used again in the (near) future;
  • Evaluating the performance of an ML application entails the measurement of a variety of key performance indicators (metrics), including the impact of the ML algorithm on the drone’s resources and the algorithm’s accuracy. The latter is highly dependent on the data used during the training phase, while the former requires significant probing during application simulation in order to assess resource utilization, communication overhead, and energy consumption;
  • An evaluation toolkit must facilitate repeatable measurements under various scenarios. For drones, the incorporation of multiple sensing modules, the dependence on the physical environment, and the reliance on external communication make this a challenging criterion to meet.

3.2. Framework Overview

Figure 2 depicts a high-level architectural overview of the FlockAI framework. FlockAI is an open and extensible framework developed in Python that supports the testing of ML-driven drone applications prior to introducing them to production by taking advantage of a robotics simulator. To ease the description of a drone environment when utilizing a robotics simulator, FlockAI provides users with several high-level programming abstractions so that applications deployed over a provisioned simulated environment meet user-desired requirements in terms of the drone’s resource and sensor modules. Such requirements extend well beyond configuring the drone’s communication modules (e.g., Wi-Fi receiver) and storage to also include the attachment of several sensing modules, such as a camera, GPS, and even domain-specific sensors.
In particular, users only need to focus on the business logic of their application, with the FlockAI controller encapsulating the implementation handling the low-level interaction of the simulated drone with the underlying physics engine of the robotics simulator. Thus, ML practitioners adopt the high-level abstractions offered by FlockAI to code their experiment scenarios rather than being burdened with how to code the simulator for configuring the drone resources and sensing modules, as well as applying a flight plan. In regards to the latter, other than easing the flying of a drone through abstractions including drone take-off, hovering, moving in different directions, and landing, FlockAI also lets users opt to enable either a keyboard-based pilot or even an autopilot. Although FlockAI provides a number of very basic autopilots (e.g., applying obstacle detection), the pilot interface is easily extensible and users are free to create and “plug in” custom and more complex autopilots to their experiments.
With the field of AI now covering an abundance of technologies with different development paradigms, FlockAI “limits” its support to ML. Thus, FlockAI can be used for the testing of a wide range of ML-driven applications featuring a pre-trained model that can be deployed on the drone so that inference is performed in place. Nonetheless, FlockAI makes the provision for training to also take place on the drone itself, although this should be used with caution, as training is usually both a time- and compute-expensive process (unless the model is simple and the need for data is sparse). Hence, FlockAI provides the abstractions to on-board trained models, collect data from the enabled sensors, and “feed” inference algorithms with the collected data.
Moreover, with FlockAI users can architect sophisticated experiments to test various “what-if” scenarios. These allow ML practitioners to fine-tune ML drone applications and reveal their strengths and limitations before they are introduced in production. Specifically, users can probe a drone for performance metrics, adjust the workload, update the flight plan, and adapt the entire configuration at runtime. In regards to monitoring data, FlockAI provides several monitoring probes that work out of the box for monitoring the drone’s system performance (e.g., CPU, memory, I/O), communication overhead, and energy consumption through a composable energy model, as well as the means for users to create their own custom metrics so that data from their ML algorithms can be extracted as well. Finally, all the data collected from the enablement of a scenario are stored in a structured format so that they can be exported and shared among collaborating entities (e.g., fellow researchers). The flowchart in Figure 3 summarizes, in a compact view, the steps required to set up a testbed with FlockAI. In the following section, details are provided for each step.

4. Modeling and Implementation

This section provides details regarding the modeling and implementation of the modular design of the FlockAI framework, with emphasis on showcasing its customizability and extensibility.

4.1. Robotics Simulator

The simulator of choice for FlockAI is the open and popular Webots simulator [23]. Webots is a simulator for robots, sensors, and actuators that enables the synthesis of the aforementioned to build large simulated environments, denoted as worlds. The Webots feature-set extends well beyond the enablement of a physics engine for robot simulation. Specifically, the prime reason for adopting Webots in FlockAI is that it inherently supports the embedding of a development environment for robot programming in either C++, Python, or Java. Hence, Webots not only provides a visually representative environment, but also enables practitioners, through coded artifacts, to parameterize both the robot and the environment to achieve more realistic simulations. Specifically, users can create a controller and, with it, “control” the operation of a simulated robot.
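For context, the snippet below shows the bare minimum a plain Webots Python controller looks like, based on the standard Webots controller API; the device name used here is an assumption tied to a specific robot model and is not a FlockAI construct. It is this kind of boilerplate, repeated for every motor and sensor, that the FlockAI abstractions described next are meant to hide.

# Minimal plain-Webots controller skeleton (standard Webots Python API);
# "gps" is a world/robot-specific device name assumed for illustration.
from controller import Robot

robot = Robot()
timestep = int(robot.getBasicTimeStep())
gps = robot.getDevice("gps")        # device name depends on the robot model
gps.enable(timestep)

while robot.step(timestep) != -1:   # advance the simulation one step at a time
    x, y, z = gps.getValues()       # read the current position each step
    # ... compute and apply motor velocities here ...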
FlockAI capitalizes on this capability by providing a curated set of controllers embedding high-level abstractions in the Python programming language, so that drone experiments can be designed just like a set of blueprints is used to architect a large building. These abstractions enable ML practitioners to configure drone resource capabilities, enable sensing and communication modules, deploy their ML algorithm(s) that will run on the drone itself, and, finally, monitor the performance and impact of the ML-driven tasks.

4.2. Drone Configuration

One of the first difficulties presented to an ML practitioner is how to configure a robot emulator (e.g., Webots) so that a simulated device resembles the actual drone(s) used in production. To illustrate the difficulty of this task: one must enable a vast number of motor and sensing devices, with the configuration potentially requiring knowledge of drone specifics, such as how to enable a drone to move autonomously in a certain area (e.g., a bounding box) and perform certain maneuvers (e.g., move in a straight line or perform a zig-zag formation). To actually achieve this functionality, one must take positioning data from the camera, GPS, and gyroscope (e.g., roll, pitch, yaw) at the current time interval and compute the positioning for the next via a PID controller. This, and a number of other low-level configurations (even simple ones, such as setting the capacity of the drone’s flash storage, or more advanced settings, such as which sensors must or cannot be used together), slows down the testing process and can even derail practitioners from the actual testing scope.
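To give a flavor of the control-loop logic referred to above, the following is a minimal, generic PID sketch in plain Python; the gains, the altitude set-point, and the control period are made-up values for illustration and are not FlockAI defaults or part of its API.

# Illustrative sketch of the kind of low-level control loop FlockAI hides from
# the user: a textbook PID controller applied to a single axis.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g., stabilizing altitude: read the current altitude each control tick and
# derive a vertical thrust correction (all numbers are assumptions)
altitude_pid = PID(kp=2.0, ki=0.1, kd=0.5)
thrust_correction = altitude_pid.step(setpoint=10.0, measurement=9.4, dt=0.032)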
Towards this, FlockAI provides several abstractions easing the configuration of simulated drones. FlockAI provides blueprint classes for popular drones (e.g., DJI drones) that extend the FlockAIController so that encompassed motors, sensors, and various parameters, such as the drone weight, motor efficiency, battery capacity, etc., are pre-filled to ease configuration and quickly enable users to test their algorithms. For example, Listing 1 depicts an exemplary Python script for extending the FlockAI controller to create a pre-configured DJI Mavic 2 drone (https://www.dji.com/mavic-2). Hence, users can quickly attach and detach to a drone both motorized components (e.g., propellers) and sensing devices (e.g., camera, GPS) so as to customize their drone according to their needs. Therefore, if a user is working on a use-case that does not require a specific sensing device, then it does not need to be included. Having created a new drone controller or selected one from the FlockAI repository, users are now able to create their own customized drone experiments. Listing 2 depicts such a custom experiment that uses the DJIMavic2 drone (line 11). From this, we immediately observe that a user adopting the mavic controller does not need to configure any motors, sensors, or drone-specific parameterization (e.g., weight). Still, a user can alter and even extend the selected controller. For example, suppose the user will use the selected drone but considers testing with a more powerful battery; then this can easily be changed (line 12). In turn, FlockAI provides a Sensor class which can be extended to create and attach custom sensors (Listing 2) that either access data from a file or create random data based on a given probability kernel (lines 14–15).
Moreover, upon configuration, a drone can include a Pilot that implements a flight plan that can be statically defined, keyboard-driven, or autonomous. In Listing 2, the depicted drone is configured with an autopilot that will execute a flight within a user-desired bounding box and will navigate through the vicinity of this area by applying a zig-zag formation. Other autopilots currently available in the FlockAI repository include a pilot applying obstacle avoidance and another going through various user-given points of interest, with the navigation from point X to Y computed via a simple Euclidean distance.
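As a rough illustration of the point-of-interest pilot just mentioned, the sketch below implements the Euclidean-distance waypoint logic in plain Python; the class name, constructor, and method names are assumptions for illustration and do not reflect the actual FlockAI pilot interface (e.g., the sibling classes of BoundingBoxZigZagPilot).

# Hypothetical waypoint pilot in the spirit of the autopilots described above;
# all names here are illustrative, not the FlockAI pilot API.
import math

class SimpleWaypointPilot:
    def __init__(self, waypoints, tolerance=1.0):
        self.waypoints = list(waypoints)  # list of (x, y) points of interest
        self.tolerance = tolerance        # how close counts as "reached"

    def next_heading(self, position):
        """Return the heading (radians) towards the current waypoint, or None when done."""
        while self.waypoints:
            wx, wy = self.waypoints[0]
            px, py = position
            # plain Euclidean distance, as in the point-of-interest pilot above
            if math.hypot(wx - px, wy - py) <= self.tolerance:
                self.waypoints.pop(0)     # waypoint reached, move on to the next one
                continue
            return math.atan2(wy - py, wx - px)
        return None                       # all points of interest visited

pilot = SimpleWaypointPilot([(0, 10), (10, 10), (10, 0)])
heading = pilot.next_heading(position=(0.0, 0.0))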
Listing 1: Partial View of FlockAI Blueprint for DJI Drone
  • 1 from flockai.drones.controllers import FlockAIController
  • 2 …
  • 3 class DJIMavic2(FlockAIController):
  • 4
  • 5   def __init__(self, name="DJIMavic2", …):
  • 6     self.drone = self.configDrone(name, …)
  • 7     …
  • 8
  • 9   def configDrone(self, name, …):
  • 10    motors = [
  • 11       (MotorDevice.CAMERA, "cameraroll", AircraftAxis.ROLL),
  • 12       (MotorDevice.CAMERA, "camerapitch", AircraftAxis.PITCH),
  • 13       (MotorDevice.CAMERA, "camerayaw",     AircraftAxis.YAW),
  • 14       (MotorDevice.PROPELLER, "FLpropeller", Relative2DPosition(1, -1)),
  • 15       (MotorDevice.PROPELLER, "FRpropeller", Relative2DPosition(1, 1)),
  • 16       (MotorDevice.PROPELLER, "RLpropeller", Relative2DPosition(-1, -1)),
  • 17       (MotorDevice.PROPELLER, "RRpropeller", Relative2DPosition(-1, 1)),
  • 18    ]
  • 19    sensors = [
  • 20      (EnableableDevice.CAMERA, "camera"),
  • 21      (EnableableDevice.GPS, "gps"),
  • 22      (EnableableDevice.INERTIAL_UNIT, "inertial"),
  • 23      (EnableableDevice.COMPASS, "compass"),
  • 24      (EnableableDevice.GYRO, "gyro"),
  • 25      (EnableableDevice.BATTERY_SENSOR, "battery"),
  • 26      (EnableableDevice.WIFIRECEIVER, "receiver"),
  • 27      …
  • 28    ]
  • 29    mavic2 = FlockAIDrone(motors, sensors)
  • 30    mavic2.weight = 0.907 #in kg
  • 31    mavic2.battery = 3850 #in mAh
  • 32    mavic2.storage = 8000 #in MB
  • 33    …
  • 34    return mavic2
Listing 2: Exemplary FlockAI Experiment Blueprint
  • 1 from flockai.drones.controllers import DJIMavic2
  • 2 from flockai.drones.pilots import BoundingBoxZigZagPilot
  • 3 from flockai.drones.monitoring import FlightProbe, EnergyProbe
  • 4 from flockai.drones.sensors import TempSensor
  • 5 from Classifiers import CrowdDetectionClassifier
  • 6 …
  • 7 def MyDroneMLExperiment():
  • 8   autopilot = BoundingBoxZigZagPilot(…)
  • 9   probes = [FlightProbe(..), EnergyProbe(..)]
  • 10
  • 11   mydrone = DJIMavic2(pilot=autopilot, probes=probes)
  • 12   mydrone.getBattery().setCapacity(4500)
  • 13
  • 14   temp = TempSensor(trange=(20,32), dist=gaussian)
  • 15   mydrone.addSensor(temp)
  • 16
  • 17   crowddetect = CrowdDetectionClassifier()
  • 18   crowddetect.load_model('…/cnn_cd.bin')
  • 19   mydrone.enableML(algo=crowddetect, inf_period=5,…)
  • 20
  • 21   mydrone.activate() #start simulation
  • 22   mydrone.takeoff()
  • 23
  • 24   while mydrone.flying():
  • 25     img = mydrone.getCameraFeed(…)
  • 26     loc = mydrone.getGPScoordinates(…)
  • 27     testdata = (img, loc)
  • 28     pred = mydrone.classifier.inference(data=testdata,…)
  • 29     #do something with pred…and output energy and battery level
  • 30     eTotal = mydrone.getProbe("EnergyProbe").values().get("total")
  • 31    battery = mydrone.battery.getLevel()
  • 32     print(eTotal, battery)
  • 33     …
  • 34  out = mydrone.simulation.exportLogs(format_type=pandas_df)
  • 35  #do stuff to output

4.3. ML Algorithm Deployment

Once a drone is configured and ready to fly in a robotics simulator, new challenges arise for the ML practitioner that must be dealt with. These are, at the very least, three-fold: (i) a trained model must be on-boarded to the drone; (ii) runtime data from the drone’s sensors (e.g., camera) must be periodically (or on-demand) made available and wrangled to meet a certain format; and (iii) an algorithm must be configured so that ML inference is performed at runtime based on the harvested data and the deployed trained model.
FlockAI attempts to overcome this burden by simplifying the deployment of ML models, easing the extraction of sensing data and the passing of the data to the inference algorithm, and, finally, performing inference and formatting the response for subsequent use. Most importantly, FlockAI features a number of class abstractions for ML. Continuing the running example from the previous listings, Listing 3 shows an example of how a user can extend the FlockAIClassifier class so that their ML solution can be deployed with FlockAI. In its most simplistic form, the classifier provides the means to load a fixed trained model and apply the inference algorithm. The trained model that will be deployed on the (simulated) drone is given in binary format. This format is one of the prominent export formats used in frameworks such as TensorFlow to enable the interoperable exchange of trained ML models. Hence, no additional effort is required for an ML model to be deployed through FlockAI. In regards to the inference algorithm, this is user-defined and requires the user to implement the abstract method inference(). Moreover, as ML is a constantly evolving landscape with a large and diverse corpus of techniques and algorithms, no ML framework can cover everything. Towards this, despite the fact that FlockAI provides ML templates for regression, clustering, and classification, there are no templates, in the current version, for evolutionary techniques, such as swarm learning. To support swarm learning, a user must create a new ML algorithm type from the abstract FlockAIMLBaseClass, just like the other ML algorithm types currently supported. In contrast to classification, the swarm learning algorithm type (e.g., PSO) will not include the loading of a pre-trained (static) model but rather the periodic communication with a central coordination service (i.e., the base station) to receive updates, so that the local model is updated after each communication round, with inference embracing the updated model parameterization.
Furthermore, FlockAI is designed to support the on-boarding (and testing) of multiple ML algorithms, while inference need not be applied at the same time intervals. Finally, we note that both the input and the response of the inference task can be structured in different formats, including raw form, csv, or, to accommodate the most popular data science libraries in Python, numpy ndarrays and/or pandas dataframes.
Listing 3: Custom Classifier from FlockAI Base Class
  • 1 from flockai.drones.ml import FlockAIClassifier
  • 2 …
  • 3
  • 4 class CrowdDetectionClassifier(FlockAIClassifier):
  • 5
  • 6   def __init__(self, inf_period=10):
  • 7     super().__init__()
  • 8     self.inf_period = inf_period
  • 9     …
  • 10
  • 11   def load_model(self, model_path, params, …):
  • 12     …
  • 13
  • 14   def inference(self, testdata):
  • 15     …
  • 16     return prediction
  • 17
  • 18 …
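To make the skeleton of Listing 3 more concrete, the following hypothetical fill-in assumes the trained model is a pickled, scikit-learn-style classifier rather than the dlib CNN used later in the evaluation; the constructor and the load_model()/inference() hooks follow Listing 3, but everything inside them is an illustrative assumption and not the article’s actual implementation.

# Hypothetical concrete version of Listing 3, assuming a pickled model object
# exposing predict(); internals beyond the hooks shown in Listing 3 are assumed.
import pickle
import numpy as np
from flockai.drones.ml import FlockAIClassifier

class CrowdDetectionClassifier(FlockAIClassifier):

    def __init__(self, inf_period=10):
        super().__init__()
        self.inf_period = inf_period
        self.model = None

    def load_model(self, model_path, params=None):
        # load the binary model file referenced in Listing 2 (e.g., cnn_cd.bin)
        with open(model_path, "rb") as f:
            self.model = pickle.load(f)

    def inference(self, testdata):
        img, loc = testdata                      # camera frame and GPS coordinates
        features = np.asarray(img).reshape(1, -1)
        prediction = self.model.predict(features)[0]
        return prediction                        # e.g., estimated crowd count for this frame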

4.4. Energy Profiling

The following provides an overview of the vast configurations and parameters one must consider when attempting to obtain energy consumption analytics. Through this brief overview of the state-of-the-art in drone energy modeling, we want to show that there are a plethora of modeling approaches and a vast set of parameters that must be understood and obtained, while most models will need to be coded for a desired simulator. All of these must be apprehended prior to the emulation process, a task requiring knowledge from multiple domains, which, at the very least, slows experimentation and hinders innovation [24].
Equation (1) depicts the energy model integrated in Webots for calculating the energy consumption at the t-th time interval of a drone (and robots in general). In this equation, t_q is defined as the output torque of a rotational motor and C is a constant user-defined consumption factor in [0, ∞).
E_t = t_q · C        (1)
Despite its simplicity, one can easily argue that such a model ignores significant aspects of what constitutes energy consumption for drone technology. A more general model for a multi-copter drone can be defined as follows. Specifically, the total energy consumption of a multi-copter drone at the t-th time interval is denoted as E_t and can be modeled by the following equation:
E_t = E_motors + E_comm + E_proc        (2)
In this equation, E_motors denotes the energy consumption attributed to the mechanical parts tasked with “flying” the drone, including the electrical motors and rotating propellers. Naturally, this is the prominent energy-draining factor, with the literature reporting that E_motors may account for as much as 95% of E_t for a drone entrusted with a simple flying task, and, therefore, some consider that E_t ≈ E_motors [25]. However, this does not suffice when a drone is entrusted with an AI-driven task and an open link is established for real-time data dissemination. In this case, both E_proc, denoting the energy consumed for compute and sensing tasks (including flight control and image/video capture), and E_comm, the energy consumed for data exchange (including control messages), cannot be considered negligible [26].
Unweaving Equation (2), the energy consumed by the electrical motors, E_motors, to fly the drone can be modeled as:
E_motors = E_takeoff + E_hover + E_move + E_landing
In this equation, E_takeoff is the energy attributed to lifting the drone into the air until it enters a hovering state, and, similarly, E_landing denotes the energy consumed for the drone to be safely grounded. In turn, E_hover is the energy attributed to keeping the drone in the air and is dependent on both drone and air characteristics, such as the drone mass and air density. What is more, E_move is the (kinetic) energy required to set a drone in motion from a hovering state. Interestingly, as denoted by Marins et al. [27], E_move can simply be represented as a multiplying constant—modestly—affecting E_hover when acceleration is imposed on the moving drone (under “normal” weather conditions). So, assuming that movement occurs under constant velocity, meaning that the energy consumed for movement is only a fraction of the energy contributing to stationary hovering, and that takeoff/landing occur only once during the mission, the following simplification can be considered:
E_motors ≈ E_hover
Now, to calculate the energy consumed for a drone to hover, we note that energy consumption is equal to the power to hover multiplied by the time spent in the hovering state (E = P · t). The following depicts the power required to keep a drone in a hovering state, which depends on the mass of the drone (m), the area of the propeller (A), the air density (ρ), gravity (g), and the drone’s motor efficiency (η):
P_hover = (m · g)^{3/2} / (η · (2 · ρ · A)^{1/2})
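As a sanity check of the formula above, the short sketch below transcribes it into plain Python; the propeller area, motor efficiency, and hovering duration used in the example call are illustrative assumptions (only the 0.907 kg mass is taken from Listing 1), and this is not the FlockAI energy-profiler code.

# Direct transcription of the hover-power formula above (illustrative values).
import math

def hover_power(mass_kg, propeller_area_m2, efficiency, air_density=1.225, g=9.81):
    """P_hover = (m*g)^(3/2) / (eta * sqrt(2 * rho * A)), in watts."""
    return (mass_kg * g) ** 1.5 / (efficiency * math.sqrt(2 * air_density * propeller_area_m2))

p_hover = hover_power(mass_kg=0.907, propeller_area_m2=0.1, efficiency=0.7)  # ~77 W
e_hover = p_hover * 60  # energy in joules for one minute of hovering (E = P * t)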
From Equation (2), the communication overhead of a drone can be modeled as:
E_comm = P_x · t_x + P_r · t_r + P_idle · t_idle
In this equation, P_x is the power attributed to data transmission and t_x is the time spent in the transmission state. Similarly, P_r and t_r are the power and time consumed in the receiving state. When the communication unit is neither in a transmitting nor a receiving state, but is still powered on (e.g., idle waiting for messages), it still consumes power, albeit significantly less, which is modeled with the values P_idle and t_idle, respectively. In more complex settings, E_comm can be extended to also include components found in general mobile embedded systems, such as the energy consumed by the network unit if it can be placed in sleep mode, along with the energy consumed for state transitioning.
In a similar fashion, the compute overhead, E_proc, of a drone’s digital computing systems can be modeled as:
E_proc = P_fc · t_fc + P_soc · t_soc + P_io · t_io + P_ss · t_ss
In this equation, P_fc denotes the power level of the utilized flight controller (e.g., 8 W for the DJI A3), while P_soc denotes the power utilized by system-on-chip units entrusted with domain-specific tasks, such as the ML-driven algorithms (e.g., object detection) deployed by users. The P_soc component can be further decomposed into the power level for when the compute unit is in an active state and the minimal power consumed when idle. We particularly note that SoC power levels can drastically differ, where a Raspberry Pi will feature an active-state power level of approximately 8 W, while an Nvidia Jetson Nano with an on-board GPU unit will consume 57 W. Finally, P_io denotes the power for I/O (e.g., writing to a flash drive), while P_ss denotes the power driving peripheral sensing devices.
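Tying the components together, the sketch below is a plain-Python transcription of the E_comm and E_proc formulas above, summed into Equation (2); every power level and duration in the example call is an illustrative assumption, not a FlockAI default or a measured value.

# Plain-Python transcription of the E_comm / E_proc formulas and Equation (2);
# all numbers below are assumptions for illustration only.
def e_comm(p_x, t_x, p_r, t_r, p_idle, t_idle):
    """E_comm = P_x*t_x + P_r*t_r + P_idle*t_idle (powers in W, times in s, result in J)."""
    return p_x * t_x + p_r * t_r + p_idle * t_idle

def e_proc(p_fc, t_fc, p_soc, t_soc, p_io, t_io, p_ss, t_ss):
    """E_proc = P_fc*t_fc + P_soc*t_soc + P_io*t_io + P_ss*t_ss."""
    return p_fc * t_fc + p_soc * t_soc + p_io * t_io + p_ss * t_ss

comm = e_comm(p_x=1.2, t_x=5, p_r=1.0, t_r=2, p_idle=0.3, t_idle=53)
proc = e_proc(p_fc=8.0, t_fc=60, p_soc=8.0, t_soc=20, p_io=0.5, t_io=2, p_ss=0.4, t_ss=60)
e_motors = 77.0 * 60          # roughly the hover power from the sketch above, over a 60 s interval
e_t = e_motors + comm + proc  # Equation (2): total energy for the interval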
The FlockAI approach. The above gives a brief, but illustrative, view of the complexity of energy modeling for drones. Implementing such a model requires the definition of a plethora of parameters and the measurement of the utilization of several resources. In order to provide a simple “jump-start” energy profiler so that novice users can quickly perform emulated measurements, but, at the same time, provide a comprehensive modeling approach for more advanced analysis, FlockAI provides users with a compositional energy profiler and pre-filled drone energy templates based on an ensemble of the SOTA in energy modeling for drones. Specifically, users can simply “attach” different components to the instantiated energy model of a drone. For example, Listing 4 illustrates how a user can update the energy model of a drone with a customized E_proc component, so that the energy model for the processing overhead ignores the sensor and I/O sub-components and, in turn, considers the power level for the flight controller to be 8.3 W.
Listing 4: Example of Customizing a Drone’s Energy Model
  • 1   from flockai.models.energy import Eproc
  • 2
  • 3   cusEproc = Eproc(name="custom-eproc")
  • 4   cusEproc.removeComponents(["sensors","io"])
  • 5   efc = cusEproc.getComponent("fc")
  • 6   efc.setdesc("DJI A3 flight controller")
  • 7   efc.setpower(8.3)
  • 8   mydrone.energy.updComponent("proc", cusEproc)
  • 9   …
In regards to the pre-filled templates, for the drones uploaded to the project repository (e.g., DJI drones), FlockAI provides pre-filled parameterization with all low-level energy parameters pre-configured (e.g., drone mass, power of the flight controller, etc.) to ease the configuration overhead when starting to use FlockAI.

4.5. Monitoring

To perform seamless and efficient monitoring, FlockAI embraces the side-car architecture paradigm. In particular, the side-car paradigm enables FlockAI to add monitoring capabilities to a simulated drone with no additional configuration to the actual business logic of the ML solution. To achieve this, FlockAI deploys monitoring probes to the containerized environment encompassing the code that will run on the drone itself, with each probe operating as an independent long-running process for harvesting data. Monitoring probes can logically group multiple metrics together in order to reduce the monitoring overhead when accessing common and shared resources (e.g., requests to an API, file handlers, etc.). Furthermore, by operating independently, different probes can be enabled/disabled without affecting the data collection process of the other probes, while both performance and app-level metrics, along with their periodicity, can be customized accordingly by the user. A number of pre-implemented probes already exist in the FlockAI codebase that enable a comprehensive overview of the impact ML algorithms have on the resource utilization of a flying drone. The metrics encompassed by these probes, depicted in Table 1, are also used to continuously populate the energy model which, in turn, feeds updates to a drone’s energy probe.
Moreover, users can expose application-level metrics for their ML services by creating custom monitoring probes based on the probe interface exposed by FlockAI. The probe interface adopts the open specification spawned from the JCatascopia monitoring system [28], which we have ported to the Python language and provide as open-source (https://github.com/dtrihinas/PyCatascopia), even for projects not related to drones. An example of an app-level probe was created for the use-case presented in the evaluation (object detection), which is developed so that an ML practitioner can view, in one consolidated environment, both drone resource utilization and algorithm performance. To create custom probes, FlockAI exposes a high-level interface with users only needing to provide definitions (e.g., timer, ratio, etc.) for the to-be-collected metrics and the implementation of the collect() method, periodically invoked during runtime. Hence, with the use of the probing interface, users can solely focus on “pushing” updates to the monitoring system rather than having to deal with time-consuming tasks, such as thread synchronization, logging, and error reporting, that are requirements of the monitoring process. Moreover, users can attach a queue to the FlockAI monitoring system to receive real-time updates of new measurements and/or utilize a key-value storage so that experiment data are persistently stored. Finally, monitoring data can be provided in raw form, json, and/or a pandas dataframe so that data can be used by ML practitioners in their native (data science) environment.
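For orientation, the sketch below shows what a custom app-level probe in this spirit could look like; only the collect() hook is taken from the description above, while the base-class name, the record() helper, and the metric names are assumptions for illustration and not the actual FlockAI/PyCatascopia API.

# Hypothetical app-level probe; only collect() mirrors the documented interface.
class FlockAIProbe:                      # stand-in for the FlockAI probe interface
    def collect(self):
        raise NotImplementedError

class CrowdDetectionProbe(FlockAIProbe):
    def __init__(self):
        self.frames_seen = 0
        self.crowds_detected = 0
        self.last_latency = 0.0

    def record(self, crowd_found, latency_s):
        # called by the experiment loop after each inference of the ML algorithm
        self.frames_seen += 1
        self.crowds_detected += int(bool(crowd_found))
        self.last_latency = latency_s

    def collect(self):
        # periodically invoked by the monitoring system to pull the latest values
        return {"frames_seen": self.frames_seen,
                "crowds_detected": self.crowds_detected,
                "inference_latency_s": self.last_latency}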

5. Evaluation

FlockAI is deliberately designed to facilitate the rapid experimentation of ML-driven drone applications in order to derive analytic insights on the impact of ML algorithms on a drone’s resources. This section provides a comprehensive study demonstrating the main contributions of our work by introducing a realistic use-case undergoing various deployments and tested with different ML algorithms, so that ML practitioners can truly assess whether pinpointed KPIs are met before deploying the application to production.

5.1. Use-Case

Suppose an ML practitioner must test both the efficacy and efficiency of a developed ML solution that employs drones for pinpointing potentially crowded areas by performing face detection to count the number of humans in an urban environment. Without ML, this process must either be performed by human inspectors on the ground or by manually counting people from the video feed of a drone. Both are time-consuming processes. Towards this, the drone is given a flight plan including a (bounded) area of interest and a starting location. The drone will then fly above this area and scan the bounded area with its camera to capture a detailed image feed. This image feed will be used by the ML algorithm to infer both the locations of the human subjects and how many are in the drone’s viewpoint. Figure 4 depicts an example when applied on a camera feed extracted from the VisDrone dataset [29]. Table 2 presents a high-level overview of the experiment scenarios that will be run, so that users may reproduce these scenarios by following the steps introduced by the flowchart in Figure 3. In brief, Scenario 1 examines the overhead of running an ML algorithm for crowd detection (sc. 1-B) vs. the same drone flying without an ML task configured (sc. 1-A). Scenario 2 examines the trade-off between performing in-place (sc. 1-B) and remote ML inference (sc. 2), while Scenario 3 examines the overhead of performing in-place inference for crowd detection when adopting two different ML algorithms (sc. 3-A, 3-B).
We clarify that the scope of this article is not to propose a new and novel crowd detection algorithm, nor the parameterization of the algorithm (i.e., optimal safety distances). Rather, our goal is to showcase the ability of FlockAI to answer the four questions, introduced in Section 2, that are faced by ML practitioners when in need of testing their ML solutions. These questions will be answered by undergoing three experiment scenarios and evaluating different performance measures collected by FlockAI during the experimentation, so that one can truly understand the key advantages and pains of the designed solution. The experiment scenarios utilize the FlockAI blueprint for the DJI Mavic 2 Pro drone (a truncated version is shown in Listing 1). In turn, the experiment scenarios resemble the one showcased in Listing 2 and all are openly available in the FlockAI project GitHub repository for reproducibility purposes.

5.2. Scenario 1: On-Board Inference

In this experiment scenario, just like the one presented in Listing 2, the ML practitioner would like to evaluate the impact of his/her algorithm on the underlying drone resources. Hence, the ML algorithm is deployed on the drone and run during the flight. This means that when the camera captures a new image feed, it is passed to the on-board inference algorithm for human detection and counting. We note that the ML algorithm used in this experiment is implemented in Python and adopts a pre-trained Convolutional Neural Network (CNN) developed on top of the C++ dlib engine and capable of identifying human faces even when given images with a moving viewpoint (e.g., a drone camera feed) [30]. For testing data, we use the open DroneFace dataset, which contains 20 camera sequences capturing 30 images each with the drone flying above humans in an urban environment [31]. Although the algorithm accuracy is not of interest in this experiment, for thoroughness, we note that it is 95%. Accuracy will be further investigated in Scenario 3. Moreover, only when a potentially crowded region is detected by the on-board ML algorithm is the ground station notified, so that operators can verify and take further action (i.e., send an SMS for social distancing). In this case, the data payload exchanged between the drone and ground station includes the camera feed annotated with the crowded region(s) and drone positioning data. As a baseline for comparison purposes, we consider the drone performing the same flight but without the deployment of the ML algorithm.
Figure 5 depicts the energy consumption of the drone when undertaking the experiment flight without performing the ML task, while Figure 6 depicts the drone’s energy consumption with the ML task enabled. In these figures, the energy consumption is decomposed by FlockAI into the energy consumed by the motor, processing, and communication components of the drone. From both of these figures, we immediately observe that E_motors is the dominating energy-consuming factor. In fact, without the ML task, E_motors accounts for approximately 91% of the total energy. Of the other two components, the majority of the processing overhead comes from the autopilot, while the modest communication overhead (3%) includes the health checks exchanged between the drone and the ground station.
However, despite the significance of E_motors, when the ML task is enabled, E_proc contributes approximately 21% of the total energy, which is far from negligible. The additional energy consumption is, of course, attributed to the ML algorithm. In order to further understand the impact of the ML algorithm on the energy consumed for processing, we use FlockAI monitoring data to further decompose the E_proc component into its sub-components. These are depicted in Figure 7, where we observe that the majority of the energy for processing is consumed by the CPU when in the active state (∼79%), and this, of course, is attributed to the overhead imposed by the ML inference algorithm. Moreover, one can also observe that a fraction of the overhead is attributed to the use of the flight controller (∼9%) and another to the CPU when in the idle state (∼10%), while the I/O overhead is minimal (∼1%) since images are not stored. In regards to the total flight time, when deploying the ML task and performing on-board inference, the flight time is reduced from 30 min and 6 s to 24 min and 48 s. Finally, we reiterate that an ML practitioner undergoing this experiment now has answers for both Q1 and Q2 of the questions presented in Section 2.

5.3. Scenario 2: Remote Inference

This experiment scenario resembles Scenario 1, but with a key differentiating factor: the ML algorithm is run remotely, specifically, on the ground station. Hence, during the flight, the drone is programmed to continuously collect both the camera feed and positioning data, with these streamed to the ground station through an open Wi-Fi link established between the drone and the ground station.
Figure 8 depicts the decomposed energy consumption for when inference is performed remotely. Comparing the results of this experiment with the previous one, we immediately observe that the energy consumption for communication is higher, and specifically, 16% higher compared to the no-ML scenario and 12% higher compared to on-board inference. Obviously, the key difference is that E_comm is now the second-largest energy-consuming component, which is due to the fact that the wireless link is open throughout the flight and a data payload is exchanged with the ground station. In turn, we observe something that may not be immediately evident. Despite the fact that inference is performed remotely, E_proc has not dropped to the value it had when a non-ML flight was executed. This is due to the data preparation tasks for the sensory data and the “firing” of the data transmission.
In regards to the effect on the total flight time, with inference performed remotely, the drone can stay in the air for 25 min and 51 s, which is a bit more than 1 min longer compared to on-board inference. Still, since the data payload is actually small (images in the DroneFace dataset are not of the FHD quality usually found in today’s drones), performing remote inference seems to “save” a bit of flight time, but only about 1 min. However, Figure 9 provides a deeper overview of the cost of performing remote inference. In this figure, one immediately observes the high latency for remote inference (median 3.15 s) compared to on-board inference (median 1.36 s), which can be disastrous depending on the use-case. Thus, an important insight is derived: deciding to “offload” the ML task from the drone may save a bit of flight time; however, this can actually hamper overall flight performance due to the delay in performing the inference task. Hence, having performed this experiment, an ML practitioner can answer Q3.

5.4. Scenario 3: Testing Different ML Algorithms

This experiment scenario examines the trade-off between accuracy and drone performance when using different ML algorithms to achieve the same end goal (i.e., human detection and counting). This is of high interest to ML practitioners, as one would like to be able to compare algorithms based on different metrics to understand the advantages and pains of the algorithms before deploying to production. Hence, Scenario 3 adopts the same blueprint as Scenario 1, but for each experiment run a different algorithm is used for on-board inference. The ML algorithms that are examined are the CNN used in Scenario 1 and an ML algorithm for human face recognition adopting Frozen Inference Graphs (FIGs) with the use of Tensorflow [32]. All of the sequences in the DroneFace dataset were used again as test data.
Table 3 depicts the results of the experiments as extracted from FlockAI monitoring. From the results, we observe that the CNN algorithm achieves the best accuracy, standing at 95%. This translates to being able to correctly detect the crowd in 19 out of the 20 sequences. On the other hand, the FIGs algorithm achieves only 60% accuracy, which translates to 12 correctly identified sequences out of the 20. In regards to inference time, we observe that the median inference time to handle an image in the camera feed is far from comparable for the two algorithms, with the CNN featuring a 10× higher inference time. To further investigate inference, a custom monitoring metric was created in FlockAI to also keep track of how fast the crowd was detected in an image sequence. We reiterate that each sequence contains 30 images, and the metric created (inference delay) measures at which point in the 30-image sequence the crowd was detected. Hence, if the inference delay is 26, this denotes that the crowd was detected from image 26 onward. Table 4 depicts the inference delay for the two algorithms. In this table, a dash denotes that the algorithm could not detect the crowd.
Clearly, the CNN algorithm is more accurate, but this accuracy comes with a penalty in the speed at which the ML task can output a decision for the captured images, and in the compute resources consumed. To examine the impact of the additional processing overhead imposed by the CNN when compared to FIGs, we examine the total flight time for the drone. Hence, Table 3 also depicts the time the drone stays in the air when employing either of the two algorithms. From this, we immediately observe that the FIGs algorithm enables the drone to stay in the air for an additional 2 min and 34 s. Based on these results, and for the specific use-case at hand, despite the fact that the FIGs algorithm can boost mission completion time by over 2 min, this comes with a significant hit in terms of accuracy. Clearly, the 35% boost in accuracy for the CNN algorithm is of most importance, even if this additional accuracy translates to 2.5 min less flight time. Thus, by being able to rapidly reproduce and replay emulated testbeds, the ML practitioner is able to answer Q4 and, having answered all four questions, he/she can now make more informed decisions.

6. Related Work

As drone technology is now just growing out of its infancy, the majority of benchmarking frameworks for drones have been designed to support the testing of various hardware capabilities and/or flight control and state estimation. For example, DronesBench [33] is a system for stress testing, in a controlled (and grounded) environment, the hardware components of real drones during take-off and landing. The system aims to detect faults in the hardware by measuring the thrust force control action, the power consumption of the motorized components, and the attitude stability.
On the other hand, jMavSim [34] is a simulation tool aimed at simplifying the design and testing of autonomous flight control (e.g., mission planning) for drones that adopt the PX4 autopilot firmware. The jMavSim programming environment is in Java, and both monitoring and energy consumption data (for the motorized components) depend on external tools that interface with the PX4 autopilot when the testing adopts a hardware-in-the-loop configuration. In turn, RotorS [35] is a modular framework to design algorithms for drone flight control and state estimation, which can then be tested in the Gazebo simulator to debug errors that could potentially lead to drone collisions and crashes. Another tool is MRS [36], an open-source framework with various drone templates (i.e., DJI, Tarot drones) focusing on the design and testing of user-developed C++ algorithms for drone flight control and state estimation. Unfortunately, both RotorS and MRS lack energy profiling and do not provide any monitoring insights into the system utilization and communication overhead of the drone resulting from the execution of user algorithms.
Nonetheless, a handful of frameworks are now aiming to provide users with the ability to deploy and test algorithms over drone settings. Specifically, UTSim [37] is a framework for testing, over a simulator (Unity), different algorithms for air traffic integration. In particular, users specify, via C# scripts, the properties of the environment, including the number of drones and their configuration (i.e., speed, destination), and specify the algorithm used for path planning and collision avoidance. At the end, the simulator outputs a log file capturing data such as the number of sent and received messages, the number of detected objects, and the number of collided drones. In turn, AirSim [38] is a framework built on top of the Unreal game engine to provide users with the ability to simulate a drone flight under various environmental conditions. These include wind effects, GPS signal degradation, and physical phenomena, such as drag and gravity alterations. Moreover, the authors state that AirSim was proposed for developers to generate (annotated) training data to be used by ML algorithms. However, no facilitation of the execution of ML algorithms on the drone itself is provided.
Moreover, Tropea et al. introduce FANETSim [25], a Java-based simulation tool that enables the testing of drone-to-ground-station and drone-to-drone communication overhead. Overhead is examined in terms of message counts and energy consumption. However, the energy model only considers energy relevant to flying the drone (E_motors), which is modeled with drone velocity being the only variable considered. Finally, Deepbots [39] is a framework of abstractions enabling Reinforcement Learning (RL) over Webots by acting as a middleware between the simulator and the OpenAI Gym. Specifically, users extend a set of functions for an AI agent to run in Webots. However, this only includes RL parameterization, such as how rewards are updated, how to calculate the next step, and how to process a new observation, hence leaving drone configuration, performance monitoring, and even energy modeling as burdensome tasks for the user.

7. Discussion

Table 5 provides a concise overview that serves as a guide for a qualitative comparison of the aforementioned works. From this overview, one first observes that the majority of the frameworks support the deployment and testing of user-designed algorithms for flight control and state estimation. The execution setting of these algorithms varies, with some frameworks opting for remote execution (i.e., jMavSim, UTSim, AirSim) and others for in-place execution (autopiloting). However, no framework other than FlockAI supports both execution modes. As shown in experiment scenario 2, evaluating when to perform on-board or remote inference is of critical importance for ML-driven drone applications, as inference can impact flight time. Still, this may be justified by the fact that, other than Deepbots, FlockAI is the only drone testing framework that supports ML algorithm evaluation. With regard to monitoring, the majority of the introduced works provide metrics for flight control and a few provide network/communication data, but only FlockAI provides insights into system utilization (i.e., CPU, memory, I/O). Moreover, energy profiling in the majority of the works is limited to evaluating the impact of the motorized components. However, our evaluation has shown that processing and communication overhead contribute significantly to the total energy deficit. Finally, a note on the limitations of FlockAI: the current version does not support configuring environmental conditions in the simulation, as opposed to AirSim, and FlockAI only supports drone controller programming in Python, even though Webots can also run C++ and Java controllers.
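As a back-of-the-envelope illustration of this trade-off, the snippet below compares the energy a drone would spend running one inference on-board against the energy spent transmitting the input frame to a ground station. It is a minimal sketch with made-up parameter values, not FlockAI's API; real figures would come from the Energy and Comm probes for the drone and model under test.

```python
# Hypothetical, illustrative figures; real values should be measured with
# FlockAI's Energy and Comm probes for the specific drone and ML model.
INFER_POWER_W = 4.0        # extra board power draw while the model is running
INFER_TIME_S = 1.4         # median on-board inference latency (seconds)
TX_ENERGY_J_PER_KB = 0.05  # energy cost of transmitting one KB over wifi
IMAGE_SIZE_KB = 350.0      # size of one camera frame sent for remote inference

def onboard_inference_energy() -> float:
    """Energy (J) consumed by one on-board inference."""
    return INFER_POWER_W * INFER_TIME_S

def remote_inference_energy() -> float:
    """Energy (J) consumed by shipping one frame to the ground station."""
    return TX_ENERGY_J_PER_KB * IMAGE_SIZE_KB

if __name__ == "__main__":
    e_on, e_off = onboard_inference_energy(), remote_inference_energy()
    print(f"on-board: {e_on:.1f} J per frame, remote: {e_off:.1f} J per frame")
    print("prefer offloading" if e_off < e_on else "prefer on-board inference")
```

Naturally, a complete comparison would also factor in the additional hovering time spent waiting for remote results, which is exactly the kind of interplay the scenario 2 experiments expose.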

8. Conclusions and Future Work

In this article, we have introduced FlockAI: an open and extensible framework facilitating the rapid prototyping of reproducible experiment scenarios for ML-driven drone applications evaluated on top of a robotic simulator. Specifically, FlockAI embeds a powerful modeling framework for designing experiment benchmarks, so that ML practitioners can configure drone resource capabilities, enable sensing and communication modules, deploy the ML algorithm(s) that will run on the drone itself and, finally, monitor the performance and impact of the ML-driven tasks. Through runtime monitoring, ML practitioners are able to evaluate various key performance indicators of their ML-driven applications, including flight performance, resource utilization, communication overhead, and energy consumption. The latter is of extreme importance, as the flight time of a drone depends on its battery life and, with ML-driven tasks constituting the majority of a drone's compute and communication overhead, their energy impact must be assessed when designing energy-aware solutions.
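To give a flavor of this workflow, the sketch below outlines how such an experiment could be assembled programmatically; the class and method names are purely illustrative assumptions and do not reflect FlockAI's actual API.

```python
# Hypothetical sketch of the experiment workflow described above; class and
# method names are illustrative placeholders, not FlockAI's actual API.

class Experiment:
    """Bundles a drone template, enabled monitoring probes, and an ML task."""

    def __init__(self, drone_template, autopilot):
        self.drone_template = drone_template  # e.g., "DJI-Mavic2"
        self.autopilot = autopilot            # e.g., "bounding-box-zigzag"
        self.probes = []                      # enabled monitoring probes
        self.ml_task = None                   # (model, execution mode)

    def enable_probes(self, *probes):
        # Flight, Compute, Comm and Energy probes, as listed in Table 1
        self.probes.extend(probes)

    def deploy_model(self, model, execution="on-board"):
        # execution is either "on-board" or "remote"
        self.ml_task = (model, execution)

    def run(self):
        # A real run would drive the Webots simulation and return probe readings.
        return {"flightTime": 0.0, "eTotal": 0.0, "inferences": 0}


exp = Experiment(drone_template="DJI-Mavic2", autopilot="bounding-box-zigzag")
exp.enable_probes("flight", "compute", "comm", "energy")
exp.deploy_model(model="cnn-crowd-detector", execution="on-board")
report = exp.run()
```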
As future work, we are in the process of creating an SDK that will enable ML practitioners to use and interact with FlockAI natively in a notebook environment (e.g., Jupyter, Colab). Users will be able to start a simulation and test their ML algorithms under various use-case scenarios without leaving the comfort of the development environment that feels "natural" to them. The FlockAI SDK will be designed in Python, and the interaction with the simulator will be realized by adopting a Webots web streaming server to provide ML-driven drone application testing-as-a-service. Finally, our near-term implementation plan includes extending the ML class repository to also include swarm learning algorithms (e.g., PSO).

Author Contributions

Conceptualization, D.T. and I.K.; Methodology, D.T., M.A. and I.K.; Project administration, D.T.; Software, D.T., M.A. and K.A.; Validation, M.A. and K.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the University of Nicosia Seed Grant Scheme (2020–2022) for the FlockAI project.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cui, J.Q.; Phang, S.K.; Ang, K.Z.Y.; Wang, F.; Dong, X.; Ke, Y.; Lai, S.; Li, K.; Li, X.; Lin, F.; et al. Drones for cooperative search and rescue in post-disaster situation. In Proceedings of the 2015 IEEE 7th International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM), Siem Reap, Cambodia, 15–17 July 2015; pp. 167–174. [Google Scholar] [CrossRef]
  2. Kyrkou, C.; Timotheou, S.; Kolios, P.; Theocharides, T.; Panayiotou, C. Drones: Augmenting Our Quality of Life. IEEE Potentials 2019, 38, 30–36. [Google Scholar] [CrossRef]
  3. Wu, D.; Arkhipov, D.I.; Kim, M.; Talcott, C.L.; Regan, A.C.; McCann, J.A.; Venkatasubramanian, N. ADDSEN: Adaptive Data Processing and Dissemination for Drone Swarms in Urban Sensing. IEEE Trans. Comput. 2017, 66, 183–198. [Google Scholar] [CrossRef]
  4. Puri, V.; Nayyar, A.; Raja, L. Agriculture drones: A modern breakthrough in precision agriculture. J. Stat. Manag. Syst. 2017, 20, 507–518. [Google Scholar] [CrossRef]
  5. Sacco, A.; Flocco, M.; Esposito, F.; Marchetto, G. An architecture for adaptive task planning in support of IoT-based machine learning applications for disaster scenarios. Comput. Commun. 2020, 160, 769–778. [Google Scholar] [CrossRef]
  6. Tseng, C.; Chau, C.; Elbassioni, K.M.; Khonji, M. Flight Tour Planning with Recharging Optimization for Battery-operated Autonomous Drones. arXiv 2017, arXiv:1703.10049v1. [Google Scholar]
  7. Abeywickrama, H.V.; Jayawickrama, B.A.; He, Y.; Dutkiewicz, E. Comprehensive Energy Consumption Model for Unmanned Aerial Vehicles, Based on Empirical Studies of Battery Performance. IEEE Access 2018, 6, 58383–58394. [Google Scholar] [CrossRef]
  8. Trihinas, D.; Pallis, G.; Dikaiakos, M. Low-cost adaptive monitoring techniques for the internet of things. IEEE Trans. Serv. Comput. 2018, 14, 487–501. [Google Scholar] [CrossRef] [Green Version]
  9. Dimitropoulos, S. If One Drone Isn’t Enough, Try a Drone Swarm. 2019. Available online: https://www.bbc.com/news/business-49177704 (accessed on 9 November 2021).
  10. Vásárhelyi, G.; Virágh, C.; Somorjai, G.; Nepusz, T.; Eiben, A.E.; Vicsek, T. Optimized flocking of autonomous drones in confined environments. Sci. Robot. 2018, 3. [Google Scholar] [CrossRef] [Green Version]
  11. Schilling, F.; Lecoeur, J.; Schiano, F.; Floreano, D. Learning Vision-Based Flight in Drone Swarms by Imitation. IEEE Robot. Autom. Lett. 2019, 4, 4523–4530. [Google Scholar] [CrossRef] [Green Version]
  12. Taha, B.; Shoufan, A. Machine Learning-Based Drone Detection and Classification: State-of-the-Art in Research. IEEE Access 2019, 7, 138669–138682. [Google Scholar] [CrossRef]
  13. Chen, W.; Liu, B.; Huang, H.; Guo, S.; Zheng, Z. When UAV Swarm Meets Edge-Cloud Computing: The QoS Perspective. IEEE Netw. 2019, 33, 36–43. [Google Scholar] [CrossRef]
  14. Commercial Drones Are Here: The Future of Unmanned Aerial Systems. 2021. Available online: https://www.mckinsey.com/industries/travel-logistics-and-infrastructure/our-insights/commercial-drones-are-here-the-future-of-unmanned-aerial-systems (accessed on 9 November 2021).
  15. Hodge, V.J.; Hawkins, R.; Alexander, R. Deep reinforcement learning for drone navigation using sensor data. Neural Comput. Appl. 2021, 33, 2015–2033. [Google Scholar] [CrossRef]
  16. San Juan, V.; Santos, M.; Andújar, J.M.; Volchenkov, D. Intelligent UAV Map Generation and Discrete Path Planning for Search and Rescue Operations. Complex 2018, 2018. [Google Scholar] [CrossRef] [Green Version]
  17. Azari, M.M.; Sallouha, H.; Chiumento, A.; Rajendran, S.; Vinogradov, E.; Pollin, S. Key Technologies and System Trade-offs for Detection and Localization of Amateur Drones. IEEE Commun. Mag. 2018, 56, 51–57. [Google Scholar] [CrossRef] [Green Version]
  18. Zhang, Y.; Shen, L.; Wang, X.; Hu, H.M. Drone Video Object Detection using Convolutional Neural Networks with Time Domain Motion Features. In Proceedings of the 2020 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), Shenzhen, China, 6–8 August 2020; pp. 153–156. [Google Scholar] [CrossRef]
  19. Besada, J.A.; Bergesio, L.; Campaña, I.; Vaquero-Melchor, D.; López-Araquistain, J.; Bernardos, A.M.; Casar, J.R. Drone mission definition and implementation for automated infrastructure inspection using airborne sensors. Sensors 2018, 18, 1170. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Kumar, A.; Sharma, K.; Singh, H.; Naugriya, S.G.; Gill, S.S.; Buyya, R. A drone-based networked system and methods for combating coronavirus disease (COVID-19) pandemic. Future Gener. Comput. Syst. 2021, 115, 1–19. [Google Scholar] [CrossRef]
  21. Qin, Y.; Kishk, M.A.; Alouini, M.S. Performance Evaluation of UAV-enabled Cellular Networks with Battery-limited Drones. IEEE Commun. Lett. 2020, 24, 2664–2668. [Google Scholar] [CrossRef]
  22. Symeonides, M.; Georgiou, Z.; Trihinas, D.; Pallis, G.; Dikaiakos, M.D. Fogify: A Fog Computing Emulation Framework. In Proceedings of the 2020 IEEE/ACM Symposium on Edge Computing (SEC), San Jose, CA, USA, 12–14 November 2020; pp. 42–54. [Google Scholar] [CrossRef]
  23. Webots Robot Simulator. 2021. Available online: https://cyberbotics.com/ (accessed on 9 November 2021).
  24. Trihinas, D.; Agathocleous, M.; Avogian, K. Composable Energy Modeling for ML-Driven Drone Applications. In Proceedings of the 2021 IEEE International Conference on Cloud Engineering (IC2E), San Francisco, CA, USA, 4–8 October 2021; pp. 231–237. [Google Scholar] [CrossRef]
  25. Tropea, M.; Fazio, P.; De Rango, F.; Cordeschi, N. A New FANET Simulator for Managing Drone Networks and Providing Dynamic Connectivity. Electronics 2020, 9, 543. [Google Scholar] [CrossRef] [Green Version]
  26. Zeng, Y.; Zhang, R. Energy-Efficient UAV Communication With Trajectory Optimization. IEEE Trans. Wirel. Commun. 2017, 16, 3747–3760. [Google Scholar] [CrossRef] [Green Version]
  27. Marins, J.L.; Cabreira, T.M.; Kappel, K.S.; Ferreira, P.R. A Closed-Form Energy Model for Multi-rotors Based on the Dynamic of the Movement. In Proceedings of the 2018 VIII Brazilian Symposium on Computing Systems Engineering (SBESC), Salvador, Brazil, 5–8 November 2018; pp. 256–261. [Google Scholar] [CrossRef]
  28. Trihinas, D.; Pallis, G.; Dikaiakos, M.D. Monitoring elastically adaptive multi-cloud services. IEEE Trans. Cloud Comput. 2016, 6, 800–814. [Google Scholar] [CrossRef]
  29. VisDrone ECCV 2020 Crowd Counting Challenge. 2020. Available online: http://aiskyeye.com/challenge/crowd-counting/ (accessed on 9 November 2021).
  30. Geitgey, A. Python Face Recognition Library. 2021. Available online: https://github.com/ageitgey/face_recognition (accessed on 9 November 2021).
  31. Hsu, H.J.; Chen, K.T. DroneFace: An Open Dataset for Drone Research. In Proceedings of the 8th ACM on Multimedia Systems Conference, Taipei, Taiwan, 20–23 June 2017; Association for Computing Machinery: New York, NY, USA, 2017. MMSys’17. pp. 187–192. [Google Scholar] [CrossRef]
  32. TensorFlow. Face Detection using Frozen Inference Graphs. 2021. Available online: https://www.tensorflow.org/ (accessed on 9 November 2021).
  33. Daponte, P.; De Vito, L.; Lamonaca, F.; Picariello, F.; Riccio, M.; Rapuano, S.; Pompetti, L.; Pompetti, M. DronesBench: An innovative bench to test drones. IEEE Instrum. Meas. Mag. 2017, 20, 8–15. [Google Scholar] [CrossRef]
  34. jMavSim. Available online: https://docs.px4.io/master/en/ (accessed on 9 November 2021).
  35. Furrer, F.; Burri, M.; Achtelik, M.; Siegwart, R. RotorS—A Modular Gazebo MAV Simulator Framework. In Robot Operating System (ROS): The Complete Reference; Koubaa, A., Ed.; Springer International Publishing: Cham, Switzerland, 2016; Volume 1. [Google Scholar]
  36. Báca, T.; Petrlík, M.; Vrba, M.; Spurný, V.; Penicka, R.; Hert, D.; Saska, M. The MRS UAV System: Pushing the Frontiers of Reproducible Research, Real-world Deployment, and Education with Autonomous Unmanned Aerial Vehicles. J. Intell. Robotic Syst. 2021, 102, 26. [Google Scholar] [CrossRef]
  37. Al-Mousa, A.; Sababha, B.H.; Al-Madi, N.; Barghouthi, A.; Younisse, R. UTSim: A framework and simulator for UAV air traffic integration, control, and communication. Int. J. Adv. Robot. Syst. 2019, 16, 1729881419870937. [Google Scholar] [CrossRef] [Green Version]
  38. Shah, S.; Dey, D.; Lovett, C.; Kapoor, A. Airsim: High-fidelity visual and physical simulation for autonomous vehicles. In Field and Service Robotics; Springer: Berlin/Heidelberg, Germany, 2018; pp. 621–635. [Google Scholar]
  39. Kirtas, M.; Tsampazis, K.; Passalis, N.; Tefas, A. Deepbots: A Webots-Based Deep Reinforcement Learning Framework for Robotics. In Artificial Intelligence Applications and Innovations; Maglogiannis, I., Iliadis, L., Pimenidis, E., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 64–75. [Google Scholar]
Figure 1. Screenshot from FlockAI with a drone deployed to perform object detection.
Figure 2. High-level and abstract overview of FlockAI.
Figure 3. FlockAI experiment testbed setup steps.
Figure 4. Example application of ML algorithm pinpointing potential crowded areas.
Figure 5. Energy decomposed into components (No ML).
Figure 6. Energy decomposed into components (On-board ML with CNN).
Figure 7. Processing energy decomposed into its sub-components (On-board ML with CNN).
Figure 8. Energy decomposed into its sub-components (Remote ML with CNN).
Figure 9. Inference latency.
Table 1. FlockAI monitoring probes and metrics.

ComputeProbe
Metric | Unit | Description
cpuUsage | % | Current drone CPU utilization averaged across all cores
cpuTime | ms | Total time CPU in processing state for the current flight
ioTime | ms | Total time CPU blocked waiting for IO to complete
memUsage | % | Current drone memory utilization

CommProbe
Metric | Unit | Description
kbytesIN | KB | Total incoming traffic (in KB) across selected network interfaces (i.e., wifi)
kbytesOut | KB | Total outgoing traffic (in KB) across selected network interfaces
pctsIN | # | Total number of incoming message packets
pctOut | # | Total number of outgoing message packets

FlightProbe
Metric | Unit | Description
flightTime | s | Timespan of the current flight
hoveringTime | s | Timespan drone in hovering state
takeoffTime | s | Timespan drone in takeoff state
landingTime | s | Timespan drone in landing state
inMoveTime | s | Timespan drone in moving state
inAccTime | s | Timespan drone is in moving state and velocity is fluctuating

EnergyProbe
Metric | Unit | Description
eProc | J | Energy consumed for compute tasks (continuously updated while drone in flight)
eComm | J | Energy consumed for communication tasks
eMotor | J | Energy consumed for powering drone motors
eTotal | J | Total energy consumed by drone for the current flight
batteryRem | % | Percentage of drone battery capacity remaining
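As a small illustration of how the EnergyProbe metrics relate to each other, the snippet below decomposes a flight's energy budget into per-component shares. It is a sketch that assumes the probe readings are available as a plain dictionary (the metric names follow Table 1, but the access pattern is not FlockAI's actual API) and uses made-up sample values.

```python
# Sketch: break a flight's energy budget into component shares using the
# EnergyProbe metric names of Table 1. Sample readings are made up.
readings = {"eProc": 310.0, "eComm": 45.0, "eMotor": 5120.0}  # Joules

def energy_breakdown(probe):
    """Return each component's share of the total energy consumed."""
    e_total = sum(probe.values())  # treated here as eTotal (Table 1)
    return {name: value / e_total for name, value in probe.items()}

for component, share in energy_breakdown(readings).items():
    print(f"{component}: {share:.1%}")
```

This is essentially the view presented in Figures 5–8, where the energy consumed during a flight is decomposed into its constituent components.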
Table 2. Use-case scenarios key configurations for reproducibility.

Scenario No. | Drone Controller | Autopilot | Monitoring | ML Algorithm | ML Execution
1-A | DJI Mavic2 | Bounding Box ZigZag Pilot | Flight, Compute, Comm and Energy Probes | - | -
1-B | DJI Mavic2 | Bounding Box ZigZag Pilot | Flight, Compute, Comm and Energy Probes | CNN Crowd Detection Classifier | In Place
2 | DJI Mavic2 | Bounding Box ZigZag Pilot | Flight, Compute, Comm and Energy Probes | CNN Crowd Detection Classifier | Remote
3-A | DJI Mavic2 | Bounding Box ZigZag Pilot | Flight, Compute, Comm and Energy Probes | CNN Crowd Detection Classifier | In Place
3-B | DJI Mavic2 | Bounding Box ZigZag Pilot | Flight, Compute, Comm and Energy Probes | FIGs Crowd Detection Classifier | In Place
Table 3. ML algorithm accuracy vs. performance for on-board inference.

Algorithm | Accuracy (%) | Median Inference Time (ms) | Flight Time (s)
CNN | 95 | 1361 | 1488
FIGs | 60 | 274 | 1642
Table 4. ML algorithm inference delay (in images of camera feed).

Sequence | CNN Inf. Delay (img count) | FIGs Inf. Delay (img count)
seq1 | 23 | -
seq2 | 26 | 29
seq3 | 29 | 29
seq4 | 29 | 29
seq5 | 23 | 28
seq6 | 24 | 23
seq7 | 26 | -
seq8 | 27 | -
seq9 | 26 | -
seq10 | 26 | 29
seq11 | 29 | 29
seq12 | 28 | 29
seq13 | 25 | -
seq14 | 26 | 28
seq15 | 28 | 29
seq16 | 28 | -
seq17 | 26 | 29
seq18 | 27 | 29
seq19 | 28 | -
seq20 | - | -
Table 5. Qualitative comparison of the SOTA in drone testing (M, P, and C in energy profiling denote the energy consumption attributed to the motorized, processing, and communication components, respectively).

Framework | SDK | Drone Configuration | Algorithm Testing | Algorithm Execution | Monitoring | Energy Profiling (M, P, C) | Last Updated
DronesBench | - | real drone | - | - | motorized components | X, -, - | NA
jMavSim | Java | quad-copter and hexa-copter imitations; localization and camera sensors | custom flight control with PX4 autopilot modules | remote | flight control | X, -, - | 2021
RotorS | C++ | quad-copter imitation; localization and camera sensors | interface for custom flight control and state estimation | on drone | flight control | -, -, - | 2020
MRS | C++ | drone templates (DJI, Tarot) | interface for custom flight control and state estimation | on drone | flight control | -, -, - | 2021
UTSim | C# | quad-copter imitation | interface for path planning | remote | flight control; comm overhead | -, -, - | NA
AirSim | C++, Python, Java | quad-copter imitation; localization and camera sensors; environmental conditions | interface for computer vision | remote | flight control; comm overhead | X, -, - | 2021
FANETSim | Java | quad-copter imitation; comm protocol | - | - | flight control; comm overhead | X, X, - | NA
Deepbots | Python | quad-copter imitation | reinforcement learning | on drone | inference accuracy | -, -, - | 2020
FlockAI | Python | customizable drone templates; localization and camera sensors; autopilot | interface for custom flight control and state estimation; regression, clustering and classification incl. face and object detection | on drone and remote | flight control; processing overhead; comm overhead; inference accuracy and delay | X, X, X | 2021
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
