Article

A Runway Safety System Based on Vertically Oriented Stereovision

1 Bioseco Sp. z o.o., Budowlanych 68, 80-298 Gdansk, Poland
2 Department of Mathematics and Natural Sciences, Blekinge Institute of Technology, 371 79 Karlskrona, Sweden
3 Ekoaviation, ul. Piecewska 30B/16, 80-288 Gdansk, Poland
* Author to whom correspondence should be addressed.
Sensors 2021, 21(4), 1464; https://doi.org/10.3390/s21041464
Submission received: 17 January 2021 / Revised: 13 February 2021 / Accepted: 17 February 2021 / Published: 20 February 2021
(This article belongs to the Special Issue Visual Sensor Networks for Object Detection and Tracking)

Abstract

In 2020, over 10,000 bird strikes were reported in the USA, with average repair costs exceeding $200 million annually and rising to $1.2 billion worldwide. These collisions of avifauna with airplanes pose a significant threat to human safety and wildlife. This article presents a system dedicated to monitoring the space over an airport, used to localize and identify moving objects. The solution is a stereovision based real-time bird protection system, which uses IoT and distributed computing concepts together with an advanced HMI to provide setup flexibility and usability. To provide a high degree of customization, a modified stereovision system with freely oriented optical axes is proposed. To deliver a market tailored solution affordable for small and medium size airports, a user-driven design methodology is used. The mathematical model is implemented and optimized in MATLAB. The implemented system prototype is verified in a real environment. The quantitative validation of the system performance is carried out using fixed-wing drones with GPS recorders. The results obtained prove the system's high efficiency for detection and size classification in real time, as well as a high degree of localization certainty.

1. Introduction

The first collision of a bird with an aircraft, a so-called bird strike, was reported in 1905, and in 1912 the first fatality was noted [1]. Since then, the number of cases has risen, presenting a significant threat to flight safety and causing a number of tragic accidents worldwide [2]. In 2020, in the USA alone, over 10,000 bird strikes were reported [3]. The reports show that the average bird strike rate, counted per 10,000 flights, increased from 11 in 2011 to 33 in 2017 [4]. According to the International Civil Aviation Organization (ICAO), most bird strikes occur during approach (33%), take-off (31%), and landing (26%), which means that 90% of incidents occur in the airspace under the airport's legal responsibility [4]. The administrative and legal regulations introduced by the ICAO and the European Union Aviation Safety Agency (EASA) oblige each airport to minimize the bird and wildlife strike risk under Wildlife Hazard Management (WHM) [5,6].
Currently, different techniques and methods for mitigating bird strike risk, such as ornithological observations and radar based solutions [7], are the most widespread at medium and large airports. There are also some attempts to develop vision based monitoring systems [8,9,10]. However, enhancing the automation and improving the performance of WHM systems in terms of detection efficiency and localization accuracy remain research challenges.
To meet the requirements of a market tailored product, which may be customized to any small or medium size airport, a stereovision based real-time solution embedded into the Internet of Things (IoT) and a distributed computing paradigm is proposed. A new stereovision method with freely oriented camera optical axes is modeled, evaluated, and implemented in a prototype. The real-time field verification at an airport runway using drones with GPS recorders shows the system's capacity to detect moving objects at a range of 300 m and to localize them within the required accuracy of 10%. It is proven that the proposed solution is able to classify detected objects into one of three size categories corresponding to bird size.

2. Background and Related Works

The problem of bird strikes is multifaceted and can be approached from the point of view of sustainable development, economy, law, and technology.

2.1. Non-Technological Approaches

The non-technological aspect of the presented solution could be analyzed from the perspective of bird presence near the airports, as well as the legal and financial consequences of potential bird strikes.
Increasing volumes of air traffic and the adaptation of some bird species to the living conditions in the vicinity of urban areas, which also increases their activity around airports, are the main causes of the increase in bird collisions [11]. Birds have modified their behavior and learned to tolerate the presence of both humans and man-made structures including air traffic and accompanying noises. Therefore, it is getting more difficult to control or limit their presence in the airport’s vicinity [12].
The problem with the increasing bird strike rate was noted by national and international organizations including the ICAO [13], the World Birdstrike Association (WBA), and states’ aviation authorities [14,15]. These organizations are responsible for sharing information and experiences, as well as for the development of the best practices regarding collision prevention. Currently, environmental monitoring of airports is regulated by the EASA [5] and the ICAO [6]. There are also national civil and military authorities and organizations responsible for aviation safety, like the Civil Aviation Authority [16] in Poland or the Swedish Civil Aviation Administration (Luftfartsverket) [17] in Sweden, who are responsible for wildlife risk management.
The data analysis performed by the ICAO and the EASA shows the critical areas where most of the accidents occur [18]:
  • Ninety percent of collisions are below an altitude of 150 m;
  • Sixty-one percent of events are at heights of less than 30 m;
  • Eight percent of collisions are at an altitude above 900 m and are outside the aerodrome area;
  • Seventy-five percent of accidents happen during the day.
Bird strikes involving the windshield and the engine of the aircraft are the most dangerous and the most frequent events [19]. The resulting damage costs over $200 million annually in the USA alone [20] and up to $1.2 billion worldwide [21].

2.2. Technological Approaches

So far, the most widespread solution for bird strike prevention at large and medium size airports is still visual observation of the runway. At many airports, various methods such as trained dogs, falconry, pyrotechnics, and green lasers are used as the most effective tools. Sometimes, deterrents are also installed, which emit predator calls or banging cannon sounds [20].
There have been a number of attempts to develop reliable autonomous bird detection and localization systems [10,22]. Besides the aforementioned automation of WHM at airports [23], bird preservation at wind farms [10,24] and the autonomous analysis of migratory bird behavior [25,26] are the main application fields.
Mainly, two types of sensors are used for bird detection: radar [27,28] and vision cameras [9]. One of the first bird detection systems based on radar technology was developed in the early 1950s [29,30]. Since then, radar based solutions have improved their capabilities of bird detection over wide observation areas. Because of their capacity to estimate a bird's position, velocity, and movement [31], they have become widely used at airports [32]. Radar systems for bird detection are characterized by long-range detection [33] in any weather and light conditions [34,35]. It is worth noting that radar based solutions require additional permission for emission in a frequency band that does not disrupt the airport's flight control systems [31].
The vision based solutions can be split into two groups: monoscopic and stereoscopic systems. Whereas monoscopic systems are able to detect birds [26,36] and identify particular species [37], stereoscopic systems additionally allow bird localization and size estimation [10,37,38].
The growth of CPU and GPU capabilities allows the application of advanced algorithms, which are more reliable in moving object detection [39] and identification [40,41]. The parallel enhancement of the resolution of image sensors and the advance in optics make it possible to detect and identify even small objects from far distances [10,26,39,42].
The core component of each vision based system is a detection algorithm. The bird detection in the video stream can be made using motion detection [22,25,43], AI based identification [26,44,45,46,47,48,49], or a combination of both [10,38]. Whereas the motion detection algorithms allow the reduction of the computational complexity of the safety system [50], the application of AI methods allows bird identification [48,49] and the reduction of false positive rates [10]. From the AI based solutions, the Convolutional Neural Networks (CNNs) [51,52,53] outperform other methods, for instance the Haar feature based cascade classifier [45,54] or Long Short-Term Memory (LSTM) [48]. The most recent studies reported that dense CNN [54] shows good feature extraction capabilities allowing for bird identification [49,55], and after 100 epochs, the system reaches near 99% accuracy. Other CNNs, implemented in distributed computing and IoT paradigms, allow the system to ensure 99.8% precision with 99.0% recall in bird identification with real-time performance [10]. This allows the development of a reliable vision based safety system at airports.
There are several examples of vision based systems allowing WHM at airports. Chen et al. proposed an automatic bird-blocking network and intelligent bird-repelling system [56]. The proposed algorithm with the use of IoT technology allowed automatic repelling, which minimizes the habituation effect [56]. The company Pharovision developed a commercially available system that is based on the infrared camera and allows scanning of the ground and the air, day and night [9]. Using the FLIR and CCTV cameras, their system detects and tracks even a single bird from up to 7 km [9]. Another complex system allowing multiple bird detections and repelling is provided by Volacom [8]. Detection is supported by thermal and stereovision cameras, which in real-time scan the airport’s vicinity for flying objects [8]. An additional acoustic module focuses a deflection sound signal at the targeted birds, to deter them up to 300 feet [8].
After detection, an automatic repellent method can be applied to minimize the bird strike risk. One of the first repelling methods tested in various scenarios was pulsing light [23]. This method was successfully used at an airport [57], at other man-made structures [58], and at wind farms [10]. Since 2015, pulsing light at 2 Hz in the landing light system has been recommended by the Federal Aviation Administration (FAA) and successfully used in airplanes and helicopters as a tool allowing a substantial drop in bird collisions [59]. Another solution, mounted at airports near the runway, is large screens displaying a specific visual stimulus [60].
To deter a bird, a loud sound can also be used. Bishop et al. [61] showed that high frequency sound in the ultrasonic range above 20 kHz is ineffective and therefore has no biological basis for its use. In [62], the authors combined the effect of sound between 90 dB and 135 dB and a frequency of 2 kHz with white light. To reduce the habituation effect of the repellent method, the particular deterrent method used should vary and be implemented as rarely as possible [61].

3. Problem Statement, Objectives, and Main Contributions

As the survey of related works shows, there is a need for a reliable and cost-effective system mitigating collisions of avifauna with airplanes around airport runways. The biggest drawback of existing solutions, mostly based on stereovision, is their essentially horizontally oriented Field of View (FoV), which limits the observation area and therefore requires multiple installations, which are heavy and costly.
The main objective of the paper is to determine the hardware and software structures of a stereovision based detection system for monitoring space over the airport runway to identify, localize, and classify moving objects. Such a highly reliable real-time system has to assure a wide range observation area without compromising its size and price, whilst also providing a wide range of customizability.
The proposed hardware configuration is composed of two cameras coupled in stereovision mode, wherein the first and second cameras are oriented with their optical axes at an angle α to the baseline, where α is substantially a non-right angle. The cameras can be equally rotated to any direction to cover the selected observation area, which can be horizontal, vertical, or even oblique. The system software configuration is based on IoT technology, the distributed computing concept, and deep learning algorithms to ensure real-time operation.
The user-driven design methodology is used to provide a market tailored solution that may be customized to any small or medium size airport. The proposed solution was modeled and optimized using MATLAB software. The system prototype was installed in a real environment and verified using fixed-wing drones with GPS recorders.

4. System Design

The proposed avifauna monitoring system for runways was designed based on the User-Driven Design (UDD) methodology presented in [63,64]. Besides airport stakeholders and designers, the design process involved several authorities such as ornithologists and experts in aviation laws. Furthermore, future users, who contributed to the design, were falconers, airport security and safety staff, pilots, maintenance service workers, and environmental workers.
Such a system is undoubtedly needed to minimize the collision risk because of:
  • passengers and staff safety;
  • wildlife protection;
  • financial consequences related to damages and delays;
  • the legal, administrative, and marketing consequences of a potential catastrophe.
To achieve the listed goals, the designed system needs to fulfill the following functionalities and constraints:
  • to detect and localize suspected moving objects within the customizable safe zones and to do this with high reliability and low positioning uncertainty;
  • to distinguish individual birds, multiple birds, or entire flocks simultaneously;
  • to work in real time with a very short detection latency;
  • to ensure that bird risk management has no side effects;
  • to eliminate the human factor by autonomous monitoring and repelling methods;
  • to ensure the affordability of the system including that the system price, cost of installation, and cost of maintenance are acceptable for small airports;
  • to facilitate and automate the reporting process recommended by the ICAO and the EASA regulations.
General and itemized functionalities and particular related constraints along with selected technologies and algorithms are summarized in Table 1. The motivation analysis of the technologies and algorithm selection is beyond the scope of this paper. However, it can be observed that the system is based on stereovision and the distributed computing and IoT concepts. The chosen algorithms belong to the machine learning and AI categories. The details of the applied solutions are presented in Section 5 and Section 6.

5. Modeling

The system conceptualization is presented in Figure 1. Since there is a need to cover a wide observation space, the system consists of several monitoring modules and other subsystems inter-connected via the network, which becomes a central component of the proposed structure. On the network’s right side, there are components allowing the user to interact with the system. On its left side, there are the control unit along with the sensors and actuators responsible for data acquisition and system reactions. The system is based on the IoT and distributed computing concepts [50], facilitating communication between modules and providing easy access to the storage data through the intuitive GUI.
The system can be deployed along the runway and consists of the control system, monitoring modules, and repellent part. Each monitoring module includes the stereovision sensing unit and Local Processing Unit (LPU) responsible for motion detection, object identification, and localization. Data from all monitoring modules are sent to the control system, where the detected object is cropped from the picture and processed, and a decision is made about using the repellent part. The control system handles the connection with several monitoring modules, repellent parts, the database, and the Human Machine Interface (HMI).
In the database, the data of the detected events, such as the bird's position, estimated flight path, images, and movies, as well as information regarding any actions undertaken, are collected. Archived data are accessible through the HMI, such as the web and mobile applications. The HMI can also be used to manually activate the repellent part and to maintain and test the system.

5.1. Model of the Modified Stereovision Method

A stereovision based monitoring module oversees a selected runway zone. The proposed new solution ensures that the observation space is freely selectable through the adjustment of the cameras' optical axes by changing their orientation angle α; see Figure 2.
In classical stereovision, the cameras’ optical axes are perpendicular to the baseline, B, where the baseline is the line segment connecting the cameras’ centers. Then, the baseline and the cameras’ image planes are placed on the same Euclidean plane; see Figure 2a. In the proposed modified stereovision method, the cameras’ optical axes are set at an angle α with respect to the baseline, in such a way that the cameras’ image planes are placed on two parallel planes, as shown in Figure 2b. The cameras’ alignment is presented in Figure 2c [65,66].
To understand the extraction of the 3D features, the coordinates of the modified stereovision system can be transformed using some geometric features. The transformation is carried out in relation to the first camera C1 (see Figure 3) in such a way that the coordinates and the scene are shifted by the rotation angle α. Using this geometrical transformation, the modified mathematical model of the method can be derived using the variables and parameters defined in Table 2.

5.1.1. Distance Measurement Using Modified Stereovision

The distance, D, from the first camera C1 to the plane of the object, wherein the plane of the object is a plane perpendicular to the optical axes of the cameras, is equal to the distance $D_k$ from the object to the baseline, $D = D_k$. From basic geometry, one may also find that $D = D_b - d_1$ and $B\cos\alpha = b_1 + b_2$, where $b_1$ and $b_2$ are defined as:

$$ b_1 = (D_b - d_1)\tan\varphi_1, \qquad b_2 = (D_b + d_2)\tan\varphi_2 \quad (1) $$
Then, after substitution:

$$ B\cos\alpha = (D_b - d_1)\tan\varphi_1 + (D_b + d_2)\tan\varphi_2 \quad (2) $$
Knowing that $d_2 = B\sin\alpha - d_1$:

$$ B\cos\alpha = (D_b - d_1)\tan\varphi_1 + (D_b + B\sin\alpha - d_1)\tan\varphi_2 \quad (3) $$
which, with $D = D_b - d_1$, can be simplified to:

$$ B\cos\alpha = D\tan\varphi_1 + (D + B\sin\alpha)\tan\varphi_2 \quad (4) $$
From this, the distance D can be calculated as:

$$ D = \frac{B\,(\cos\alpha - \sin\alpha\tan\varphi_2)}{\tan\varphi_1 + \tan\varphi_2} \quad (5) $$
The angles $\varphi_1$ and $\varphi_2$ may be found from the relationships:

$$ \frac{y_1}{y_0/2} = \frac{\tan\varphi_1}{\tan(\varphi_0/2)} \quad (6) $$

$$ \frac{y_2}{y_0/2} = \frac{\tan\varphi_2}{\tan(\varphi_0/2)} \quad (7) $$
Then, the distance D can be defined as:

$$ D = \frac{B\cos\alpha\,y_0}{2\tan(\varphi_0/2)\,(y_1 - y_2)} + \frac{B\sin\alpha\,y_2}{y_1 - y_2} \quad (8) $$
which, for $\alpha = 0$, gives the distance for classical stereovision:

$$ D = \frac{B\,y_0}{2\tan(\varphi_0/2)\,(y_1 - y_2)} \quad (9) $$
Knowing the distance D and the angle $\varphi_0$, the object altitude can be calculated using the formula:

$$ H = D\,\cot\!\left(\frac{\varphi_0}{2} + \varphi_2\right) \quad (10) $$

where $\varphi_2$ can be found from (7).
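For illustration, a minimal Python sketch of the localization step defined by (8) and (10) is given below. The function name and the default parameter values (B = 1 m, α = φ0/2 = 24.4°, y0 = 1440 px, φ0 = 48.8°, taken from the prototype described in Section 6) are assumptions made for this sketch and not the authors' MATLAB implementation.

```python
import math

def localize(y1, y2, B=1.0, alpha_deg=24.4, y0=1440, fov_deg=48.8):
    """Estimate the distance D, Eq. (8), and the altitude H, Eq. (10),
    of an object seen at vertical pixel positions y1 and y2."""
    alpha = math.radians(alpha_deg)
    half_fov = math.radians(fov_deg) / 2.0
    disparity = y1 - y2                      # pixel difference (y1 - y2)
    if disparity <= 0:
        raise ValueError("the object must produce a positive pixel difference")
    # Eq. (8): distance from camera C1 to the plane of the object
    D = (B * math.cos(alpha) * y0 / (2.0 * math.tan(half_fov) * disparity)
         + B * math.sin(alpha) * y2 / disparity)
    # Eq. (7): viewing angle of the object in camera C2
    phi2 = math.atan(2.0 * y2 * math.tan(half_fov) / y0)
    # Eq. (10): object altitude, with cot(x) written as 1/tan(x)
    H = D / math.tan(half_fov + phi2)
    return D, H
```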

5.1.2. Quantization Uncertainty of the Distance Measurement

The distance D defined by (8) is a non-linear discrete function of $y_0$, B, $(y_1 - y_2)$, $y_2$, and $\varphi_0$. The measurement uncertainty, $\Delta D$, determined by the exact differential method [10,67,68], can be expressed by:

$$ \Delta D = \pm\left(\frac{y_0\cos\alpha}{2\tan(\varphi_0/2)} + y_2\sin\alpha\right)\frac{B}{(y_1 - y_2)^2} = \pm\frac{D}{y_1 - y_2} \quad (11) $$
The quantization uncertainty $\Delta D$ is a discrete function of $(y_1 - y_2) \in \mathbb{N}^+$ and $y_2 \in \mathbb{N}^+$. Since $\Delta D$ depends also on the value of $y_2$, the uncertainty increases not only with the distance D, but also with the object altitude H. The quantization uncertainty of H depends on the distance estimation and may be considered per analogiam.
Figure 4 shows how, for a varying pixel difference $(y_1 - y_2)$, the quantized value of the distance measurement D and its uncertainty $\Delta D$ depend on the value of $y_2$, which is a measure of the object elevation. The simulations for the highest altitude, $y_{2max}$, and the lowest altitude, $y_{2min}$, were performed for $y_0 = 1440$ px and $\varphi_0 = 48.8°$, corresponding to the off-the-shelf IMX219 camera with a focal length of f = 3 mm, and for a large baseline of B = 1 m [10].
Figure 5 and Figure 6 illustrate how the quantized distance measurement D and its quantization uncertainty $\Delta D$, respectively, depend on the pixel difference, $(y_1 - y_2)$, and the object position on the C2 image plane, $y_2$. The simulation was done within the range of 300 m. It shows that, in the worst case, the quantization uncertainty $\Delta D$ can reach 70 m, which corresponds to a measurement uncertainty of ±35 m.
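For completeness, the quantization uncertainty of (11) can be evaluated with a short companion function; as above, this is only a sketch with illustrative default parameters, not the authors' simulation code.

```python
import math

def quantization_uncertainty(y1, y2, B=1.0, alpha_deg=24.4, y0=1440, fov_deg=48.8):
    """Quantization uncertainty of the distance measurement, Eq. (11)."""
    alpha = math.radians(alpha_deg)
    half_fov = math.radians(fov_deg) / 2.0
    disparity = y1 - y2
    # Eq. (11): +/- bound of the distance quantization error
    return (y0 * math.cos(alpha) / (2.0 * math.tan(half_fov))
            + y2 * math.sin(alpha)) * B / disparity ** 2
```

Evaluating this function over the admissible grid of $(y_1 - y_2)$ and $y_2$ values reproduces the kind of surfaces shown in Figure 5 and Figure 6.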

5.2. System Processing

Figure 7 presents the processing architecture of the system, which is based on the distributed computing and IoT paradigms. The proposed architecture supported by a stable Ethernet connection enables reliable real-time communication between the monitoring modules where images are collected and the control unit where the measurement data are processed.
The monitoring modules with the on-board LPU provide the video streaming from the stereovision set consisting of two cameras. The flying bird identification is based on the motion detection and object identification algorithms presented in the authors' previous studies [10]. The CNN distinguishes bird-like objects from sky artifacts such as clouds, snow, rain, etc. When a detected moving object is identified as a bird, a warning trigger is activated, and the information from the motion detection algorithm, including the estimated object's center coordinates, $x_c$ and $y_c$, is sent to the 3D localization unit. The optimization procedure of the detection and identification algorithm was described in the authors' previous work [10].
Via Ethernet, the control unit receives information including the object's 3D position along with the image miniature and object contour [10]. In the data filtering block, a statistical analysis is performed to conclusively distinguish birds from other bird-like objects such as drones, airplanes, and insects. Then, based on the object width, height, and contour received from the motion detection algorithm, as well as the estimated distance calculated by the localization algorithm, the size classification algorithm estimates the object's size to sort it into one of three categories: small, medium, or large [10]. After classification, a notification is provided via the HMI to the users' apps and archived in a local database. The deterrence module can be activated if needed.
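To make the data flow concrete, a hypothetical sketch of the event record that a monitoring module could send to the control unit is given below; the field names and types are assumptions made for illustration, since the paper does not specify the message format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DetectionEvent:
    """Illustrative record sent from a monitoring module to the control unit."""
    module_id: int                      # monitoring module that reported the event
    timestamp: float                    # acquisition time of the stereo frame pair
    distance_m: float                   # D estimated from Eq. (8)
    height_m: float                     # H estimated from Eq. (10)
    width_px: int                       # p_W, object width on the image plane
    height_px: int                      # p_H, object height on the image plane
    contour: List[Tuple[int, int]] = field(default_factory=list)  # object contour
    thumbnail: bytes = b""              # cropped image miniature
```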

5.3. Size Classification

Knowing the distance D and the size of a detected object on the image plane, $p_W$ (px) and $p_H$ (px) [10], the bird's wingspan $P_W$ (m) and height $P_H$ (m) can be estimated from:

$$ P_W = \frac{D \cdot p_W \cdot SIA}{f \cdot y_0}, \qquad P_H = \frac{D \cdot p_H \cdot SIA}{f \cdot y_0} \quad (12) $$

where SIA is the camera's Sensor Image Area.
Previous studies showed that the approximation of the bird's size with an isosceles triangle enables classification of its size as small, medium, or large [10]. Figure 8 illustrates how the bird size can be estimated. The triangle base corresponds to the bird's wingspan, $p_W$ (px), and the height of the triangle, $p_H$ (px), denotes the bird's height. Then, the triangle area $O_{approx}$ is a measure of the bird's size.
Since the representation of an object on the image depends significantly on the object's distance from the monitoring modules, the size classification accuracy depends on the quantization error. The uncertainties of the measurement of $P_W$ as a function of the distance, for typical small, medium, and large objects, are presented in Figure 9. Within the requested distance ranges, there are no overlaps between the shown classes; however, fuzziness resulting from the distance measurement uncertainty can be observed for birds of sizes close to the inter-category boundaries. The presented simulations were performed for the parameters selected in Section 5.1, with the SIA set to 3.76 mm, which corresponds to the Sony IMX219 sensor. The calculations were done for average birds representative of each class, i.e., wingspans of 1 m, 1.32 m, and 1.67 m for small, medium, and large, respectively. The measures of $P_H$ and $O_{approx}$ show similar uncertainty and may be considered per analogiam.
The estimate of the object area, $O_{approx}$, is used to classify the birds [10] into three categories, with the boundary values $O_{b1}$ and $O_{b2}$ defined based on ornithologists' suggestions. The common buzzard and the red kite were selected as boundary representatives of the medium and large objects, respectively. Therefore, each object smaller than $O_{b1} = 0.22$ m², corresponding to the size of the common buzzard, is considered small, and each object bigger than or equal to $O_{b2} = 0.48$ m², corresponding to the size of the red kite, is considered large. The boundary values $O_{b1} = 0.22$ m² and $O_{b2} = 0.48$ m² were calculated for wingspans of 1.1 m and 1.45 m and heights of 0.4 m and 0.66 m for the common buzzard and the red kite [69], respectively.
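A minimal sketch of the size estimation of (12) and the three-way classification rule is given below, assuming the isosceles-triangle area measure $O_{approx} = \frac{1}{2} P_W P_H$ that is implied by the boundary value calculations above; the function names and nominal parameter defaults are illustrative.

```python
def estimate_size(D, p_w, p_h, f_mm=3.0, sia_mm=3.76, y0=1440):
    """Wingspan P_W and height P_H, Eq. (12), and the triangle-area size measure."""
    P_W = D * p_w * sia_mm / (f_mm * y0)     # metres, since SIA/f is dimensionless
    P_H = D * p_h * sia_mm / (f_mm * y0)
    O_approx = 0.5 * P_W * P_H               # isosceles-triangle approximation
    return P_W, P_H, O_approx

def classify(O_approx, O_b1=0.22, O_b2=0.48):
    """Sort an object into one of three size categories (boundaries in m^2)."""
    if O_approx < O_b1:
        return "small"
    if O_approx < O_b2:
        return "medium"
    return "large"
```

For instance, classify(estimate_size(150, p_w, p_h)[2]) returns the category of an object whose pixel dimensions p_w and p_h were measured at a distance of 150 m.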

6. Prototyping

This section first considers the optimization of the parameters within the constraints stated in Section 4; then, the prototype of the system is presented.

6.1. Parameter Optimization

From (8) and Figure 10, it can be seen that the core structural parameters of the proposed method are the baseline, B, the image resolution, $y_0$, and the FoV, $\varphi_0$; therefore, the selection of their values is crucial.
A camera image resolution of $y_0 = 1440$ px was selected due to the limitation of the computational complexity of the applied algorithms and the corresponding capabilities of the local processing units. The camera's focal length f and its FoV, defined by $\varphi_0$, are interdependent. Previous studies [10] showed that the maximum possible FoV can be realized using the IMX219 with a focal length of f = 3 mm and an FoV of $\varphi_0 = 48.8°$.
As a rule of thumb, spatial vision is correct when the baseline is between 1/100 and 1/30 of the system range [70]. However, due to technical reasons, the baseline should not exceed 1.5 m. To select an acceptable baseline length, an evaluation of the distance measurement and its uncertainty was done. The simulation results of D and $\Delta D$ for an object image detected at the top and at the bottom of the camera matrix were collected for B = {0.75 m, 1 m, 1.25 m, 1.5 m}; see Figure 10. From their analysis, it can be concluded that in the worst case at 300 m, i.e., $(y_1 - y_2) = 4$ px and $y_2 = 1440$ px, the measurement quantization uncertainty is $\Delta D = \pm 81$ m for B = 1 m and $\Delta D = \pm 61$ m for B = 1.5 m. Therefore, the stereoscopic baseline B = 1 m was selected as fulfilling the requirement for 10% localization accuracy with the shortest baseline B.
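As a usage example, the sweep below reuses the localize() and quantization_uncertainty() sketches from Section 5.1 for the worst-case pixel configuration quoted above. Since those sketches rely on their own assumed defaults for the rotation angle and the geometry of Figure 10, the printout illustrates the trend that a longer baseline reduces $\Delta D$ rather than reproducing the exact ±81 m and ±61 m figures.

```python
# Worst-case configuration quoted in the text: (y1 - y2) = 4 px, y2 = 1440 px.
for B in (0.75, 1.0, 1.25, 1.5):
    y2 = 1440
    y1 = y2 + 4
    D, _ = localize(y1, y2, B=B)
    dD = quantization_uncertainty(y1, y2, B=B)
    print(f"B = {B:.2f} m:  D ~ {D:.0f} m,  dD ~ +/-{dD:.0f} m")
```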

6.2. Hardware Prototyping of the Monitoring Modules

The prototype of the monitoring modules is presented in Figure 11, and the installation spot is shown in Figure 12. Each module was composed of two IMX219 cameras with an f = 3 mm lens, having a vertical FoV of $\varphi_0 = 48.8°$ and allowing image capture with a resolution of $y_0 = 1440$ px. To optimize the monitoring space, the rotation angle of the system (the optical axes of both detection cameras) was set to α = $\varphi_0$/2. The computational core of the LPU was an ARM v8.2 processor with 8 GB RAM, 384 CUDA cores, and 48 Tensor cores for the AI based object identification algorithm. The monitoring modules were equipped with a switch allowing the IoT configuration. To ensure low weight, the system was enclosed in an acrylic cover.
The prototype of the system included an auxiliary recording camera allowing real-time video streaming and recording for verification and validation of the detection system. The configuration of three monitoring modules allowed monitoring of the area within a field of view of φ = 180°, as shown in Figure 13, where the small dead zones near the construction could be neglected as having no impact on the detection efficiency.
The control system ran on a Dell database server equipped with a 3.6 GHz Xeon X5687 processor and 8 GB of RAM. Two 8 TB hard drives were used as memory storage. The connection between the monitoring modules and the control system was provided by the Ethernet protocol. The monitoring modules were powered by safety extra-low voltage.

7. Validation and Testing

The system prototype was installed on a dedicated stand in a test field, a flat open space near the runway of Reymont Airport in Lodz, Poland (IATA: LCJ, ICAO: EPLL), as shown in Figure 11 and Figure 12. The prototype was equipped with three monitoring modules and one control unit. The mutual placement of the stereoscopic cameras was set manually based on a fixed distant object. The positions of the images were determined manually using the handle transformation in the GIMP software. The system reported approaches by birds in flight; an example of an observed dangerous approach between a bird and an airplane is presented in Figure 14.
For the quantitative evaluation of the system performance in terms of detection efficiency and localization precision, bird-like drones equipped with GPS recorders were used. Two fixed-wing drones and one quadrocopter, representing small, medium, and large objects, are presented in Figure 15, and their dimensions provided by the manufacturers in terms of wingspan, height, and total area are shown in Table 3. The drones were programmed to fly along a given path within the system's vicinity.
To evaluate the system detection efficiency, test flights of the three drones were performed. The drones flew at a random speed and altitude within the desired system detection range. The system detected the small drone 1565 times, the medium drone 2248 times, and the large drone 2875 times during the 3 min, 12 min, and 10 min flights, respectively. The detection efficiency was calculated as the ratio of the time when the drone was visible to the monitoring module to the total flight time within the defined range. Table 4 summarizes the results, which prove that the desired efficiency was achieved within the requested detection range defined in Table 1.
To quantitatively evaluate the developed system’s ability to carry out 3D object localization, it was tested in nine different scenarios defined in Table 5. The drones were turned on in autopilot mode using the remote controller, and they flew around the module at a predefined approximately constant distance and altitude, with different distances D and altitudes H used for different scenarios. The subscripts S, M, and L included in the scenarios listed with Roman numerals denote small, medium, and large drones, respectively. The average speed of the small, medium, and large drones during each test was 4.0 m/s, 20.0 m/s, and 15.0 m/s, respectively.
For each test flight, the mean distance, $\overline{D}$, the mean height, $\overline{H}$, and the corresponding standard deviations, $\sigma_{\overline{D}}$ and $\sigma_{\overline{H}}$, were estimated for the GPS and detection module data. The GPS measurements were treated as reference values for the analysis of the system uncertainty presented in the last four columns, where $\overline{\Delta D_k}$ and $\overline{\Delta H}$ denote the mean absolute accuracy of the distance and height measurements, respectively, and $\overline{\delta D_k}$ and $\overline{\delta H}$ denote the corresponding relative accuracies.
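One plausible reading of these per-flight statistics is sketched below for a single scenario; the authors' exact aggregation may differ, so the function and variable names are illustrative only.

```python
import statistics

def flight_statistics(system_d, gps_d):
    """Mean distance, its spread, and absolute/relative accuracy vs. the GPS reference."""
    d_mean = statistics.mean(system_d)        # mean of the system's distance samples
    d_std = statistics.stdev(system_d)        # standard deviation of those samples
    ref_mean = statistics.mean(gps_d)         # GPS reference mean distance
    abs_acc = abs(d_mean - ref_mean)          # mean absolute accuracy
    rel_acc = 100.0 * abs_acc / ref_mean      # relative accuracy in percent
    return d_mean, d_std, abs_acc, rel_acc
```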
Examples of the graphical illustration of the test results are presented in Figure 16 and Figure 17 for the small, medium, and large drones. The flight scenarios were chosen to show the system capabilities at the detection range borders for each drone. The green and red dots represent localization measurement samples from the GPS and from the system, respectively. The ellipses illustrate the measurement statistics: their center coordinates, $X(\overline{D}, \overline{H})$, correspond to the mean values of the distance and height measurements, their semi-major axes depict the standard deviation $\sigma_{\overline{D}}$, and their semi-minor axes correspond to the standard deviation $\sigma_{\overline{H}}$.
At long distances of more than 200 m, the quantization error of a single measurement was greater than the desired localization precision. However, averaging over many measurements reduces the quantization error to a level that meets the user requirements; see Table 1. The mean values of the distance and height uncertainty dropped below the expected 10% even at far distances of more than 300 m, which is above the quantization uncertainty of a single distance measurement; see Figure 6. The system detection range and localization precision depend on the object size. The system was able to detect the small drone up to 100 m, the medium drone up to 200 m, and the large drone up to 300 m.
Table 3 includes the test drones' data sheet information, which was treated as the reference. Table 6 shows the test results for the size estimates and their quality, along with the bird classification results, which are presented in the last three columns of the table. For each scenario defined in Table 5, the drones' width, $\overline{P_w}$, height, $\overline{P_h}$, and size, $\overline{O}_{approx}$, were estimated from the images, and then the variances of the estimates, $\sigma_{\overline{P_w}}$, $\sigma_{\overline{P_h}}$, and $\sigma_{\overline{O}_{approx}}$, were calculated. Despite relatively high estimation uncertainties, the system was capable of classifying the drones into their correct categories.
Object classification into one of the three categories, small, medium, and large, was based on the estimate $O_{approx}$ and defined heuristically. The selected boundaries between the categories were $O_{b1} = 0.22$ m² between small and medium and $O_{b2} = 0.48$ m² between medium and large, as introduced in Section 5.3. The test results proved that, within the desired ranges, the system classified small and large objects with a reliability of 99.6% and 91.4%, respectively. The classification reliability of medium objects was 65.4%. Nevertheless, medium objects were more likely to be classified as large (25.4%) rather than small (9.0%), which errs on the safe side from an application point of view. It is worth noting that the classification of objects should be treated as a fuzzy categorization, because the real sizes of birds of the same species vary. Furthermore, the size estimates are biased by measurement uncertainties. Nevertheless, the test results confirmed that the average size $\overline{O}_{approx}$ calculated for each scenario allowed the object size to be evaluated correctly in each case.

8. Discussion, Conclusions, and Future Work

This work proposes a stereovision based detection system for monitoring the space near airports to identify and localize moving objects. The system is a reliable and cost-effective solution for the prevention of bird strikes around airport runways.
A new stereovision structure is proposed, composed of two cameras coupled in stereovision mode, with the cameras' optical axes able to be freely oriented to cover the desired monitoring space from one installation spot within the cameras' common FoV. A set of detection modules could extend the system observation FoV up to 360°. One can estimate that a medium size airport with a 2600 m runway can be covered using up to seven systems, each equipped with eight monitoring modules. The system software configuration, based on the distributed computing concept, powered by machine learning algorithms, and embedded in the IoT paradigm, ensures real-time performance. Apart from detecting moving objects, the system is capable of localizing and classifying them based on their size.
To make the system desirable and flexible for different airport sizes, the user-driven design was applied, which included many actors such as airport stakeholders, local and ecological authorities, designers, and future users. This has driven the design solution into a customizable system, which ensures cost-effectiveness without compromising system reliability.
The system was modeled and optimized using MATLAB software. The evaluation method included the analysis of the localization uncertainty and enabled system optimization. The quantitative evaluation of the system performance showed that the proposed solution meets the desired requirements regarding detection range and localization precision.
The modeled system was implemented, prototyped, and then installed in a test field, a flat open space near the runway of Reymont Airport in Lodz, Poland. To validate the system performance, two fixed-wing drones with wingspans of 2.0 m and 1.2 m and one quadrocopter of 0.24 m were applied, imitating large, medium, and small birds, respectively. Nine test scenarios, three for each device, were applied to prove the system's localization and size estimation accuracy, as well as its detection efficiency and ability to correctly classify the objects.
The tests proved that the system detects small objects within a range of 100 m with an efficiency of 94%. Medium objects can be detected within a range of 250 m with an efficiency of 93%, whereas large objects were detected within a range of 300 m with an efficiency of 93%; see Table 4.
The estimates of the localization uncertainty for both distance and height measurements varied from 0.7% up to 9.7%, but did not exceed the required 10%, as shown in Table 5.
Estimates of the drone size, which is used for object classification, were made for all nine scenarios; see Table 6. The test results proved that the system is capable of distinguishing small and large objects with a reliability of 99.6% and 91.4%, respectively. The classification reliability of medium objects was 65.4%. The results show that the approximated sizes were overestimated compared to the reference ones. However, this type of error is not critical, and the applied classification algorithm is still able to sort the objects into the correct categories. Nevertheless, the test results confirmed that, by means of statistics, it is possible to enhance the object size estimation.
The system validation proved that the system implements all the desired functionalities and fulfills all the regulatory requirements and therefore can be used for standalone autonomous bird monitoring, complementing ornithologists’ work to minimize the risk of bird collisions with airplanes.
Among other future developments, a tracking algorithm to anticipate bird flight paths could be implemented to improve system reliability and localization accuracy. The implementation of Multiple Hypothesis Tracking (MHT), Kalman filter, or Probability Hypothesis Density (PHD) are considered as possible solutions. Moreover, the classification could be extended to include the recognition of bird species, which could improve long-term wildlife monitoring. Other possible work may also concern the detection of mammals or Foreign Object Debris (FOD) within an airport’s proximity.
Furthermore, ornithological long-term observations should be performed to verify the system performance in terms of bird detection efficiency and false positive rate. These observations could also validate the system performance in overcast weather conditions, which would be required before its implementation at airports in autonomous operational mode.
The precise calibration of a large-base stereovision system is complex and may cause a large positioning uncertainty [74]. Therefore, our future work will focus on an autonomous in situ calibration of the system.
Aviation safety at airports requires also the detection of FOD, as well as land mammals. The monitoring area of the proposed detection system could be extended to cover the whole runway.
Future work may also concern the deployment of a multi-module configuration along an airport’s runway to ensure full coverage of the skies within an airport’s legal jurisdiction.

Author Contributions

Conceptualization, D.G., D.D., D.K., A.J., and W.J.K.; methodology, D.G., D.D., and W.J.K.; software, D.G., D.D., and D.K.; validation, D.G. and D.D.; formal analysis, D.G., D.D., and W.J.K.; investigation, D.G. and W.J.K.; data curation, D.D.; original draft preparation, M.S., D.G., D.D., and W.J.K.; review and editing, W.J.K.; visualization, D.G., D.D., and D.K.; supervision, W.J.K.; project administration, A.J.; funding acquisition, D.G., A.J., and W.J.K. All authors read and agreed to the published version of the manuscript.

Funding

This research was conducted within grant “Carrying out research and development works necessary to develop a new autonomous AIRPORT FAUNA MONITORING SYSTEM (AFMS) reducing the number of collisions between aircraft and birds and mammals” (No. POIR.01.01.01-00-0020/19) from The National Centre for Research and Development of Poland.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The presented data are accessible to authorized staff according to local regulations.

Acknowledgments

The authors would like to acknowledge Sandy Hamilton’s support to improve this article. In addition, the enormous participation of all Bioseco employees in the project’s implementation should be emphasized.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ICAO	International Civil Aviation Organization
EASA	European Union Aviation Safety Agency
WHM	Wildlife Hazard Management
IoT	Internet of Things
WBA	World Birdstrike Association
CNN	Convolutional Neural Network
UDD	User-Driven Design
FoV	Field of View
SIA	Sensor Image Area
HMI	Human Machine Interface
LPU	Local Processing Unit

References

  1. When Birds Strike. Available online: https://www.historynet.com/when-birds-strike.htm (accessed on 23 December 2020).
  2. Meer, A. Bird-strike aircraft accidents and their prevention. Asian J. Sci. Technol. 2019, 10, 9251–9257. [Google Scholar]
  3. FAA Wildlife Strike Database. Available online: https://wildlife.faa.gov/home (accessed on 23 December 2020).
  4. European Union Aviation Safety Agency. Available online: https://www.easa.europa.eu/landing (accessed on 23 December 2020).
  5. EASA. Certification Memorandum; Technical Report 01; European Union Aviation Safety Agency: Cologne, Germany, 2019.
  6. ICAO. Provisions for Wildlife Strikes Hazard Reduction in Aerodromes; ICAO: Montreal, QC, Canada, 2019. [Google Scholar]
  7. Plonczkier, P.; Simms, I.C. Radar monitoring of migrating pink-footed geese: Behavioural responses to offshore wind farm development. J. Appl. Ecol. 2012, 49, 1187–1194. [Google Scholar] [CrossRef]
  8. Volacom|Breakthrough Bird Control Solutions. Available online: https://volacom.com/ (accessed on 23 December 2020).
  9. Pharovision-Interceptor. Available online: https://www.pharovision.com/index.php/payloads/interceptor (accessed on 23 December 2020).
  10. Gradolewski, D.; Dziak, D.; Martynow, M.; Kaniecki, D.; Szurlej-Kielanska, A.; Jaworski, A.; Kulesza, W.J. Comprehensive Bird Preservation at Wind Farms. Sensors 2021, 21, 267. [Google Scholar] [CrossRef]
  11. Bradbeer, D.R.; Rosenquist, C.; Christensen, T.K.; Fox, A.D. Crowded skies: Conflicts between expanding goose populations and aviation safety. Ambio 2017, 46, 290–300. [Google Scholar] [CrossRef] [Green Version]
  12. Skakuj, M. Zagrożenia środowiskowe w lotnictwie i zmiany klimatyczne. Pr. Nauk. Politech. Warszawskiej. Transp. 2018, z. 123, 175–188. [Google Scholar]
  13. Dolbeer, R.A. The History of Wildlife Strikes and Management at Airports; USDA National Wildlife Research Center—Staff Publications: Fort Collins, CO, USA, 2013. [Google Scholar]
  14. Wang, Y. Recent development of ICAO on wildlife strike hazard reduction. In Proceedings of the World Birdstrike Association Conference, Warsaw, Poland, 19–21 November 2018; pp. 1–4. [Google Scholar]
  15. Stefanioros, V.; Haya-Leiva, S.; Bernandersson, M. EASA Wildlife Strike Prevention Update. In Proceedings of the World Birdstrike Association Conference, Warsaw, Poland, 19–21 November 2018; pp. 1–4. [Google Scholar]
  16. Civil Aviation Authority. Available online: https://ulc.gov.pl/en (accessed on 8 January 2021).
  17. Air Traffic Control and Associated Services. Available online: https://lfv.se/en (accessed on 8 January 2021).
  18. International Civil Aviation Organization. Airport Services Manual. Part 3: Wildlife Control and Reduction, 4th ed.; International Civil Aviation Organization: Montreal, QC, Canada, 2012. [Google Scholar]
  19. Strategies for Prevention of Bird-Strike Events. Available online: https://www.boeing.com/commercial/aeromagazine/articles/2011_q3/4/ (accessed on 8 January 2021).
  20. Metz, I.C.; Ellerbroek, J.; Mühlhausen, T.; Kügler, D.; Hoekstra, J.M. The Bird Strike Challenge. Aerospace 2020, 7, 26. [Google Scholar] [CrossRef] [Green Version]
  21. Allan, J.; Orosz, A. The Costs of Birdstrikes to Commercial Aviation. In Proceedings of the 2001 Bird Strike Committee-USA/Canada, Third Joint Annual Meeting, Calgary, AB, Canada, 19–21 August 2001. [Google Scholar]
  22. Verstraeten, W.W.; Vermeulen, B.; Stuckens, J.; Lhermitte, S.; Van der Zande, D.; Van Ranst, M.; Coppin, P. Webcams for bird detection and monitoring: A demonstration study. Sensors 2010, 10, 3480–3503. [Google Scholar] [CrossRef] [Green Version]
  23. Blackwell, B.F.; DeVault, T.L.; Seamans, T.W.; Lima, S.L.; Baumhardt, P.; Fernández-Juricic, E. Exploiting avian vision with aircraft lighting to reduce bird strikes. J. Appl. Ecol. 2012, 49, 758–766. [Google Scholar] [CrossRef] [Green Version]
  24. Yoshihashi, R.; Kawakami, R.; Iida, M.; Naemura, T. Bird detection and species classification with time-lapse images around a wind farm: Dataset construction and evaluation. Wind Energy 2017, 20, 1983–1995. [Google Scholar] [CrossRef]
  25. Shakeri, M.; Zhang, H. Real-time bird detection based on background subtraction. In Proceedings of the 10th World Congress on Intelligent Control and Automation, Beijing, China, 6–8 July 2012; pp. 4507–4510. [Google Scholar]
  26. Hong, S.J.; Han, Y.; Kim, S.Y.; Lee, A.Y.; Kim, G. Application of deep-learning methods to bird detection using unmanned aerial vehicle imagery. Sensors 2019, 19, 1651. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Advanced Radar Technology - Get to know our Bird Radar Solutions. Available online: https://www.robinradar.com/products (accessed on 19 April 2020).
  28. Chilson, C.; Avery, K.; McGovern, A.; Bridge, E.; Sheldon, D.; Kelly, J. Automated detection of bird roosts using NEXRAD radar data and Convolutional neural networks. Remote Sens. Ecol. Conserv. 2019, 5, 20–32. [Google Scholar] [CrossRef]
  29. Fox, A.D.; Beasley, P.D. David Lack and the birth of radar ornithology. Arch. Nat. Hist. 2010, 37, 325–332. [Google Scholar] [CrossRef]
  30. Dokter, A.M.; Desmet, P.; Spaaks, J.H.; van Hoey, S.; Veen, L.; Verlinden, L.; Nilsson, C.; Haase, G.; Leijnse, H.; Farnsworth, A.; et al. bioRad: Biological analysis and visualization of weather radar data. Ecography 2019, 42, 852–860. [Google Scholar] [CrossRef] [Green Version]
  31. van Gasteren, H.; Krijgsveld, K.L.; Klauke, N.; Leshem, Y.; Metz, I.C.; Skakuj, M.; Sorbi, S.; Schekler, I.; Shamoun-Baranes, J. Aeroecology meets aviation safety: Early warning systems in Europe and the Middle East prevent collisions between birds and aircraft. Ecography 2019, 42, 899–911. [Google Scholar] [CrossRef]
  32. Phillips, A.C.; Majumdar, S.; Washburn, B.E.; Mayer, D.; Swearingin, R.M.; Herricks, E.E.; Guerrant, T.L.; Beckerman, S.F.; Pullins, C.K. Efficacy of avian radar systems for tracking birds on the airfield of a large international airport. Wildl. Soc. Bull. 2018, 42, 467–477. [Google Scholar] [CrossRef] [Green Version]
  33. Nilsson, C.; Dokter, A.M.; Schmid, B.; Scacco, M.; Verlinden, L.; Bäckman, J.; Haase, G.; Dell’Omo, G.; Chapman, J.W.; Leijnse, H.; et al. Field validation of radar systems for monitoring bird migration. J. Appl. Ecol. 2018, 55, 2552–2564. [Google Scholar] [CrossRef]
  34. Bird Radar Schiphol Airport | Robin Radar Technology Systems. Available online: https://www.robinradar.com/full-bird-radar-coverage-at-schiphol-airport (accessed on 1 January 2021).
  35. Bird Control Radar Systems. Available online: https://detect-inc.com/bird-control-radar-systems/ (accessed on 1 January 2021).
  36. Chabot, D.; Francis, C.M. Computer-automated bird detection and counts in high-resolution aerial images: A review. J. Field Ornithol. 2016, 87, 343–359. [Google Scholar] [CrossRef]
  37. Weinstein, B.G. A computer vision for animal ecology. J. Anim. Ecol. 2018, 87, 533–545. [Google Scholar] [CrossRef]
  38. McClure, C.J.; Martinson, L.; Allison, T.D. Automated monitoring for birds in flight: Proof of concept with eagles at a wind power facility. Biol. Conserv. 2018, 224, 26–33. [Google Scholar] [CrossRef]
  39. Zou, Z.; Shi, Z.; Guo, Y.; Ye, J. Object detection in 20 years: A survey. arXiv 2019, arXiv:1905.05055. [Google Scholar]
  40. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. arXiv 2016, arXiv:1506.02640. [Google Scholar]
  41. Yang, Y.; Deng, H. GC-YOLOv3: You Only Look Once with Global Context Block. Electronics 2020, 9, 1235. [Google Scholar] [CrossRef]
  42. Bernacki, J. A survey on digital camera identification methods. Forensic Sci. Int. Digit. Investig. 2020, 34, 300983. [Google Scholar] [CrossRef]
  43. Rakibe, R.S.; Patil, B.D. Background Subtraction Algorithm Based Human Motion Detection. Int. J. Sci. Res. Publ. 2013, 3, 4. [Google Scholar]
  44. Bhusal, S.; Bhattarai, U.; Karkee, M. Improving Pest Bird Detection in a Vineyard Environment using Super-Resolution and Deep Learning. IFAC-PapersOnLine 2019, 52, 18–23. [Google Scholar] [CrossRef]
  45. Yoshihashi, R.; Kawakami, R.; Iida, M.; Naemura, T. Evaluation of bird detection using time-lapse images around a wind farm. In Proceedings of the European Wind Energy Association Conference, Paris, France, 17–20 November 2015; pp. 104–107. [Google Scholar]
  46. Pillai, S.K.; Raghuwanshi, M.M.; Shrawankar, U. Deep Learning Neural Network for Identification of Bird Species Sofia. Proc. IRSCNS 2018 2019, 75, 291–298. [Google Scholar] [CrossRef]
  47. Gavali, P.; Mhetre, M.P.A.; Patil, M.N.C.; Bamane, M.N.S.; Buva, M.H.D. Bird Species Identification using Deep Learning. Int. J. Eng. Res. Technol. 2019, 8, 68–72. [Google Scholar]
  48. Trinh, T.T.; Yoshihashi, R.; Kawakami, R.; Iida, M.; Naemura, T. Bird detection near wind turbines from high-resolution video using lstm networks. In Proceedings of the World Wind Energy Conference (WWEC), Tokyo, Japan, 31 October–1 November 2016; Volume 2, p. 6. [Google Scholar]
  49. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  50. Gradolewski, D.; Maslowski, D.; Dziak, D.; Jachimczyk, B.; Mundlamuri, S.T.; Prakash, C.G.; Kulesza, W.J. A Distributed Computing Real-Time Safety System of Collaborative Robot. Elektron. Ir Elektrotechnika 2020, 26, 4–14. [Google Scholar] [CrossRef]
  51. Mohanty, R.; Mallik, B.K.; Solanki, S.S. Automatic bird species recognition system using neural Network based on spike. Appl. Acoust. 2020, 161, 107177. [Google Scholar] [CrossRef]
  52. Houpt, R.; Pearson, M.; Pearson, P.; Rink, T.; Seckler, S.; Stephenson, D.; VanderStoep, A. Using Neural Networks to Identify Bird Species from Birdsong Samples. In An Introduction to Undergraduate Research in Computational and Mathematical Biology; Springer: Berlin/Heidelberg, Germany, 2020; pp. 401–442. [Google Scholar]
  53. Triveni, G.; Malleswari, G.N.; Sree, K.N.S.; Ramya, M. Bird Species Identification using Deep Fuzzy Neural Network. Int. J. Res. Appl. Sci. Eng. Technol. (IJRASET) 2020, 8, 1214–1219. [Google Scholar] [CrossRef]
  54. Huang, Y.P.; Basanta, H. Bird image retrieval and recognition using a deep learning platform. IEEE Access 2019, 7, 66980–66989. [Google Scholar] [CrossRef]
  55. Orhan, A.E.; Pitkow, X. Skip connections eliminate singularities. In Proceedings of the 6th International Conference on Learning Representations, ICLR 2018 - Conference Track Proceedings, Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
  56. Chen, Y.; Dai, Y.; Chen, Y. Design and Implementation of Automatic Bird-blocking Network in Airport Intelligent Bird-repelling System. In Proceedings of the 2019 IEEE 4th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chengdu, China, 20–22 December 2019; Volume 1, pp. 2511–2515. [Google Scholar]
  57. Doppler, M.S.; Blackwell, B.F.; DeVault, T.L.; Fernández-Juricic, E. Cowbird responses to aircraft with lights tuned to their eyes: Implications for bird–aircraft collisions. Condor Ornithol. Appl. 2015, 117, 165–177. [Google Scholar] [CrossRef] [Green Version]
  58. Goller, B.; Blackwell, B.F.; DeVault, T.L.; Baumhardt, P.E.; Fernández-Juricic, E. Assessing bird avoidance of high-contrast lights using a choice test approach: Implications for reducing human-induced avian mortality. PeerJ 2018, 6, e5404. [Google Scholar] [CrossRef]
  59. Aviation Rulemaking Advisory Committee (ARAC), Advisory and Rulemaking Committees—Rotorcraft Bird Strike Working Group; Federal Aviation Administration: Washington, DC, USA, 2017.
  60. Hausberger, M.; Boigné, A.; Lesimple, C.; Belin, L.; Henry, L. Wide-eyed glare scares raptors: From laboratory evidence to applied management. PLoS ONE 2018, 13, 1–15. [Google Scholar] [CrossRef] [PubMed]
  61. Bishop, J.; McKay, H.; Parrott, D.; Allan, J. Review of International Research Literature Regarding the Effectiveness of Auditory Bird Scaring Techniques and Potential Alternatives; Food and Rural Affairs: London, UK, 2003; pp. 1–53. [Google Scholar]
  62. Cadets Test Sound Light System Deter Bird Strikes. Available online: https://www.airforcetimes.com/article/20140115/NEWS/301150011/Cadets-test-sound-light-system-deter-birdstrikes/ (accessed on 28 June 2020).
  63. Dziak, D.; Jachimczyk, B.; Kulesza, W.J. IoT based information system for healthcare application: Design methodology approach. Appl. Sci. 2017, 7, 596. [Google Scholar] [CrossRef]
  64. Dziak, D. Detection and Classification Multi-sensor Systems: Implementation of IoT and Systematic Design Approaches. Ph.D. Thesis, Blekinge Tekniska Högskola, Karlskrona, Sweden, 2020. [Google Scholar]
  65. Gradolewski, D.; Dziak, D.; Kaniecki, D.; Jaworski, A.; Kulesza, W. A Stereovision Method and System. U.K. Patent Application No. 2018391.9, 23 November 2020. [Google Scholar]
  66. Paszek, K.; Danielowski, M. Systems and Methods for Detecting Flying Animals. U.S. Patent Application No. US20200257896A1, 26 May 2018. [Google Scholar]
  67. McGhee, J.; A Henderson, I.; Korczynski, M.J.; Kulesza, W. Scientific Metrology; Lodart S.A.: Lódź, Poland, 1996. [Google Scholar]
  68. Chen, J.; Khatibi, S.; Kulesza, W. Depth reconstruction uncertainty analysis and improvement–The dithering approach. Image Vis. Comput. 2010, 28, 1377–1385. [Google Scholar] [CrossRef]
  69. Svensson, L.; Grant, P.J.; Mullarney, K.; Zetterström, D. Collins bird guide. Br. Birds 1999, 92, 432–433. [Google Scholar]
  70. Mrovlje, J.; Vrancic, D. Distance measuring based on stereoscopic pictures. In Proceedings of the 9th International PhD Workshop on Systems and Control: Young Generation Viewpoint, Izola, Slovenia, 1–3 October 2008; Volume 2, pp. 1–6. [Google Scholar]
  71. MAVIC AIR. Available online: https://www.dji.com/pl/mavic-air (accessed on 16 January 2021).
  72. E-flite Opterra 1.2 m Flying Wing. Available online: https://www.horizonhobby.com/product/opterra-1.2m-bnf-basic-with-as3x-and-safe-select/EFL11450.html (accessed on 16 January 2021).
  73. The Opterra 2m Flying Wing. Available online: https://www.horizonhobby.com/product/opterra-2m-wing-bnf-basic-with-as3x/EFL11150.html (accessed on 16 January 2021).
  74. Rodríguez, J.A.M.; Alanís, F.C.M. Binocular self-calibration performed via adaptive genetic algorithm based on laser line imaging. J. Mod. Opt. 2016, 63, 1219–1232. [Google Scholar] [CrossRef]
Figure 1. Block diagram illustrating the system conceptualization.
Figure 2. Stereovision model: (a) classical; (b) with rotated image planes; (c) illustration of the cameras’ alignment for the modified stereovision.
Figure 3. Definition of the variables and basic system settings.
Figure 4. The distance measurement and its uncertainty for varying $(y_1 - y_2)$, where $y_{2max}$ and $y_{2min}$ denote the numbers of the top and bottom pixels, respectively, on the $C_2$ image plane.
Figure 5. The distance measurement as a function of the pixel difference, $(y_1 - y_2)$, and the object position on the $C_2$ image plane, $y_2$, within a range of 300 m.
Figure 6. The quantization uncertainty of the distance measurement as a function of the pixel difference, $(y_1 - y_2)$, and the object position on the $C_2$ image plane, $y_2$, within the range of 300 m.
Figure 7. General system processing architecture consisting of N monitoring modules and the control unit.
Figure 8. Graphical illustration of the bird’s size estimation.
Figure 9. The quantization uncertainty of $P_W$ with respect to the distance. The calculations were made for an average bird of each class, i.e., wingspans of 1 m, 1.32 m, and 1.67 m for the small, medium, and large bird classes, respectively.
Figure 10. The distance measurement and its uncertainty as a function of the pixel difference for baselines $B$ = (0.75, 1, 1.25, 1.5) m and for different object placements on the image plane: $y_2 = 1$ in (a) and (b); $y_2 = 1440$ in (c) and (d).
Figure 11. (a) The prototype computer drawing of the monitoring module; (b) the system installation composed of three monitoring modules and one control unit.
Figure 12. The installation spot depicted as P1 at the airport.
Figure 13. The monitoring area and dead zone configuration of the system prototype consisting of three monitoring modules.
Figure 14. A picture from the monitoring module during the tests—a bird with an airplane in a risk situation. Frame 1 and Frame 3 are centered on the detected bird and the airplane, respectively.
Figure 15. Small, medium, and large drones used during the validation of the system.
Figure 16. Graphical illustration of the test results: distance from a monitoring module vs. the height of the drone during the test. Green dots: GPS data; red dots: data from the module. The corresponding colored ellipses illustrate the standard deviations of the respective distance and height measurements. (a) Scenario III; (b) Scenario IV.
Figure 17. Graphical representation of the test results: distance from a monitoring module vs. the height of the drone during the test. Green dots: GPS data; red dots: data from the module. The corresponding colored ellipses illustrate the standard deviations of the respective distance and height measurements. (a) Scenario VI; (b) Scenario IX.
Table 1. UDD summary: functionalities, constraints, and applied technologies and algorithms.
General Functionality | Itemized Functionality | Particular Constraints | Technologies and Algorithms Used
Real-time runway monitoring | Detection and identification of moving objects | Reliability ≥ 98%; latency ≤ 5 s; computation rate > 15 FPS; FP rate < 5%; robustness to weather and light conditions | Stereovision; motion detection; machine learning (convolutional NN); distributed computing; microcontroller
Real-time runway monitoring | Localization, positioning | 3D positioning ranges: large (red kite) 20 m–300 m, medium (common buzzard) 20 m–200 m, small (swallow) 20 m–75 m; corresponding wing image size 10 px–10,000 px; localization uncertainty < 10% | (as above)
Classification/management | Object classification | Bird/no-bird; small/medium/large bird; reliability > 80%; simultaneously up to 4 individual birds and/or flocks | Multi-dimensional signal processing; distributed computing; machine learning (convolutional and deep NN); strobe; audio
Classification/management | Threat classification | According to the airport's specific horizontal and vertical zones | (as above)
Classification/management | Bird risk management | Manual and automated repelling; must not distract people, especially pilots | (as above)
HMI | System accessibility | Redundant, reliable link with error warning 24/7; web/mobile app for viewing current and archived events; manual activation of the repelling system | Linux/MacOS/Windows/Android/iOS; Edge/Chrome/Mozilla/Safari; MySQL; ReactJS; Ethernet, Wi-Fi, TCP/IP
HMI | Event traceability, archiving | Automated and periodic reporting (monthly, quarterly, annually) compliant with the ICAO and the EASA regulations; manual reporting of eyewitness observations | (as above)
Affordability | Customizability, versatility | Customizable monitoring range and observation zones; customizable object classification | Distributed computing; IoT; CUDA; GPU; modularity
Affordability | Cost-effectiveness | System price; easy installation and replacement | (as above)
Affordability | Serviceability/operability | Automated and on-demand online, in situ auto-test and auto-calibration | (as above)
Table 2. Definition of the basic variables and parameters of the mathematical model along with units used.
Symbol | Name | Unit
$B$ | Baseline, the line segment connecting the cameras' centers. | (m)
$D_b$ | The length of the segment connecting the object with the line through the centers of the two cameras, along a line parallel to the optical axes of the cameras. | (m)
$D_k$ | The distance of a detected object to a line through the centers of the two cameras. | (m)
$b_0$ | The intersection point of the line of $D_b$ with the baseline. | (-)
$b_1$ | The distance from the first camera C1 to $b_0$ in a direction perpendicular to the optical axes of the cameras. | (m)
$b_2$ | The distance from the second camera C2 to $b_0$ in a direction perpendicular to the optical axes of the cameras. | (m)
$D$ | The distance from the first camera to the plane of the object, where the plane of the object is perpendicular to the optical axes of the cameras. | (m)
$d_1$ | The distance from the first camera C1 to $b_0$ in a direction parallel to the optical axes of the cameras. | (m)
$d_2$ | The distance from the second camera C2 to $b_0$ in a direction parallel to the optical axes of the cameras. | (m)
$\varphi_0$ | The cameras' field of view. | (°)
$\varphi_1$ | The angle between the projection line of the object on the first camera C1 and the optical axis of the first camera. | (°)
$\varphi_2$ | The angle between the projection line of the object on the second camera C2 and the optical axis of the second camera. | (°)
$\alpha$ | The rotation angle, defined as the angle between the (parallel) optical axes of the cameras and the baseline. The rotation of the first camera C1 is around the first axis, perpendicular to the optical axis of the first camera; the rotation of the second camera C2 is around the second axis, parallel to the first axis. | (°)
$y_0$ | The cameras' resolution along the Y axes, where the Y axis of a camera is perpendicular to the rotational axis of that camera (the first axis for the first camera and the second axis for the second camera) and lies within the image plane of the corresponding camera. | (px)
$y_1$ | The pixel number of the object's center projection on the image plane of camera C1 along the Y1 axis, where the Y1 axis is perpendicular to the rotational axis of C1 and lies within the image plane of the camera. | (px)
$y_2$ | The pixel number of the object's center projection on the image plane of camera C2 along the Y2 axis, where the Y2 axis is perpendicular to the rotational axis of C2 and lies within the image plane of the camera. | (px)
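For orientation, in the classical parallel-axis stereovision setup of [70] the distance estimate depends only on the baseline $B$, the field of view $\varphi_0$, the resolution $y_0$, and the pixel difference $(y_1 - y_2)$. The MATLAB sketch below illustrates that classical relation only; the rotated-axis model of this paper additionally involves $\alpha$, and all numerical values here are illustrative assumptions rather than the prototype's calibration.

% Classical parallel-axis stereoscopic distance relation (cf. [70]).
% Illustrative sketch only; the paper's modified model also uses the
% rotation angle alpha, and these parameter values are assumed.
B    = 1.0;           % baseline between camera centers (m), assumed
phi0 = deg2rad(47);   % camera field of view (rad), assumed value
y0   = 1440;          % resolution along the Y axis (px), assumed value
y1   = 800;           % pixel of the object's projection on C1 (example)
y2   = 760;           % pixel of the object's projection on C2 (example)

% The estimated distance grows as the pixel difference (y1 - y2) shrinks.
D = B * y0 / (2 * tan(phi0 / 2) * (y1 - y2));
fprintf('Estimated distance D = %.1f m\n', D);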
Table 3. Parameters of drones used for the tests.
Parameter | Small [71] | Medium [72] | Large [73]
Wingspan | 0.24 m | 1.20 m | 1.99 m
Height | 0.10 m | 0.53 m | 1.04 m
Total area | 0.012 m² | 0.28 m² | 0.67 m²
$O_{approx}^{ref}$ | 0.04 m² | 0.32 m² | 1.02 m²
Requested detection range | 75 m | 200 m | 300 m
Table 4. Evaluation of system detection efficiency.
Detection Range (m) | Small: Detection Time (s) | Small: Flight Time (s) | Small: Detection Efficiency (%) | Medium: Detection Time (s) | Medium: Flight Time (s) | Medium: Detection Efficiency (%) | Large: Detection Time (s) | Large: Flight Time (s) | Large: Detection Efficiency (%)
(0–50] | 29 | 32 | 91 | 26 | 26 | 100 | – | – | –
(50–100] | 80 | 85 | 94 | 82 | 82 | 100 | – | – | –
(100–150] | 20 | 40 | 50 | 226 | 235 | 96 | 24 | 24 | 100
(150–200] | – | – | – | 322 | 329 | 98 | 242 | 252 | 96
(200–250] | – | – | – | 64 | 69 | 93 | 102 | 105 | 98
(250–300] | – | – | – | – | – | – | 26 | 28 | 92
(300–350] | – | – | – | – | – | – | 206 | 362 | 57
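The efficiency values in Table 4 are consistent with the ratio of detection time to flight time in each range bin, which the following illustrative MATLAB lines reproduce for the small drone in the (0–50] m bin (values taken from the table):

% Illustrative check of a Table 4 entry: detection efficiency appears to be
% the detection time divided by the flight time within a range bin.
detectionTime = 29;   % s, small drone, (0-50] m bin (Table 4)
flightTime    = 32;   % s, small drone, (0-50] m bin (Table 4)
efficiency = 100 * detectionTime / flightTime;   % about 91 %
fprintf('Detection efficiency = %.0f %%\n', efficiency);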
Table 5. Test plan of the designed system. N is the number of samples registered during the test, and the error is the difference in the mean between the GPS and the system measurements.
Scenario | GPS $\bar{D}_k$ (m) | GPS $\overline{\sigma}_{D_k}$ (m) | GPS $\bar{H}$ (m) | GPS $\overline{\sigma}_{H}$ (m) | N (-) | Module $\bar{D}_k$ (m) | Module $\overline{\sigma}_{D_k}$ (m) | Module $\bar{H}$ (m) | Module $\overline{\sigma}_{H}$ (m) | $\Delta\bar{D}_k$ (m) | $\Delta\bar{H}$ (m) | $\delta\bar{D}_k$ (%) | $\delta\bar{H}$ (%)
I_S | 46.4 | 0.3 | 26.7 | 0.2 | 85 | 45.6 | 1.6 | 26.9 | 0.6 | 0.8 | 0.2 | 1.7 | 0.7
II_S | 66.6 | 2.2 | 32.0 | 0.2 | 86 | 68.7 | 3.7 | 34.6 | 1.4 | 2.1 | 2.6 | 3.2 | 8.1
III_S | 96.3 | 10.8 | 26.7 | 0.1 | 82 | 101.7 | 20.7 | 29.3 | 2.7 | 5.4 | 5.9 | 5.6 | 9.7
IV_M | 104.2 | 12.5 | 36.1 | 0.5 | 99 | 97.1 | 8.9 | 34.5 | 1.8 | 7.1 | 1.6 | 6.8 | 1.6
V_M | 133.6 | 13.7 | 38.5 | 2.3 | 157 | 141.3 | 9.6 | 35.4 | 3.8 | 7.7 | 3.1 | 5.0 | 8.0
VI_M | 202.8 | 1.6 | 48.8 | 1.7 | 97 | 199.1 | 17.4 | 50.5 | 5.1 | 1.7 | 6.7 | 1.8 | 3.4
VII_L | 129.3 | 2.5 | 53.0 | 0.7 | 137 | 139.4 | 9.5 | 53.7 | 3.6 | 10.1 | 0.7 | 7.8 | 1.3
VIII_L | 202.9 | 4.9 | 96.9 | 0.7 | 101 | 186.7 | 16.8 | 88.3 | 6.7 | 16.2 | 8.6 | 7.9 | 8.8
IX_L | 311.5 | 2.9 | 102.9 | 0.7 | 31 | 320.2 | 23.0 | 103.5 | 7.2 | 8.7 | 0.6 | 2.7 | 0.5
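Following the caption's definition of the error as the difference between the GPS and system means, the uncertainty columns can be reproduced directly from the tabulated values; the short MATLAB illustration below does so for Scenario I_S:

% Reproducing the uncertainty columns of Table 5 for Scenario I_S.
% The absolute error is the difference of the GPS and module means; the
% relative error refers that difference to the GPS mean.
Dk_gps = 46.4;   Dk_mod = 45.6;   % mean distances (m), Table 5
H_gps  = 26.7;   H_mod  = 26.9;   % mean heights (m), Table 5

dDk     = abs(Dk_gps - Dk_mod);   % 0.8 m
dH      = abs(H_gps  - H_mod);    % 0.2 m
deltaDk = 100 * dDk / Dk_gps;     % about 1.7 %
deltaH  = 100 * dH  / H_gps;      % about 0.7 %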
Table 6. The test results of the drone size estimations, where the reference sizes are given in Table 3. The classification boundaries are $O_{b1}$ = 0.22 m² and $O_{b2}$ = 0.48 m².
Scenario | $\bar{P}_w$ (m) | $\overline{\sigma}_{P_w}$ (m) | $\bar{P}_h$ (m) | $\overline{\sigma}_{P_h}$ (m) | $\bar{O}_{approx}$ (m²) | $\overline{\sigma}_{O_{approx}}$ (m²) | $\Delta\bar{P}_w$ (m) | $\Delta\bar{P}_h$ (m) | $\Delta\bar{O}_{approx}$ (m²) | Classified Small | Classified Medium | Classified Large
I_S | 0.32 | 0.04 | 0.17 | 0.01 | 0.035 | 0.002 | 0.08 | 0.07 | 0.005 | 85 | 0 | 0
II_S | 0.35 | 0.01 | 0.25 | 0.01 | 0.044 | 0.001 | 0.11 | 0.14 | 0.004 | 86 | 0 | 0
III_S | 0.31 | 0.06 | 0.21 | 0.02 | 0.049 | 0.005 | 0.07 | 0.11 | 0.009 | 81 | 1 | 0
IV_M | 1.06 | 0.23 | 0.68 | 0.08 | 0.363 | 0.207 | 0.14 | 0.15 | 0.046 | 8 | 72 | 19
V_M | 0.87 | 0.11 | 0.86 | 0.11 | 0.379 | 0.121 | 0.33 | 0.33 | 0.044 | 21 | 105 | 31
VI_M | 1.26 | 0.25 | 0.69 | 0.03 | 0.437 | 0.082 | 0.06 | 0.16 | 0.118 | 3 | 54 | 40
VII_L | 1.85 | 0.24 | 0.87 | 0.05 | 0.807 | 0.100 | 0.14 | 0.17 | 0.227 | 5 | 17 | 115
VIII_L | 2.09 | 0.13 | 0.95 | 0.03 | 0.999 | 0.079 | 0.10 | 0.09 | 0.036 | 0 | 0 | 101
IX_L | 1.97 | 0.14 | 1.16 | 0.04 | 1.156 | 0.145 | 0.02 | 0.12 | 0.121 | 0 | 1 | 30
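The boundaries quoted in the caption imply a simple thresholding of the approximated silhouette area into the three size classes. The sketch below is a hypothetical helper illustrating those two thresholds only; the deployed classifier also relies on the machine-learning stages listed in Table 1.

% Hypothetical helper illustrating the area thresholds of Table 6; the
% name classifyBySilhouetteArea is assumed, not taken from the paper.
function sizeClass = classifyBySilhouetteArea(O_approx)
    % O_approx: approximated object area (m^2)
    if O_approx < 0.22        % boundary O_b1 (m^2)
        sizeClass = "small";
    elseif O_approx < 0.48    % boundary O_b2 (m^2)
        sizeClass = "medium";
    else
        sizeClass = "large";
    end
end

For example, the mean area of 0.437 m² reported for Scenario VI_M falls between the two boundaries and would be labeled medium, matching the dominant classification count in that row.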
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
