1. Introduction
An unmanned ground or aerial system includes a set of subsystems, such as the vehicle or aircraft, payloads, control stations (often additional remote stations), launch and recovery elements for aircraft, support subsystems, communication subsystems, transportation subsystems, etc. It must also be considered part of a land or air environment, capable of operating over restricted or extended areas under specific rules and regulations.
Unmanned systems typically have elements similar to those of manned vehicle or aircraft systems, but they are designed for operation without an onboard crew. The crew (as a subsystem), together with its interface to the vehicle/aircraft controls, is replaced by an electronic information and control subsystem.
Therefore, a robot is considered a complex, computer-programmable system equipped with microprocessors, sensors, actuators, and mechanical structures, capable of action, perception, decision making, and communication, and able to operate in complex and variable environments.
Initially, the term remotely piloted vehicle (RPV) was used for unmanned aircraft, but as systems began to incorporate terrestrial or underwater vehicles, other acronyms emerged to clarify references to aerial vehicle systems. Currently, unmanned aerial vehicle (UAV) is the general term for an aircraft (aerial vehicle) in an unmanned aerial system (UAS) [
1,
2]. While useful, such systems are generally better suited to integration into outdoor surveillance systems.
The purpose of this work is to create an architecture using unmanned ground and aerial vehicles for the surveillance of objectives. Designing such a system is challenging due to the continuous monitoring required, which raises issues of autonomy as well as the design of navigation, command, and control systems. Additionally, our work targets the monitoring of large indoor objectives, such as warehouses, public institutions, stadiums, and concert halls.
Combining mobile robots with drones ensures increased system autonomy, lower costs for creation and operation, and easier maintenance over extended periods [
3]. Mobile robots and drones can replace human security patrols, reducing costs and eliminating repetitive and monotonous tasks. These systems must possess key features such as autonomy, intelligence, flexibility, scalability, and precision. Their implementation requires meticulous planning, selecting appropriate equipment, and complying with local regulations. Such systems have been discussed in detail in papers such as [
4,
5].
This work proposes the development of an autonomous mechatronic system for monitoring strategic objectives, aiming to cut costs and enhance security. It emphasizes the importance of miniaturization, modularity, energy efficiency, and eco-friendly solutions in the design of such systems. Securing strategic objectives is a fundamental concern for modern states [
6,
7], with profound implications for economic stability, national security, and citizen protection. These objectives include critical infrastructures such as power plants, transportation networks, military units, and government institutions, all playing essential roles in societal functions. With global interdependence and emerging threats, protecting these resources becomes a strategic priority.
Regarding indoor positioning, most papers focus on the development and challenges of autonomous indoor surveillance systems using robots, addressing critical aspects such as navigation, sensing, and cooperative strategies. As such, paper [
8] provides an overview of the opportunities and challenges in deploying autonomous robots for indoor surveillance. The paper identifies navigation, real-time decision making, and advanced sensing as key areas requiring innovation. It emphasizes the importance of robust algorithms and sensor technologies to enable robots to operate effectively in dynamic environments. Further, paper [
9] describes the design and implementation of an autonomous indoor surveillance robot. Their system uses a combination of sensors, such as ultrasonic and infrared, along with mapping and obstacle-avoidance algorithms to enable robots to patrol predefined indoor spaces. The paper highlights the importance of efficient navigation and real-time environmental awareness. Paper [
10] proposes a hybrid vision-based system that integrates optical cameras and depth sensors. Their approach improves target detection and environmental monitoring accuracy, enabling robots to adapt to complex indoor settings. The hybrid system enhances surveillance capabilities by combining the strengths of multiple sensing technologies. Further on, paper [
11] focuses on a multi-sensor fusion approach for autonomous surveillance. The system integrates data from LiDAR, ultrasonic sensors, and cameras to improve environmental perception and obstacle detection. By combining sensor inputs, the robots achieve higher navigation precision, ensuring reliable surveillance in dynamic and cluttered environments. Paper [
12] introduces a multi-agent system for indoor surveillance, where multiple robots collaborate to optimize task allocation and area coverage. Their study highlights cooperative strategies and communication protocols that allow robots to patrol efficiently while minimizing overlap and maximizing coverage. Collectively, these papers demonstrate innovative approaches to indoor surveillance using autonomous robots. They emphasize the importance of integrating sensors, advanced algorithms, and multi-robot cooperation to address challenges such as navigation, obstacle avoidance, and real-time monitoring in dynamic environments, with indoor positioning systems being a topic of great interest among researchers pursuing different approaches [
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24].
The present study proposes an autonomous monitoring system using mobile robots and drones to secure strategic objectives. This system integrates advanced sensors and software algorithms to ensure efficient and secure surveillance of predefined areas, meeting modern security requirements. It offers benefits such as reduced operational costs, elimination of repetitive human tasks, and an eco-friendly, energy-efficient infrastructure. Unmanned terrestrial and aerial systems are explored, highlighting their applications in security, emergency response, environmental monitoring, agriculture, and infrastructure protection. These systems leverage artificial intelligence and advanced technologies for monitoring, rapid response, and safeguarding assets against evolving threats.
Mobile robots are equipped with technologies such as AI and sensors, enabling autonomous navigation, route planning, mapping, and obstacle avoidance. Various locomotion mechanisms—wheels, tracks, and legs—are utilized, catering to environments such as industrial inspections, search and rescue, and exploration. Aerial mechatronic systems combine mechanics, electronics, and control engineering, emphasizing automation, miniaturization, safety, and predictive maintenance. The study also classifies drones based on design: fixed-wing for long distances, rotary-wing for stability, hybrids, single-rotor for heavy payloads, ornithopters, nano/micro drones for confined spaces, and submersibles for air and underwater use. Applications range from smart city surveillance to ocean monitoring, addressing disaster detection and wide-area oversight. Collaborative systems of aerial and terrestrial robots are also discussed, capable of executing tasks such as strategic surveillance and disaster response. The challenges of using robots in diverse environments are addressed, emphasizing the need for safe human–robot interaction. The proposed system includes mobile robots patrolling indoor areas. Theoretical and experimental tests validated the design, including a mobile flying robot with calculations for thrust, motion, and battery life. Marvelmind beacons ensured accurate indoor localization (±2 cm), tested through various stages, including mobile robots and quadcopters. Implementation details include tracked robots and quadrupeds with advanced features such as visual AI and inverse kinematics for precise movements. Two drone prototypes, one teleoperated and another autonomous, demonstrated reliable performance through rigorous testing. Indoor positioning tests confirmed system accuracy, and a synchronization algorithm was proposed for multi-stage surveillance: area mapping, robot design, trajectory optimization, and sensor integration. 
Robots and drones collaborate autonomously to detect and report anomalies, with drones providing additional data. The system is customizable and operates efficiently to meet specific needs.
2. Establishing the Monitoring Scenario and Imposed Restrictions
In the new paradigm of large-area geographic and interior space monitoring, using autonomous mobile robots amid today's security challenges presents an area where current studies do not necessarily offer a clear solution [
25]. A critical aspect to consider is ensuring cybersecurity for these systems. Autonomous robots rely on wireless communications and sensors, making them vulnerable to cyber attacks such as data interception, spoofing, and malware [
26,
27]. Successful attacks could compromise robot functionality, leading to system failures or even physical harm. Security assurance requires encryption, secure communication protocols, and continuous monitoring against threats. Additionally, AI-driven robots must be trained to autonomously recognize and mitigate potential cybersecurity risks [
28,
29,
30]. As robotics evolves, proactive cybersecurity measures will be essential to maintain trust and safety in these systems.
Beyond cybersecurity, it is crucial to recognize that the diverse configurations of geographical areas or building interiors navigated by these robots may require custom configurations for each scenario. Additionally, special attention must be given to the coexistence of humans and robots, ensuring that each operates without disrupting the other. Effective communication, trust, and clear safety protocols are vital for seamless interaction.
Indoor surveillance with robots is an evolving technology that uses autonomous robots equipped with cameras, sensors, and advanced algorithms to monitor and secure indoor spaces. These robots are designed to autonomously navigate environments such as offices, warehouses, hospitals, or homes, providing real-time surveillance without requiring human operators.
In this system, a team of autonomous robots will patrol an indoor building area. Robot trajectories will be both random and predefined based on specific rules developed for each situation.
The system will follow key stages, obtaining relevant data on events within the robots’ operational area. The algorithm will allow a variable number of mobile robots to patrol an area, equipped with a minimal set of sensors necessary only for movement, obstacle avoidance, and basic detection of disturbances in the area of interest. In this case, the robots follow either a predetermined or random trajectory, while a flying robot, designed to serve the entire system and equipped with high-performance, costly sensors, will intervene only in the event of a detected incident.
Robots patrol according to a predefined schedule until one detects a potential threat, at which point it sends an alarm signal to the drone to initiate an intervention, as depicted schematically in
Figure 1. By initiating an automated intervention sequence, the drone enters an alert state, powers up its engines, and prepares to take off and inspect the area of interest. It quickly heads to the location indicated by the mobile robot, covering the distance in a much shorter time than another mobile robot or human intervention agent could. Simultaneously, all its equipment (sensors and camera) is activated.
The drone begins capturing images and videos from the air using its high-resolution cameras. Additionally, modern drones can utilize thermal cameras to identify heat sources at night or in low-visibility conditions, an option that could be implemented in future projects or viewed as an enhancement to the current project.
The data captured by the drone are transmitted in real time to the control center, where human operators or AI algorithms assess whether a genuine threat exists.
The drone maintains aerial surveillance, providing additional information for mobile robots or other units available for response. Based on observation results, the drone will either continue to monitor the area or collaborate with the mobile robot to track the target. If a physical intervention is needed, it can be carried out by other specialized units, supported by detailed information provided by the drone. Thus, the drone functions as an aerial observer within the system, offering continuous surveillance when required and enabling a rapid response to incidents detected by the mobile robot.
3. Implementation and Testing of an Indoor GPS System
The Marvelmind navigation system is an off-the-shelf indoor navigation system designed to provide precise location data (±2 cm) for autonomous robots and vehicles (AGV—automated guided vehicles). It can also track moving objects through attached mobile beacons. This navigation system includes fixed ultrasonic beacons connected via radio, one or more mobile beacons on objects being tracked, and a router providing access to a computer or peripheral device for monitoring. The mobile beacon’s location is calculated based on the ultrasonic time of flight (TOF) delay between stationary and mobile beacons using a trilateration algorithm.
The system operates by measuring the distances between beacons and mobile devices using ultrasonic signals and computing positions with centimeter-level accuracy through trilateration. Marvelmind systems can cover distances of up to 50 m between beacons, depending on environmental conditions such as obstacles or interference. To cover larger areas, additional beacons can be deployed, forming a scalable grid that significantly extends the overall coverage. By strategically placing beacons, the system can efficiently track objects or robots across extensive indoor spaces such as warehouses, factories, and office buildings, making it suitable for robotics and industrial applications.
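The TOF-plus-trilateration principle can be illustrated with a short sketch. The snippet below is a minimal, generic multilateration solver (not Marvelmind's proprietary implementation): it linearizes the nonlinear range equations by subtracting the first one and solves the result in a least-squares sense, assuming the beacon coordinates and TOF-derived distances are known.

```python
def trilaterate_2d(beacons, distances):
    """Estimate (x, y) from >= 3 fixed beacons and measured ranges.

    Each range satisfies (x - xi)^2 + (y - yi)^2 = di^2. Subtracting
    the first equation from the others yields a linear system in (x, y),
    solved here via the 2x2 normal equations (least squares).
    beacons: list of (x, y) beacon positions; distances: ranges to each.
    """
    (x0, y0), d0 = beacons[0], distances[0]
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (xi, yi), di in zip(beacons[1:], distances[1:]):
        ax, ay = 2.0 * (xi - x0), 2.0 * (yi - y0)
        rhs = d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2
        a11 += ax * ax; a12 += ax * ay; a22 += ay * ay
        b1 += ax * rhs; b2 += ay * rhs
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

In practice each distance would first be obtained from the ultrasonic time of flight as d = c_sound · TOF.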
The system is composed of stationary and mobile beacons as well as a modem needed to communicate both with the beacons and with an appropriate application.
Regarding the stationary beacons (
Figure 2), they are usually mounted on walls or ceilings above the robot, with ultrasonic sensors facing downwards to achieve optimal ultrasonic signal coverage. For automatic landing and indoor navigation of copters, it is recommended to install a mobile beacon on the bottom of the flying system, oriented downwards. The placement and orientation of the beacons should be configured to ensure maximum coverage of the ultrasonic signal. The system’s effectiveness largely depends on the quality of the ultrasonic signal received by the stationary beacons. During the initial mapping configuration, stationary beacons emit and receive ultrasonic signals. Switching between roles (stationary and mobile) is accomplished via the application software and does not require any hardware modifications.
As stated before, an important part of the system is the router, which acts as the central data aggregation point for the system. It must remain powered while the navigation system is operational. The router also enables configuration, monitoring, and dashboard interaction. It can be placed anywhere within radio coverage, typically up to 100 m with antennas from the starter kit. The router is represented in
Figure 3.
The indoor positioning system is accompanied by an application for the initial programming and configuration of the beacons. The application’s front-end dashboard is shown in
Figure 4.
To obtain preliminary results and perform a general functionality test, the system was set up in a laboratory, where initial measurements were conducted.
Figure 5 shows the equipment setup used for configuring the workspace, consisting of five beacons and a modem.
For the initial testing of the Marvelmind indoor positioning system, a configuration with two fixed and three mobile beacons (2D setup) was chosen. In this configuration, two stationary beacons were mounted at a height of 1 m from the perimeter of the test area. This setup is demonstrated in
Figure 5. By utilizing this arrangement, the system was able to map and track positions within the designated test space, allowing for real-time monitoring and testing of the system’s capabilities under controlled conditions.
The mobile beacons will be introduced into the designated work perimeter one by one, manually. After each beacon’s introduction, the system’s reported coordinates will be verified for accuracy, and results will be documented.
Figure 5 illustrates the placement of all beacons along with the output of the Marvelmind application.
Figure 6a–h are screenshots from the Marvelmind dashboard for different scenarios, including a single robot moving in a straight line in different directions, a curvilinear path for one, two, or three robots, etc.
Figure 7a–h are graphs plotted on the horizontal plane based on the (x, y) coordinates provided by the mobile beacons. These were generated using a Python 3 program and show trajectories that resemble those in
Figure 6a–h, though not identical.
As expected, the two types are similar. The first set of images provides a qualitative view of the process, while the second set, based on real-time coordinates, is more precise and quantitatively valuable.
The objective of this test was to develop an application that not only displays beacon positions quantitatively and visually, but also provides coordinates in a format easily processed and used for the control of mobile robots and the entire system.
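A minimal sketch of such an application is shown below; it assumes a hypothetical CSV-like log of `beacon_id,x,y` readings (the actual Marvelmind stream format differs) and groups them into per-beacon trajectories that can be plotted or passed to a robot controller.

```python
from collections import defaultdict

def parse_positions(lines):
    """Group a stream of 'beacon_id,x,y' readings (hypothetical log
    format) into per-beacon trajectories ready for plotting or control.

    Blank lines and '#' comments are skipped; coordinates are floats.
    """
    tracks = defaultdict(list)
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        bid, x, y = line.split(",")
        tracks[int(bid)].append((float(x), float(y)))
    return dict(tracks)
```

Each returned trajectory is a list of (x, y) tuples, directly usable for plotting the paths shown in the figures or for feeding a motion controller.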
4. Theoretical Considerations on the Positions Obtained from the Indoor Positioning System
In order to establish a mathematical means of confirming the positions returned by the indoor positioning system, a simple geometrical model was used. In the following, we will consider four fixed beacons arranged as in
Figure 8, halfway along the sides of the considered work volume and all placed at the same height. In this situation, the dimensions of the work volume
Lx,
Ly, and
Lz are known. We will attach to this volume a Cartesian coordinate system Oxyz with the origin in the center of the base surface. The robot will move on the base surface considered flat. The current coordinates of the robot are (
x*,
y*). The Marvelmind system used allows the real-time measurement of the distances between the fixed beacons BF
1, BF
2, BF
3, and BF
4 and the beacon mounted on the robot R, i.e., the distances
d1,
d2,
d3, and
d4 (
Figure 8). Next, the robot coordinates will be determined depending on the geometry considered above and the measured distances.
BF1, BF2, BF3, BF4—fixed beacons;
R—mobile beacon;
Lx, Ly, Lz—dimensions of workspace;
d1, d2, d3, d4—distances to fixed beacons, respectively;
x*, y*—robot current coordinates.
Taking, according to Figure 8, BF1 at (Lx/2, 0, Lz), BF2 at (0, Ly/2, Lz), BF3 at (−Lx/2, 0, Lz), and BF4 at (0, −Ly/2, Lz), in the right triangle ABR, formed by beacon BF1 (point A), its projection B onto the base surface, and the robot R, the Pythagorean relation can be written:

d1² = Lz² + BR²

and in the right triangle BCR, where C is the projection of R onto the side of the base containing B:

BR² = (Lx/2 − x*)² + (y*)²

and finally:

d1² = Lz² + (Lx/2 − x*)² + (y*)²   (1)

Finally, the equations to determine the position of the robot relative to the fixed beacons are as follows:

Beacon | Equation |
BF1 | d1² = Lz² + (Lx/2 − x*)² + (y*)²   (1) |
BF2 | d2² = Lz² + (x*)² + (Ly/2 − y*)²   (2) |
BF3 | d3² = Lz² + (Lx/2 + x*)² + (y*)²   (3) |
BF4 | d4² = Lz² + (x*)² + (Ly/2 + y*)²   (4) |
Note: in the considered situation, since the robot moves on a flat surface, two fixed beacons are sufficient to determine the current coordinates of the robot. |
Next, the two coordinates will be determined based on the information received from beacons BF1 and BF3.
Once the distances d1 and d3 are determined, the current coordinates of the robot, x* and y*, can be computed. To do this, the difference between relations (3) and (1) is formed:

d3² − d1² = (Lx/2 + x*)² − (Lx/2 − x*)² = 2·Lx·x*

and the x* coordinate is obtained as:

x* = (d3² − d1²) / (2·Lx)

By substituting this expression into relation (1), we obtain:

y* = ±sqrt(d1² − Lz² − (Lx/2 − x*)²)

with the sign of y* given by the half of the base surface on which the robot is located.
In the same way, the coordinates x* and y* can be determined by considering the information from other pairs of fixed beacons. In this situation, the coordinates can be determined with very good precision, each of them representing the arithmetic mean of the coordinates calculated for each individual case.
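The position model above can be expressed compactly in code. The sketch below assumes BF1 at (Lx/2, 0, Lz) and BF3 at (−Lx/2, 0, Lz) in the frame with the origin at the center of the base surface; it illustrates the geometry only and is not the Marvelmind firmware.

```python
import math

def robot_position(d1, d3, Lx, Lz, y_positive=True):
    """Planar robot coordinates from ranges to two opposite fixed beacons.

    x* follows from the difference of the squared range equations;
    y* follows by substituting x* back into the BF1 range equation.
    The sign of y* must be supplied (which half of the base surface).
    """
    x = (d3**2 - d1**2) / (2.0 * Lx)
    y2 = d1**2 - Lz**2 - (Lx / 2.0 - x)**2
    y = math.sqrt(max(y2, 0.0))  # clamp small negatives from range noise
    return (x, y if y_positive else -y)
```

Averaging the results over the other beacon pairs, as described above, further improves precision.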
Following this model, a second variant was considered, one that allows the positions of the mobile beacons to be determined by using their coordinates relative to the fixed beacons. For this second model, the working area was divided into squares, each with a side of 750 mm.
Figure 9 depicts a schematic representation of the work area, where points A, with (x, y) coordinates (0, 140 cm), and B, with coordinates (0, 280 cm), were marked. Alongside this division, a position calculation model for the beacons was established based on their coordinates. The model will be used to verify the data provided by the indoor positioning system. To perform this verification, the positions of the fixed beacons BF1 and BF2, as well as the initial points in the plane where the mobile beacons BM1, BM2, and BM3 will be placed, must be determined.
Once these positions are established, the distances d1, d2, and d3 (Figure 10) between the fixed beacon BF1 and the mobile beacons BM1, BM2, and BM3, respectively, can be calculated, as well as the distances D1, D2, and D3 (Figure 11) between the fixed beacon BF2 and the mobile beacons BM1, BM2, and BM3, respectively.
Denoting the mobile beacon coordinates by (x1, y1), (x2, y2), and (x3, y3) and the fixed beacon coordinates by BF1 (xF1, yF1) and BF2 (xF2, yF2), the right triangles formed by the coordinate differences relative to BF1 allow determining:

di² = (xi − xF1)² + (yi − yF1)²,   i = 1, 2, 3

Additionally, the right triangles formed relative to BF2 reveal:

Di² = (xi − xF2)² + (yi − yF2)²,   i = 1, 2, 3
Finally, this model allowed us to compare the results obtained from the system to the ones calculated.
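Such a comparison can be sketched in a few lines; the function below computes the planar distance from a fixed beacon to each mobile beacon, so that the distances reported by the positioning system can be checked against the geometry (beacon coordinates here are illustrative).

```python
import math

def expected_ranges(fixed, mobiles):
    """Planar distances from one fixed beacon to each mobile beacon.

    fixed: (x, y) of the fixed beacon; mobiles: list of (x, y) points.
    Each distance is the hypotenuse of the right triangle formed by
    the coordinate differences, as in the verification model.
    """
    xf, yf = fixed
    return [math.hypot(x - xf, y - yf) for x, y in mobiles]
```

Comparing these expected values with the system-reported ranges (e.g., within the manufacturer's ±2 cm figure) validates the indoor positioning data.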
5. Testing of Indoor Positioning System
In order to test the indoor positioning system from both a precision and a functional standpoint, a test bench was set up, consisting of an electrically driven translation axis on which the mobile beacon is mounted. Since the functional characteristics of the axis are known in advance (length, speed, and positioning accuracy), it is possible to compare the positions reached by the mobile sled of the axis with the coordinates given by the indoor GPS system.
The translation axis is driven by a DC motor equipped with a reducing gear and an incremental transducer. The rotational movement of the shaft at the output of the gear is transmitted to a mobile sled by means of a toothed belt.
The motor drives a toothed belt through a toothed pulley (D = 16 mm, 18 teeth, and GT2), a belt on which a translation sled is fixed, guided in turn by means of ball bushings and a guide shaft with a diameter of 8 mm. The axis is presented in
Figure 12 and
Figure 13.
To obtain fixed reference points in the analyzed workspace, a matrix composed of squares was constructed, each with a side of approximately 750 mm. A schematic of the surface is presented in
Figure 14.
Figure 15 shows the positioning of the axis over one of the points of the matrix, allowing reasonably precise positioning of the axis relative to the working area.
The functionality of the incremental encoder was tested by programming the microcontroller to process signals received from the position transducer. These signals were interpreted using a PID positioning algorithm to achieve accurate positioning of the mobile sled. Motor control is accomplished through a dedicated driver, which requires two digital control signals. One of these signals is a PWM signal used to regulate motor speed, where the speed is directly proportional to the duty cycle of the control signal.
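The positioning loop described above can be sketched as a PID controller whose output is the signed PWM duty cycle; the gains and the simple plant model used in the usage example are illustrative, not the values used on the test bench.

```python
class PID:
    """Minimal positional PID producing a signed duty cycle in [-100, 100].

    update() takes the target and measured positions (e.g., mm from the
    encoder) and returns the duty cycle to send to the motor driver.
    """
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, target, measured):
        err = target - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(-100.0, min(100.0, out))  # saturate to PWM range
```

For example, with a simple velocity-proportional plant, `PID(kp=2.0, ki=0.0, kd=0.0, dt=0.01)` drives the simulated sled to a 100 mm target without overshoot.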
The closed-loop axis positioning test utilized a PID algorithm to record specific axis positions, which were verified using the beacon system. As a result, the axis achieved a positioning accuracy of ±1 mm. Although this level of error may typically be considered significant, it was deemed acceptable since the axis is intended to test a positioning system with an accuracy of ±20 mm. For this test, two fixed beacons were installed at a height of 1 m above the test surface. Their placement is illustrated in
Figure 16.
Figure 17 shows the position of the mobile beacon, fixed on the mobile sled of the translation axis. It will move according to the commands received from the microcontroller.
In the Marvelmind application, the workspace was created, in which the system origin (coordinate points 0,0) was defined, as well as the positions of the fixed beacons, marked in
Figure 18 with numbers 4 and 8. In the same figure, the mobile beacon is marked with number 6.
Following these stages, the functionality of both the translation axis and the Marvelmind indoor positioning system was successfully demonstrated, enabling controlled positioning and accurate position determination. Subsequently, a test program was developed in Python to simultaneously evaluate the two systems. The program aimed to validate the coordinates provided by the indoor GPS system using the translation axis. For this purpose, an algorithm was implemented to monitor the state of the end-of-stroke sensor and control the operation of the electric motor until the sensor is triggered. At this point, the encoder counter is reset, defining the zero position of the translation axis. Starting from this reference point, the axis is moved across its entire working stroke based on the encoder data. Simultaneously, signals from the mobile beacon are collected, and the commands issued to the axis are compared with the positions reported by the indoor positioning system. Following the tests conducted using an application created in Python, the commands transmitted to the axis were synchronized with the information received from the indoor positioning system.
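The logic of this Python test program (homing against the end-of-stroke sensor, zeroing the encoder, then comparing commanded and reported positions) can be sketched as follows; the callback names are hypothetical, not the actual bench API.

```python
def home_axis(limit_switch_triggered, step_motor, reset_encoder):
    """Drive the axis toward the end-of-stroke sensor, then zero the
    encoder counter to define the axis origin (hypothetical callbacks)."""
    while not limit_switch_triggered():
        step_motor(-1)  # one step toward the limit switch
    reset_encoder()

def compare_positions(commanded, reported, tol=20.0):
    """Pair each commanded axis position (mm) with the position reported
    by the indoor GPS and flag readings outside the tolerance band."""
    errors = [r - c for c, r in zip(commanded, reported)]
    out_of_tol = [(c, r) for (c, r), e in zip(zip(commanded, reported), errors)
                  if abs(e) > tol]
    return errors, out_of_tol
```

The default 20 mm tolerance mirrors the manufacturer-specified accuracy bound used in the tests.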
Results are presented in the graph in
Figure 19. This graph shows the evolution of the Y coordinate of the mobile beacon over time. The dwell intervals where the axis stops at the positions 0, 100, 200, 300, 400, 500, and 600 mm can be observed, as well as the transitions of the mobile sled between these positions, marked with colored areas in the graph.
Figure 20 shows the comparative evolution of the axis position (orange line) and the position recorded by the mobile beacon (blue line). The error obtained (the difference between the two measurements) is graphically presented in
Figure 21, where it can be seen that the ±20 mm bound specified by the system manufacturer is not exceeded.
6. Discussion
After testing the indoor positioning system, a small team of mobile robots was put together to better understand how the system might behave in certain scenarios.
Figure 22a shows the structure of a tracked robot. For this type of robot, a commercially available chassis model was used. The chosen chassis is equipped with two DC motors with gearboxes, one for each track. In order to navigate the environment, the robots have been equipped with four ultrasonic sensors, as depicted in
Figure 22b. The tracked robots are programmed using a very simple algorithm that allows them to switch between random movement and heading straight along the longest clear distance, as indicated by the ultrasonic sensors. More broadly, 3D printing and autonomous robots transform industries by enabling rapid, customizable manufacturing and precise automation [
31]. All inter-robot communication is performed over Wi-Fi, using the module included in either the microcontrollers or the SBCs in their construction. This is also true for the drone included in the system’s construction.
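The patrol behavior described above can be sketched as a small decision function; the bias parameter and the direction names are illustrative, not taken from the robots' firmware.

```python
import random

def choose_heading(front, right, back, left, straight_bias=0.7, rng=random):
    """Pick the next heading from four ultrasonic range readings (m).

    With probability `straight_bias`, head toward the longest clear
    distance; otherwise, pick a random direction (random-patrol mode).
    """
    readings = {"front": front, "right": right, "back": back, "left": left}
    if rng.random() < straight_bias:
        return max(readings, key=readings.get)  # longest clear path
    return rng.choice(list(readings))           # random wander
```

Passing a seeded `random.Random` instance as `rng` makes the behavior reproducible for testing.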
In order to diversify the team, two four-legged robots were included. A commercially available variant is a robotic dog equipped with visual artificial intelligence, featuring 12 degrees of freedom. It is built using twelve servomotors, aluminum alloy brackets, and a video camera. The robot can perform a variety of human-like actions and can move in any direction while controlling its attitude in six degrees of freedom: position along the X, Y, and Z axes and orientation in pitch, roll, and yaw. Equipped with an IMU and angle sensors for the servomotors, DOGZILLA provides real-time feedback on its position and joint angles. Using inverse kinematics algorithms, the robot can execute various types of movements. A Raspberry Pi serves as the main controller, complemented by additional components such as a Lidar and a voice module. By programming in Python, the robot can perform diverse functions, including AI-based visual recognition, navigation using Lidar maps, and voice control. An image of the robot is shown in
Figure 23.
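As an illustration of the inverse kinematics involved, the sketch below solves a generic planar two-link leg; this is a textbook formulation, not DOGZILLA's internal implementation, and the link lengths are placeholders.

```python
import math

def leg_ik(x, y, l1, l2):
    """Inverse kinematics for a planar two-link leg.

    Given a foot target (x, y) in the hip frame and link lengths l1, l2,
    return the (hip, knee) joint angles in radians (one of the two
    elbow solutions), via the law of cosines.
    """
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))  # clamp for reachability/rounding
    knee = math.acos(c2)
    hip = math.atan2(y, x) - math.atan2(l2 * math.sin(knee),
                                        l1 + l2 * math.cos(knee))
    return hip, knee
```

A forward-kinematics check (summing the link vectors at the computed angles) confirms that the foot lands on the requested target.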
For enhanced efficiency and remote monitoring, the surveillance system has been upgraded by introducing a drone equipped with the same type of beacon for indoor testing. This will assist with positioning the drone in the environment. In this stage, only 2D positioning of the device is needed.
Figure 24 shows a view of all the robots to be used.
Following this stage, after the system is installed and activated, it operates through a series of coordinated steps to ensure efficient monitoring and response. First, the Marvelmind measurement systems continuously track and read the position of each team member in real time, ensuring precise localization at any moment. Next, the system identifies the location of an event or anomaly using data collected by the sensors integrated into the mobile robots. These sensors are designed to detect unusual activities, environmental changes, or critical conditions in the monitored area.
If an anomaly is detected, the system immediately dispatches a drone to the exact coordinates of the mobile robot that initially reported the event. The drone, equipped with advanced sensors and real-time monitoring capabilities, positions itself close to the critical location to gather supplementary information. This allows the system to refine its understanding of the situation, ensuring more accurate assessments.
The drone transmits detailed data back to the command systems, including visual, positional, or environmental feedback. With this additional input, the command system analyzes the event and determines the most appropriate course of action based on the nature and severity of the anomaly. This step ensures that responses are efficient, targeted, and tailored to the specific circumstances, ultimately improving the overall reliability and effectiveness of the surveillance and intervention system.
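The dispatch step described above can be sketched in a few lines; the function below returns the drone's target (the coordinates of the reporting robot) and a straight-line flight-time estimate, with the names and units being illustrative.

```python
import math

def dispatch_on_anomaly(robot_positions, anomaly_robot_id, drone_pos, speed):
    """On an anomaly report, compute the drone's target and flight time.

    robot_positions: dict robot_id -> (x, y) from the indoor GPS;
    anomaly_robot_id: the robot that raised the alarm;
    drone_pos: current drone (x, y); speed: cruise speed (m/s).
    Returns (target, estimated_time_s) for the intervention sequence.
    """
    target = robot_positions[anomaly_robot_id]
    dist = math.hypot(target[0] - drone_pos[0], target[1] - drone_pos[1])
    return target, dist / speed
```

The estimated time can be used by the command system to decide whether the drone or another unit should respond first.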
7. Conclusions
The paper describes the steps taken to install, deploy, and test an indoor positioning system that will be used to synchronize the movements of mobile autonomous robots in order to implement a new surveillance algorithm. The surveillance of strategic targets, such as critical infrastructures, military areas, borders, transport networks, or sensitive industrial spaces, poses significant technological and security challenges. These challenges are generated by the increasing complexity of threats, from cyber attacks and physical sabotage to industrial security incidents or terrorism.
The involvement of humans in the surveillance of these strategic targets may remain important, but it has a number of limitations and challenges. Long-term surveillance can lead to fatigue and decreased attention, which can compromise human efficiency, especially in the case of monitoring complex systems or physical patrols. Human personnel can be a direct target for attackers, and their physical security is a concern in itself. Security personnel require ongoing training, salaries, and the provision of a safe working environment, which can involve significant costs, especially in areas where constant surveillance is required. In hard-to-reach places or in extreme environmental conditions, people may be physically limited, and the use of robots and drones becomes more efficient and safer.
Following the work presented in this paper, a synchronization algorithm coordinates the movements of all robots to avoid overlapping tasks and maximize target coverage. The robots communicate with each other by radio to update each other’s positions and states in real time. Each robot follows an optimized path, avoiding collisions and minimizing reaction time. Using proximity sensors, each robot detects activities or anomalies in its area of action and, in case of danger, transmits the data to the drone. The synchronization algorithm manages these data in real time and transmits commands to the drone.
Further testing of the system will be performed in a 3D environment, allowing all the autonomous robots (terrestrial and flying) to be used concurrently without limiting usage to a single plane. Following the analysis of the research results and the experimental determinations presented in this paper, several further development directions have been identified, including the implementation of the proposed system for various beneficiaries, the development of an external monitoring system for strategic objectives, the integration of artificial intelligence to develop new control and command algorithms for the system, conducting an economic study for the developed system, as well as the implementation of other types of robots and drones capable of operating in external conditions. Additionally, it is necessary to include in the control algorithm conditions related to integrating human partners into the system.