Article

Remote-Access Marine Robotics Infrastructure and Experiments at LABUST †

Laboratory for Underwater Systems and Technologies (LABUST), Faculty of Electrical Engineering and Computing (FER), University of Zagreb (UNIZG), Unska ulica 3, 10000 Zagreb, Croatia
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in OCEANS 2022 Chennai, Chennai, India, 21–24 February 2022, including extensive new material such as the underwater cameras' intrinsic and extrinsic calibration methods and new results both in the lab and in real scenarios.
J. Mar. Sci. Eng. 2024, 12(2), 317; https://doi.org/10.3390/jmse12020317
Submission received: 22 December 2023 / Revised: 23 January 2024 / Accepted: 31 January 2024 / Published: 12 February 2024
(This article belongs to the Special Issue New Trends in Marine Robotics: Virtual Experiments and Remote Access)

Abstract

Marine robotics is a complex field with potentially demanding logistics and high deployment costs. This is a barrier to many research groups. On the other hand, some research groups have substantial equipment and infrastructure that are typically underutilized. The push for sharing infrastructures was recently accentuated by the COVID-19 pandemic. In particular, remote-access experiments became the norm during the lockdown periods. LABUST completed new infrastructure during the COVID-19 pandemic and prepared it for state-of-the-art remote access from anywhere in the world. This is important both for research and educational purposes. This article describes the infrastructure, equipment, and methods used to implement the remote access, including the calibration procedures. It also reports on real remote-access trials for different applications such as autonomous ships, diver–robot interaction, and Remotely Operated Vehicle (ROV) piloting. These examples include both pool and at-sea remote trials, proving the capabilities of the infrastructure.

1. Introduction

Marine robotics is an exciting, yet complex field to develop. Due to the harsh environment, experimentation involves a higher level of logistics complexity when compared to other robotics fields. This includes not only the dependency on weather conditions, but also the transportation cost, robot deployment at sea, the need for a support vessel, and so on. These factors create a barrier for many high-potential research groups that may not have the resources and the ability to go out and test their robots at sea. This barrier is especially high for new players in the marine robotics domain and for student teams. At the same time, several world-class institutions do have the equipment and research infrastructure needed, such as underwater sensor networks, Autonomous Surface Vessels (ASVs), Autonomous Underwater Vehicles (AUVs), research vessels, and so on. These are often underused and can, therefore, be shared with partners that have difficulty accessing this type of infrastructure, allowing them to fully realize their potential. Sharing can also attract new research teams that may not have all the skills necessary to develop, deploy, and operate marine robots (e.g., mechanical construction). Moreover, by sharing infrastructures, the community grows, new partnerships are established or reinforced, and ideas are cross-fertilized, all leading to the advancement of the state-of-the-art.
Several projects and institutions around the world have, over the years, shared infrastructure or provided access/funding in different ways to support other research or student teams. This support has come, for instance, in the form of equipment loans, competitive access through competitions or open calls, and grants for joint experiments through open calls and cascade funding. Specific examples of infrastructure sharing or funding for third parties are given in Section 2. The remainder of the article is organized as follows: Section 2 presents the related research. Section 3 presents the infrastructure and methods used, while Section 4 presents the experimental results of different trials and applications. Section 5 discusses the results, including the issues and limitations found. Finally, Section 6 concludes the article.

2. Background

In Europe, a series of EU projects starting with Eurofleets, continuing with Eurofleets2 and, most recently, Eurofleets+ [1], has been a long-standing enabler of young scientists by providing opportunities on board research vessels and, to a lesser extent, by sharing AUVs and ROVs. At multi-domain robotics competitions such as ERL Emergency [2] or euRathlon [3], student teams with little or no experience in the maritime field were loaned commercial AUVs free of charge (following a call for applications). These free loans were instrumental in attracting new teams to the field and in helping teams that would otherwise be unable to develop their own vehicles. These teams were given access to state-of-the-art hardware and could spend more time focusing on software development instead of solving mechanical or electronic issues. More information on these teams and their achievements is available in [2]. More recently, an evolution of ERL Emergency, the Robotics for Asset Maintenance and Inspection (RAMI) [4] challenge, organized as part of the METRICS project [5], has extended competitions to the virtual domain. In this case, real data collected during previous competitions at sea are shared with teams participating in a virtual competition, which takes place in preparation for the physical competition. The purpose of sharing data sets collected in real-world conditions is not only to help teams from the field that may not have easy access to the sea for their preparation, but also to attract new teams from the artificial intelligence (AI) and machine learning fields. Sharing this type of data can be just as valuable as sharing robots and infrastructure. Results of both the physical and the virtual challenge can be found in [4].
Similarly, in the U.S., the RobotX competition [6], funded by the Office of Naval Research (ONR), loans large Autonomous Surface Vehicles (ASVs) to student teams at no cost. This type of large vehicle is difficult for students not only to develop, but also to test, due to the cost and logistics. Thus, providing the vehicles at no cost is not enough to prepare teams, as they may be unable to deploy the ASVs until the competition itself. Considering this difficulty and wishing to help teams accomplish more tasks, since 2019 (before the COVID-19 pandemic), RobotX has run a virtual competition [7] as a way of preparing teams for the physical competition.
Other robotics challenges adopt a mixed approach, providing both hardware and funding to teams. This was the case, for instance, for the DARPA Robotics Challenge (DRC) [8]. In the DRC, teams were divided into different tracks with different levels of funding (some were not funded). According to their performance in a virtual challenge, teams could obtain more funding, and the best teams also received a standard robot to compete in the challenge.
Providing funding and standard platforms through different phases, starting with a simulation phase, was also the approach taken by the European Robotics Challenges project [9,10]. As in the DRC, a part of the teams (a third) received a robot to compete in the second and third phases, as well as increasing levels of funding. While the second phase included benchmark tests, the third phase focused on real experiments at end-user sites.
As part of the EU's Robotics for Inspection and Maintenance (RIMA) project [11], teams also receive funding to carry out real experiments in operational environments. The difference is that, in this case, funding is provided specifically for cross-border experiments involving European small- and medium-sized enterprises (SMEs) through open calls for proposals. The funding supports the development and testing of robotic systems. Moreover, RIMA aims to help SMEs reach the market and, thus, provides technical and economic viability validation as well as business and mentoring support services.
Another EU project that provided access to infrastructure and funding was the recently completed Marine Robotics Research Infrastructure Network (EUMarineRobots) [12,13] project. Its main objective was to open up the most important national and regional research infrastructures (RIs) for marine robotics to all researchers in order to optimize their use. To achieve this goal, a series of Transnational Access (TNA) actions was carried out. Sixty-one proposals from 68 applicants from all over the world were granted access. Full details regarding the TNA statistics are available in [14]. Further information on all activities related to the project can be found on the website [13].
In the case of EUMarineRobots, third parties were granted both temporary access to infrastructure and travel funds to execute the TNA actions. The TNA was also open to SMEs, but the focus of EUMarineRobots differs from that of RIMA: it offers opportunities to SMEs, universities, and research centers that may have difficulty accessing costly infrastructure such as research vessels.
Within the EUMarineRobots project, the Laboratory for Underwater Systems and Technologies (LABUST) [15] of the Faculty of Electrical Engineering and Computing, University of Zagreb, offered its infrastructure to partners around the globe, leveraging previous experience from the EU project Road-, Air-, and Water-based Future Internet Experimentation (RAWFIE) [16]. In that project, LABUST ASVs [17] were made available for third-party experiments as part of the RAWFIE federated test beds.
As shown, different models exist for infrastructure sharing and support to teams, and these models have co-existed for over a decade. However, the full potential of sharing infrastructures was not exploited until the COVID-19 pandemic introduced lockdowns and harsh restrictions. Indeed, with the pandemic, it became nearly impossible to have other research actors use infrastructure in situ, and many projects/institutions adapted to the new reality by offering remote access to their infrastructure.
In that context, remote access became more popular. However, the roots of remote trials in marine robotics were laid over two decades ago. For instance, in [18], Internet-based remote teleoperation of an ROV was achieved in Arctic and Antarctic environments (with a remote control center in Italy). This work is especially impressive given the bandwidths available in 2003 (and the usage of satellite communications). Although with limitations and limited testing time, this pioneering work proved the possibility of remote teleoperation of ROVs. Moreover, it allowed the public and other scientific teams to test the ROV, a step in the direction of infrastructure sharing.
Over the years, other works have similarly piloted ROVs from a remote location. For instance, in the DexROV project [19], an on-shore remote control center was used to diminish the burden and cost of offshore operation. Similar to [18], a satellite connection was used for the communication between offshore and onshore. Additionally, the system provided visual and haptic feedback to the user controlling the ROV. Indeed, haptic feedback has been used in other remote (or teleoperation) operations, such as the ones described in [20,21], together with Virtual Reality (VR) for better human–robot interaction. Our own work with haptic and VR feedback in the context of diver–robot interaction can be found in [22].
An interesting approach to remote access was presented in [23], where remote access was exemplified in a hardware-in-the-loop simulator. However, the overall aim of that work is to use a virtual control cabin to control a work-class ROV remotely.
More recently, in joint work with the Huazhong University of Science and Technology, our lab performed cloud-based control of multiple Unmanned Surface Vehicles [24]. In this case, the work focuses on surface vehicles (instead of ROVs, as in most remote-access examples), and it includes scenarios with multiple vehicles, which increases the complexity.
Another very recent example is the one presented in the context of the Blue-RoSES project [25]. This work follows previous pioneering work from the same above-mentioned research groups, but has the particularity of enabling not only professionals, but also non-experts, to remotely pilot an ROV. Like most examples, it focuses on ROVs.
Finally, a recent work [26] presented a remote control room that gives access to a wide range of assets (subsea infrastructure, surface or underwater vehicles) at remote experimentation sites. This is a great example of a remote control center that can be used to control different assets instead of focusing only on ROVs (as most of the state-of-the-art does). The authors further elaborated on how to leverage virtual experimentation and digital twins to complement the remote access to the physical infrastructure.
It was in this context that LABUST completed the construction of a new laboratory (including a new pool). Given the ongoing pandemic (at the time of construction), the infrastructure was made ready to encompass remote trials and to allow any partner in the world to easily collaborate and test algorithms and devices without the need for physical presence. The infrastructure was initially shared remotely within the context of EUMarineRobots, as many of the TNA actions had to be transformed into remote trials. However, the remote-access infrastructure was also used for educational purposes in the context of the Marine Robots for better Sea Knowledge awareness (MASK) Erasmus+ project [27] and in the context of an unmanned ship project. Three examples (one per project) are included in Section 4.

3. Materials and Methods

To increase its capabilities, LABUST invested in a new laboratory infrastructure at its home institution, the Faculty of Electrical Engineering and Computing, University of Zagreb. The construction of this infrastructure partially coincided with the COVID-19 pandemic restrictions that made joint experiments, equipment sharing, and field trials nearly impossible. Thus, special attention was paid to preparing the infrastructure for the new needs of remote access, so as to enable anyone in the world, including students, scientists, researchers, and even the general public, to exploit opportunities for scientific progress. In the following, the physical infrastructure, the positioning system, the ceiling and underwater cameras for situational awareness, and the network infrastructure are described.

3.1. Physical Infrastructure

The whole infrastructure has an area of 200 m², including 80 m² allotted for a workshop and offices and 120 m² for the pool area. The pool has an area of 32 m² and includes a crane for vehicle deployment, as shown in Figure 1. The pool size of 7.8 m × 4.1 m enables the testing of small surface platforms such as the H2OmniX [28], as well as micro-Unmanned Aerial Vehicles (UAVs), while the 3 m depth allows testing with similarly sized AUVs or ROVs. The pool bottom and sides are currently gray, but may be covered with natural underwater photos for visual applications.
For monitoring underwater and surface activities, the pool is equipped with four underwater cameras and three ceiling cameras covering the complete pool. The cameras are supplied by SwimPro [29], which provides infrastructure for detailed swim analysis in sports. In addition, the Pozyx Ultra-Wideband (UWB) system is used for real-time localization.

3.2. Robot Localization Using Pozyx

In the indoor environment, the Global Navigation Satellite System (GNSS) is not available. The Pozyx localization system provides an alternative positioning system for indoor robots, in this case Autonomous Surface Vessels (ASVs) and micro-UAVs. The system consists of UWB modules that serve as anchors or tags, with the anchors placed in fixed positions and the tags attached to the robots. Any two Pozyx modules can estimate the range between them using the two-way travel time (TWTT) technique. The localization is based on the tags measuring distances to known positions (anchors) and estimating their relative position. Due to the nature of the range-measuring procedure, only one range measurement can be made at a time. In order to localize a tag, it needs to measure the range to several anchors in a round-robin fashion. This introduces a synchronization problem if more than one tag is to be localized, which is a sensible requirement. The Pozyx Creator Controller (PCC) solves this problem by introducing itself as a centralized tag (master) that queries other tags to initiate ranging with anchors. Location estimation is executed either on the tag itself or in the PCC. The manufacturer indicates that localizing in the PCC yields better precision and a faster localization rate. The latter approach aggregates the estimated ranges in the PCC, and the estimated positions are distributed via the MQTT network protocol. The location estimate is also available on the tag's serial interface, but MQTT is a better option for our requirements.
We chose the PCC as a centralized localization provider that redistributes the estimated positions via MQTT. The benefits of this approach include flexibility in tag placement (tags can be battery-powered and attached to robots or other devices), no required electrical integration, and the fact that the robot can access the locations of all tags, facilitating multi-agent collaboration. While the benefits are plentiful, a noticeable delay in the localization reaction time was observed, which may affect high-speed applications.
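To illustrate how an external component can consume the PCC's position stream, the following minimal Python sketch subscribes to the MQTT broker and prints the incoming tag positions. The broker address, topic name, and JSON field names are assumptions based on a typical PCC configuration, not the exact values used in our setup.

```python
import json
import paho.mqtt.client as mqtt  # paho-mqtt 1.x style callbacks

BROKER_HOST = "192.168.1.10"  # placeholder poolside broker address
TOPIC = "tags"                # topic name assumed from typical PCC output

def on_connect(client, userdata, flags, rc):
    # Subscribe once the connection to the PCC broker is established.
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # The PCC publishes position estimates as JSON; the field names below
    # (tagId, data.coordinates) are assumptions and may differ per version.
    for sample in json.loads(msg.payload):
        tag_id = sample.get("tagId")
        pos = sample.get("data", {}).get("coordinates", {})
        print(f"tag {tag_id}: x={pos.get('x')} y={pos.get('y')} z={pos.get('z')}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_HOST, 1883)
client.loop_forever()
```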
The LABUST pool facility has six Pozyx anchors placed at fixed locations on the walls. For the localization of the robots, a Pozyx tag is placed externally on the vehicle. The calibration of the anchor positions can be performed automatically with the PCC and/or manually. The initial calibration was carried out with the PCC. Subsequently, the anchor positions were corrected with a laser measuring device using the spatial properties of the room. The manual correction process can become tedious, but the calibration is performed only once if the anchor placement does not change. To preserve the robot's heading estimation capabilities using a magnetometer, it is necessary to align the Pozyx frame with Magnetic or True North.
In addition, the Pozyx system can also be used to mimic GNSS localization indoors. In this case, points surveyed by GNSS outdoors under a clear sky are used to calibrate the Pozyx system. Assuming an Earth-centered, Earth-fixed (ECEF) coordinate system at the center of the fixed points, it is possible to generate a transformation of the Pozyx measurements directly into geographic coordinates, which can be converted to other formats as needed. The calibration details are covered in Section 4.1.
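As an illustration of this final conversion step, the sketch below maps a North-aligned local Pozyx position to geographic coordinates through a local East-North-Up frame anchored at the surveyed origin. The pymap3d library is used, and the origin coordinates shown are placeholders, not the actual surveyed values.

```python
import pymap3d as pm

# Geodetic position of the origin anchor from the GNSS survey
# (placeholder values, not the actual surveyed coordinates).
LAT0, LON0, ALT0 = 45.8008, 15.9714, 120.0

def pozyx_to_geodetic(x_m, y_m, z_m):
    """Convert a local Pozyx position (metres) to latitude/longitude/altitude.

    Assumes the Pozyx frame has been rotated so that x points East and
    y points North, per the True-North alignment step described above.
    """
    east, north, up = x_m, y_m, z_m
    return pm.enu2geodetic(east, north, up, LAT0, LON0, ALT0)

lat, lon, alt = pozyx_to_geodetic(3.2, 1.5, 0.0)
print(lat, lon, alt)
```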

3.3. Visual Robot Localization for Surface Vehicles

Reliable information on the robot's position is essential to test navigation algorithms in the pool area. Thus, using the three ceiling cameras, a visual robot localization system was developed for the pool, as shown in Figure 2. The cameras are SwimPro IP Bullet cameras (SwimPro, Newcastle, Australia) with Full-HD resolution. Mounted 3 m above the pool, they cover the entire pool (7.8 m × 4.1 m) with a 10% overlap. They also meet the IP67 rating, which is necessary due to the closeness to the water surface and occasional water splashes. Intrinsic camera calibration is extremely important in this particular case, as a significant fisheye effect was present in the camera streams; without removing it, visual robot localization is impossible. For detection, we used fiducial markers, which can easily be placed on top of the robot to ensure good camera visibility. AprilTag, in particular, has proven to be an excellent tool for robot localization tasks. Therefore, the ROS package apriltag_ros (Note 1) was used to detect robots on the pool's surface [30,31]. This algorithm provides simple, robust, and accurate detection of AprilTags.
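For illustration, a minimal rospy listener for the apriltag_ros detections might look as follows; the topic name follows the package default and would be remapped per camera in an actual launch file.

```python
import rospy
from apriltag_ros.msg import AprilTagDetectionArray

def on_detections(msg):
    # Each detection carries the tag id(s) and the tag pose estimated
    # in the camera's optical frame.
    for det in msg.detections:
        p = det.pose.pose.pose.position
        rospy.loginfo("tag %s at (%.2f, %.2f, %.2f) in camera frame",
                      det.id, p.x, p.y, p.z)

rospy.init_node("pool_tag_listener")
# "tag_detections" is the apriltag_ros default topic; remap per camera.
rospy.Subscriber("tag_detections", AprilTagDetectionArray, on_detections)
rospy.spin()
```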
Overall, the entire procedure of ceiling-camera localization in the pool can be separated into three main steps:
  • Intrinsic camera calibration;
  • Extrinsic camera calibration;
  • Localization and tracking algorithm.
To solve the localization problem, we first had to remove significant radial distortion from the camera streams. The OpenCV framework (Note 2) provides widely used tools for intrinsic and extrinsic camera calibration, which are well-documented and simple to use. The result of the intrinsic calibration includes the camera matrix and distortion coefficients:
$$\mathit{cam\_mtx} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}, \tag{1}$$

$$\mathit{dist} = \begin{bmatrix} k_1 & k_2 & p_1 & p_2 & k_3 \end{bmatrix}, \tag{2}$$
where $f_x$ and $f_y$ represent the focal lengths and $c_x$ and $c_y$ the optical centers of the camera. The distortion coefficients can be separated into two groups: $k$ for the radial distortion and $p$ for the tangential distortion. The intrinsic calibration procedure using OpenCV is performed with many varied chessboard images taken at different distances and angles. The minimal requirement for the number of detectable points on the chessboard is 12, and to obtain a good fit on a regular paper size, the width-to-height ratio should be close to 1.4142:1. Finally, better accuracy can be obtained with a higher number of squares, but at the same time, the squares should be big enough for easy detection. In our scenario, the chessboard was printed on A3 paper, and it had 15 × 10 squares. The biggest challenge in the calibration procedure was the small overlap between the camera frames; therefore, during the calibration, we had to make sure that not much of the image was lost. For this, we found it vital to have many diverse images with different angles and positions of the chessboard. The significant fisheye effect and the calibration procedure with the chosen chessboard are shown in Figure 3.
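For reference, a minimal OpenCV intrinsic calibration sketch along these lines is shown below. The image folder, square size, and inner-corner grid (one less than the number of squares per side) are illustrative assumptions.

```python
import glob
import cv2
import numpy as np

# Inner-corner grid of a 15 x 10 squares chessboard (one less per side);
# the square size is a placeholder and should match the printed board.
PATTERN = (14, 9)
SQUARE_M = 0.025

# Reference 3D corner positions on the flat chessboard (z = 0).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_M

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):  # placeholder image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# cam_mtx and dist correspond to Equations (1) and (2); assumes at least
# one image with a detected chessboard.
rms, cam_mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```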
In the second step, the goal of the extrinsic camera calibration was to find the position and orientation of each of the three cameras in relation to one corner of the pool. This could easily be solved using the OpenCV default algorithm for calibration if there were a satisfactory overlap between camera streams: the camera-to-camera transformation could be calculated using multiple images of a chessboard visible from both cameras, and the final transformation to the corner of the pool could be obtained from a single AprilTag. Since our overlap was very small, we instead implemented a new method, which includes four main parts:
  • Plane estimation for the z-axis position in the camera frame;
  • Line estimation for the x and y position in the camera frame;
  • AprilTag localization algorithm;
  • Manually defining the translations of the cameras to the corner of the pool.
The camera frame was positioned so that the z-axis points forward, the y-axis down, and the x-axis right. The pool coordinate frame was chosen so that the z-axis points up, the x-axis runs along the length of the pool, and the y-axis along its width. The first part of the algorithm estimates the orientation of the z-axis of the pool frame in the camera frame. This is defined by the normal vector of the plane aligned with the pool's surface. To estimate this plane, we used the RANSAC algorithm [32] with multiple positions of the AprilTag as the input. The points were positioned approximately at the pool's surface. It was very important to cover a wide area with a significant number of points to obtain a satisfactory estimation of the plane. We decided to slide the AprilTag along the edge of the pool and used every position output by the AprilTag detection algorithm. This resulted in more than 300 points, as shown in Figure 4. From the image, it is easy to see that the camera rotation can be estimated using these points. As described previously, we used RANSAC to obtain the z-axis of the pool frame in the camera frame.
To estimate the orientation of the x-axis and y-axis, we estimated one side of the pool as a line. Sliding one AprilTag parallel to the side of the pool resulted in a single line in the camera coordinate frame, which represents the position and orientation of the y-axis. Then, using the cross-product of the two vectors, we obtained the remaining x-axis. This procedure was conducted for all three cameras separately. At this point, the orientation of every camera in the predefined pool frame was known. Still, the translations between the cameras had to be calculated using AprilTags at predefined locations. We chose one corner of the pool as the center of the coordinate system and used a single AprilTag to obtain the transformation of the first camera to that coordinate system. For each of the other two cameras, we placed an AprilTag in the camera's view and measured its translation to the pool's coordinate frame; using the AprilTag localization algorithm, we then obtained the translation of that camera to the pool's coordinate frame. Future work will include implementing this localization system on the underwater cameras.
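A simplified sketch of this plane-and-line frame estimation is given below, under the assumption that the AprilTag detections have already been collected into point arrays; the input file names are placeholders.

```python
import numpy as np

def ransac_plane_normal(points, iters=500, tol=0.01):
    """Fit a plane to 3D points (N x 3) and return its unit normal."""
    best_n, best_inliers = None, 0
    rng = np.random.default_rng(0)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue  # degenerate (collinear) sample
        n /= np.linalg.norm(n)
        inliers = np.sum(np.abs((points - p0) @ n) < tol)
        if inliers > best_inliers:
            best_n, best_inliers = n, inliers
    return best_n

# Placeholder inputs: tag detections slid along the pool surface, and the
# direction of the line fitted to detections slid along one pool side.
surface_pts = np.load("surface_tag_points.npy")
edge_dir = np.load("edge_direction.npy")

z_pool = ransac_plane_normal(surface_pts)      # pool z-axis in camera frame
y_pool = edge_dir / np.linalg.norm(edge_dir)   # pool y-axis in camera frame
x_pool = np.cross(y_pool, z_pool)              # completes the right-handed frame
R_cam_to_pool = np.stack([x_pool, y_pool, z_pool])  # rows: pool axes
```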

3.4. Top-View Visualization

In addition to localization, the ceiling cameras are also used to visualize the pool area. A single view of the entire pool is created by stitching together the three camera streams. To perform the stitching, the perspective had to be changed, because the two corner cameras view the pool at an angle, as shown in Figure 2. This pool view provides a clear representation of the robot's position in the pool and is, therefore, suitable for online streaming of the pool area.
To obtain a suitable single-frame view of the pool, we used the AprilTag ROS package, which can easily give an accurate position and orientation of any AprilTag visible in the image. By placing AprilTags in the corners of each camera frame, with the middle tags visible in the frames of two neighboring cameras, as shown in Figure 5, we were able to change the perspective of each camera frame and concatenate the images using the AprilTag positions. At this stage, using an AprilTag detection algorithm directly on this single stitched image may seem like a simple and good approach to the localization problem. However, the stream concatenation process can introduce a significant delay, which would be an issue for the localization algorithm. Another significant issue is that even a slight misalignment between the frames can result in failed detection of an AprilTag when it transitions from one frame to the other. Furthermore, this scenario can happen very often, because objects below the plane of concatenation appear twice and those above it are partly or fully lost.
Therefore, localization could only be performed on the plane of concatenation, as is visually described in Figure 6.
Due to the issues described previously, the stitched stream of the pool is used only for visualization purposes (shown in Figure 7), and it runs on a separate thread from the localization algorithm so as not to introduce delays.
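For illustration, the following sketch shows the per-camera perspective correction with OpenCV, assuming the pixel positions of the four corner AprilTags are known for each camera; the output scale is a placeholder.

```python
import cv2
import numpy as np

def top_view(frame, tag_corners_px, out_size=(780, 410)):
    """Warp one camera frame to a rectified top view of its pool section.

    tag_corners_px: pixel centres of the four AprilTags bounding this
    camera's section, ordered TL, TR, BR, BL. The output size is a
    placeholder scale (~100 px/m for a section of a 7.8 m x 4.1 m pool).
    """
    w, h = out_size
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(np.float32(tag_corners_px), dst)
    return cv2.warpPerspective(frame, H, out_size)

# After warping, neighboring views share the seam tags and can be
# concatenated side by side into the single pool view, e.g.:
# pool_view = np.hstack([top_view(f, c) for f, c in zip(frames, corner_sets)])
```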

3.5. Robot Localization for Underwater Vehicles

Similar to the ceiling camera system, an underwater camera localization system is currently under development using the four cameras (one in each corner). A setup similar to the ceiling camera calibration can be used, with a few differences. The first is that the intrinsic calibration procedure changes, as the cameras are in the water and there is refraction. There are typically two solutions: performing the calibration procedure in situ (underwater) or running the calibration procedure in dry conditions and compensating for the refraction in the water afterwards. Our choice was to use the Pinax model for underwater camera calibration [33]. This work provides a simple solution with in-air calibration and post-rectification. The solution is applicable to cameras sealed in a housing with flat glass, with a small distance between the camera and the flat glass (<1 cm) and the optical axis of the camera perpendicular to the glass surface. After running the calibration procedure, we obtained a reasonable result (shown in Section 4). For the extrinsic camera calibration, however, we used an OpenCV-based algorithm (as for the ceiling cameras), which required using a chessboard in the water. Therefore, with minimal additional effort, the intrinsic calibration can also be performed in the water, which can be considered a more accurate approach because it does not depend on the water refraction index. For the extrinsic camera calibration, we used the ROS package camera_calibration (Note 3), which has a Graphical User Interface (GUI) for the calibration procedure of a stereo camera. This tool can provide the transformation matrix between cameras, as well as perform the intrinsic calibration of both cameras. In our case, this procedure needs to be performed for all four cameras in the pool setup. An additional requirement for this procedure was that it be easily repeatable, because the cameras are modular and can be moved around the pool. The underwater camera calibration and localization procedures are currently under testing and will be completed in future work.

3.6. Network Infrastructure and Remote Access

For mission control and vehicle communication, the Robot Operating System (ROS) [34] suite was used for interoperability with external partners. ROS, a de facto standard in current robotic systems, uses a publisher–subscriber communication model for exchanging messages between different robots in a distributed system. The distributed system includes devices connected to the local network either by Ethernet cable or WiFi. As detailed below, a Virtual Private Network (VPN) is configured so that external devices can connect to the internal network via the Internet. To provide real-time feedback and improve the situational awareness of external users, the cameras described in Section 3.1 use a static IP and publish the current video stream to ROS topics using the ROS gscam tool (Note 4). Depending on the use case, a vehicle Command and Control Center (CCC) can be instantiated; this was used in the experiment described in Section 4.3.
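As an example of how a remote user can consume the published streams, the following minimal rospy node subscribes to one camera topic and displays it locally. The topic name is an assumption and depends on the gscam launch configuration.

```python
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def on_frame(msg):
    # gscam publishes sensor_msgs/Image; convert to OpenCV and display.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    cv2.imshow("pool camera", frame)
    cv2.waitKey(1)

rospy.init_node("remote_camera_viewer")
# The topic name is a placeholder; it depends on the gscam launch config.
rospy.Subscriber("/pool/camera1/image_raw", Image, on_frame)
rospy.spin()
```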
The CCC used in the experiments described in Section 4.3 (the first remote trials using the infrastructure) was an HTTP server (implemented in Python). It served as the endpoint for remote control of the mission execution. One of its good features is that the server API can be easily extended and configured with mission-specific tasks. For data exchange, the HTTP server uses the JSON message format. HTTP POST request handlers on the server implement command executors, while monitoring and supervision parameters can be obtained remotely using HTTP GET requests. While the JSON format is human-readable and programming-language-agnostic, the fairly large data overhead in each HTTP message is a drawback. Moreover, data streaming is not supported, so the user cannot remotely stream ROS topic data via the CCC. As future work, a gRPC (Note 5) server that offers streaming capabilities and ensures the order and delivery of messages is being investigated.
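A minimal sketch of such an HTTP CCC, using only the Python standard library, is shown below. The port, JSON fields, and command names are illustrative, not the actual mission-specific API.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder telemetry store; a real CCC would update this from the vehicle.
TELEMETRY = {"position": [0.0, 0.0], "heading": 0.0, "speed": 0.0}

class CCCHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Monitoring/supervision: return the current telemetry as JSON.
        body = json.dumps(TELEMETRY).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # Command executor: parse a JSON command from the request body.
        length = int(self.headers.get("Content-Length", 0))
        cmd = json.loads(self.rfile.read(length))
        # A real executor would dispatch cmd["task"] to the vehicle here.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"ack": cmd.get("task")}).encode())

HTTPServer(("0.0.0.0", 8080), CCCHandler).serve_forever()
```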
While it would be possible to expose access to the HTTP server directly, a more secure approach, using a VPN, is employed. This allows safer use of the HTTP approach, while also being more direct. Two VPNs were used: OpenVPN [35] (in bridging mode) and Wireguard [36]. Both VPN servers are installed on the poolside router, while the remote clients depend on the application. For fairs, demonstrations, and trials where OSI Layer 2 remote access is required, a dedicated Teltonika 4G router with an OpenVPN client (TAP) is used to bridge the local and remote networks. Bridge mode makes little sense if only OSI Layer 3 traffic is required, so the simpler Wireguard VPN was added. For the MASK project mentioned in Section 4.4, the router has a dedicated subnet for Wireguard clients, and students installed a client on their phones together with the ROV driving application.
A secondary task of the aforementioned Teltonika 4G router is enabling remote infrastructure access during trials. While the OpenVPN bridge connection to the poolside network enables bidirectional access, the Wireguard server configured on the remote router enables direct (not relayed) online access to the underlying network from any location outside the poolside network. An alternative approach is to connect to the poolside network via any VPN implementation and exploit the Teltonika–poolside bridge connection, at the cost of additional latency and reduced throughput.

4. Results

In the following, after presenting the Pozyx and camera calibration results, four different examples of experiments using three different setups are shown. These are representative of the potential of the LABUST infrastructure, but are not exhaustive. These setups and the infrastructure have been used in other trials/demos over the past couple of years, including science fairs and open-day events, and allow easy and seamless access from anywhere, not only for specialists, but also for the general public.

4.1. Pozyx Localization Results

Following the work described in Section 3.2, the Pozyx localization experiments are presented here.
To mimic GNSS indoors, it is necessary to generate a transformation of the Pozyx measurements to the ECEF coordinate frame. Six fixed points $x_i$ were surveyed using a GNSS module, and the ground truth distances $d_{ij}$ between them were measured with a tape measure. The Pozyx anchor closest to the exit (ID 0x615a) was chosen as the origin of the coordinate frame. The distances $d_{0j}$ from the six outdoor points to that anchor were also measured. Total Least Squares (TLS) optimization was used to correct the measured GNSS positions and to estimate the unknown anchor GNSS position $x_0$. The optimization problem is outlined in Equation (3). The measured covariances of the GNSS-surveyed points were 0.7 m for each point, which yielded a 0.2 m precision for the position of the anchor used to transform the Pozyx measurements to the ECEF frame. For the orientation alignment of the coordinate systems, the Pozyx coordinate system was aligned to True North by estimating the azimuth of the external building wall parallel to the wall on which the set of anchors was mounted. The azimuth of the external wall was estimated using GNSS. The resulting calibrated spatial configuration of the anchors is shown in Figure 8a.
$$\begin{aligned} \min_{x_i} \quad & \sum_{i,j} \| x_i - x_j \|_2^2 + \lambda \sigma_{ij}^2, \quad i,j = 0,\dots,6 \\ \text{s.t.} \quad & \| x_i - x_j \|_2 = d_{ij} + \sigma_{ij}, \\ & | \sigma_{ij} | < \delta \end{aligned} \tag{3}$$
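For illustration, the following sketch solves a soft-constraint least-squares restatement of Equation (3) with SciPy, penalizing the deviations between the estimated inter-point distances and the tape-measured ones. It is not the exact TLS formulation used here, and the inputs are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(flat_x, dists):
    # sigma_ij = ||x_i - x_j|| - d_ij; least_squares minimises sum(sigma^2).
    x = flat_x.reshape(-1, 3)
    return np.array([np.linalg.norm(x[i] - x[j]) - d
                     for (i, j), d in dists.items()])

def refine_positions(x_init, dists):
    """Correct surveyed positions so inter-point distances match the tape."""
    sol = least_squares(residuals, x_init.ravel(), args=(dists,))
    return sol.x.reshape(-1, 3)

# Synthetic placeholder data: three noisy points that should be 1 m apart
# on a line; dists holds the tape-measured ground-truth distances.
x0 = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [1.9, 0.0, 0.0]])
dists = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 2.0}
print(refine_positions(x0, dists))
```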
To tune the Pozyx settings, a series of localization experiments was conducted using different numbers of Pozyx tags and different UWB settings (a non-exhaustive list). The positioning settings during these experiments are shown in the bottom half of Table 1. During the experiments, three parameters were varied: channel, Tx gain, and preamble length. All available channel numbers were investigated. The Tx gain was chosen as either 25 dB or 30 dB, with one configuration tested at 20 dB, and the preamble was set to 1024, with one configuration tested at 2048. The Pulse Repetition Frequency and data bitrate were fixed to 64 MHz and 6.81 Mbps, respectively, as recommended by the manufacturer. The tested configurations utilized at most six tags, as shown in the legend of Figure 9. Four tags were placed at the edges of the pool, tag x604d was placed on a static ASV in the pool, and tag x6024 was the polling tag connected to the PCC software v2.1.0. A small portion of the experiment configurations contained all tags, with most containing one tag at the edge of the pool, the static ASV tag, and the PCC tag. Based on the positioning error displayed in Figure 9, configuration 8 was chosen as the configuration with the smallest positioning uncertainty. The chosen settings are displayed in Table 1.
After the anchor calibration and the setting-tuning experiments, a station-keeping experiment was conducted using two ASVs. Figure 8b displays the unfiltered Pozyx positions of the Vis and Proteus ASVs. The PCC settings that achieved these results are displayed in Table 1.

4.2. Camera Calibration and Visual Localization System Results

The results of the camera calibration are shown in Figure 10. Due to the residual fisheye effect, which can be seen in the image, there is some room for improvement. This issue is visible in the results of the camera localization, which depends heavily on good intrinsic camera calibration. The camera was calibrated using about 100 images, which should be sufficient for a good calibration. However, the issue was the placement of the cameras above the pool, which made them difficult to access and thus hard to obtain good views of the chessboard. Therefore, in the future, we need to find a way to get closer to the cameras to obtain better chessboard images.
The effects of imperfect calibration are shown in Figure 11. For two of the cameras, the calibration parameters were good, and therefore, the localization algorithm fit almost perfectly. However, for the camera positioned furthest to the left, it can be seen that the results were not perfect in some areas. The calibration procedure for this camera is shown in Figure 10, where it can be seen that the calibration did not provide optimal results; for the other two cameras, the calibration was much better, and therefore, the localization algorithm also performed much better. Nevertheless, the error in the worst part of the image was about 5 cm, and only in a specific area of the image. Future work will focus on improving the calibration results, which will also lead to better localization.
Figure 12 shows the results of the Pinax model for the calibration of the underwater cameras. The results look promising, but in future work, it will be necessary to compare them with an in situ calibration of the cameras to better understand the performance. For our purposes, the cameras were accessible enough to obtain a good in situ calibration, and this will likely be used for the underwater camera localization algorithms in the future.

4.3. Diver–Robot Interaction Experiments

One of the first large experiments performed partly remotely at LABUST was the diver–robot communication experiment using a wearable sensing glove. As part of the project Advancing diver–robot interaction capabilities (ADRIATIC) [22], a smart diving glove containing dielectric elastomer sensors was developed at the University of Auckland, New Zealand, in order to capture divers' hand gesture movements [37]. The data from the glove sensors are processed on the glove micro-controller, together with the on-board IMU data, to determine the gesture performed by the diver. Once recognized, the gesture is translated into a command and sent acoustically to the robot. This enables divers to use customary diving gestures for human–robot interaction under water. Each gesture can be uniquely mapped to a robot command based on the specific diving mission goal. The robot executing the commands is a diver navigator upgraded with a propulsion skid and acoustic communication modems. Using a suite of precise navigation sensors, the robot can aid the diver in a variety of situations, from guidance toward way-points and assisting with specialized tools to transmitting messages to a surface command center. This interaction concept was planned to be tested in sea and pool conditions in Croatia, but travel restrictions made this impossible to coordinate. Using the remotely accessible infrastructure setup described in Section 3, it was possible to organize an experiment using robots in the pool and to include participants from distant locations.
In late March and the beginning of April 2021, a series of joint experiments was organized with divers in a public pool located in Auckland, New Zealand, and the AUV in the LABUST pool in Zagreb, Croatia, as shown in Figure 13. The diver performed a series of gestures using the gesture-recognizing glove, which sent the gesture commands over acoustics to a poolside PC with an acoustic receiver deployed in the pool. Each command was sent over HTTP to a server running in Zagreb, where a PC decoded the commands back into acoustic messages and transmitted them in the pool. The gestures were mapped to a predetermined set of basic movement commands, which the listening AUV then executed. For situational awareness, the ceiling cameras, an underwater camera, and the camera inside the vehicle were streamed back to New Zealand. Each sequence was repeated several times, with the transmission and reaction times being recorded. The statistics and transmission success rates have been published in a comprehensive journal paper [38].

4.4. Remote Piloting of ROVs

The pool infrastructure was used remotely by high school students in the MASK project [27].
The initial experiments focused on basic ROV piloting and navigation skills and used the three ceiling cameras for full situational awareness. Note that the Blueye Pro ROV [39] includes a front camera, but as the pool has featureless gray walls, it is hard to navigate remotely without feedback from the ceiling infrastructure (especially given that this was the first time the students were piloting an ROV). To make things more challenging and to simulate a real-life mission, after the initial experiments, the students were provided with feedback from only one or two cameras (as shown in Figure 14), and they had to navigate back and forth between the area unseen by the cameras and the area where they could see the ROV. The delay in the camera feedback was around 2 s, but this did not affect the experiments overall and, at the same time, taught a real-life lesson, since image lag also occurs with work-class ROVs.
In the second remote session, the complexity was increased, and the students had no access to the ceiling cameras' view. Furthermore, besides lawnmower patterns and circles around an object, more complex tasks were introduced, such as following a path around a set of anchored buoys. This complexity also led to a couple of entanglement situations, which gave an opportunity to explain the reality of the entanglement phenomenon and provided real-life experience to the students. This real-life experience through a remote connection proved useful, as the students tried the ROVs at sea later in the year and were already familiar with the main operational issues.
Regarding the network connection from the schools to the pool, the Wireguard setup turned out to be easy for students running both Android and iOS. Since it is supported by RouterOS, running on our poolside router, it was also easy to track the connections. No performance comparison between the two VPNs was performed, but online sources indicate higher throughput for the Wireguard implementation. Manual control of the ROV by human operators using the Blueye ROV app was possible even for specific tasks such as going around obstacles and entangling and untangling the ROV tether. This shows that the delay over the remote connection and VPN is acceptable. The students in the MASK project used a school eduroam WiFi connection or a personal connection from home. The only issue was that some school routers blocked VPN access, and enabling it would have required changing the complete school network policy. This could be considered a drawback compared to direct HTTP access over a static public IP. A similar setup was used in the project trials at sea for the students who were unable to travel and connected from home.

4.5. Remote Control Center Experiments

The aim of the remote access experiments was to validate the feasibility of using a 4G connection with a VPN for the transfer of the data necessary for remote ASV operation. The goal was to enable human-in-the-loop control, where the operator can control the vessel without significant input delay. The preparation of the experiment took place in a controlled environment, where the operator was located in an office building of our faculty (FER) on a public network, while the vehicle was located in the LABUST pool behind a VPN. In this way, we decoupled the uncertainties in 4G latency and throughput from the potential problems introduced by using a VPN. Figure 15a shows the office remote control setup during the preparation tests. The control interface consists of a video feed from a pan–tilt–zoom (PTZ) IP camera mounted on top of the ASV and a Neptus control software v2020.01.1 (Note 6) window used for mission control while displaying the basic telemetry data (position, heading, speed) of the ASV. The Neptus control software requires broadcast packets, which are only available using a bridged VPN, so OpenVPN was mandatory. In the preparation tests, only the Pozyx system described in Section 3.2 was used for indoor localization. The experiment tested manual joystick control, where the operator experienced smooth ASV control without significant response delay. Note that the ASV used in the experiments reaches a maximum speed of 1.0 m/s. For the control of vessels that can reach higher speeds, a remote control latency analysis should be performed, and a maximum tolerable latency limit should be set for safe control of the vessel.
The validation of the complete remote control setup was carried out at Torpedo Lucica in Rijeka during a public demonstration of the INNOVAMARE project in October 2022. During the on-site demonstration, the identical remote control setup was used, as shown in Figure 15b, while the vessel was locally connected to the VPN 4G router via WiFi. The validation of the proposed remote control infrastructure was completed by simulating a potential application: the camera inspection of the pier and docked vessels. Throughout the inspection, the visual camera feed provided a 1080p image at 15 Hz with sufficient clarity for the operator, as shown on the left in Figure 15c. The input delay was not large enough to affect controllability when streaming high-resolution camera images. The full-resolution sensor data were stored locally for post-processing. These tests validated the remote access methodology and confirmed the ability to remotely control an ASV both in the pool and in real conditions at sea.
A video of the remote control experiments can be found at the following link: https://youtu.be/Yx6nxLDbv-k?feature=shared (accessed on 20 January 2024).

5. Discussion

The infrastructure described in Section 3 and the results presented in Section 4 are discussed here, both in general terms (comparing our work to the state-of-the-art) and with regard to the challenges and limitations found, in dedicated subsections.

5.1. Comparison to State-of-the-Art

Comparing our infrastructure and our work with others found in the literature reviewed in Section 2, we note that most works on remote control for marine robotics focus on ROVs [18,19,20,21,23,25,40]. There are works that include a fleet of ASVs (our joint work [24]) and a multitude of heterogeneous vehicles [26]. In our work, similar to [26], we showed the possibilities of our infrastructure to support experiments with divers, AUVs, ASVs, and ROVs. This wide scope and flexibility are not commonly found in the literature (besides the excellent example of [26]). On the other hand, some of the above-mentioned works, due to their narrow focus on ROVs (or, in some cases, on one specific ROV model), present control centers with better latency and human–robot interfaces than our infrastructure. Potential improvements regarding network access are discussed in the next subsection.
Another major difference, and the most distinctive part of our work, is that most works focus on describing the infrastructure of their remote control centers and on experiments at sea (with minimal infrastructure). In contrast, our paper describes the physical and network infrastructure that allows remote experiments in a pool (with a similar concept at sea also included in Section 4). In particular, we detail the above- and underwater camera setup and the indoor positioning system that allow good tracking of vehicles in the controlled environment of a pool. While this is not applicable to real trials at sea, which remain invaluable, it is extremely important to use pools for the testing and validation of marine robots before experiments at sea. Well-equipped pools such as ours can provide excellent conditions for repeatability tests and benchmarking of robots. Moreover, this infrastructure proved essential during the COVID-19 pandemic restrictions, which prevented both third parties and us from traveling to the seaside for trials. The feedback from the installed cameras was critical to the successful remote experiments, as described in Section 4.
Finally, one thing our work has in common with some of the works found in the literature [21,40] is the utilization of Virtual Reality (VR) as a way of either complementing or enhancing remote experiments. Although not detailed in this paper, we used VR infrastructure in [22], which can function as a preparatory step for remote experiments in the pool.

5.2. Challenges and Limitations

In terms of challenges and limitations, we encountered some methodological challenges that still need to be fully addressed in future work, and we identified some limitations of the technology during real remote experiments that are being addressed. First of all, regarding Pozyx, as already mentioned, the flexibility of placing the tags (wireless installation, battery-powered) is both an advantage and a disadvantage: one can try different setups until the best is found, but on the other hand, there is a noticeable delay in localization due to the wireless setup.
Another challenge found is related to the positioning of the cameras, as explained in Section 3.4. Due to the physical configuration of the cameras, localization could only be performed on the plane of concatenation, and the stitched stream of the pool was used for visualization only.
Regarding the underwater cameras, intrinsic calibration is challenging because it requires either in-water calibration or dry calibration with correction. The first method is more logistically challenging, while the second is potentially less accurate. This is an area of ongoing improvement for localization (for general visual feedback, the current solution works well enough).
In terms of network access, the choice of a VPN over simpler port forwarding provides not only security features, but also reduces configuration complexity for providers and users [41]. The downside is that, depending on the location and connection options, VPN connections can be blocked (e.g., with carrier-based NAT solutions). Sometimes, this can be avoided by involving the service provider's support, but for regular data SIM cards, the options are often limited. Potential alternatives, also considered as future work for the described network, are VPN services such as ZeroTier (Note 7), which enable operation in situations where a simple VPN fails. With ZeroTier, the overall configuration is also simplified for both providers and users: providers can manage their virtual network over the web, and users only require knowledge of the network ID. This makes it extremely simple to include users, as opposed to a VPN configuration that contains many more details about the network and requires more knowledge to set up. Some other benefits for marine systems are described in [26]. Additionally, as mentioned, one practical issue we found in the case of remote access provided to high schools was that the eduroam network at their schools would block the VPN ports.
Another practical issue was found during the intrinsic calibration of the ceiling cameras, as these are not easy to reach in order to obtain good chessboard images. This led to an imperfect calibration, but the impact was nonetheless minimal, as the maximum error (only one camera was affected) was less than 5 cm.
Similarly, in the diver–robot experiments, a small practical issue occurred that was unrelated to our infrastructure. Given that the experiments took place in New Zealand and Croatia, the large time zone difference (12 h) created some issues, as, on the New Zealand side, the working hours of the public pool considerably restricted the time available for the joint experiments. This required good planning and several short sessions instead of a longer one, and it is something to take into consideration when planning remote trials with faraway countries.
Finally, latency is still something to improve in the case of the ceiling cameras installed above the pool. As mentioned, for remote ROV piloting with the high schools, there was a 2 s delay in the camera feed. On the one hand, this introduced complexity, but on the other, the students experienced a real-life effect, especially relevant for larger teleoperated ROV systems. In contrast, for the remote control of the ASVs, both in the pool and at the sea location, the latency was low enough for smooth control.

6. Conclusions

The COVID-19 pandemic pushed the world towards remote learning and remote experiments. Due to the lockdowns and restrictions, many experiments had to be conducted virtually or remotely, as did many classes. This trend is here to stay when it comes to robotics [42]. The technology that enables remote access existed before the pandemic, but researchers had not fully exploited its possibilities, as in-person real-life experiments are more valuable. However, given the need, several projects/initiatives focused on sharing infrastructure remotely to continue conducting collaborative experiments. LABUST was not immune to this process, and while building new infrastructure during the pandemic, it made the infrastructure ready for the post-pandemic period, where remote access is common and heavily used. As described in the article, different setups can be used depending on the application, and they have been found to be effective and easy to use. Several examples showed the flexibility and wide range of uses of the infrastructure. In addition, the concepts and methods for accessing the pool infrastructure were also used in field trials, with remote control being tested both from our faculty and from students' homes. Future work will focus on improving the underwater cameras' calibration and reducing the delay of the ceiling cameras. The infrastructure will continue to be used in a series of different projects involving remote access and can be made available to researchers around the world upon establishment of collaboration agreements. Interested parties should contact [email protected].

Author Contributions

Conceptualization, F.F.; methodology, F.F. and Đ.N.; software, J.O., L.M., I.L., I.K. and N.K.; validation, J.O., L.M., Đ.N. and I.K.; resources, N.M.; writing—original draft preparation, F.F., J.O., Đ.N. and I.L.; writing—review and editing, F.F. and N.M.; visualization, J.O. and I.L.; supervision, N.M.; project administration, F.F. and N.M.; funding acquisition, N.M. and F.F. All authors have read and agreed to the published version of the manuscript.

Funding

The infrastructure of the Laboratory for Underwater Systems and Technologies (LABUST) was funded by the European Union within the Operational Programme Competitiveness and Cohesion 2014–2020 from the European Regional Development Fund. This work was supported by the European Regional Development Fund through the Interreg Italy-Croatia InnovaMare project (Partnership ID 10248782), the EU Horizon 2020 EU Marine robotics research infrastructure network—EUMarineRobots project under grant agreement No. 731103, the “Razvoj autonomnog besposadnog višenamjenskog broda” project (KK.01.2.1.02.0342) co-financed by the European Union from the European Regional Development Fund within the Operational Program “Competitiveness and Cohesion 2014–2020”, and the MASK—Marine Robots for better Sea Knowledge awareness project, funded from the EU Erasmus+ program, Key Action 2, 2022-1-HR01-KA210-SCH-000082831. The content of the publication is the sole responsibility of the project partner UNIZG-FER. The views and opinions expressed are, however, those of the authors only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the University of Auckland Human Participants Ethics Committee under project number 024190.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

The authors would like to acknowledge all the LABUST staff who supported the preparation of the infrastructure and the experiments.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of the data; in the writing of the manuscript; nor in the decision to publish the results.

Notes

1. http://wiki.ros.org/apriltag_ros (accessed on 20 January 2024)
2. https://docs.opencv.org/4.x/index.html (accessed on 25 January 2024)
3.
4. http://wiki.ros.org/gscam (accessed on 20 January 2024)
5. https://grpc.io/ (accessed on 25 January 2024)
6. https://www.lsts.pt/index.php/software/54/ (accessed on 25 January 2024)
7. https://www.zerotier.com/ (accessed on 25 January 2024)

References

  1. EUROFLEETS+. An Alliance of EUROPEAN Marine Research Infrastructure to Meet the Evolving Needs of the Research and Industrial Communities—EUROFLEETS+ Website. 2021. Available online: https://www.eurofleets.eu/ (accessed on 25 January 2024).
  2. Ferri, G.; Ferreira, F.; Djapic, V. Fostering marine robotics through competitions: From SAUC-E to ERL Emergency 2018. In Proceedings of the OCEANS 2018 MTS/IEEE Charleston, Charleston, SC, USA, 22–25 October 2018; pp. 1–7. [Google Scholar] [CrossRef]
  3. Ferri, G.; Ferreira, F.; Djapic, V.; Petillot, Y.; Franco, M.P.; Winfield, A. The euRathlon 2015 Grand Challenge: The First Outdoor Multi-domain Search and Rescue Robotics Competition—A Marine Perspective. Mar. Technol. Soc. J. 2016, 50, 81–97. [Google Scholar] [CrossRef]
  4. Ferri, G.; Faggiani, A.; Fabbri, T.; Ferreira, F. RAMI: An Inspection & Maintenance competition for Underwater robots. In Proceedings of the OCEANS 2023—Limerick, Limerick, Ireland, 5–8 June 2023; pp. 1–8. [Google Scholar]
  5. Avrin, G.; Barbosa, V.; Delaborde, A. AI evaluation campaigns during robotics competitions: The METRICS paradigm. In Proceedings of the 1st International Workshop on Evaluating Progress in Artificial Intelligence (EPAI 2020) in Conjunction with ECAI 2020, Santiago de Compostela, Spain, 4 September 2020. [Google Scholar]
  6. Robonation. RobotX Website. 2021. Available online: https://robotx.org/ (accessed on 29 January 2024).
  7. Bingham, B.; Agüero, C.; McCarrin, M.; Klamo, J.; Malia, J.; Allen, K.; Lum, T.; Rawson, M.; Waqar, R. Toward Maritime Robotic Simulation in Gazebo. In Proceedings of the OCEANS 2019 MTS/IEEE SEATTLE, Seattle, WA, USA, 27–31 October 2019; pp. 1–10. [Google Scholar] [CrossRef]
  8. DARPA Robotics Challenge Archived Website. 2021. Available online: https://web.archive.org/web/20160428005028/http://www.darparoboticschallenge.org/ (accessed on 25 January 2024).
  9. Siciliano, B.; Caccavale, F.; Zwicker, E.; Achtelik, M.; Mansard, N.; Borst, C.; Achtelik, M.; Jepsen, N.O.; Awad, R.; Bischoff, R. EuRoC—The Challenge Initiative for European Robotics. In Proceedings of the ISR/Robotik 2014; 41st International Symposium on Robotics, Munich, Germany, 2–3 June 2014; pp. 1–7. [Google Scholar]
  10. van der Meer, A.J.; Caccavale, F.; Eredics, G. The EuRoC Project: Motivations and Design of the Challenges. In Bringing Innovative Robotic Technologies from Research Labs to Industrial End-Users; Caccavale, F., Ott, C., Winkler, B., Taylor, Z., Eds.; Springer Tracts in Advanced Robotics; Springer: Cham, Switzerland, 2020; Volume 136, pp. 3–14. [Google Scholar] [CrossRef]
  11. RIMA. Robotics in Inspection and Maintenance (RIMA) Project Website. 2021. Available online: https://rimanetwork.eu/about-rima (accessed on 20 January 2024).
  12. Alcaraz, D.; Antonelli, G.; Caccia, M.; Dooly, G.; Flavin, N.; Kopf, A.; Ludvigsen, M.; Opderbecke, J.; Palmer, M.; Pascoal, A.; et al. The Marine Robotics Research Infrastructure Network (EUMarine Robots): An Overview. In Proceedings of the 2020 IEEE/OES Autonomous Underwater Vehicles Symposium (AUV), St. Johns, NL, Canada, 30 September–2 October 2020; pp. 1–2. [Google Scholar] [CrossRef]
  13. EUMarineRobots. EUMarineRobots Project Website. 2021. Available online: https://www.eumarinerobots.eu/ (accessed on 25 January 2024).
  14. EUMarineRobots. EUMarineRobots Website—TNA Calls. 2021. Available online: https://www.eumarinerobots.eu/prev-TNAcalls (accessed on 25 January 2024).
  15. LABUST. LABUST Website. 2023. Available online: https://labust.fer.hr/ (accessed on 20 January 2024).
  16. Papadopoulou, P.; Kolomvatsos, K.; Hadjiefthymiades, S. Internet of Things Business Models: The RAWFIE Case. In Digital Transformation for a Sustainable Society in the 21st Century; Pappas, I.O., Mikalef, P., Dwivedi, Y.K., Jaccheri, L., Krogstie, J., Mäntymäki, M., Eds.; Springer: Cham, Switzerland, 2019; pp. 278–290. [Google Scholar]
  17. PlaDyFleet. A Fleet of Unmanned Surface Marine Vehicles PlaDyPos—PlaDyFleet Website. 2021. Available online: https://pladyfleet.fer.hr/ (accessed on 20 January 2024).
  18. Bruzzone, G.; Bono, R.; Bruzzone, G.; Caccia, M.; Coletta, P.; Spirandelli, E.; Veruggio, G. Internet-Based Teleoperation of Underwater Vehicles. IFAC Proc. Vol. 2003, 36, 193–198. [Google Scholar] [CrossRef]
  19. Gancet, J.; Weiss, P.; Antonelli, G.; Pfingsthorn, M.F.; Calinon, S.; Turetta, A.; Walen, C.; Urbina, D.; Govindaraj, S.; Letier, P.; et al. Dexterous Undersea Interventions with Far Distance Onshore Supervision: The DexROV Project. IFAC-PapersOnLine 2016, 49, 414–419. [Google Scholar] [CrossRef]
  20. Domingues, C.; Essabbah, M.; Cheaib, N.; Otmane, S.; Dinis, A. Human-Robot-Interfaces based on Mixed Reality for Underwater Robot Teleoperation. IFAC Proc. Vol. 2012, 45, 212–215. [Google Scholar] [CrossRef]
  21. Xia, P.; McSweeney, K.; Wen, F.; Song, Z.; Krieg, M.; Li, S.; Yu, X.; Crippen, K.; Adams, J.; Du, E.J. Virtual Telepresence for the Future of ROV Teleoperations: Opportunities and Challenges. In Proceedings of the SNAME 27th Offshore Symposium, Houston, TX, USA, 22 February 2022; p. D011S001R001. Available online: https://onepetro.org/SNAMETOS/proceedings-pdf/TOS22/1-TOS22/D011S001R001/2726486/sname-tos-2022-015.pdf (accessed on 24 January 2024). [CrossRef]
  22. Nađ, D.; Walker, C.; Kvasić, I.; Antillon, D.O.; Mišković, N.; Anderson, I.; Lončar, I. Towards Advancing Diver-Robot Interaction Capabilities. IFAC-PapersOnLine 2019, 52, 199–204. [Google Scholar] [CrossRef]
  23. Omerdic, E.; Toal, D.; Dooly, G. Remote presence: Powerful tool for promotion, education and research in Marine Robotics. In Proceedings of the OCEANS 2015—Genova, Genova, Italy, 18–21 May 2015; pp. 1–7. [Google Scholar] [CrossRef]
  24. Wang, Z.; Yang, S.; Xiang, X.; Vasilijević, A.; Mišković, N.; Nađ, Ð. Cloud-based remote Control framework for Unmanned Surface Vehicles. IFAC-PapersOnLine 2020, 53, 14564–14569. [Google Scholar] [CrossRef]
  25. Bibuli, M.; Caccia, M.; Presicci, C.; Sebastiao, L.; Cabecinhas, D.; Potes, A.; Quintas, J.; Jacinto, M.; Pascoal, A. The Blue-RoSES Project: A Gate for Remote Exploration of the Seas. IFAC-PapersOnLine 2023, 56, 5723–5728. [Google Scholar] [CrossRef]
  26. Vasilijevic, A.; Bremnes, J.E.; Ludvigsen, M. Remote Operation of Marine Robotic Systems and Next-Generation Multi-Purpose Control Rooms. J. Mar. Sci. Eng. 2023, 11, 1942. [Google Scholar] [CrossRef]
  27. Ferreira, F.; Nađ, D.; Jagušt, T.; Mišković, N.; Maurelli, F.; Blažina, A.; Maglić, L. MASK—Increasing sea knowledge awareness among high school students using marine robots. In Proceedings of the OCEANS 2023—Limerick, Limerick, Ireland, 5–8 June 2023; pp. 1–5. [Google Scholar] [CrossRef]
  28. H2O Robotics. H2O Robotics Website. 2021. Available online: https://h2o-robotics.com/ (accessed on 20 January 2024).
  29. SwimPro. SwimPro Website. 2021. Available online: https://swimpro.com.au/ (accessed on 25 January 2024).
  30. Malyuta, D. Guidance, Navigation, Control and Mission Logic for Quadrotor Full-cycle Autonomy. Master’s Thesis, Jet Propulsion Laboratory, Pasadena, CA, USA, 2017. [Google Scholar]
  31. Wang, J.; Olson, E. AprilTag 2: Efficient and robust fiducial detection. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Republic of Korea, 9–14 October 2016; pp. 4193–4198. [Google Scholar] [CrossRef]
  32. Fischler, M.A.; Bolles, R.C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  33. Łuczyński, T.; Pfingsthorn, M.; Birk, A. The Pinax-model for accurate and efficient refraction correction of Underwater cameras in flat-pane housings. Ocean Eng. 2017, 133, 9–22. [Google Scholar] [CrossRef]
  34. Open Robotics. Robot Operating System Website. 2021. Available online: https://www.ros.org/ (accessed on 20 January 2024).
  35. OpenVPN. OpenVPN Website. 2023. Available online: https://openvpn.net/ (accessed on 25 January 2024).
  36. WireGuard. WireGuard Website. 2015. Available online: https://www.wireguard.com/ (accessed on 25 January 2024).
  37. Antillon, D.W.O.; Walker, C.R.; Rosset, S.; Anderson, I.A. Glove-Based Hand Gesture Recognition for Diver Communication. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 9874–9886. [Google Scholar] [CrossRef]
  38. Ferreira, F.; Kvasić, I.; Nađ, D.; Mandić, L.; Mišković, N.; Walker, C.; Antillon, D.O.; Anderson, I. Diver-Robot Communication Using Wearable Sensing: Remote Pool Experiments. Mar. Technol. Soc. J. 2022, 56, 26–35. [Google Scholar] [CrossRef]
  39. Blueye Robotics. Blueye Pro Website. 2020. Available online: https://www.blueyerobotics.com/products/pro (accessed on 25 January 2024).
  40. Xu, F.; Zhu, Q.; Li, S.; Song, Z.; Du, J. VR-Based Haptic Simulator for Subsea Robot Teleoperations. In Proceedings of the Computing in Civil Engineering 2021, Orlando, FL, USA, 12–14 September 2021; pp. 1024–1032. [Google Scholar] [CrossRef]
  41. Messan, P.N.; Krupiński, S.; Maurelli, F.; Vallicrosa, G.; Ridao, P. Evaluation of computer networking methods for interaction with remote robotic systems. In Proceedings of the 2021 IEEE AFRICON, Arusha, Tanzania, 13–15 September 2021; pp. 1–6. [Google Scholar] [CrossRef]
  42. Birk, A.; Simunovic, D. Robotics Labs and Other Hands-On Teaching During COVID-19: Change Is Here to Stay? IEEE Robot. Autom. Mag. 2021, 28, 92–102. [Google Scholar] [CrossRef]
Figure 1. The new pool of LABUST in Zagreb, Croatia.
Figure 2. The three ceiling cameras above the pool.
Figure 3. Unprocessed camera stream with the chessboard used for intrinsic calibration.
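The intrinsic calibration illustrated in Figure 3 follows OpenCV's standard chessboard procedure. The sketch below is illustrative only: the 9 × 6 inner-corner pattern and the image paths are assumptions, not the exact parameters used in the experiments.

```python
# Minimal sketch of chessboard-based intrinsic calibration with OpenCV.
# Assumptions: a 9x6 inner-corner board and images under calib/*.png.
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners per row and column (assumed)
# 3D corner coordinates in the board frame (z = 0), one unit per square
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        # Refine corner locations to sub-pixel accuracy
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# K is the 3x3 intrinsic matrix; dist holds the distortion coefficients
rms, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print(f"RMS reprojection error: {rms:.3f} px")

# Undistortion, as in Figures 10 and 12: remap a raw frame with the estimates
undistorted = cv2.undistort(cv2.imread("calib/sample.png"), K, dist)
```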
Figure 4. The positions of AprilTag detections in the camera frame are logged for camera orientation estimation using plane estimation. Different colours represent different distances of the points from the camera.
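The plane estimation behind Figure 4 can be sketched as a least-squares fit to the logged tag positions; the log file name below is hypothetical, and a RANSAC wrapper [32] would add robustness to outlier detections.

```python
# Minimal sketch: least-squares plane fit to logged AprilTag positions
# (N x 3 array in the camera frame) to recover the camera orientation
# relative to the pool surface. The CSV file name is hypothetical.
import numpy as np

def fit_plane(points: np.ndarray):
    """Return the unit normal and centroid of the best-fit plane."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value spans the
    # direction of least variance, i.e., the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    # Orient the normal towards the camera (+z in the camera frame)
    return (normal if normal[2] > 0 else -normal), centroid

points = np.loadtxt("tag_positions.csv", delimiter=",")
normal, _ = fit_plane(points)
# Tilt between the optical axis (0, 0, 1) and the pool-plane normal
cos_tilt = np.clip(normal @ np.array([0.0, 0.0, 1.0]), -1.0, 1.0)
print(f"Camera tilt w.r.t. pool plane: {np.degrees(np.arccos(cos_tilt)):.2f} deg")
```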
Figure 5. Pool setup ready for stitching.
Figure 6. A representation of the issue with stitching streams and the reason why a localization algorithm can be used only in a single plane.
Figure 7. Stitched streams forming a single view of the entire pool.
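The stitching shown in Figure 7, and the single-plane limitation explained in Figure 6, can be illustrated with a planar homography: the mapping is estimated from correspondences lying on the pool plane, so the stitched view is metrically valid only in that plane. In the sketch below the frames and point correspondences are hypothetical placeholders; in practice they would come from detected tag corners.

```python
# Minimal sketch: registering two overlapping ceiling-camera views with a
# homography and warping them onto a common canvas. Frames and point
# correspondences are hypothetical; real ones would be detected tag corners.
import cv2
import numpy as np

left = cv2.imread("cam_left.png")
right = cv2.imread("cam_right.png")

# pts_right[i] (right image) corresponds to pts_left[i] (left image)
pts_left = np.float32([[900, 100], [1200, 110], [1210, 600], [905, 590]])
pts_right = np.float32([[50, 95], [350, 105], [355, 595], [52, 588]])

# With many correspondences, RANSAC [32] rejects mismatched pairs while
# estimating the homography; four exact points are the minimum.
H, _ = cv2.findHomography(pts_right, pts_left, cv2.RANSAC, 3.0)

canvas_w = left.shape[1] + right.shape[1]
canvas = cv2.warpPerspective(right, H, (canvas_w, left.shape[0]))
canvas[:, :left.shape[1]] = left  # overlay the reference view
cv2.imwrite("stitched.png", canvas)
```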
Figure 8. Pozyx localization system.
Figure 9. Results of localization experiments with different UWB configurations.
Figure 10. Camera stream without processing and after removing distortion.
Figure 11. Localization experiment.
Figure 12. Underwater camera before/after removing distortion.
Figure 13. Diver performing gestures in a public pool in Auckland, New Zealand (above), and the AUV executing commands in the LABUST pool in Zagreb, Croatia (below).
Figure 14. An example of the view from one of the ceiling cameras. Students had to navigate from outside the field of view of the camera to the area shown in the image.
Figure 15. (a) An operator in the FER office building controlling the ASV located at the LABUST pool over a VPN connection. (b) An operator at FER (Zagreb) manually controlling an ASV in Rijeka over a VPN connection. (c) Screenshot of the operator screen showing the live camera feed and the vehicle position and heading in the Neptus control software v2020.01.1.
Table 1. Chosen Pozyx Creator controller settings.

Settings Location           | Parameter Name             | Value
----------------------------|----------------------------|------------
Settings/system/UWB         | Channel                    | 4
                            | Data bitrate               | 6.81 Mbps
                            | Pulse Repetition Frequency | 64 MHz
                            | Preamble length            | 1024
                            | Tx gain (dB)               | 25
Settings/system/positioning | Off-board                  | on
                            | Algorithm                  | UWB only
                            | Dimension                  | 2.5D
                            | Height                     | 1060 mm
                            | Ranging protocol           | Precision
                            | Filter type                | None
                            | Filter strength            | 0
                            | Sensor data                | Coordinates
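For reference, if the tag settings of Table 1 were applied programmatically rather than through the Creator controller, they would map onto the open-source pypozyx Python library roughly as sketched below. The enum codes in the comments are assumptions and should be verified against the pypozyx documentation.

```python
# Hedged sketch: applying the UWB settings of Table 1 via pypozyx.
# The numeric codes below are assumptions to be checked against the
# library documentation; the experiments used the Creator controller.
from pypozyx import PozyxSerial, UWBSettings, get_first_pozyx_serial_port

port = get_first_pozyx_serial_port()  # first connected Pozyx device
pozyx = PozyxSerial(port)

uwb = UWBSettings(
    channel=4,     # UWB channel 4
    bitrate=2,     # assumed code for a 6.81 Mbps data bitrate
    prf=2,         # assumed code for a 64 MHz pulse repetition frequency
    plen=0x08,     # assumed code for a 1024-symbol preamble
    gain_db=25.0,  # Tx gain (dB)
)
pozyx.setUWBSettings(uwb)
```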
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
