Article

MFBCE: A Multi-Focal Bionic Compound Eye for Distance Measurement

by Qiwei Liu, Xia Wang *, Jiaan Xue, Shuaijun Lv and Ranfeng Wei
Key Laboratory of Photoelectronic Imaging Technology and System, Ministry of Education of China, Beijing Institute of Technology, Beijing 100081, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(9), 2708; https://doi.org/10.3390/s25092708
Submission received: 5 March 2025 / Revised: 16 April 2025 / Accepted: 23 April 2025 / Published: 24 April 2025
(This article belongs to the Section Biosensors)

Abstract
In response to the demand for small-size, high-precision, and real-time target distance measurement on platforms such as autonomous vehicles and drones, this paper investigates the multi-focal bionic compound eye (MFBCE) and its associated distance measurement algorithm. The MFBCE was designed to integrate multiple lenses with different focal lengths and a CMOS array. Based on this system, a multi-eye distance measurement algorithm built on target detection is proposed. The algorithm extends binocular distance measurement to cameras with different focal lengths, overcoming the limitation of traditional binocular algorithms that only work with identical cameras. By utilizing the multi-scale information obtained from multiple lenses with different focal lengths, the ranging accuracy of the MFBCE is improved. The telephoto lenses, with their narrow field of view, are beneficial for capturing detailed target information, while wide-angle lenses, with their larger field of view, are useful for acquiring information about the target’s environment. Experiments using the least squares method for ranging targets at 100 cm yielded a mean absolute error (MAE) of 1.05 cm, approximately half that of the binocular distance measurement algorithm. The proposed MFBCE demonstrates significant potential for applications in near-range obstacle avoidance, robotic grasping, and assisted driving.

1. Introduction

Distance measurement is a critical component of navigation and has a wide range of applications in fields such as criminal investigation, national defense, and transportation. Distance measurement methods can be broadly classified into two categories: active and passive. Active distance measurement methods involve the system actively emitting signals and analyzing the returning echoes to calculate the distance. These methods typically offer long measurement ranges, high accuracy, and strong penetration capabilities but come with high costs and safety concerns [1]. In contrast, passive distance measurement methods rely on image processing and do not require the system to emit signals. These methods are more cost-effective, concealed, and easy to operate, and have therefore garnered increasing attention. Binocular distance measurement is a representative passive distance measurement method [2,3] that simulates the human visual perception mechanism, reconstructing target distances based on triangulation. However, binocular distance measurement has several limitations, such as difficulties in balancing compact size with a wide field of view, the restriction to identical cameras, and significant measurement errors when the baseline is short. To address these issues, bionic compound eyes and multi-eye distance measurement algorithms have emerged as prominent research focuses worldwide in recent years.
Early bionic compound eyes often employed a microlens array [4], which featured small apertures, low resolution, short baselines, and limited imaging distances, making them unsuitable for distance measurement applications. In recent years, multi-camera arrays [5] composed of multiple lenses and CMOS arrays have emerged as a new paradigm for compound eye systems. These systems offer high resolution and long imaging distances, holding significant potential in the fields of public security [6], 3D sensing [7,8], medical imaging [9,10], and bionic navigation [11]. Venkataraman et al. [12] presented an ultra-thin, high-performance monolithic camera array named the Pelican Imaging Camera Array (PiCam), which holds great potential for mobile devices because it supports both still images and video and, thanks to its small size, offers low-light compatibility. Afshari et al. [13,14] designed a multi-camera system named the panoptic camera, which consists of a layered arrangement of thirty classical CMOS image sensors distributed over a hemisphere. Cao et al. [15] designed a spherical bionic artificial compound eye imaging system consisting of a spherical support and 37 sub-eye lenses. Carles et al. [16] designed a multi-aperture imaging system consisting of 5 × 5 commercial low-cost cameras assembled into a planar array. Popovic et al. [17] designed a panoptic camera system consisting of five tiers and 49 cameras distributed on a sphere with a 30 cm diameter. Xue et al. [18] designed a bionic compound eye system based on a fiber faceplate, which integrates nine lenses coupled with a CMOS camera featuring a fiber faceplate, enabling stereovision and moving target detection.
Moreover, compared to binocular distance measurement algorithms, the multi-eye distance measurement algorithms of bionic compound eyes utilize multi-scale information and achieve higher accuracy. Horisaki et al. [19] proposed an effective method for three-dimensional information acquisition based on the thin observation module by bound optics (TOMBO). An image captured by the TOMBO system is composed of multiple images observed from several viewpoints. The distance (100 mm) between the TOMBO system and objects can be estimated using the parallax of the captured images. Lee et al. [20] developed a sparse-representation-based classification algorithm to estimate object depths in the COMPU-EYE imaging system. In their experiments, four characters were located at three different distances (108 mm, 109 mm, and 112 mm) from the compound eye. The proposed system with this depth estimation method can provide a depth map of the object with a 1 mm depth resolution. Yang et al. [21] proposed a positioning algorithm based on an optical fiber panel compound eye. Experiments were conducted to measure targets at distances ranging from 1 m to 1.6 m, with an average ranging error of 3.7%. Liu et al. [22] demonstrated 3D measurement using a curved compound-eye camera. The experimental results show that the working range for 3D measurement can cover the whole FOV of 98°, and the working distance can be as long as 3.2 m, with a measurement error of no more than 5%. Oh et al. [23] proposed a transformer-based neural network for eye-wise depth estimation, which is suitable for compound eye images. The experiments were conducted within a 4.5 m range. The proposed method achieved a mean absolute error of 0.20 m on the GAZEBO dataset and 0.29 m on the Matterport3D dataset. Wang et al. [24] proposed a simulation method to evaluate the imaging and ranging performance of their designed infrared compound eye; the distance measurement error within 1 m is approximately 0.2 m.
Currently, bionic compound eyes demonstrate excellent performance in high-resolution imaging and wide-field detection. However, they struggle to simultaneously meet compact size, light weight, and clear imaging requirements [25,26], limiting their applicability to platforms with spatial and computational constraints, such as small unmanned vehicles and drones. Moreover, deep-learning-based multi-eye ranging methods are often constrained by hardware requirements. While binocular ranging can realize relatively accurate target distance estimation, it is typically limited to identical camera setups. Additionally, its accuracy is restricted by the baseline distance between the two cameras and requires prior calibration. When the baseline is short, the ranging precision is significantly reduced. To address these limitations, it is essential to optimize the structural design of bionic compound eye systems and their distance measurement algorithms. This optimization aims to enable high-precision distance measurement under conditions of compact size, short baseline, and varying focal lengths.
This paper addresses the challenges associated with the system architecture and distance measurement algorithms of existing bionic compound eyes by developing a multi-focal bionic compound eye (MFBCE) for distance measurement.
The main contributions of this paper are as follows:
  • We design the MFBCE. This system integrates an onboard core processing unit to handle image data, thereby reducing reliance on GPUs. This design significantly decreases the weight and size of the bionic compound eye (Section 2).
  • We propose a multi-eye distance measurement algorithm. By utilizing multiple lenses with different focal lengths to capture multi-scale target images and deriving the intrinsic relationships between these images, the algorithm overcomes the limitations of binocular methods, which require identical cameras and prior calibration. This approach improves the ranging accuracy of the system (Section 3).
  • We analyze the distance measurement errors. The MFBCE, combined with the multi-eye distance measurement algorithm, was used to conduct experiments for error analysis (Section 4).
The MFBCE proposed in this paper is characterized by its compact structure, light weight, and high imaging accuracy. When combined with the multi-eye distance measurement algorithm presented, it enables high-precision distance measurement for near-range targets. The system achieves a mean absolute error of 0.54 for targets within the range of 90 cm to 120 cm. This system holds significant potential for applications in areas such as emergency obstacle avoidance in unmanned vehicles, robot grasping, and advanced driver assistance systems [27,28,29].

2. Design of MFBCE

2.1. Structure of MFBCE

The structure of MFBCE includes the compound eye lens array, image sensor board, intelligent visual processing module, and metal casing. The compound eye lens array consists of lenses with focal lengths of 6 mm, 8 mm, and 12 mm. The left and right optical paths use 6 mm focal length lenses, the up and down optical paths use 8 mm focal length lenses, and the central optical path uses a 12 mm focal length lens. The image sensor board is composed of five CMOS image sensors, with each sensor featuring a pixel size of 2.8 µm × 2.8 µm and a resolution of 1920 × 1080. The intelligent visual processing module includes RV1126 and RV1106 intelligent vision chips, as well as an image signal transmission board. The target distance measurement algorithm is deployed on the RV1126 chip. This module realizes the processing of visual signals and the interaction with external information. The metal casing is made of aluminum alloy, which is lightweight and corrosion-resistant. It serves to connect and secure the components while protecting the internal structure. The parameters of the MFBCE are presented in Table 1, with the structure shown in Figure 1.

2.2. Principle of MFBCE

The principle of MFBCE is shown in Figure 2. Light signals containing both target and background information are projected onto the image sensor board through the compound eye lens array. The light signals are then converted into electronic signals through photoelectric conversion on the image sensor board, generating the raw image. The raw image is subsequently processed by the RV1126 and RV1106 intelligent vision chips, which feature image signal processing (ISP) functions. These chips convert the raw image into a format that aligns with human visual perception. Additionally, the RV1126 and RV1106 chips support video-encoding inputs and outputs. The processed image is input into a deep learning model via the video input (VI) channel for object detection. After detection, the image is encoded through the video encoder (VENC) to generate a network video stream based on a real-time streaming protocol (RTSP). Finally, the network video stream is transmitted to the host computer through the image signal transmission board. The real-time detection and distance measurement of the target can be observed using a video player.
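For reference, the following is a minimal host-side sketch of how the RTSP stream described above could be viewed programmatically with OpenCV. The stream address and port are placeholders that depend on how the image signal transmission board is configured; they are not values taken from the system.

```python
# Minimal host-side sketch: pull the MFBCE RTSP stream and display it with OpenCV.
# The URL below is a hypothetical placeholder, not the device's actual address.
import cv2

STREAM_URL = "rtsp://192.168.1.100:554/live/0"  # placeholder address/port

cap = cv2.VideoCapture(STREAM_URL)
if not cap.isOpened():
    raise RuntimeError(f"Cannot open RTSP stream: {STREAM_URL}")

while True:
    ok, frame = cap.read()        # one decoded video frame (BGR)
    if not ok:
        break                     # stream ended or dropped
    cv2.imshow("MFBCE live view", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```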

2.3. Field of View of MFBCE

The lens array of the MFBCE consists of lenses with focal lengths of 6 mm, 8 mm, and 12 mm. Long-focus lenses provide high imaging precision but have a narrow field of view, allowing for the capture of fine target details and texture information. In contrast, short-focus lenses offer a wide field of view but lower imaging precision, enabling large-area target detection when combined with object detection algorithms. The multi-focal bionic compound-eye system integrates the advantages of both—combining the high imaging precision of telephoto lenses with the wide field of view of short-focus lenses—thereby enabling multi-scale target observation. The rich image data obtained through this approach can be applied across various fields.
The field of view (FOV) of the MFBCE includes the diagonal field of view (DFOV), horizontal field of view (HFOV), and vertical field of view (VFOV). The FOV can be calculated from the pixel size, resolution, and focal length. The results are shown in Table 2.
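As a quick check, the sketch below recomputes the FOV values from the pixel size, resolution, and focal length using FOV = 2·arctan(sensor extent / 2f). The HFOV values land on the Table 2 entries; the DFOV and VFOV values can deviate by a fraction of a degree because the exact active-area dimensions differ slightly from pixel pitch × resolution.

```python
# Sketch: compute HFOV/VFOV/DFOV from pixel pitch, resolution, and focal length.
import math

PIXEL_PITCH_MM = 0.0028          # 2.8 um pixel size
RES_H, RES_V = 1920, 1080        # active pixel array

def fov_deg(extent_mm: float, focal_mm: float) -> float:
    """Full field of view (degrees) subtended by a sensor extent at focal length f."""
    return math.degrees(2 * math.atan(extent_mm / (2 * focal_mm)))

for f in (6, 8, 12):
    w = RES_H * PIXEL_PITCH_MM   # sensor width  (~5.376 mm)
    h = RES_V * PIXEL_PITCH_MM   # sensor height (~3.024 mm)
    d = math.hypot(w, h)         # sensor diagonal
    print(f"f={f:>2} mm  DFOV={fov_deg(d, f):5.2f}  "
          f"HFOV={fov_deg(w, f):5.2f}  VFOV={fov_deg(h, f):5.2f}")
```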
We modeled the FOV of MFBCE, as shown in Figure 3. In the figure, the red regions represent the FOV of the 6 mm focal length lenses, the green regions represent the 8 mm focal length lenses, and the blue regions represent the 12 mm focal length lenses. Due to the parallax between different lenses, different FOV distributions were observed at various imaging distances:
  • 36 mm: The FOVs of the four lenses at the top, bottom, left, and right intersect at this distance. Distances less than 36 mm create a blind area.
  • 300 mm: The FOVs of the four peripheral lenses cover the center lens’s FOV at this distance.
  • 300–2100 mm: The FOVs of the five lenses exhibit significant overlap, with noticeable differences caused by the parallax.
  • Greater than 2100 mm: The overlap between the FOVs of the 6 mm focal length lenses reaches 98%, while the overlap between the 8 mm focal length lenses is 95%. The effect of parallax becomes smaller, and the system can be approximated as a coaxial system.
Based on the above analysis, it can be concluded that when the imaging distance of MFBCE is 300–2100 mm, the target observed by the central lens can simultaneously be captured by the surrounding four lenses (Figure 4). Significant differences in the FOV arise due to the parallax. In this range, the five lenses can collectively observe the target in the central region. Subsequently, by analyzing the differences in the target projections on the imaging planes of the different lenses, the high-precision distance measurement of the target can be realized. Furthermore, increasing the distance between lenses can significantly enhance the parallax, thus extending the system’s measurement range. However, a larger distance between lenses also increases the system’s volume and introduces larger visual blind areas. In the practical use of the MFBCE, lenses with different focal lengths can be swapped based on the measurement distance requirements. Telephoto lenses are selected for distant and small targets to improve accuracy, while wide-angle lenses are chosen for close-range and large targets to avoid blind spots and ensure that the target remains within the field of view.

3. MFBCE Distance Measurement Algorithm

Currently, binocular distance measurement algorithms require identical lenses and cameras, as well as pre-calibrated camera parameters, which severely limit the universality of compound eyes. In contrast, depth estimation algorithms based on deep learning face challenges related to low accuracy. To address these issues, this paper proposes a multi-eye distance measurement algorithm based on target detection. By deriving the intrinsic relationship between images captured by lenses with different focal lengths, the proposed method overcomes the limitations of traditional binocular algorithms that require identical cameras and pre-calibration. Furthermore, the algorithm achieves higher measurement accuracy for specific targets through target detection compared to global depth estimation. Additionally, by converting the yolov5.pt model and deploying it on the RV1126 vision chip, the algorithm eliminates the reliance on computers. The process of the algorithm is shown in Figure 5.
The MFBCE distance measurement algorithm consists of three modules: image acquisition module, target detection module, and target ranging module. First, five images containing the target are captured by MFBCE. These images are then input into the intelligent vision chips of the MFBCE for target detection, where the target’s coordinates and category information are extracted. Finally, by incorporating camera parameters such as the pixel size, resolution, lens’s focal length, and distance between lenses, the target’s distance is derived through a set of equations:
  • Image acquisition module: MFBCE captures the target simultaneously using five lenses with different focal lengths. The use of lenses with varying focal lengths allows for the acquisition of multi-scale information, which provides a solid foundation for target detection and ranging.
  • Target detection module: The five acquired images are input into the intelligent vision chip of the MFBCE for target detection. In this study, we employ the YOLOv5 target detection algorithm [30,31]. Since traditional PyTorch models are not compatible with the RV1126 chip, the YOLOv5 model must be converted. First, the PyTorch model is converted into the ONNX (Open Neural Network Exchange) format, and the ONNX model is then converted into the RKNN (Rockchip Neural Network) format, the deep learning model format used by the RV1126 chip (a conversion sketch is given after this list).
  • Target ranging module: Through the image acquisition and target detection modules, the target bounding box and coordinates are obtained. Then, by integrating information such as the CMOS pixel size, camera resolution, lens’s focal length, and distance between lenses, the target’s distance can be calculated through theoretical derivation. This process is elaborated in detail in the following sections.
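The conversion step mentioned in the target detection module above can be sketched with Rockchip's RKNN-Toolkit Python API. This is only an illustrative outline under stated assumptions: the exact configuration options vary between toolkit versions, and the file names and quantization dataset are placeholders rather than the files used in this work.

```python
# Sketch of the ONNX -> RKNN conversion path for the RV1126 (RKNN-Toolkit API).
# Parameter names may differ between toolkit versions; all paths are placeholders.
from rknn.api import RKNN

ONNX_MODEL = "yolov5l.onnx"      # exported beforehand, e.g. with yolov5's export.py
RKNN_MODEL = "yolov5l.rknn"
QUANT_DATASET = "dataset.txt"    # list of calibration images for quantization

rknn = RKNN()
rknn.config(mean_values=[[0, 0, 0]],
            std_values=[[255, 255, 255]],
            target_platform=["rv1126"])          # option format varies by version
rknn.load_onnx(model=ONNX_MODEL)                 # import the ONNX graph
rknn.build(do_quantization=True, dataset=QUANT_DATASET)  # quantize and compile
rknn.export_rknn(RKNN_MODEL)                     # write the deployable .rknn file
rknn.release()
```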

3.1. Monocular Vision Distance Measurement Model

To obtain 3D distance information from a 2D image, it is necessary to model the camera and analyze the corresponding relationship between a point in the physical space and its projection onto the camera image based on the camera imaging principle [32,33]. This involves the transformation between four coordinate systems: the world coordinate system (Ow-XwYwZw), camera coordinate system (Oc-XcYcZc), image coordinate system (O-xy), and pixel coordinate system (o-uv). The relationship between these coordinate systems is illustrated in Figure 6. Let the coordinates of a point P be denoted as (Xw, Yw, Zw), (Xc, Yc, Zc), (x, y), and (u, v) in the respective coordinate systems.
The transformation from the world coordinate system to the camera coordinate system is a rigid transformation, meaning that the camera undergoes no deformation but only rotation and translation, as shown in Equation (1):
$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + T, \qquad \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{1}$$
where R is a 3 × 3 rotation matrix, and T is a 3 × 1 translation vector.
The transformation from the camera coordinate system to the image coordinate system is a perspective projection, which, based on the pinhole imaging principle, projects real-world 3D objects onto a 2D image, as shown in Equation (2):
$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} \tag{2}$$
where f represents the focal length of the camera lens.
The transformation from the image coordinate system to the pixel coordinate system is a plane transformation, where the units of length are converted into pixel units, as shown in Equation (3):
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \dfrac{1}{d_x} & 0 & u_0 \\ 0 & \dfrac{1}{d_y} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{3}$$
where dx and dy represent the physical length of a single pixel in the x and y directions of the camera’s sensor, respectively, which corresponds to the CMOS pixel size. u0 and v0 are the center point coordinates of the pixel coordinate system.
By combining Equations (1)–(3), the relationship between the pixel coordinate system and the world coordinate system can be derived, as shown in Equation (4):
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \dfrac{f}{d_x} & 0 & u_0 & 0 \\ 0 & \dfrac{f}{d_y} & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{4}$$
where the matrix containing f/dx, f/dy, u0, and v0 is the camera’s intrinsic matrix, which is related to the camera’s internal parameters. The matrix containing R and T is the camera’s extrinsic matrix, which reflects the camera’s position in the world coordinate system.
In this paper, the distance to the target relative to the camera needs to be calculated. In this case, the camera coordinate system coincides with the world coordinate system (Xw = Xc = X, Yw = Yc = Y, Zw = Zc = Z). With the CMOS parameter dx = dy = d, Equation (4) can be simplified into the following:
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \dfrac{f}{d} & 0 & u_0 \\ 0 & \dfrac{f}{d} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X/Z \\ Y/Z \\ 1 \end{bmatrix} \tag{5}$$
Equation (6) can be derived as follows from Equation (5):
$$u = \frac{fX}{dZ} + u_0, \qquad v = \frac{fY}{dZ} + v_0 \tag{6}$$
The size of the target in the pixel coordinate system can be correlated with its actual size using Equation (6), as shown in Equation (7):
$$w = u_R - u_L = \left( \frac{fX_R}{dZ} + u_0 \right) - \left( \frac{fX_L}{dZ} + u_0 \right) = \frac{f(X_R - X_L)}{dZ} = \frac{fW}{dZ} \tag{7}$$
where uR and uL represent the left and right horizontal coordinates of the target’s projection in the image bounding box, XR and XL represent the left and right horizontal coordinates of the target in the real world, and w represents the width of the target in the pixel coordinate system, while W denotes the actual width of the target.
From Equation (7), it can be observed that in the monocular vision model, the distance between the target and the camera can be determined based on the actual size of the target, its size in the pixel coordinate system, and the camera’s intrinsic parameters (focal length and pixel size).
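A minimal sketch of this monocular relation follows, assuming the physical target width is known in advance; the numeric values in the usage line are illustrative only and are not measurements from the experiments.

```python
# Sketch of the monocular relation in Equation (7): rearranged, Z = f*W / (d*w).
def monocular_distance(f_mm: float, pixel_mm: float,
                       box_width_px: float, target_width_mm: float) -> float:
    """Distance (mm) from focal length, pixel pitch, box width (px), and real width."""
    return f_mm * target_width_mm / (pixel_mm * box_width_px)

# Illustrative numbers only: a 200 mm wide card imaged 86 px wide by the 12 mm lens
# would be placed at roughly 1 m.
print(monocular_distance(f_mm=12, pixel_mm=0.0028,
                         box_width_px=86, target_width_mm=200))
```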

3.2. Binocular Vision Distance Measurement Model

In practical applications, the camera’s intrinsic parameters (focal length and pixel size) can be obtained from system design specifications or product manuals. The target’s position and size in the pixel coordinate system can be extracted from the bounding box provided by the target detection algorithm. However, the actual size of the target is often unknown. To address this problem, at least two cameras are required to simultaneously observe the target, utilizing the parallax to infer missing scale information.
When cameras i and j, positioned at different locations, capture images of a target with an actual width of W, the following is obtained:
$$W = \frac{d_i w_i Z_i}{f_i} = \frac{d_j w_j Z_j}{f_j} \tag{8}$$
where di and dj represent the pixel sizes of cameras i and j, fi and fj denote the focal length of the lenses in cameras i and j, wi and wj correspond to the target’s projected width in the pixel coordinate systems of cameras i and j, and Zi and Zj represent the distances between the target and cameras i and j, respectively.
Similarly, when cameras i and j, positioned at different locations, capture images of a target with an actual height of H, the following is obtained:
$$H = \frac{d_i h_i Z_i}{f_i} = \frac{d_j h_j Z_j}{f_j} \tag{9}$$
where hi and hj correspond to the target’s projected height in the pixel coordinate systems of cameras i and j.
The different positions of the cameras result in variations in the size and location of the target projection on the pixel coordinate systems, as illustrated in Figure 7. Taking into account the influence of the camera’s intrinsic parameters, the target distance can be determined using the camera parameters, inter-camera distance, and projection position. The following section presents the detailed derivation of the corresponding formulas.
Equation (10) can be derived as follows from Equation (8):
$$Z_j = Z_i \left( \frac{f_j d_i w_i}{f_i d_j w_j} \right) \tag{10}$$
By substituting Equation (10) into Equation (6), the relationship between the target’s size in the pixel coordinate system and its actual size can be established:
$$u_j = \frac{f_j X_j}{d_j Z_j} + u_{j,0} = \frac{f_j X_j}{d_j Z_i \left( \dfrac{f_j d_i w_i}{f_i d_j w_j} \right)} + u_{j,0} = \left( \frac{w_j}{w_i} \right) \frac{f_i X_j}{d_i Z_i} + u_{j,0} \tag{11}$$
$$u_i = \frac{f_i X_i}{d_i Z_i} + u_{i,0} \tag{12}$$
$$(u_j - u_{j,0}) \left( \frac{w_i}{w_j} \right) - (u_i - u_{i,0}) = \frac{f_i (X_j - X_i)}{d_i Z_i} \tag{13}$$
where uj and ui represent the horizontal coordinates of the center of the bounding box observed by two cameras, while uj,0 and ui,0 represent the coordinates of the center of the pixel coordinate systems. The Xi and Xj positions of a static target depend only on the positions of the cameras and are given by the following:
$$X_j - X_i = \Delta X \tag{14}$$
where ΔX is known as the baseline, which is the lateral distance between the two cameras. By combining Equations (13) and (14), the following can be obtained:
$$Z_i = \frac{f_i \, \Delta X}{d_i (u_j - u_{j,0}) \left( \dfrac{w_i}{w_j} \right) - d_i (u_i - u_{i,0})} \tag{15}$$
Similarly, for cameras distributed parallel to the Y-axis, the following is the case:
$$Z_i = \frac{f_i \, \Delta Y}{d_i (v_j - v_{j,0}) \left( \dfrac{h_i}{h_j} \right) - d_i (v_i - v_{i,0})} \tag{16}$$
where ΔY is also known as the baseline, which is the longitudinal distance between the two cameras.
From Equations (8) and (9), it can be observed that when the cameras are at the same depth (Zi = Zj),
$$\frac{w_i}{w_j} = \frac{h_i}{h_j} = \frac{d_j f_i}{d_i f_j} \tag{17}$$
It can be inferred that when different cameras capture the same target at the same depth, the ratio of the target’s projected size is solely dependent on the camera’s parameters. When the camera parameters are identical, the size of the target’s projection will also be the same.
Substituting Equation (17) into Equations (15) and (16), the following is obtained:
$$Z_i = \frac{\Delta X}{(u_j - u_{j,0}) \left( \dfrac{d_j}{f_j} \right) - (u_i - u_{i,0}) \left( \dfrac{d_i}{f_i} \right)} = \frac{\Delta Y}{(v_j - v_{j,0}) \left( \dfrac{d_j}{f_j} \right) - (v_i - v_{i,0}) \left( \dfrac{d_i}{f_i} \right)} \tag{18}$$
From Equation (18), the target’s distance can be determined based on the baseline (ΔX, ΔY), the camera intrinsic parameters (di, dj, fi, fj), the coordinates of the target’s bounding box center (ui, uj, vi, vj), and the center coordinates of the camera’s pixel coordinate system (ui,0, uj,0, vi,0, vj,0).
The baseline, camera intrinsic parameters, and coordinates of the center of the camera pixel coordinate system can be obtained from the system design. The coordinates of the target’s bounding box center can be obtained through the target detection algorithm. This algorithm addresses the scale ambiguity issue in the monocular vision model using binocular disparity and enables binocular ranging with cameras of different parameters (focal length, pixel size, and pixel array) without the need for pre-calibration.
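The following sketch implements Equation (18) directly. The box-center coordinates in the usage example are synthetic values constructed so that the true disparity corresponds to a 1 m target for two identical 8 mm lenses; they are not data from the experiments.

```python
# Sketch of Equation (18): distance from the baseline, each camera's pixel pitch
# and focal length, and the detected bounding box centers (horizontal direction).
def binocular_distance(baseline_mm: float,
                       u_j: float, u_j0: float, d_j: float, f_j: float,
                       u_i: float, u_i0: float, d_i: float, f_i: float) -> float:
    """Z_i (mm) along a horizontal baseline, Equation (18)."""
    denom = (u_j - u_j0) * (d_j / f_j) - (u_i - u_i0) * (d_i / f_i)
    return baseline_mm / denom

# Sanity check with two identical 8 mm lenses, 38 mm baseline, 1 m target:
# the ideal disparity is f*dX/(d*Z) = 8*38/(0.0028*1000) ~= 108.57 px,
# split symmetrically about the principal point u0 = 960.
z = binocular_distance(38,
                       u_j=1014.29, u_j0=960, d_j=0.0028, f_j=8,
                       u_i=905.71,  u_i0=960, d_i=0.0028, f_i=8)
print(z)   # ~= 1000 mm
```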

3.3. Multi-Eye Vision Distance Measurement Model

The preceding discussion considers only the case of two cameras. In practical applications, errors in camera installation and target detection are inevitable. Therefore, we integrate the results from n cameras to obtain a more accurate distance calculation. Equation (19) can be derived as follows from Equation (18):
$$\begin{bmatrix} (u_j - u_{j,0}) \left( \dfrac{d_j}{f_j} \right) - (u_i - u_{i,0}) \left( \dfrac{d_i}{f_i} \right) \\ (v_j - v_{j,0}) \left( \dfrac{d_j}{f_j} \right) - (v_i - v_{i,0}) \left( \dfrac{d_i}{f_i} \right) \end{bmatrix} Z_i = \begin{bmatrix} \Delta X \\ \Delta Y \end{bmatrix} \tag{19}$$
At this time, let the following be the case:
$$\begin{bmatrix} (u_j - u_{j,0}) \left( \dfrac{d_j}{f_j} \right) - (u_i - u_{i,0}) \left( \dfrac{d_i}{f_i} \right) \\ (v_j - v_{j,0}) \left( \dfrac{d_j}{f_j} \right) - (v_i - v_{i,0}) \left( \dfrac{d_i}{f_i} \right) \end{bmatrix} = \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} \tag{20}$$
By integrating the results from n observation points, we obtain the following:
$$\left[ \Delta x_1, \Delta y_1, \Delta x_2, \Delta y_2, \ldots, \Delta x_n, \Delta y_n \right]^T Z_i = \left[ \Delta X_1, \Delta Y_1, \Delta X_2, \Delta Y_2, \ldots, \Delta X_n, \Delta Y_n \right]^T \tag{21}$$
where Δx and Δy represent the weighted pixel differences in the X-axis and Y-axis directions, respectively, between the two observation points. After the target is imaged, Δx and Δy can be calculated based on the camera’s intrinsic parameters, the coordinates of the target bounding box center, and the center coordinates of the camera’s pixel coordinate system. Additionally, ΔX and ΔY denote the baseline, which is the distance between the two cameras.
To calculate a more accurate distance, the results observed by n cameras can be integrated. The distance can be calculated using the following method based on Equation (21):
Arithmetic mean (AM):
$$Z_{\mathrm{AM}} = \frac{1}{n} \sum_{i=1}^{n} Z_i \tag{22}$$
Root mean square (RMS):
$$Z_{\mathrm{RMS}} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} Z_i^2} \tag{23}$$
Least squares (LS):
$$Z_{\mathrm{LS}} = \arg\min_{Z} \mathrm{SSD}(Z), \qquad \mathrm{SSD}(Z) = \sum_{i=1}^{n} \left( Z \, \Delta x_i - \Delta X_i \right)^2 + \sum_{i=1}^{n} \left( Z \, \Delta y_i - \Delta Y_i \right)^2 \tag{24}$$
By varying the value of Z, SSD is calculated for different Z values. The specific distance Z that minimizes the SSD can be determined using the least squares method, and this Z is taken as the final distance calculation result.
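A compact sketch of the three fusion rules in Equations (22)-(24) follows, assuming the pairwise estimates and the stacked (Δx, Δy)/(ΔX, ΔY) terms have already been computed. For the least squares rule, the closed-form minimizer of the SSD is used; this is equivalent to scanning candidate values of Z for the minimum, as described above.

```python
# Sketch of the fusion step: pairwise estimates Z_i combined by the arithmetic
# mean (22) and root mean square (23), and the stacked terms of Eq. (21) combined
# by least squares (24).
import numpy as np

def fuse_am(z_pairs):                      # Equation (22)
    return float(np.mean(z_pairs))

def fuse_rms(z_pairs):                     # Equation (23)
    return float(np.sqrt(np.mean(np.square(z_pairs))))

def fuse_ls(dxy, DXY):                     # Equation (24)
    # SSD(Z) = sum((Z*dxy_k - DXY_k)^2); setting dSSD/dZ = 0 gives the
    # closed-form minimizer below (same Z as a brute-force scan over SSD).
    dxy, DXY = np.asarray(dxy, float), np.asarray(DXY, float)
    return float(np.sum(dxy * DXY) / np.sum(dxy * dxy))
```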
Multi-eye distance measurement is subject to errors, with the primary causes being similar to those in binocular distance measurements. Specifically, ranging errors stem from mechanical inaccuracies introduced during the system’s assembly process and errors in object detection. Mechanical errors are inevitable during assembly and can affect the overall ranging precision. Our proposed algorithm is based on object detection, where higher object detection accuracies lead to improved ranging precision. Intersection over union (IoU) is a key metric for evaluating object detection algorithms, as it measures the ratio of the intersection area to the union area between the predicted bounding box and the ground truth. When the IoU value is low, the deviation between the predicted and actual bounding boxes increases, resulting in the greater displacement of the selected bounding box center coordinates from the ground truth, which in turn increases ranging errors.
Letting the error be denoted as δ, Zpred can be calculated as follows:
$$Z_{\mathrm{pred}} = \frac{\Delta X}{(u_j - u_{j,0} + \delta_1) \left( \dfrac{d_j}{f_j} \right) - (u_i - u_{i,0} + \delta_2) \left( \dfrac{d_i}{f_i} \right)} = \frac{\Delta Y}{(v_j - v_{j,0} + \delta_3) \left( \dfrac{d_j}{f_j} \right) - (v_i - v_{i,0} + \delta_4) \left( \dfrac{d_i}{f_i} \right)} \tag{25}$$
As derived from Equation (25), in practical applications, the ranging accuracy can be improved by increasing the baseline and using telephoto lenses to reduce the impact of the error δ.

4. Experimental Results

To validate the imaging and ranging capabilities of the MFBCE, as well as to assess the robustness of the proposed ranging algorithm, a series of experiments were conducted using the MFBCE system. The experiment captured animal card targets at a distance of 100 cm from the MFBCE, as shown in Figure 8. The targets were set as follows: bird, cat, dog, zebra, horse, cow, and elephant. By using the YOLOv5 target detection algorithm, the bounding boxes of the animal card targets were extracted to obtain their coordinates in the pixel coordinate system. Finally, by incorporating the pixel size, pixel array, lens focal length, and baseline into the previously derived equations, the distances between the animal card targets and the MFBCE can be calculated. All experimental results are reported to two decimal places.
The MFBCE used in the experiment is configured as described in Section 2. The left and right channels are equipped with 6 mm focal length lenses, the top and bottom channels use 8 mm focal length lenses, and the central channel is fitted with a 12 mm focal length lens. The baseline between the central lens and the peripheral lenses was 19 mm (ΔX/ΔY), while the baseline between the left–right and top–bottom lenses was 38 mm (ΔX/ΔY).
To evaluate the performance of the binocular and multi-eye distance measurement algorithms, we used the following metrics:
MAE (mean absolute error): $\mathrm{MAE} = \dfrac{1}{N} \sum_{i=1}^{N} \left| Z_{\mathrm{true}}^{(i)} - Z_{\mathrm{pred}}^{(i)} \right|$
RMSE (root mean squared error): $\mathrm{RMSE} = \sqrt{\dfrac{1}{N} \sum_{i=1}^{N} \left( Z_{\mathrm{true}}^{(i)} - Z_{\mathrm{pred}}^{(i)} \right)^2}$
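For completeness, a small sketch of these two metrics is given below. The example predictions are taken from the ΔX = 38 mm, f = 12 mm column of Table 7; the printed values approximately reproduce the MAE and RMSE reported there (small differences can arise because the tabulated distances are rounded to two decimals).

```python
# Sketch of the two evaluation metrics (values in the same unit as Z, here cm).
import numpy as np

def mae(z_true, z_pred):
    return float(np.mean(np.abs(np.asarray(z_true) - np.asarray(z_pred))))

def rmse(z_true, z_pred):
    return float(np.sqrt(np.mean((np.asarray(z_true) - np.asarray(z_pred)) ** 2)))

# Example: the dX = 38 mm, f = 12 mm column of Table 7 against a 100 cm ground truth.
pred = [103.90, 102.35, 101.14, 99.96, 100.54, 102.35, 102.96]
print(mae([100] * 7, pred), rmse([100] * 7, pred))   # ~1.90 and ~2.29
```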

4.1. Binocular Ranging Experiments Between Lenses with Different Focal Lengths

In this section, we replaced the lenses of the MFBCE to perform binocular ranging experiments. First, we conducted three sets of binocular ranging experiments using lenses with different focal lengths. The baseline distance (ΔX) was set to 38 mm (the baseline distance between the peripheral lenses), and the focal length pairs used in the experiments were 6 mm and 8 mm, 6 mm and 12 mm, and 8 mm and 12 mm. The experimental results are presented in Table 3.
From the results in Table 3, it can be seen that the proposed algorithm successfully enables binocular ranging between cameras with different focal lengths, overcoming the limitations of traditional binocular ranging methods, which typically require identical cameras and prior calibration.

4.2. Binocular Ranging Experiments with Different Algorithms

To evaluate the binocular ranging algorithm, we benchmarked our approach against two state-of-the-art algorithms: the widely recognized RAFT-Stereo [34,35] and the recently proposed IGEV++ [36]. Both comparative methods were implemented using their publicly available pre-trained models, which were trained on the standard stereo dataset SceneFlow. The results are presented in Figure 9 and Figure 10, demonstrating the performance of our method relative to these deep learning methods under the same experimental conditions (f = 8 mm; ΔX = 38 mm).
Subsequently, we incorporated our system’s parameters and, based on the object detection results and disparity maps, computed the ranging accuracy for each method. The experimental results are presented in Table 4.
It is important to note that while RAFT and IGEV++ excel in holistic depth estimation tasks, their architectures inherently prioritize pixel-wise disparity accuracy over object-specific ranging precision, a distinction critical to our application scenario. Our algorithm achieves higher ranging accuracy for specific objects and does not confuse different targets at the same distance. Furthermore, the reliance of RAFT and IGEV++ on stereo cameras with identical parameters limits their cross-hardware generalization, a challenge explicitly addressed by our framework.

4.3. Binocular Ranging Experiments with Different Object Detection Accuracies

In this section, we supplement the experiments with an analysis of the impact of object detection accuracy on ranging errors. Figure 11 illustrates cases of bounding box displacement observed during the experiments. Given the complexity of real-world scenarios, we assume no other sources of error (i.e., the exact target distance is recovered when the predicted bounding box is not offset) and simulate the binocular ranging results under different baseline distances and focal lengths when a single predicted bounding box shifts inward by five pixels. Additionally, we conducted further simulations for a baseline distance of ΔX = 38 mm and a focal length of f = 12 mm, analyzing the ranging results when the bounding box shifts by two pixels and one pixel, respectively.
For binocular ranging with identical camera intrinsic parameters and resolution, Equation (18) can be simplified into the following:
$$Z = \frac{f \, \Delta X}{d (u_j - u_i)} = \frac{f \, \Delta Y}{d (v_j - v_i)}$$
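The simulation described above can be sketched as follows, under the stated assumption that an inward shift of one predicted box reduces the ideal disparity f·ΔX/(d·Z) by the offset in pixels. With these inputs the sketch reproduces the first column of Table 5.

```python
# Sketch of the Table 5 simulation: compute the ideal disparity for a target at
# z_true, subtract the bounding box offset, and invert to get the biased range.
def biased_range_cm(z_true_cm: float, f_mm: float, baseline_mm: float,
                    offset_px: float, pixel_mm: float = 0.0028) -> float:
    z_true_mm = z_true_cm * 10.0
    disparity_px = f_mm * baseline_mm / (pixel_mm * z_true_mm)   # ideal disparity
    return f_mm * baseline_mm / (pixel_mm * (disparity_px - offset_px)) / 10.0

# Reproduces the first Table 5 column: dX = 19 mm, f = 6 mm, 5-pixel offset
# (e.g. 90 cm -> 101.18 cm, 100 cm -> 114.00 cm).
for z in (90, 95, 100, 105, 110, 115, 120):
    print(z, round(biased_range_cm(z, f_mm=6, baseline_mm=19, offset_px=5), 2))
```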
As shown in Table 5, increasing the baseline distance and focal length reduces the impact of bounding box displacement on ranging errors. Additionally, smaller bounding box shifts result in more accurate ranging outcomes. Therefore, the ranging accuracy can be improved by increasing the focal length and baseline distance, as well as by selecting a higher-precision object detection algorithm to minimize bounding box displacement. In practical applications, higher-precision algorithms typically have slower inference speeds. Considering the trade-off between detection accuracy, processing time, and model deployment on mobile devices, we have chosen the well-established YOLOv5 object detection algorithm. Table 6 shows the performance metrics for various YOLOv5 models trained on the COCO dataset. Finally, we utilize the pre-trained YOLOv5l model to balance performance and efficiency (https://github.com/ultralytics/yolov5, accessed on 20 March 2024).
In the future, with the adaptation of target detection algorithms on mobile platforms such as RV1126, higher-precision object detection algorithms can be employed to further enhance ranging accuracy. Additionally, training the detection model specifically for the target can significantly improve detection precision, thereby leading to more accurate subsequent distance estimation.

4.4. Comparison Experiment Between Binocular and Multi-Eye Ranging Algorithms

In this section, we replaced the lenses of the MFBCE to perform binocular ranging experiments. The results of the binocular ranging were then compared to those of the MFBCE’s multi-eye ranging, as shown in Table 7.
Unlike the simulated results in Table 5, which reflect only the impact of object detection errors on ranging errors, Table 7 presents the actual ranging results, which incorporate both mechanical errors and object detection errors. The results in Table 7 validate our finding that increasing the baseline distance and using long-focal-length lenses can effectively reduce the impact of errors and improve ranging accuracy (as discussed in Section 3.3 and Section 4.3). As a result, the binocular ranging algorithm achieves the lowest error (MAE of 1.90 and RMSE of 2.28) when ΔX = 38 mm and f = 12 mm.
Furthermore, the proposed multi-eye ranging algorithm, based on the MFBCE, effectively integrates information from five different focal length lenses. Without increasing the baseline or using longer focal length lenses, it improves ranging accuracy by directly minimizing the error δ, leading to superior ranging results. The least squares method yielded the smallest error in this approach (MAE of 1.05 and RMSE of 1.26). Figure 12 presents a comparison of the absolute errors between the binocular ranging algorithm and the proposed multi-eye ranging algorithm.
The results presented in Table 8 show the distance estimation performance of the MFBCE system for targets at various distances. It can be observed that as the distance increases, the ranging error of the MFBCE increases significantly. This is because, in order to verify the generalizability of the algorithm, no calibration or registration was performed for the lenses of the MFBCE. The proper registration of a system can significantly reduce mechanical errors, thereby improving both the measurable distance range and overall accuracy.
In summary, the experiments demonstrate the effectiveness of the MFBCE for target detection and distance measurement. The results prove the superiority of the proposed MFBCE target distance measurement algorithm. Compared to traditional binocular ranging methods, the proposed algorithm exhibits strong environmental adaptability, making it suitable for cameras with different focal lengths. Moreover, by effectively integrating multi-eye information, it achieves higher ranging accuracies without the need for calibration.

5. Conclusions

This paper presents a multi-focal bionic compound eye (MFBCE) for distance measurements based on multi-eye vision. By utilizing an integrated core board to process image data, the system eliminates the dependence on graphics cards, reducing the weight and volume of the bionic compound eye. By deriving the intrinsic relationship of images obtained with multi-focal-length lenses, this approach overcomes the limitations of stereo and multi-eye ranging algorithms, which require identical cameras and prior calibration. This improvement enhances the system’s distance measurement accuracy and extends its applicability. The experimental results demonstrate that the proposed MFBCE is compact, lightweight, and highly accurate. Combined with the designed multi-eye distance measurement algorithm, it realizes high-precision distance measurements. Without increasing the baseline or using a longer focal length lens, the MAE is one-half of that of the binocular ranging method. This system holds significant potential for applications in areas such as autonomous vehicle obstacle avoidance, robotic grasping, and driver assistance systems.

Author Contributions

Conceptualization, Q.L.; data curation, Q.L. and S.L.; formal analysis, Q.L. and J.X.; funding acquisition, X.W.; investigation, Q.L. and J.X.; methodology, Q.L.; project administration, Q.L. and X.W.; resources, X.W. and R.W.; software, Q.L.; supervision, Q.L. and X.W.; validation, Q.L.; visualization, Q.L. and S.L.; writing—original draft, Q.L.; writing—review and editing, Q.L., X.W. and J.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Petrie, G.; Toth, C.K. Introduction to laser ranging, profiling, and scanning. In Topographic Laser Ranging and Scanning; CRC Press: Boca Raton, FL, USA, 2018; pp. 1–28.
  2. Foley, J.M. Binocular distance perception. Psychol. Rev. 1980, 87, 411.
  3. Blake, R.; Wilson, H. Binocular vision. Vis. Res. 2011, 51, 754–770.
  4. Wu, S.; Jiang, T.; Zhang, G.; Schoenemann, B.; Neri, F.; Zhu, M.; Bu, C.; Han, J.; Kuhnert, K.-D. Artificial compound eye: A survey of the state-of-the-art. Artif. Intell. Rev. 2017, 48, 573–603.
  5. Wang, D.; Pan, Q.; Zhao, C.; Hu, J.; Xu, Z.; Yang, F.; Zhou, Y. A study on camera array and its applications. IFAC-PapersOnLine 2017, 50, 10323–10328.
  6. Shogenji, R.; Kitamura, Y.; Yamada, K.; Miyatake, S.; Tanida, J. Bimodal fingerprint capturing system based on compound-eye imaging module. Appl. Opt. 2004, 43, 1355–1359.
  7. Ma, M.; Guo, F.; Cao, Z.; Wang, K. Development of an artificial compound eye system for three-dimensional object detection. Appl. Opt. 2014, 53, 1166–1172.
  8. Zhao, Z.-F.; Liu, J.; Zhang, Z.-Q.; Xu, L.-F. Bionic-compound-eye structure for realizing a compact integral imaging 3D display in a cell phone with enhanced performance. Opt. Lett. 2020, 45, 1491–1494.
  9. Kagawa, K.; Yamada, K.; Tanaka, E.; Tanida, J. A three-dimensional multifunctional compound-eye endoscopic system with extended depth of field. Electron. Commun. Jpn. 2012, 95, 14–27.
  10. Tanida, J.; Mima, H.; Kagawa, K.; Ogata, C.; Umeda, M. Application of a compound imaging system to odontotherapy. Opt. Rev. 2015, 22, 322–328.
  11. Davis, J.; Barrett, S.; Wright, C.; Wilcox, M. A bio-inspired apposition compound eye machine vision sensor system. Bioinspir. Biomim. 2009, 4, 046002.
  12. Venkataraman, K.; Lelescu, D.; Duparré, J.; McMahon, A.; Molina, G.; Chatterjee, P.; Mullis, R.; Nayar, S. PiCam: An ultra-thin high performance monolithic camera array. ACM Trans. Graph. (TOG) 2013, 32, 1–13.
  13. Afshari, H.; Popovic, V.; Tasci, T.; Schmid, A.; Leblebici, Y. A spherical multi-camera system with real-time omnidirectional video acquisition capability. IEEE Trans. Consum. Electron. 2012, 58, 1110–1118.
  14. Afshari, H.; Jacques, L.; Bagnato, L.; Schmid, A.; Vandergheynst, P.; Leblebici, Y. The PANOPTIC camera: A plenoptic sensor with real-time omnidirectional capability. J. Signal Process. Syst. 2013, 70, 305–328.
  15. Cao, A.; Shi, L.; Deng, Q.; Hui, P.; Zhang, M.; Du, C. Structural design and image processing of a spherical artificial compound eye. Optik 2015, 126, 3099–3103.
  16. Carles, G.; Chen, S.; Bustin, N.; Downing, J.; McCall, D.; Wood, A.; Harvey, A.R. Multi-aperture foveated imaging. Opt. Lett. 2016, 41, 1869–1872.
  17. Popovic, V.; Seyid, K.; Pignat, E.; Çogal, Ö.; Leblebici, Y. Multi-camera platform for panoramic real-time HDR video construction and rendering. J. Real-Time Image Process. 2016, 12, 697–708.
  18. Xue, J.; Qiu, S.; Wang, X.; Jin, W. A compact visible bionic compound eyes system based on micro-surface fiber faceplate. In Proceedings of the 2019 International Conference on Optical Instruments and Technology: Optoelectronic Imaging/Spectroscopy and Signal Processing Technology, Beijing, China, 2–4 November 2019; pp. 68–75.
  19. Horisaki, R.; Irie, S.; Ogura, Y.; Tanida, J. Three-dimensional information acquisition using a compound imaging system. Opt. Rev. 2007, 14, 347–350.
  20. Lee, W.-B.; Lee, H.-N. Depth-estimation-enabled compound eyes. Opt. Commun. 2018, 412, 178–185.
  21. Yang, C.; Qiu, S.; Jin, W.Q.; Dai, J.L. Image mosaic and positioning algorithms of bionic compound eye based on fiber faceplate. Binggong Xuebao/Acta Armamentarii 2018, 39, 1144–1150.
  22. Liu, J.; Zhang, Y.; Xu, H.; Wu, D.; Yu, W. Long-working-distance 3D measurement with a bionic curved compound-eye camera. Opt. Express 2022, 30, 36985–36995.
  23. Oh, W.; Yoo, H.; Ha, T.; Oh, S. Local selective vision transformer for depth estimation using a compound eye camera. Pattern Recognit. Lett. 2023, 167, 82–89.
  24. Wang, X.; Li, L.; Liu, J.; Huang, Z.; Li, Y.; Wang, H.; Zhang, Y.; Yu, Y.; Yuan, X.; Qiu, L. Infrared bionic compound-eye camera: Long-distance measurement simulation and verification. Electronics 2025, 14, 1473.
  25. Song, Y.M.; Xie, Y.; Malyarchuk, V.; Xiao, J.; Jung, I.; Choi, K.-J.; Liu, Z.; Park, H.; Lu, C.; Kim, R.-H. Digital cameras with designs inspired by the arthropod eye. Nature 2013, 497, 95–99.
  26. Viollet, S.; Godiot, S.; Leitel, R.; Buss, W.; Breugnon, P.; Menouni, M.; Juston, R.; Expert, F.; Colonnier, F.; L’Eplattenier, G. Hardware architecture and cutting-edge assembly process of a tiny curved compound eye. Sensors 2014, 14, 21702–21721.
  27. Leininger, B.; Edwards, J.; Antoniades, J.; Chester, D.; Haas, D.; Liu, E.; Stevens, M.; Gershfield, C.; Braun, M.; Targove, J.D. Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS). In Proceedings of the Defense Transformation and Net-Centric Systems 2008, Orlando, FL, USA, 16–20 March 2008; pp. 141–151.
  28. Cheng, Y.; Cao, J.; Zhang, Y.; Hao, Q. Review of state-of-the-art artificial compound eye imaging systems. Bioinspir. Biomim. 2019, 14, 031002.
  29. Phan, H.L.; Yi, J.; Bae, J.; Ko, H.; Lee, S.; Cho, D.; Seo, J.-M.; Koo, K.-i. Artificial compound eye systems and their application: A review. Micromachines 2021, 12, 847.
  30. Jiang, P.; Ergu, D.; Liu, F.; Cai, Y.; Ma, B. A review of YOLO algorithm developments. Procedia Comput. Sci. 2022, 199, 1066–1073.
  31. Zhu, X.; Lyu, S.; Wang, X.; Zhao, Q. TPH-YOLOv5: Improved YOLOv5 based on transformer prediction head for object detection on drone-captured scenarios. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 2778–2788.
  32. Griffin, B.A.; Corso, J.J. Depth from camera motion and object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 1397–1406.
  33. Wang, X.; Li, D.; Zhang, G. Panoramic stereo imaging of a bionic compound-eye based on binocular vision. Sensors 2021, 21, 1944.
  34. Lipson, L.; Teed, Z.; Deng, J. RAFT-Stereo: Multilevel recurrent field transforms for stereo matching. In Proceedings of the 2021 International Conference on 3D Vision (3DV), London, UK, 1–3 December 2021; pp. 218–227.
  35. Jiang, H.; Xu, R.; Jiang, W. An improved RAFT-Stereo trained with a mixed dataset for the robust vision challenge 2022. arXiv 2022, arXiv:2210.12785.
  36. Xu, G.; Wang, X.; Zhang, Z.; Cheng, J.; Liao, C.; Yang, X. IGEV++: Iterative multi-range geometry encoding volumes for stereo matching. arXiv 2024, arXiv:2409.00638.
Figure 1. Structure of the multi-focal bionic compound eye (MFBCE). (a) Design diagram; (b) real object image.
Figure 2. Principle of the MFBCE.
Figure 3. Field of view (FOV) of MFBCE.
Figure 4. Real imaging field of view of MFBCE.
Figure 5. MFBCE distance measurement algorithm.
Figure 6. Transformation between four coordinate systems.
Figure 7. Target projection on different pixel coordinate systems.
Figure 8. MFBCE target detection results.
Figure 9. Binocular target detection results. (a) Left and (b) right.
Figure 10. Binocular disparity map. (a) RAFT pseudo-color image; (b) RAFT grayscale image; (c) IGEV++ pseudo-color image; (d) IGEV++ grayscale image. The pseudo-color map is mapped in jet format, with red, orange, yellow, green, cyan, blue, and purple colors distributed in order from near to far.
Figure 11. Predicted bounding box offset. (a) Normal; (b) offset.
Figure 12. Comparison of absolute errors between binocular and multi-eye ranging (100 cm).
Table 1. System parameters of MFBCE.

Parameter | Typical Value
Size | 60 mm × 60 mm × 80 mm
Weight | 275 g
Focal length | 6 mm, 8 mm, 12 mm
Inter-camera distance (baseline) | 19 mm
Optical format | 1/2.9 inch
Pixel size | 2.8 μm × 2.8 μm
Active pixel array | 1920 × 1080
Table 2. Field of view of the MFBCE.

Focal Length | DFOV | HFOV | VFOV
6 mm | 54.84° | 48.26° | 28.18°
8 mm | 42.52° | 37.14° | 21.40°
12 mm | 29.09° | 25.25° | 14.36°
Table 3. Binocular ranging results of lenses with different focal lengths (stereo, ΔX = 38 mm; distances in cm).

Target | 6 mm and 8 mm | 6 mm and 12 mm | 8 mm and 12 mm
bird | 100.31 | 101.27 | 101.00
cat | 104.92 | 101.07 | 100.56
dog | 103.90 | 95.24 | 101.66
zebra | 107.74 | 102.97 | 106.10
horse | 97.86 | 98.95 | 102.49
cow | 98.16 | 96.74 | 101.81
elephant | 101.27 | 93.78 | 105.69
MAE | 3.16 | 2.94 | 2.76
RMSE | 3.95 | 3.48 | 3.44
Table 4. Binocular ranging results with different algorithms (stereo, ΔX = 38 mm, f = 8 mm; distances in cm).

Target | RAFT | IGEV++ | Ours (Detection)
bird | 111.69 | 107.20 | 105.09
cat | 110.54 | 106.67 | 105.09
dog | 110.54 | 106.14 | 103.56
zebra | 109.97 | 105.61 | 104.07
horse | 109.40 | 105.61 | 102.56
cow | 109.40 | 105.61 | 103.56
elephant | 108.84 | 104.58 | 102.07
MAE | 10.05 | 5.92 | 3.71
RMSE | 10.09 | 5.97 | 3.86
Table 5. Simulated binocular ranging results with predicted bounding box offsets (values in cm).

Distance (cm) | ΔX = 19 mm, 6 mm | ΔX = 19 mm, 8 mm | ΔX = 19 mm, 12 mm | ΔX = 38 mm, 6 mm | ΔX = 38 mm, 8 mm | ΔX = 38 mm, 12 mm | ΔX = 38 mm, 12 mm (2 pixel) | ΔX = 38 mm, 12 mm (1 pixel)
90 | 101.18 | 98.13 | 95.26 | 95.26 | 93.89 | 92.56 | 91.01 | 90.50
95 | 107.55 | 104.11 | 100.88 | 100.88 | 99.35 | 97.85 | 96.21 | 95.56
100 | 114.00 | 110.14 | 106.54 | 106.54 | 104.83 | 103.17 | 101.24 | 100.62
105 | 120.54 | 116.24 | 112.24 | 112.24 | 110.34 | 108.50 | 106.37 | 105.68
110 | 127.18 | 122.40 | 117.97 | 117.97 | 115.87 | 113.84 | 111.51 | 110.75
115 | 133.91 | 128.62 | 123.74 | 123.74 | 121.43 | 119.21 | 116.65 | 115.82
120 | 140.74 | 134.91 | 129.55 | 129.55 | 127.02 | 124.59 | 121.79 | 120.89
Table 6. Performance metrics for various YOLOv5 models.

Model | Size | mAPval 50–95 | mAPval 50 | Speed CPU b1 (ms) | Speed RV1126 (ms) | Params (M)
YOLOv5n | 640 | 28.0 | 45.7 | 45 | 33 | 1.9
YOLOv5s | 640 | 37.4 | 56.8 | 98 | 65 | 7.2
YOLOv5m | 640 | 45.4 | 64.1 | 224 | 144 | 21.2
YOLOv5l | 640 | 49.0 | 67.3 | 430 | 280 | 46.5
YOLOv5x | 640 | 50.7 | 68.9 | 766 | 502 | 86.7
Table 7. MFBCE binocular and multi-eye ranging results (100 cm; distances in cm).

Target | ΔX = 19 mm, 6 mm | ΔX = 19 mm, 8 mm | ΔX = 19 mm, 12 mm | ΔX = 38 mm, 6 mm | ΔX = 38 mm, 8 mm | ΔX = 38 mm, 12 mm | MFBCE Z_AM | MFBCE Z_RMS | MFBCE Z_LS
bird | 107.14 | 104.85 | 104.53 | 107.14 | 104.37 | 103.90 | 102.52 | 102.53 | 101.39
cat | 105.82 | 106.81 | 103.90 | 106.48 | 104.85 | 102.35 | 101.06 | 101.07 | 100.95
dog | 109.89 | 105.82 | 100.25 | 108.50 | 102.96 | 101.14 | 101.11 | 101.15 | 100.90
zebra | 105.82 | 104.85 | 100.25 | 106.48 | 104.37 | 99.96 | 103.85 | 103.86 | 102.53
horse | 109.89 | 106.81 | 101.44 | 105.17 | 102.96 | 100.54 | 100.99 | 101.01 | 100.84
cow | 104.56 | 105.82 | 102.04 | 105.82 | 103.43 | 102.35 | 99.49 | 99.50 | 99.42
elephant | 108.50 | 103.90 | 102.04 | 103.90 | 103.43 | 102.96 | 99.94 | 99.98 | 99.86
MAE | 7.37 | 5.55 | 2.06 | 6.21 | 3.77 | 1.90 | 1.44 | 1.45 | 1.05
RMSE | 7.63 | 5.64 | 2.57 | 6.36 | 3.83 | 2.28 | 1.88 | 1.89 | 1.26
Table 8. MFBCE ranging results at different distances (Ours (MFBCE), Z_LS; values in cm).

Target | 100 cm | 120 cm | 140 cm | 160 cm | 180 cm | 200 cm
bird | 101.39 | 121.77 | 144.22 | 172.89 | 193.07 | 218.60
cat | 100.95 | 123.21 | 144.71 | 170.44 | 193.72 | 215.56
dog | 100.90 | 123.60 | 147.20 | 169.00 | 195.22 | 216.10
zebra | 102.53 | 122.94 | 147.27 | 171.00 | 192.91 | 214.67
horse | 100.84 | 122.13 | 144.83 | 169.94 | 190.07 | 213.12
cow | 99.42 | 122.06 | 144.97 | 168.24 | 193.03 | 210.84
elephant | 99.86 | 121.34 | 144.54 | 169.44 | 185.09 | 209.53
MAE | 1.05 | 2.43 | 5.39 | 10.15 | 11.87 | 14.03
RMSE | 1.26 | 2.55 | 5.52 | 10.25 | 12.27 | 14.34

