Article

Probe Sector Matching for Freehand 3D Ultrasound Reconstruction

School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(11), 3146; https://doi.org/10.3390/s20113146
Submission received: 14 March 2020 / Revised: 26 May 2020 / Accepted: 28 May 2020 / Published: 2 June 2020
(This article belongs to the Section Intelligent Sensors)

Abstract

A 3D ultrasound image reconstruction technique, named probe sector matching (PSM), is proposed in this paper for a freehand linear array ultrasound probe equipped with multiple sensors, providing the position and attitude of the transducer and the pressure between the transducer and the target surface. The proposed PSM method includes three main steps. First, the imaging target and the working range of the probe are set to be the center and the radius of the imaging field of view, respectively. To reconstruct a 3D volume, the positions of all necessary probe sectors are pre-calculated inversely to form a sector database. Second, 2D cross-section probe sectors with the corresponding optical positioning, attitude and pressure information are collected when the ultrasound probe is moving around the imaging target. Last, an improved 3D Hough transform is used to match the plane of the current probe sector to the existing sector images in the sector database. After all pre-calculated probe sectors are acquired and matched into the 3D space defined by the sector database, a 3D ultrasound reconstruction is completed. The PSM is validated through two experiments: a virtual simulation using a numerical model and a lab experiment using a real physical model. The experimental results show that the PSM effectively reduces the errors caused by changes in the target position due to the uneven surface pressure or the inhomogeneity of the transmission media. We conclude that the PSM proposed in this study may help to design a lightweight, inexpensive and flexible ultrasound device with accurate 3D imaging capacity.

1. Introduction

Ultrasound imaging plays an important role in clinical diagnosis, in which the locations of the abnormalities need to be detected accurately [1,2]. Compared with other diagnostic methods, ultrasound imaging has several advantages, including its promptness, non-invasiveness and low cost. In addition, the ultrasound imaging probe works with low power consumption and causes no harm to patients and operators. Therefore, ultrasound-based diagnostic methods are used widely for screening and preventive health care, such as the regular prenatal care checkups for pregnant women and their fetuses.
A drawback of traditional 2D ultrasound imaging is that only cross-sectional images of the anatomical target are provided. Doctors need to reconstruct the full 3D structure of the target in mind. Additionally, it can be difficult to place the 2D transducer in a perfect position to appreciate the ideal cross section [3]. 3D reconstruction [4,5,6,7] may help solve this problem by synthesizing a set of cross sections with the spatial position information recorded by the transducer during the imaging. Reconstructed 3D images help doctors to better understand the anatomical morphology more intuitively. However, current equipment in the market with 3D ultrasound imaging capacity is expensive and often has a limited field of view.
Therefore, freehand 3D ultrasound reconstruction has become a research topic in recent years [8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33], aiming to address the concerns of cost and field of view more appropriately. The freehand technology reduces the cost by using a linear transducer instead of a multi-dimensional transducer array in most 3D ultrasound products. Additionally, the linear transducer design is more compact, so the manipulation of the transducer becomes more convenient, and the field of view can be improved as well.
In general, there are three main ways to implement 3D ultrasound reconstruction: 1) A freehand system equipped with multiple positioning sensors [8,9,11,16,17,18,19,20,21,26]. This design is relatively lightweight. However, when the transducer is moved, the measured target surface may also be squeezed, generating distortions. Consequently, the position of the target may be displaced, as well, and the medium between the target and the probe changes. These two problems will eventually lead to the loss of fidelity. 2) A 3D robotic arm is used instead of a free hand to drive the transducer to translate, tilt and rotate [12,14,15,22,23,24,33]. However, it is difficult to improve the spatial resolution in this way, and the equipment is bulky and difficult to operate. 3) Multiple linear array transducers may be used simultaneously to acquire a set of cross sections distributed in different locations in space [31,32]. In this way, the equipment is often large and costly.
Based on the three designs mentioned above, a number of studies were carried out. An adaptive kernel regression method was proposed for volume reconstruction from freehand 2D images. The purpose of the method was to prevent the reconstructed image from being degraded by speckle noise and artifacts [8]. By comparing the results of the reconstruction algorithm with the ultrasound data source, differences in image quality of the reconstructed volumes could be detected. This method generated new approaches for improving the quality of 3D reconstruction [11]. At low acquisition frame rates, a probe trajectory-based reconstruction method was able to improve the reconstruction results, but it was limited to the case where the probe moved at a constant speed [15]. In order to overcome the effects of target surface changes, a section-locating method based on the Hough transform was proposed, which was able to quickly locate the current transducer sector by comparing the points obtained from the Hough transform with the actual position of the transducer [16]. By controlling the contact force between the ultrasound probe and the surface of the target, a handheld force-controlled ultrasound probe was developed to improve the stability of ultrasound images [17]. Because this probe combined force and position control, it might help to overcome the effects caused by the surface changes. However, when the probe was in contact with a rigid target surface, the self-jitter of the probe would cause errors. In order to improve the reconstruction accuracy of the ultrasound image, the freehand probe was installed on a shelf with position sensors and multiple motion motors. The shelf was used to limit the movement of the probe. This method improved the quality of the reconstructed 3D images, but the limited portability and flexibility of the device restricted its applications [22].
In light of the advantages and disadvantages of the three common methods mentioned above, the current study proposes a freehand 3D ultrasound image reconstruction method, named probe sector matching (PSM), based on the linear array transducer design. PSM includes three main steps: 1) With the target as the imaging center and the working range of the probe as the radius, all the 2D image planes that the probe sector may pass through are obtained first to enable the fast matching algorithm implemented in a later stage [34,35,36,37,38]. 2) The user holds the probe and moves it freely on the surface of the measured target. During the movement, the probe produces a set of 2D ultrasound images. An optical positioning device and an attitude sensor installed on the probe are used to obtain the high-precision spatial position and attitude of the probe. At the same time, a pressure sensor measures the pressure between the probe and the target surface. This pressure is used to compensate for the position errors caused by the squeezing. 3) With the improved 3D Hough transform [39,40,41], the plane of the current probe sector is matched to one of the planes pre-calculated in Step 1. All the probe sectors are collected and aligned in a 3D space to complete the 3D ultrasound reconstruction. PSM is tested in two experiments, one with a numerical model and the other with a real target.

2. Space Plane Inverse Operation

In the proposed freehand 3D ultrasound imaging system, a spatial positioning device is installed on a linear-transducer probe, and the most critical technique is reconstructing the 3D image from the acquired 2D images and the actual position of each probe sector. For a given target depth, the working radius of the ultrasonic probe is generally fixed, whereas the distance between the imaging target and the probe changes during the freehand movement. Even if the surface of the target is rigid, this distance may still vary due to changes in the position and angle of the probe itself, as shown in Figure 1.
In Figure 1, the shaded sector is the effective detection range of the probe, P is the location of the imaging target, r is the working radius of the probe, and S_i(x_i, y_i, z_i) are the probe coordinates obtained by the external positioning sensor. The probe moves along the surface of the target. When the probe moves to a new position, S_1(x_1, y_1, z_1), the distance l may not be equal to the working radius of the ultrasound probe anymore.
In practice, the probe squeezes the surface of the imaged target during the movement, so the distance between the probe and the target may change, causing the target to move in or out of the working radius of the probe at a given time, as shown in Figure 2. However, because these distance changes are unpredictable, computing them directly incurs considerable extra overhead.
In actual measurement, the target position P_i(x, y, z) needs to be set first. Assuming that the target is a straight line L in a Cartesian coordinate system, we define L by the following equations:
\[
\begin{cases}
A_1 x + B_1 y + C_1 z + D_1 = 0 \\
A_2 x + B_2 y + C_2 z + D_2 = 0
\end{cases}
\tag{1}
\]
A_1, B_1, C_1 and D_1 are the parameters of Plane 1, and A_2, B_2, C_2 and D_2 are the parameters of Plane 2. It is assumed that the ultrasonic waves enter in a direction perpendicular to the Z axis. The sector obtained by the linear transducer may then lie in any of the countless planes passing through the straight line L. For convenience, assume that the line L overlaps with the Z axis. To locate the line L, the intersection of the sector and the XY plane is needed. The spherical coordinates of the points at a given radius can be calculated by borrowing the idea of 2D line detection based on the Hough transform.
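As a concrete illustration of Equation (1), the short sketch below (Python with NumPy; all function and variable names are ours, not from the paper) recovers the line L from two plane-coefficient tuples by taking the cross product of the plane normals and solving for one point that lies on both planes.

```python
import numpy as np

def line_from_planes(p1, p2):
    """Recover the line L of Equation (1) as the intersection of two planes
    given as (A, B, C, D) coefficient tuples. Returns a point on L and the
    unit direction vector. Minimal sketch; names are illustrative."""
    n1, d1 = np.array(p1[:3], float), p1[3]
    n2, d2 = np.array(p2[:3], float), p2[3]
    direction = np.cross(n1, n2)               # L is parallel to both planes
    if np.linalg.norm(direction) < 1e-12:
        raise ValueError("planes are parallel: no unique line L")
    # Solve for one point satisfying both plane equations; the third row pins
    # the component along the line direction to zero so the system is square.
    A = np.vstack([n1, n2, direction])
    b = np.array([-d1, -d2, 0.0])
    point = np.linalg.solve(A, b)
    return point, direction / np.linalg.norm(direction)

# Example: two planes (x = 0 and y = 0) whose intersection is the Z axis.
p, d = line_from_planes((1, 0, 0, 0), (0, 1, 0, 0))
print(p, d)   # ~[0 0 0] and [0 0 1] (up to sign)
```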
As shown in Figure 3, in a spherical coordinate system with P as the origin of the coordinates, S_i(x_i, y_i, z_i) is a point on the sphere at a distance r from the origin P. If the detection depth of the probe is also set to r, then S_i(x_i, y_i, z_i) are the coordinates of the contact point between the probe and the target surface. According to the Hough transform, in the case where the radius r is known and P is the origin of the spherical coordinate system, a set of points {S_1(x_1, y_1, z_1), …, S_n(x_n, y_n, z_n)} can be calculated.
Expanding into a 3D space, with P as the origin of Cartesian coordinates, XYZ as the axis directions and the measured target M modeled as a cylinder with radius l, the straight line L mentioned earlier passes through the center of the cylinder, as shown in Figure 4. Δh is the horizontal resolution of the linear transducer. Δh is generally a constant, related to the number of elements in the linear transducer, and it may also be changed by moving the probe along the Z axis.
Supposing that the surface of the measured target is a cylinder and that the straight line L passes through the center axis of the cylinder, with Equation (1), four straight lines can be calculated on the surface of the cylinder that are parallel to L . In Figure 5, the four planes composed of these four straight lines and the straight line L divide the cylinder into eight equal parts.
With any two points A(x_1, y_1, z_1) and B(x_2, y_2, z_2) on L, and another point P_i(x_i, y_i, z_i) in each of the four sectors S_1, S_2, S_3 and S_4 that pass through L, the equation of each sector is obtained from the coordinates of the three points as follows:
\[
\begin{vmatrix}
x - x_1 & y - y_1 & z - z_1 \\
x_2 - x_1 & y_2 - y_1 & z_2 - z_1 \\
x_i - x_1 & y_i - y_1 & z_i - z_1
\end{vmatrix} = 0
\tag{2}
\]
where (x_i, y_i, z_i) are the coordinates of the point P_i(x_i, y_i, z_i) taken in the corresponding sector: P_1 is a point in S_1, P_2 is a point in S_2, and so on.
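The determinant condition of Equation (2) simply states that P_i lies in the plane spanned by the two points on L and the sector point. A minimal sketch of this computation, using the cross product to obtain the plane normal (function and variable names are ours), is given below.

```python
import numpy as np

def sector_plane(a, b, p_i):
    """Plane through three points A, B and P_i, following Equation (2): the
    determinant condition is equivalent to n . (x - A) = 0 with
    n = (B - A) x (P_i - A). Returns (A, B, C, D) of Ax + By + Cz + D = 0."""
    a, b, p_i = (np.asarray(v, float) for v in (a, b, p_i))
    n = np.cross(b - a, p_i - a)       # plane normal
    if np.linalg.norm(n) < 1e-12:
        raise ValueError("points are collinear: plane is undefined")
    d = -np.dot(n, a)
    return (*n, d)

# Points A and B on the line L (here the Z axis) and one point in the sector.
print(sector_plane((0, 0, 0), (0, 0, 1), (1, 0, 0)))   # the plane y = 0
```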
The number of sectors in Figure 5 is equal to the vertical resolution Δv. The more sectors there are, the better the quality of the reconstructed image, but the longer the space plane inverse operation, which generates the sector locations, takes, and the more storage space is consumed.
The steps of the space plane inverse operation are shown in Table 1.
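To make the inverse operation of Table 1 concrete, the following sketch pre-computes a Sector Database for a chosen Δv under the simplifying assumptions that L is the line through P parallel to the Z axis and that the sectors are spread evenly around it; the data layout is illustrative and not the paper's implementation.

```python
import numpy as np

def build_sector_database(P, r, delta_v):
    """Sketch of the space plane inverse operation (Table 1): pre-compute
    delta_v sector planes passing through the line L (taken here as the line
    through P parallel to the Z axis), spaced evenly around it. Each entry
    stores one surface point S_k at distance r from P plus the plane normal."""
    P = np.asarray(P, float)
    database = []
    for k in range(delta_v):
        phi = 2.0 * np.pi * k / delta_v                  # azimuth of sector k
        in_plane = np.array([np.cos(phi), np.sin(phi), 0.0])
        S_k = P + r * in_plane                           # probe contact point
        normal = np.array([-np.sin(phi), np.cos(phi), 0.0])  # plane contains L
        database.append({"S": S_k, "normal": normal, "phi": phi})
    return database

sd = build_sector_database(P=(0, 0, 0), r=40.0, delta_v=8)
print(len(sd), sd[0]["S"], sd[0]["normal"])
```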

3. Spatial Position and Attitude of Probe Sector

3.1. Optical Positioning of the Probe

In order to get the spatial position of the freehand probe, PSM uses the optical positioning method with virtual reality equipment. This positioning system includes an external base station and multiple photosensors. The base station has an infrared LED array and two infrared laser emitters that rotate perpendicularly to each other. In order to obtain accurate position information, ultrasound or other positioning technologies are sometimes used to assist.
When the base station is operating, two infrared laser emitters alternately emit light during a rotating cycle. Multiple optical sensors are mounted on an optical positioning ball, which is attached to the freehand probe. Once three or more photosensitive sensors receive signals from the base station at the same time, we can measure the time when the two infrared lasers reach the sensor. Combined with the rotation angle information, the spatial location of the positioning ball on the freehand probe can be calculated, and subsequently, the trajectory of the probe can be obtained as a time sequence.
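A simplified sketch of the timing computation is given below. It assumes an idealized base station with a known, constant rotation period whose sweeps start at the sync pulse; the actual commercial protocol and calibration are more involved, and the final pose solve from three or more sensors is only indicated in a comment.

```python
import numpy as np

def sweep_angles(t_sync, t_hit_h, t_hit_v, period):
    """Convert the arrival times of the two orthogonal laser sweeps into a
    pair of angles, assuming each emitter rotates at a constant, known period
    and starts its sweep at the sync pulse (an idealization)."""
    ang_h = 2.0 * np.pi * (t_hit_h - t_sync) / period
    ang_v = 2.0 * np.pi * (t_hit_v - t_sync) / period
    return ang_h, ang_v

def ray_from_angles(ang_h, ang_v):
    """Direction from the base station towards the photosensor, under the
    illustrative convention that an angle of pi/2 means 'straight ahead'."""
    d = np.array([np.tan(ang_h - np.pi / 2), np.tan(ang_v - np.pi / 2), 1.0])
    return d / np.linalg.norm(d)

# With three or more photosensors of known geometry on the positioning ball,
# the resulting rays constrain the ball's position and orientation
# (a PnP-style solve, not shown here).
print(ray_from_angles(*sweep_angles(0.0, 4.2e-3, 4.5e-3, 16.7e-3)))
```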
The accuracy of this positioning method depends on the temporal resolution. A certain spacing is kept between the photosensitive sensors so that the error in the measured rotation-angle information does not become too large. This positioning method has many advantages: 1) the device is small in size; 2) the computational complexity is low; 3) the delay is small; and 4) the position feedback is accurate. It has therefore been widely used in virtual reality equipment. The optical positioning ball and base station used in this paper are shown in Figure 6.
Although the positioning ball and the base station can be used to obtain the spatial position and moving trajectory of the probe, the spatial attitude of the probe cannot be determined yet. To that end, an attitude sensor is also required. The installation position of the attitude sensor and the positioning ball on the probe is shown in Figure 7. The 2D ultrasound images collected by the current sector of the probe will be transmitted wirelessly to a computer for processing.

3.2. Pressure Sensor of the Probe

The initial working radius of the ultrasound probe is r. As the freehand probe moves along the surface of the measured target, the ultrasonic transmission medium may change, and the pressure of the probe on the surface may cause the target to change its position. Because the change of the medium cannot be predicted, the working radius of the probe cannot be adjusted, either. As a result, the acquired ultrasound image sometimes does not include the target, making the 3D synthesis difficult. Adding a handle with a pressure sensor to the probe effectively solves the positioning problem caused by the change of the medium. Figure 8 is a schematic diagram of the pressure sensor installation. The greater the density of the medium, the greater the force feedback when the probe is squeezed against the surface. The medium between the linear transducer and the target location is usually not homogeneous. The feedback pressure roughly estimates the average medium density of the current detection area, which is crucial for post-processing.
In Figure 8, the target position changes when the probe is held against the surface with the handle. At this time, the working radius of the probe needs to be increased to acquire a 2D ultrasound image including the target for 3D reconstruction. The feedback value of the pressure sensor guides the adjustment of the working radius of the probe.
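A hypothetical sketch of this adjustment is shown below; the linear pressure-to-radius mapping and its constants are illustrative assumptions, since the paper does not give the calibration between the pressure feedback and the radius correction.

```python
def adjusted_working_radius(r0, pressure, p_ref=0.0, gain=0.5):
    """Hypothetical sketch of the radius adjustment described above: when the
    handle presses the probe into the surface, the target is displaced away
    from the transducer, so the working radius is increased in proportion to
    the excess pressure. r0 in mm, pressure in N; p_ref and gain (mm per N)
    are illustrative constants, not values from the paper."""
    return r0 + gain * max(pressure - p_ref, 0.0)

print(adjusted_working_radius(r0=40.0, pressure=3.2))   # 41.6 mm
```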

3.3. Error Analysis of the Sector

Ideally, the initial working radius of the ultrasound probe is set to r. In actual operation, the working radius of the ultrasound image sector actually collected by the probe is not necessarily equal to r; it mainly depends on whether the linear transducer fits the surface of the measured target completely. Because the squeezing force is not controllable, the working radius at the sector edge is not necessarily equal to r, and there is a certain error Δr, as shown in Figure 9.
Figure 9a shows the probe fitting closely against the surface of the measured object, where the working radius of the sector is r. Figure 9b shows the probe not fitting closely: the working radius at the center of the sector is still r, but the working radius at the edge becomes d = r − Δr. In this case, the error is different for different transducer elements, and the sector error can be expressed as Δr_1, Δr_2, Δr_3, …, Δr_n, where n is the total number of elements in the linear transducer. The error values can be calculated from the arrangement curve of the transducer. Measuring the squeezing pressure of the probe and comparing it with these error values may reduce the error.
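The per-element correction can be sketched as follows; how the gap profile of each element is estimated from the arrangement curve and the measured pressure is an assumption for illustration, not a procedure taken from the paper.

```python
import numpy as np

def element_radius_errors(gap_profile, r):
    """Per-element working radii for a linear array that does not fully fit
    the surface. gap_profile[n] is the stand-off Δr_n of element n from the
    skin (assumed to come from the transducer arrangement curve and the
    measured pressure); the effective radius of element n is d_n = r - Δr_n."""
    delta_r = np.asarray(gap_profile, float)
    return r - delta_r

# Example: the centre elements touch the surface, the edges lift off by 2 mm.
gaps = np.abs(np.linspace(-2.0, 2.0, 9))
print(element_radius_errors(gaps, r=40.0))
```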

4. Improved 3D Hough Transform

The Hough transform can transform the global detection problem in the image space into a local peak detection problem in the parameter space. The Hough transform identifies and detects any analytical curve in the image space. The essence of the Hough transform is the mapping from image space to parameter space, that is, all points on the analytical curve in image space are concentrated to a certain unit in the parameter space to form a local peak. As long as there are enough data points in the image space that belong to the same analytical curve, the parameters of the analytical curve can be calculated by judging the accumulated value of each point in the parameter space.
To extend the 2D Hough transform to 3D, we need an appropriate transform for straight lines in a 3D space. Because the parameter ranges in Cartesian coordinates are not bounded, it is necessary to convert the straight-line representation from Cartesian coordinates to a spherical coordinate system. The conversion equation is as follows.
\[
x \sin\theta \cos\varphi + y \sin\theta \sin\varphi + z \cos\theta = r
\tag{3}
\]
where r is the distance from the origin to the target point, θ is the angle between the straight line and the positive direction of the Z axis, and φ is the angle between the projection of the line onto the XY plane and the positive direction of the X axis. {P_1(x_1, y_1, z_1), …, P_n(x_n, y_n, z_n)} is the corresponding set of points in XYZ space, which satisfies the restrictions listed in Table 2.
First, take two points, A(a_1, a_2, a_3) and B(b_1, b_2, b_3), from the probe sector. Substitute the coordinates of each point of the set {P_1(x_1, y_1, z_1), …, P_n(x_n, y_n, z_n)} into Equation (3), according to the restrictions in Table 2. By also substituting A(a_1, a_2, a_3) and B(b_1, b_2, b_3) into Equation (3), two more equations are obtained. We then have three equations from which to calculate the values of θ, φ and r, as shown in Equation (4). After the points are substituted, the coordinates of a position in the parameter space are obtained, and the accumulator of this position is incremented by one. Traversing every point once completes the 3D Hough transform of all the points. Taking the position with the largest count, the corresponding θ, φ and r can be converted to the plane data in XYZ space.
\[
\begin{cases}
a_1 \sin\theta \cos\varphi + a_2 \sin\theta \sin\varphi + a_3 \cos\theta = r \\
b_1 \sin\theta \cos\varphi + b_2 \sin\theta \sin\varphi + b_3 \cos\theta = r \\
x_n \sin\theta \cos\varphi + y_n \sin\theta \sin\varphi + z_n \cos\theta = r
\end{cases}
\tag{4}
\]
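For reference, a voting-based sketch of the plane detection behind Equations (3) and (4) is given below: every input point votes for the (θ, φ, r) cells consistent with Equation (3), and the cell with the largest count is returned. The grid sizes and the coarse discretization are illustrative choices of ours.

```python
import numpy as np

def hough_plane(points, n_theta=45, n_phi=90, r_max=100.0, n_r=100):
    """Vote in the (theta, phi, r) parameter space of Equation (3) and return
    the plane parameters with the highest count. Grid sizes are illustrative
    and deliberately coarse to keep the sketch fast."""
    pts = np.asarray(points, dtype=float)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    phis = np.linspace(-np.pi, np.pi, n_phi, endpoint=False)
    acc = np.zeros((n_theta, n_phi, n_r), dtype=int)
    for x, y, z in pts:
        for i, th in enumerate(thetas):
            for j, ph in enumerate(phis):
                # Equation (3): the r implied by this point for the (theta, phi) cell.
                r = x * np.sin(th) * np.cos(ph) + y * np.sin(th) * np.sin(ph) + z * np.cos(th)
                k = int(round(r / r_max * (n_r - 1)))
                if 0 <= k < n_r:
                    acc[i, j, k] += 1
    i, j, k = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[i], phis[j], k * r_max / (n_r - 1)

# Points lying on the plane z = 10 (theta = 0, r = 10 in Equation (3)).
pts = [(x, y, 10.0) for x in range(5) for y in range(5)]
print(hough_plane(pts))
```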
In the 3D Hough transform, the point P with the maximum accumulated value in the parameter space corresponds to a plane P_s in the image space, as shown in Figure 10. In Figure 10a, P is the position with the maximum accumulated value in the parameter space; after the 3D Hough transform, it yields a plane in the image space, as shown in Figure 10b.
Different representations of the coordinate space correspond to different 3D Hough transforms, and the choice must balance computational complexity and storage capacity according to the actual situation. It can be seen from Equation (4) that the expressions in the coordinate space and in the parameter space each contain three unknown variables. Therefore, to reduce the dimension of the calculation, we can consider reducing the number of unknown variables in the coordinate space: the fewer the unknown variables of the expressions in the coordinate space, the fewer the calculation dimensions of the corresponding parameter space. Based on the principle of the inverse Hough transform, this paper proposes an improved 3D Hough transform method. Points A, B and C in the coordinate space correspond to the planes A_s, B_s and C_s in the parameter coordinates.
As shown in Figure 11b, P is the intersection of A s , B s and C s in image space. According to the 3D Hough transform, we can get a plane composed of three points A , B and C in the parameter space, as shown in Figure 11a. In other words, the intersection point P of planes A s , B s and C s is the description of the plane where points A , B and C are located. In this way, the plane data of any three points in space coordinates can be obtained.
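In this improved view, the plane through three non-collinear points can be mapped directly to a single point (θ, φ, r) of the parameter space without voting; a minimal sketch of that mapping, using the plane normal (our own formulation, not code from the paper), is shown below.

```python
import numpy as np

def plane_parameters(A, B, C):
    """Improved-transform view: the plane through three non-collinear points
    maps to a single point (theta, phi, r) in the parameter space of
    Equation (3). Sketch: compute the unit normal and read the angles off it."""
    A, B, C = (np.asarray(p, float) for p in (A, B, C))
    n = np.cross(B - A, C - A)
    n /= np.linalg.norm(n)
    r = np.dot(n, A)
    if r < 0:                      # keep r non-negative by flipping the normal
        n, r = -n, -r
    theta = np.arccos(np.clip(n[2], -1.0, 1.0))
    phi = np.arctan2(n[1], n[0])
    return theta, phi, r

# Three points on the plane z = 10: normal along +Z, so theta = 0 and r = 10.
print(plane_parameters((0, 0, 10), (1, 0, 10), (0, 1, 10)))
```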
In Figure 12, the plane data S_1, S_2, S_3 and S_4 that the probe may pass through are obtained by inverse calculation from the target position. The freehand probe moves in the space, and the space plane P_1 where the current sector is located is obtained by the positioning device. The core of 3D reconstruction is to quickly and accurately place 2D images of multiple probe sectors into a 3D space. Through the improved 3D Hough transform in this paper, the current sector image of the probe can be quickly placed into the spatial plane model.
Steps of the improved 3D Hough transform proposed in this paper are shown in Table 3.
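As a simplified stand-in for the matching loop of Table 3, the sketch below compares the (θ, φ, r) parameters of the current sector directly with the pre-computed Sector Database entries and reports the closest one within a tolerance; the tolerance values are illustrative, and a failed match would trigger the database refinement described in Table 3.

```python
import numpy as np

def match_sector(current_params, sector_db, tol=(0.05, 0.05, 1.0)):
    """Match the (theta, phi, r) parameters of the current probe sector to the
    nearest pre-computed Sector Database entry. tol = (rad, rad, mm) is
    illustrative; None means no match, i.e. the database should be rebuilt
    with a higher vertical resolution, as in Table 3."""
    cur = np.asarray(current_params, float)
    tol = np.asarray(tol, float)
    best, best_dist = None, np.inf
    for idx, entry in enumerate(sector_db):
        diff = np.abs(np.asarray(entry, float) - cur)
        if np.all(diff <= tol):
            dist = np.linalg.norm(diff / tol)   # normalised distance
            if dist < best_dist:
                best, best_dist = idx, dist
    return best

db = [(0.0, 2 * np.pi * k / 8 - np.pi, 40.0) for k in range(8)]
print(match_sector((0.01, -np.pi + 0.02, 40.3), db))   # matches entry 0
```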

5. Reconstruction Experiment

The probe sector matching (PSM) proposed in this paper is verified in two ways: with a numerical model and with a real target. First, a numerical model with adjustable surface and medium parameters is established in MATLAB, using the k-Wave toolbox to create multiple probes that simulate the movement of a freehand probe in a 3D space. Pressure feedback and position feedback are obtained through the PSM and used to correct the distorted reconstructed image, after which the correct reconstructed image is obtained. For the second method, the real target is placed in a hemispherical container filled with a sound coupling medium, and a wireless linear probe with a spatial positioning device is used to acquire ultrasound images. The positioning device, composed of the attitude sensor and the positioning ball, is installed on the wireless probe; the process of obtaining the probe position is described in Section 3. A set of images, together with the corresponding spatial positions collected by the probe, is transmitted to the computer wirelessly. These images are then processed with the PSM in MATLAB to reconstruct the real target.

5.1. Experiment Preparation

The numerical model is based on Huygens' principle, which means that when a sound wave encounters an obstacle, the obstacle acts as a new sound source. First, a circular target is created in the XY plane. The spatial positions of the target and the probe are shown in Figure 12. According to Figure 5, two sets of ultrasonic sensors with vertical resolutions of 8 and 128 are established to simulate the freehand movement of the probe in the 3D space. The vertical resolution is the number of ultrasonic sensors and also the number of locations at which the ultrasonic probe collects images in space. As shown in Figure 13a, if the vertical resolution is 8, the ultrasonic probe collects images at eight positions evenly distributed in space. In Figure 13, the blue dots are the positions passed by the probe, and their initial positions define the surface along which the probe moves. The red dotted line is the reference sector, which is used for data normalization when the surface and the medium change. The center of the yellow ring target is located at the origin of the XY plane.
Table 4 describes the specification of the wireless linear ultrasound probe used in the real model experiment. The real model is also a ring-shaped object, which is placed in a hemispherical container filled with the coupling medium, as shown in Figure 14. The probe moves along the surface of the container to acquire a set of ultrasound images, which is used as the raw data for the PSM.

5.2. Numerical Model Reconstruction Result

In the ideal case where the surface and the medium are unchanged, the relative position between the yellow ring target and the probe is unchanged. The detection radius r of the probe is set to a constant greater than n . Acquisition and processing are completed at vertical resolutions of 8 and 128, respectively. Then, the image of the yellow ring target is synthesized, which is indicated by the blue line in Figure 15. The higher the vertical resolution, the closer the target image is to reality. It can be seen from the data cursor on the reference sector in Figure 16 that the distance between the probe and the ring target is n = 4 mm.
Based on the initial values in Table 5, interpolation and fitting methods are used to establish a continuously changing surface parameter curve, as shown in Figure 16. The horizontal axis represents the change in radians. The vertical axis represents the surface coefficient of variation.
Table 5 shows the initial values of the numerical model when the surface along which the probe moves changes. The vertical resolution Δv is 8, the speed of sound is 1500 m/s, and the ultrasound frequency is 1 kHz. n is the distance between the probe and the target on the reference sector, which is 4 mm. On the reference sector, it takes 2.7 μs for the probe to receive the echo, so 2.7 μs is taken as the reference time; when the surface changes, the echo time error is measured relative to 2.7 μs. In order to simulate surface changes, the initial value of D is varied between 1.75 mm and 7 mm, which also represents the change in the distance between the probe and the target. T and Δt denote the echo time and its error, respectively.
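Reading the tabulated T as the one-way travel time D/c, which reproduces the listed values (4 mm / 1500 m/s ≈ 2.7 μs), the rows of Table 5 can be regenerated with the short script below; this is only a check of the table's arithmetic, not the k-Wave simulation itself.

```python
# Check of the timing values in Table 5: with c = 1500 m/s, the reference
# travel time over n = 4 mm is about 2.7 us, and each row's dt is the
# difference between that row's travel time and the reference.
C = 1500.0                      # speed of sound, m/s
T_REF = 4e-3 / C                # reference travel time for D = 4 mm, seconds

for d_mm in (1.75, 2.5, 3.25, 4.0, 4.75, 5.5, 6.25, 7.0):
    t = d_mm * 1e-3 / C         # travel time for this probe-to-target distance
    print(f"D = {d_mm:4.2f} mm   T = {t * 1e6:3.1f} us   dt = {(t - T_REF) * 1e6:+3.1f} us")
```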
When the surface changes according to Figure 16, the ideal reconstruction algorithm cannot compensate the error by adjusting the working parameters of the probe. Therefore, except for the reference sector, the reconstructed images of other probe positions are not aligned correctly. The final reconstructed image is shown by the blue line in Figure 17.
The relative position of the probe and the target is D = 4 mm and remains unchanged. However, the transmission medium coefficient (TMC) of the probe changes. The initial values when the TMC changes in the numerical model are shown in Table 6. On the reference sector, the value of TMC is initialized to 1 and is used as the reference value of the echo time error when the TMC changes. The TMC values range from 2.77 to 0.75. The larger the value of the TMC, the weaker the transmission capacity of the medium and the longer the time for the echo to travel through the medium.
Similarly, on the basis of Table 6, interpolation and fitting are used to establish a continuously changing TMC parameter curve, as shown in Figure 18. The horizontal axis represents the change in radians. The vertical axis represents the transmission medium coefficient. When the TMC changes according to Figure 18, the final reconstructed image without the PSM is shown by the blue line in Figure 19, in which (a) and (b) are the results for Δ v = 8 and Δ v = 128.
The data of the wireless probe positioning ball, attitude sensor and pressure sensor mentioned above are processed according to the range of the surface and medium. Based on these data, the PSM monitors the change in distance between the probe and the target, and the force between the probe and the surface. The changes in the distance and the force both feed back to the wireless probe for adjusting the working parameters. As shown in Figure 20, the reconstruction of the surface changes is displayed in Figure 20a, and the reconstruction of the medium changes is displayed in Figure 20b. The red dotted line in the figure indicates the feedback information of the PSM output to the wireless probe in 3D space coordinates. The length of the dashed line indicates the feedback intensity of the probe at this point. The greater the intensity, the greater the distortion of the reconstructed image at this location. In this way, the distortion of the reconstructed image can be judged in advance. Therefore, it can be corrected during the reconstruction process to obtain a more accurate reconstructed image.
According to the PSM feedback, the reconstruction distortion due to the surface and medium changes is reduced. For each point of the ideal reconstruction obtained in Figure 15, the accumulated values of its X and Y coordinates are taken as the vertical axis and its angular position in radians as the horizontal axis, giving the ideal target curve in Figure 21, shown by the blue "+" marks. The reconstructed image curve corrected by the PSM is plotted with red dots. In Figure 21, the reconstruction with a surface change is shown in Figure 21a, and the reconstruction with a medium change is shown in Figure 21b. The coordinate curve of the image reconstructed with the PSM is basically consistent with the ideal case. Due to the spatial positioning error of the wireless probe and the non-linear relationship between the pressure feedback and the medium change, there is a certain deviation at some points, but the true shape of the target can be clearly appreciated. The images reconstructed by the PSM are shown in Figure 22 and Figure 23: Figure 22 is the case with a resolution of 8, and Figure 23 the case with a resolution of 128. The red line indicates the reconstructed image. Compared with Figure 17 and Figure 19, the quality of the reconstruction of the target has been significantly improved.
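A sketch of how such comparison curves can be computed is given below, under the reading that the "accumulated values of X and Y" of a point means the sum X + Y plotted against the point's angular position; the illustrative ring radius and noise level are ours, not the experimental data.

```python
import numpy as np

def coordinate_sum_curve(points_xy):
    """Curve of the kind compared in Figure 21: for each reconstructed point,
    plot the accumulated value X + Y against the point's angle in radians,
    with points ordered by angle. Sketch under our reading of the text."""
    pts = np.asarray(points_xy, float)
    angles = np.arctan2(pts[:, 1], pts[:, 0])
    order = np.argsort(angles)
    return angles[order], pts[order, 0] + pts[order, 1]

# Illustrative ring versus a slightly distorted reconstruction of it.
phi = np.linspace(-np.pi, np.pi, 128, endpoint=False)
ideal = np.c_[4 * np.cos(phi), 4 * np.sin(phi)]
recon = ideal * (1 + 0.05 * np.random.randn(128, 1))
a1, c1 = coordinate_sum_curve(ideal)
a2, c2 = coordinate_sum_curve(recon)
print(float(np.mean(np.abs(c1 - c2))))   # mean deviation between the two curves
```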

5.3. Real Target Reconstruction Result

The real target experimental data are shown in Figure 24. Because the probe is handheld, the distance between the probe and the target is uncertain during acquisition, and the medium in the container is also non-uniform. Figure 24 shows a set of cross-sectional images collected by the probe at various points in space, and Figure 25 shows the reconstructed image of the real target with the PSM.

6. Conclusions

This paper proposes a freehand 3D ultrasound reconstruction method, PSM, based on linear ultrasound transducers. Through the three steps described above, the PSM matches the current probe sector plane to a plane in the Sector Database, and the image of the probe sector is then placed in a 3D space to complete the 3D ultrasound reconstruction. The PSM is validated in two experiments: one with a numerical model and the other with a real physical model. In the presence of surface squeezing and medium changes, the experimental results show that the PSM effectively reduces the distortion caused by the shifted surface after pressing or by the inhomogeneity of the transmission medium.
This study has limitations. The reconstruction results still differ from the ideal case, mainly because of errors in the spatial positioning of the probe and the non-linearity between the pressure feedback and the change of the medium. To address these problems, future work to optimize the PSM may focus on the following aspects:
  • More realistic models for the surface and the medium changes.
  • A better pressure feedback device to account for multidimensional pressure.
  • Animal experiments that may help to further validate the PSM for applications in biological sciences.

Author Contributions

Conceptualization, X.C. and Y.P.; methodology, X.C. and H.C.; software, X.C.; validation, X.C., Y.P. and D.T.; formal analysis, D.T.; investigation, X.C.; resources, X.C.; data curation, X.C.; writing—original draft preparation, X.C.; writing—review and editing, X.C. and Y.P.; visualization, X.C.; supervision, H.C. and Y.P.; project administration, X.C.; funding acquisition, H.C. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported in part by the National Natural Science Foundation of China (grant Nos. 61771039 and 61872030) and the Shandong Province Major Science and Technology Innovation Project (grant No. 2019TSLH0206), and Y.P. was supported in part by the Fundamental Research Funds for the Central Universities through Beijing Jiaotong University (grant No. 2015JBM021).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Noble, J.A.; Boukerroui, D. Ultrasound image segmentation: A survey. IEEE Trans. Medical Imaging 2006, 25, 987–1010. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Ritter, F.; Boskamp, T.; Homeyer, A.; Laue, H.; Schwier, M.; Link, F.; Peitgen, H.-O. Medical image analysis. Comput. Phys. Commun. 2013, 2, 60–70. [Google Scholar] [CrossRef] [PubMed]
  3. Lazebnik, R.; Desser, T.S. Clinical 3D ultrasound imaging: Beyond obstetrical applications. Diagn. Imaging 2007, 1, 1–6. [Google Scholar]
  4. Sanches, J.; Bioucas-Dias, J.; Marques, J.S. Minimum Total Variation in 3D Ultrasound Reconstruction. In Proceedings of the IEEE International Conference on Image Processing 2005, Genova, Italy, 14–14 September 2005; Volume 3, p. III-597. [Google Scholar]
  5. Fenster, A.; Downey, D.; Cardinal, H.N. Three-dimensional ultrasound imaging. Phys. Med. Biol. 2001, 46, R67–R99. [Google Scholar] [CrossRef]
  6. Fenster, A.; Surry, K.; Smith, W.; Gill, J.; Downey, D.B. 3D ultrasound imaging: Applications in image-guided therapy and biopsy. Comput. Graph. 2002, 26, 557–568. [Google Scholar] [CrossRef]
  7. Chiu, B.; Egger, M.; Spence, D.J.; Parraga, G.; Fenster, A. Area-preserving flattening maps of 3D ultrasound carotid arteries images. Med Image Anal. 2008, 12, 676–688. [Google Scholar] [CrossRef]
  8. Wen, T.; Yang, F.; Gu, J.; Chen, S.; Wand, L.; Xie, Y. An adaptive kernel regression method for 3D ultrasound reconstruction using speckle prior and parallel GPU implementation. Neurocomputing 2018, 275, 208–223. [Google Scholar] [CrossRef]
  9. Solberg, O.V.; Lindseth, F.; Torp, H.; Blake, R.E.; Hernes, T.A.N. Freehand 3D Ultrasound Reconstruction Algorithms-A Review. Ultrasound Med. Biol. 2007, 33, 991–1009. [Google Scholar] [CrossRef]
  10. Juszczyk, J.; Galinska, M.; Pietka, E. Time Regarded Method of 3D Ultrasound Reconstruction. In Proceedings of the International Conference on Information Technologies in Biomedicine, Kamień Śląski, Poland, 18–20 June 2018; Springer: Cham, Switzerland, 2018; pp. 205–216. [Google Scholar]
  11. Solberg, O.V.; Lindseth, F.; Bø, L.E.; Muller, S.; Bakeng, J.B.L.; Tangen, G.A.; Hernes, T.A.N. 3D ultrasound reconstruction algorithms from analog and digital data. Ultrasonics 2011, 51, 405–419. [Google Scholar] [CrossRef]
  12. Jayarathne, U.L.; Moore, J.; Chen, E.C.S.; Pautler, S.E.; Peters, T.M. Real-Time 3D Ultrasound Reconstruction and Visualization in the Context of Laparoscopy. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Quebec City, QC, Canada, 11–13 September 2017; Springer: Cham, Switzerland, 2017; pp. 602–609. [Google Scholar]
  13. Gee, A.; Prager, R.; Treece, G.; Berman, L. Engineering a freehand 3D ultrasound system. Pattern Recognit. Lett. 2003, 24, 757–777. [Google Scholar] [CrossRef]
  14. Chen, Z.; Huang, Q. Real-time freehand 3D ultrasound imaging. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2018, 6, 74–83. [Google Scholar] [CrossRef]
  15. Coupé, P.; Hellier, P.; Azzabou, N.; Barillot, C. 3D Freehand Ultrasound Reconstruction based on Probe Trajectory. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Copenhagen, Denmark, 1–6 October 2005; Springer: Berlin/Heidelberg, Germany, 2005; pp. 597–604. [Google Scholar]
  16. Chen, X.; Chen, H.; Tao, D.; Xie, J.; Li, X. Ultrasonic Section Locating Method Based on Hough Transform. In Proceedings of the 2019 IEEE International Conference on Consumer Electronics—Taiwan (ICCE-TW), Yilan, Taiwan, 20–22 May 2019. [Google Scholar]
  17. Gilbertson, M.; Anthony, B.W. Force and Position Control System for Freehand Ultrasound. IEEE Trans. Robot. 2015, 31, 835–849. [Google Scholar] [CrossRef]
  18. Daoud, M.; Alshalalfah, A.; Al-Najar, M. GPU Accelerated Implementation of Kernel Regression for Freehand 3D Ultrasound Volume Reconstruction. In Proceedings of the 2016 IEEE EMBS Conference on Biomedical Engineering and Sciences (IECBES), Kuala Lumpur, Malaysia, 4–8 December 2016; pp. 586–589. [Google Scholar]
  19. Vo, Q.; Le, L.; Lou, E. A semi-automatic 3D ultrasound reconstruction method to assess the true severity of adolescent idiopathic scoliosis. Med Biol. Eng. Comput. 2019, 57, 2115–2128. [Google Scholar] [CrossRef] [PubMed]
  20. Daoud, M.; Alshalalfah, A.; Awwad, F.; Al-Najar, M. A Freehand 3D Ultrasound Imaging System using Open-Source Software Tools with Improved Edge-Preserving Interpolation. Int. J. Open Source Software Process. 2014, 5, 39–57. [Google Scholar] [CrossRef]
  21. Daoud, M.; Alshalalfah, A.; Awwad, F.; Al-Najar, M. Freehand 3D Ultrasound Imaging System Using Electromagnetic Tracking. In Proceedings of the 2015 International Conference on Open Source Software Computing (OSSCOM), Amman, Jordan, 10–13 September 2015; pp. 1–5. [Google Scholar]
  22. Huang, Q.; Xie, B.; Ye, P.; Chen, Z. 3-D Ultrasonic Strain Imaging Based on a Linear Scanning System. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2015, 62, 392–400. [Google Scholar] [CrossRef] [PubMed]
  23. Prevost, R.; Salehi, M.; Sprung, J.; Bauer, R.; Wein, W. Deep Learning for Sensorless 3D Freehand Ultrasound Imaging. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Quebec City, QC, Canada, 11–13 September 2017; Springer: Cham, Switzerland, 2017; pp. 628–636. [Google Scholar]
  24. Gao, H.; Huang, Q.; Xu, X.; Li, X. Wireless and sensorless 3D ultrasound imaging. Neurocomputing 2016, 195, 159–171. [Google Scholar] [CrossRef]
  25. Rezajoo, S.; Sharafat, A.R. Robust Estimation of Displacement in Real-Time Freehand Ultrasound Strain Imaging. IEEE Trans. Med Imaging 2018, 37, 1664–1677. [Google Scholar] [CrossRef]
  26. Schimmoeller, T.; Colbrunn, R.; Nagle, T.; Lobosky, M.; Neumann, E.; Owings, T.; Landis, B.; Jelovsek, J.; Erdemir, A. Instrumentation of off-the-shelf ultrasound system for measurement of probe forces during freehand imaging. J. Biomech. 2019, 83, 117–124. [Google Scholar] [CrossRef]
  27. Mozaffari, M.; Lee, W.S. Freehand 3-D ultrasound imaging: A systematic review. Ultrasound Med. Biol. 2017, 43, 2099–2124. [Google Scholar] [CrossRef] [Green Version]
  28. Mohamed, F.; Siang, C.V. A Survey on 3D Ultrasound Reconstruction Techniques. In Artificial Intelligence-Applications in Medicine and Biology; Mohamed, F., Siang, C.V., Eds.; IntechOpen: London, UK, 2019. [Google Scholar]
  29. Coupé, P.; Hellier, P.; Morandi, X.; Barillot, C. Probe trajectory interpolation for 3D reconstruction of freehand ultrasound. Med. Image Anal. 2007, 11, 604–615. [Google Scholar] [CrossRef] [Green Version]
  30. De Lorenzo, D.; Vaccarella, A.; Khreis, G.; Moennich, H.; Ferrigno, G.; De Momi, E. Accurate calibration method for 3D freehand ultrasound probe using virtual plane. Med. Phys. 2011, 38, 6710–6720. [Google Scholar] [CrossRef] [PubMed]
  31. Gemmeke, H.; Hopp, T.; Zapf, M.; Kaiser, C.G.; Ruiter, N. 3D ultrasound computer tomography: Hardware setup, reconstruction methods and first clinical results. Nucl. Instrum. Methods Phys. Res. Sect. A 2017, 873, 59–65. [Google Scholar] [CrossRef]
  32. Muller, T.; Stotzka, R.; Ruiter, N.; Schlote-Holubek, K.; Gemmeke, H. 3D Ultrasound Computer Tomography: Data Acquisition Hardware. In Proceedings of the IEEE Symposium Conference Record Nuclear Science 2004, Rome, Italy, 16–22 October 2004; Volume 5, pp. 2788–2792. [Google Scholar]
  33. Huang, Q.; Wu, B.; Lan, J.; Li, X. Fully automatic three-dimensional ultrasound imaging based on conventional B-scan. IEEE Trans. Biomed. Circuits Syst. 2018, 12, 426–436. [Google Scholar] [CrossRef] [PubMed]
  34. Song, F.; Zhou, Y.; Chang, L.; Zhang, H.K. Modeling space-terrestrial integrated networks with smart collaborative theory. IEEE Network 2019, 33, 51–57. [Google Scholar] [CrossRef]
  35. Song, F.; Ai, Z.; Zhou, Y.; You, I.; Raymond Choo, K.; Zhang, H.K. Smart collaborative automation for receive buffer control in multipath industrial networks. IEEE Trans. Ind. Inform. 2019. [Google Scholar] [CrossRef]
  36. Kiryati, N.; Eldar, Y.; Bruckstein, A.M. A probabilistic Hough transform. Pattern Recognit. 1991, 24, 303–316. [Google Scholar] [CrossRef]
  37. Song, F.; Zhu, M.; Zhou, Y.; You, I.; Zhang, H.K. Smart collaborative tracking for ubiquitous power IoT in edge-cloud interplay domain. IEEE Internet Things J. 2019. [Google Scholar] [CrossRef]
  38. Wang, R.Z.; Tao, D. Context-Aware Implicit Authentication of Smartphone Users Based on Multi-Sensor Behavior. IEEE Access. 2019, 7, 119654–119667. [Google Scholar] [CrossRef]
  39. Hwang, S.; Kim, C.; Beak, J.; Eom, H.; Lee, M. A study on Obstacle Detection Using 3D Hough Transform with corner. In Proceedings of the SICE Annual Conference 2010, Taipei, Taiwan, 18–21 August 2010; pp. 2507–2510. [Google Scholar]
  40. Qiu, W.; Ding, M.; Yuchi, M. Needle Segmentation Using 3D Quick Randomized Hough Transform. In Proceedings of the 2008 First International Conference on Intelligent Networks and Intelligent Systems, Wuhan, China, 1–3 November 2008; pp. 449–452. [Google Scholar]
  41. Xu, L.; Oja, E. Randomized Hough transform (RHT): Basic mechanisms, algorithms, and computational complexities. CVGIP: Image Underst. 1993, 57, 131–154. [Google Scholar] [CrossRef]
Figure 1. The probe and the imaging target in an ideal situation: (a) The distance is equal to the working radius of the ultrasound probe; (b) The distance is not equal to the working radius of the ultrasound probe.
Figure 2. The surface of the target changes: (a) The distance is less than the radius; (b) The distance is greater than the radius.
Figure 3. Method based on the Hough transform: (a) Target surface unchanged; (b) Target surface changed.
Figure 4. Horizontal resolution of the measured target.
Figure 5. Vertical resolution of the measured target.
Figure 6. Optical positioning of the probe: (a) Base station; (b) The optical positioning ball.
Figure 7. Probe with the attitude sensor and the positioning ball.
Figure 8. The probe with a pressure sensor: (a) Surface is not squeezed; (b) Surface is squeezed.
Figure 9. Error analysis of the sector: (a) The probe fits the surface; (b) The probe does not fit the surface.
Figure 10. The 3D Hough transform: (a) The parameter space; (b) The image space.
Figure 11. The improved 3D Hough transform: (a) The parameter space; (b) The image space.
Figure 12. Probe sector synthesis in a 3D space.
Figure 13. Numerical model: (a) Δv = 8; (b) Δv = 128.
Figure 14. The experiment with the real model.
Figure 15. The numerical model: (a) Δv = 8; (b) Δv = 128.
Figure 16. Surface change curve.
Figure 17. Reconstructed image when the surface changes without the probe sector matching (PSM): (a) Δv = 8; (b) Δv = 128.
Figure 18. The transmission medium coefficient (TMC) curve.
Figure 19. Reconstructed image when the TMC changes without the PSM: (a) Δv = 8; (b) Δv = 128.
Figure 20. The PSM feedback when Δv = 128: (a) The surface changes; (b) The TMC changes.
Figure 21. The target curve comparison: (a) With surface changes; (b) With TMC changes.
Figure 22. Reconstructed image with the PSM when Δv = 8: (a) With surface changes; (b) With TMC changes.
Figure 23. Reconstructed image with the PSM when Δv = 128: (a) With surface changes; (b) With TMC changes.
Figure 24. Cross-sectional images collected by the probe.
Figure 25. Reconstructed image of the real target with the PSM: (a) The real target; (b) The reconstructed image.
Table 1. Steps of the space plane inverse operation.

STEP 1: According to the target position, set the probe detection depth parameter r. Then, set the target point P_i(x, y, z) inside the measured target. The direct distance D between P_i(x, y, z) and the probe may change during the measurement procedure. The actual value of D is obtained from the spatial position of the probe and the pressure compensation, which will be analyzed in detail later. The value of D is generally much larger than l, which is the radius of the measured target M, as shown in Figure 4.
STEP 2: Select any straight line L that passes through P_i(x, y, z) and is parallel to the plane of the movement of the freehand probe.
STEP 3: According to real-time requirements and the situation regarding data storage space, the value of the vertical resolution Δv is determined and the number of sectors is set. A point on each sector is taken to form a set of spatial points S_1(x_1, y_1, z_1), …, S_n(x_n, y_n, z_n), where n = Δv. Using Equation (2), the space plane inverse operation can be completed to obtain the sectors S_1, …, S_n. The data for these sectors will be used to create the Sector Database (SD), which is used for the quick matching step later on.
Table 2. Restrictions of values.

The angle between the straight line and the positive direction of the Z axis: θ ∈ [0, π)
The angle between the projection of the line on the XY plane and the positive X axis: φ ∈ [−π, +π)
The distance from the origin to the target point: r ∈ [0, Max{P_1(x_1, y_1, z_1), …, P_n(x_n, y_n, z_n)}]
Table 3. Steps of the improved 3D Hough transform.

STEP 1: Take points B and C on the current plane of the probe sector.
STEP 2: Select a plane from the Sector Database (SD) and take a point A on it.
STEP 3: The selected points A, B and C are each subjected to the 3D Hough transform. If the result is a point in the parameter space, the current sector plane P_1 of the probe corresponds to S_3 of the database. Conversely, if the result is not a point, repeat STEP 2. If there is no corresponding plane after traversing all the sectors of the database, the spatial plane model is re-established and the vertical resolution Δv is increased.
Table 4. Specification of the wireless linear ultrasound probe.

Parameter            Value
Elements             128
Dimension            156 × 60 × 20 mm
Image frame rate     18 frames/s
Weight               220–250 g
Table 5. Initial values when the surface changes.

Δv = 8, speed of sound = 1500 m/s
Ultrasound frequency = 1 kHz, n = 4 mm

D (mm)    T (μs)    Δt (μs)
1.75      1.2       −1.5
2.5       1.7       −1
3.25      2.2       −0.5
4         2.7       0
4.75      3.2       0.5
5.5       3.7       1
6.25      4.2       1.5
7         4.7       2
Table 6. Initial values when the TMC changes.

Δv = 8, speed of sound = 1500 m/s
Ultrasound frequency = 1 kHz, n = D = 4 mm

TMC       T (μs)    Δt (μs)
2.77      11.2      8.5
2.04      7.7       5
1.28      4         1.3
1         2.7       0
0.85      2         −0.7
0.82      1.8       −0.9
0.81      1.7       −1
0.75      1.5       −1.2
