Article

Door State Recognition Method for Wall Reconstruction from Scanned Scene in Point Clouds

1 College of Computer Science and Engineering, Xi’an University of Technology, Xi’an 710048, China
2 Shaanxi Key Laboratory of Network Computing and Security Technology, Xi’an 710048, China
3 National Laboratory of Pattern Recognition/National Laboratory of Multimodal Artificial Intelligence System, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
4 School of Artificial Intelligence and Computer Science, Jiangnan University, 1800 Lihu Road, Wuxi 214122, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(5), 1149; https://doi.org/10.3390/math11051149
Submission received: 21 December 2022 / Revised: 6 February 2023 / Accepted: 20 February 2023 / Published: 25 February 2023
(This article belongs to the Section Mathematics and Computer Science)

Abstract
Doors are important elements of building façades in scanned point clouds. Accurate door detection is a critical step in building reconstruction and indoor navigation. However, recent door detection methods often obtain incomplete information and can only detect doors in a single state (open or closed). To improve this, a door state recognition method is proposed based on corner detection and straight-line fitting. Firstly, plane segmentation based on local features is introduced to obtain a structural division of the raw scanned data and extract the wall. Next, the bounding box of each plane is calculated to obtain the corner points, which are then combined with feature constraints to classify the elements as door or wall. Then, the boundary of each plane is extracted using normal vectors, and the disordered, discontinuous boundary points are fitted to straight lines based on projection. Finally, the state of the door is obtained by analyzing the angle between the straight lines of the wall and the door. The effectiveness of the proposed method is tested and evaluated on the Livingroom scenario of the ICL-NUIM dataset and the House scenario of the Room detection dataset. Furthermore, comparative experimental results indicate that our method can extract corner points and recognize the different states of doors effectively and robustly in different scenes.

1. Introduction

With the popularization and maturity of 3D data acquisition technology, indoor scene reconstruction has become an important research topic in computer vision and computer graphics. The reconstruction results are widely used in architectural design [1,2], robot navigation [3,4], game design [5] and so on. In indoor scenes, the door is the most important component of the wall and greatly affects the authenticity of the indoor 3D model. Therefore, building complete and accurate walls requires complete reconstruction of the door information. Moreover, the accurate state of the door is especially important in indoor robot navigation, as it determines the correct action for the robot to operate the door in the next step. However, recognizing the different states of a door remains challenging due to the incompleteness of the door structure during scanning and the presence of multiple doors in a wall.
In recent years, some automated door state recognition methods have been proposed [6,7,8,9]. In this work, we focus on the current methods for door detection and state recognition from point clouds. These methods can be roughly divided into two categories: feature-tailored methods [6,7,8,10,11,12,13,14,15,16,17,18] and deep learning methods [9,19,20].
Early feature-tailored detection methods are based on 2D images. Quintana et al. [10,11] proposed to detect the 3D positions of doors in different opening and closing states. The core idea of this algorithm is to convert the point cloud into an image with color and depth, which is then processed by traditional graphics algorithms and mapped back to three-dimensional space. However, in the process of converting the point cloud into an image, 3D information and some features of the raw point cloud data (PCD) are lost.
To improve the accuracy and reliability of detection, some scholars detect the door information directly in the point clouds. Borgsen et al. [12] utilized region growing segmentation and RANSAC [13] to obtain the candidate plane of the door; the boundary points of the candidate plane are then calculated to obtain the corresponding door frame. This method can only detect closed doors. Ochmann et al. [14] detected doors and windows through greedy single-link clustering based on the distance of the intersection between the reconstructed wall and a simulated scanning laser ray, but for a room with half-closed walls, such as balconies, these points may be classified as an outside area during segmentation. Xie et al. [15] proposed detecting doors by Delaunay triangulation, α-shape and the scanning trajectory. The method assumes that all doors are open during data acquisition, so only fully opened doors can be detected. Previtali et al. [16] proposed an improved RANSAC to segment the wall, in which windows and doors are detected by ray tracing and regularization, but it can only detect doors that are open or located in a side wall. Díaz-Vilariño et al. [17] extracted the boundary of the candidate door from its orthophoto by the generalized Hough transform, and then determined the state of the door through the geometric information of the point clouds in the region. Kakillioglu et al. [18] introduced an aggregate channel feature (ACF) learning method to verify whether a recognized opening is a real door, but this method can only detect an open door. Adán et al. [8] extracted open doors by detecting rectangular openings in the wall, while closed doors are detected through discontinuities in the RGB-D space and the geometric characteristics of the wall. The method can obtain different states of the door, but when the wall and the door have similar colors, a small region of the door may be labeled as wall.
With recent advances in deep learning, more work has investigated door detection with 3D point input. Chen et al. [19] trained a convolutional neural network on color images captured from different viewing angles to detect doors. It achieves a very small error rate, but a large number of points are missed during detection. Cheng et al. [20] proposed a semantics-guided method for reconstructing indoor navigation elements from RGB-D sensor data: a deconvolution network combines local visual information with geometric features to restore clear target boundaries, and RGB information and depth cues are fused to improve target detection performance. Yang et al. [9] proposed U-Net-based door detection under the constraint of the architectural structure. This method can identify most doors effectively, but it fails when the structure is complicated and door data are missing.
In summary, most recent indoor detection work focuses on door recognition and considers only doors in a single state (the open door), and the identified door size and position differ from those in the actual scene. Compared to the existing methods above that require training on scene datasets, our method detects door states directly from the scanned data and requires no additional training on scenes. The main contributions of our work can be summarized as:
(1) A comprehensive framework is constructed for door state recognition, combining wall extraction based on local features with door detection based on corner points and straight-line fitting. It can meet the needs of indoor navigation robots for door-opening actions under different door states.
(2) A corner point detection method is introduced to determine the spatial location and size of the door, which solves the problem of incomplete door recognition and improves the recognition rate for doors with incomplete structure.
(3) A reconstruction method for the detailed wall structure is implemented based on the recognized door states, which provides a more accurate scene model.
The organization of this paper is as follows. Section 2 presents the overview of our method. In Section 3 we describe the detailed algorithm of door state recognition. Section 4 introduces and analyzes the experimental results. Finally, conclusions are drawn in Section 5.

2. Overview

The input to our method is the raw scanned data of an indoor scene, represented as unorganized point cloud data. Figure 1 illustrates the overview of our proposed method, which essentially consists of three parts. The main steps of the pipeline are as follows:
Part 1: Wall extraction and segmentation—The random sample consensus (RANSAC) algorithm is used to segment the original scene. The wall is then extracted by constructing a histogram of the z-coordinate distribution of the point cloud data, and the region growing algorithm is used to divide the wall structure.
Part 2: Door recognition—To accurately obtain the complete structure and features (volume and height) of the planes, a corner point detection method based on the AABB (axis-aligned bounding box) and OBB (oriented bounding box) is proposed. To acquire the corner points, we determine whether each plane is parallel to the XOZ plane, and the volume and height of each plane are then calculated from its corner points. Next, the door is recognized based on an analysis of the volume and height of each plane. To observe the state of the door more clearly, the wall point cloud in Figure 1 is shown from the rear view.
Part 3: Door state recognition—To recognize the different states of the door rather than a single state, a door state recognition method is proposed that consists of boundary point extraction based on normal vectors and straight-line fitting based on the least-squares method. According to the angle between the fitted straight lines, the state of the door is determined to be closed, open or semi-open.

3. Method

3.1. Wall Segmentation and Extraction

Generally, the wall is represented as a regular plane and can be extracted by recognizing the vertical planes. First, the scene is segmented into several planes (Figure 2a) by RANSAC. Then, the wall is extracted by recognizing the vertical planes based on the height of the whole scene, which is calculated from a histogram of the z-coordinate distribution.
A histogram of the z-coordinate distribution (Figure 2b) is constructed to analyze the peak values according to the distribution of the scene, and the height between ground and ceiling is obtained from these peaks. The z coordinate of the scene is divided into multiple intervals; in this paper, the number of intervals lies within the range [90, 100], and the group distance μ is adapted according to the number of intervals. Distinct peaks in the histogram appear at the z coordinates of the floor and ceiling points: the peaks at the smaller and larger z coordinates correspond to the ground and ceiling points, respectively. The difference between the z coordinates of these peaks then gives the height of the whole scene, $H_{whole}$. The average z coordinate of the ground is recorded as $F_z$, and that of the ceiling as $C_z$. Thus, the z-coordinate range of the ground is $[F_z - \mu, F_z + \mu]$, that of the ceiling is $[C_z - \mu, C_z + \mu]$, and the height of the whole scene lies in $[C_z - F_z - 2\mu, C_z - F_z + 2\mu]$.
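As a concrete illustration of this histogram step, the following is a minimal Python sketch (the paper's implementation is in C++). The function name `floor_ceiling_z` and the choice of taking the two most populated bins as the floor/ceiling peaks are our own assumptions, not the paper's exact procedure:

```python
import numpy as np

def floor_ceiling_z(points, n_bins=95):
    """Estimate floor and ceiling heights from the z-coordinate histogram.

    The two strongest histogram peaks are assumed to correspond to the
    floor and ceiling planes; the scene height is their difference.
    `n_bins` lies in the paper's [90, 100] range; the exact value is a
    free parameter.
    """
    z = np.asarray(points)[:, 2]
    counts, edges = np.histogram(z, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # take the two most populated bins as candidate floor/ceiling peaks
    top2 = np.argsort(counts)[-2:]
    f_z, c_z = sorted(centers[top2])
    mu = edges[1] - edges[0]            # group distance
    h_whole = c_z - f_z                 # height of the whole scene
    return f_z, c_z, mu, h_whole
```

The bin centers only approximate $F_z$ and $C_z$; a refinement would average the actual z values falling inside each peak bin.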
Considering that the height of the wall is close to the height $H_{whole}$ of the whole scene, while the height of a door is roughly $3/4 \, H_{whole}$, we set the height threshold range of the wall as $[H_{whole} \times 3/4, \, H_{whole}]$.
According to the previous steps, we obtained all the walls from the raw scene. Then the curvature and the normal vector of the wall are estimated by PCA [21]. Based on these two features, the wall is segmented into different planes (Figure 2c) via the region growing method [22].
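The region-growing step can be sketched as follows; this is a simplified illustration, not the PCL-based implementation cited in [22]. It uses brute-force nearest-neighbor search and grows a region only over the normal-similarity criterion (the curvature criterion is omitted), and the helper name `region_grow` and its defaults are our own:

```python
import numpy as np

def region_grow(points, normals, k=8, angle_th_deg=10.0):
    """Group points into planar regions by normal similarity.

    A seed's k nearest neighbors join its region when their normals
    deviate by less than `angle_th_deg`. Brute-force kNN for clarity.
    Returns an integer region label per point.
    """
    pts = np.asarray(points, dtype=float)
    nrm = np.asarray(normals, dtype=float)
    cos_th = np.cos(np.radians(angle_th_deg))
    labels = np.full(len(pts), -1)
    region = 0
    for seed in range(len(pts)):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = region
        while stack:
            i = stack.pop()
            d = np.linalg.norm(pts - pts[i], axis=1)
            for j in np.argsort(d)[1:k + 1]:   # skip the point itself
                if labels[j] == -1 and abs(nrm[i] @ nrm[j]) > cos_th:
                    labels[j] = region
                    stack.append(j)
        region += 1
    return labels
```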

3.2. Corner Detection for Door Recognition

Since the corner points reflect the shape of a plane, we first detect them based on the bounding box, which is used to acquire the spatial location and size of each segmented plane and thereby obtain the features of the door.

3.2.1. Corner Detection

The AABB and OBB are introduced to extract the corner points of different planes. For the plane parallel to the XOZ, the corner points are detected by AABB.
The vertices of the AABB are $A(x_{min}, y_{min}, z_{min})$, $B(x_{max}, y_{min}, z_{min})$, $C(x_{min}, y_{max}, z_{min})$, $D(x_{max}, y_{max}, z_{min})$, $E(x_{min}, y_{min}, z_{max})$, $F(x_{max}, y_{min}, z_{max})$, $G(x_{min}, y_{max}, z_{max})$ and $H(x_{max}, y_{max}, z_{max})$. The surfaces of the AABB are $ABCD: z - z_{min} = 0$; $CDGH: y - y_{max} = 0$; $BDFH: x - x_{max} = 0$; $ABEF: y - y_{min} = 0$; $ACGE: x - x_{min} = 0$; $EFGH: z - z_{max} = 0$. The corner points of the AABB obtained in this way are shown in Figure 3a.
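Since the AABB corners are simply all combinations of the per-axis extrema, this step can be sketched in a few lines of Python (the paper's implementation is in C++; the helper name `aabb_corners` is our own):

```python
import numpy as np
from itertools import product

def aabb_corners(points):
    """Eight corner points of the axis-aligned bounding box.

    Returns an (8, 3) array containing every combination of the
    per-axis minima and maxima, i.e. the vertices A..H in the text.
    """
    pts = np.asarray(points)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    return np.array([[x, y, z] for x, y, z in product(*zip(lo, hi))])
```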
For a plane that is not parallel to the XOZ plane, the OBB is used to extract the corner points.
Three principal axis directions $\omega_1, \omega_2, \omega_3$ of the object are calculated by PCA and then normalized as $\gamma_1, \gamma_2, \gamma_3$ via Schmidt orthogonalization. The side lengths $l_1, l_2, l_3$ $(l_1 > l_2 > l_3)$ and the center point $C(x_c, y_c, z_c)$ of the OBB are obtained by projecting the point cloud onto the unit vectors $\gamma_1, \gamma_2, \gamma_3$. The eight intersections of the different surfaces are the vertices of the bounding box, which are also the corner points of the wall (Figure 3b).
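A minimal Python sketch of the PCA-based OBB follows. Note one simplification: the paper orthonormalizes the PCA axes via Schmidt orthogonalization, whereas `np.linalg.eigh` already returns orthonormal eigenvectors, so that step is implicit here; the helper name `obb` is our own:

```python
import numpy as np

def obb(points):
    """Oriented bounding box via PCA (sketch).

    The principal axes are the eigenvectors of the covariance matrix;
    side lengths are the extents of the points projected onto the
    orthonormal axes, sorted so that l1 >= l2 >= l3.
    """
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    cov = np.cov((pts - center).T)
    _, axes = np.linalg.eigh(cov)        # columns are orthonormal axes
    proj = (pts - center) @ axes         # coordinates in the OBB frame
    lengths = proj.max(axis=0) - proj.min(axis=0)
    order = np.argsort(lengths)[::-1]    # l1 >= l2 >= l3
    return center, axes[:, order], lengths[order]
```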

3.2.2. Door Recognition

After detecting the corner points, the plane can be classified as wall, door or other object according to its size. It may be observed that the height of a door is generally 3/4 of the wall, so the height ratio is a criterion to recognize the door.
First, the volume $V_{box}$ of the bounding box is calculated from its corner points. For the AABB and the OBB, the volume is calculated by Equations (1) and (2), respectively:
$V_{box} = (max\_AABB_x - min\_AABB_x) \times (max\_AABB_y - min\_AABB_y) \times (max\_AABB_z - min\_AABB_z)$    (1)
$V_{box} = l_1 \times l_2 \times l_3$    (2)
where $(max\_AABB_x, min\_AABB_x)$, $(max\_AABB_y, min\_AABB_y)$ and $(max\_AABB_z, min\_AABB_z)$ are the maximum and minimum coordinates of the AABB along the x-, y- and z-axes, and $l_1$, $l_2$ and $l_3$ are the side lengths of the OBB. The plane with the largest volume is regarded as the wall.
The height of each plane is then calculated, where the height of the wall is $h_{wall}$ and the heights of the remaining planes are $h_1, h_2, \dots$. Finally, the height ratio of each plane is analyzed. If the height of a plane lies within $[h_{wall} \times 3/4 - \varphi, \, h_{wall}]$ $(0 \le \varphi \le 0.5)$, the plane is identified as a door; otherwise, it is identified as another object.
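The volume-and-height classification rule can be sketched as follows; `classify_planes` and its default `phi` are our own illustrative choices within the paper's stated range $0 \le \varphi \le 0.5$:

```python
def classify_planes(heights, volumes, phi=0.25):
    """Classify segmented planes as wall / door / other.

    The plane with the largest bounding-box volume is taken as the
    wall; a plane whose height falls in [3/4 * h_wall - phi, h_wall]
    is a door; everything else is another object.
    """
    wall = max(range(len(volumes)), key=lambda i: volumes[i])
    h_wall = heights[wall]
    labels = []
    for i, h in enumerate(heights):
        if i == wall:
            labels.append('wall')
        elif h_wall * 0.75 - phi <= h <= h_wall:
            labels.append('door')
        else:
            labels.append('other')
    return labels
```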

3.3. Door State Recognition Based on Boundary Extraction and Straight-Line Fitting

3.3.1. Boundary Extraction

We use the normal vector and the angles between the projected neighbors of each point to extract the boundary points of the PCD. The specific process of boundary extraction is as follows.
(1)
For any point $p_i$ in the indoor scene data $P = \{p_0, p_1, p_2, \dots, p_n\}$, the covariance matrix $M_i = \frac{1}{k}\sum_{j=0}^{k-1}(N_j - \bar{p})(N_j - \bar{p})^T$ is established from the k-nearest neighbor points $N_j = \{x_j, y_j, z_j\}$ $(j = 0, 1, \dots, k-1)$ of $p_i$, where $\bar{p}$ is the centroid of the neighborhood. The eigenvalues of $M_i$ are positive and ordered as $\lambda_0 \le \lambda_1 \le \lambda_2$, and the normal vector $n_i$ is given by the eigenvector corresponding to $\lambda_0$.
(2)
We project the point $p_i$ and its neighbor points $N_j$ onto the plane perpendicular to the normal vector $n_i$, as in Figure 4. $p_i'$ and $N_j'$ are, respectively, the projections of $p_i$ and $N_j$ (Figure 5), and the vector $u_j$ $(j = 0, 1, \dots, k-1)$ points from $p_i'$ to $N_j'$. The angle $\alpha_j$ between a reference projection vector $u_k$ and each $u_j$ is calculated as
$\alpha_j = \begin{cases} \alpha_j, & \beta_j < 90° \\ 360° - \alpha_j, & \beta_j \ge 90° \end{cases}$    (3)
where $\beta_j$ is the angle between $u_k$ and $n_i \times u_j$.
(3)
The angles $\alpha_j$ are sorted in descending order, and the angle between adjacent vectors is calculated by Equation (4). If the maximum angle $\delta_j$ is greater than the angle threshold $\varepsilon_{th}$, the point is identified as a boundary point. The search continues until all points of the PCD have been processed and all boundary points are identified. The extraction result of the boundary points is shown in Figure 6.
$\delta_j = \begin{cases} 360° - \alpha_j, & j = 0 \\ \alpha_{j-1} - \alpha_j, & j = 1, 2, \dots, k-2 \\ \alpha_j, & j = k-1 \end{cases}$    (4)
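The boundary test of steps (1)–(3) can be sketched as follows. This is a simplified illustration, not the paper's code: it uses brute-force nearest-neighbor search, measures the largest angular gap among the projected neighbor directions (equivalent to the maximum $\delta_j$ above), and the helper name `boundary_mask` with its parameter defaults are our own assumptions:

```python
import numpy as np

def boundary_mask(points, k=15, eps_deg=90.0):
    """Mark boundary points by the largest angular gap among projected
    neighbor directions.

    For each point, its k nearest neighbors are projected onto the
    local tangent plane; if the sorted directions leave a gap larger
    than `eps_deg`, the point lies on the boundary.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    mask = np.zeros(n, dtype=bool)
    for i in range(n):
        d = np.linalg.norm(pts - pts[i], axis=1)
        nbrs = pts[np.argsort(d)[1:k + 1]]
        # normal = eigenvector of the smallest covariance eigenvalue
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        _, v = np.linalg.eigh(cov)
        normal = v[:, 0]
        # project neighbor offsets onto the tangent plane
        off = nbrs - pts[i]
        off = off - np.outer(off @ normal, normal)
        # angles of the projected vectors in a tangent-plane basis
        u = off[0] / (np.linalg.norm(off[0]) + 1e-12)
        w_axis = np.cross(normal, u)
        ang = np.degrees(np.arctan2(off @ w_axis, off @ u)) % 360.0
        ang = np.sort(ang)
        gaps = np.diff(np.concatenate([ang, [ang[0] + 360.0]]))
        mask[i] = gaps.max() > eps_deg
    return mask
```

On a flat patch, interior points see neighbors in all directions (small gaps), while edge points leave a large empty sector, which is exactly the cue Equation (4) detects.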

3.3.2. Straight-Line Fitting

The boundary points obtained by the above steps are disordered and unstructured, so straight-line fitting is performed; the angle between the fitted straight lines of the wall and the door is then calculated and analyzed to accurately recognize the state of the door. The projection points are fitted as straight lines using the least-squares method [23]. Given the projection points $\{p_1(x_1, y_1), p_2(x_2, y_2), \dots, p_n(x_n, y_n)\}$ and the fitted straight line $y = ax + b$, the fitting deviation $e^2$ is obtained using Equation (5):
$e^2 = \sum_{i=1}^{n} \left( y_i - (a x_i + b) \right)^2$    (5)
The partial derivatives of Equation (5) with respect to a and b are given in Equation (6):
$\begin{cases} \dfrac{\partial e^2}{\partial a} = \sum_{i=1}^{n} 2\left(y_i - (a x_i + b)\right)(-x_i) = 2\sum_{i=1}^{n}\left(a x_i^2 + b x_i - x_i y_i\right) \\ \dfrac{\partial e^2}{\partial b} = \sum_{i=1}^{n} 2\left(y_i - (a x_i + b)\right)(-1) = 2\sum_{i=1}^{n}\left(a x_i + b - y_i\right) \end{cases}$    (6)
Setting $\frac{\partial e^2}{\partial a} = 0$ and $\frac{\partial e^2}{\partial b} = 0$ to minimize $e^2$, the equation group (Equation (7)) is solved to obtain a and b:
$\begin{cases} \left(\sum_{i=1}^{n} x_i^2\right) a + \left(\sum_{i=1}^{n} x_i\right) b = \sum_{i=1}^{n} x_i y_i \\ \left(\sum_{i=1}^{n} x_i\right) a + n b = \sum_{i=1}^{n} y_i \end{cases}$    (7)
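Equation (7) is a 2×2 linear system and can be solved directly. A small Python sketch (the helper name `fit_line` is our own):

```python
import numpy as np

def fit_line(points_2d):
    """Least-squares line y = a*x + b through projected boundary
    points, solving the normal equations of Equation (7) directly."""
    p = np.asarray(points_2d, dtype=float)
    x, y = p[:, 0], p[:, 1]
    n = len(x)
    A = np.array([[np.sum(x * x), np.sum(x)],
                  [np.sum(x),     n        ]])
    rhs = np.array([np.sum(x * y), np.sum(y)])
    a, b = np.linalg.solve(A, rhs)
    return a, b
```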

3.3.3. Door States Recognition

Generally, for an open door, the angle between the fitted straight-lines of the wall and door is greater than or equal to 90°. Similarly, if the angle between the fitted straight-lines is within the range of [0°, 5°) or [5°, 90°), the door is recognized as closed or semi-open, respectively. Therefore, it is crucial to analyze the angle between the straight-lines to recognize the door states.
The two fitted straight lines are $L_1: y = a_1 x + b_1$ and $L_2: y = a_2 x + b_2$, where $a_1$, $a_2$, $b_1$, $b_2$ are known coefficients. The intersection $\zeta(x_{12}, y_{12})$ of $L_1$ and $L_2$ is acquired by solving the simultaneous equations of the lines, and direction vectors are then determined to calculate the angle between $L_1$ and $L_2$. The specific procedure is as follows.
(1)
Construct the equation group (Equation (8)) of $L_1$ and $L_2$ to work out the intersection $\zeta(x_{12}, y_{12})$.
$\begin{cases} y = a_1 x + b_1 \\ y = a_2 x + b_2 \end{cases}$    (8)
(2)
Divide the PCD of the two straight lines into 10 portions and analyze the point density near the intersection $\zeta$ of $L_1$ and $L_2$. The average x coordinate of the portion with the smallest density is used to obtain the corresponding y coordinate on its line, giving the point $\theta(x_\theta, y_\theta)$.
(3)
Calculate the slopes $k_{s1}$ of $L_1$ and $k_{s2}$ of $L_2$ from the intersection $\zeta$ and the point $\theta(x_\theta, y_\theta)$ using Equation (9):
$k_s = \dfrac{y_\theta - y_{12}}{x_\theta - x_{12}}$    (9)
(4)
Determine the angle between $L_1$ and $L_2$ based on $k_{s1}$ and $k_{s2}$. If $k_{s1} \cdot k_{s2} = -1$, then the angle $\beta = 90°$. Otherwise, $\beta$ is calculated by Equation (10):
$\beta = \begin{cases} \arctan\dfrac{k_{s2} - k_{s1}}{1 + k_{s1} k_{s2}}, & 0 \le \arctan\dfrac{k_{s2} - k_{s1}}{1 + k_{s1} k_{s2}} \le \pi \\ \arctan\dfrac{k_{s2} - k_{s1}}{1 + k_{s1} k_{s2}} + \pi, & \arctan\dfrac{k_{s2} - k_{s1}}{1 + k_{s1} k_{s2}} < 0 \end{cases}$    (10)
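The angle computation and the state thresholds of Section 3.3.3 can be sketched as follows; `door_state` is a hypothetical helper, and for brevity it takes the two slopes directly rather than re-deriving them from Equation (9):

```python
import numpy as np

def door_state(k1, k2):
    """Classify the door state from the slopes of the fitted wall and
    door lines. Thresholds follow the text: [0, 5) degrees -> closed,
    [5, 90) -> semi-open, >= 90 -> open."""
    if k1 * k2 == -1.0:                 # perpendicular lines
        beta = 90.0
    else:
        beta = np.degrees(np.arctan((k2 - k1) / (1.0 + k1 * k2)))
        if beta < 0.0:                  # map negative branch into [0, 180)
            beta += 180.0
    if beta < 5.0:
        return beta, 'closed'
    if beta < 90.0:
        return beta, 'semi-open'
    return beta, 'open'
```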

3.4. Wall Reconstruction

After determining the door state, the wall and the door are reconstructed according to the up-sampling based on corner points to fit the plane. The implementation process is described as follows.
(1)
Connect the corner points and select the longest edge $L_m$; with the sampling number $N_{up}$, the sampling distance is $dis_{up} = L_m / N_{up}$. Starting from the corner point with the smallest coordinates, points are then output in order at intervals of $dis_{up}$ to perform the up-sampling.
(2)
RANSAC is used to fit the plane. Three points are randomly selected to determine a plane. The points within the distance threshold are added to the plane, and the plane is updated until all points are processed to complete the plane fitting. The reconstruction result based on the RANSAC is represented in Figure 7.
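Step (2) is the classical three-point RANSAC plane fit. A minimal Python sketch follows (the paper's implementation is in C++; `ransac_plane` with its iteration count and distance threshold are our own illustrative choices):

```python
import numpy as np

def ransac_plane(points, n_iters=200, dist_th=0.05, rng=None):
    """Fit a dominant plane with RANSAC: repeatedly pick three points,
    form a plane, and keep the model with the most inliers within
    `dist_th`. Returns (unit normal, offset) and the inlier mask."""
    pts = np.asarray(points, dtype=float)
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(pts), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:            # degenerate (collinear) sample
            continue
        n = n / norm
        d = np.abs((pts - p0) @ n)  # point-to-plane distances
        inliers = d < dist_th
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, p0 @ n)
    return best_model, best_inliers
```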

4. Experimental Results

To verify the effectiveness and robustness of our proposed method, experiments are performed on the Livingroom scenario in the ICL-NUIM dataset [24], and two scenarios in the Room detection dataset [25] of UZH Visualization and Multimedia Lab, i.e., the House scenario and a scenario containing multiple doors. Our method is implemented using C++ and run on a desktop PC with an Intel Core i7-7700 CPU and AMD Radeon R5 430 GPU.

4.1. Door Detection and Door State Recognition

Corner point detection is a key step to acquire door elements in our method, thus we show the results of corner point detection first (Figure 8, Figure 9 and Figure 10). Then, the results of door element recognition are displayed in Table 1. Additionally, we show the results of door state recognition in detail containing its corresponding lines by straight-line fitting and the angle between the wall and the door (Table 2). Finally, the results of wall reconstruction are illustrated in Figure 11.
Figure 8b–d, Figure 9b and Figure 10b are the corners detected by AABB for the plane parallel to the XOZ, Figure 9c and Figure 10c,d are the corners detected by OBB for the plane not parallel to the XOZ, and the detected corner points are marked in red in Figure 8, Figure 9 and Figure 10. The results show that our method can obtain accurate corner points whether the plane is parallel to the XOZ or not.
The plane classification results are displayed in Table 1. The volume, height and plane category are determined from the corner points of each plane, and the real height of each plane is acquired by manual measurement of the raw scanned data. According to Table 1, our method obtains the correct classification of each plane as door, wall or other object.
Our method can detect the different states of the doors in a scene, and the reconstruction of the wall reflects the door state recognition results more intuitively. After recognizing the door elements, the state of each door is determined based on boundary extraction and straight-line fitting. The results of boundary extraction, straight-line fitting and wall reconstruction are exhibited in Figure 11. The final door states are given in Table 2, together with the linear equations for the different planes, the direction vectors and the angles between them. For example, the wall in the Livingroom scene is classified into three planes, and the linear equations obtained by straight-line fitting and the direction vectors corresponding to the three planes are determined (Table 2). The results in Figure 11 and Table 2 indicate that our method can recognize the accurate state of a door, and it also obtains satisfactory results in scenes containing multiple candidate doors.

4.2. Comparisons

In this section, we verify the robustness and effectiveness of the proposed method qualitatively and quantitatively.
Figure 12 compares the corners detected by our method with two traditional detection methods, i.e., the Harris-based method and the curvature-based method. The Harris-based method incorrectly detects corner information inside the doors and walls instead of the correct corner points (Figure 12a), and similarly, the curvature-based method also generates a large number of errors (Figure 12b). For the curvature-based method, we set the number of returned corner points to 400, which is much larger than the number of correct corner points, yet the returned corner points are still incorrect. Compared with these two methods, the corner point extraction for the wall based on our method obtains accurate results (Figure 12c). We also report the number of corners and the processing time of corner detection in Table 3. Our method eliminates the errors that exist in the other two methods and thus obtains a more accurate result. Regarding the processing time of corner detection, the curvature-based method is the most efficient, followed by our method, while the Harris-based method takes the longest. From the wall 2 and wall 3 data in Table 3, the processing time of our method is almost half that of the Harris-based method.
In Table 4, we compare the results of our method with the door states in the real scenes. It can be seen that the door states detected by our method are consistent with the real scenes, and doors with multiple states in the same wall can be accurately identified and analyzed.

5. Conclusions

In this paper, a door state recognition method is proposed based on the corner detection and straight-line fitting from the scanned indoor scene data. The indoor scene is segmented into several planes, of which the bounding boxes are obtained to detect the corner points. Then the door elements in the wall are identified according to the corner points of each plane and the prior knowledge of the door. Next, we use the angle between the normal vectors to extract the boundary points, which are then projected and fitted to generate the straight boundary lines. Finally, the different states of the door in the wall are recognized by calculating the angle between the straight boundary lines. Experimental results show that the proposed method can quickly and accurately identify the different states of the door in the wall.
Although our approach can obtain good detection results, due to the lack of accurate corner points of the non-rectangular planes, the method cannot identify non-rectangular doors in different states. In the future, we will conduct in-depth research on irregularly shaped doors, and optimize the algorithm framework to improve the robustness of the method.

Author Contributions

Conceptualization, X.N. and M.W.; formal analysis, Z.L. and J.Z.; methodology, M.W. and Y.W.; software, M.W.; supervision, X.N. and Y.W.; writing—original draft, L.W.; writing—review and editing, Z.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (Nos. 61871320, 61872291, U21A20515, 62271074, 61972459, 61971418, U2003109, 62171321, 62071157, 62162044 and 32271983).

Data Availability Statement

Not applicable.

Acknowledgments

We would like to thank the reviewers and the editor for their valuable reviews. We also thank ICL-NUIM and the UZH Visualization and Multimedia Lab for their open datasets.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Cui, Y.; Li, Q.; Yang, B.; Xiao, W.; Chen, C.; Dong, Z. Automatic 3-D Reconstruction of Indoor Environment with Mobile Laser Scanning Point Clouds. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 3117–3130.
2. Cui, Y.; Li, Q.; Dong, Z. Structural 3D reconstruction of indoor space for 5G signal simulation with mobile laser scanning point clouds. Remote Sens. 2019, 11, 2262.
3. Li, J.; Yao, Y.; Duan, P.; Chen, Y.; Li, S.; Zhang, C. Studies on three-dimensional (3D) modeling of UAV oblique imagery with the aid of loop-shooting. Int. J. Geo-Inf. 2018, 7, 356.
4. Díaz-Vilariño, L.; Boguslawski, P.; Khoshelham, K.; Lorenzo, H. Obstacle-aware indoor pathfinding using point clouds. Int. J. Geo-Inf. 2019, 8, 233.
5. Szwoch, M.; Bartoszewski, D. 3D optical reconstruction of building interiors for game development. In Proceedings of the 11th International Image Processing and Communications Conference (IP&C 2019), Bydgoszcz, Poland, 11–13 September 2020; pp. 114–124.
6. Zheng, Y.; Peter, M.; Zhong, R.; Elberink, S.; Zhou, Q. Space subdivision in indoor mobile laser scanning point clouds based on scanline analysis. Sensors 2018, 18, 1838.
7. Jarząbek-Rychard, M.; Lin, D.; Maas, H. Supervised Detection of Façade Openings in 3D Point Clouds with Thermal Attributes. Remote Sens. 2020, 12, 543.
8. Adán, A.; Quintana, B.; Prieto, S.A.; Bosché, F. An autonomous robotic platform for automatic extraction of detailed semantic models of buildings. Autom. Constr. 2020, 109, 102963.
9. Yang, J.; Kang, Z.; Zeng, L.; Akwensi, P.H.; Sester, M. Semantics-guided reconstruction of indoor navigation elements from 3D colorized points. ISPRS J. Photogramm. Remote Sens. 2021, 173, 238–261.
10. Quintana, B.; Prieto, S.A.; Adán, A.; Bosché, F. Door detection in 3D colored laser scans for autonomous indoor navigation. In Proceedings of the International Conference on Indoor Positioning and Indoor Navigation, Madrid, Spain, 4–7 October 2016; pp. 1–8.
11. Quintana, B.; Prieto, S.A.; Adán, A.; Bosché, F. Door detection in 3D coloured point clouds of indoor environments. Autom. Constr. 2018, 85, 146–166.
12. Zu Borgsen, S.M.; Schöpfer, M.; Ziegler, L.; Wachsmuth, S. Automated door detection with a 3D-sensor. In Proceedings of the Canadian Conference on Computer and Robot Vision, Montreal, QC, Canada, 5–9 May 2014; pp. 276–282.
13. Schnabel, R.; Wahl, R.; Klein, R. Efficient RANSAC for point-cloud shape detection. Comput. Graph. Forum 2007, 26, 214–226.
14. Ochmann, S.; Vock, R.; Wessel, R.; Klein, R. Automatic reconstruction of parametric building models from indoor point clouds. Comput. Graph. 2016, 54, 94–103.
15. Xie, L.; Wang, R. Automatic indoor building reconstruction from mobile laser scanning data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W7, 417–422.
16. Previtali, M.; Díaz-Vilariño, L.; Scaioni, M. Towards automatic reconstruction of indoor scenes from incomplete point clouds: Door and window detection and regularization. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, XLII-4, 507–514.
17. Díaz-Vilariño, L.; Martínez-Sánchez, J.; Lagüela, S.; Armesto, J.; Khoshelham, K. Door recognition in cluttered building interiors using imagery and LiDAR data. In Proceedings of the ISPRS Technical Commission V Symposium, Trento, Italy, 23–25 June 2014; pp. 203–209.
18. Kakillioglu, B.; Ozcan, K.; Velipasalar, S. Doorway detection for autonomous indoor navigation of unmanned vehicles. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3837–3841.
19. Chen, W.; Qu, T.; Zhou, Y.; Weng, K.; Wang, G.; Fu, G. Door recognition and deep learning algorithm for visual based robot navigation. In Proceedings of the 2014 IEEE International Conference on Robotics and Biomimetics, Bali, Indonesia, 5–10 December 2014; pp. 1793–1798.
20. Cheng, Y.; Cai, R.; Li, Z.; Zhao, X.; Huang, K. Locality-sensitive deconvolution networks with gated fusion for RGB-D indoor semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3029–3037.
21. Zhu, B.; Cheng, X.L.; Liu, S.L.; Hu, X.H. Building Point Cloud Elevation Boundary Extraction Based on PCA Normal Vector Estimation. Geomat. Spat. Inf. Technol. 2021, 44, 38–40.
22. Peng, L.J.; Lu, L.; Shu, L.J. Three-dimensional point cloud region growth segmentation based on PCL library. Comput. Inf. Technol. 2020, 165, 21–23.
23. Zhang, Y. The research of fitting straight-line least square method. Inf. Commun. 2014, 44–45.
24. Handa, A.; Whelan, T.; McDonald, J.; Davison, A.J. A benchmark for RGB-D visual odometry, 3D reconstruction and SLAM. In Proceedings of the IEEE International Conference on Robotics & Automation, Hong Kong, China, 31 May–7 June 2014.
25. Mura, C.; Mattausch, O.; Pajarola, R. Piecewise-planar Reconstruction of Multi-room Interiors with Arbitrary Wall Arrangements. Comput. Graph. Forum 2016, 35, 179–188.
Figure 1. Flowchart of the proposed method.
Figure 2. Extraction result of wall. (a) Scene segmentation; (b) distribution of z coordinate; (c) wall partition.
Figure 3. Corner detection result. (a) Corner points based on AABB; (b) corner points based on OBB.
Figure 4. Points projection.
Figure 5. Projection vectors.
Figure 6. Boundary point result.
Figure 7. Wall reconstruction result.
Figure 8. Corner points of the wall in Livingroom. (a) Original scene; (b) plane1; (c) plane2; (d) plane3.
Figure 9. Corner points of the wall in House. (a) Original scene; (b) plane1; (c) plane2.
Figure 10. Corner points of the wall with multiple candidate doors. (a) Original scene; (b) plane1; (c) plane2; (d) plane3.
Figure 11. Reconstruction results. (a) Boundary point; (b) projection point; (c) straight-line fitting; (d) wall model.
Figure 12. Comparison of corner detection results. (a) Harris-based method; (b) curvature-based method; (c) our method.
Table 1. Plane category analysis.
| Scene | Plane | Volume (m³) | Height (m) | Real Height (m) | Category |
|---|---|---|---|---|---|
| Livingroom | 1 | 0.005833 | 2.31575 | 2.331244 | wall |
| Livingroom | 2 | 0.000804 | 1.84217 | 1.862738 | door |
| Livingroom | 3 | 0.003758 | 0.780573 | 0.818667 | other object |
| House | 1 | 0.045027 | 2.8254 | 2.826390 | wall |
| House | 2 | 0.011329 | 2.14059 | 2.157281 | door |
| Multiple-doors | 1 | 0.461809 | 2.83712 | 2.830485 | wall |
| Multiple-doors | 2 | 0.077693 | 2.31991 | 2.311211 | door1 |
| Multiple-doors | 3 | 0.078877 | 2.3059 | 2.311746 | door2 |
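One plausible reading of Table 1 is that each segmented plane is labelled by comparing its bounding-box height against the tallest plane (the wall) in the scene. The sketch below is an assumption-laden illustration, not the paper's algorithm: the ratio thresholds `wall_ratio` and `door_ratio` are hypothetical values chosen so that the heights in Table 1 reproduce the listed categories.

```python
def classify_planes(heights, door_ratio=0.6, wall_ratio=0.95):
    """Label segmented planes by bounding-box height relative to the tallest plane.

    heights -- list of plane heights in metres (one per segmented plane).
    Thresholds are illustrative, not taken from the paper.
    """
    wall_h = max(heights)  # the tallest plane is assumed to be the wall
    labels = []
    for h in heights:
        ratio = h / wall_h
        if ratio >= wall_ratio:
            labels.append("wall")
        elif ratio >= door_ratio:
            labels.append("door")
        else:
            labels.append("other object")
    return labels


# Heights of the three Livingroom planes from Table 1:
print(classify_planes([2.31575, 1.84217, 0.780573]))
# → ['wall', 'door', 'other object']
```

With these thresholds the same rule also recovers the House (wall/door) and Multiple-doors (wall/door1/door2) categories in Table 1.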
Table 2. The angle of the straight line and the state of the door.
| Scene | Plane Category | Straight Line | Direction Vector | Angle λ | State of the Door |
|---|---|---|---|---|---|
| Wall of Livingroom | Wall | A: y = 0.000041x − 1.9291 | (1, 0.000041) | | |
| | Door | B: y = 0.000007x − 1.9291 | (1, 0.000007) | AB: 0.002° | Closed |
| | Other object | C: y = 0.00029x − 1.91283 | (1, 0.00029) | AC: 0.014° | |
| Wall of House | Wall | A: y = −0.00049254x − 3.10184 | (1, −0.00049254) | | |
| | Door | B: y = −2.66718x − 16.3727 | (0.351064, −0.936352) | AB: 110.581° | Open |
| Wall of multiple candidate doors | Wall | A: y = −0.001x − 3.1099 | (1, −0.00105234) | | |
| | Door1 | B: y = 0.1246x − 2.452 | (0.992331, 0.12361) | AB: 7.1608° | Semi-open |
| | Door2 | C: y = 0.4706x + 1.3609 | (0.904805, 0.425827) | AC: 25.2633° | Semi-open |
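The door states in Table 2 follow from the angle λ between the fitted wall line and each fitted door line. A minimal sketch of that computation is given below; it folds λ into the line-to-line range [0°, 90°] (the table's 110.581° corresponds to a folded angle of about 69.4°), and the classification thresholds `closed_max` and `open_min` are illustrative assumptions, since the paper's exact cut-offs are not stated here.

```python
import math


def line_angle_deg(v1, v2):
    """Acute angle between two fitted lines, given their 2-D direction vectors.

    Returns degrees in [0, 90]; |dot| folds 110.581° and 69.419° to the same value.
    """
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_t = abs(dot) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(min(1.0, cos_t)))


def door_state(angle_deg, closed_max=2.0, open_min=45.0):
    """Map the wall-door angle to a door state (thresholds are illustrative)."""
    if angle_deg <= closed_max:
        return "closed"
    if angle_deg >= open_min:
        return "open"
    return "semi-open"


# Direction vectors of wall line A and door line B for the House scene (Table 2):
angle = line_angle_deg((1, -0.00049254), (0.351064, -0.936352))
print(door_state(angle))  # → open
```

With the same thresholds, the near-parallel Livingroom pair (0.002°) comes out "closed" and the two candidate doors of the third scene (7.16° and 25.26°) come out "semi-open", matching the last column of Table 2.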
Table 3. Corner detection time of different methods.
| Wall | Method Name | Number of Corners | Processing Time (s) |
|---|---|---|---|
| Wall 1 | Harris-based method | 109 | 3.06958 |
| | Curvature-based method | 400 | 0.43635 |
| | Our method | 16 | 3.47990 |
| Wall 2 | Harris-based method | 168 | 17.51997 |
| | Curvature-based method | 400 | 0.28158 |
| | Our method | 24 | 3.43450 |
| Wall 3 | Harris-based method | 1510 | 7.38891 |
| | Curvature-based method | 400 | 0.28095 |
| | Our method | 16 | 3.46327 |
Table 4. Comparison of reconstruction results.
| Wall | Doors in the Real Scene (Closed/Semi-Open/Open) | Doors in the Reconstructed Scene (Closed/Semi-Open/Open) |
|---|---|---|
| Wall of Livingroom | 1 (1/0/0) | 1 (1/0/0) |
| Wall of House | 1 (0/0/1) | 1 (0/0/1) |
| Wall of multiple candidate doors | 2 (0/2/0) | 2 (0/2/0) |

Share and Cite

MDPI and ACS Style

Ning, X.; Sun, Z.; Wang, L.; Wang, M.; Lv, Z.; Zhang, J.; Wang, Y. Door State Recognition Method for Wall Reconstruction from Scanned Scene in Point Clouds. Mathematics 2023, 11, 1149. https://doi.org/10.3390/math11051149

