Case Report

Three-Dimensional Modeling and Indoor Positioning for Urban Emergency Response

1 State Key Laboratory of Remote Sensing Science, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing 100101, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 Xinjiang Institute of Ecology and Geography, Chinese Academy of Sciences, 818 South Beijing Road, Urumqi 830011, China
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2017, 6(7), 214; https://doi.org/10.3390/ijgi6070214
Submission received: 13 April 2017 / Revised: 6 July 2017 / Accepted: 8 July 2017 / Published: 12 July 2017

Abstract

Three-dimensional modeling of building environments and indoor positioning is essential for emergency response in cities. Traditional ground-based measurement methods, such as geodetic astronomy, total stations and global positioning system (GPS) receivers, cannot meet the demand for high-precision positioning; it is therefore essential to conduct multiple-angle data-acquisition and establish three-dimensional spatial models. In this paper, a rapid modeling technology is introduced, which includes multiple-angle remote-sensing image acquisition based on unmanned aerial vehicles (UAVs), an algorithm to remove linear and planar foregrounds before reconstructing the backgrounds, and a three-dimensional modeling (3DM) framework. Additionally, an indoor 3DM technology based on building design drawings is introduced, along with an indoor positioning technology based on iBeacon. Finally, a prototype of the indoor and outdoor positioning-service system in an urban firefighting rescue scenario is introduced to demonstrate the value of the method proposed in this paper.

1. Introduction

Three-dimensional information is an important component of modern cities’ safety-control systems and a prerequisite for carrying out urban public safety operations. Three-dimensional spatial information about urban environments is difficult to acquire, largely because of the functional diversity and structural complexity of modern cities. Although important, the existing three-dimensional urban spatial information cannot meet modern safety requirements because of limited data sources, spatial accuracy and timeliness. Therefore, there is an urgent need to develop better techniques for collecting multiple-dimensional data about cities.
Computer and communication technology has developed rapidly, making it easier to acquire three-dimensional information. A variety of new digital, lightweight, small-size and high-detection sensors are constantly becoming available, so spatial-data-acquisition technology is developing rapidly. With the ubiquitous application of GPS [1] and inertial navigation systems (INS), the development of airborne, vehicular and other three-dimensional data-acquisition systems has been further promoted. Currently, there are a number of ground-vehicle data-acquisition systems in the world, such as the Lynx laser-acquisition vehicles developed by Optech; the binocular panoramic-acquisition vehicles developed by Earthmine; and the horizontal and vertical dual laser-radar scanning system developed by Avideh Zakhor et al. [2]. China has also made great progress in this respect. The ground-vehicle data-acquisition systems that have been commercialized include the binocular-vision system of Wuhan Lead Laser Co. Ltd. and the vehicular three-dimensional data-acquisition system developed by the Key Laboratory of Virtual Geographic Environment of the Ministry of Education at Nanjing Normal University. By simultaneously carrying a CCD camera and a laser scanner, these three-dimensional data-acquisition systems resolved a number of problems in urban data-acquisition, mainly the capture of high-rise building blocks [3,4,5,6,7] and high-rise building facades. These systems enable the real-time three-dimensional simulation and reconstruction of a city. Moreover, the development of 3DM technology in remote sensing has entered a new stage of intelligent automation, visualization and real-time processing.
In the research on 3DM technology [8], the first method was based on geometry [9]. A 3D model of features was generated completely manually, and the method required experienced modeling staff to spend a very long time making complex and precise object models. Since each model was made by experienced people with a computer, the workload of the man-machine interaction was high and the modeling efficiency was low. Subsequently, image-based 3D reconstruction was developed [10,11,12]. This method used a camera or video camera to collect a group of images of the objects and then utilized the corresponding shooting angles to reconstruct the geometric structures of the objects. Although the workload of the man-machine interaction was reduced, it only created 2.5-dimensional 3D scenes. Newer 3DM methods obtained three-dimensional models of the features by using various optical instruments and equipment [13]. These methods were extensively applied in vehicular and airborne three-dimensional data-acquisition systems; they can realize rapid urban modeling, but the model accuracy needs to be improved. Additionally, a laser range-finder [14,15,16,17] was used to extract enough depth information to cover the whole surface of the object, which was then used to complete the construction of the object-surface model. At the time, this was the focus of new technological developments in 3DM. At present, there are several mature three-dimensional reconstruction systems, such as the TotalCalib system, developed by Bougnoux et al. [18] at the Institut National de Recherche en Informatique et en Automatique (INRIA), France; the automatic generation system for an object’s three-dimensional surface, developed by Pollefeys et al. [19] at K.U. Leuven, Belgium; and the PhotoBuilder [20] three-dimensional reconstruction system, developed by the Computer Vision and Robotics Group at Cambridge University. Many scholars have studied indoor positioning technology. First, received signal strength (RSS) measurements were used for position estimation [21]; in addition, researchers demonstrated that RSS can be used to localize a multi-hop sensor network [22]. Subsequent studies proposed sensor-fusion frameworks combining Wi-Fi, pedestrian dead reckoning (PDR) and landmarks [23,24], and RSS measurements have also been applied to UAV positioning [25]. In addition, the use of smartphones for mobile mapping has been proposed [26].
With regard to the fusion of multiple-source remote-sensing data for large scenes, reconstruction efficiency and the integrated fusion of real-time three-dimensional data, the existing 3DM technology cannot meet the needs of integrated indoor and outdoor positioning [27,28,29,30,31,32,33] or the safety and emergency needs [34,35,36,37,38] of cities. The comprehensive application of high-resolution remote-sensing or aerial images, large-scale multiple-level topographic maps and three-dimensional spatial information urgently needs to be developed. The real-time performance and accuracy of urban environmental simulations, and the amount of information provided for operational command decisions, are still insufficient to meet the actual demand. It is necessary to develop multiple-angle data-acquisition and to establish three-dimensional spatial models. It may then be feasible to implement essential functions such as rapid access to the spatial information of scenes, real-time scene restoration and real-time, dynamic updating. Therefore, it may be feasible to use multidirectional, three-dimensional, high-precision spatial information and corresponding high-efficiency supporting technology to effectively assist command tasks for security maintenance in a variety of situations.
This article mainly covers outdoor modeling, indoor modeling and indoor positioning. Outdoor and indoor modeling provide outdoor and indoor scenes for emergencies. At present, indoor and outdoor modeling technology has many applications [39,40,41,42]. Firstly, the technology is used as a 3D map for locating and querying routes. Secondly, it is used as a static 3D scene for planning and preview. Thirdly, it is used as a dynamic 3D scene for simulation. In this paper, indoor and outdoor modeling technology is used to create an integrated 3D scene which provides a virtual geographic environment for the dynamic simulation of emergency response. In addition, indoor positioning technology provides technical support for rapid rescue in emergency situations within indoor models. The 3DM and indoor positioning technologies developed in this paper have the following advantages over traditional methods: (1) using the space-air-ground integrated, multiple-directional, three-dimensional acquisition platform to obtain data greatly improves the efficiency of data-acquisition; (2) the combination of three-dimensional (3D) building-model reconstruction from point-clouds, the automatic identification and removal of foreground shielding and 3DM software can improve the efficiency and fidelity of the 3DM; (3) the combination of 3DMs and indoor positioning technology can meet the needs of safety and emergency services in the city; (4) the urban emergency response application is a good case study of Virtual Geographic Environment theory and technologies [43,44,45,46,47], which provide a virtual experimental research platform of human-computer interaction for emergency response.
The rest of this paper is organized as follows: Section 2 introduces 3DM technology for city emergency response; Section 3 details indoor 3DM and positioning; Section 4 describes a prototype system study with the developed technology: a city firefighting rescue application of the indoor and outdoor positioning service; Section 5 discusses the advantages and potential improvements of the current technology; and finally, Section 6 concludes this paper.
Outdoor 3DM technology provides a complete solution for quick three-dimensional acquisition for urban applications. The digital three-dimensional city is the basic spatial frame for a “smart city”. Indoor modeling and indoor positioning can be used in emergency rescues based on the construction of the 3D models of the city. The three parts can be used together to enhance the capabilities of urban emergency response.

2. 3DM Technology for City Emergency Response

2.1. Rapid Acquisition and Processing of Three-Dimensional, Low-Altitude Remote-Sensing Data with Low Cost and High Precision

The remote-sensing data-acquisition system for cities obtains multiple-source heterogeneous data using a space-air-ground integrated, multiple-directional, three-dimensional acquisition platform. After the input data are standardized, the primary data products are formed. This provides various standardized data interfaces for the subsequent modeling, service, simulation and application. Figure 1 shows the process of multisource data-acquisition and processing. For point-cloud and image data, we mainly use point-cloud modeling and texture mapping to build three-dimensional models. For video data of targets taken by UAV, we mainly extract the trajectories of key targets to form motion routes and then combine them with the 3D model to realize dynamic simulation in support of the city’s emergency response.
Space-based acquisition platforms mainly use recently developed high-resolution remote-sensing systems. They have been applied to obtain decimeter-scale high-resolution remote-sensing images and to construct high-resolution remote-sensing image databases. The aerial acquisition platform includes high-altitude aerial acquisition, unmanned aerial acquisition and others, and uses vertical photography to obtain ultrahigh-resolution image data that jointly populate the basic databases of high-resolution remote-sensing images. The aerial platform also uses technologies such as oblique photography, radar point-cloud acquisition, autonomous positioning and navigation, video capture and target tracking, and can construct multiple-angle real-image databases, high-precision point-cloud databases and live dynamic video databases. The ground-acquisition-vehicle platform is a modified off-road vehicle equipped with a combined positioning system and a close-range acquisition system. It is mainly used to obtain fine-scale street-view photos and spatially normalized point-cloud data that are added to the real-image and point-cloud databases, respectively. Obtaining data with the space-air-ground integrated, multidirectional, three-dimensional acquisition platform greatly improves the accuracy of the data, and the use of unmanned aerial vehicles significantly reduces the cost of data-acquisition. After the unified integration and organization of the various data described above, a data warehouse for the primary data products was formed. It provides a standardized data-service interface for the construction of large-scale sand-table models, fine 3DM of the control area, real-scene reconstruction of core targets and real-time monitoring of a counter-terrorism scene.

2.2. Technical Scheme of the Three-Dimensional Scene Reconstruction

The extraction and modeling of the three-dimensional characteristics of buildings and the reconstruction and simulation of the real scene are based on the integration of multiple-platform data. First, automatic registration and fast mosaicking were carried out for the point-cloud [48] and image data with precise three-dimensional spatial locations. Based on this, mesh processing of the point-cloud data was carried out and the external profiles of urban features were extracted. From the images acquired through photogrammetric measurements, the surface texture of each feature was extracted and a texture set was established. Furthermore, automatic identification and removal were conducted for the foreground shielding; according to images or data interpolated from different angles of multiple scenes, data recovery was conducted for the masked areas. Information fusion was conducted for the external profiles and texture characteristics of features, and the reconstruction of static three-dimensional scenes of a city was then achieved. From the real-time video data obtained by unmanned aerial vehicles, automatic recognition, motion-state extraction and trajectory calculation were carried out for the important monitoring objectives. The simulation of moving objects in the city could also be conducted using simulation scripts.

2.2.1. The Acquisition and Processing of Point-Cloud and Image Data

With the INPHO software, orthophoto production was carried out through a series of processes: aerial triangulation (densification), absolute orientation of the densified observation area, extraction of the Digital Terrain Model (DTM) from the densified imagery, manual editing of poorly modeled areas, orthorectification of the original images, mosaicking and image editing. Finally, we obtained data that satisfied the 1:500 orthophoto requirement.

2.2.2. The Extraction of Surface Texture Features and the Establishment of the Texture Atlas

We need to select the best texture images before extracting the texture. To ensure the accuracy and fidelity of the model, this study adopts a parallel computing method to choose the most suitable stereo photograph for the 3DM. The method is as follows:
1. Filtering by location. The image that contains the most complete texture information is found by calculating the distance from the ground-point projection images to the sloping ground.
2. Filtering by angle. If the angle between the normal vector of a building wall and the vector from the center of the wall face to the center of the photograph frame is less than 90 degrees, that side of the building is visible in the image.
3. Filtering by area. The projected area of the building outline in the image determines the quality of the candidate photograph. A minimal code sketch of the angle and area filters is given after this list.
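The angle and area filters above can be expressed compactly. The sketch below is a minimal illustration and not the authors' implementation: it assumes the wall's outward normal, the face and camera centers, and a pinhole camera model with hypothetical intrinsics K and pose (R, t) are already available from the photogrammetric processing.

```python
import numpy as np

def wall_visible(wall_normal, wall_center, camera_center):
    """Angle filter: a wall face is treated as visible in a photograph when the
    angle between its outward normal and the vector from the face center to the
    camera (photograph frame center) is less than 90 degrees."""
    view_vec = np.asarray(camera_center, float) - np.asarray(wall_center, float)
    # cos(angle) > 0  <=>  angle < 90 degrees
    return float(np.dot(np.asarray(wall_normal, float), view_vec)) > 0.0

def projected_outline_area(outline_xyz, K, R, t):
    """Area filter: project the building outline into the image with a pinhole
    model (hypothetical intrinsics K, rotation R, translation t) and score the
    candidate photograph by the area of the projected polygon."""
    pts = []
    for X in outline_xyz:                       # 3D corner points of the outline
        x_cam = R @ np.asarray(X, float) + t    # world -> camera coordinates
        u, v, w = K @ x_cam                     # camera -> homogeneous image
        pts.append((u / w, v / w))
    pts = np.asarray(pts)
    x, y = pts[:, 0], pts[:, 1]
    # shoelace formula for the area of the projected polygon
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
```

Candidate photographs passing the location filter can then be ranked by visibility and projected area to pick the most suitable texture image.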

2.2.3. Foreground Removal and Restoration Algorithms

In the case of foreground removal, the main algorithms are as follows:
  • Foreground trees are removed using hue and parallel-line information. Leaves are usually green or a color similar to green, so a hue range of 80 to 200 can be used to determine whether a pixel lies in the shelter area. In the image, lines on a building wall correspond to parallel lines along the X-axis or Y-axis of the object coordinate system and are arranged in a regular order, while the directions of the line segments of a tree are irregular. Therefore, the foreground can be removed according to the parallel-line and color information; a rough code sketch of this screening is given after this list.
  • The foreground restoration algorithm is based on stereo matching with image segmentation. The adaptive-weight matching algorithm for color pixels is used to reduce the influence of sampling noise, and an energy function defined over the pixel-matching cost, the occluded pixels and the parallax smoothness is used to determine the occluded regions.
  • The image is divided into a rectangular grid of a certain size and the density of parallel lines in each grid cell is calculated. If the line density of a cell is less than a given threshold, the cell is considered part of the shelter area; otherwise, it is not. The result of this operation may still contain incorrectly segmented grid cells, which generally contain few linear features or are largely covered by advertising signs.
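The hue and line-density screening described in the first and third bullets can be sketched as follows. This is illustrative only: the OpenCV routines, the grid size and the density threshold are assumptions, and the 80-200 hue range is interpreted here on a 0-360 degree scale (also an assumption).

```python
import cv2
import numpy as np

def candidate_shelter_mask(bgr, hue_range=(80, 200), grid=32, density_thresh=2):
    """Mark pixels as candidate foreground (tree) shelter when their hue falls
    in the green range AND they lie in a grid cell that contains few straight
    line segments, since building facades are rich in regular parallel lines."""
    h, w = bgr.shape[:2]

    # 1. Hue screening: OpenCV stores hue as degrees/2, so rescale to 0-360.
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hue_deg = hsv[:, :, 0].astype(np.float32) * 2.0
    hue_mask = (hue_deg >= hue_range[0]) & (hue_deg <= hue_range[1])

    # 2. Straight-line density per grid cell.
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=30, maxLineGap=5)
    density = np.zeros((h // grid + 1, w // grid + 1), np.int32)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cx, cy = (x1 + x2) // 2, (y1 + y2) // 2   # segment midpoint
            density[cy // grid, cx // grid] += 1
    low = (density < density_thresh).astype(np.uint8)
    low_density = np.kron(low, np.ones((grid, grid), np.uint8)).astype(bool)[:h, :w]

    # Pixels that are green and lie in a line-poor cell are treated as shelter.
    return hue_mask & low_density
```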
In the case of foreground restoration, the main methods are as follows:
  • Image segmentation. These methods are applied to the segmented (zone-element) images. For some local filled areas, the closest filling blocks are centered in zones of similar texture, so filled target blocks are searched for only in their neighboring source areas. This approach not only maintains the linear structures of images effectively but also shortens the search time.
  • For planar shielding, the template-matching margin-protection method is applied to restore foreground textures. For simple linear vertical foregrounds, one-dimensional average-value interpolation and block-filling methods are used to recover the occluded textures: when the extracted margin width and the surrounding textures satisfy certain conditions, the one-dimensional average-value interpolation method is applied; otherwise, an affine transformation is used to achieve texture recovery. A toy sketch of the interpolation step is given after this list.
  • The image-texture substitution method is based on grid optimization. First, in the target region, the original images are divided into two-dimensional grids. In addition, the shape of the grid is consistent with the geometry of the target area. Second, the corresponding sampling points in the target area for each point on the texture plane are found. Third, color-space conversion and transmission are performed to maintain the brightness and shadow information of the original image. This method can replace the texture of the target area in the original image with a new texture and maintain the lighting effects of the original image.
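For the simplest case in the second bullet, a vertical strip occluded by a linear foreground, the one-dimensional interpolation idea can be sketched as below. This is a toy stand-in under assumed inputs (a known occluded column range with clean pixels on both sides), not the paper's margin-protected template matching.

```python
import numpy as np

def fill_vertical_strip(image, x0, x1):
    """Fill occluded columns x0..x1 row by row, blending the last clean pixel
    on the left with the first clean pixel on the right (assumes 0 < x0 <= x1
    and x1 < width - 1). Works for grayscale or color images."""
    out = image.astype(np.float32).copy()
    width = x1 - x0 + 1
    for row in out:                        # each row is a view into `out`
        left, right = row[x0 - 1], row[x1 + 1]
        for k in range(width):
            w = (k + 1) / (width + 1)      # linear weight across the strip
            row[x0 + k] = (1.0 - w) * left + w * right
    return out.astype(image.dtype)
```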

2.2.4. Information Fusion of Outer Contours and Textural Features of Urban Targets

Textural information of aerial photographs is collected by a triangulation network. The following main steps are performed: first, the model and image are read separately by the program; second, the line characteristics of the image and model are extracted; third, the characteristic line of the model is projected onto the image; fourth, the accurate internal and external orientation elements of the image are calculated, then the precise matching between the characteristic line of the model and the image is achieved; fifth, several parameters for the application of textures are defined to project the texture onto the model. In the end, a 3DM of the buildings is accomplished by these steps.
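As an illustration of the texture-projection step, the sketch below rectifies the image region of one wall face into an upright texture once its four corners have been projected into the photograph. It uses OpenCV's perspective warp as a stand-in for the texture-parameter definition described above; the corner ordering and texture size are assumptions.

```python
import cv2
import numpy as np

def rectify_wall_texture(image, projected_corners, tex_w=512, tex_h=512):
    """Warp the quadrilateral spanned by the projected wall corners (ordered
    top-left, top-right, bottom-right, bottom-left) into a rectangular texture
    that can be mapped onto the corresponding face of the building model."""
    src = np.asarray(projected_corners, np.float32)           # 4 image points
    dst = np.array([[0, 0], [tex_w - 1, 0],
                    [tex_w - 1, tex_h - 1], [0, tex_h - 1]], np.float32)
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, H, (tex_w, tex_h))
```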
To evaluate the technologies described above, a case study was conducted in Kuerle city, in the Xinjiang Uygur Autonomous region of China (the 3DM result and quality are shown in Figure 2). The main idea of this paper is to acquire image data of the whole city and derive point-cloud data from it. The point-cloud data are then processed to obtain the white models of the objects. Finally, the texture data are extracted from the image data and texture mapping is performed on the white models, so as to realize the 3D reconstruction. In contrast, modeling with 3DMAX and SketchUp creates white molds from the structural features of the model by stretching and drawing lines. The efficiency contrast between the 3D modeling technologies proposed in this paper and modeling with 3DMAX and SketchUp is shown in Table 1. From this comparison, we can see that the technologies presented in this paper have several advantages over the traditional methods. Firstly, the proposed method can be used for large-area modeling and is therefore more suitable for building a 3D city model. Secondly, the number of buildings modeled by the proposed method is much larger than that of the traditional method. Thirdly, because the traditional method has difficulty reproducing the actual dimensions of object details, its elevation accuracy is very low, whereas the method in this paper recovers the detailed dimensions of objects from point-cloud data and therefore achieves a higher elevation accuracy. In addition, the proposed method is more efficient than the traditional methods.

2.3. Dynamic Simulation of an Emergency Response for Urgent Situations

To address urgent situations like fires or other sudden mass emergencies, we study the dynamic spatial-evolution analysis, spatial allocation and spatial-deployment simulations of spatial objects. For emergency management teams’ emergency response processes, we aim to achieve a 3DM construction for sudden emergency events, movement-rulemaking for the corresponding dynamic models, situation analysis and event-process simulations. We then aim to provide technical support for emergency response, an early warning system and decision-making in case of an urgent situation in a city.
A series of dynamic models for city-scene reproduction and scene simulation have been established. These models include various movement models (crowds of people, ground vehicles, aerial targets, etc.) and sudden emergency models (bombing, fire, hijacking, etc.). We have created movement rules and paths for the corresponding dynamic models, integrated video data from real-time monitoring and reproduced the dynamic scene.
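A movement rule for such a dynamic model can be as simple as interpolating the trajectory extracted from UAV video. The sketch below assumes the trajectory is stored as timestamped waypoints; this representation is an assumption for illustration, not the system's actual data model.

```python
import numpy as np

def position_at(waypoints, times, t):
    """Return the target position at time t by linear interpolation between
    the two bracketing timestamped waypoints of its extracted trajectory."""
    waypoints = np.asarray(waypoints, float)   # shape (k, 2) or (k, 3)
    times = np.asarray(times, float)           # strictly increasing timestamps
    t = np.clip(t, times[0], times[-1])
    i = min(np.searchsorted(times, t, side="right") - 1, len(times) - 2)
    w = (t - times[i]) / (times[i + 1] - times[i])
    return (1.0 - w) * waypoints[i] + w * waypoints[i + 1]

# Example: a ground vehicle moving through three waypoints extracted from video.
track = [(0.0, 0.0), (40.0, 10.0), (80.0, 35.0)]
stamps = [0.0, 5.0, 12.0]
print(position_at(track, stamps, 7.5))
```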
In the field of emergency management, the three-dimensional model is combined with indoor positioning technology. Using the indoor CAD and handheld terminal systems, firefighters and the public can establish escape routes and fire commanders can determine whether firefighters or the public are trapped.
Through dynamic simulation and the evaluation and optimization of the emergency-response planning process, emergency-simulation technology can support dynamic temporal and spatial data access for a “Smart City”. The original heterogeneous, multiple-source and multiple-angle data have been gathered through multi-dimensional data collection. The primary data products are then constructed after normalization processing to provide access to various standardized data for the subsequent modeling, simulation and application. We have drawn the 3D features of city buildings for modeling and collected dynamic time and space data to achieve the rebuilding and simulation of city scenes. Based on an analysis of the needs of the demonstration application, the research task and objective have been further specified and a series of practical interactive functions have been developed. These functions mainly include the 3D mapping of emergency targets and deployment schedules and the prediction and analysis of a sudden accident situation and its dynamics.

3. Indoor 3DM and Positioning

3.1. Indoor 3DM

With the 3DMAX and SketchUp software, the indoor 3DM—which contains doors, windows, stairs, corridors and firefighting devices—is developed using methods such as segmentation, stretching, and mapping. The specific steps are as follows:
  • A base-map of the building is established in Autodesk Computer Aided Design (AutoCAD) and is imported into SketchUp.
  • The imported base-map is used to produce the foundational data for the building. The building is split into different parts by adding lines, and the bottom of each part is pulled up to the corresponding height to form the basic framework; a geometric sketch of this extrusion step is given after this list.
  • The window and door models of the building are created, and the models are divided into groups. Windows and doors are drawn with lines in the basic framework of the building and hollowed out. In the next step, the window and door groups are added into the hollowed parts of the building by continuous movements and replications.
  • Refinement-processing of the model is performed by segmentation and rotation and special parts of the building are constructed. The 3D framework is then generated.
  • The background and clutter of the images are removed, and the intensity, contrast and angular deviation of the images are adjusted. The images are then cropped, spliced and corrected. After these treatments, the textural information of the building is formed.
  • By checking the textural position of the building, the treated textural information is added to the corresponding position of the model. It is then processed as a real model texture using several treatments, such as sizing and rotation, and the indoor 3DM is formed. Finally, the model is exported and provides the basis for the subsequent processes.
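The "pull up" step in the second bullet amounts to extruding a closed footprint into a prism. The sketch below is a purely geometric illustration of that white-model construction; in practice this step is performed interactively in SketchUp rather than in code.

```python
import numpy as np

def extrude_footprint(footprint_xy, height):
    """Extrude an ordered, closed 2D footprint into a prism: returns the prism
    vertices and the index lists of its bottom, top and wall faces."""
    n = len(footprint_xy)
    base = [(x, y, 0.0) for x, y in footprint_xy]
    top = [(x, y, float(height)) for x, y in footprint_xy]
    vertices = np.array(base + top)
    faces = [list(range(n)),                  # bottom face
             list(range(n, 2 * n))]           # top face
    for i in range(n):                        # one quad per wall segment
        j = (i + 1) % n
        faces.append([i, j, n + j, n + i])
    return vertices, faces

# Example: a 10 m x 6 m building part pulled up to 15 m.
verts, faces = extrude_footprint([(0, 0), (10, 0), (10, 6), (0, 6)], 15.0)
```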

3.2. Indoor Positioning Technology

iBeacon technology was first released by Apple Inc. in 2013 and enables a mobile device to become aware of its location. The device estimates its location from the Universally Unique Identifier (UUID) and the Received Signal Strength Indication (RSSI) of nearby iBeacons. In this paper, iBeacon technology is used to solve the indoor positioning problem in fire emergency command. With this location information, fire commanders can determine whether firefighters or members of the public are trapped. In addition, the location information and CAD drawings can be combined to generate an escape route. As shown in Figure 3, the framework of the integrated indoor and outdoor positioning system consists of the following components:
  • An iBeacon database and an indoor map database provide iBeacon and indoor/outdoor map database services for indoor and outdoor positioning.
  • An iBeacon online management system allows the iBeacons and the indoor and outdoor maps to be accessed and managed online.
  • An iBeacon deployment tool and an iBeacon inspection tool are used to deploy and monitor iBeacons.
  • An iBeacon-based indoor positioning core algorithm module provides the indoor positioning functions. This module has four core submodules: the iBeacon device-detection and monitoring submodule, the iBeacon distance-calculation submodule, the spatial position-solving submodule and the spatial position-correction submodule; a minimal sketch of the distance-calculation and position-solving steps is given after this list.
  • An integrated indoor and outdoor map engine provides indoor and outdoor map services.
  • An outdoor positioning module uses GPS to provide an outdoor positioning service.
  • An indoor and outdoor positioning integration service combines the indoor and outdoor positioning results. It is built on the iBeacon-based indoor positioning core algorithm module and the outdoor positioning module.
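The distance-calculation and position-solving submodules can be illustrated with a standard log-distance path-loss model and a least-squares trilateration, as sketched below. The path-loss parameters and the specific solver are assumptions for illustration, not the system's actual algorithm.

```python
import numpy as np

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model: tx_power is the RSSI measured at 1 m and
    n is the environment-dependent attenuation exponent (both assumed here)."""
    return 10.0 ** ((tx_power - rssi) / (10.0 * n))

def trilaterate(beacons, distances):
    """Least-squares position from >= 3 beacons with known 2D coordinates.
    Subtracting the first range equation from the others linearises the
    problem into A x = b, solved with numpy's least-squares routine."""
    beacons = np.asarray(beacons, float)       # shape (k, 2), known positions
    d = np.asarray(distances, float)           # shape (k,), estimated ranges
    x0, y0, d0 = beacons[0, 0], beacons[0, 1], d[0]
    A = 2.0 * (beacons[1:] - beacons[0])
    b = (d0 ** 2 - d[1:] ** 2
         + beacons[1:, 0] ** 2 - x0 ** 2
         + beacons[1:, 1] ** 2 - y0 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos                                 # estimated (x, y)

# Example: three iBeacons at known indoor positions and their measured RSSI.
beacons = [(0.0, 0.0), (6.0, 0.0), (0.0, 8.0)]
rssi = [-62.0, -71.0, -75.0]
print(trilaterate(beacons, [rssi_to_distance(r) for r in rssi]))
```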
Further improvements and enhancements of iBeacon are used to meet fire emergency-command demands. First, the iBeacon devices and firefighters’ handheld terminals are enhanced with fire-retardant materials that allow these devices to work in fires. Second, a body sensor, a smoke sensor and communication equipment are integrated into each of these handheld terminals (Figure 3). In this way, the real-time indoor and outdoor locations of firefighters, the environmental status on the ground and the physical statuses of firefighters can be transmitted to the commander.

4. A Prototype System Study: City Firefighting Rescue Application of the Indoor and Outdoor Positioning Service

To meet the demands of a city firefighting rescue, a 3D digital virtual environment of the city is required, which should have both the outdoor spatial distribution information and the indoor construction spatial information. Location positioning with different precision should then be provided for the emergency response. Thus, the spatial architecture and location information of the accident building, as well as the precise location information of the firemen, can be provided to the commanding officer for decision-making application.
An indoor and outdoor positioning service system for city emergency response was developed based on the technologies described above. The system consists of five parts: firefighter handheld terminals, a public handheld device, a fire emergency command platform, a mobile emergency command platform and a fire emergency command center. A firefighter uses the firefighter handheld terminal in real-time to send his location, the environmental status of the fire area and his physical status to the fire emergency command platform. Members of the public use the public handheld device to send their locations to the fire emergency command system. The fire emergency command platform provides fire emergency command functions, such as indoor and outdoor map viewing, the tracking and visualization of indoor and outdoor positioning information, and information sharing and synchronization among the firefighter handheld terminals, command platforms in the command center and mobile command platforms. The mobile command platform is a mobile version of the fire emergency command platform. The fire emergency command center is a place where the commander can make decisions and is equipped with large screens, communication equipment and other required emergency command equipment.
The fire emergency command platform, which is based on a prototype of the indoor and outdoor positioning service, has been developed in the Microsoft .NET framework with the C# language. The firefighter handheld device was developed for the Android operating system. The fire emergency command functions, such as the indoor positioning of firefighters and the public, escape-route analysis and the indoor and outdoor navigation integration, have been achieved.
To evaluate the technologies described in this paper, a case study was conducted in Kuerle city, Xinjiang Uygur Autonomous region of China. About 4930 3D building models covering about 50 square km were constructed within 75 h. Indoor construction models and iBeacon devices were provided for 10 stadiums. Figure 4 and Figure 5 show some functions of the indoor and outdoor positioning service system when a fire occurs. First, based on the outdoor positioning technology, the firefighters handle the fire outdoors. Second, based on the indoor positioning technology, firefighters and the public can obtain their location within 3 s with a positioning error of less than 30 cm. The system has been used by the fire authorities of the Kuerle municipal government and will be very helpful for urban emergency response.

5. Discussion

The 3DM and indoor positioning technologies studied in this paper have the following advantages over the traditional methods: (1) this method of using a space-air-ground integrated, multiple-directional, three-dimensional acquisition platform to obtain data greatly improves the efficiency of data-acquisition; (2) the combination of 3D building model reconstruction from point-clouds, the automatic identification and removal of foreground shielding and 3DM software can improve the efficiency and fidelity of the 3DM; (3) the combination of 3DMs and indoor positioning technology can meet the needs of safety and emergency services in the city.
In this paper, we use the space-air-ground integrated, multiple-directional, three-dimensional acquisition platform to obtain data. This method of obtaining data not only improves efficiency but also improves the accuracy of modeling. Existing three-dimensional reconstruction systems, such as the TotalCalib system and the PhotoBuilder system, cannot meet the needs of integrated urban indoor and outdoor positioning and security. In this paper, we combine indoor positioning technology with 3DM technology and realize indoor and outdoor positioning in three-dimensional scenes together with the corresponding security-service requirements.
In this paper, we carry out the dynamic simulation of a fire scenario using an indoor and outdoor positioning service system and obtain good results. However, regarding the 3DM, there are still some shortcomings. For example, the reconstruction of the 3DM can be divided into three parts: extraction of the white mold, extraction of the surface texture and texture mapping. The accuracy and completeness of the texture mapping directly affects the accuracy and fidelity of the 3DM. In this paper, we have adopted a variety of methods to automatically identify and remove the foreground shielding and obtained some results. However, in order to improve the accuracy, we still need to further study the data-acquisition process and the automatic identification and removal of the foreground shielding.

6. Conclusions

In this paper, a quick modeling technology is introduced which first uses multiple-angle remote sensing based on unmanned-aerial-vehicle measurements. To enhance the validity and accuracy of the spatial models, an algorithm is then proposed to remove the linear and planar foregrounds before reconstructing the backgrounds. A 3D scene reconstruction of cities can thus be achieved with high efficiency. The case study conducted in Kuerle city, in the Xinjiang Uygur Autonomous region of China, shows that the 3D modeling technologies proposed in this paper are more efficient than traditional methods based on 3DMAX and SketchUp. As a 3DM of city scenes can only provide a spatial framework for the city’s indoor positioning, an indoor 3DM technology based on building design drawings is introduced, together with an indoor positioning technology based on iBeacon. In the end, the application of a prototype of the indoor and outdoor positioning service system in a city firefighting scenario demonstrates the value of the method proposed in this paper. The technologies introduced above can be applied together to enhance the capabilities of urban emergency response.

Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive comments and suggestions. The study is funded by the National Key R & D Programs of China (grant No. 2017YFB0504201, 2015BAJ02B) and the Natural Science Foundation of China (grant No. 61473286 and 61375002).

Author Contributions

Xin Zhang, Linjun Yu and Weisheng Wang are the directors of the corresponding project. Linjun Yu designed the framework of the integrated indoor and outdoor positioning system. Yongxin Chen, Weisheng Wang and Qianyu Wu performed the experiments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. He, H.; Li, J.; Yang, Y.; Xu, J.; Guo, H.; Wang, A. Performance assessment of single-and dual-frequency Beidou/GPS single-epoch kinematic positioning. GPS Solut. 2014, 18, 393–403. [Google Scholar] [CrossRef]
  2. Früh, C.; Zakhor, A. An Automated Method for Large-Scale, Ground-Based City Model Acquisition. Int. J. Comput. Vis. 2004, 1, 5–24. [Google Scholar] [CrossRef]
  3. Yoon, K.J.; Kweon, I.S. Adaptive support-weight approach for correspondence search. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 650–656. [Google Scholar] [CrossRef] [PubMed]
  4. Criminisi, A.; Perez, P.; Toyama, K. Object Removal by Exemplar-Based Inpainting. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, WI, USA, 16–22 June 2003; pp. 721–728. [Google Scholar]
  5. Meng, R.; Song, X. Research on algorithm of segmenting color images. Express Inf. Min. Ind. 2004, 9, 21–24. [Google Scholar]
  6. Ashikhmin, M. Synthesizing Natural Textures. In Proceedings of the Symposium on Interactive 3D Graphics; ACM Press: New York, NY, USA, 2001; pp. 217–226. [Google Scholar]
  7. Fang, H.; Hart, J. Textureshop: Texture synthesis as a photograph editing tool. In Proceedings of the SIGGRAPH 2004, Los Angeles, CA, USA, 8–12 August 2004; pp. 254–359. [Google Scholar]
  8. Hu, J.; You, S.; Neumann, U. Approaches to Large scale Urban Modeling. Comput. Graph. Appl. 2003, 23, 62–69. [Google Scholar]
  9. Heuvel, F.A. 3D Reconstruction from a Single Image Using Geometric Constraints. ISPRS J. Photogramm. Remote Sens. 1998, 53, 354–368. [Google Scholar] [CrossRef]
  10. Debevec, P.; Taylor, C.; Malik, J. Modeling and Rendering Architecture from Photographs: A Hybrid Geometry and Image Based Approach. In Proceedings of the Siggraph; ACM Press: New York, NY, USA, 1996; pp. 11–20. [Google Scholar]
  11. Gruen, A.; Nevatia, R. Automatic Building Extraction from Aerial Images—Guest Editors’ Introduction. Comput. Vis. Image Underst. 1998, 73, 1–2. [Google Scholar]
  12. Snavely, N.; Seitz, S.M.; Szeliski, R. Photo tourism: Exploring photo collections in 3D. ACM Trans. Graph. 2006, 25, 835–846. [Google Scholar] [CrossRef]
  13. Frueh, C.; Jain, S.; Zakhor, A. Data Processing Algorithms for Generating Textured 3D Building Facade Meshes From Laser Scans and Camera Images. Int. J. Comput. Vis. 2005, 61, 159–184. [Google Scholar] [CrossRef]
  14. Wang, Y.; Hu, C.M. A Robust Registration Method for Terrestrial LiDAR Point Clouds and Texture Image. Acta Geod. Cartogr. Sin. 2012, 41, 266–272. [Google Scholar]
  15. Rottensteiner, F.; Briese, C. Automatic Generation of Building Models from LIDAR Data and the Integration of Aerial Images. ISPRS 2003, 34, 1–7. [Google Scholar]
  16. Huber, M. Fusion of LiDAR Data and Aerial Imagery for Automatic Reconstruction of Building Surfaces. In Proceedings of the 2nd GRSS/ISPRS Joint Workshop on Data Fusion and Remote Sensing over Urban Areas, Berlin, Germany, 22–23 May 2003. [Google Scholar]
  17. Sohn, G.; Dowman, I. Building Extraction Using Lidar DEMs and IKONOS Images; ISPRS Commission III, WG III/3: Dresden, Germany, 2003. [Google Scholar]
  18. Bougnoux, S.; Robert, L. Totalcalib: A fast and reliable system for off-line calibration of image sequences. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition; IEEE: New York, NY, USA, 1997. [Google Scholar]
  19. Pollefeys, M.; Gool, V.L.; Vergauwen, M.; Cornelis, K.; Verbiest, F.; Tops, J. Image-based 3D acquisition of archaeological heritage and applications. In Proceedings of the 2001 Conference on Virtual Reality, Archeology, and Cultural Heritage; ACM Press: New York, NY, USA, 2001; pp. 255–262. [Google Scholar]
  20. Cipolla, R.; Robertson, D.P.; Boyer, E.G. Photobuilder-3D Models of Architectural Scenes from Uncalibrated Images. In Proceedings of the Conference on Multimedia Computing and Systems, Florence, Italy, 7–11 June 1999; pp. 25–31. [Google Scholar]
  21. Patwari, N.; Hero, A.O.; Perkins, M.; Correal, N.S.; O’dea, R.J. Relative location estimation in wireless sensor networks. IEEE Trans. Signal Proc. 2003, 51, 2137–2148. [Google Scholar] [CrossRef]
  22. Whitehouse, K.; Karlof, C.; Culler, D. A practical evaluation of radio signal strength for ranging-based localization. In ACM Sigmobile Mobile Computing and Communications Review; ACM Press: New York, NY, USA, 2007; Volume 11, pp. 41–52. [Google Scholar]
  23. Chen, G.L.; Meng, X.L.; Wang, Y.J.; Zhang, Y.Z.; Tian, P.; Yang, H.C. Integrated WiFi/PDR/Smartphone Using an Unscented Kalman Filter Algorithm for 3D Indoor Localization. Sensors 2015, 15, 24595–24614. [Google Scholar] [CrossRef] [PubMed]
  24. Chen, Z.; Zou, H.; Jiang, H.; Zhu, Q.; Soh, Y.C.; Xie, L. Fusion of WiFi, Smartphone Sensors and Landmarks Using the Kalman Filter for Indoor Localization. Sensors 2015, 15, 715–732. [Google Scholar] [CrossRef] [PubMed]
  25. Masiero, A.; Fissore, F.; Guarnieri, A.; Pirotti, F.; Vettore, A. UAV positioning and collision avoidance based on RSS measurements. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 219–225. [Google Scholar] [CrossRef]
  26. Masiero, A.; Fissore, F.; Pirotti, F.; Guarnieri, A.; Vettore, A. Toward the Use of Smartphones for Mobile Mapping. Geo-Spat. Inf. Sci. 2016, 19, 210–221. [Google Scholar] [CrossRef]
  27. Leu, J.S.; Yu, M.C.; Tzeng, H.J. Improving indoor positioning precision by using received signal strength fingerprint and footprint based on weighted ambient WiFi signals. Comput. Netw. 2015, 91, 329–340. [Google Scholar] [CrossRef]
  28. Hossain, A.K.M.M.; Soh, W.S. A survey of calibration-free indoor positioning systems. Comput. Commun. 2015, 66, 1–13. [Google Scholar] [CrossRef]
  29. Tesoriero, R.; Tebar, R.; Gallud, J.A.; Lozano, M.D.; Penichet, V.M.R. Improving location awareness in indoor spaces using RFID technology. Expert Syst. Appl. 2010, 37, 894–898. [Google Scholar] [CrossRef]
  30. Moghtadaiee, V.; Dempster, A.G. Design protocol and performance analysis of indoor fingerprinting positioning systems. Phys. Commun. 2014, 13, 17–30. [Google Scholar] [CrossRef]
  31. Hafner, P.; Moder, T.; Wisiol, K.; Wieser, M. Indoor Positioning based on Bayes Filtering Using Map Information. IFAC PapersOnLine 2015, 48, 208–214. [Google Scholar] [CrossRef]
  32. Zhu, N.; Zhao, H.B.; Feng, W.Q.; Wang, Z.L. A novel particle filter approach for indoor positioning by fusing WiFi and inertial sensors. Chin. J. Aeronaut. 2016, 28, 1725–1734. [Google Scholar] [CrossRef]
  33. Bisio, I.; Lavagetto, F.; Marchese, M.; Sciarrone, A. Smart probabilistic fingerprinting for WiFi-based indoor positioning with mobile devices. Pervasive Mob. Comput. 2015, 31, 107–123. [Google Scholar] [CrossRef]
  34. Liu, H.S.; Zhang, X.L.; Song, L.X. Comprehensive evaluation and prediction of fire accidents in China based on Statistics. China Saf. Sci. J. 2011, 21, 54–59. [Google Scholar]
  35. Li, N.; Burcin, B.G.; Bhaskar, K.; Lucio, S. A BIM centered indoor localization algorithm to support building fire emergency response operations. Autom. Constr. 2014, 42, 78–89. [Google Scholar] [CrossRef]
  36. Liu, X.Y.; Zhang, Q.L.; Xu, X.Y. Petrochemical Plant multi-Objective and multi-Stage fire Emergency Management Technology System Based on the fire risk Prediction. Procedia Eng. 2013, 62, 1104–1111. [Google Scholar] [CrossRef]
  37. Joo, I.H.; Kim, K.S.; Kim, M.S. Fire Service in Korea: Advanced Emergency 119 System Based on GIS Technology; Springer: Berlin, Germany, 2004; pp. 396–399. [Google Scholar]
  38. Klann, M. Playing with Fire: User-Centered Design of Wearable Computing for Emergency Response; Springer: Berlin, Germany, 2007; pp. 116–125. [Google Scholar]
  39. Li, W.H.; Li, Y.; Yu, P.; Gong, J.H.; Shen, S.; Huang, L.; Liang, J.M. Modeling, simulation and analysis of the evacuation process on stairs in a multi-floor classroom building of a primary school. Phys. Stat. Mech. Appl. 2016, 469, 157–172. [Google Scholar] [CrossRef]
  40. Liang, J.; Shen, S.; Gong, J.; Liu, J.; Zhang, J. Embedding user-generated content into oblique airborne photogrammetry-based 3D city model. Int. J. Geogr. Inf. Sci. 2016, 31, 1–16. [Google Scholar] [CrossRef]
  41. Ogawa, K.; Verbree, E.; Zlatanova, S.; Kohtake, N.; Ohkami, Y. Toward seamless indoor-outdoor applications: Developing stakeholder-oriented location-based services. Geo-Spat. Inf. Sci. 2011, 14, 109–118. [Google Scholar] [CrossRef]
  42. Chen, H.T.; Qiu, J.Z.; Yang, P.; Lv, W.S.; Yu, G.B. Simulation study on the novel stairs and elevator evacuation model in the high-rise building. J. Saf. Sci. Technol. 2012, 8, 48–53. [Google Scholar]
  43. Lin, H.; Chen, M.; Lu, G.N.; Zhu, Q.; Gong, J.H.; You, X.; Wen, Y.N.; Xu, B.L.; Hu, M.Y. Virtual Geographic Environments (VGEs): A New Generation of Geographic Analysis Tool. Earth Sci. Rev. 2013, 126, 74–84. [Google Scholar] [CrossRef]
  44. Hui, L.; Min, C.; Guonian, L. Virtual Geographic Environment: A Workspace for Computer-Aided Geographic Experiments. Ann. Assoc. Am. Geogr. 2013, 103, 465–482. [Google Scholar]
  45. Jia, F.; Zhang, W.; Xiong, Y. Cognitive research framework of virtual geographic environment. J. Remote Sens. 2015. [Google Scholar] [CrossRef]
  46. Chen, M.; Lin, H.; Lu, G. Virtual Geographic Environments; Science Press: Hingham, UK, 2009; Volume 31, pp. 1–6. [Google Scholar]
  47. Zhang, X.; Chen, Y.X.; Wang, W.S. Three-dimensional Modelling Technology for City Indoor Positioning and Navigation Applications. In IOP Conference Series: Earth and Environmental Science; IOP Publishing: Philadelphia, PA, USA, 2016. [Google Scholar]
  48. Vosselman, G.; Dijkman, S. 3D Building Model Reconstruction from Point Clouds and Ground Plans. Int. Arch. Photogramm. Remote Sens. 2001, 34, 37–43. [Google Scholar]
Figure 1. Urban remote-sensing data-acquisition system.
Figure 2. The 3D Modeling (3DM) result and quality of Kuerle city, Xinjiang Uygur Autonomous region of China.
Figure 3. The framework of the integrated indoor and outdoor positioning system.
Figure 4. The indoor and outdoor fire scene.
Figure 5. The application interfaces of the indoor and outdoor positioning service system.
Table 1. The efficiency contrast between the 3D Modeling technologies proposed in this paper and modeling by 3DMAX and SketchUp.
Contrasting Aspect | 3D Modeling Using the Technologies Proposed in This Paper | Modeling by 3DMAX and SketchUp
Area | 50 km² | 5 km²
Quantity of buildings | 4930 | 642
Ground precision | 10 cm | Cannot realize fusion with the ground
Time | 75 h | 20 days
Number of computer cluster nodes | 4 | 2
Personnel | 1 | 2
Elevation precision | 20 cm | 2 m
