
Constructing a Virtual Environment for Multibody Simulation Software Using Photogrammetry

1 Department of Mechanical Engineering, Lappeenranta-Lahti University of Technology, 53850 Lappeenranta, Finland
2 Raute Corporation, Research and Development, 15550 Lahti, Finland
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(12), 4079; https://doi.org/10.3390/app10124079
Submission received: 18 May 2020 / Revised: 9 June 2020 / Accepted: 10 June 2020 / Published: 12 June 2020
(This article belongs to the Section Mechanical Engineering)

Abstract
Real-time simulation models based on multibody system dynamics can replicate reality with high accuracy. As real-time models typically describe machines that interact with a complicated environment, it is important to have an accurate environment model in which the simulation model operates. Photogrammetry provides a set of tools that can be used to create a three-dimensional environment from planar images. A created environment and a multibody-based simulation model can be combined in a Unity environment. This paper introduces a procedure to generate an accurate spatial working environment based on an existing real environment. As a numerical example, a detailed environment model is created from a University campus area.

Graphical Abstract

1. Introduction

In general, real-time simulation models based on multibody system dynamics interact with the graphical environment to illustrate the feasibility of the model and its functionality. Figure 1 shows examples of real-time simulation models and their environments, in which the environments vary from simple, see Figure 1a, to highly detailed and complex, see Figure 1d.
A simulation model interacts with its environment via tires, tracks, or other bodies that can come into contact with and move environmental objects; Figure 1c,d shows typical examples. Graphical software, such as Blender, and game engine software, such as Unity, Unreal Engine, and CryEngine, can be utilized to generate working environments [1,2,3,4,5]. Each software package has its own advantages and disadvantages. In Unity, the C# programming language is used, whereas Unreal Engine and CryEngine use the C++ language. Unity has the capability to compile games for different platforms [6], and it offers preprogrammed three-dimensional models, cameras, and lights [7]. Unreal Engine is license-free software.
Graphical software makes it possible to generate environments for use in electronic games, virtual reality applications [8], and simulations. A realistic environment is an important aspect of simulations used, for example, to train operators of industrial vehicles [9,10,11]. Such training can help operators perform more efficiently, prevent accidents, and increase safety [12,13,14]. Graphical software is also widely used in the development of high-quality three-dimensional environments for educational purposes [15,16,17].
Photogrammetry is the estimation of the geometric and semantic properties of objects based on image analysis [18]. In other words, it is an approach used to generate three-dimensional models from detailed images of an object or area [19]. Digital photogrammetry collects data about an environment by calculating the locations of objects based on a predefined coordinate system [20]. It has been applied in many different fields and studies, such as material testing [21], recognition of the deformation of beam elements and structures in fire tests [22,23], measurement of vertical deflections in large constructions such as bridges [24], and measurement of soil surface roughness for a better understanding of erosion processes [25]. Researchers have also generated precise three-dimensional models of large assets such as museums and historical sites by employing photogrammetry and laser scanning approaches [26,27,28,29]. Many scholars have studied how to generate 3D models using point cloud data. Rodríguez-Cielos et al. introduced a methodology for the creation of 3D models that employs laser scanning based on light detection and ranging. With this method, point cloud data can be collected, even in harsh weather conditions, to construct 3D models, and the collected point data can be used directly in photogrammetric software [30]. Laser scanning is widely used in building construction applications to collect point cloud data and utilize them for the creation of 3D outdoor and indoor models of buildings [31]. Laser scanning has also been used to extract accurate data from rock surfaces [32].
In recent years, the photogrammetry concept has also been applied on mobile phones to construct 3D models of indoor places at close range [33,34]. Many studies have been conducted to increase the accuracy of the photogrammetry method in city planning and building recognition applications. Wang et al. introduced a line-matching procedure to increase the accuracy of 3D models produced with photogrammetry; the procedure has been applied to recognizing buildings in aerial images [35]. In addition, the photogrammetry method is currently being used for reconstructing historical and cultural buildings [36]. Scholars have utilized photogrammetry to create 3D models of historical sites, which were then brought into graphical software (Unity) for virtual reality applications [37,38]. Topographic methods and linear variable differential transducers (LVDTs) are two examples of alternative approaches for creating a three-dimensional model of an object or area. However, they have major disadvantages, such as long processing times, intensive manual work, and limitations in positioning points on a structure [39].
Laser scanning is also an alternative approach to photogrammetry. A notable advantage of the laser scanning approach is that it allows data on low-textured objects to be collected, a situation in which the photogrammetry matching approach often fails. Laser scanning uses a Global Positioning System (GPS) and an Inertial Navigation System (INS) for sensor orientation [40], and derives three-dimensional coordinates using Time of Flight (TOF) [41]. The laser scanner transmits pulses towards an object and estimates the distance between the scanner and the object. In addition, the laser scanner sends a laser line to an object and records the reflection of the line to obtain the geometry of the object [42].
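The TOF principle amounts to a few lines of arithmetic: the scanner measures the round-trip time of a pulse and halves the light-travel distance. The following minimal Python sketch (with an illustrative pulse time, not a value from this study) shows the calculation:

```python
# Minimal sketch of the time-of-flight (TOF) ranging principle used by
# laser scanners: emit a pulse, measure the round-trip time, and halve
# the light-travel distance. The pulse time below is illustrative only.

SPEED_OF_LIGHT = 299_792_458.0  # m/s in vacuum; close enough in air

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the target from the measured round-trip pulse time."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A pulse returning after ~0.467 microseconds corresponds to ~70 m,
# the upper end of the working range of the scanner used in Section 3.1.
print(f"{tof_distance(0.467e-6):.1f} m")  # ~70.0 m
```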
Laser scanning, mainly airborne laser scanning (ALS), and photogrammetry have some differences and similarities. For example, they both use GPS and digital sensors. On the other hand, ALS uses point sensors, whereas photogrammetry uses line sensors: ALS samples points out of an area, whereas photogrammetry covers the whole area. The production time in ALS is typically longer than in photogrammetry [40]. Photogrammetry is an inexpensive, easy-to-set-up method; on the other hand, capturing images with an appropriate mapping plan, especially in harsh weather conditions, requires an experienced operator. For some laser scanners, an extra digital camera is needed to capture the RGB colors of surfaces, and the disadvantage of scanners that have their own digital camera is their low geometric resolution [43]. Laser scanning has a higher measurement accuracy than photogrammetry; however, laser scanning procedures usually cost more [44]. When the material of an object absorbs or diffuses the laser, the photogrammetry method usually still works properly [45]. A number of studies have been conducted to identify materials during 3D model construction. One material recognition method classifies images and laser scanning data by spectral category; that is, images are classified and analyzed based on their wavelengths. Furthermore, image analysis can also identify the effects of environmental phenomena on building surfaces [46]. In some cases, laser scanning may also be limited to short distances [47].
The laser scanning method also has some disadvantages. To collect accurate point cloud data via a laser scanner, the precision of the operator plays a crucial role. Furthermore, converting the collected point cloud data into a 3D model of buildings requires intensive work [48]. In addition, the laser scanner has to be relocated several times during the process; therefore, the deformation of buildings should be considered in the collected point cloud data [49]. A number of scholars have studied the collection of point cloud data using various procedures. Wang et al. carried out a comprehensive study of different techniques, such as photogrammetry, LiDAR, and laser scanning, for collecting 3D point clouds in the construction industry. Point cloud data can be used for different purposes and areas, such as civil engineering, the construction industry, and tracking progress in building construction [50].
The objective of this paper is to generate a working environment for real-time multibody-based simulation models. The environment is created from an existing area in the real world. The campus area of Lappeenranta-Lahti University of Technology (LUT University), Finland, is selected as the case study. In this study, the Unity software was used to develop the campus environment.

2. Methodology

This section introduces a procedure to create a three-dimensional environment using photogrammetry and graphical software.
From a graphical point of view, a multibody simulation model consists of the graphics of bodies and the working environment with which the machine interacts. Multibody simulation software usually offers the possibility to create a simple environment. However, as will be shown in this paper, a multibody model can be represented in graphical software, which allows a detailed description of a working environment.

Photogrammetry Approach

Photogrammetry uses contact-free sensors, which makes it possible to create three-dimensional models of objects that are expensive, fragile, toxic, or visible but inaccessible. It also allows the changes of an object or area, such as a building construction site, to be documented.
Photogrammetry suffers from a number of shortcomings, such as sensitivity to lighting conditions. The light source can be optimized for small objects, but in the case of outdoor objects and environments, optimization of lighting conditions remains a challenge.
To create a three-dimensional model with high precision, a large number of images is needed. In general, to create an initial three-dimensional model of a single object, a minimum of two planar images with a known offset is necessary [7]. A functional and affordable method for taking thousands of images of a wide area (including tall structures such as buildings) is the use of Unmanned Aerial Vehicles (UAVs) [51,52]. Assisted by UAVs, photogrammetry can be extended to cover areas on the scale of square kilometers.
Figure 2 presents a procedure for using photogrammetry to create a three-dimensional model of an area/object.
As Figure 2 illustrates, the obtained images should be calibrated so that the distance between the camera and the object can be calculated [53]. In the figure, exterior orientation refers to the calculation of the exterior coordinates, that is, the location of the projection center and the rotation angles of the object with respect to the considered global coordinate system. Surface modeling can be applied to the object to visualize its texture. In the final step, postprocessing is performed to create a three-dimensional model of the object.
To put it simply, photogrammetry creates three-dimensional models out of planar images. To this end, postprocessing software compares two images taken of an object or area and recognizes identical points, see Figure 3. “Overlapping” between images helps to simplify the identical point recognition process and increases the quality of the object’s texture. In addition, “shape matching” can be used to match corresponding points in two overlapping images. The shape matching technique has several variants; one common technique compares the shapes of objects in two images without considering color, which simplifies the process and reduces computational time. By considering corresponding points and the orientation of the cameras, the locations of points in the three-dimensional environment can be estimated.
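As a concrete illustration of identical point recognition followed by triangulation, the following Python sketch uses OpenCV. It is a conceptual stand-in, not the ReCap pipeline used later in this study; the image file names and the camera projection matrices are hypothetical placeholders:

```python
# Sketch of identical-point recognition between two overlapping images,
# followed by triangulation into 3D. Conceptual only; image paths and the
# projection matrices P1, P2 (which would come from camera calibration and
# exterior orientation) are hypothetical placeholders.
import cv2
import numpy as np

img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect features and match them across the overlap region.
orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).T  # 2 x N
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).T  # 2 x N

# With known camera orientations (3x4 projection matrices), corresponding
# points can be triangulated into 3D, as described in the text.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # assumed reference camera
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # assumed 1 m baseline
points_h = cv2.triangulatePoints(P1, P2, pts1, pts2)           # 4 x N homogeneous
points_3d = (points_h[:3] / points_h[3]).T                     # N x 3 Euclidean
```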
Even though the photogrammetry approach is extensively used to generate realistic virtual environments, it still faces some barriers and limitations. In most cases, the geometry and exact location of the object under investigation must be estimated. Transparent and dark-colored objects, as well as tiny objects, pose challenges for photogrammetry [27,54,55]. Small, shiny, and transparent objects also cannot be accurately captured by laser scanning, and materials that absorb or diffuse laser beams are barriers to accurately collecting point cloud data during the laser scanning procedure.

3. Example Case

In this study, a photogrammetry approach is used to create an environment model of the campus area of Lappeenranta-Lahti University of Technology (LUT University). The University is located in the south of Finland, see Figure 4. The area covered by photogrammetry in this study is approximately 40,000 square meters.

3.1. Equipment

In this study, a drone (as a UAV) and a laser scanner were used to collect three-dimensional data for photogrammetry. The drone used was a Phantom 4 RTK from DJI Technology Inc., see Figure 5. The Phantom 4 RTK drone has location, communication, and propulsion systems, as well as a flight controller and a battery. The maximum flight speed is 49.9 km per hour, and the drone weighs 1391 g. The battery life is sufficient for a 30 min flight. The horizontal accuracy of the Phantom 4 RTK is three centimeters, and it stores three-dimensional observational data for use with the postprocessing software. The three-centimeter horizontal accuracy is a relative accuracy with respect to the ground reference check point from which the drone started its flight; note that the reference spot was inside the campus area. The drone carries a 20-megapixel camera with a CMOS sensor. A three-axis gimbal is attached to the drone to stabilize the camera and enable high-resolution, clear images. The drone is also equipped with obstacle sensors to prevent crashes during flight. The remote flight controller uses the GS RTK app to generate a flight plan. The controller has a built-in 5.5 inch (13.97 cm) screen that shows the flight map of the drone, see Figure 6.
A laser scanner, the FARO S70, was used to obtain a high-quality and accurate three-dimensional environment, see Figure 7. The laser scanner collects points in the order of millions (a point cloud) to convert planar images into a three-dimensional model. It also captures the textures of the surfaces of buildings and other objects. It can be used both indoors and outdoors, is suitable for distances between 0.6 m and 70 m, can recognize point locations with an accuracy of ±1 mm, and can provide one million points per second.
For the postprocessing step, the FARO S70 laser scanner uses the FARO SCENE software or the Autodesk Reality Capture (ReCap) software.

3.2. Procedure for Three Dimensional Environment

Figure 8 shows the process steps of a photogrammetry approach using a drone and a laser scanner to generate a three-dimensional environment for use in real-time simulation.
As Figure 8 illustrates, the photogrammetry starts with the drone taking thousands of images. The drone flies at a specific height and takes images with a specified overlap between every two consecutive images. Simultaneously, a laser scanner scans the environment and generates a point cloud of structures, the ground, trees, and other objects. The images and point cloud are exported to postprocessing software to generate an initial three-dimensional environment; the ReCap software was utilized for this purpose in this study. The generated initial environment is exported to graphical software to create a detailed environment that can be employed in a real-time multibody simulation. In the procedure used, the drone captures the images and the laser scanner collects the point clouds. The laser scanner has an altimeter, an inclinometer, a compass, and a color recognition feature.
Prior to starting the photogrammetry and laser scanning processes, a ground reference point for the laser scanner and the drone is defined. Based on the ground reference point, the postprocessing software identifies the corresponding points on surfaces in both methods. The laser scanner locations were also predefined prior to the operation; this predefinition helps with point matching and line matching between the images and the point cloud data.
The point cloud data collected by the laser scanner are exported to the photogrammetry software, ReCap, where the alignment between points is accomplished. Afterwards, the images are exported to the software, where the alignment between images is done. At the final stage, the alignment between the point cloud data and the images is accomplished based on the check points. As mentioned previously, the laser scanning process is performed to capture the precise textures of the walls. At this stage, the multibody simulation software models can interact with the generated environment in the graphical software platform. Accordingly, there is no need to export the generated environment from the graphical software to the real-time simulation software. Instead, a model and its environment can be illustrated in the graphical software platform and controlled by the simulation software. Figure 9 shows an example of a model and an environment in the graphical software that can be controlled by the real-time simulation software. The graphics of the forklift model in Figure 9 were created in the Blender software, and the environment was generated in the Unity software.
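The check-point-based alignment between the image-derived points and the scanner point cloud described above can be illustrated by a least-squares rigid fit over shared check points, the classic Kabsch algorithm. The Python sketch below is a conceptual stand-in for this step; ReCap's internal alignment method is not documented here:

```python
# Sketch of aligning photogrammetry-derived points to laser scanner points
# using shared check points: a least-squares rigid fit (Kabsch algorithm).
# Conceptual stand-in only; not ReCap's internal algorithm.
import numpy as np

def rigid_align(source: np.ndarray, target: np.ndarray):
    """Find rotation R and translation t minimizing ||R @ s_i + t - q_i||.

    source, target: (N, 3) arrays of corresponding check points.
    """
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (determinant -1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Usage with hypothetical corresponding check points:
# R, t = rigid_align(image_points, scanner_points)
# aligned = image_points @ R.T + t
```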
Figure 6 shows the flight map of the drone for the targeted area. The drone started flying from a specified spot and, after arriving at a specific height, flew horizontally with a velocity of ~20 km per hour while taking images. During its flight, the drone took approximately 1900 images with 80 percent overlap between consecutive images. The percentage of overlap can be defined in the controller before the flight. The maximum height at which the drone flew was 50 m. Although in some photogrammetry procedures drones capture both nadiral and oblique images, in this case study, the drone captured nadiral images only. The constructed 3D environment model is based on the nadiral images and the point cloud collected by the laser scanner.
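The relationship between flight height, image footprint, and forward overlap can be sketched with pinhole-camera arithmetic. In the following sketch, the flight height (50 m), overlap (80 percent), and speed (~20 km/h) are taken from the text, whereas the sensor width and focal length are assumed typical one-inch-sensor values, not specifications reported in this paper:

```python
# Sketch of flight-plan arithmetic: ground footprint of one image and the
# along-track spacing needed for 80% forward overlap. Sensor width and
# focal length are assumed one-inch-sensor values (not from the paper);
# height, overlap, and speed are from the text.

def ground_footprint(height_m: float, sensor_mm: float, focal_mm: float) -> float:
    """Ground distance covered by one image side (pinhole camera model)."""
    return height_m * sensor_mm / focal_mm

def shot_spacing(footprint_m: float, overlap: float) -> float:
    """Distance between consecutive shots for a given forward overlap."""
    return footprint_m * (1.0 - overlap)

footprint = ground_footprint(height_m=50.0, sensor_mm=13.2, focal_mm=8.8)
spacing = shot_spacing(footprint, overlap=0.80)
print(f"footprint ~{footprint:.0f} m, shot every ~{spacing:.0f} m")
# footprint ~75 m, shot every ~15 m; at ~20 km/h (about 5.6 m/s) this
# implies a photo roughly every 2.7 s.
```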
To collect the point cloud data, the operators relocated the laser scanner to certain predefined locations. The whole process took nearly three hours. The obtained images and the point cloud data were transferred to the ReCap software to build the three-dimensional environment. The ReCap software created a three-dimensional model, placed markers as geo-references, and took measurements pertaining to height. Finally, the Unity software prepared the environment for use in the simulation software.

4. Discussion

As already stated, nearly 1900 images were taken of the campus area. In this section, a number of views of the campus area have been chosen for comparison to illustrate similarities and differences between the environment in the real world and the corresponding environment in Unity software, see Figure 10, Figure 11, Figure 12 and Figure 13.
Figure 10a and Figure 11a depict the campus area of the LUT University in the real world, and Figure 10b and Figure 11b illustrate the area as created using photogrammetry. As the figures show, the created environment reflects the real-world environment appropriately. Cloudy weather facilitated proper matching of the points collected by the laser scanner with the corresponding points in the images. As Figure 10b and Figure 11b demonstrate, the physical dimensions of the buildings, their locations, and the distances between the structures have been correctly generated. The paths and streets are created without notable failures. Figure 12a,b and Figure 13a,b show the main entrance of the University in the real world and in the graphical software, respectively. To compare the viewpoints, the distances between points were measured in both the real world and the graphical software. Table 1 shows the values of the point distances. As the table shows, the 3D graphical environment constructed by the photogrammetry procedure reflects the real-world environment appropriately.
Figure 12a and Figure 13a show the main entrance area of the LUT University main building. Figure 12b and Figure 13b are the corresponding scenes generated using the photogrammetry approach. As the figures show, the postprocessing phase of the photogrammetry is accomplished with acceptable accuracy. Nearly all points (generated by the laser scanner) match properly with the corresponding points in the images taken by the drone. The buildings are created appropriately, and there is no major distortion in the structures. Comparisons of the dimensions, colors, and textures show that Figure 12b and Figure 13b provide a highly accurate reflection of the real world. As pointed out, photogrammetry has been used to construct the 3D environment model out of the planar images, and a laser scanner has been utilized to increase the visibility and accuracy of building surfaces, such as walls and windows. Because the environment will be used with physics-based real-time simulation models, both methods have been utilized to keep the environment as realistic as possible.
As mentioned earlier, the graphical software can be connected to the simulation software to run simulation models. The environment of the LUT University campus area can thus be used for real-time simulation models. Figure 14 shows a real-time simulation model of an excavator that has been imported into the university campus environment.
The model is of a real excavator with an operating weight of 22 tons. The simulated excavator model consists of nine bodies. Its hydraulic circuit system is modeled using lumped fluid theory [56]. The definitions of the bodies and their constraints, as well as the interactions between them, are accomplished in real-time simulation software that employs the semi-recursive multibody method [57]. The resulting equations of motion can be solved using Runge–Kutta time integration [58]. The excavator graphical model was created using the Blender software. The graphics of the excavator consist of the graphics that illustrate the model and the collision graphics; the collision graphics define the contacts between the bodies as well as with the ground. By collecting the data that real-time simulation models provide, designers can analyze dynamic behavior and consequently improve the performance of the models. In this study, a 3D environment based on the real world has been constructed such that it interacts with real-time simulation models. In practice, the model interacts with the environment; for example, an excavator driver can drive and excavate soil and have the experience of both working with a real excavator and working in an environment based on the real world. Furthermore, the data collected from real-time simulation models operating in such an environment are more reliable than data collected in an imaginary environment. From an educational point of view, the use of digital twin models in an environment based on the real world helps operators learn more quickly and more precisely. In addition, with the help of the virtual reality concept, the education process can be made more functional and closer to the real world.
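As an illustration of Runge–Kutta time integration [58], the following sketch implements a generic fourth-order (RK4) step for equations of motion written in first-order form. It is a textbook stepper applied to a hypothetical one-degree-of-freedom body, not the integrator of the commercial simulation software:

```python
# Generic fourth-order Runge-Kutta (RK4) step for dynamics in first-order
# form dy/dt = f(t, y). Illustrative only; not the commercial software's
# integrator.
import numpy as np

def rk4_step(f, t: float, y: np.ndarray, h: float) -> np.ndarray:
    """Advance the state y by one step of size h."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: hypothetical 1-DOF body with damped dynamics x'' = -c*x' + u,
# written as the state y = [x, v]; coefficients chosen for illustration.
def f(t, y):
    x, v = y
    return np.array([v, -0.5 * v + 1.0])

y = np.array([0.0, 0.0])
for step in range(1000):  # 1 s of simulated time at h = 1 ms
    y = rk4_step(f, step * 1e-3, y, 1e-3)
print(y)
```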

5. Conclusions

In this paper, a procedure for generating a three-dimensional environment based on a photogrammetry approach is introduced. To create the environment, a drone and a laser scanner were used to take images and to collect the point cloud data, respectively. Using a photogrammetry-based approach, it is possible to generate virtual environments that replicate areas existing in the real world. Furthermore, the graphical software can be connected to the simulation software, which makes it possible to operate physics-based simulation models in their real environments.
The introduced procedure was applied to create an environment model of the campus area of the LUT University in Finland. A multibody simulation model of an excavator was imported into the campus environment. The real-time simulation model runs in a dynamic environment, which means the generated three-dimensional environments can be updated based on renovations in the corresponding real environment.

Author Contributions

A.M. worked on the concept and edited the manuscript. M.M. worked on the literature review, wrote the manuscript, and performed the experiments and produced the results. R.E. contributed to the writing and the methods used. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the Academy of Finland, Grant No. 316106.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Yang, C.W.; Lee, T.H.; Huang, C.L.; Hsu, K.S. Unity 3D Production and Environmental Perception Vehicle Simulation Platform. In Proceedings of the IEEE 2016 International Conference on Advanced Materials for Science and Engineering (ICAMSE), Tainan, Taiwan, 12–13 November 2016; pp. 452–455.
2. Vajak, D.; Livada, Č. Combining Photogrammetry, 3D Modeling and Real Time Information Gathering for Highly Immersive VR Experience. In Proceedings of the IEEE 2017 Zooming Innovation in Consumer Electronics International Conference (ZINC), Novi Sad, Serbia, 31 May–1 June 2017; pp. 82–85.
3. Fritsch, D.; Klein, M. 3D Preservation of Buildings–Reconstructing the Past. Multimed. Tools Appl. 2018, 77, 9153–9170.
4. Mach, V.; Valouch, J.; Adámek, M.; Ševčík, J. Virtual Reality–Level of Immersion Within the Crime Investigation. In MATEC Web of Conferences; EDP Sciences: Zlín, Czech Republic, 2019; Volume 292, p. 01031.
5. Chiu, Y.P.; Shiau, Y.C.; Song, S.J. A Study on Simulating Landslides Using Unity Software. In Applied Mechanics and Materials; Trans Tech Publications Ltd.: Baech, Switzerland, 2015; Volume 764, pp. 806–811.
6. Isar, C. A Glance into Virtual Reality Development Using Unity. Inf. Econom. 2018, 22.
7. Uggla, G. 3D City Models–A Comparative Study of Methods and Datasets. Master’s Thesis, School of Architecture and the Built Environment, Royal Institute of Technology (KTH), Stockholm, Sweden, 2015.
8. Oqua, O.I.; Chen, J.; Annamalai, A.; Yang, C. 3D Printed Data Glove Design for VR Based Hand Gesture Recognition. In Proceedings of the IEEE 2018 11th International Workshop on Human Friendly Robotics (HFR), Shenzhen, China, 13–14 November 2018; pp. 66–71.
9. Brady, D.; Lee, A.; Pearce, A.; Shintaku, N.; Guerlain, S. Intelligent Cities: Translating Architectural Models Into a Virtual Gaming Environment for Event Simulation. In Proceedings of the IEEE 2015 Systems and Information Engineering Design Symposium, Charlottesville, VA, USA, 24 April 2015; pp. 369–373.
10. Silva, J.F.; Almeida, J.E.; Rossetti, R.J.; Coelho, A.L. A Serious Game for EVAcuation Training. In Proceedings of the 2013 IEEE 2nd International Conference on Serious Games and Applications for Health (SeGAH), Vilamoura, Portugal, 2–3 May 2013; pp. 1–6.
11. Chittaro, L.; Ranon, R. Serious Games for Training Occupants of a Building in Personal Fire Safety Skills. In Proceedings of the IEEE 2009 Conference in Games and Virtual Worlds for Serious Applications, Coventry, UK, 23–24 March 2009; pp. 76–83.
12. Bhide, S.; Riad, R.; Rabelo, L.; Pastrana, J.; Katsarsky, A.; Ford, C. Development of Virtual Reality Environment for Safety Training. In Proceedings of the IIE Annual Conference, Orlando, FL, USA, 19–23 May 2015; Institute of Industrial and Systems Engineers (IISE): Norcross, GA, USA, 2015; p. 2302.
13. Sacks, R.; Perlman, A.; Barak, R. Construction Safety Training Using Immersive Virtual Reality. Constr. Manag. Econ. 2013, 31, 1005–1017.
14. Friend, M.A.; Kohn, J.P. Fundamentals of Occupational Safety and Health; Rowman & Littlefield: Lanham, MD, USA, 2018.
15. Birt, J.; Stromberga, Z.; Cowling, M.; Moro, C. Mobile Mixed Reality for Experiential Learning and Simulation in Medical and Health Sciences Education. Information 2018, 9, 31.
16. Wang, Y.; Yang, L.; Zhao, L.; Deng, Y. Design of Simulation Training System for Remote Sensing Large Data Processing of Natural Disasters. J. Coast. Res. 2018, 83, 328–334.
17. Chou, Y.T.; Lee, B.W.; Shih, H.Y. Study on Educational Virtual Reality Implementation Using Knowledge-Based Engineering. In Proceedings of the 2018 IEEE International Conference on Advanced Manufacturing (ICAM), Yunlin, Taiwan, 16–18 November 2018; pp. 433–436.
18. ISPRS. International Society for Photogrammetry and Remote Sensing. 2019. Available online: https://www.isprs.org (accessed on 10 January 2020).
19. Mikhail, E.M.; Bethel, J.S.; McGlone, J.C. Introduction to Modern Photogrammetry; Wiley: New York, NY, USA, 2001.
20. Linder, W. Digital Photogrammetry; Springer: Berlin, Germany, 2009.
21. Whiteman, T.; Lichti, D.; Chandler, I. Measurement of Deflections in Concrete Beams by Close-Range Digital Photogrammetry. In Proceedings of the Symposium on Geospatial Theory, Processing and Applications, Ottawa, ON, Canada, 9–12 July 2002; pp. 9–12.
22. Fraser, C. Automated Off-Line Digital Close-Range Photogrammetry: Capabilities & Application. In Proceedings of the 3rd International Image Sensing Seminar on New Developments in Digital Photogrammetry, Gifu, Japan, 24–27 September 2001.
23. Fraser, C.S.; Riedel, B. Monitoring the Thermal Deformation of Steel Beams via Vision Metrology. ISPRS J. Photogramm. Remote Sens. 2000, 55, 268–276.
24. Jáuregui, D.V.; White, K.R.; Woodward, C.B.; Leitch, K.R. Noncontact Photogrammetric Measurement of Vertical Bridge Deflection. J. Bridge Eng. 2003, 8, 212–222.
25. Rieke-Zapp, D.; Wegmann, H.; Santel, F.; Nearing, M. Digital Photogrammetry for Measuring Soil Surface Roughness. In Proceedings of the American Society of Photogrammetry & Remote Sensing 2001 Conference ‘Gateway to the New Millennium’, St. Louis, MO, USA, 23–27 April 2001; American Society of Photogrammetry & Remote Sensing: Bethesda, MD, USA, 2001.
26. Remondino, F.; Rizzi, A. Reality-Based 3D Documentation of Natural and Cultural Heritage Sites—Techniques, Problems, and Examples. Appl. Geomat. 2010, 2, 85–100.
27. Esmaeili, H.; Thwaites, H.; Woods, P.C. Workflows and Challenges Involved in Creation of Realistic Immersive Virtual Museum, Heritage, and Tourism Experiences: A Comprehensive Reference for 3D Asset Capturing. In Proceedings of the IEEE 2017 13th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Jaipur, India, 4–7 December 2017; pp. 465–472.
28. Smith, M.J.; Hamruni, A.M.; Jamieson, A. 3-D Urban Modelling Using Airborne Oblique and Vertical Imagery. In Proceedings of the ISPRS Hannover Workshop, Hannover, Germany, 2–5 June 2009.
29. Hashim, K.A.; Ahmad, A.; Samad, A.M.; NizamTahar, K.; Udin, W.S. Integration of Low Altitude Aerial & Terrestrial Photogrammetry Data in 3D Heritage Building Modeling. In Proceedings of the 2012 IEEE Control and System Graduate Research Colloquium, Selangor, Malaysia, 16–17 July 2012; pp. 225–230.
30. Rodríguez-Cielos, R.; Galán-García, J.L.; Padilla-Domínguez, Y.; Rodríguez-Cielos, P.; Bello-Patricio, A.B.; López-Medina, J.A. LiDARgrammetry: A New Method for Generating Synthetic Stereoscopic Products from Digital Elevation Models. Appl. Sci. 2017, 7, 906.
31. Macher, H.; Landes, T.; Grussenmeyer, P. From point clouds to building information models: 3D semi-automatic reconstruction of indoors of existing buildings. Appl. Sci. 2017, 7, 1030.
32. Yi, X.; Zhang, R.; Li, H.; Chen, Y. An MFF-SLIC Hybrid Superpixel Segmentation Method with Multi-Source RS Data for Rock Surface Extraction. Appl. Sci. 2019, 9, 906.
33. Masiero, A.; Fissore, F.; Guarnieri, A.; Pirotti, F.; Visintini, D.; Vettore, A. Performance evaluation of two indoor mapping systems: Low-cost UWB-aided photogrammetry and backpack laser scanning. Appl. Sci. 2018, 8, 416.
34. Dabove, P.; Grasso, N.; Piras, M. Smartphone-Based Photogrammetry for the 3D Modeling of a Geomorphological Structure. Appl. Sci. 2019, 9, 3884.
35. Wang, Q.; Zhao, H.; Zhang, Z.; Cui, X.; Ullah, S.; Sun, S.; Liu, F. Line matching based on viewpoint-invariance for stereo wide-baseline aerial images. Appl. Sci. 2018, 8, 938.
36. Manajitprasert, S.; Tripathi, N.K.; Arunplod, S. Three-Dimensional (3D) Modeling of Cultural Heritage Site Using UAV Imagery: A Case Study of the Pagodas in Wat Maha That, Thailand. Appl. Sci. 2019, 9, 3640.
37. Soto-Martin, O.; Fuentes-Porto, A.; Martin-Gutierrez, J. A Digital Reconstruction of a Historical Building and Virtual Reintegration of Mural Paintings to Create an Interactive and Immersive Experience in Virtual Reality. Appl. Sci. 2020, 10, 597.
38. Obradović, M.; Vasiljević, I.; Đurić, I.; Kićanović, J.; Stojaković, V.; Obradović, R. Virtual Reality Models Based on Photogrammetric Surveys—A Case Study of the Iconostasis of the Serbian Orthodox Cathedral Church of Saint Nicholas in Sremski Karlovci (Serbia). Appl. Sci. 2020, 10, 2743.
39. Valença, J.; Julio, E.; Araújo, H. Applications of Photogrammetry to Structural Assessment. Exp. Tech. 2012, 36, 71–81.
40. Baltsavias, E.P. A Comparison Between Photogrammetry and Laser Scanning. ISPRS J. Photogramm. Remote Sens. 1999, 54, 83–94.
41. Vosselman, G.; Maas, H.G. Airborne and Terrestrial Laser Scanning; CRC Press: Boca Raton, FL, USA, 2010.
42. Barsanti, S.G.; Remondino, F.; Visintini, D. Photogrammetry and Laser Scanning for Archaeological Site 3D Modeling–Some Critical Issues. In Proceedings of the 2nd Workshop on ‘The New Technologies for Aquileia’, Aquileia, Italy, 25 June 2012.
43. Lerma, J.L.; Navarro, S.; Cabrelles, M.; Villaverde, V. Terrestrial laser scanning and close range photogrammetry for 3D archaeological documentation: The Upper Palaeolithic Cave of Parpalló as a case study. J. Archaeol. Sci. 2010, 37, 499–507.
44. Moon, D.; Chung, S.; Kwon, S.; Seo, J.; Shin, J. Comparison and utilization of point cloud generated from photogrammetry and laser scanning: 3D world model for smart heavy equipment planning. Autom. Constr. 2019, 98, 322–331.
45. Xu, Z.; Wu, L.; Shen, Y.; Li, F.; Wang, Q.; Wang, R. Tridimensional reconstruction applied to cultural heritage with the use of camera-equipped UAV and terrestrial laser scanner. Remote Sens. 2014, 6, 10413–10434.
46. Meroño, J.E.; Perea, A.J.; Aguilera, M.J.; Laguna, A.M. Recognition of materials and damage on historical buildings using digital image classification. S. Afr. J. Sci. 2015, 111, 1–9.
47. Nouwakpo, S.K.; Weltz, M.A.; McGwire, K. Assessing the performance of structure-from-motion photogrammetry and terrestrial LiDAR for reconstructing soil surface microtopography of naturally vegetated plots. Earth Surf. Process. Landf. 2016, 41, 308–322.
48. Lu, Q.; Lee, S. Image-based technologies for constructing as-is building information models for existing buildings. J. Comput. Civ. Eng. 2017, 31, 04017005.
49. Pătrăucean, V.; Armeni, I.; Nahangi, M.; Yeung, J.; Brilakis, I.; Haas, C. State of research in automatic as-built modeling. Adv. Eng. Inform. 2015, 29, 162–171.
50. Wang, Q.; Kim, M.K. Applications of 3D point cloud data in the construction industry: A fifteen-year review from 2004 to 2018. Adv. Eng. Inform. 2019, 39, 306–319.
51. Burkart, A.; Cogliati, S.; Schickling, A.; Rascher, U. A Novel UAV-Based Ultra-Light Weight Spectrometer for Field Spectroscopy. IEEE Sens. J. 2013, 14, 62–67.
52. Eisenbeiß, H. UAV Photogrammetry. Ph.D. Thesis, ETH Zurich, Zurich, Switzerland, 2009.
53. Atkinson, K.B. Close Range Photogrammetry and Machine Vision; Whittles Publishing: Dunbeath, UK, 1996.
54. Yu, I.; Mortensen, J.; Khanna, P.; Spanlang, B.; Slater, M. Visual Realism Enhances Realistic Response in an Immersive Virtual Environment—Part 2. IEEE Comput. Graph. Appl. 2012, 32, 36–45.
55. Nikolakopoulos, K.G.; Soura, K.; Koukouvelas, I.K.; Argyropoulos, N.G. UAV vs Classical Aerial Photogrammetry for Archaeological Studies. J. Archaeol. Sci. Rep. 2017, 14, 758–773.
56. Watton, J. Modeling, Simulation, Analog and Microcomputer Control; Prentice-Hall, Inc.: Upper Saddle River, NJ, USA, 1989.
57. Slaats, P.M. Recursive Formulations in Multibody Dynamics; Technische Universiteit Eindhoven: Eindhoven, The Netherlands, 1991.
58. Yang, X.; Shen, Y. Runge–Kutta Method for Solving Uncertain Differential Equations. J. Uncertain. Anal. Appl. 2015, 3, 1–12.
Figure 1. Simulation models with their working environments: (a) Four-bar mechanism. (b) Material handler. (c) Wheel loader. (d) Forestry vehicle.
Figure 2. Overview of the photogrammetry procedure.
Figure 3. Estimation of three-dimensional specifications of a vehicle by comparing two planar images.
Figure 4. Lappeenranta-Lahti University campus area selected for the photogrammetry.
Figure 5. Phantom 4 RTK drone and controller.
Figure 6. Flight maps of the chosen area for photogrammetry.
Figure 7. FARO S70 laser scanner.
Figure 8. Process steps to generate an environment for real-time simulation software using a photogrammetry approach and Unity software.
Figure 9. A forklift model in its environment as an example of a combination between a real-time simulation software and Unity software.
Figure 10. Comparison between the LUT University campus area in the real world and in Unity software using photogrammetry: (a) Real world and (b) Unity environment.
Figure 11. A comparison between the LUT campus buildings in the real world and in the graphical software using the photogrammetry approach: (a) Real world and (b) Unity environment.
Figure 12. LUT University main entrance in the real world and in the graphical software using the photogrammetry-based approach: (a) Real world and (b) Unity environment.
Figure 13. Comparison between the textures in the real world and those created in the graphical software using the photogrammetry-based approach: (a) Real world and (b) Unity environment.
Figure 14. Excavator model in the LUT campus area.

Table 1. The distance values shown in Figure 12a,b and Figure 13a,b.

Name | D1 (m) | D2 (m) | D3 (m) | D4 (m)
Real world environment | 5.6 | 4.3 | 6.45 | 19.6
Graphical software environment | 5.6 | 4.3 | 6.45 | 19.6
