Article

Mergin’ Mode: Mixed Reality and Geoinformatics for Monument Demonstration

by Konstantinos Evangelidis 1,*, Stella Sylaiou 2 and Theofilos Papadopoulos 1
1 Department of Surveying & Geoinformatics Engineering, International Hellenic University, Serres Campus, 62124 Serres, Greece
2 School of Visual and Applied Arts, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(11), 3826; https://doi.org/10.3390/app10113826
Submission received: 27 April 2020 / Revised: 21 May 2020 / Accepted: 25 May 2020 / Published: 31 May 2020
(This article belongs to the Special Issue Virtual Reality and Its Application in Cultural Heritage)

Featured Application

“Mergin’ Mode,” the application featured in this article, takes advantage of traditional geospatial functionalities and 3D visualization frameworks to support the creation of custom virtual geospatial worlds. These may then be served via location-based services in applications utilizing augmented and mixed reality technologies with the aim of promoting cultural-touristic resources. The system relies solely on Open Geospatial Consortium (OGC) standards for geospatial data and services and was developed on top of JavaScript APIs. It comprises an authoring tool and an end-user app, and its vision is to enable the “web of cultural data,” activated on mobile smart devices in a way akin to Google Maps. The application will be accessible on GitHub.

Abstract

Since smart devices are becoming the primary technological means for daily human activities related to user location, location-based services constitute a crucial component of the related smart applications. Meanwhile, traditional geospatial tools such as geographic information systems (GIS), in conjunction with photogrammetric techniques and 3D visualization frameworks, can achieve immersive virtual reality over custom virtual geospatial worlds. In such environments, 3D scenes with virtual beings and monuments, assisted by storytelling techniques, may reconstruct historical sites and “revive” historical events. Increasing Internet and wireless network speeds, together with mixed reality (MR) capabilities, create great opportunities for the development of location-based smart applications with cultural heritage content. This paper presents the MR authoring tool of the “Mergin’ Mode” project, aimed at monument demonstration through the merging of the real with the virtual, assisted by geoinformatics technologies. The project does not aim simply at producing an MR solution but, more importantly, an open source platform that relies on location-based data and services, exploiting geospatial functionalities. In the long term, it aspires to contribute to the development of open cultural data repositories and the incorporation of cultural data in location-based services and smart guides, to enable the web of open cultural data, thereby adding extra value to the existing cultural-tourism ecosystem.

1. Introduction

In recent years there has been a widespread penetration of location-based services (LBS), also known as geo-services, into people’s daily activities, triggered by the increasing affordability of smart devices (tablets, mobile phones) for the average end-user/consumer [1]. A typical example of using geo-services is navigating to a place using freely available maps while simultaneously associating the current location with other complementary information. At the same time, significant developments in geoinformatics technologies reflect the progress that has occurred in hardware and software technologies. Nowadays, it is possible to map a region with high topographical accuracy and create high-resolution digital terrain/surface models (DTM/DSM) by employing unmanned aerial vehicles (UAV) [2,3,4]. Likewise, new technologies allow the creation of three-dimensional models that simulate inanimate or moving/animated spatial objects (e.g., buildings or virtual humans, respectively) [5]. The above, combined with the high-quality three-dimensional (3D) modeling, visualization and animation capabilities offered by computer graphics technologies, enable the development of custom virtual geospatial worlds. A custom virtual geospatial world simulates real-world spatial objects in a number of overlaid geo-referenced thematic layers [6]. These layers may include raster surface textures, DTM/DSM models and 3D spatial entity models moving along pre-specified or dynamically defined motion paths. The implementation of such virtual worlds can be based on open technologies and standards and on functionalities provided by free software libraries. Moreover, they can be demonstrated on smart device platforms and/or standard interfaces of the World Wide Web. The ever-increasing data transfer speed in communications and wireless networks [7], the widespread use of geospatial web services [8] and the evolution of augmented reality (AR) technologies [9] are boosting the provision of custom virtual geospatial worlds to mobile end-users and offer novel opportunities for the deployment of numerous smart applications (apps).
Building on the above, the presented work introduces “Mergin’ Mode”, a system comprising an authoring tool and an app able to support the overlaying of (a) highly detailed virtual terrain environments and three-dimensional models representing animate or inanimate objects, placed or moving over these environments, and (b) the real world as captured by the camera of a smart device. For example, during the physical presence of a user in an archaeological site, it will be possible, through storytelling techniques, to “revive” on the screen of the mobile phone historical events represented by a custom virtual geospatial world. These events, along with dynamic reconstructions, evolve based on the user’s actual position (spatial reference) and the route followed on the site. Consequently, the digital material must also possess a spatial reference and be provided through LBS. Possible application scenarios include the provision of information regarding both the history and the historical events that took place in monuments and archaeological sites. These scenarios can serve both educational and recreational purposes for the local communities around the monuments and archaeological sites, and the development of cultural tourism in those areas.
Technically, the system is based on the merging of the real with the virtual in mixed reality (MR), assisted by geoinformatics technologies, to be applied on monuments with the goal of demonstrating them. The final output is a set of web services that enable the visualization of an archaeological site, a monument or a set of monuments in its current form, as captured by the camera of a smart device, in conjunction with virtual objects that can represent historical events and narrate stories to the end-users/visitors of the site. In addition, the visitor app may be freely available on app marketplaces and, in combination with virtual objects offered in dedicated repositories, may enable virtual tours of the monuments remotely.
In a hypothetical scenario, the visitor of the monument opens the mobile camera and aims at a point of interest. Through a free application and a set of data made available by the wireless infrastructure of the archaeological site or the Internet, cultural tourists can visualize and acquire, on demand, in-depth information regarding the archaeological site, the monuments or historical events pertinent to them, enhancing their cultural experience. At the same time, the Global Positioning System (GPS) receiver of the smart device will accurately approximate the visitor’s position within the 3D virtual geospatial world. Alternatively, the visitor may make use of the virtual content remotely, without having to visit the site, and receive information about the cultural product of the area of interest (Figure 1).
Having described the capabilities provided by the system at a laboratory-experimental level, the real-world conditions at an operational level will now be presented. There are two distinct factors to be considered, both of which highlight the potential contribution of the presented work:
  • The authorities responsible for promoting specific cultural-touristic resources may exploit the portion of the system dealing with the development of the digital material. The “Mergin’ Mode” authoring tool utilizes geographic information systems (GIS)-based functionalities to assist the development of custom virtual geospatial worlds presenting historical representations of the monuments. Beyond the typical thematic layers that may include the site terrain and vector graphics with areas, lines and points, additional themes may be added to compose sophisticated 3D scenes [6]. Such themes may include 3D models of natural spatial objects (e.g., trees or plants) or of cultural objects of historical importance, either movable (e.g., amphorae) or immovable (e.g., temples), or “living” ones (e.g., people and animals). Other thematic layers may specify the routes of motion for moving objects, or the points/areas of placement for stationary ones (a minimal sketch of such a motion-path layer is given after this list). Although the overlaying of numerous thematic layers to form a photorealistic 3D scene is a relatively old technique, dating back to the era of the first multimedia projects such as Flash [10], the geospatial referencing of all the involved objects nevertheless requires a GIS-based approach. After all, the spatial reference is the key property of an object that specifies its behavior in response to the end-user location. Moreover, although virtual reality and 3D computer graphics technologies have evolved over the last decades, their co-existence with LBS is an issue that invites further research and development.
  • The visitors/end-users perceive the digital material provided by the authorities of a monument as rendered over their camera view, along with the monument in its current condition, in an MR app. Considering the ubiquity of LBS in a vast number of smartphone apps, “Mergin’ Mode” invests in this evolving capability of establishing an on-demand, direct connection between the user of cultural digital material and the provider, without the need for specialized equipment beyond a smart device. Although this issue has already been raised [11,12,13], the developments so far justify additional research efforts and allow significant improvements. One could imagine the whole venture as similar to Google Maps: the user may download the maps for an area of interest and use them offline for routing purposes whenever the device is located in that area. In our case, the material concerns cultural heritage resources and representations of a monument along with historical events, which are triggered when the user is georeferenced within the area of the monument. Obviously, the material may also be provided synchronously, at the time of the visitor’s presence at the site, through the authority’s communication infrastructure.
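To make the thematic-layer idea concrete, the following is a minimal sketch of a motion-path layer encoded as GeoJSON inside JavaScript; the coordinates, file name and property names (model, speed_mps) are illustrative assumptions and not part of the actual “Mergin’ Mode” schema:

    // A hypothetical motion-path thematic layer: a georeferenced route
    // (WGS84 longitude/latitude) along which an animated 3D model moves.
    const motionPathLayer = {
      type: 'FeatureCollection',
      features: [{
        type: 'Feature',
        properties: {
          model: 'ancient_man.glb', // hypothetical model file
          speed_mps: 1.2            // hypothetical walking speed (m/s)
        },
        geometry: {
          type: 'LineString',
          coordinates: [[23.4631, 40.6202], [23.4635, 40.6205], [23.4640, 40.6203]]
        }
      }]
    };

Because every vertex of such a path carries a spatial reference, the same layer can drive both the authoring-tool preview and the location-based triggering on site.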
The next section presents a review of the conjunction of MR with LBS methods and tools. Most importantly, it attempts to identify critical features of contemporary MR authoring tools and highlights the contribution of “Mergin’ Mode”. The third section provides information about the technical details, technologies and standards employed for the development and demonstration of the “Mergin’ Mode” software prototype. Section 4 provides an extensive demonstration of the system and Section 5 highlights the results and possible future research directions.

2. Similar Works

2.1. Mixed Reality and Location-Based Services: Recent Developments

Many applications combining AR and GIS have been developed during the last decades in various areas: environmental monitoring [14] and changes [15], navigation [16], architecture [17], pipeline prospecting [18], tourist information systems [19], landscape visualization [20], etc. VR and AR are receiving increasing attention in cultural tourism and virtual museums [21,22,23,24,25,26,27,28,29,30]. In fact, the volume of information that must be served via LBS to form an MR-based scene on a mobile smart device could not be supported before the introduction of 4G, given the bandwidth and Internet speeds of the Global System for Mobile Communications (GSM) networks of that era. Therefore, the spatial reference of the involved objects was not a primary specification in the related developments. Some indicative recent developments employing LBS, and thereby involving geoinformatics technologies, are mentioned below:
A survey that includes (a) the technical requirements of MR systems in indoor and outdoor settings and (b) the purposes and the enabling technologies adopted by MR applications in cultural heritage is provided in Bekele et al. [31] (p. 26 and p. 28).
Debandi et al. [24] present research, co-funded by the H2020 European project 5GCity, that resulted in an MR smart guide providing city-scale information about historical buildings, thereby supporting cultural outdoor tourism. The user can select the object (monument/building/artwork) for which augmented content should be displayed (video, text, audio) and can interact with this content through a set of defined gestures. Moreover, if the object of interest is detected and tracked by the MR application, 3D content can also be overlaid and aligned with the real world.
Nóbrega et al. [32] describe a methodology for the fast prototyping of a multimedia mobile application dedicated to urban tourism storytelling. The application can be a game that takes advantage of several location-based technologies, freely available geo-referenced media and augmented reality for immersive gameplay. The goal is to create serious games for tourism that follow a main narrative but whose story can automatically adapt itself to the current location of the player, assimilate possible detours and allow posterior out-of-location playback. Adaptable stories can use dynamic information from map sources, such as points of interest (POI). An application designed for the city of Porto, namely Unlocking Porto, which employs the above-mentioned methodology, is presented. This location-based game with a central, yet adaptable, story guides the player through the main sights along an augmented reality path while playing small games.
Balandina et al. [11] summarize their research in the area of the Internet of Things (IoT) for the development of services for tourists. More specifically, they share ideas for innovative e-Tourism services and present the Geo2Tag LBS platform, which allows the easy and fast development of such services. The proposed platform provides open application programming interfaces (APIs) for local developers to create extension services on top of the available content, and allows automatic binding of new content and its extension with open data from various sources, thereby helping to advertise the regions concerned. They present the Open Museum platform, which employs mixed/augmented reality, and a couple of its implementation instances, namely the Open Karelia and New Moscow systems.
Alkhafaji et al. [13] introduce a list of guidelines for designing mobile location-based learning services with respect to cultural heritage sites. This list was drawn up based on the results of a field user study, carried out with adult end users, that evaluated a prototype mobile application delivering location-based information about cultural heritage sites through mobile phones and smart eyeglasses simultaneously; augmented reality and LBS are both utilized in this app. The paper presents an empirical study that examines aspects of the usability, usefulness and acceptance of the smart learning environment “SmartCity”, designed to deliver instant, location-based information with respect to cultural heritage sites.

2.2. Identifying State-of-the-Art Software

Prior to identifying related state-of-the-art software, this section addresses the scientific–technological areas involved in applications pertinent to the presented work. These areas are characterized by strong synergies between pure computer-graphics and animation-motion technologies with AR and MR technologies. In addition, AR software applications cooperate closely with geospatial software, in order to inherit LBS and GIS capabilities. Therefore, (a) animation-motion, (b) GIS functionalities and (c) VR–AR–MR comprise the three major software areas of interest relevant to the presented work.
Another difficult task was classifying the findings of this review according to their general software type, e.g., API, framework or library; such a classification would cause misunderstandings and illogical comparisons, since many of these terms have similar meanings. Having gathered as much information as possible, a decision was taken to split the final table into three subdivisions based on the distinct contribution of the findings, as follows:
  • Game engines: Their decisive role in 2D and 3D graphics rendering, physics simulation, interactive animation and motion effects places them at the head of the related software table. Beyond recording support for the previously defined technological areas, the table also records each engine’s capability to act as an authoring tool and to operate in a browser.
  • Libraries–platforms–frameworks: These were created to cover a broad range of functionalities, including basic geospatial ones and graphics animations, and to cooperate with other software components to form complete solutions. This category records the same features as the one above.
  • AR tools: This subdivision contains software exclusively focused on AR, which obviously cooperates with software from the above categories to form complete solutions. Some AR-specific, individual features were selected, such as simultaneous localization and mapping (SLAM), geo-location, 2D and 3D image recognition and online/cloud recognition.
Before presenting the full findings of the review, some representatives from each category are briefly described below.
Unity is a real-time 3D development platform that lets artists, designers and developers create immersive and interactive experiences in games, films and entertainment, architecture or any other industry. As of 2018, Unity had been used to create approximately half of the mobile games on the market and 60 percent of augmented reality and virtual reality content [33]. Unreal Engine is an open, advanced real-time 3D creation tool that is continuously evolving to serve as a state-of-the-art game engine, giving creators the freedom and control to deliver cutting-edge content, interactive experiences and immersive virtual worlds [34]. These two engines appear to be the leaders in the field of game engines, and a very recent article attempts to compare them [35].
CityEngine is an advanced 3D modeling software for creating expansive, interactive and immersive urban environments that may be based on real-world GIS data or may showcase a fictional city of the past, present or future. CityEngine fully covers all critical geospatial aspects, such as georeferencing, geolocation and overlaying; however, it does not integrate motion effects on spatial objects [36]. In the same category, vGIS Utilities is a cloud-based app that displays GIS and CAD data using mixed and augmented reality. It does not require specialized hardware or client-provided servers to operate. It connects to Esri ArcGIS and other data sources to aggregate and convert traditional 2D GIS data into 3D visuals. It primarily targets public utilities, municipalities and service providers [37].
Cesium.js is a geospatial visualization framework for 3D mapping on the Web. It is built on top of WebGL, complies with the HTML5 standard, supports industry-standard vector formats (KML, GeoJSON) and is open source and cross-platform. It also supports 3D model animation and user-controlled motion over the terrain. Another noteworthy JavaScript library that also exploits WebGL renderer capabilities, with numerous contributors and free examples of animation and extreme motion effects, is Three.js [38].
Wikitude is a leading AR Software Development Kit (SDK) for developing apps for smart devices across all platforms; it can recognize, track and augment images, objects, scenes and geographical locations, using native or JavaScript APIs or other extensions (e.g., Unity) [39]. ARKit makes use of the very recently released [40] iPad LiDAR scanner and depth-sensing system to create realistic AR experiences. Via its API it is possible to capture a 3D representation of the world in real time, enabling object occlusion and real-world physics for virtual objects [41]. ARCore is Google’s platform for building augmented reality experiences using APIs across Android and iOS, enabling mobile phones to sense the environment, understand the world and interact with information [42].
All findings of this extensive search [43,44,45,46] are collected in Table 1 for the reader’s convenience. However, it should be clearly noted that the table does not aim to act as a product-comparison catalogue. Although every effort was made to gather all relevant material, the table should be considered a non-exhaustive collection of existing state-of-the-art software solutions in the broad area of the presented work. In any case, the result is changeable and needs continual updating, because many of the presented findings may soon be deprecated, merged with others, or no longer supported or updated.

2.3. Specifying “Mergin’ Mode”

There are several solutions for developing MR environments and visualizations that exploit powerful 3D simulation engines and offer immersive experiences (e.g., Unity, OpenSceneGraph). Moreover, the geospatial community extends the capabilities of GIS software, in order to provide sophisticated 3D geospatial visualizations (e.g., CityEngine, vGIS).
The increasing demand for LBS involving MR technologies and advanced animation and motion effects justifies the need for developing apps that combine features from all the areas mentioned above. Table 2 presents in detail the specifications deemed necessary for a project to satisfy this need. The table also summarizes the specifications of “Mergin’ Mode”.

3. Materials and Methods

3.1. System Development

The development of “Mergin’ Mode” is based on the principles of openness [47], interoperability [48] and independence from specialized or third-party software on the end-user side, and utilizes:
  • Open platforms and programming languages supported by high-capability open libraries and frameworks.
  • Open data and web services and interoperability standards introduced by international open standards communities, such as the W3C (World Wide Web Consortium) and the OGC (Open Geospatial Consortium).

3.1.1. Open Software Development Platforms

The software prototype application adopts a component-based software engineering approach [49], employs cutting-edge web technologies and embodies code components, frameworks and libraries highly rated and widely used by the open source community. All the components used are stored in, and can be accessed through, the node package manager (NPM). This approach ensures the continuous upgrading of the application components and ties the project lifecycle to the lifecycle of each component individually.
Regarding the server setup, the “everything on one server” approach is adopted, where the entire environment resides on a single server [50]. This includes NGINX as a load balancer and web server, Node.js/Express.js as the application server and PostgreSQL/PostGIS as the database server. All components are hosted on a Linux Debian 9 machine with 1 CPU, 25 GB of storage and 1 GB of RAM, provided by the Linode free cloud migration service, and may be upgraded according to the needs of “Mergin’ Mode”. The use of the Docker open source software platform ensures secure building and sharing of the application with the community.
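As a rough sketch of how the application-server layer of this setup could expose PostGIS content to the client, consider the following; the database, table and column names (merginmode, models, geom) are assumptions for illustration and not the actual schema:

    // Minimal Express endpoint serving georeferenced model placements as GeoJSON.
    const express = require('express');
    const { Pool } = require('pg');

    const app = express();
    const pool = new Pool({ database: 'merginmode' }); // hypothetical database name

    app.get('/api/models', async (req, res) => {
      // ST_AsGeoJSON converts the PostGIS geometry column to GeoJSON.
      const { rows } = await pool.query(
        'SELECT id, name, ST_AsGeoJSON(geom)::json AS geometry FROM models'
      );
      res.json(rows);
    });

    app.listen(3000); // NGINX proxies incoming requests to this port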
The application development takes advantage of the state management supported by React v16.13.1 and Redux, which makes it extremely fast and responsive as far as rendering is concerned. In terms of 3D rendering, the recently released Three.js r114 has completely taken on the weight of creating custom 3D geospatial worlds and enriching them with animated and inanimate georeferenced models. The user may navigate in custom worlds and observe them by modifying the position and angle of view, which in turn appropriately triggers virtual events and storytelling. Individual features of the platform (navigation bar, side panels, modals) were developed by employing code components available on NPM. Finally, for any transformation between coordinate systems, including datum transformations, Proj4.js is employed.
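As an illustration, a datum transformation from WGS84 to the Greek Geodetic Reference System (GGRS87, EPSG:2100) could be expressed with Proj4.js roughly as follows; the EPSG:2100 definition string is quoted from common proj4 registries and should be verified before use:

    import proj4 from 'proj4';

    // Register the Greek Grid (GGRS87, EPSG:2100); definition as found in
    // common proj4 registries (verify against an authoritative source).
    proj4.defs('EPSG:2100',
      '+proj=tmerc +lat_0=0 +lon_0=24 +k=0.9996 +x_0=500000 +y_0=0 ' +
      '+ellps=GRS80 +towgs84=-199.87,74.79,246.62,0,0,0,0 +units=m +no_defs');

    // Transform a WGS84 longitude/latitude pair to projected GGRS87 metres.
    const [x, y] = proj4('EPSG:4326', 'EPSG:2100', [23.4631, 40.6202]);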

3.1.2. Open Data and Services

The system utilizes geospatial data and services at all distinct phases of its development and operation to meet the requirements of interoperability [48], reusability [51] and invocability [52] of geospatial data and services. These requirements are currently set by the relevant European Union legal framework and are constantly updated with new regulations and decisions for the implementation of the INSPIRE [51] Directive, as prerequisites for the successful completion of a project and for the successful dissemination and diffusion of its outputs and results, when these possess spatial identity or may be spatially represented.
Data and services are interrelated concepts as data can be made available through specialized web services. As regards the geospatial data and services of the system, the following open interoperability standards are adopted:
  • GML [53] (Geography Markup Language): the XML-based standard that provides the ability to describe and transfer data and application schemas.
  • WMS [54] (Web Map Service) and WMTS (Web Map Tile Service) [55] for serving image maps and the texture of any model or DTM/DSM that makes up a custom virtual geospatial world.
  • WFS [56] (Web Feature Service) to serve spatial features and TSML [57] (TimeseriesML) to serve locations and motion paths of 3D models.
  • I3S [58] (Indexed 3D Scene Layers) to serve arbitrarily large amounts of heterogeneously distributed 3D geographic data.
  • ARML [59] (Augmented Reality Markup Language) to describe virtual objects in an augmented reality (AR) scene with their appearances and their anchors.
Data import is a major functionality because it satisfies the need of creating custom virtual geospatial worlds with a variety of resources (3D models, vector and raster data, spreadsheet files, etc.).
Data import from a local source is achieved by utilizing the FileReader object, which lets web applications asynchronously read the contents of files (or raw data buffers) stored on the user’s computer [60]. This approach bypasses the limitation of Three.js, which only supports loading models via XMLHttpRequest [61] objects, used by applications to interact with servers. The prototypes of the Three.js functions used to load models (GLTFLoader and FBXLoader) are then extended to support this new input path.
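A minimal sketch of this local-import path is shown below; GLTFLoader.parse() accepts the raw ArrayBuffer that FileReader produces, which is what allows the XMLHttpRequest route to be bypassed (the scene argument is assumed to be the active Three.js scene):

    import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

    // Read a locally selected .glb/.gltf file and hand its raw buffer
    // directly to Three.js, bypassing URL-based loading.
    function loadLocalModel(file, scene) {
      const reader = new FileReader();
      reader.onload = (event) => {
        new GLTFLoader().parse(
          event.target.result, '',         // ArrayBuffer, resource path
          (gltf) => scene.add(gltf.scene), // add the parsed model to the scene
          (error) => console.error(error)
        );
      };
      reader.readAsArrayBuffer(file);      // asynchronous local read
    }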
Data import from an online data source is supported out of the box, taking advantage of the representational state transfer (REST) software architectural style and web services. Online data could form a chained geospatial web service and be provided by any of the OGC Web Services standards. Specifically, WMS and WMTS can be used to serve the texture of any model or DTM/DSM that makes up the custom world, and WFS and TSML can be used to serve the location of a model or the path of its movement. Alternatively, a whole static scene, excluding movements and animations, can be served through the I3S standard. This may subsequently be combined with a standard that supports spatiotemporal data (e.g., time series) so that custom virtual geospatial worlds containing motion and animation effects of spatial objects can also be served.
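As an example of this chaining, a standard WMS 1.3.0 GetMap request can be consumed directly as a Three.js texture for the terrain surface; the endpoint URL, layer name and bounding box below are placeholders:

    import * as THREE from 'three';

    // Standard WMS 1.3.0 GetMap request (placeholder service and layer).
    const wmsUrl = 'https://example.org/wms'
      + '?SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap'
      + '&LAYERS=orthophoto&STYLES=&CRS=EPSG:2100'
      + '&BBOX=455000,4496000,455500,4496500'      // illustrative extent (m)
      + '&WIDTH=1024&HEIGHT=1024&FORMAT=image/png';

    // Apply the returned map image as the terrain texture.
    new THREE.TextureLoader().load(wmsUrl, (texture) => {
      terrainMesh.material.map = texture;          // terrainMesh assumed in scope
      terrainMesh.material.needsUpdate = true;
    });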
All work can be exported in the same formats in which it was imported. This means that the export can be individual XML files structured according to one of the OGC standards, a JSON file, a scene layer package (SLPK) file or even an SLPK zip file including all resources, of type Zip64 (I3S).

3.2. Technological and Research Areas

The main technological and research areas related to “Mergin’ Mode” implementation include:
  • Contemporary photogrammetric surveying technologies using high-capacity cameras and/or cameras on UAVs.
  • Image processing technologies for object recognition in images and videos for MR rendering.
  • Geoinformatics technologies for core GIS based functionalities, such as thematic layers overlaying and georeferencing.
  • Global satellite system technologies, for the visitor’s spatial reference in the archaeological-touristic site to achieve location-based servicing.

3.2.1. Photogrammetrical Mapping

Photogrammetry is the method of dimensioning and extracting high-precision measurement data from photographs. As a method, it requires systematic image capture of the objects in order to produce 3D models of them. The systematicity of image capture lies in the necessity of collecting photos with significant overlap, so that each object appears in more than three images. Modern photogrammetry specifies a minimum overlap of 75% on each image [62]. However, the accuracy of the measurement data depends not only on the systematic collection of photographs but also on other factors, such as (1) the interior orientation, which in modern digital cameras is described by (a) the physical size of the sensor; (b) the number of sensor pixels (resolution); (c) the distance of the sensor from the lens’s center of focus; and (d) the lens’s focal length in mm; and (2) the exterior orientation of each shot, described by the physical location of the camera collecting the photographs.
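These interior-orientation parameters determine the achievable ground sampling distance (GSD), i.e., the ground footprint of one image pixel. A small helper implementing the standard GSD relation is sketched below, with illustrative (not project-specific) values:

    // Ground sampling distance in metres/pixel:
    //   GSD = (sensorWidth [mm] * flightHeight [m]) / (focalLength [mm] * imageWidth [px])
    function groundSamplingDistance(sensorWidthMm, focalLengthMm, flightHeightM, imageWidthPx) {
      return (sensorWidthMm * flightHeightM) / (focalLengthMm * imageWidthPx);
    }

    // Example: a 13.2 mm wide (1″) sensor with an 8.8 mm lens and 5472 px wide
    // images, flown at 100 m, yields roughly 0.027 m/pixel (illustrative values).
    const gsd = groundSamplingDistance(13.2, 8.8, 100, 5472);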
The photogrammetric processing software used for “Mergin’ Mode” demonstration purposes is the Swiss-based Pix4Dmapper Pro. It is one of the most recognized tools of modern photogrammetry and is capable of producing 3D point clouds, 3D textured grids, 2D orthophotomaps, 2D surface models and 2D terrain models.

3.2.2. Cameras

Almost any camera may be used to produce photogrammetric 3D models. However, depending on the desired accuracy of the result, appropriate combinations of sensors, resolutions and lenses are used. One of the main advantages of the photogrammetric surveying method is that, in theory, the accuracy can be increased as much as the user wishes, either by changing one of the above parameters (lens, resolution, sensor) or by changing the capture distance to the objects.
The following are some of the representative cameras for the capturing needs of “Mergin’ Mode” demonstration:
  • GoPro Hero 3 Black (12 Mpixel, wide-angle lens, CMOS sensor).
  • Panasonic Lumix GX80 (16 Mpixel, 28–70 mm lens, M4/3 sensor).
  • DJI OSMO X3 camera (12 Mpixel, 32 mm lens, 2/3″ sensor).
  • NCTech iSTAR 360 (14 Mpixel 360° camera).
  • RICOH Theta S 360 (50 Mpixel 360° camera).

3.2.3. Unmanned Aerial Vehicles

The use of UAVs to capture photographs for photogrammetric processing is a standard technique in modern photogrammetry. The UAVs carry the cameras and allow placement at heights and angles appropriate for modeling objects which may extend over several thousand acres. Therefore, they are a key ally for fast data collection and for reaching points that are difficult or even impossible to access.
UAVs are divided into two basic families: fixed-wing systems (airplanes) and rotary-wing systems (multicopters). The former behave like typical airplanes; that is, they require space for take-off and landing, are constantly moving and are highly efficient in terms of autonomy and coverage. The latter are vertical take-off and landing systems and are particularly agile, their main advantage being the capability to hover.
The following are some of the representative UAVs for the capturing needs of “Mergin’ Mode” demonstration:
  • senseFly eBee Plus (20Mpixel, 1″ sensor).
  • DJI Phantom 4 Pro (20Mpixel, 1″ sensor).
  • Parrot Anafi (21Mpixel, 1/2.4″ sensor).

3.2.4. Object Recognition

Analyzing and understanding the content of digital images is nowadays the focus of the digital image processing field and an ongoing research challenge for contemporary technological developments. It serves many disciplines, such as machine and robotic vision, geoinformatics, bioinformatics, computer graphics and virtual reality, and collaborates with scientific fields such as pattern recognition, neural networks, fuzzy logic and artificial intelligence. Object recognition is involved in a vast variety of applications of contemporary technology and is often an integral part of most technology projects and products.

3.2.5. Mixed Reality and Space

MR is the synthesis of a world consisting of real and virtual objects that coexist and interact in real time. As Figure 2 below illustrates, MR occupies the middle ground between the real and the virtual environments as it combines elements from both and encompasses the concepts of augmented reality and augmented virtuality [63].
Therefore, MR involves the representation of digitally generated imagery together with images captured from real life, both rendered on a visual display medium. This could be a computer display, a camera or specialized equipment, such as the popular Microsoft HoloLens product [64]. For a venture like the proposed one, the final imaging medium is of particular importance in terms of final cost and appeal to the public; in the present context, this is the end-user smart device, which may be the mobile phone itself.
In order to implement a hybrid representation, the two worlds, the virtual and the real, and their recognized objects, must have a spatial reference to a geographical coordinate system. For this reason, both virtual modeling and real-world imaging in the context of this proposal are based on geoinformatics technologies. These include primarily GIS-based functionalities that support overlaying of multiple vector and raster thematic layers and georeferencing of any type of involved object.
A particularly critical feature that significantly fosters the functionality of MR applications is the interaction between virtual and real objects [65]. However, this interaction can only be implemented insofar as the real-world objects are recognized by the MR system in the same way as the virtual ones. In this case, the real objects possess both physical and digital properties and may be termed mixed objects [66]. Thereby, it becomes possible to apply interactivity conditions between virtual and mixed objects merged in a mixed (reality) environment [67]. Some intuitive cases demonstrating MR require immersive technologies and include interaction of the end user with both physical objects of the surrounding real environment and virtual objects. For example, a visitor of a site wearing a head-mounted display touches a physical or a virtual object, and this action unfolds informative digital content. In addition, the physical interaction between the visitor and a virtual object leads to results similar to those occurring in the visitor’s interaction with physical objects. Alternatively, as demonstrated later in the present work, the same content unfolds on the screens of the visitors’ smart devices when they tap on the mixed or virtual object. The visitor may also dynamically interrupt and alter the behavior of virtual objects. Another example is occlusion [68], where the end-user interface “realizes” that a moving virtual object is, at a given time and user location, behind a real object (or vice versa) and renders the mixed scene appropriately by hiding the virtual object (or part of it) behind the real object. Although this capability is not a sine qua non for MR applications [69], it nevertheless represents an interaction between mixed (real) and virtual objects, an invaluable feature for many MR environments, as is the case with “Mergin’ Mode.”

4. Results

At the operational level, the system architecture, illustrated in Figure 3, includes the distinct users/actors and the data, services and equipment technologies involved:
  • The repository of virtual geospatial world models, together with data and metadata that document the cultural-tourism resources represented through these models;
  • Web services and servers that make cultural content available;
  • Global positioning, video capture and internet connection technologies.
Two distinct software components are identified: (a) the “Mergin’ Mode” manager component, concerning the authority responsible for preparing the cultural content that will be served or deposited in the open repositories, and (b) the “Mergin’ Mode” end-user component (app), concerning the visitor of the site, who will perceive the digital content (virtual world) merged with the real world in MR.

4.1. “Mergin’ Mode” Authoring Tool

To develop a complete set of cultural data for the demonstration of a monument, a custom virtual geospatial world has to be prepared. The following paragraphs describe the minimum components required to develop a virtual geospatial world and those utilized for the purposes of the present demonstration.

4.1.1. 3D Models of the Monument Area in its Current Condition

The 3D model of the area of the monument in its current condition, as captured by high-capacity cameras and processed with photogrammetric techniques, is required to enable the MR functionality described in Section 3.2.5.
For the purposes of the present demonstration, an archaeological site located in Apollonia, northern Greece [70] has been selected. Figure 4a depicts an aerial photo of the Ottoman bath and Figure 4b the photogrammetric process performed.
The shots were taken at different heights and distances from the monument. In total, 229 shots were taken, with an average overlap of 85% or more. The unmanned aerial vehicle used for shooting was Parrot’s Anafi, with a 21 Mpixel camera, while the orbits and shots were performed with the Pix4D Capture software by Geosense [71].

4.1.2. 3D Model of the Monument Area in its Past Condition

3D models of the monument and the archeological findings associated with it may be reconstructed in the form they had in the past by utilizing specialized 3D CAD software. However, for the reconstruction to be successful, the managing authority of the monument needs to possess significant material, analogue or digitized: maps, archival documents, photographs, etc. Figure 5 illustrates some views of the 3D reconstruction of the Ottoman bath.

4.1.3. Other 3D Models

Other 3D models, stable or moving, animate or inanimate, with or without motion effects, may populate the above-mentioned models with the aim of “reviving” historical events over the archaeological site and enriching the visitor’s experience. For the purposes of the presented demonstration, free tree models were selected (Figure 6) and a model of an ancient man was constructed.

4.1.4. Creating the Custom Virtual Geospatial World of the Monument

The creation of the custom virtual geospatial world of the monument practically means putting all the previously described 3D models together, along with motion effects where appropriate, to produce a 3D scene of the monument. The “Mergin’ Mode” authoring tool offers easy-to-use tools for importing 3D point clouds, DTM/DSM and 3D textured meshes of the area; specifying or importing vector layers for placing other 3D models of animate or inanimate objects; and specifying motion paths for the latter where needed. Figure 7 provides an overview of the authoring tool’s user interface.
An indicative workflow for creating a custom virtual geospatial world consists of the following activities (Figure 8), with a minimal code sketch of the core steps given after the list:
  • Importing the DSM/DTM of the area.
  • Importing the surface texture.
  • Importing the monument.
  • Importing, rotating and scaling other 3D models.
  • Placing models at selected locations.
  • Implementing motion effects.
  • Combining all to make a “living geospatial world.”
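A minimal Three.js sketch of the core steps above (importing a model, placing it and enabling its motion effects) might look as follows; the file name, positions and the assumption that the model carries an animation clip are illustrative:

    import * as THREE from 'three';
    import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

    const scene = new THREE.Scene();
    const mixers = [];
    const clock = new THREE.Clock();

    // Import a 3D model, place and orient it in the (locally projected)
    // geospatial scene and start its animation clip, if one exists.
    new GLTFLoader().load('ancient_man.glb', (gltf) => { // placeholder file
      gltf.scene.position.set(12.5, 0, -8.0);            // placeholder coordinates
      gltf.scene.rotation.y = Math.PI / 2;               // face along the motion path
      scene.add(gltf.scene);
      if (gltf.animations.length > 0) {
        const mixer = new THREE.AnimationMixer(gltf.scene);
        mixer.clipAction(gltf.animations[0]).play();     // e.g., a walking cycle
        mixers.push(mixer);
      }
    });

    // Render loop: advance all animations by the elapsed frame time.
    function animate() {
      requestAnimationFrame(animate);
      const delta = clock.getDelta();
      mixers.forEach((mixer) => mixer.update(delta));
      // renderer.render(scene, camera) omitted for brevity
    }
    animate();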
Navigation in the monument area and demonstration of the custom virtual geospatial world can be checked at the “Mergin’ Mode” website available online: https://mergin-mode.prieston.tech (accessed on 8 May 2020).

4.2. The “Mergin’ Mode” End-User Component (App)

As already described (Section 3.2.1), the objects of a 3D scene of a custom virtual geospatial world can be exported and served via geospatial web services during navigation on site, or may be downloaded off-site for offline use. As the user moves and navigates inside the “influence area” of the site and observes it, and as the position and angle of view of the smart device change, virtual events and storytelling are appropriately enabled and visualized.
The end-user component is a typical app that handles the satellite navigation system, the gyroscope and the camera of a smart device, merging the real with the virtual to present the content in an MR mode.
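In a browser context, the three device capabilities mentioned above are typically accessed roughly as follows (a sketch; permission handling and platform differences are omitted, and the update* callbacks are assumed application code):

    // Camera feed: the real-world layer of the MR scene.
    navigator.mediaDevices.getUserMedia({ video: { facingMode: 'environment' } })
      .then((stream) => { videoElement.srcObject = stream; }); // videoElement assumed

    // Continuous positioning for location-based triggering of content.
    navigator.geolocation.watchPosition((pos) => {
      updateUserPosition(pos.coords.latitude, pos.coords.longitude);
    }, console.error, { enableHighAccuracy: true });

    // Device orientation (gyroscope/compass) steering the virtual camera.
    window.addEventListener('deviceorientation', (e) => {
      updateCameraOrientation(e.alpha, e.beta, e.gamma);
    });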
Figure 9 illustrates the implementation of a mixed object. The Ottoman bath, previously captured and photogrammetrically processed (Section 4.1.1), is transformed into a virtual object presented as a textured mesh in the bottom-right picture. The other virtual object (a tree) is partially covered by the Ottoman bath. A transparent mask is applied on top of the Ottoman bath (upper-right picture).
The 3D model of the Ottoman bath, totally covered by the transparent mask, allows the real one, captured by the visitor’s camera, to be displayed. Both objects, the 3D model and the real one, compose the mixed object, which in turn interacts with the virtual tree and partially covers it (left picture).
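One way to realize such a transparent mask in Three.js is an occluder material that writes to the depth buffer but not to the color buffer, so the camera feed remains visible while virtual objects behind the mesh are clipped; a sketch under that assumption (bathMesh being the captured 3D model):

    import * as THREE from 'three';

    // Occluder: the photogrammetric mesh writes depth but no color, so the
    // real bath (camera feed) shows through while virtual objects behind
    // it, such as the tree, are hidden by the depth test.
    bathMesh.material = new THREE.MeshBasicMaterial({ colorWrite: false });
    bathMesh.renderOrder = -1; // render the occluder before the virtual objects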
Figure 10 provides instances of end-user interaction with mixed and virtual objects via the smart device’s touch screen. This is implemented by utilizing the Raycaster object of Three.js (upper-left picture). Tapping on the screen, in conjunction with the camera position, creates a ray that intersects a mixed (lower-left picture) or a virtual (upper-right picture) object. The user receives informative content and reconstructions (lower-right picture) and may also dynamically alter a virtual object’s behavior, e.g., its position and motion.
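A sketch of this tap-to-select mechanism with the Three.js Raycaster is given below; showInfoPanel stands in for the application’s content-display logic:

    import * as THREE from 'three';

    const raycaster = new THREE.Raycaster();
    const pointer = new THREE.Vector2();

    // Convert a screen tap to normalized device coordinates, cast a ray from
    // the camera and pick the first mixed or virtual object it intersects.
    window.addEventListener('pointerdown', (event) => {
      pointer.x = (event.clientX / window.innerWidth) * 2 - 1;
      pointer.y = -(event.clientY / window.innerHeight) * 2 + 1;
      raycaster.setFromCamera(pointer, camera);           // camera assumed in scope
      const hits = raycaster.intersectObjects(scene.children, true);
      if (hits.length > 0) showInfoPanel(hits[0].object); // unfold informative content
    });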
Finally, Figure 11 shows the reconstruction of the Ottoman bath in a mergin’ mode: the real Ottoman bath captured by the visitor’s camera is merged with the reconstructed 3D model of the virtual bath.

5. Discussion

“Mergin’ Mode” aims at enabling the web of cultural data. Ideally, this can be achieved through the development of a wide network of cultural heritage promoters acting as cultural content providers/emitters. In an ideal case, every cultural authority will publish custom virtual geospatial worlds containing 3D scenes of the archaeological sites with representations and animations. In practical terms, this means that a user located in an area, wishing to visit a destination or having scheduled a route, will receive web services including information on available cultural heritage resources based on his/her current location. The transmission of cultural content is activated by utilizing various techniques, with geofencing [72] and geotargeting [73] being the most appropriate ones when the receivers/mobile users are located in the area of the cultural site.
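A minimal geofencing check of the kind described compares the haversine distance between the user and the site centre against a trigger radius; the coordinates, radius and callback below are illustrative:

    // Haversine distance in metres between two WGS84 points.
    function distanceM(lat1, lon1, lat2, lon2) {
      const R = 6371000, rad = Math.PI / 180;
      const dLat = (lat2 - lat1) * rad, dLon = (lon2 - lon1) * rad;
      const a = Math.sin(dLat / 2) ** 2 +
                Math.cos(lat1 * rad) * Math.cos(lat2 * rad) * Math.sin(dLon / 2) ** 2;
      return 2 * R * Math.asin(Math.sqrt(a));
    }

    // Deliver cultural content when the user enters the site's geofence.
    const site = { lat: 40.6202, lon: 23.4631, radiusM: 300 }; // illustrative values
    navigator.geolocation.watchPosition((pos) => {
      if (distanceM(pos.coords.latitude, pos.coords.longitude,
                    site.lat, site.lon) < site.radiusM) {
        downloadCulturalContent(site); // application callback (assumed)
      }
    });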
The above rationale may well be applied to an urban area with numerous sites of cultural interest and, therefore, numerous potential emitters of cultural content. An end user of the “Mergin’ Mode” app may potentially receive, on the screen of his/her smart device, historical events and representations of the whole area. The content unfolds in accordance with his/her location, and the scenes follow one another based on the order of the visit. The content is distributed across multiple cultural authorities, each one serving the content under its responsibility. The content is downloaded and executed in the mobile user’s app while passing through the site, and may also be filtered based on the user profile [74].
Concerning the software and hardware selections made for the system’s development:
  • The system is developed in JavaScript and makes use, among others, of the Three.js JavaScript library, which is developed on top of the WebGL visualization framework. JavaScript has become the dominant programming language of the web, rendered directly by any W3C-compliant web browser without the need to install plug-ins, ensuring interoperability and source code reusability. The “Mergin’ Mode” project will be available on GitHub; therefore, as a further development, many functions developed for the purposes of custom geospatial worlds may be consolidated into a specialized JavaScript library serving cultural informatics developments.
  • Some of the key decisions during system design involved the equipment specifications for the smart app of the visitors of a cultural heritage resource. No equipment is required beyond an average smart device. “Mergin’ Mode” focuses on the capability of the smart device to capture the site with a moderate-capacity camera and to receive, via WiFi, the 3D models of geospatial virtual worlds, rather than to locate the exact position of the end user. Special equipment is only needed for photogrammetric surveying, the development of DTM/DSM and the 3D modeling of cultural heritage sites. However, these activities are typically carried out by third-party contractors and depend on the needs, the financial capacity and the maturity of the managing authority responsible for the cultural heritage resources.
Finally, an interesting use case might be the development of a “past world finder” application, enabled by the Directory OpenLS service interface [75]. It might utilize this service to find custom worlds that fit, among other things, flexible and/or specified temporal criteria corresponding to the diversity of archaeology and history, in contrast with common date formats (e.g., custom worlds near the user dated before 100 B.C., or worlds of ancient times).

Author Contributions

Conceptualization, K.E.; methodology, K.E. and T.P.; software, T.P.; validation, K.E. and S.S.; formal analysis, K.E.; investigation, S.S.; writing—original draft preparation, K.E.; writing—review and editing, S.S.; visualization, S.S and T.P.; supervision, K.E.; project administration, K.E.; funding acquisition, K.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the Special Action “Open Innovation in Culture” (project code: T6YBΠ-00297).

Acknowledgments

The authors would like to thank Maria Tsiapali, director; Athina Tokmakidou, archaeologist; and Stella Kassidou, administrative staff, at the Ephorate of Antiquities of Thessaloniki City for the information provided and their fruitful cooperation; and Paschalis Androudis, assistant professor at the School of History and Archaeology of the Aristotle University of Thessaloniki, for his valuable advice concerning the reconstruction of the Ottoman bath. The authors would also like to thank Dimitris Charpouzanis, President of the Apollonia village, and the Municipality of Nea Apollonia for their help in clearing the monument’s surrounding area. Special thanks go to the members of the participating companies of the “Mergin’ Mode” project, Vassilios Efopoulos and Anastasia Koliakou from TESSERA Multimedia and Vassilios Polychronos and Dimitrios Ramnalis from GEOSENSE.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Oxera. What Is the Economic Impact of Geoservices? Prepared for Google. 2013. Available online: https://www.oxera.com/publications/what-is-the-economic-impact-of-geo-services/ (accessed on 13 March 2020).
  2. Serifoglu Yilmaz, C.; Gungor, O. Comparison of the performances of ground filtering algorithms and DTM generation from a UAV-based point cloud. Geocarto. Int. 2018, 33, 522–537. [Google Scholar] [CrossRef]
  3. Giannetti, F.; Chirici, G.; Gobakken, T.; Næsset, E.; Travaglini, D.; Puliti, S. A new approach with DTM-independent metrics for forest growing stock prediction using UAV photogrammetric data. Remote Sens. Environ. 2018, 213, 195–205. [Google Scholar] [CrossRef]
  4. Salach, A.; Bakuła, K.; Pilarska, M.; Ostrowski, W.; Górski, K.; Kurczyński, Z. Accuracy assessment of point clouds from LiDAR and dense image matching acquired using the UAV platform for DTM creation. ISPRS Int. J. Geo-Inf. 2018, 7, 342. [Google Scholar] [CrossRef] [Green Version]
  5. Brutto, M.L.; Meli, P. Computer vision tools for 3D modelling in archaeology. Int. J. Herit. Digit. Era 2012, 1 (Suppl. 1), 1–6. [Google Scholar] [CrossRef] [Green Version]
  6. Evangelidis, K.; Papadopoulos, T.; Papatheodorou, K.; Mastorokostas, P.; Hilas, C. 3D geospatial visualizations: Animation and motion effects on spatial objects. Comput. Geosci. 2018, 111, 200–212. [Google Scholar] [CrossRef]
  7. Cherry, S. Edholm’s law of bandwidth. IEEE Spectrum 2004, 41, 58–60. [Google Scholar] [CrossRef]
  8. Evangelidis, K.; Ntouros, K.; Makridis, S.; Papatheodorou, C. Geospatial services in the Cloud. Comput. Geosci. 2014, 63, 116–122. [Google Scholar] [CrossRef] [Green Version]
  9. Haller, M. (Ed.) Emerging Technologies of Augmented Reality: Interfaces and Design: Interfaces and Design; Igi Global: Hershey, PA, USA, 2006. [Google Scholar]
  10. Ver Hague, J.; Jackson, C. Flash 3D: Animation, Interactivity, and Games; CRC Press: Boca Raton, FL, USA, 2012. [Google Scholar]
  11. Balandina, E.; Balandin, S.; Koucheryavy, Y.; Mouromtsev, D. Innovative e-tourism services on top of Geo2Tag LBS platform. In Proceedings of the 2015 11th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Bangkok, Thailand, 23–27 November 2015; IEEE: Piscataway, NJ, USA; pp. 752–759. [Google Scholar]
  12. Chianese, A.; Marulli, F.; Moscato, V.; Piccialli, F. A “smart” multimedia guide for indoor contextual navigation in cultural heritage applications. In Proceedings of the International Conference on Indoor Positioning and Indoor Navigation, Montbeliard-Belfort, France, 28–31 October 2013; IEEE: Piscataway, NJ, USA; pp. 1–6. [Google Scholar]
  13. Alkhafaji, A.; Cocea, M.; Crellin, J.; Fallahkhair, S. Guidelines for designing a smart and ubiquitous learning environment with respect to cultural heritage. In Proceedings of the 2017 11th International Conference on Research Challenges in Information Science (RCIS), Brighton, UK, 10–12 May 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 334–339. [Google Scholar]
  14. Veas, E.; Grasset, R.; Ferencik, I.; Grünewald, T.; Schmalstieg, D. Mobile augmented reality for environmental monitoring. Pers. Ubiquitous Comput. 2013, 17, 1515–1531. [Google Scholar] [CrossRef]
  15. Ghadirian, P.; Bishop, I.D. Composition of augmented reality and GIS to visualize environmental changes. In Proceedings of the joint AURISA and Institution of Surveyors Conference, Adelaide, South Australia, 25–30 November 2002; pp. 25–30. [Google Scholar]
  16. Jang, S.H.; Hudson-Smith, A. Exploring mobile augmented reality navigation system for pedestrians. In Proceedings of the GIS Research UK 20th Annual Conference GISRUK, Lancaster, UK, 11th–13th April 2012. [Google Scholar]
  17. Guo, Y.; Du, Q.; Luo, Y.; Zhang, W.; Xu, L. Application of augmented reality GIS in architecture. In Proceedings of the ISPRS Congress Beijing 2008, Beijing, China, 3–11 July 2008. [Google Scholar]
  18. Zhang, X.; Han, Y.; Hao, D.; Lv, Z. ARPPS: Augmented reality pipeline prospect system. In Proceedings of the International Conference on Neural Information Processing, Istanbul, Turkey, 9–12 November 2015; Springer: Cham, Switzerland, 2015; pp. 647–656. [Google Scholar]
  19. Lin, P.J.; Kao, C.C.; Lam, K.H.; Tsai, I.C. Design and implementation of a tourism system using mobile augmented reality and GIS technologies. In Proceedings of the 2nd International Conference on Intelligent Technologies and Engineering Systems (ICITES 2013); Springer: Cham, Switzerland, 2014; pp. 1093–1099. [Google Scholar]
  20. Ghadirian, P.; Bishop, I.D. Integration of augmented reality and GIS: A new approach to realistic landscape visualisation. Landsc. Urban Plan. 2008, 86, 226–232. [Google Scholar] [CrossRef]
  21. Kabassi, K. Personalisation Systems for Cultural Tourism. In Multimedia Services in Intelligent Environments; Smart Innovation, Systems and Technologies; Tsihrintzis, G., Virvou, M., Jain, L., Eds.; Springer: Heidelberg, Germany, 2013; Volume 25. [Google Scholar]
  22. Tscheu, F.; Buhalis, D. Augmented Reality at Cultural Heritage sites. In Information and Communication Technologies in Tourism 2016; Inversini, A., Schegg, R., Eds.; Springer: Cham, Switzerland, 2016. [Google Scholar]
  23. Jung, T.; tom Dieck, M.C.; Lee, H.; Chung, N. Effects of Virtual Reality and Augmented Reality on Visitor Experiences in Museum. In Information and Communication Technologies in Tourism 2016; Springer International Publishing: Cham, Switzerland, 2016; pp. 621–635. [Google Scholar] [CrossRef]
  24. Debandi, F.; Iacoviello, R.; Messina, A.; Montagnuolo, M.; Manuri, F.; Sanna, A.; Zappia, D. Enhancing cultural tourism by a mixed reality application for outdoor navigation and information browsing using immersive devices. IOP Conf. Ser. Mater. Sci. Eng. 2018, 364, 12048. [Google Scholar] [CrossRef]
  25. Panou, C.; Ragia, L.; Dimelli, D.; Mania, K. An Architecture for Mobile Outdoors Augmented Reality for Cultural Heritage. ISPRS Int. J. Geo-Inf. 2018, 7, 463. [Google Scholar] [CrossRef] [Green Version]
  26. Raptis, G.E.; Fidas, C.; Avouris, N. Effects of mixed-reality on players’ behaviour and immersion in a cultural tourism game: A cognitive processing perspective. Int. J. Hum. Comput. Stud. 2018, 114, 69–79. [Google Scholar] [CrossRef]
  27. Han, D.-I.D.; Weber, J.; Bastiaansen, M.; Mitas, O.; Lub, X. Virtual and Augmented Reality Technologies to Enhance the Visitor Experience in Cultural Tourism. In Augmented Reality and Virtual Reality 2019; Springer International Publishing: Basel, Switzerland, 2019; pp. 113–128. [Google Scholar] [CrossRef] [Green Version]
  28. Trunfio, M.; Campana, S. A visitors’ experience model for mixed reality in the museum. Curr. Issues Tour. 2020, 23, 1053–1058. [Google Scholar] [CrossRef]
  29. Gavalas, D.; Sylaiou, S.; Kasapakis, V.; Dzardanova, E. Special issue on virtual and mixed reality in culture and heritage. Pers. Ubiquitous Comput. 2020. [Google Scholar] [CrossRef] [Green Version]
  30. Čejka, J.; Zsíros, A.; Liarokapis, F. A hybrid augmented reality guide for underwater cultural heritage sites. Pers. Ubiquitous Comput. 2020, 1–14. [Google Scholar] [CrossRef]
  31. Bekele, M.K.; Pierdicca, R.; Frontoni, E.; Malinverni, E.S.; Gain, J. A Survey of Augmented, Virtual, and Mixed Reality for Cultural Heritage. J. Comput. Cult. Herit. 2018, 11, 1–36. [Google Scholar] [CrossRef]
  32. Nóbrega, R.; Jacob, J.; Coelho, A.; Ribeiro, J.; Weber, J.; Ferreira, S. Leveraging pervasive games for tourism: An augmented reality perspective. Int. J. Creat. Interfaces Comput. Graph. (IJCICG) 2018, 9, 1–14. [Google Scholar] [CrossRef]
  33. Bonfiglio, N. DeepMind Partners with Gaming Company for AI Research. The Daily Dot, 2018. Available online: https://www.dailydot.com/debug/unity-deempind-ai/ (accessed on 18 April 2020).
  34. Epic Games Inc. Unreal Engine. Available online: https://www.unrealengine.com/en-US/ (accessed on 18 April 2020).
  35. GAMEDESIGN. Unreal Engine vs Unity: Which is Better? Available online: https://www.gamedesigning.org/engines/unity-vs-unreal/ (accessed on 18 April 2020).
  36. ESRI. Esri CityEngine. Available online: https://www.esri.com/en-us/arcgis/products/esri-cityengine/overview (accessed on 18 April 2020).
  37. vGIS. BIM and GIS Data in Augmented Reality. Available online: https://www.vgis.io/ (accessed on 18 April 2020).
  38. Evangelidis, K.; Papadopoulos, T. Is there life in Virtual Globes? OSGeo J. FOSS4G 2016 Acad. Track 2016, 16, 50–55. [Google Scholar]
  39. Wikitude GmbH. Wikitude Augmented Reality SDK. 2020. Available online: https://www.wikitude.com/products/wikitude-sdk/ (accessed on 18 April 2020).
  40. Horwitz, J. Apple Releases ARKit 3.5, Adding Scene Geometry API and Lidar Support. 2020. Available online: https://venturebeat.com/2020/03/24/apple-releases-arkit-3-5-adding-scene-geometry-api-and-lidar-support/ (accessed on 18 April 2020).
  41. Apple Inc. 2020. Available online: https://developer.apple.com/augmented-reality/ (accessed on 18 April 2020).
  42. Google Developers. ARCore Overview. 2020. Available online: https://developers.google.com/ar/discover (accessed on 18 April 2020).
  43. Krämer, M.; Gutbell, R. A case study on 3D geospatial applications in the web using state-of-the-art WebGL frameworks. In Proceedings of the 20th International Conference on 3D Web Technology, Heraklion, Greece, 18–21 June 2015; pp. 189–197. [Google Scholar]
  44. HTML5 Game Engines. Which HTML5 Game Engine Is Right for You? Available online: http://html5gameengine.com/ (accessed on 18 April 2020).
  45. GitHub Gist. Engines and Libraries. 2020. Available online: https://gist.github.com/dmnsgn/76878ba6903cf15789b712464875cfdc (accessed on 18 April 2020).
  46. G2. Best Augmented Reality SDK Software. 2020. Available online: https://www.g2.com/categories/ar-sdk (accessed on 18 April 2020).
  47. Daraio, C.; Lenzerini, M.; Leporelli, C.; Naggar, P.; Bonaccorsi, A.; Bartolucci, A. The advantages of an Ontology-Based Data Management approach: Openness, interoperability and data quality. Scientometrics 2016, 108, 441–455. [Google Scholar] [CrossRef]
  48. Percivall, G. The application of open standards to enhance the interoperability of geoscience information. Int. J. Digit. Earth 2010, 3, 14–30. [Google Scholar] [CrossRef]
  49. Heineman, G.T.; Councill, W.T. Component-Based Software Engineering. Putting the Pieces Together; Addison Wesley Longman Publishing Co., Inc.: Boston, MA, USA, 2001. [Google Scholar]
  50. Anicas, M. 5 Common Server Setups for Your Web Application. 2014. Available online: http://www.digitalocean.com/community/tutorials/5-common-server-setups-for-your-web-application (accessed on 18 March 2020).
  51. European Commission. European Commission Directive 2007/2/EC of the European Parliament and of the Council of 14 March 2007 Establishing an Infrastructure for Spatial Information in the European Community (INSPIRE). Off. J. Eur. Union 2007, 50, 1–14. [Google Scholar]
  52. European Commission. Commission Regulation (EU) No 1312/2014 of 10 December 2014 Amending Regulation (EU) No 1089/2010 Implementing Directive 2007/2/EC of the European Parliament and of the Council as Regards Interoperability of Spatial Data Services. 2014. Available online: http://eur-lex.europa.eu/eli/reg/2014/1312/oj (accessed on 16 March 2020).
  53. Geography Markup Language | OGC. Available online: http://www.ogc.org/standards/gml (accessed on 18 March 2020).
  54. Web Map Service | OGC. Available online: http://www.opengeospatial.org/standards/wms (accessed on 18 March 2020).
  55. Web Map Tile Service | OGC. Available online: http://www.opengeospatial.org/standards/wmts (accessed on 18 March 2020).
  56. Web Feature Service | OGC. Available online: http://www.opengeospatial.org/standards/wfs (accessed on 18 March 2020).
  57. TimeseriesML | OGC. Available online: http://www.opengeospatial.org/standards/tsml (accessed on 18 March 2020).
  58. Indexed 3D Scene Layers | OGC. Available online: https://www.ogc.org/standards/i3s (accessed on 18 March 2020).
  59. Augmented Reality Markup Language | OGC. Available online: https://www.ogc.org/standards/arml (accessed on 18 April 2020).
  60. MDN Web Docs. FileReader. 2019. Available online: https://developer.mozilla.org/en-US/docs/Web/API/FileReader (accessed on 18 March 2020).
  61. MDN Web Docs. XMLHttpRequest. 2019. Available online: https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest (accessed on 18 March 2020).
  62. Mikhail, E.M.; Bethel, J.S.; McGlone, J.C. Introduction to Modern Photogrammetry; Wiley: New York, NY, USA, 2001; p. 19. [Google Scholar]
  63. Milgram, P.; Takemura, H.; Utsumi, A.; Kishino, F. Augmented reality: A class of displays on the reality-virtuality continuum. In Telemanipulator and Telepresence Technologies; International Society for Optics and Photonics: Boston, MA, USA, 1995; Volume 2351, pp. 282–292. [Google Scholar]
  64. Evans, G.; Miller, J.; Pena, M.I.; MacAllister, A.; Winer, E. Evaluating the Microsoft HoloLens through an augmented reality assembly application. In Degraded Environments: Sensing, Processing, and Display 2017; International Society for Optics and Photonics: Anaheim, CA, USA, 2017; Volume 10197, p. 101970V. [Google Scholar]
  65. Freeman, R.; Steed, A.; Zhou, B. Rapid scene modelling, registration and specification for mixed reality systems. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology, Monterey, CA, USA, 7–9 November 2005; pp. 147–150. [Google Scholar]
  66. Coutrix, C.; Nigay, L. Mixed reality: A model of mixed interaction. In Proceedings of the Working Conference on Advanced Visual Interfaces, Venezia, Italy, 23–26 May 2006; pp. 43–50. [Google Scholar]
  67. Milgram, P.; Kishino, F. A taxonomy of mixed reality visual displays. IEICE Trans. Inf. Syst. 1994, 77, 1321–1329. [Google Scholar]
  68. Wloka, M.M.; Anderson, B.G. Resolving occlusion in augmented reality. In Proceedings of the 1995 Symposium on Interactive 3D Graphics, Monterey, CA, USA, 9–12 April 1995; pp. 5–12. [Google Scholar]
  69. Shah, M.M.; Arshad, H.; Sulaiman, R. Occlusion in augmented reality. In Proceedings of the 2012 8th International Conference on Information Science and Digital Content Technology (ICIDT2012), Jeju, Korea, 26–28 June 2012; IEEE: Piscataway, NJ, USA, 2012; Volume 2, pp. 372–378. [Google Scholar]
  70. Wikimapia.org. Λουτρό (Απολλωνία) [Bath (Apollonia)]. Available online: http://wikimapia.org/30872346/el/%CE%9B%CE%BF%CF%85%CF%84%CF%81%CF%8C (accessed on 8 May 2020).
  71. GeoSense. Available online: http://www.geosense.gr/en/ (accessed on 8 May 2020).
  72. Reclus, F.; Drouard, K. Geofencing for fleet freight management. In Proceedings of the 2009 9th International Conference on Intelligent Transport Systems Telecommunications (ITST), Lille, France, 20–22 October 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 353–356. [Google Scholar]
  73. Chen, Y.; Li, X.; Sun, M. Competitive mobile geo targeting. Mark. Sci. 2017, 36, 666–682. [Google Scholar] [CrossRef] [Green Version]
  74. Wang, J.; Li, Z.; Yao, J.; Sun, Z.; Li, M.; Ma, W.Y. Adaptive user profile model and collaborative filtering for personalized news. In Proceedings of the Asia-Pacific Web Conference, Harbin, China, 16–18 January 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 474–485. [Google Scholar]
  75. OpenGIS Location Service | OGC. Available online: https://www.ogc.org/standards/ols (accessed on 18 March 2020).
Figure 1. A hypothetical scenario: the visitor of the monument opens the mobile camera and aims it at a point of interest.
Figure 2. Simplified representation of the reality–virtuality continuum.
Figure 3. A lightweight presentation of the system’s operational architecture.
Figure 4. (a) An aerial photo of the bath; (b) photogrammetric mapping of the bath.
Figure 5. Reconstruction of the Ottoman bath.
Figure 6. Static 3D models representing trees, from the Turbosquid website (available online: https://www.turbosquid.com/, accessed on 8 May 2020), and a virtual human wearing clothes of the Ottoman period.
Figure 7. An overview of the “Mergin’ Mode” authoring tool user interface.
Figure 8. The workflow for creating a custom virtual geospatial world.
Figure 9. Implementing a mixed object (the bath) and demonstrating occlusion of a virtual object (a tree); see the rendering sketch following these captions.
Figure 10. Interacting with real (mixed) and virtual objects via the touch screen.
Figure 11. The “Mergin’ Mode” end-user app presenting the reconstructed Ottoman bath merged with its current state in “mergin’ mode.”
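The occlusion shown in Figure 9 requires rendering a geometric proxy of the real monument so that virtual objects behind it are hidden while the camera feed remains visible. The following minimal sketch shows one common three.js technique for this, a depth-only “occluder” material; it is our illustration under that assumption, and the mesh names (bathProxyMesh, virtualTree) are hypothetical rather than taken from the Mergin’ Mode code base.

```javascript
import * as THREE from 'three';

// Depth-only material: writes to the depth buffer but not to colour, so the
// video background stays visible where the real bath stands.
const occluderMaterial = new THREE.MeshBasicMaterial({ colorWrite: false });

// Turn a georeferenced proxy of a real object into an occluder.
function makeOccluder(mesh) {
  mesh.material = occluderMaterial;
  mesh.renderOrder = -1; // drawn first, so virtual objects behind it fail the depth test
  return mesh;
}

// Usage (hypothetical names): the virtual tree disappears wherever the
// real bath's proxy geometry is closer to the camera.
// scene.add(makeOccluder(bathProxyMesh));
// scene.add(virtualTree);
```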
Table 1. An extensive review of related state-of-the-art software.
Name | Open Source | Authoring Tool | Augmented Reality | Mixed Reality | Animation—Motion | GIS Functionalities | Browser-Based
Game engines:
Unreal Engine √*
UNITY √*
Clara.io
CryEngine
Other: Amazon Lumberyard, BuildBox, GamePlay3d, Godot, jMonkeyEngine, LibGDX, OpenSceneGraph
Libraries—Platforms—Frameworks:
CesiumJS
Deck.gl
Blender √*
CityEngine
3DAV
LumaGL
vGIS
ThreeJS
Other: A-Frame, AwayJS, Babylon.js, Blend4Web, ClayGL, Construct 3, Filament, Hilo3d, HoloJS, litescene, Pex, PhiloGL, PhysicsJS, PixiJS, PlayCanvas, SceneJS, stack.gl, Turbulenz, Two.js, voxel.js, x3dom, xeogl, zen-3d
* Support via add-on/plugin
Name | Open Source | SLAM | 2D Image Recognition | 3D Object Recognition | Cloud Recognition | Geo-Location | GIS Functionalities
AR tools:
Wikitude
AuGeo
Layar
ARCore √*
Vuforia
ARKit √*
Other: AR.js, Amazon Sumerian, ARGear, ARToolKit, Augment, AvatarPartners, blippar, BLUairspace, DeepAR, DroidAR, EasyAR, EON 9 Studio, Inde, Broadcast AR Development, Insider Navigation, Kudan, LiveAvatar, Lumin, Maxst, OpenSpace3D, Pikkart AR SDK, PlugXR, ScapeKit, Triple, Vidinoti, ViewAR, VISCOPIC Pins, VisionLib, WakingApp, ZapWorks
* Support via add-on/plugin
Table 2. Specifying “Mergin’ Mode.”
Specification | Technical Description
Import xyz | Importing three-dimensional (x, y, z) coordinate values, e.g., from a CSV file extracted from a typical Digital Elevation Model (DTM/DSM). In practice, this is implemented by transforming the design dimensions (pixel values) into a specified coordinate reference system.
CRS Support | Transforming the coordinates of a model to a known coordinate reference system.
Spatial reference | Assigning coordinates of a known CRS to a model.
Geometries Support | Connecting xyz points with lines based on known geometries (e.g., plane geometry).
Scaling | Adjusting the size of a model according to the measurement units of the georeferenced model of the area.
Web App | Functioning over the World Wide Web.
Web Services | Exchanging data with data sources and end users via HTTP requests.
Serving Level of Detail | Adjusting the quality of served 3D models according to the end-user device and network capacity.
Open Standards | Utilizing W3C and OGC web services.
Animation | Loading animated 3D models.
Motion | Defining motion paths for animated 3D models.
Interactive motion | Specifying on-the-fly rules of motion.
Vector layers | Supporting thematic layers used to specify 3D model placement or motion paths.
Overlaying | Supporting the superimposition of multiple thematic layers.
Events triggering rules | Defining topology rules for triggering events; in practice, activating motions of 3D models based on the end-user location (see the sketch after this table).
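To make the specifications above concrete, the following minimal sketch illustrates how the “Import xyz”, “CRS Support”, and “Events triggering rules” entries might be realized in the browser. It assumes three.js and proj4js as the underlying JavaScript APIs; the CSV layout, the EPSG:3857 scene CRS, the point-of-interest coordinates, and the trigger radius are illustrative assumptions, not part of the Mergin’ Mode code base.

```javascript
import * as THREE from 'three';
import proj4 from 'proj4';

const scene = new THREE.Scene();

// "Import xyz": parse a CSV of x,y,z values (e.g., extracted from a DEM/DSM)
// into a three.js point cloud. The "x,y,z per line" layout is an assumed
// format, not a documented Mergin' Mode convention.
function loadXyzCsv(csvText) {
  const positions = [];
  for (const line of csvText.trim().split('\n')) {
    const [x, y, z] = line.split(',').map(Number);
    positions.push(x, y, z);
  }
  const geometry = new THREE.BufferGeometry();
  geometry.setAttribute('position', new THREE.Float32BufferAttribute(positions, 3));
  return new THREE.Points(geometry, new THREE.PointsMaterial({ size: 0.5 }));
}

// "CRS Support": reproject WGS84 lon/lat to the scene's projected CRS.
// proj4js ships with EPSG:4326 and EPSG:3857 definitions built in; a local
// grid (e.g., the Greek Grid) would need a proj4.defs() registration first.
function toSceneCoords(lonLat) {
  return proj4('EPSG:4326', 'EPSG:3857', lonLat); // [x, y] in metres
}

// "Events triggering rules": activate a motion when the visitor enters a
// zone around a point of interest. Radius and coordinates are illustrative.
const TRIGGER_RADIUS_M = 25;
const poi = toSceneCoords([23.35, 40.63]); // hypothetical monument location
let action = null; // a THREE.AnimationAction prepared when the animated model loads

function onUserLocation(lon, lat) {
  const [ux, uy] = toSceneCoords([lon, lat]);
  if (action && Math.hypot(ux - poi[0], uy - poi[1]) <= TRIGGER_RADIUS_M) {
    action.play();
  }
}

// Wire the trigger to the device's location service and load the terrain.
navigator.geolocation.watchPosition(p =>
  onUserLocation(p.coords.longitude, p.coords.latitude)
);
fetch('./terrain.csv').then(r => r.text()).then(t => scene.add(loadXyzCsv(t)));
```

In the deployed system, the same trigger logic would naturally be driven by the OGC location services the platform relies on, rather than by the raw browser geolocation API used here for brevity.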