Article

Multi-Purpose Ontology-Based Visualization of Spatio-Temporal Data: A Case Study on Silk Heritage

by Javier Sevilla, Pablo Casanova-Salas, Sergio Casas-Yrurzum * and Cristina Portalés
Institute of Robotics and Information and Communication Technologies (IRTIC), Universitat de València, 46980 València, Spain
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(4), 1636; https://doi.org/10.3390/app11041636
Submission received: 28 December 2020 / Revised: 7 February 2021 / Accepted: 8 February 2021 / Published: 11 February 2021
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract:
Due to the increasing use of data analytics, information visualization is becoming more and more important. However, as data become more complex, so does visualization, often leading to ad hoc and cumbersome solutions. A recent alternative is the use of so-called knowledge-assisted visualization tools. In this paper, we present STMaps (Spatio-Temporal Maps), a multipurpose, knowledge-assisted, ontology-based visualization tool for spatio-temporal data. STMaps was originally designed to show, by means of an interactive map, the content of the SILKNOW project, a European research project on silk heritage. It is entirely based on ontology support, as it retrieves the source data from one ontology and uses another ontology to define how the data should be visualized. STMaps provides some unique features. First, it is a multi-platform application: it can work embedded in an HTML page and also as a standalone application on several computer architectures. Second, it can be used for multiple purposes by simply changing its configuration files and/or the ontologies on which it works. As STMaps relies on visualizing spatio-temporal data provided by an ontology, the tool can be used to visualize the results of any domain (in other cultural and non-cultural contexts), provided that its datasets contain spatio-temporal information. The visualization mechanisms can also be modified by changing the visualization ontology. Third, it provides different solutions to show spatio-temporal data and also deals with uncertain and missing information. STMaps has been tested by browsing silk-related objects, discovering some interesting relationships between different objects and showing the versatility and power of the different visualization tools proposed in this paper. To the best of our knowledge, this is also the first ontology-based visualization tool applied to silk-related heritage.

1. Introduction

The history of Europe is woven in silk. This material has shaped the way our society is today. However, silk textiles are a seriously endangered heritage because of their fragility. In addition, several other related intangible cultural heritage assets, such as silk weaving techniques, are also at risk, since traditional looms and craftsmen/craftswomen have almost disappeared. For this reason, the European Commission has funded the SILKNOW project [1] under the H2020 framework program. The aim of this project, in which the authors participate, is to produce a computational system that helps to understand and preserve silk cultural heritage.
Understanding our cultural heritage (CH) is crucial to appreciate it and protect it. This principle was set up by Tilden in 1957: “Through interpretation, understanding; through understanding, appreciation; through appreciation, protection” [2]. Indeed, this principle is still valid today, since without a proper understanding of our past, it is impossible to shape a better future.
One of the tasks needed to understand our heritage is collecting and interpreting data about it. To analyze these data, it is crucial to visualize them, and to do so properly [3,4,5]. Currently, massive amounts of information are generated and processed every day. Some of these data could be harnessed and may even lead to scientific discoveries or to the confirmation of scientific concepts/theories. However, first understanding the proper mechanisms to visualize this large amount of information is sometimes as important as the analysis itself, since these scientific discoveries often arise after observing patterns and relationships in the analyzed data through visualization, as in [6].
In CH, and in many other areas as well, space and time represent two of the most valuable dimensions. Spatio-temporal datasets, describing the evolution of information with respect to both space and time, are of the utmost importance in scientific analysis. However, visualizing data in space and time, or spacetime (following a more accurate physical depiction of these two variables, which are, at a fundamental level, inseparable according to Einstein's theory of relativity), usually requires a graphical representation of at least four dimensions, which is not easy to build on a two-dimensional computer screen or even in a three-dimensional world.
On the other hand, adding semantic information to the data helps to identify patterns and to harness the knowledge embedded within the data. One of the most common ways of doing this is by creating ontologies. An ontology in computer science “defines the basic terms and relations comprising the vocabulary of a topic area as well as the rules for combining terms and relations to define extensions to the vocabulary” [7]. Ontologies are used to represent complex models of information in which semantic content plays an important role, and they can be used to infer hidden knowledge from data.
However, most visualization tools are standalone applications that handle small datasets and neither use ontologies nor are capable of visualizing data stored in an ontology. In this regard, the visualization of multidimensional data often leads to ad hoc solutions, in which each particular application offers a different approach for visualizing its data. This unstructured approach can be improved using ontologies, since ontologies can be employed both to store the data that need to be visualized and to define how to visualize them. This second use is especially important, as it provides a way to systematize visualization tasks. This systematization is quite complex when the number and type of datasets are unknown, but it can be handled if we know that data should be visualized in spatio-temporal terms, as is the case in most CH applications. This restriction makes the problem more tractable.
In this paper, we show the methodology developed in the scope of the SILKNOW project [8] for the spatio-temporal representation of silk heritage objects and their relationships, using a combination of map-based tools that retrieve information from a knowledge graph. In this way, the datasets that are visualized come from an ontology that is used to harness the semantic information within the database of the project. This ontology has multilingual web access and semantically enriches the digital data contained in the database. The database upon which this ontology is built collects data—mainly images and textual information—from a series of museum catalogue records and other small institutions related to silk heritage. In addition, another ontology is used to define how data should be visualized, allowing our visualization tool to work with different datasets or even with different source-data ontologies designed for other CH applications or for non-CH-related problems. Our methodology includes a strategy to map objects with multiple locations and time spans, a quite common case in the CH domain. The proper visualization of these large datasets, including the relationships among objects, helps users to appreciate the rich silk-related cultural heritage and allows them to discover connections between silk textiles and places, by means of a spatio-temporal understanding of the historical evolution of silk textile designs and techniques, for instance.
The rest of the paper is structured as follows. Section 2 reviews related work about data visualization, focusing on spatio-temporal information and ontologies. Section 3 describes the technical details of the STMaps visualization tool. Section 4 shows the results of using the SILKNOW implementation of STMaps in the CH domain. Finally, Section 5 deals with the conclusions and outlines future works on this matter.

2. Related Work

2.1. Visualization of Spatio-Temporal Data

Information visualization (often shortened to InfoVis [9]) is a recent but important research area. Due to the increasing importance and use of data analytics in almost every scientific field, information visualization is necessary in a variety of applications. In a broad sense, InfoVis can be divided into five different steps, which form the visualization pipeline [10]:
  • Step 1. Data collection and transformation. First, we need to collect data from their sources. Data is sometimes unstructured. Therefore, it is necessary to transform it in order to structure it appropriately.
  • Step 2. Filtering. Then, it is usually necessary to filter the data, in order to remove noise and focus only on what is meaningful for our visualization purposes. This step, which reduces significantly the amount of information that we need to handle, can only be done if data are previously sufficiently structured.
  • Step 3. Mapping. In this step, we need to map the filtered data into geometric elements (points, lines, circles, squares, bars etc.) that can be graphically represented.
  • Step 4. Rendering. A rendering module is necessary in order to arrange all these geometric primitives and create a visual representation.
  • Step 5. User Interaction. The last module is a user interface, by which users can modify the visualization method/parameters and/or explore different datasets.
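As a toy illustration, the five steps above can be strung together in a minimal, self-contained sketch; all function names and the sample data are invented for this example and are not taken from any real InfoVis library:

```python
# Minimal sketch of the five-step InfoVis pipeline (illustrative names only).

def collect(raw):
    # Step 1: transform unstructured tuples into structured records.
    return [dict(zip(("place", "year", "value"), r)) for r in raw]

def filter_data(records, year_range):
    # Step 2: keep only records inside the time window of interest.
    lo, hi = year_range
    return [r for r in records if lo <= r["year"] <= hi]

def map_to_glyphs(records):
    # Step 3: map each record to a geometric element (here, a labelled point).
    return [{"shape": "point", "label": r["place"], "size": r["value"]} for r in records]

def render(glyphs):
    # Step 4: arrange the primitives into a textual "frame" (stand-in for real drawing).
    return "\n".join(f"{g['shape']}({g['label']}, size={g['size']})" for g in glyphs)

raw = [("Valencia", 1550, 12), ("Lyon", 1750, 30), ("Venice", 1450, 7)]
frame = render(map_to_glyphs(filter_data(collect(raw), (1500, 1800))))
# Step 5 (user interaction) would re-run the pipeline with new parameters,
# e.g. a different year_range chosen through the user interface.
```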
On many occasions, data are multifaceted, since they can be multidimensional, multimodal (from different sources), multivariate (with different attributes) or multimodel (coming from different models) [9]. This heterogeneity is often a problem for InfoVis, and most visualization approaches rarely address all these facets. Spatio-temporal visualization is a particular case of multifaceted data visualization, in which the dimensions of time and space are used to plot how one or more other variables vary in space and time. Although much research has been carried out in this area, these data are difficult to visualize when several dimensions, in addition to spacetime, are considered simultaneously [11]. This often leads to ad hoc strategies for the visualization of spatio-temporal data with multiple dimensions.
In order to formalize this, a series of classifications and strategies on spatio-temporal data have been proposed. These strategies and structuring levels define what can be queried for each level and what can be obtained from the system by performing queries. For instance, in [12], three main parts are identified in spatio-temporal data: where (location), when (time) and what (objects). The last item represents the independent variable(s) that need(s) to be plotted with respect to spacetime. This simple idea is also used in [13], offering a classification of visualization methods with respect to the typology of tasks.
Several InfoVis tools use this where + when + what approach. The main strategy is to apply this idea to a bi-dimensional or three-dimensional region. Several techniques, such as color maps, color clustering, heat maps, elevation maps, bubbles or labels are used to represent information [14,15,16,17,18,19].
Most of the previously shown strategies take place in the rendering step of the InfoVis pipeline. Of course, the rendering step is not the end of the visualization process. User interaction is also necessary for users to control and filter the information they want to see. Time information is commonly addressed in this manner, since a map often represents the situation at a particular time. One of the most widely used interactive strategies is the timeline, which allows controlling the time frame in which the spatial visualization occurs [20,21,22,23,24,25,26]. This strategy is frequently used in CH, since in this type of application it is often necessary to see different time periods and switch between them interactively. It is usually possible to see a broad time level and zoom down interactively to a concrete time period [27] or even to a precise month or day [13].
The use of moving elements and animations is also a good way to depict the flow of time. Several solutions have been proposed by the scientific community, such as spiral diagrams [28,29,30], river-flow diagrams [31,32], spacetime cubes [11,13,33] and some other less intuitive approaches [34]. For these purposes, it is important to identify the date at different time intervals. For instance, in a spiral or helix diagram, each revolution around the vertical axis represents a time period. Similarly, a layer in a spacetime cube represents a given time period. However, as the complexity of the visualization and the interaction increases, this becomes harder and harder. For instance, the work presented in [35] shows an immersive spacetime cube. The authors analyzed the usability and performance of this visualization in a series of tasks with young people, and they concluded that the system receives positive opinions but does not improve on the conventional desktop-based spacetime cube representation in terms of performance.
As technology advances and becomes more complex, so do data. Indeed, with increasingly complex information models, data tend to have many more attributes. For this reason, new ways of classifying and visualizing spatio-temporal data arise, such as the one proposed in [36]. In this work, Guo added the who dimension to Peuquet's proposal. This new dimension refers to objects, while the what dimension refers to the attributes of these objects. These four dimensions can be queried simultaneously or in combination, producing queries with four different levels of complexity. The more dimensions are combined, the more filtered the data are, and the easier they are to visualize. Conversely, the fewer dimensions are used, the more information is shown, reducing the understandability of the analysis.
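The effect of combining these four dimensions can be illustrated with a toy query over a small dataset; the records and field values below are hypothetical, invented only to show how each added dimension narrows the result:

```python
# Toy illustration of the where + when + what + who querying scheme.
records = [
    {"who": "obj1", "what": "damask",  "where": "Valencia", "when": 1600},
    {"who": "obj2", "what": "brocade", "where": "Lyon",     "when": 1750},
    {"who": "obj3", "what": "damask",  "where": "Lyon",     "when": 1650},
]

def query(records, **dims):
    # Each supplied dimension narrows the result: fewer dimensions mean more
    # items shown (harder to read); more dimensions mean fewer items (easier).
    return [r for r in records if all(r[k] == v for k, v in dims.items())]

damasks = query(records, what="damask")                       # one dimension fixed
lyon_damasks = query(records, what="damask", where="Lyon")    # two dimensions fixed
```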
Although this way of structuring spatio-temporal data for visualization purposes is useful, reality is often much more complex. For instance, the line between objects and attributes is not always clear, since sometimes objects can be at the same time objects and attributes. This could occur when an object is related to other objects and these latter objects become attributes of the former. In the case of CH, this happens frequently with manufactured objects, which can be crafted by using existing objects. If we need to show the spatio-temporal evolution of these objects, the problem gets increasingly complex. This is where ontologies and semantic information could provide useful insights.
In addition, in CH, it is not uncommon to have objects that are poorly or loosely classified in terms of spatio-temporal properties. Some geographic and temporal data could be missing, fuzzy, incomplete, duplicated or even wrong, since it is not always possible to be certain about the origin and age of CH-related items. Small- and medium-sized museums do not always have the required resources to properly document their objects. This poses an additional challenge for the visualization of spatio-temporal data. Indeed, showing items that may have duplicated or incomplete spatio-temporal properties requires special strategies to effectively communicate these situations to the user and to show the relationship between the duplicated or incomplete information. Although this problem has been previously studied [37,38,39], very little attention has been paid to visualizing this type of CH-related information [40]. Although visualizing the uncertainty in data may significantly increase the complexity of the visual representation, its omission can also create obvious problems. For this reason, a trade-off between visual complexity and visual accuracy should be found [41]. This trade-off depends on the objectives and the audience of the visualization tool. For instance, visualization for edutainment could hide this complexity, whereas professional/scientific tools should take the risk of visualizing this uncertainty. In this paper, we present a strategy to represent this uncertainty by mapping the objects in their multiple locations and time spans, while indicating this uncertainty through the symbology. We follow the guidelines proposed in [40,41].

2.2. Ontology Visualization and Ontology-Based Visualization

Ontologies are used to represent knowledge in complex models and to infer information from them, because they provide high semantic content to the data representation. Since the structure of an ontology can be quite complex, many of the works dealing with ontology visualization focus on representing the structure and content of ontologies by means of graphs, trees, Euler diagrams and similar representations [42,43,44,45,46]. Tools such as OntoViz [47], OWLGrEd [48] or OWLViz [49] can be used to navigate through the ontology and visualize its classes, attributes and rules.
Nevertheless, ontologies can also play a different and possibly much more important role in the visualization of data. They can be used to define how to visualize data, as proposed in [50,51]. This research field is called Ontology-based Visualization (OBV), Semantic Visualization (SV) or Knowledge-assisted Visualization (KAV), and the tools that rely on this technology are often called Knowledge-assisted Visualization Tools (KAVT or KVT). In order to distinguish between the different ontologies involved in the process, they receive different names. The ontology used to store the data of a specific domain is called the Domain Ontology (DO). It stores the data that are necessary to be visualized. The ontology used to capture the semantics of a visual representation and store the mapping between each concept and the corresponding visual elements is called the Visual Representation Ontology (VRO), sometimes also referred to as Visualization Ontology. In some occasions, it is also possible to have a third ontology that specifies the relationship between the concepts of the DO and those in the VRO. This is called the Semantic Bridge Ontology (SBO) [52].
One of the earliest works in the KAV field is the work of Voigt et al. [53]. They designed an algorithm for suggesting visualizations for semantic data. A similar approach is presented in [54], where the authors investigated strategies to recommend visualizations considering different user preferences and proposed VizRec, a visual recommender tool.
The research team involved in [53] is also responsible for the creation of the Visualization Ontology (VISO) [55]. This is a generic and reusable ontology that formalizes knowledge from the visualization domain. This formalization allows making this visualization-related knowledge usable in different contexts. VISO acts as a VRO. The use of VISO alongside the RDFS/OWL Visualization Language (RVL) [56] defines how to visualize the information of another ontology (a DO) and how to interact with it using a user-defined graphical environment. VISO is a generic approach; however, it presents some limitations. First, it is designed primarily for bi-dimensional data and hardly delves into three-dimensional visualization elements. In addition, the use of RVL is cumbersome, as it requires tagging the DO in order for VISO to be able to map data into visualization elements. Although the solution is elegant and generic, it is not very practical, as it requires modifying the DO.
A different approach is taken in [57]. In this work, the authors propose VUMO (a Visualization-oriented Urban Mobility Ontology). VUMO is oriented towards visualizing data from the urban mobility domain. It is based on concepts from this specific domain, but it also uses the visualization concept class. This class represents many basic geometric objects, from abstract conceptualization to concrete visualization aspects. Finally, it is also possible to use an ontology for both data exploitation and visualization in a particular context, as done in [27,57]. As can be seen, the role of ontologies in InfoVis is promising, despite being a recent research area.
Ontologies have also been shown to be very convenient for representing geographical data [58,59,60]. Since geographic knowledge is of the utmost interest for CH applications, the use of ontology-based spatio-temporal geographic data would potentially ensure interoperability between applications, allowing the sharing of information. The use of ontologies in CH is also quite common [61,62,63]. Most of the applications use ontologies based on the CIDOC Conceptual Reference Model (CRM) [64], created by the International Committee for Documentation (CIDOC) of the International Council of Museums (ICOM). Its use is so common that in 2014 it became an ISO standard (ISO 21127:2014).
However, there are very few works dealing with the spatio-temporal visualization of CH information based on ontologies. In addition, to the best of our knowledge, ontology-based visualization has not been applied to silk-related heritage, although there are a handful of works that use ontologies for silk road heritage [65,66]. For this reason, we believe the work described in this paper is meaningful and novel.

3. STMaps Visualization Tool

The main (and original) goal of the visualization system proposed in this paper is to show the information gathered by the SILKNOW project by means of spatio-temporal maps. For this reason, we call it STMaps. Nevertheless, it is important to emphasize that we have designed the STMaps application to be multi-purpose and able to work in different contexts, so it is not restricted to use within the context of the SILKNOW project, nor is it limited to CH-related content. To do so, we have based the system on ontologies, both for storing the data and for visualization purposes.
SILKNOW is a multidisciplinary project aimed at preserving and promoting the heritage of silk textiles. For this reason, digital data about silk textiles from the databases of several institutions (museums, manufacturers, associations etc.) have been processed and analyzed with Artificial Intelligence (AI) techniques [67]. These data have been incorporated into the SILKNOW’s knowledge graph in order to allow queries from a public web site and also to virtually weave the textiles with a software called Virtual Loom [68,69]. The results of these queries are presented in standard web format, but it is also possible to create a visual representation of these data in the form of an interactive map. This component is the STMaps tool. STMaps receives datasets from the SILKNOW’s knowledge graph (or any other knowledge graph from another DO) and shows them on a map. The data should have spatio-temporal information and be previously filtered, as is done by SILKNOW’s search tool—called ADASilk [70]—so that the items visualized correspond to the interest of the user.

3.1. SILKNOW’s Data and Ontology

SILKNOW’s ontology is a DO based on the CIDOC CRM. The selected implementation is the Erlangen CRM/OWL [71], an RDFS/OWL ontology. SILKNOW’s knowledge graph stores records of silk heritage mainly from the 15th to the 19th century. The records are mostly limited to a European geographical scope. However, there are some non-European records in the database and also data providers from the Americas.
SILKNOW’s knowledge graph currently stores more than 38,000 instances of the E22 Man-Made Object class [72] (CIDOC model) and more than 67,000 images, and it is still growing. The main properties stored for each item are:
- Production place;
- Production time;
- Type of object;
- Material (silk, wool, cotton, gold, chenille etc.);
- Weaving technique (brocatelle, damask, brocade, lampas, espolín etc.);
- Depiction;
- Museum that provided the record.
All but the last two properties (‘depiction’ and ‘museum’, which are used for filtering purposes only) are the ones that we want to show in a spatio-temporal representation. The knowledge graph also stores images of the items (when available) and some other textual information, such as a description of the object. The overview of SILKNOW’s ontology is shown in Figure 1. As can be seen, there are many textile technique classes, which extend the E22 Man-Made Object class from the CIDOC model, and the Weaving class, which extends the E12 Production class [73] from the CIDOC model.
The granularity of the spatial data provided by museums and organizations is diverse. Some records have a very specific location, whereas others indicate only the first-order administrative division (country) or the second-order division (region/province/state). For this reason, we use the GeoNames database [74]. This free API and database contains more than 25 million geographic names, organized into 9 categories and 645 subcategories. GeoNames provides integration for its use in ontologies and semantic webs. Each place in GeoNames is associated with a unique Uniform Resource Identifier (URI). This URI provides access to all the information of the geographic feature, which allows identifying places that have more than one name and dealing with different spatial granularities.
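As a sketch of how a GeoNames URI can collapse naming variants of the same place, the snippet below maps several names to one identifier; the geonameId and the name variants are used for illustration only, and the URI pattern follows GeoNames' `sws.geonames.org` convention:

```python
# Illustrative mapping from place-name variants to a single GeoNames URI.
# The geonameId below is a placeholder for this example, not a verified value.
GEONAMES_URI = "http://sws.geonames.org/{geoname_id}/"

name_variants = {
    "València": 2509954,
    "Valencia": 2509954,
    "Valence (Espagne)": 2509954,
}

def to_uri(place_name):
    # Different names for the same place collapse to the same URI,
    # which is what lets the tool reconcile heterogeneous museum records.
    return GEONAMES_URI.format(geoname_id=name_variants[place_name])
```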
A similar problem occurs with the granularity of time data. Some items in SILKNOW are labeled as belonging to a particular century, whereas other records provide more specific periods. For this reason, an approach similar to the one used for spatial data is adopted.

3.2. Multipurpose Ontology-Based Spatio-Temporal Visualization

STMaps can be considered a spatio-temporal KAVT. Although the original purpose of the application is to show an interactive map of the information stored in SILKNOW’s knowledge graph, STMaps is capable of showing data from different domain ontologies. To accomplish this, it uses a VRO. STMaps is currently ready to work with VISO, although it could potentially work with other visualization ontologies.
STMaps is a visual tool implemented in Unity (Unity 2018.4.19 was used to develop the tool). The use of Unity allows developing a cross-platform application with state-of-the-art graphics. There are two main ways in which the STMaps tool can be used: the first one is by running a stand-alone application and the second one is by embedding a WebGL plugin into an HTML web page. In the case of a stand-alone application, no platform-dependent strategies have been implemented. Therefore, STMaps works on Windows, Linux and Mac OS. In the case of the HTML implementation, the web browser should be able to support WebGL 2.0 in order to run STMaps.
In both cases (stand-alone or web application), STMaps needs to get the source data from a DO, and in both cases, the same configuration file is used: STMaps.json. This is the default content of STMaps.json:
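A minimal sketch of such a configuration file, using angle-bracket placeholders rather than the project's actual default values:

```json
{
  "DataOntology": "<URL_of_the_domain_ontology>",
  "VisualizationOntology": "<URL_of_the_visualization_ontology>",
  "StyleSheet": "<stylesheet_file>",
  "DataFile": "<data_file>"
}
```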
This file contains the URL of the domain ontology (DataOntology field) where the source data are located, the URL of the visualization ontology (VisualizationOntology field) which defaults to VISO although our tool could potentially support other VROs, a stylesheet file (StyleSheet field) and a data file (DataFile field). The stylesheet defines how data are visualized, whereas the data file specifies the datasets that have been filtered from the DO. As only a fraction of the DO is required to be visualized, this data file defines which datasets we want to visualize.
The data file simply contains an array of JSON objects of Point type. Each point is defined by:
- The URI of the object (in the DO).
- The class the object belongs to (it refers to a particular class defined in the stylesheet file).
- The spatio-temporal properties of the object (as defined in the stylesheet file).
- A link to an image, if available.
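An illustrative data file with a single point might look as follows; all property names and values here are hypothetical, since the actual names are fixed by the stylesheet file:

```json
[
  {
    "uri": "<URI_of_the_object_in_the_DO>",
    "class": "<class_name_from_the_stylesheet>",
    "lat": "39.47",
    "long": "-0.38",
    "from": "1600",
    "to": "1700",
    "image": "<URL_of_an_image_if_available>"
  }
]
```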
Since STMaps uses Unity and gathers the filtered data from the DO by means of this data file, the tool is fully portable and can be used both in stand-alone or web-based modes, as long as the filtering process can be streamed to the data file, something that is not difficult to achieve.
The stylesheet file defines two basic concepts: scenes and classes. A scene is a particular set of rules for visualizing spatio-temporal data. A class defines how each object is visualized for each of the scenes defined in each instance of STMaps. This includes defining which of their properties will be visualized and how they will be represented. An object can be visualized differently in different scenes. Therefore, multiple scene configurations can be defined per class. This file, thus, contains a list of scenes and a list of classes, each of them containing one or more scene configurations. The format of this file is shown next:
```json
"SceneList": {
  "Scene": {
    "name": "<name>",
    "based": "<URI_DynamicMap>|<URI_TimeMap>",
    "mapData": {
      "dataSource": "<data_source_key>",
      "zoomLevels": "<zoom_levels_on_map>",
      "views": "2D|3D|All",
      "clusters": {
        "fromLevel": "<level_from_the_clusters_will_be_shown>",
        "toLevel": "<level_to_the_clusters_will_be_shown>",
        "fromDataLevel": "<level_from_the_point_markers_will_be_shown>",
        "numQuads": "<num_of_quads_for_clustering_on_the_top_level>"
      },
      "timeIntervals": "2|3|4|5"
    }
  }
}
"ClassesList": {
  "Class": {
    "URI": "<URI>",
    "SceneConfiguration": {
      "sceneName": "<name_of_the_scene>",
      "pointRepresentation": "<URL_File>|Sphere|Cube|Cylinder",
      "pointColor": "<color>",
      "clusterRepresentation": "<URL_File>|Sphere|Cube|Cylinder",
      "spatialDataProperties": {
        "long": "<longitude_name_on_DataFile>",
        "lat": "<latitude_name_on_DataFile>"
      },
      "timeDataProperties": {
        "from": "<from_name_on_DataFile>",
        "to": "<to_name_on_DataFile>"
      },
      "nameProperty": "<name_on_DataFile>",
      "relatedToProperty": "<relation_name_on_DataFile>",
      "relation": {
        "based": "<URI_Direct_Linking>|<URI_Relation_Ring>",
        "color": "<color>"
      }
    }
  }
}
```
A key aspect of STMaps is that the stylesheet file makes it easy to change the way data are visualized, and it also provides the link to VISO. Although VISO is meant to be linked to RVL, STMaps does not use RVL. The use of RVL in an ontology-based visualization system implies modifying the DO in order to incorporate the attributes RVL needs to perform the mapping. As this solution is invasive and cumbersome, we have taken a different approach: we use the VISO ontology to define the ways in which the elements can be visualized, but we map these visualization-related semantic data to the items to be visualized by means of the stylesheet file. With this file, we provide the information STMaps needs to link each visual element defined in VISO with the corresponding property in the datasets. An important assumption here is that there should be spatio-temporal data; therefore, space and time should be provided for each item.
In addition, we have extended the VISO ontology to make it more suitable for the goals of STMaps. VISO is a convenient way to formalize any visualization system, but it was created to represent how to visualize data in a device-independent way. This is a great idea, but it has problems. One problem is related to this very device independence, which precludes a concrete specification of how to visualize the data. For instance, very important aspects such as position, height, width, scale or resolution are not included in the formalization. We propose the definition of specific stylesheets to define those and other specific details. Each specific visualization uses its stylesheet (fromDataLevel property) in order to adapt to the device. In addition, the VISO ontology is highly oriented towards 2D visualization. Hence, it is necessary to extend the current classes in order to formalize 3D graphic visualization. Table 1 lists the main extensions added to VISO to manage 3D graphic visualization and other concepts needed for STMaps.
STMaps uses WGS84 (World Geodetic System 1984) coordinates to create a map onto which the objects of the DO are represented. These coordinates are interpreted and mapped onto a texture representing a world map in Unity. The spatial data received by STMaps are used to narrow down this map: if there are no data to represent, STMaps displays a world map; if there are data, the map is zoomed in so that only the geographical area covering the data is shown. From the section of the world map defined by the spatial coordinates of the objects of the DO, STMaps creates a quadtree-based representation. This allows dividing the map into clusters and grouping the data according to the zoom level (as in [76,77,78]). For instance, if several items are placed in the same cluster, an icon depicting the number of items is shown, instead of all the items crowding a very small area, which would be confusing. To see each individual item, the user needs to zoom in, splitting these clusters into items or further sub-clusters. This process is performed up to a configurable depth (defined in the stylesheet file in the fromLevel and toLevel properties). This clustering is essential to keep the map readable without losing information. Figure 2 shows an example of clustering.
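The clustering idea can be sketched as follows, assuming an equirectangular mapping of WGS84 coordinates onto the world texture; this is an illustrative simplification, not the actual Unity implementation:

```python
# Sketch of quadtree clustering: WGS84 (lat, lon) pairs are mapped onto
# normalized equirectangular texture coordinates, then grouped by the
# quadtree cell they fall into at the current zoom depth.
# (Illustrative only; not the actual STMaps implementation.)
from collections import defaultdict

def to_texture(lat, lon):
    """Map WGS84 (lat, lon) to normalized [0,1] x [0,1] texture coords."""
    return (lon + 180.0) / 360.0, (lat + 90.0) / 180.0

def quadtree_cell(lat, lon, depth):
    """Index of the quadtree cell containing the point at a given depth."""
    u, v = to_texture(lat, lon)
    n = 2 ** depth                      # cells per axis at this depth
    return min(int(u * n), n - 1), min(int(v * n), n - 1)

def cluster(items, depth):
    """Group (lat, lon) items into quadtree cells; a cell holding several
    items would be drawn as a single icon with a count."""
    cells = defaultdict(list)
    for lat, lon in items:
        cells[quadtree_cell(lat, lon, depth)].append((lat, lon))
    return cells
```

For example, at a coarse depth two nearby Venetian items fall into the same cell (one icon with a count of 2), while a Parisian item occupies its own cell; zooming in (increasing the depth) eventually separates them.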
The map can be viewed both in 2D and 3D. In both cases, icons are placed to represent the objects (or groups of objects within a cluster) located at each particular place. These icons are defined in the stylesheet file, in the <URL_File> field. The 3D view is particularly convenient for showing the spatial relationship between the objects, as distant objects (icons) appear smaller and farther away; this does not occur in the 2D representation.
Finally, time evolution is shown in a particular fashion. Although the ‘time’ property of the objects can be shown with any ontology-defined visualization mechanism, time can also be shown dynamically with two specific visualization tools implemented in STMaps. The first one is an interactive timeline tool that lets the user see how the distribution of objects on the map evolves through time. By moving the timeline, the user can see how the icons appear and disappear in the area chosen with the pan and zoom tools. The second time-specific visualization is a layered mechanism by which STMaps shows several maps (of the same geographical area) simultaneously, each corresponding to a different period of time. This way, the user can quickly see how the objects are distributed through time. This stacked-map visualization is interactive: the user can choose the number of layers (one layer per period) and navigate through them by clicking a particular period of time. When clicked, the selected period appears in the foreground, and the adjacent periods remain visible in the background. Figure 3 and Figure 4 show these two time-related visualization mechanisms.
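A minimal sketch of these two time mechanisms, under the assumption that each object carries a (start, end) year span (illustrative, not the actual implementation):

```python
# Sketch of the two time visualizations: the timeline shows only the
# items whose time span contains the selected year, while the layered
# view builds one map layer per century. (Illustrative assumption:
# each item is a dict with integer "start" and "end" years.)
def visible_at(items, year):
    """Items whose (start, end) span contains the timeline year."""
    return [it for it in items if it["start"] <= year <= it["end"]]

def layers_by_century(items):
    """One layer per century; an item spanning two centuries appears
    in both layers (cf. the uncertain dating discussed in Section 4)."""
    layers = {}
    for it in items:
        for c in range(it["start"] // 100, it["end"] // 100 + 1):
            layers.setdefault(c + 1, []).append(it)   # c+1: century number
    return layers
```

For example, an object dated 1685–1710 is visible on the timeline at year 1700 and appears in both the 17th- and 18th-century layers.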

3.3. Representation of CH Spatio-Temporal Data. The SILKNOW Case

In Section 3.2, the general multi-purpose features of STMaps were explained. In this section, we describe the SILKNOW implementation of STMaps, where we focus on CH. In the particular case of the silk-heritage data stored for the SILKNOW project, we need to visualize information about silk-related objects. These objects and their properties are received from the SILKNOW’s knowledge graph and are filtered by a search engine, so that the map only shows the objects that the user is interested in. In particular, we are interested in showing the following properties of these objects:
- Production place;
- Production time;
- Type of object;
- Material;
- Weaving technique.
The first two properties represent the spatio-temporal coordinates of the object. Production place is obviously used to place an icon on the map showing that an object was produced there. The remaining four properties (production time included) are mapped, by means of the extension we have created to the VISO ontology, into a ring-based representation (Relation Ring in Table 1) showing how each object relates, with respect to these four properties, to the other objects shown on the map (see Figure 5). Each object shown on the map has this ring-based representation, and its ring is divided into as many properties as desired. The properties to be shown and their colors are fully configurable. Therefore, by changing the configuration (stylesheet) file, we can change the visual representation, adding or removing elements to the ring and other visualization elements. For this reason, this ring-based representation can be applied outside the SILKNOW context and used with other cultural and non-cultural datasets.
As previously said, in the case of SILKNOW, four properties (time, type of object, material and weaving technique) are chosen and specified in the configuration file. Therefore, the ring is divided into four parts (see Figure 5). Each section of the ring is filled with two colors (one darker and one lighter) of the same hue. The proportion of dark color with respect to light color represents a percentage, also shown as a number, indicating the ratio of objects shown on the map (only those visualized, not the whole database) that share that same property with this object. For instance, let us imagine that a particular object of the database was produced in Florence (Firenze), Italy. STMaps will show an icon on Florence, encircled by a ring divided into four sections: one orange, one red, one green and one blue. Each colored section represents a percentage. The dark orange area over the light orange area represents the percentage of objects on the map that share the same weaving technique with the object in Florence; the dark red area, the percentage of objects that share the same material; the dark green area, the percentage sharing the same type; and, finally, the blue section, the percentage sharing the same production time (with a granularity of one century). Figure 5 shows the ring-based representation.
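The percentage shown in each ring section can be sketched as follows; the four property names follow the SILKNOW configuration, but the function itself is an illustrative assumption, not the actual implementation:

```python
# Sketch of the ring-section percentages: for each configured property,
# the dark portion of a section is the share of currently visible
# objects (excluding the object itself) with the same property value.
# An empty (None) property matches nothing, yielding 0%.
def ring_percentages(obj, visible,
                     properties=("time", "type", "material", "technique")):
    others = [o for o in visible if o is not obj]
    if not others:
        return {p: 0 for p in properties}
    return {
        p: round(100 * sum(1 for o in others
                           if obj.get(p) is not None and o.get(p) == obj.get(p))
                 / len(others))
        for p in properties
    }
```

Note that only the objects currently shown on the map are counted, not the whole database, mirroring the behavior described above.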
The objects on the map can also be interactively explored. For instance, each object can be clicked to see a textual depiction of the ring-based information explained earlier. Once an object is clicked, in addition to seeing its relationship with the other objects on the map, we can gather more information about it by clicking a plus sign. A new window is then shown depicting all the data about this object: the description, the identification number, the type (and subtype, if available), all the techniques used, all the materials, and all possible places and production times (there may be more than one of each). A picture of the object is also shown when available. Figure 6 shows an example of this information window. The search tool (ADASilk) through which STMaps is used also provides a link (outside STMaps, although it could be included in it) to a web-based version of the aforementioned Virtual Loom application, so that 3D model-based depictions of the silk-made objects can be interactively visualized.
In addition, the relationships between objects can also be shown with lines (using the N-Ary GraphicO2O Relation VISO class). Each item can be linked with a line to all the elements sharing each of the four properties defined for the SILKNOW visualization (time, type, material and technique). The lines have the same color as the corresponding section of the ring-based representation. Multiple objects can therefore be selected to show their relationships regarding different properties, allowing users to discover hidden patterns in the data. This view can be enabled and disabled for every property of every object, using the window shown in Figure 6 (“Show Relationships” check buttons). In addition, when a region is clustered because of an insufficient zoom level, only one line is drawn to the cluster, instead of one line per object in the cluster, providing a clearer picture. Figure 7 shows the line-based visualization.
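The one-line-per-cluster behavior can be sketched as follows (illustrative; `cluster_of` is a hypothetical helper mapping an object to its current cluster):

```python
# Sketch of the line-based relationship view: objects sharing a property
# value with the selected object get a line, but when several targets
# fall in the same cluster, only one line per cluster is drawn.
# (Illustrative; `cluster_of` is a hypothetical helper.)
def relation_lines(selected, visible, prop, cluster_of):
    """Return one line endpoint (cluster key) per cluster containing
    at least one object related to `selected` through `prop`."""
    targets = set()
    for o in visible:
        if o is selected:
            continue
        if o.get(prop) is not None and o.get(prop) == selected.get(prop):
            targets.add(cluster_of(o))
    return targets
```

Because the result is a set of cluster keys rather than of objects, two related objects in the same cluster contribute a single line.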
The timeline and the layered time visualization of STMaps are also used for SILKNOW’s maps. In the case of the timeline, the scroll allows the user to navigate through the different periods of time, as defined by the granularity of the filtered data. In the case of the layered time visualization, the user can choose as many layers as centuries spanned by the filtered data.
As previously commented, the objects are shown on the map with different icons, depending on the type of element they represent. In the SILKNOW implementation of STMaps, there are six icons corresponding to these six types of objects (see Figure 8):
- Furniture;
- Textiles/fabrics;
- Religious objects;
- Dresses;
- Drawings;
- Household objects.
We also need to deal with uncertain information, since not all the objects have well-defined information. If an object has one and only one category, one of the six previously defined icons is shown; if it has multiple categories, a special multi-category icon is shown; and if it has no category at all, a question-mark icon is shown. In addition, if the object has multiple locations, the icon is shown in a distinct color so that the user can quickly identify uncertain locations. As this icon system could be confusing, a legend can also be shown next to the map. Figure 8 shows the icons and their meanings.
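These icon-selection rules can be summarized in a short sketch; the icon names are hypothetical labels, not the actual asset names:

```python
# Sketch of the icon-selection rules for uncertain data: one icon per
# single well-defined category, a multi-category icon, a question-mark
# icon for no category, and a distinct tint flag for multiple locations.
# (Icon names are illustrative placeholders.)
def pick_icon(categories, locations):
    """Return (icon_name, uncertain_location_flag) for an object."""
    if len(categories) == 1:
        icon = categories[0]            # e.g. "textile", "dress", ...
    elif len(categories) > 1:
        icon = "multi-category"
    else:
        icon = "unknown"                # question-mark icon
    uncertain_location = len(locations) > 1
    return icon, uncertain_location
```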

4. Results and Discussion

As previously commented, STMaps is a multipurpose visualization tool. However, in order to show some results of the use of our tool in a real scenario, in this section, we provide different examples where we use the STMaps tool to explore data from SILKNOW’s knowledge graph, such as the relationships between a set of objects or their temporal evolution.
As explained above, in order to evidence the relationships among objects, we have implemented two visual strategies: lines and rings. Although the line-based representation lets us visualize the relationship between objects, the importance of the ring-based representation lies in its simplicity and power. With just a glance, we can clearly see whether one of the objects has a distinct set of properties with respect to the rest shown on the map, which allows us to quickly identify singular objects that would otherwise be much more difficult to locate. An example is depicted in Figure 9, where the reader can see the difference in the simplicity and clarity of the two representations. These objects are obtained after a query asking for objects related to “espolín”, which might refer to a weaving technique or a type of fabric. This query returns a total of 1900 results in the current version of ADASilk, which can be depicted in STMaps (Figure 9a). Whereas with the line-based representation the map is quickly filled with lines (notice that in Figure 9b we only show one of the four properties for a single object), the ring-based representation is much clearer (Figure 9c) and provides similar information. However, the ring-based representation requires zooming in on the object, whereas the line-based representation can cover greater areas. Therefore, both representations are complementary, and either can be used depending on the case under study.
On the other hand, the rings also make it easy to spot objects with incomplete data, as they appear as unrelated to the rest of the objects. An example is shown in Figure 10. Among the objects depicted in Figure 10a, some are not related to any other object of the query for some of their properties, and thus some ring sections appear entirely in the light color. By clicking on one of the objects (Figure 10b), it can be seen that the properties “technique” and “material” are unrelated (0%) to the rest of the objects, and by inspecting the details of the object (Figure 10c), it can be seen that these properties are empty; that is, the information is incomplete.
To show the relevance of correctly representing objects with multiple locations and/or time spans, Table 2 lists the total number of records currently included in ADASilk that have location (production place) and/or time (production time) information. For these records, we show how many have multiple locations and/or time spans, and the percentage they represent. As can be seen, 6.3% of the objects with location information have multiple locations (usually because their production place is uncertain), whereas 8.9% of the objects with time information have multiple time spans (usually because their dating falls into two different centuries). These types of uncertainty are not uncommon in CH.
This fact has a direct impact on the SILKNOW implementation of STMaps. It might happen that a user wants to visualize objects that belong to a certain region; however, if some of them have multiple locations, they will be shown in all of them, making users believe that there are more objects than there actually are. An example is depicted in Figure 11. The user wants to depict on the map the records related to velvet in Venice (Venezia), Italy. In the current version of ADASilk, a total of 214 records meet this criterion. When visualizing them in STMaps, the user sees 214 records in Venice and 6 records in Italy (Figure 11a). Zooming in on the six objects (Figure 11b), the icon indicates that they have multiple locations and, inspecting them in detail (Figure 11c), it can be seen that the location description includes both Italy and Venice. Venice is in Italy, and one might think that it would be better to show only the locations in Venice in STMaps. However, the domain experts’ advice is to show both locations, since in most cases it will be unclear whether an object belongs to, for instance, Venice or to another region of the same country, so the catalogue of the object needs to reflect both. That is why we prefer to map the two locations (with a special icon warning about data uncertainty) and let the user explore them on the map.
The objects in Figure 11 might also belong to different time spans. This can be checked by visualizing objects in different centuries, as shown in Figure 12. For instance, the sum of objects placed in Venice across the centuries is 233, whereas it would be 214 if none of them belonged to multiple time spans. Therefore, there might be 233 − 214 = 19 objects with multiple time spans, assuming each object falls in at most two centuries; this is the common case, e.g., an object dated from 1685 to 1710 will be mapped to the timelines of both the 17th and 18th centuries. The result shown in Figure 12 enables a domain expert to explore the temporal evolution of velvet in the region of Venice. As can be seen, most of the objects were produced during the 18th century.
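The arithmetic above can be sketched as follows (an illustrative computation, assuming each object spans at most two centuries):

```python
# Sketch of the overcount arithmetic behind Figure 12: summing the
# per-century counts overcounts objects spanning two centuries, so the
# number of multi-span objects is the per-century sum minus the record
# count, assuming each object falls in at most two centuries.
# (Illustrative; items are (start_year, end_year) tuples.)
def per_century_counts(items):
    """Count objects per century; an object spanning two centuries is
    counted in both, so the sum of counts can exceed len(items)."""
    counts = {}
    for start, end in items:
        for c in range(start // 100, end // 100 + 1):
            counts[c + 1] = counts.get(c + 1, 0) + 1
    return counts

def multi_span_estimate(counts, n_records):
    """Estimated number of objects with multiple time spans."""
    return sum(counts.values()) - n_records
```

With the Venice figures from the text, a per-century sum of 233 over 214 records gives an estimate of 19 multi-span objects.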
Finally, in order to show the meaningfulness of STMaps in the context of the features provided by other related interactive maps, we have compiled a small comparative table (Table 3) classifying other mapping solutions related to cultural heritage, history and/or museum collections.
We focus on the strategies implemented for the spatial and temporal representation of cultural heritage data, as well as other relevant features, such as the depiction of relationships, uncertainties, filtering, etc. Regarding spatial features, most solutions use interactive icons and clustering, as well as message boxes depicting text and images. Choropleth maps are also quite common, especially to show historical moments. On the other hand, not all the solutions take temporal scales into consideration, although the date of the objects is usually given as text (e.g., in message boxes). Regarding other relevant features, most of the maps show links to other pages or sidebar windows with detailed information about the objects. It is, however, uncommon to find maps where the relationships between the properties of objects are visualized, and uncertainties, when present, are generally not made evident.

5. Conclusions and Future Work

In this paper, we show the STMaps tool, a multipurpose ontology-based visualization tool of spatio-temporal data. The tool has been designed to show the content of a CH project through an interactive map. This content is stored in a CIDOC CRM ontology and represents the knowledge acquired through the SILKNOW project, a research project devoted to studying and promoting the rich silk-related heritage of Europe. Nevertheless, although the tool has been designed within the context of silk cultural heritage, it can be used in other cultural and non-cultural contexts.
STMaps provides some unique features. First, it is a multi-platform application. It can work embedded in an HTML page and can also work as a standalone application over several computer architectures, provided that data can be supplied to the application. The advantage of the web-based application is that it can be integrated within an ontology-based search web page. Thus, the web page can be used to filter the data quickly and interactively.
Second, it is entirely based on ontology support, as it gets the source data from an ontology (a DO) and also uses another ontology (a VRO) to define how data should be visualized. STMaps currently uses VISO (in fact, an extended version of VISO developed by the authors) as its VRO. In this regard, as the method relies on visualizing spatio-temporal data provided by an ontology using open-source software, the tool could be used to visualize the results of any project even if different datasets are used, provided that these datasets contain spatio-temporal information. In addition, our tool has multi-ontology support and can be utilized even with other types of domain ontologies.
Third, it provides different mechanisms to show spatio-temporal data, and also deals with uncertain and missing data. Timelines, time layers, line-based representations and ring-based representations are used in STMaps. These representations are configured through a JSON-based mechanism and can be changed. The mapping between the VRO and the properties to visualize can also be configured. Therefore, the tool can be set up to provide different visualization procedures.
Finally, the SILKNOW implementation of STMaps is a unique tool, since, to the best of our knowledge, it is the first KAVT that shows spatio-temporal data of silk-related CH objects.
The tool has been tested with SILKNOW’s ontology over different computer architectures, exhibiting the same behavior in all of them. We have deployed and tested the application in several platforms: Windows, Mac OS, Android and iOS, with the most popular browsers: Firefox, Chrome and Edge. We have also used it to browse silk-related objects, discovering some interesting relationships between different objects, showing the versatility and power of the different visualization tools provided in STMaps.
Currently, we are performing an extensive evaluation of this visualization tool and the search utilities (more information in [94]) as part of the SILKNOW project, which is expected to finish in late 2021. As future work, we will consider possible extensions or improvements based on the feedback obtained from this evaluation. We also plan to further extend VISO to add more features and/or introduce different visualization ontologies into STMaps, so that it is possible to use a VRO not based on VISO. Finally, we plan to further investigate the visualization of missing, incomplete or uncertain information, testing different alternatives and assessing them.

Author Contributions

Conceptualization, C.P. and S.C.-Y.; methodology, J.S., C.P. and S.C.-Y.; investigation of related work, S.C.-Y. and J.S.; software, P.C.-S. and J.S.; writing, S.C.-Y., J.S., C.P. and P.C.-S. All authors have read and agreed to the published version of the manuscript.

Funding

The research leading to these results is in the frame of the “SILKNOW. Silk heritage in the Knowledge Society: from punched cards to big data, deep learning and visual/tangible simulations” project, which has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No. 769504. Cristina Portalés is supported by the Spanish government postdoctoral grant Ramón y Cajal, under grant No. RYC2018-025009-I.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. These data can be found at: http://ada.silknow.org/.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. SILKNOW Project. SILKNOW, Weaving Our Past into the Future. Available online: https://silknow.eu/ (accessed on 23 December 2020).
  2. Tilden, F. Interpreting Our Heritage: Principles and Practices for Visitor Services in Parks, Museums, and Historic Places; University of North Carolina Press: Chapel Hill, NC, USA, 1957. [Google Scholar]
  3. Portalés, C.; Casas, S.; Vera, L.; Sevilla, J. Current Trends on the Acquisition, Virtual Representation, and Interaction of Cultural Heritage: Exploring Virtual and Augmented Reality and Serious Games. In Recent Advances in 3D Imaging, Modeling, and Reconstruction; IGI Global: Hershey, PA, USA, 2020; pp. 143–167. [Google Scholar]
  4. Rodríguez-Gonzálvez, P.; Muñoz-Nieto, A.L.; del Pozo, S.; Sanchez-Aparicio, L.J.; Gonzalez-Aguilera, D.; Micoli, L.; Barsanti, S.G.; Guidi, G.; Mills, J.; Fieber, K. 4D Reconstruction and Visualization of Cultural Heritage: Analyzing Our Legacy through Time. Int. Arch. Photogramm. Remote Sens. Spatial Infor. Sci. 2017, 42, 609. [Google Scholar] [CrossRef] [Green Version]
  5. Windhager, F.; Federico, P.; Schreder, G.; Glinka, K.; Dörk, M.; Miksch, S.; Mayr, E. Visualization of Cultural Heritage Collection Data: State of the Art and Future Challenges. IEEE Trans. Vis. Comput. Graph. 2018, 25, 2311–2330. [Google Scholar] [CrossRef]
  6. Sezgin, S.; Hassan, R.; Zühlke, S.; Kuepfer, L.; Hengstler, J.G.; Spiteller, M.; Ghallab, A. Spatio-Temporal Visualization of the Distribution of Acetaminophen as Well as Its Metabolites and Adducts in Mouse Livers by MALDI MSI. Arch. Toxicol. 2018, 92, 2963–2977. [Google Scholar] [CrossRef]
  7. Neches, R.; Fikes, R.E.; Finin, T.; Gruber, T.; Patil, R.; Senator, T.; Swartout, W.R. Enabling Technology for Knowledge Sharing. AI Mag. 1991, 12, 36. [Google Scholar]
  8. Portalés, C.; Sebastián, J.; Alba, E.; Sevilla, J.; Gaitán, M.; Ruiz, P.; Fernández, M. Interactive Tools for the Preservation, Dissemination and Study of Silk Heritage—An Introduction to the Silknow Project. Multimodal Technol. Interact. 2018, 2, 28. [Google Scholar] [CrossRef] [Green Version]
  9. Kehrer, J.; Hauser, H. Visualization and Visual Analysis of Multifaceted Scientific Data: A Survey. IEEE Trans. Vis. Comput. Graph. 2012, 19, 495–513. [Google Scholar] [CrossRef]
  10. Liu, S.; Cui, W.; Wu, Y.; Liu, M. A Survey on Information Visualization: Recent Advances and Challenges. Vis. Comput. 2014, 30, 1373–1393. [Google Scholar] [CrossRef]
  11. Bach, B.; Dragicevic, P.; Archambault, D.; Hurter, C.; Carpendale, S. A Review of Temporal Data Visualizations Based on Space-Time Cube Operations. In Proceedings of the Eurographics Conference on Visualization, Swansea, UK, 9–13 June 2014. [Google Scholar]
  12. Peuquet, D.J. It’s about Time: A Conceptual Framework for the Representation of Temporal Dynamics in Geographic Information Systems. Ann. Assoc. Am. Geogr. 1994, 84, 441–461. [Google Scholar] [CrossRef]
  13. Andrienko, N.; Andrienko, G.; Gatalsky, P. Exploratory Spatio-Temporal Visualization: An Analytical Review. J. Vis. Lang. Comput. 2003, 14, 503–541. [Google Scholar] [CrossRef]
  14. Ponjavic, M.; Karabegovic, A.; Ferhatbegovic, E.; Tahirovic, E.; Uzunovic, S.; Travar, M.; Pilav, A.; Mulic, M.; Karakas, S.; Avdic, N. Spatio-Temporal Data Visualization for Monitoring of Control Measures in the Prevention of the Spread of COVID-19 in Bosnia and Herzegovina. Med. Glas. (Zenica) 2020, 17, 265–274. [Google Scholar]
  15. Zhang, X.; Zhang, M.; Jiang, L.; Yue, P. An Interactive 4D Spatio-Temporal Visualization System for Hydrometeorological Data in Natural Disasters. Int. J. Dig. Earth 2019, 1–21. [Google Scholar] [CrossRef]
  16. Jänicke, S.; Heine, C.; Scheuermann, G. GeoTemCo: Comparative Visualization of Geospatial-Temporal Data with Clutter Removal Based on Dynamic Delaunay Triangulations. In Computer Vision, Imaging and Computer Graphics. Theory and Application; Springer: Berlin, Germany, 2013; pp. 160–175. [Google Scholar]
  17. Zhang, S.; Zhang, W.; Wang, Y.; Zhao, X.; Song, P.; Tian, G.; Mayer, A.L. Comparing Human Activity Density and Green Space Supply Using the Baidu Heat Map in Zhengzhou, China. Sustainability 2020, 12, 7075. [Google Scholar] [CrossRef]
  18. Ku, W.-Y.; Liaw, Y.-P.; Huang, J.-Y.; Nfor, O.N.; Hsu, S.-Y.; Ko, P.-C.; Lee, W.-C.; Chen, C.-J. An Online Atlas for Exploring Spatio-Temporal Patterns of Cancer Mortality (1972–2011) and Incidence (1995–2008) in Taiwan. Medicine 2016, 95, e3496. [Google Scholar] [CrossRef]
  19. Hengl, T.; Roudier, P.; Beaudette, D.; Pebesma, E. PlotKML: Scientific Visualization of Spatio-Temporal Data. J. Stat. Softw. 2015, 63, 1–25. [Google Scholar] [CrossRef] [Green Version]
  20. Di Bartolomeo, S.; Pandey, A.; Leventidis, A.; Saffo, D.; Syeda, U.H.; Carstensdottir, E.; Seif El-Nasr, M.; Borkin, M.A.; Dunne, C. Evaluating the Effect of Timeline Shape on Visualization Task Performance. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–12. [Google Scholar]
  21. Kraak, M.-J. Timelines, Temporal Resolution, Temporal Zoom and Time Geography. In Proceedings of the 22nd International Cartographic Conference, A Coruña, Spain, 9–16 July 2005. [Google Scholar]
  22. Lee, C.; Devillers, R.; Hoeber, O. Navigating Spatio-Temporal Data with Temporal Zoom and Pan in a Multi-Touch Environment. Int. J. Geogr. Infor. Sci. 2014, 28, 1128–1148. [Google Scholar] [CrossRef]
  23. Rodríguez-Gonzálvez, P.; Guerra Campo, Á.; Muñoz-Nieto, Á.L.; Sánchez-Aparicio, L.J.; González-Aguilera, D. Diachronic Reconstruction and Visualization of Lost Cultural Heritage Sites. ISPRS Int. J. Geo-Inf. 2019, 8, 61. [Google Scholar] [CrossRef] [Green Version]
  24. Chronas. Available online: https://chronas.org (accessed on 7 February 2021).
  25. Helsinki Ennen. Available online: https://helsinkiennen.fi (accessed on 7 February 2021).
  26. Jewish Cultures Mapped. Available online: http://www.jewish-cultures-mapped.org (accessed on 7 February 2021).
  27. Wang, C.; Ma, X.; Chen, J. Ontology-Driven Data Integration and Visualization for Exploring Regional Geologic Time and Paleontological Information. Comput. Geosci. 2018, 115, 12–19. [Google Scholar] [CrossRef]
  28. Yang, H.; Li, T.; Chen, X. Visualization of Time Series Data Based on Spiral Graph. J. Comput. Appl. 2017, 37, 2443–2448. [Google Scholar]
  29. Weber, M.; Alexa, M.; Müller, W. Visualizing Time-Series on Spirals. In Proceedings of the Infovis, San Diego, CA, USA, 22–23 October 2001; Volume 1, pp. 7–14. [Google Scholar]
  30. Hewagamage, K.P.; Hirakawa, M.; Ichikawa, T. Interactive Visualization of Spatiotemporal Patterns Using Spirals on a Geographical Map. In Proceedings of the 1999 IEEE Symposium on Visual Languages, Tokyo, Japan, 13–16 September 1999; pp. 296–303. [Google Scholar]
  31. Guo, H.; Wang, Z.; Yu, B.; Zhao, H.; Yuan, X. Tripvista: Triple Perspective Visual Trajectory Analytics and Its Application on Microscopic Traffic Data at a Road Intersection. In Proceedings of the 2011 IEEE Pacific Visualization Symposium, Hong Kong, China, 1–4 March 2011; pp. 163–170. [Google Scholar]
  32. Havre, S.; Hetzler, B.; Nowell, L. ThemeRiver: Visualizing Theme Changes over Time. In Proceedings of the IEEE Symposium on Information Visualization 2000 (InfoVis 2000), Salt Lake City, UT, USA, 9–10 October 2000; pp. 115–123. [Google Scholar]
  33. Bogucka, E.P.; Jahnke, M. Feasibility of the Space–Time Cube in Temporal Cultural Landscape Visualization. ISPRS Int. J. Geo-Inf. 2018, 7, 209. [Google Scholar] [CrossRef] [Green Version]
  34. Fang, Y.; Xu, H.; Jiang, J. A Survey of Time Series Data Visualization Research. IOP Conf. Ser. Mater. Sci. Eng. 2020, 782, 022013. [Google Scholar] [CrossRef] [Green Version]
  35. Wagner Filho, J.A.; Stuerzlinger, W.; Nedel, L. Evaluating an Immersive Space-Time Cube Geovisualization for Intuitive Trajectory Data Exploration. IEEE Trans. Vis. Comput. Graph. 2019, 26, 514–524. [Google Scholar] [CrossRef] [Green Version]
  36. Guo, D.; Du, Y. A Visualization Platform for Spatio-Temporal Data: A Data Intensive Computation Framework. In Proceedings of the 2015 23rd International Conference on Geoinformatics, Wuhan, China, 19–21 June 2015; pp. 1–6. [Google Scholar]
  37. Pebesma, E.J.; de Jong, K.; Briggs, D. Interactive Visualization of Uncertain Spatial and Spatio-temporal Data under Different Scenarios: An Air Quality Example. Int. J. Geogr. Infor. Sci. 2007, 21, 515–527. [Google Scholar] [CrossRef]
  38. Shrestha, A.; Zhu, Y.; Miller, B. Visualizing Uncertainty in Spatio-Temporal Data. In Proceedings of the ACM SIGKDD Workshop on Interactive Data Exploration and Analytics (IDEA), New York City, NY, USA, 24 August 2014; pp. 117–126. [Google Scholar]
  39. Gerharz, L.; Pebesma, E.; Hecking, H. Visualizing Uncertainty in Spatio-Temporal Data. Spat. Accuracy 2010, 2010, 169–172. [Google Scholar]
  40. Windhager, F.; Filipov, V.A.; Salisu, S.; Mayr, E. Visualizing Uncertainty in Cultural Heritage Collections. In Proceedings of the EuroVis Workshop on Reproducibility, Verification, and Validation in Visualization (EuroRV3), Brno, Czech Republic, 2–8 June 2018. [Google Scholar]
  41. Windhager, F.; Salisu, S.; Mayr, E. Exhibiting Uncertainty: Visualizing Data Quality Indicators for Cultural Collections. Informatics 2019, 6, 29. [Google Scholar] [CrossRef] [Green Version]
  42. Dudáš, M.; Lohmann, S.; Svátek, V.; Pavlov, D. Ontology Visualization Methods and Tools: A Survey of the State of the Art. Knowl. Eng. Rev. 2018, 33, 1–39. [Google Scholar] [CrossRef]
  43. Dudáš, M.; Zamazal, O.; Svátek, V. Roadmapping and Navigating in the Ontology Visualization Landscape. In Proceedings of the International Conference on Knowledge Engineering and Knowledge Management, Linköping, Sweden, 24–28 November 2014; pp. 137–152. [Google Scholar]
  44. Anikin, A.; Litovkin, D.; Kultsova, M.; Sarkisova, E.; Petrova, T. Ontology Visualization: Approaches and Software Tools for Visual Representation of Large Ontologies in Learning. In Proceedings of the Conference on Creativity in Intelligent Technologies and Data Science, Volgograd, Russia, 16–19 September 2019; pp. 133–149. [Google Scholar]
  45. Mikhailov, S.; Petrov, M.; Lantow, B. Ontology Visualization: A Systematic Literature Analysis. In Proceedings of the BIR Workshops, Prague, Czech Republic, 14–16 September 2016. [Google Scholar]
  46. Katifori, A.; Halatsis, C.; Lepouras, G.; Vassilakis, C.; Giannopoulou, E. Ontology Visualization Methods—a Survey. ACM Comput. Surv. (CSUR) 2007, 39, 10-es. [Google Scholar] [CrossRef] [Green Version]
  47. Sintek, M. OntoViz. 2007. Available online: http://protegewiki.stanford.edu/wiki/OntoViz (accessed on 28 December 2020).
  48. Liepinš, R.; Grasmanis, M.; Bojars, U. OWLGrEd Ontology Visualizer. In Proceedings of the 2014 International Conference on Developers. CEUR-WS. org, Riva del Garda-Trentino, Italy, 14–23 October 2014; Volume 1268, pp. 37–42. [Google Scholar]
  49. Horridge, M. OWLViz. 2010. Available online: http://protegewiki.stanford.edu/wiki/OWLViz (accessed on 28 December 2020).
  50. Falconer, S.M.; Bull, R.I.; Grammel, L.; Storey, M.-A. Creating Visualizations through Ontology Mapping. In Proceedings of the 2009 International Conference on Complex, Intelligent and Software Intensive Systems, Fukuoka, Japan, 16–19 March 2009; pp. 688–693.
  51. Nazemi, K.; Burkhardt, D.; Ginters, E.; Kohlhammer, J. Semantics Visualization—Definition, Approaches and Challenges. Procedia Comput. Sci. 2015, 75, 75–83.
  52. Miksch, S.; Leitte, H.; Chen, M. Knowledge-Assisted Visualization and Guidance. In Foundations of Data Visualization; Springer: Berlin, Germany, 2020; pp. 61–85.
  53. Voigt, M.; Franke, M.; Meissner, K. Using Expert and Empirical Knowledge for Context-Aware Recommendation of Visualization Components. Int. J. Adv. Life Sci. 2013, 5, 27–41.
  54. Mutlu, B.; Veas, E.; Trattner, C. VizRec: Recommending Personalized Visualizations. ACM Trans. Interact. Intell. Syst. 2016, 6, 1–39.
  55. Polowinski, J.; Voigt, M. VISO: A Shared, Formal Knowledge Base as a Foundation for Semi-Automatic InfoVis Systems. In CHI'13 Extended Abstracts on Human Factors in Computing Systems; Association for Computing Machinery: New York, NY, USA, 2013; pp. 1791–1796.
  56. Polowinski, J. Towards RVL: A Declarative Language for Visualizing RDFS/OWL Data. In Proceedings of the 3rd International Conference on Web Intelligence, Mining and Semantics, Madrid, Spain, 12–14 June 2013; pp. 1–11.
  57. Sobral, T.; Galvão, T.; Borges, J. An Ontology-Based Approach to Knowledge-Assisted Integration and Visualization of Urban Mobility Data. Expert Syst. Appl. 2020, 150, 113260.
  58. Kauppinen, T.; Deichstetter, C.; Hyvönen, E. Temp-o-Map: Ontology-Based Search and Visualization of Spatio-Temporal Maps. In Proceedings of the Demo Track at the European Semantic Web Conference (ESWC), Innsbruck, Austria, 3–7 June 2007; pp. 4–5.
  59. Potnis, A.; Durbha, S.S. Exploring Visualization of Geospatial Ontologies Using Cesium. In Proceedings of VOILA@ISWC, Kobe, Japan, 17 October 2016; pp. 143–150.
  60. Kauppinen, T.; Henriksson, R.; Väätäinen, J.; Deichstetter, C.; Hyvönen, E. Ontology-Based Modeling and Visualization of Cultural Spatio-Temporal Knowledge. In Proceedings of the 12th Finnish Artificial Intelligence Conference (STeP 2006), Espoo, Finland, 26–27 October 2006; p. 37.
  61. Haubt, R.A.; Taçon, P.S. A Collaborative, Ontological and Information Visualization Model Approach in a Centralized Rock Art Heritage Platform. J. Archaeol. Sci. Rep. 2016, 10, 837–846.
  62. Yaco, S.; Ramaprasad, A. Informatics for Cultural Heritage Instruction: An Ontological Framework. J. Doc. 2019.
  63. Damiano, R.; Lieto, A.; Lombardo, V. Ontology-Based Visualisation of Cultural Heritage. In Proceedings of the 2014 Eighth International Conference on Complex, Intelligent and Software Intensive Systems, Birmingham, UK, 2–4 July 2014; pp. 558–563.
  64. International Committee for Documentation of the International Council of Museums. CIDOC Conceptual Reference Model (CRM). Available online: http://cidoc-crm.org/ (accessed on 23 December 2020).
  65. Andaroodi, E.; Andres, F. Ontology-Based Semantic Representation of Silk Road’s Caravanserais: Conceptualization of Multifaceted Links. In Proceedings of the Joint International Semantic Technology Conference, Awaji, Japan, 26–28 November 2018; pp. 89–103.
  66. Andaroodi, E.; Andres, F.; Ono, K.; Lebigre, P. Developing a Visual Lexical Model for Semantic Management of Architectural Visual Data, Design of Spatial Ontology for Caravanserais of Silk Roads. J. Digit. Inf. Manag. 2004, 2, 151–160.
  67. Dorozynski, M.; Clermont, D.; Rottensteiner, F. Multi-Task Deep Learning with Incomplete Training Samples for the Image-Based Prediction of Variables Describing Silk Fabrics. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, IV-2/W6, 47–54.
  68. Gaitán, M.; Portalés, C.; Sevilla, J.; Alba, E. Applying Axial Symmetries to Historical Silk Fabrics: SILKNOW’s Virtual Loom. Symmetry 2020, 12, 742.
  69. Portalés, C.; Sevilla, J.; Pérez, M.; León, A. A Proposal to Model Ancient Silk Weaving Techniques and Extracting Information from Digital Imagery-Ongoing Results of the SILKNOW Project. In Proceedings of the International Conference on Computational Science, Faro, Portugal, 12–14 June 2019; pp. 733–740.
  70. SILKNOW Project. SILKNOW’s ADASilk Search Engine. Available online: https://ada.silknow.org/en (accessed on 2 February 2021).
  71. University of Erlangen-Nuremberg, Department of Computer Science & Artificial Intelligence. Erlangen CRM/OWL, CIDOC-CRM Implementation. Available online: http://erlangen-crm.org (accessed on 23 December 2020).
  72. International Committee for Documentation of the International Council of Museums. E22 Man-Made Object in Version 6.1. Available online: http://www.cidoc-crm.org/Entity/e22-man-made-object/version-6.1 (accessed on 23 December 2020).
  73. International Committee for Documentation of the International Council of Museums. E12 Production in Version 6.1. Available online: http://www.cidoc-crm.org/Entity/e12-production/version-6.1 (accessed on 23 December 2020).
  74. SILKNOW Project. Ontology Management Environment—SILKNOW Ongoing. Available online: http://ontome.dataforhistory.org/namespace/36#graph (accessed on 23 December 2020).
  75. GeoNames. Available online: http://www.geonames.org/ (accessed on 23 December 2020).
  76. Hungaricana. Available online: https://gallery.hungaricana.hu/en/map/?layers=google-roadmap%2Cvector-data&bbox=-1691399%2C4333062%2C5353037%2C7635141 (accessed on 7 February 2021).
  77. Cronobook. Available online: https://cronobook.com (accessed on 7 February 2021).
  78. Collections Du Musée Albert-Kahn. Available online: http://collections.albert-kahn.hauts-de-seine.fr (accessed on 7 February 2021).
  79. UNESCO Interactive Map. Available online: https://whc.unesco.org/en/interactive-map (accessed on 7 February 2021).
  80. PERICLES. Available online: https://mapyourheritage.eu (accessed on 7 February 2021).
  81. Cultural Routes. Available online: https://www.coe.int/en/web/cultural-routes/cultural-routes-database-main-page (accessed on 7 February 2021).
  82. CYARK. Available online: https://www.cyark.org/projects (accessed on 7 February 2021).
  83. Historic Country Borders. Available online: https://historicborders.vercel.app (accessed on 7 February 2021).
  84. Map of the Ancient World. Available online: https://www.ancient.eu/map (accessed on 7 February 2021).
  85. Cultural Atlas of Australia. Available online: http://australian-cultural-atlas.info/CAA/search.php (accessed on 7 February 2021).
  86. Sanborn Maps from USA. Available online: https://selenaqian.github.io/sanborn-maps-navigator (accessed on 7 February 2021).
  87. A Map of Myth, Legend and Folklore. Available online: https://mythsmap.english-heritage.org.uk (accessed on 7 February 2021).
  88. Geoquiz History. Available online: https://baffioso.github.io/geoquiz-history (accessed on 7 February 2021).
  89. Industrial Heritage for Tourism. Available online: https://industrialheritage.travel/map (accessed on 7 February 2021).
  90. EAMENA. Available online: https://database.eamena.org/map (accessed on 7 February 2021).
  91. The Museum of the World. Available online: https://britishmuseum.withgoogle.com (accessed on 7 February 2021).
  92. OldSF. Available online: http://www.oldsf.org (accessed on 7 February 2021).
  93. Willmes, C.; Brocks, S.; Hoffmeister, D.; Hütt, C.; Kürner, D.; Volland, K.; Bareth, G. Facilitating Integrated Spatio-Temporal Visualization and Analysis of Heterogeneous Archaeological and Palaeoenvironmental Research Data. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, I-2, 223–228.
  94. SILKNOW Project. SILKNOW’s Virtual Loom & ADASilk Evaluation. Available online: https://silknow.eu/index.php/evaluation/test_en/ (accessed on 23 December 2020).
Figure 1. Overview of SILKNOW’s ontology. Image generated with the WebVOWL (Web-based Visualization of Ontologies) viewer, integrated in the Ontome web page [74].
Figure 2. WGS84-based map and some examples of clusterization (with 72 and 97 items collapsed).
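Figure 2 shows STMaps collapsing nearby objects into cluster icons labelled with a count (e.g., 72 and 97 items). The idea can be sketched with simple grid binning of WGS84 coordinates; the function below is an illustrative stand-in, not STMaps’ actual clustering algorithm, and the cell size is an assumed parameter.

```python
from collections import defaultdict

def cluster_markers(points, cell_deg=5.0):
    """Group WGS84 points (lat, lon) into grid cells of `cell_deg` degrees.

    Returns one (centroid_lat, centroid_lon, count) tuple per non-empty
    cell; a cluster icon would display `count` as the collapsed-item total.
    """
    cells = defaultdict(list)
    for lat, lon in points:
        # Floor division assigns each point to a fixed grid cell.
        key = (int(lat // cell_deg), int(lon // cell_deg))
        cells[key].append((lat, lon))
    clusters = []
    for members in cells.values():
        clat = sum(p[0] for p in members) / len(members)
        clon = sum(p[1] for p in members) / len(members)
        clusters.append((clat, clon, len(members)))
    return clusters

# Three nearby points (around València) collapse into one cluster of 3;
# a distant point (Paris) remains a singleton cluster.
pts = [(39.47, -0.38), (39.48, -0.37), (39.46, -0.39), (48.85, 2.35)]
print(cluster_markers(pts))
```

Real map widgets typically recluster on every zoom change by shrinking the cell size, so counts split apart as the user zooms in.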
Figure 3. Timeline visualization in STMaps showing data from two centuries: 16th (a) and 19th (b).
Figure 4. Layered time visualization in STMaps showing data from 17th–18th centuries and from the 19th century. Layered maps always appear in a 3D perspective.
Figure 5. Ring-based representation of the relationship between the properties of the objects in SILKNOW. Here we can see four ring sections (orange, red, green and blue) for each object. Each colored ring section represents a property of the object and is divided in two parts (one lighter and one darker) showing the percentage of objects in the map that share each property with this object.
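The percentages encoded by the ring sections can be obtained by simple counting over the objects currently shown on the map. A minimal sketch, assuming objects are plain records carrying the four properties named in the captions (“technique”, “material”, “time”, “category”); the function name and record format are illustrative, not part of STMaps.

```python
def shared_property_ratios(selected, objects,
                           props=("technique", "material", "time", "category")):
    """For each property, return the fraction of the other `objects` that
    share the selected object's value (the darker part of each ring section)."""
    others = [o for o in objects if o is not selected]
    ratios = {}
    for p in props:
        if not others or selected.get(p) is None:
            ratios[p] = 0.0  # missing data yields an empty ring section
            continue
        same = sum(1 for o in others if o.get(p) == selected[p])
        ratios[p] = same / len(others)
    return ratios

# Hypothetical records, loosely inspired by SILKNOW object properties.
objs = [
    {"technique": "espolin", "material": "silk", "time": "18th c.", "category": "fabric"},
    {"technique": "espolin", "material": "silk", "time": "19th c.", "category": "fabric"},
    {"technique": "damask",  "material": "silk", "time": "18th c.", "category": "dress"},
]
print(shared_property_ratios(objs[0], objs))
```

Here the first object shares its material with both other objects (ratio 1.0) but its technique, time and category with only one of the two (ratio 0.5 each).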
Figure 6. Information window of the properties of the objects in SILKNOW shown with STMaps.
Figure 7. Line-based representation of the relationship between the properties of the objects in SILKNOW.
Figure 8. Icons (and their meanings) shown in STMaps for the objects in SILKNOW.
Figure 9. Examples of relationship visualization, where: (a) results of a query for objects related to the term “espolín”; (b) visualizing the objects that have the same value for the “category” property as a single object, selected by the user, with the line-based representation; (c) visualizing how many objects are related for each object of a query, for the four properties (“technique”, “material”, “time” and “category”), for a small set of objects, using the ring-based representation; (d) inspecting the percentages in detail, for one of the objects (the same used in (b)).
Figure 10. Example of objects with incomplete data, where: (a) results of a query in which some ring sections appear with light color; (b) depiction of the percentages for each property of an object; (c) detail of the object in (b).
Figure 11. Example of objects that have multiple locations, where: (a) results of a query; (b) depiction of objects with multiple locations; and (c) details of one of the objects.
Figure 12. Temporal evolution of objects related to velvet, for the region of Venice, for the 15th to 20th centuries. (a) 15th century; (b) 16th century; (c) 17th century; (d) 18th century; (e) 19th century; (f) 20th century.
Table 1. Extensions to the VISO ontology implemented by STMaps.
| New Class | Class Extended | Properties Used |
| --- | --- | --- |
| Graphic Representation 3D | Graphic Representation | |
| Graphic Object 3D | Graphic Object | File, Primitive, Color |
| Interactive Graphic Representation 3D | Interactive Graphic Representation | Tilt_allowed |
| Map 3D | Map | |
| Dynamic Map 3D | Map 3D, Interactive Graphic Representation 3D | Clustering_allowed, Data_allowed |
| Cluster | Graphic Object | Clustering (Spatial, Shape), Domain, Ring |
| Cluster 3D | Cluster, Graphic Object 3D | |
| Time Map | Dynamic Map | Levels |
| Time Map 3D | Time Map, Dynamic Map 3D | |
| Point | Data Structure | X, Y |
| Spatial Data | Point | |
| Time Data | Point | |
| Relation Ring | N-Ary Graphic O2O Relation | Color, Legend |
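The extension hierarchy of Table 1 can be encoded and queried programmatically. A minimal Python sketch follows; the dictionary is a simplified stand-in for the OWL class definitions (base VISO classes outside the table contribute no properties here, and “Clustering (Spatial, Shape)” is collapsed to a single entry).

```python
# Hypothetical encoding of the Table 1 extensions: each new class maps to
# (classes it extends, properties it introduces).
VISO_EXTENSIONS = {
    "Graphic Representation 3D": (["Graphic Representation"], []),
    "Graphic Object 3D": (["Graphic Object"], ["File", "Primitive", "Color"]),
    "Interactive Graphic Representation 3D":
        (["Interactive Graphic Representation"], ["Tilt_allowed"]),
    "Map 3D": (["Map"], []),
    "Dynamic Map 3D": (["Map 3D", "Interactive Graphic Representation 3D"],
                       ["Clustering_allowed", "Data_allowed"]),
    "Cluster": (["Graphic Object"], ["Clustering", "Domain", "Ring"]),
    "Cluster 3D": (["Cluster", "Graphic Object 3D"], []),
    "Time Map": (["Dynamic Map"], ["Levels"]),
    "Time Map 3D": (["Time Map", "Dynamic Map 3D"], []),
    "Point": (["Data Structure"], ["X", "Y"]),
    "Spatial Data": (["Point"], []),
    "Time Data": (["Point"], []),
    "Relation Ring": (["N-Ary Graphic O2O Relation"], ["Color", "Legend"]),
}

def all_properties(cls, hierarchy=VISO_EXTENSIONS):
    """Collect the properties a class accumulates through its extension chain."""
    parents, props = hierarchy.get(cls, ([], []))
    seen = list(props)
    for parent in parents:
        for prop in all_properties(parent, hierarchy):
            if prop not in seen:
                seen.append(prop)
    return seen

print(all_properties("Time Map 3D"))
```

For example, “Time Map 3D” ends up with Levels, Clustering_allowed, Data_allowed and Tilt_allowed through its two parent chains, which matches the multiple-inheritance rows of the table.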
Table 2. Number and percentages of objects with multiple locations and/or multiple time spans, included in ADASilk (on 21 December 2020). The total number of objects is 38,971.
| | Number of Objects with “Production Place” Information | % | Number of Objects with “Production Time” Information | % |
| --- | --- | --- | --- | --- |
| Total | 36,081 | 100 | 29,465 | 100 |
| One value | 33,834 | 93.7 | 26,852 | 91.1 |
| More than one value | 2,247 | 6.3 | 2,613 | 8.9 |
Table 3. Comparative analysis between STMaps and other related CH interactive maps.
| Work | Spatial Features Representing Data | Temporal Representation of Data | Other Relevant Features |
| --- | --- | --- | --- |
| UNESCO Interactive Map [79] | Icons; message boxes (text, images); 2D/3D | Filter data by date | Filters (categories, themes etc.) |
| PERICLES [80] | Clustering; interactive icons; message boxes (text) | - | Filters (object, song, story etc.); users can add their own data |
| Cultural Routes [81] | Clustering; interactive icons; message boxes (text, images) | - | Link to detailed inspection of objects |
| CYARK [82] | Clustering; interactive icons; message boxes (text, images) | Timeline (from, to) | Link to detailed inspection of objects |
| Historic Country Borders [83] | Choropleth map; text | Timeline | - |
| Chronas [24] | Interactive choropleth map; icons; message boxes | Map timeline; another timeline linked to description of objects (Wikipedia) | Line-based representation of relations (between objects and visual data); link to Wikipedia; link to related items |
| Map of the Ancient World [84] | Choropleth map; text | Time selection (from certain historical moments) | - |
| Cultural Atlas of Australia [85] | Interactive icons; message boxes (text, images) | Timeline (from, to) | Filters (narrative, location, state) |
| Sanborn Maps from USA [86] | Interactive multi-resolution choropleth map; interactive icons | As textual data related to objects | Link to detailed inspection of objects (Library of Congress) |
| Jewish Cultures Mapped [26] | Interactive icons; message boxes (images) | Timeline with interactive icons representing events, connected to the map | Link to detailed inspection of objects; filters (projects, people, tags etc.); related objects through events |
| A Map of Myth, Legend and Folklore [87] | Interactive drawings; animated drawings | - | Link to detailed inspection of stories; some stories are submitted by users |
| Geoquiz History [88] | Interactive icons; message boxes (text) | - | Link to Wikipedia |
| Industrial Heritage for Tourism [89] | Interactive icons; message boxes (text) | - | Virtual tours (360° images); videos |
| EAMENA [90] | Interactive icons; clustering; message boxes (text) | - | Link to heritage place |
| The Museum of the World [91] | Interactive icons; areas represent continents; simple message boxes (text, images); detailed message boxes (stories, images, videos etc.) | 3D representation of time, highly interactive | Line-based representations of relations; filtering (art and design, living and dying etc.) |
| Hungaricana [76] | Interactive icons (only clustered); message boxes with images | Date of objects is depicted when inspecting them | Link to detailed inspection of objects |
| Collections du Musée Albert-Kahn [78] | Clustering; interactive icons; message boxes with images | - | Filters (themes etc.) |
| OldSF [92] | Interactive icons (only clustered) | Timeline (from, to) | Inspection of objects (images and text) |
| Helsinki Ennen [25] | Interactive icons; message boxes (text and images) | Timeline (selection of specific time spans) | - |
| Cronobook [77] | Clustering; pictorial icons; message boxes (image, text) | Date of objects is depicted when inspecting them | - |
| Willmes et al. 2012 [93] | Choropleth map; icons | Timeline | Based on semantic web |
| STMaps | Clustering; interactive icons; message boxes (text, images, properties); 2D/3D | Timeline and time-layer representation (granularity: centuries) | Uncertainty (icons for records with multiple locations); based on knowledge graph; line-based and ring-based representations of relations; filtering |

Share and Cite

MDPI and ACS Style

Sevilla, J.; Casanova-Salas, P.; Casas-Yrurzum, S.; Portalés, C. Multi-Purpose Ontology-Based Visualization of Spatio-Temporal Data: A Case Study on Silk Heritage. Appl. Sci. 2021, 11, 1636. https://doi.org/10.3390/app11041636
