Article

Semantic Mediation Model to Promote Improved Data Sharing Using Representation Learning in Heterogeneous Healthcare Service Environments

1 Department of Information and Communications Engineering, Hankuk University of Foreign Studies, Seoul 02450, Korea
2 Senior Technical Advisor, Digital Literati Information Technology Co. Ltd., Cheong-ju 28126, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(19), 4175; https://doi.org/10.3390/app9194175
Submission received: 19 August 2019 / Revised: 18 September 2019 / Accepted: 27 September 2019 / Published: 5 October 2019

Abstract: Interoperability has become a major challenge for the development of integrated healthcare applications, mainly because data are collected, processed, and managed using heterogeneous protocols, data formats, and technologies. Moreover, interoperability among healthcare applications has been limited by the lack of mutually agreed standards. This article proposes a semantic mediation model for interoperability provision in heterogeneous healthcare service environments. To enhance semantic mediation, the Web of Objects (WoO) framework is used to support the abstraction and aggregation of healthcare concepts through virtual objects and composite virtual objects with ontologies. In addition, semantic annotation of healthcare data is achieved with a simplified annotation algorithm, and the alignment of diverse data models is supported by a deep representation learning method. Semantic annotation and alignment provide a common understanding of data and cohesive integration, respectively. The semantic mediation model is backed by a target ontology catalog and standard vocabularies. Healthcare data are modeled using the standard Resource Description Framework (RDF), whose triple structure describes healthcare concepts in a unified way. We demonstrate the semantic mediation process in an experimental setting and provide details on the utilization of the proposed model.

1. Introduction

Innovations in Information and Communication Technologies (ICTs) have revolutionized the healthcare services landscape and created several possibilities to collect and manage healthcare data effectively [1]. However, automated healthcare information exchange across services is hampered by the heterogeneity of systems. Consequently, one of the major challenges in healthcare services is data interoperability, which arises when healthcare information services are developed independently in different institutions. These services involve diverse data processing methods and eventually materialize into heterogeneous models for maintaining healthcare information [2,3].
Data interoperability has been defined as the capability through which heterogeneous systems and applications can communicate, exchange data, and interpret the meaning of data so that it becomes sharable and reusable across systems [2]. It provides huge benefits in terms of integrating multiple systems, which enables collaboration and distribution of information. Interoperability can be achieved at different levels; hence, several types have been considered to fully realize uniform sharing of information in heterogeneous healthcare environments: technical interoperability, syntactic interoperability, and semantic interoperability. Technical interoperability is concerned with technological aspects of healthcare platforms and systems, such as those associated with specific software elements or those that enable communication among different systems; it also encompasses issues regarding protocols and their technical specifications. Syntactic interoperability, on the other hand, defines functions to support syntax-level mediation and is concerned with different data formats, heterogeneous schemas, and diverse interfaces. Semantic interoperability is concerned with the meaning of data: it defines the true meaning of the contents generated by healthcare information services, as mutually agreed by the different systems that use these contents. Consensus on meaning is required while exchanging data across systems. Semantic interoperability enables different stakeholders to access and understand data unambiguously.
Nowadays, semantic interoperability provisioning in healthcare data applications has become a crucial and most challenging problem [2,4,5,6]. Several healthcare applications maintain data in isolated technological silos and manage information in different semantic models, where each model is designed specifically for a particular healthcare application. The lack of standards for integrating and sharing healthcare data from diverse systems poses a challenge for developing semantically interoperable cross-domain applications. It is difficult to resolve such heterogeneity merely through database solutions, because each database is designed by different knowledge engineers with a different set of requirements, which results in semantic heterogeneity [7]. Therefore, this article mainly focuses on key aspects of semantic interoperability. Semantic interoperability provisioning requires features such as common semantic models and semantic mediation mechanisms to maintain a shared understanding of data, eventually providing the mutual exchange and integration of information. Semantic mediation mechanisms provide intermediation processing and management of data so as to generalize well over isolated and heterogeneous data solutions. To develop a semantic interoperability provisioning model, some key procedures have been recognized in previous studies [8,9]. Semantic interoperability provisioning requires the core concepts of an ontological model and their metadata in a source domain to be well aligned with the concepts of the target domains. Such mediation requires efficient and reliable mechanisms to achieve effective interoperability for healthcare applications.
A view of the healthcare application environments where semantic interoperability provisioning is required is shown in Figure 1. In such a scenario, different stakeholders, such as hospitals, clinical rehabilitation centers, government institutes, etc., want to integrate and share data for mutual decision making. A variety of healthcare applications could benefit if there were a common understanding of the shared data and generic processing mechanisms, which would reduce cost, enable faster processing, and decrease the overall data management complexity.
Data interoperability benefits the global healthcare market, which is increasing with tremendous potential for diverse applications. These application areas include healthcare systems [10], diagnosis and consultation applications [11,12], telehealth solutions [13], clinical decision support systems [14], health information interchange applications [15], chronic healthcare management applications [16], and several others.
Moreover, healthcare services are not restricted to stationary information services; they have evolved into more dynamic applications, such as remote health monitoring in pervasive and Internet of Things (IoT)-enabled environments, for example, emotion detection [17,18], depressive disorder analysis [19], or mental health monitoring [20] from physiological signals collected through smart sensors. However, in these types of applications, data interoperability still poses a major challenge [21]. Applications that provide healthcare services by monitoring affective human health states, such as emotions, collect data from diverse heterogeneous sources and modalities using wearable and stationary sensors and devices [22], as well as smartphones [23]. In such applications, heterogeneity in data models, data processing methods, and communication protocols creates an interoperability bottleneck, limits the integration of data collected from multiple sources, and reduces the flexibility of healthcare information sharing in smart environments.
To overcome these challenges, the proposed work contributes a semantic mediation model and provides a solution for data interoperability in heterogeneous healthcare applications. The model includes several features: the semantic annotation of healthcare data collected from different sources, a semantic alignment mechanism based on deep representation learning, and the integration of healthcare concepts. The work also contributes a base ontology model to provide the mapping of healthcare concepts from diverse ontologies, and it provides a Web of Objects (WoO) [24] based ontology catalog infrastructure to deploy semantic ontologies that can be utilized by diverse healthcare systems to support interoperable data management and processing.
The main contributions of this article are as follows:
  • We propose a semantic mediation model to support interoperability provisioning for healthcare data integration, sharing, and exchange;
  • A base ontology model has been developed with a standard healthcare vocabularies catalog, supporting the reuse of semantic ontologies across applications;
  • We utilize deep representation learning-based mechanisms to mitigate the heterogeneity of semantic data models;
  • Ontology models have been developed to describe and maintain healthcare knowledge;
  • The development of interoperable healthcare applications has been simplified by devising Web of Objects-enabled features with semantic interoperability provisioning capabilities;
  • We formulated a semantic annotation algorithm and semantic alignment procedures to enhance overall interoperability provisioning.
Moreover, the development of a data interoperability mechanism based on a common data model has been realized, which enables the harmonization of data from different domains. A reusable catalog of ontologies and standard vocabularies, supported with Web of Objects-enabled features, can provide an effective solution and mitigate the semantic heterogeneity problem of healthcare applications. The Web of Objects model provides the integration of heterogeneous data resources with semantic support features.

2. Background and Related Work

In large-scale healthcare environments, providing interoperability among diverse applications has become a crucial requirement, as the lack of data interoperability has become a significant problem for information sharing, integration, and cross-domain application development [25]. Systems generating diverse types of data limit integration and sharing in support of the high-level objectives of unified healthcare services. Healthcare platforms use heterogeneous data models and involve different protocols to extract and process data; even different data terminologies and formats are used for data management. Moreover, due to the lack of agreed standards, interoperability provisioning solutions are very limited. It has been envisaged that without an appropriate level of interoperability, the full benefits of integrated healthcare applications cannot be achieved.
Data integration and sharing are also hampered when similar models are used to represent data but the meaning of concepts differs among models, creating a gap in interoperation [26]. Such scenarios require the interoperability of semantics. Semantic interoperability provides a common interpretation of the meaning of contents across different systems. Diverse systems can collaborate and integrate their applications based on shared semantic models, schemas, and common metadata and methods that provide mediation among the different collaborating systems [27].
Data interoperability has been investigated from diverse perspectives in the existing literature; some approaches provide the standard semantic ontology models [28,29] to mitigate the heterogeneity, while others use the semantic alignment [30] approaches to mediate among different models. Also, common data models have been utilized by some organizations to minimize heterogeneity in healthcare applications.
Semantic interoperability in healthcare applications has been influenced by the use of approaches where ontologies provide a semantic bridge between different applications. Standard ontologies and vocabularies define the domain concepts and their relationships, which are used by heterogeneous systems to understand and interoperate with each other. Several research studies and implementations have been explored in academia and industry. For example, Adil et al. [31] proposed an ontology-based approach for semantic interoperability in distributed healthcare information services.
To support interoperability in heterogeneous environments where healthcare monitoring can be realized using smart devices, several semantic ontology models have been introduced. The Commonwealth Scientific and Industrial Research Organisation (CSIRO) ontology [32] is known as an early effort for the description of sensors and their aspects related to functions, operations, and measurements. Another was the Sensor Web for Autonomous Mission Operations (SWAMO) ontology [33], which enabled cooperation among agents within a web of sensors in intelligent environments. Later on, the most widely used ontology in pervasive environments, the W3C Semantic Sensor Network (SSN) ontology [34], was introduced. The SSN ontology defines concepts related to sensors, actuators, observations, features, related procedures, and samples. The SSN ontology incorporates the SOSA ontology, which defines the concepts of Sensor, Observation, Sample, and Actuator. Several applications, such as sensing and monitoring applications, benefit from the combination of SOSA and SSN. To enable semantic interoperability across platforms, oneM2M [35] also provides a base ontology for oneM2M-based applications; the purpose of this ontology model is to enable other systems to interoperate with the oneM2M platform.
Describing semantics has been achieved well with ontology models. However, because human opinions differ on the understanding of domain knowledge, numerous ontologies exist that differ from each other while describing the same domain concepts. In such cases, ontology alignment approaches have been investigated to support semantic interoperability. Several state-of-the-art methods have been explored. For example, Lambrix et al. [36] provide the alignment of ontologies in the biomedical domain; this work follows one-to-one alignment among ontology concepts and their relationships and uses structure- and terminology-based alignment, with some external information for similarity-based alignment. An automated ontology alignment method proposed by Hu [6] handles complex ontologies with RDFS and the Web Ontology Language (OWL); the method performs alignment in three steps: first, divide the ontologies; second, align the divisions; and third, learn the alignments. An agent-oriented alignment approach has been provided by Nagy [37], where agents produce initial, closely accurate alignments that are further improved after integration. Another approach to semantic alignment is based on multi-policy dynamic alignment with multiple matching tasks [38]. Also, in the bioinformatics domain, one study [39] provides automatic semantic similarity-based alignment and integration.
Moreover, several other approaches that support data interoperability make use of common data models. These approaches provide a shared representation of data through a standard vocabulary and shared schema. In the data interoperability provisioning process, a common representation of data is generated so that it can be understood by several heterogeneous applications, which provides a highly useful way to support the integration and sharing of information. To support healthcare application analytics, Observational Health Data Sciences and Informatics (OHDSI) [40] has introduced a common data representation model. Similar data models have also been developed in previous research studies, such as [41], where the purpose was to build a common abstract data model to handle neuroscience data. Another shared data model, introduced by PCORnet [42], was derived from the Mini-Sentinel Common Data Model (CDM) and follows a common data transformation.

3. Data Interoperability Provisioning with Web of Objects

3.1. Web of Objects Model

The Web-of-Objects (WoO) reference model [24] supports the development of interoperable applications using data-driven features in combination with knowledge-driven approaches. Web-of-Objects provides the mechanism for semantic virtualization of data and resources. It provides a unified model of digital representation of heterogeneous information generated from diverse data sources including sensing, actuating, and other information sources. Incorporation of WoO features in heterogeneous healthcare applications could foster data interoperability provision.
The Web-of-Objects is highly significant for the development of interoperable healthcare applications. Its importance can be realized through the layered reference model (WoO Specification [ITU-T Y.4458] [24]) for the management and processing of data. Its virtual level features Virtual Objects (VOs), which are necessary to provide a semantic representation of heterogeneous healthcare data. The composite virtual level provides enhanced conglomeration and orchestration of VOs to reflect logical constraints on data with respect to the healthcare domain. To develop interoperable service capabilities over the healthcare data model, it supports features for distributed applications with semantic data management.

3.2. Modeling Interoperable Healthcare Services with WoO

Modeling healthcare services with WoO enables data interoperability in the heterogeneous healthcare environment and simplifies the integration of data using standard virtual object information models. Data collected from real-world objects in various healthcare settings are virtually digitized and semantically annotated into VOs. Healthcare data in VOs are accumulated and harmonized by the CVOs (Composite Virtual Objects), to extract high-level knowledge. An overall view of the multi-domain healthcare environment supported with WoO features for interoperability provisioning has been rendered in Figure 2. WoO features are equipped with the necessary data processing capabilities and interfaces to collect and process the healthcare data from multiple platforms. This design is based on a bottom-up style where data generated from various healthcare environments are gathered for processing at different levels. It follows the WoO reference platform to represent diverse healthcare data.
VO data processing functions provide mechanisms to manage virtual objects. VOs are based on the semantic ontology, and they follow a well-defined information model which semantically annotates the data coming from the varied healthcare-oriented real-world objects. VOs are identified through unique Uniform Resource Identifiers (URIs). On top of the VO layer, CVOs form a composition of multiple VOs; in other words, one or more VOs are composed into a CVO to serve a service request. It is important to note that a CVO not only provides a mashup of multiple VOs, but also defines the set of rules that generate actionable information on VO data. These rules are triggered once the data from the VOs satisfy the conditions defined in the rules of each CVO.
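As a minimal illustration of how a CVO could compose VOs and trigger rules over their aggregated data, the following Python sketch uses hypothetical VO/CVO classes and a toy stress-alert rule; the class names, data fields, and thresholds are illustrative and not part of the WoO specification.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class VO:
    """Virtual Object: semantically annotated data from one real-world object."""
    uri: str                                   # unique URI identifying the VO
    data: Dict[str, float] = field(default_factory=dict)

@dataclass
class CVO:
    """Composite Virtual Object: a mashup of VOs plus rules over their data."""
    uri: str
    vos: List[VO]
    # each rule maps the merged VO data to an actionable message (or "" if it does not fire)
    rules: List[Callable[[Dict[str, float]], str]] = field(default_factory=list)

    def evaluate(self) -> List[str]:
        merged: Dict[str, float] = {}
        for vo in self.vos:
            merged.update(vo.data)
        # a rule fires only when its condition over the merged VO data holds
        return [msg for rule in self.rules if (msg := rule(merged))]

# usage: a hypothetical stress-alert rule over heart-rate and GSR readings
hr_vo = VO("urn:vo:hr-01", {"heart_rate": 112.0})
gsr_vo = VO("urn:vo:gsr-01", {"gsr": 6.3})
alert_cvo = CVO("urn:cvo:stress-monitor", [hr_vo, gsr_vo],
                rules=[lambda d: "possible stress episode"
                       if d.get("heart_rate", 0) > 100 and d.get("gsr", 0) > 5 else ""])
print(alert_cvo.evaluate())
```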
In multi-domain healthcare environments, it is necessary to support interoperability in case the data or objects can be reused in different domains. The semantic mediation model in our design incorporates a set of features that facilitate interoperable operations required to mitigate the heterogeneity of different domains.
The healthcare data analytics and processing layer provides analytic functionalities. The analytics processing services are categorized into knowledge-driven and data-driven services. The knowledge-driven services process the semantic data models and use a reasoning engine to infer the knowledge hidden in the semantic objects. In contrast, the data-driven services integrate learning processes to classify and predict healthcare-related facts using deep learning algorithms.
The top layer in the design is related to service management functions, which contain system-level functionalities. These include the service request evaluation and analysis service, which processes the user request based on the user's history and preference parameters and also considers the user's access-level policy. The lifecycle management process handles the states of the different services in the system; it keeps track of which services are running and which are halted due to service load. The discovery process manages service availability by querying the service repository database, which holds the entries of all services available in the system. The orchestration process, in turn, is responsible for the composition flows of services and their configurations.
In addition, the virtual object data processing functions manage VOs, which digitally represent healthcare objects with their descriptive semantic information. The VO representation model is necessary to describe the healthcare data object properties that can be associated with VOs. It includes different types of metadata, including the VO description, its interface information, its functions, features, and profile. The VO representation model describes the entities to be used by CVOs and services in the WoO domain, and VOs are generated from a VO template based on this representation model. Following the defined semantic information, the generic layout of the VO representation model (VO R-M) is illustrated in Figure 3.
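To make the idea of a VO representation concrete, the following sketch shows a hypothetical VO profile expressed in RDF/Turtle and parsed with Python's rdflib; the woo: namespace and property names are illustrative placeholders, not the normative WoO VO R-M vocabulary.

```python
from rdflib import Graph, Namespace, RDF

# Illustrative namespace; the actual WoO VO R-M vocabulary may differ.
WOO = Namespace("http://example.org/woo#")

vo_profile_ttl = """
@prefix woo: <http://example.org/woo#> .
@prefix ex:  <http://example.org/vo/> .

ex:GSR_Sensor_VO a woo:VirtualObject ;
    woo:description "GSR wearable sensor virtual object" ;
    woo:hasInterface ex:GSR_REST_Interface ;
    woo:hasFunction  ex:ObserveSkinConductance ;
    woo:hasProfile   [ woo:unit "microsiemens" ; woo:samplingRate "4Hz" ] .
"""

g = Graph()
g.parse(data=vo_profile_ttl, format="turtle")
for vo in g.subjects(RDF.type, WOO.VirtualObject):
    print("VO URI:", vo)
    for _, p, o in g.triples((vo, None, None)):
        print("  ", p.split("#")[-1], "->", o)
```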

4. Semantic Interoperability Provision in Heterogeneous Healthcare Service Environments

4.1. Semantic Mediation Model

Semantic interoperability is one of the key requirements for supporting integrated healthcare solutions, whether a patient's remote health monitoring service that collects information through several data sources or an integrated healthcare application that facilitates access to many healthcare services for diverse stakeholders. The semantic mediation model is based on several interoperability provisioning functions; a view of this model is illustrated in Figure 4. It consists of several features, mainly including the semantic annotation of healthcare data, the semantic alignment of heterogeneous models, a common data model for the unified representation of healthcare contents, and a catalog of target ontology models to maintain the repository of semantically aligned ontologies for the integration of multiple healthcare systems.
Semantic annotation of healthcare data is achieved by enhancing the contents collected from varied sources, which interlinks the data and makes them machine-readable through semantic ontology attachment. Through the semantic annotation process, information is attached to existing contents based on the concepts defined in the subject ontology model. With semantic annotation, isolated data are transformed into descriptions associated with the ontology that can be further inferred, exchanged, and reused.
The semantic mediation model also contains an ontology alignment mechanism. The semantic alignment and linking function enables the mapping and management of source and target schemas. The semantic alignment mechanism includes a semantic ontology management process service, which provides the capability to manage the ontology alignment. It is designed as a microservice that instantiates the ontology alignment process given the source and target ontology models; it takes both ontologies, performs the alignment, and returns the alignment results in terms of the true mappings achieved through the mapping algorithm. The approach to semantic alignment follows deep representation learning, as further discussed in Section 4.2.3.
Besides, a Common Data Model (CDM) for healthcare data integration, sharing, and interoperability provision plays an important role. The CDM transforms the heterogeneous types of data collected from diverse sources, having different structures, into a coherent, unique data format. The process of transforming the data into the CDM involves standard vocabularies, a core schema, and the common data format. The CDM is key to satisfying the requirements of semantic interoperability for healthcare data: a single source of data cannot provide an overall status and is insufficient for analysis when applications depend on many data sources with different semantics and formats. To comply with the semantic interoperability requirements, a common data model has been realized.
Semantic heterogeneity can be mitigated by providing semantic interoperability in an environment where healthcare applications utilize information from several data models that, although designed for different applications, refer to the same type of data. In such scenarios, however, it is very difficult to integrate the data automatically. To achieve this, a target ontology catalog has been designed to provide the semantic repositories and the necessary data processing and management mechanisms based on the WoO infrastructure. Moreover, a base ontology has been designed to contain the generic concepts supporting semantic interoperability for healthcare applications, together with features and associated metadata. The base ontology model has an interface to the standard ontology models of the healthcare domain and other external ontologies to express and provide interoperable features with existing systems. The functionalities in the semantic mediation model have been designed following a microservices design pattern to support scalable interoperability provision in dynamically changing environments; the services designed so far include the semantic annotation and alignment microservices, and the semantic format registration and query processing microservices.

4.2. Semantic Interoperability in Healthcare Data Models

To effectively develop interoperable applications on healthcare data, various methods at different levels have been employed to date. These include the use of interoperable data models to share a common set of concepts over the data, or the use of semantic ontology models to follow a standard vocabulary for integrating and sharing healthcare information across multiple applications. A major effort towards semantic interoperability is also made through semantic ontology alignment techniques, which support the discovery of similar concepts defined in heterogeneous data models. The ontology alignment approaches in existing studies make use of token-based or character-based methods with string-matching or character-matching algorithms to align the entities of different ontologies. However, high-level information relationships in the ontology entity descriptions are often missed. To overcome this problem, learning-based approaches have been utilized, in which deep representations of ontology entity descriptions are learned. It has been observed that the learned representations of ontology entities provide better results when computing similarity among the entities of ontology models.

4.2.1. Development of Ontology Models for Semantic Alignments

To mitigate semantic heterogeneity so that cross-domain healthcare applications could be developed, semantic alignment and annotation based on the standard vocabularies are highly required. We developed three ontology models of affective human health conditions and realized their annotations, where standard National Cancer Institute (NCI) Thesaurus [43] and Systematized Nomenclature of Medicine (SNOMED) terminologies [44] have been followed (as shown in Figure 5). The developed ontology models and their alignment with the base ontology have been further discussed in Section 5.1.
Moreover, the semantic base ontology consists of the generic concepts to support interoperability for healthcare applications (as shown in Figure 6). It includes the base object entity, which represents Real-World Objects (RWOs). An RWO has an operation mode, as it may represent a sensor or an actuator; if the RWO represents any other kind of information, this mode is set to none. A Base Object Element (BOE) is an entity that holds a base object. The BOE can have input features as well as output features. Each feature has associated metadata that presents details on the BOE and the operations that can be supported on this abstraction. The Composite Elements (CEs) are the entities in the base ontology that contain collections of BOE entities; they also have features and associated metadata. The base ontology model has an interface to the standard W3C SSN ontology and other external ontologies to express and provide interoperable features with existing systems.

4.2.2. Semantic Annotation of Data Generated from Heterogeneous Sources

The semantic annotation process enables the annotation of data based on a standard semantic ontology data model, using the selected annotation description language. Semantic annotation enhances content by interlinking machine-processed data with the mined concepts of a semantic ontology graph. Through the semantic annotation process, information is attached to existing contents based on the concepts of the ontology model, allowing heterogeneous data to be transformed into meaningful information that can be inferred, exchanged, and reused. Features of the semantic annotation capability include describing the associations among concepts and ontology models, associating information with the ontology graphs, and assigning semantic concepts and properties to the target data.
To provide semantic annotation of heterogeneous healthcare data, a generic semantic annotation algorithm has been developed. The algorithm takes a collection of ontology graphs [⅀ɸ] and a pointer to the registry entries (ℝe). The process involves loading concepts that need to be annotated with selected semantic ontology. Queries are performed on reference ontology graphs to extract semantic annotations. The algorithm contains several nested operations, in which the first series of iterations compute similarities on each extracted concept and generate a semantic graph. In the second phase, the entries are checked, and a validation is performed to evaluate the annotations. Moreover, validated entries are stored in the registry, and a base ontology instance is created to maintain annotated concepts. Also, verified semantically annotated graphs are either added to the annotation queue for quick reference or stored in the graph dataset of the semantic repository based on the computed weights. To visualize the process, a flow chart of the algorithm is presented in Figure 7.
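For illustration, the following Python sketch mirrors the main flow of the annotation algorithm (loading extracted concepts, computing similarity against reference ontology labels, validating, and storing entries in the registry); the string-similarity measure, threshold, and in-memory structures are simplified stand-ins for the actual similarity computation and semantic repository.

```python
from difflib import SequenceMatcher

def annotate(concepts, ontology_graphs, registry, threshold=0.8):
    """Simplified sketch of the annotation flow: match each extracted concept
    against reference ontology labels, validate, and store accepted entries."""
    annotated = []
    for concept in concepts:
        best_label, best_score = None, 0.0
        for graph in ontology_graphs:                      # reference ontology graphs
            for label in graph["labels"]:
                score = SequenceMatcher(None, concept.lower(), label.lower()).ratio()
                if score > best_score:
                    best_label, best_score = label, score
        if best_label and best_score >= threshold:         # validation step
            entry = {"concept": concept, "annotation": best_label, "weight": best_score}
            registry.append(entry)                         # registry entries
            annotated.append(entry)
    return annotated

# usage with toy data
graphs = [{"labels": ["GalvanicSkinResponse", "HeartRate", "StressSeverity"]}]
registry = []
print(annotate(["galvanic skin response", "heart rate"], graphs, registry))
```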

4.2.3. Semantic Ontology Alignment using Deep Representation Learning

Ontology graphs are an important way to represent shared concepts in diverse domains for the sharing of knowledge across different systems. However, even ontologies that describe the same domain may differ in their element-level information, as they are designed by different knowledge engineers, each encoding their own viewpoint in the semantic descriptions and relationships of real-world concepts.
A solution to semantic interoperability is to provide an effective ontology alignment approach. Existing ontology alignment approaches are based on diverse types of mapping mechanisms: some apply character-level mapping, while others make use of the token information of the entities in a chosen ontology. Some feature extraction-based approaches have also been explored in the recent literature [45,46,47]. However, these studies are very limited and focus only on extracting features based on the ontology structure or terminology. Engineering features from these information sources is quite time-consuming, and such features can only be used for the domain they were designed for; therefore, the performance of ontology alignment methods varies according to the features used.
To resolve such problems, deep learning-based approaches have been introduced in the literature [48,49]. In these approaches, deep neural network [50] based models are employed to generate representations from ontology descriptions that can be used to compute similarity among ontology entities. To realize the semantic alignment process, we follow a similar method and provide interoperability through ontology mapping for heterogeneous data in different VO ontology models. In our approach, we make use not only of the VO ontology classes and properties but also of the VO ontology individual descriptions and their metadata. We consider the processing of ontology data, including extracting the ontology entities, learning the deep representation of ontology entities, and developing the semantic similarity computation. The ontologies to be aligned are taken in RDF graph format; these RDF files are processed to remove extraneous content and extract the core entities.

Learning VO Ontology Representations

Representation learning provides the opportunity to address semantic interoperability problems by learning representations of ontological concepts, as evidenced by the improved performance of deep learning methods. Moreover, existing approaches do not focus on the higher-level information available in ontologies. To exploit this opportunity, deep learning-based methods are used to learn representations of the VO ontological information. This information is processed through word vectors, which ultimately improve the performance of the ontology alignment method. The proposed model provides a mechanism to interoperate domain ontologies by aligning their entities. Several types of information exist with respect to the entities in an ontology, including the concepts or classes in the ontology graph, the entities or individuals, the associated data properties and object properties, and many other types of data such as entity identifiers and comments, depending on the ontology design, as shown in Figure 8.
We describe the formal expression of each type of entity information used in our model (Equations (1)–(6)). We define the formal notations (based on our previous study [51]) for the alignment of a source ontology (Ꝺntoα) and a target ontology (Ꝺntoβ) at the individual element level, such that the data elements in an ontology are expressed as follows:
$$Ꝺnto_{\alpha} = \left\{\, \left(Con_{\alpha}^{1 \ldots n}\right) \cup \left(el_{\alpha}^{1 \ldots n}\right) \cup \left(dPrp_{\alpha}^{1 \ldots n}\right) \cup \left(oPrp_{\alpha}^{1 \ldots n}\right) \cup \left(mObj_{\alpha}^{1 \ldots n}\right) \,\right\} \tag{1}$$

$$Con_{\alpha} = \left\{\, \{Con_{\alpha 1}, Con_{\alpha 2}, \ldots, Con_{\alpha n}\} \mid Con_{\alpha} \in domain_{\alpha} \,\right\} \tag{2}$$

$$El_{\alpha} = \left\{\, \{el_{\alpha 1}, el_{\alpha 2}, \ldots, el_{\alpha n}\} \mid El_{\alpha} \in domain_{\alpha} \,\right\} \tag{3}$$

$$DProp_{\alpha} = \left\{\, \{dPrp_{\alpha 1}, dPrp_{\alpha 2}, \ldots, dPrp_{\alpha n}\} \mid DProp_{\alpha} \in domain_{\alpha} \,\right\} \tag{4}$$

$$OProp_{\alpha} = \left\{\, \{oPrp_{\alpha 1}, oPrp_{\alpha 2}, \ldots, oPrp_{\alpha n}\} \mid OProp_{\alpha} \in domain_{\alpha} \,\right\} \tag{5}$$

$$MObj_{\alpha} = \left\{\, \{mObj_{\alpha 1}, mObj_{\alpha 2}, \ldots, mObj_{\alpha n}\} \mid MObj_{\alpha} \in domain_{\alpha} \,\right\} \tag{6}$$
The above expressions describe the data elements in an ontology $Ꝺnto_{\alpha}$ as a set of individual element collections, where $Con_{\alpha}$ is the set of all concepts in the ontology hierarchy related to the specific domain, in this case $domain_{\alpha}$. $El_{\alpha}$ contains all the individuals or elements related to the defined classes in the domain ontology. $DProp_{\alpha}$ contains the data properties associated with each element of the domain, while $OProp_{\alpha}$ is the collection of all object properties defined to express the links among ontology elements. $MObj_{\alpha}$ is the collection of all metadata associated with the ontology, including the metadata of its classes, object properties, and data properties.
The ontology alignment (SA) process discovers the semantic association between two diverse ontologies, where the first is the source ontology and the other is the target ontology [49,52], denoted here as $Ꝺnto_{S}$ and $Ꝺnto_{T}$. The input of the semantic alignment is the two diverse ontologies, and the output of the alignment process is the correspondence, or association, defined as a tuple in the following equation:
$$SA = \left\langle\, U_{cod},\; Et_{S},\; Et_{T},\; SC,\; C \,\right\rangle$$
$Et_{S}$, $Et_{T}$: the entities in the source ontology and target ontology.
$SC$: the semantic correspondence between the entities of the two ontologies.
$C$: the degree of confidence in the correspondence between the entities.
$U_{cod}$: the unique identification code of the entity alignment.
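The following Python sketch mirrors these formal structures as simple data containers; the field names follow the notation above, and the example correspondence is purely illustrative.

```python
from dataclasses import dataclass, field
from typing import Dict, NamedTuple, Set

@dataclass
class OntologyElements:
    """Element sets of one ontology (Equations (1)-(6)), keyed to a single domain."""
    domain: str
    concepts: Set[str] = field(default_factory=set)            # Con_alpha
    individuals: Set[str] = field(default_factory=set)         # El_alpha
    data_properties: Set[str] = field(default_factory=set)     # DProp_alpha
    object_properties: Set[str] = field(default_factory=set)   # OProp_alpha
    metadata: Dict[str, str] = field(default_factory=dict)     # MObj_alpha

class Alignment(NamedTuple):
    """One correspondence produced by the alignment process SA."""
    u_cod: str      # unique identification code of the entity alignment
    et_s: str       # entity in the source ontology
    et_t: str       # entity in the target ontology
    sc: str         # semantic correspondence relation (e.g., equivalence)
    c: float        # confidence degree of the correspondence

# usage: a toy correspondence between two GSR concepts
print(Alignment("a-001", "deap:GSR", "amigos:GalvanicSkinResponse", "equivalence", 0.92))
```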

Procedure for Learning Representations

To learn the representation of ontology entities for semantic alignment, some preprocessing procedures are used. The input to the preprocessing method is the set of ontologies chosen for the alignment task. The procedure involves several steps to retrieve the information. First, ontology descriptions are extracted from the provided ontology models; the extraction of ontology tuples is achieved with the libraries of the open-source Jena framework [53]. The extracted tuples are processed by removing additional information and extracting the entity descriptions. The descriptions are refined through cleaning, lemmatization, and token formation. The word tokens are then used to generate Binary Bag-of-Words (B-B-W) vectors. The next step provides the B-B-W vectors as input to the learning model that generates the representations. Finally, the abstract representations are used for computing similarities. The stepwise process for semantic alignment is shown in Figure 9.
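As a sketch of this preprocessing pipeline in Python (the tuple extraction in the paper uses the Java-based Jena framework; rdflib is used here as a stand-in, and lemmatization is omitted), entity descriptions are gathered from labels and comments and turned into B-B-W vectors over a shared vocabulary:

```python
import re
import numpy as np
from rdflib import Graph
from rdflib.namespace import RDFS

def entity_descriptions(ontology_file):
    """Collect a textual description (labels/comments) per entity URI."""
    g = Graph()
    g.parse(ontology_file)                     # RDF/OWL ontology chosen for alignment
    desc = {}
    for s, p, o in g:
        if p in (RDFS.label, RDFS.comment):
            desc.setdefault(str(s), []).append(str(o))
    return {uri: " ".join(texts) for uri, texts in desc.items()}

def tokenize(text):
    # cleaning + lowercasing + simple token formation (no lemmatization in this sketch)
    return re.findall(r"[a-z]+", text.lower())

def bbw_vectors(descriptions):
    """Binary Bag-of-Words (B-B-W) vectors over the joint vocabulary."""
    vocab = sorted({t for d in descriptions.values() for t in tokenize(d)})
    index = {t: i for i, t in enumerate(vocab)}
    vectors = {}
    for uri, d in descriptions.items():
        v = np.zeros(len(vocab), dtype=np.float32)
        for t in tokenize(d):
            v[index[t]] = 1.0                  # binary presence, not counts
        vectors[uri] = v
    return vectors, vocab
```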
In the process of learning the VO ontology graph representation, we use descriptions of the VO ontology entities, as discussed above. The representation learning model learns the representation in multiple layers, from low level to high level. The core steps are as follows: first, produce the Binary Bag-of-Words (B-B-W) vectors of each VO ontology entity based on its description; second, obtain a higher-level representation through a learning-based approach; third, obtain the alignment among the ontology graph entities by using the cosine similarity measure [49] to build the similarity matrix. As in previous work, the similarity mapping is then identified with the stable marriage algorithm [54,55].
Diverse ontology alignment approaches have been used in previous research. Recently, however, the ontology alignment process has been moving towards learning-based approaches. Studies such as [49,52] have shown that deep neural network-based approaches perform better in ontology alignment using representation learning. We utilize a similar approach to obtain the learned representation of ontological entities using an Auto Encoder (AuE) model [56], which learns an abstract representation of the VO data. The learning model is illustrated in Figure 10, where the input VO ontology data ($VO_O$) are given as a B-B-W vector to the model, and at the output a VO ontology representation ($VO_{\hat{O}}$) is obtained that approximates $VO_O$ by learning an estimate of the function $E(VO_O)$. The model consists of encoder and decoder parts, similar to previous studies [50,57]. Following existing practice in standard approaches [50,58], the encoder is defined as $E(VO_O) = P(W \cdot VO_O + B_{j1})$ and the decoder as $D(E(VO_O)) = P(W^{T} E(VO_O) + B_{j2})$, where $P$ is an activation function (the sigmoid $P(o) = 1/(1 + \exp(-o))$), and $B$ and $W$ are the bias and weights used in the model, respectively. To retain maximum information, the model tries to reduce the reconstruction error $Q(VO_O, D(E(VO_O))) = \left(\left|VO_O - D(E(VO_O))\right|\right)^2$ in reproducing the output [57]. Going through this transformation process, the proposed model acquires a representation for each entity in the VO ontology. Moreover, for learning the descriptions of entities and their correlation, the model minimizes the loss function over the set of VO ontology class entities, $Loss_{f(w,b)}^{nClss} = \sum_{k=1}^{n} Diff\left(Clss_{VO_O(i)}, Clss_{VO_{\hat{O}}(j)}\right)$, given the weights ($w$) and bias ($b$), where the custom-built function $Diff$ minimizes the loss between the VO class descriptions $Clss_{VO_O(i)}$ and $Clss_{VO_{\hat{O}}(j)}$ as per the standard procedure [50,52]. A similar operation is performed for the data properties ($Loss_{f(w,b)}^{nDp}$) and the individual entities ($Loss_{f(w,b)}^{nInd}$). Here, $VO_O(i)$ and $VO_{\hat{O}}(j)$ are the B-B-W vector representations of entities in the VO source and target ontologies.
When the B-B-W vectors of the VO ontology entities are provided to the network, it generates representations at diverse levels. Based on multiple layers of learning, the representations are learned; in this model, the representation in the last layer, as the final output for the VO ontology description, is used to calculate the semantic alignment of concepts. Two AuEs are used, where the last AuE creates the final representation. Using the learned representations, the semantic distance based on the cosine similarity measure is computed, as in [49], and the similarity matrix is formulated. Likewise, based on the method used in [54,55], the stable marriage algorithm is utilized to find the similarity among entities from the similarity matrix of the VO source ontology and VO target ontology.
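A minimal sketch of this learning and matching step, assuming TensorFlow/Keras and scikit-learn and the B-B-W vectors produced above over a shared vocabulary; a single autoencoder and a greedy one-to-one matching are used here in place of the stacked AuEs and the stable-marriage algorithm described in the paper.

```python
import numpy as np
from tensorflow import keras
from sklearn.metrics.pairwise import cosine_similarity

def build_autoencoder(input_dim, code_dim=64):
    """One autoencoder (AuE); the paper stacks two and uses the last code layer."""
    inputs = keras.Input(shape=(input_dim,))
    code = keras.layers.Dense(code_dim, activation="sigmoid")(inputs)    # encoder E(VO_O)
    outputs = keras.layers.Dense(input_dim, activation="sigmoid")(code)  # decoder D(E(VO_O))
    autoencoder = keras.Model(inputs, outputs)
    encoder = keras.Model(inputs, code)
    autoencoder.compile(optimizer="adam", loss="mse")                    # reconstruction error Q
    return autoencoder, encoder

def align(source_vecs, target_vecs, epochs=200):
    """Learn codes for all entities, compute cosine similarity, and pick 1:1 matches."""
    X = np.vstack(list(source_vecs.values()) + list(target_vecs.values()))
    autoencoder, encoder = build_autoencoder(X.shape[1])
    autoencoder.fit(X, X, epochs=epochs, batch_size=16, verbose=0)
    src = encoder.predict(np.vstack(list(source_vecs.values())), verbose=0)
    tgt = encoder.predict(np.vstack(list(target_vecs.values())), verbose=0)
    sim = cosine_similarity(src, tgt)          # similarity matrix over learned codes
    # greedy one-to-one matching as a stand-in for the stable-marriage step
    pairs, used_rows, used_cols = [], set(), set()
    for i in np.argsort(-sim, axis=None):
        r, c = divmod(int(i), sim.shape[1])
        if r not in used_rows and c not in used_cols:
            pairs.append((r, c, float(sim[r, c])))
            used_rows.add(r)
            used_cols.add(c)
    return pairs
```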

4.2.4. Deployment of Learned Semantic Alignments in the Target Ontology Catalog

Semantic interoperability provision can enhance data sharing in an environment where healthcare applications can utilize information from diverse models that, although designed for different applications, refer to the same data. In such scenarios, it is very difficult to integrate the data automatically, and existing semantic alignment approaches require mechanisms to align these data models with the help of human or tool effort. We propose a setup for automatic semantic alignment in the face of diverse semantic models (as shown in Figure 11). We design a set of microservices that split the semantic processing into features such as semantic annotation, semantic query processing, and semantic alignment processing. Moreover, a base ontology catalog has been set up to provide the repositories and the data processing and management mechanisms based on the WoO infrastructure. The learning mechanism is designed as a microservice process that learns the ontology representations, which are input to the semantic alignment process to compute the semantic similarities between data models. Depending on the semantic matches, entries are stored in the base ontology instances. These instances collectively form an interoperable shared data model that provides semantic coherence across multiple data models. A base ontology generated from the semantic alignment process can be used by other systems to deal with the heterogeneity of the chosen system; this enables the development of applications that form service flows on heterogeneous data semantically synchronized from multiple systems.

4.2.5. Common Data Model

The Common Data Model (CDM) transforms the heterogeneous types of data collected from diverse sources, having different structures, into a coherent, unique data format. The process of transforming the data into the CDM involves standard vocabularies, a core schema, a metadata schema, and the common data format. The CDM is key to satisfying the requirement of semantic interoperability for healthcare data: a single source of data cannot provide an overall status and is insufficient for analysis when applications depend on many data sources with different semantics and formats. To comply with the semantic interoperability requirement, a common and standard data model is highly necessary. To fill the gap left by existing approaches lacking a commonly agreed data model, we propose our CDM, realized as the Interoperable Shared Data Model (ISDM).
The ISDM provides data enhancement using a common schema, a meta schema, and standard vocabularies, with data transformation procedures to map data from heterogeneous domains into an interoperable shared data resource that supports sharing and integration among applications at different data levels (as presented in Figure 12a).
Transformation to the interoperable shared data model is done by extracting the entities from the domain-specific data model with their metadata descriptions and translating them from the concepts of a domain to the generic entities of the ISDM. An example of this transformation is shown in Figure 12b. Such translation involves a core schema to transform the concepts of the domain and a meta schema to transform the related metadata of the domain. The core schema provides the collection of reference concepts and their intermediate relationships used to represent the data in the subject domains. The meta schema provides a set of reference metadata for the transformation of domain metadata into a cross-domain interoperable metadata description. The major elements of the CDM involve the management of core data and metadata, which improves the data interoperability provisioning process through features such as the expressive description of data elements, commonly understood meaning of data across diverse domains, reusability of data among different applications, and uniform transformation of data into standard formats.
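The following Python sketch illustrates this kind of core-schema and meta-schema translation with hypothetical mappings and field names; the actual ISDM reference vocabulary is not reproduced here.

```python
# Illustrative core/meta schemas; the real ISDM vocabularies are not published here.
CORE_SCHEMA = {              # domain term -> shared reference concept
    "GSR": "SkinConductance",
    "Plethysmograph": "BloodVolumePulse",
}
META_SCHEMA = {              # domain metadata key -> cross-domain metadata key
    "unit": "measurementUnit",
    "freq": "samplingRate",
}

def to_isdm(domain_record):
    """Translate one domain-specific record into the interoperable shared data model."""
    concept = CORE_SCHEMA.get(domain_record["concept"], domain_record["concept"])
    metadata = {META_SCHEMA.get(k, k): v for k, v in domain_record.get("meta", {}).items()}
    return {"concept": concept, "value": domain_record["value"], "metadata": metadata}

# usage: a hypothetical DEAP-style record
print(to_isdm({"concept": "GSR", "value": 5.8, "meta": {"unit": "uS", "freq": "128Hz"}}))
```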

5. Experimental Analysis and Implementation

5.1. Implementation Setup

A WoO-based implementation setup for healthcare data interoperability has been developed (as shown in Figure 13). It constitutes a suite of microservices, including a semantic annotation microservice, a semantic query processing microservice, and a semantic alignment microservice, which are deployed in the implementation setup. The annotation microservice enriches the contents from a domain object by linking the information with the semantic ontology; it associates extra information with the contents of an object corresponding to an ontology, which helps make the content machine-readable and interpretable in other domains. The query processing microservice handles the queries from the data analytics layer: it checks and maintains query logs and executes the query if an entry is available; otherwise, it generates a new query specific to the service requirement. The alignment microservice is invoked when two objects that need to interoperate in a service use different data models to represent similar information; in this case, it aligns the concepts from these data models to make them interoperable and usable for handling the service request. In addition, a validator microservice has been defined for the semantic description; it checks the description of annotated contents to verify its correspondence with the semantic ontology concepts and their relationships.
Semantic alignment of data with the base ontology has been achieved in this setting. A base ontology catalog has been formulated in a cloud-based setup of repositories implemented using the Jena Fuseki Server [59], and linking with standard terminologies has been provided. An Interoperability Processing Server (IPS) has been set up to support these processing functions; the IPS provides functions to retrieve data elements and allows query processing services. A SPARQL endpoint has also been set up to support the query processing interfaces. An Object Abstraction Management (OAS) server has also been established to deploy the abstraction processing microservice, which instantiates and maintains VO templates and provides an interface to retrieve VO profiles.
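As an illustration of how such a query processing interface can be exercised, the following Python sketch issues a SPARQL query to a Fuseki endpoint via SPARQLWrapper; the endpoint URL, dataset name, and woo: vocabulary are hypothetical and would need to match the actual catalog deployment.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical Fuseki endpoint and vocabulary; adjust to the actual catalog deployment.
endpoint = SPARQLWrapper("http://localhost:3030/base-ontology/sparql")
endpoint.setQuery("""
    PREFIX woo: <http://example.org/woo#>
    SELECT ?vo ?profile WHERE {
        ?vo a woo:VirtualObject ;
            woo:hasProfile ?profile .
    } LIMIT 10
""")
endpoint.setReturnFormat(JSON)
results = endpoint.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["vo"]["value"], "->", binding["profile"]["value"])
```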

5.2. Semantic Ontology Development

Existing research [60,61] has demonstrated that affective states of human health can be recognized through physiological condition monitoring. In this regard, several recent studies have focused on collecting data on affective health states through wearable sensors. Queen Mary University of London, United Kingdom, and the University of Trento, Italy, have collaboratively gathered such data and organized it into the AMIGOS [1] and DEAP [3] datasets. Based on these, we develop two ontology models to express health state concepts and their semantic relationships.
The first ontology model has been developed to express the domain concepts of diverse affective health states and is designed as the AMIGOS affective states ontology (shown in Figure 14). This ontology contains several concepts about different affective conditions, such as stress, anger, amusement, etc. It also contains concepts such as the severity of these states, healthcare profiles, and sensor input feeds. The modeled ontology has been aligned with the base ontology model following shared data categorizations. The second ontology model (the DEAP semantics ontology, shown in Figure 15) also constitutes concepts on affective health state conditions and their levels of severity based on the diverse data input; however, the concepts are categorized in different hierarchies, and the data are semantically expressed differently. The reason for using the AMIGOS and DEAP models is to reproduce the interoperability issues of the healthcare domain, specifically those related to heterogeneous models.
The ontology models have been developed through Protégé and expressed in RDF format. Protégé is an open-source ontology development tool and framework that supports the implementation of intelligent systems; it is backed by a large community of users in academia and industry [62]. One of the reasons for using Protégé is that it facilitates developing knowledge-oriented solutions in any domain with powerful modeling capabilities.
The base ontology model provides a hub for the integration of multiple semantic ontologies to express the concepts of a domain; in this case, the domain of our use case is the affective states of human health. Therefore, the base ontology provides intermediate concepts, data properties, object properties, and semantic conditions that satisfy the rules imposed by the shared representation model for the core data and metadata. The semantic base ontology model developed for the affective health states data is shown in Figure 16. Moreover, the descriptions of the data input from multiple ontologies, represented as base feature elements, have been incorporated in the model, as shown in the RDF code in Figure 17. The implementation of the base ontology model provides an RDF-based ontology to integrate the affective state concepts of the AMIGOS and DEAP ontology models. To express the shared representation of the ontology concepts, the lexicon dictionaries of affective states [63] have been followed.
The semantic alignment process is based on the semantic annotation procedures. In order to properly align the concepts and properties of different domains, annotation of the data with respect to standard ontologies has been achieved. Moreover, to support interoperability across these different annotation schemes, a common annotation based on the base ontology model has been developed. In this case, to map the annotations of the AMIGOS model to the DEAP model, semantic annotation to the base ontology model through shared data mapping has been performed. This annotation model is illustrated in Figure 18.
In the above model, annotations are expressed as named graphs with the annotation triples at the center. The model contains types (such as the AMIGOS annotation) as well as metadata (such as the affective state, severity level, etc.). A combination of the types and the metadata forms the context of the annotation. The representation of the annotations for the two models and their mapping is shown in the code snippet presented in Figure 19.
To perform semantic annotation based on the shared data model, an instance of semantic mapping among the concepts of sensor modality data is shown in the ontology mapping of Figure 20. In this mapping, Galvanic Skin Response (GSR) observations from the DEAP ontology are mapped to the GSR concepts of the AMIGOS ontology with the mediation of the base ontology model and semantic annotation with the shared data representations. Moreover, the semantic annotation based on the developed AMIGOS ontology, used to annotate the AMIGOS sensor modalities, is shown in Figure 20, whereas a sample of the AMIGOS ontology annotated concepts in OWL format is listed in Figure 21.

5.3. Semantic Alignment of the Affective States Ontology Models

Additionally, a semantic alignment process based on the base ontology model has been developed to align models that express healthcare data with semantic heterogeneity in terms of domain concepts and data or object properties. The alignment is performed by microservice processes that harmonize the concepts from each domain ontology; in our case, these include the AMIGOS, DEAP, and ACGM ontologies. The ACGM ontology is a third ontology we developed for our proof of concept to evaluate the semantic alignment process. Moreover, in this setting, microservices are designed to extract the ontology concepts from the list of VOs that represent the data expressed by each individual ontology. Three microservices, each aligning one individual model's ontology to the base ontology model, have been developed. The semantic alignment process is illustrated in Figure 22. The process involves mapping the concepts of the semantic models of diverse ontologies into the base ontology model. For example, the VO Blood Volume Pulse (VO_BVP_d) and VO Electromyography (VO_EMG_d) from the DEAP ontology model and VO_BVP_A and VO_EMG_A from the ACGM ontology model express the same type of data with different semantics. The alignment of such data is achieved using microservices that map these inputs, based on semantic similarity processing, into a common model of base ontology instances.

5.4. Evaluation and Discussion

To evaluate the semantic interoperability provisioning features of the proposed semantic mediation model, several aspects have been analyzed. First, experiments have been performed to evaluate the semantic alignment process; this is achieved by analyzing the performance of the learning algorithm on the semantic data models. The experiments have been repeated, and the best results are reported.
Secondly, experiments have also been performed to evaluate the behavior of the semantic objects created in the process, such as VOs and CVOs. The performance of the microservice processes for the semantic interoperability processing tasks, such as the discovery of VO ontologies from the global ontology catalog, semantic alignment, and query processing, has been analyzed.
Based on the semantic alignment of the ontology models, the final mapping of semantic concepts has also been evaluated. The mapping results are expressed in terms of the accuracy of the semantic similarity alignment of the ontologies. A standard method of measuring the performance of ontology alignment tasks has been followed, for which a set of affective health state ontology data has been utilized in the experiments. To judge the performance of the semantic interoperability, the precision, recall, and F-measure are defined by the following equations, where RL_alignment denotes the relevant alignments and RT_alignment the retrieved alignments:
$$P = \frac{1}{n}\sum_{i=1}^{n} P_i = \frac{1}{n}\sum_{i=1}^{n}\frac{\left| R_{L,i} \cap R_{T,i} \right|}{\left| R_{T,i} \right|}$$

$$R = \frac{1}{n}\sum_{i=1}^{n} R_i = \frac{1}{n}\sum_{i=1}^{n}\frac{\left| R_{L,i} \cap R_{T,i} \right|}{\left| R_{L,i} \right|}$$

$$F\text{-}measure = \frac{1}{n}\sum_{i=1}^{n}\frac{2 \cdot P_i \cdot R_i}{P_i + R_i}$$
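The following Python sketch computes these averaged measures over a list of (reference, retrieved) alignment sets. The example correspondences are illustrative only and are not experimental data.

```python
# A small sketch of the averaged precision, recall, and F-measure above.
# Each element of alignment_pairs is a (relevant, retrieved) pair of sets
# of correspondences for one test ontology pair.
def evaluate(alignment_pairs):
    precisions, recalls, f_measures = [], [], []
    for relevant, retrieved in alignment_pairs:
        hits = len(relevant & retrieved)                    # |R_L ∩ R_T|
        p = hits / len(retrieved) if retrieved else 0.0     # precision of pair i
        r = hits / len(relevant) if relevant else 0.0       # recall of pair i
        f = 2 * p * r / (p + r) if (p + r) else 0.0         # F-measure of pair i
        precisions.append(p); recalls.append(r); f_measures.append(f)
    n = len(alignment_pairs)
    return sum(precisions) / n, sum(recalls) / n, sum(f_measures) / n

# Hypothetical correspondences between domain and base ontology concepts.
pairs = [
    ({("deap:GSR", "base:GSR"), ("deap:BVP", "base:BVP")},   # relevant (reference)
     {("deap:GSR", "base:GSR"), ("deap:EEG", "base:ECG")}),  # retrieved (system output)
]
print(evaluate(pairs))  # averaged (precision, recall, F-measure)
```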
The results have been extracted, preprocessed, and prepared with various tools. Protégé [62] has been used to develop the ontology models, and the Jena libraries [53] have been utilized to extract ontology features. For the experiments, we have set up an Apache Jena Fuseki [59] based SPARQL endpoint and RDF repositories. The learning models have been implemented in Python with Keras. Training has been performed on an Intel Core i7 clocked at 3.4 GHz, with 32 GB RAM and an NVIDIA GeForce GTX GPU.
The experimental analysis of the semantic alignment task has been performed on the ontology pairs discussed in Section 4.2.2, which contain data related to affective human health conditions. We divide the ontologies into two groups: the first is used for training and the second for testing. In these settings, a reference ontology from the source group is aligned with the ontologies in the target group, and the F-measure is calculated as the measure of test accuracy. As Figure 23 shows, the testing accuracy obtained on the three groups is reasonably good: 80.24%, 82.66%, and 85.95%, respectively.
Moreover, an experimental analysis has been conducted on the benchmark dataset of ontologies from the Ontology Alignment Evaluation Initiative (OAEI) [45,64], which contains about 110 ontologies. We use 70% of the data for training and 30% for testing. In these settings, the source ontology is aligned with the target ontologies. The ontologies are processed with the Jena framework [53], and the descriptions of the ontology graphs are mined with the method described in Section "Procedure for Learning Representations". Vectors of the ontology entities are built before model assignment. Training of the AuEs model took up to 200 iterations in the first setting and up to 400 iterations in the second. The results of the representation learning experiment with AuEs are shown in Figure 24 in terms of precision, recall, and F-measure, comparing the alignment of the ontology graphs in the two settings of our approach, Set-A and Set-B, which use different numbers of AuE layers. The method reaches a reasonable maximum F-measure of 84% in Set-A and 86% in Set-B. A minimal autoencoder sketch is given below.
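The snippet below is a minimal Keras sketch of autoencoder-based representation learning over entity description vectors. The input dimension, layer sizes, and random placeholder data are assumptions for illustration; they do not reproduce the authors' AuEs architecture or the mined OAEI descriptions.

```python
# A minimal autoencoder sketch in Keras: entity description vectors are
# compressed by an encoder-decoder network trained to reconstruct its input.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

input_dim, encoding_dim = 300, 64  # assumed vector and code sizes

inputs = keras.Input(shape=(input_dim,))
encoded = layers.Dense(128, activation="relu")(inputs)
encoded = layers.Dense(encoding_dim, activation="relu")(encoded)
decoded = layers.Dense(128, activation="relu")(encoded)
decoded = layers.Dense(input_dim, activation="sigmoid")(decoded)

autoencoder = keras.Model(inputs, decoded)   # full reconstruction model
encoder = keras.Model(inputs, encoded)       # encoder used for alignment features
autoencoder.compile(optimizer="adam", loss="mse")

# Random placeholder vectors standing in for mined ontology entity descriptions.
x_train = np.random.rand(1000, input_dim).astype("float32")
# e.g., 200 training iterations, as in the first experimental setting.
autoencoder.fit(x_train, x_train, epochs=200, batch_size=32, verbose=0)

# Compressed representations of the first few entities.
entity_codes = encoder.predict(x_train[:5])
print(entity_codes.shape)  # (5, 64)
```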
Additionally, we examine the discovery time of the semantic virtual object ontology models. In the semantic interoperability implementation setup discussed in Section 4.2.1, virtual and composite virtual objects are discovered by issuing SPARQL query requests to the ontology catalog; a sketch of such a discovery query is given below. In these experiments, the discovery time grows with the number of objects. For CVOs, discovery requires reading the ontology graphs, which in turn requires requesting RDF graphs from the SPARQL endpoints; the growing number of CVO requests therefore increases the CVO discovery time, as can be observed in Figure 25a. Discovery of the VO ontology graphs, which plays a part in semantic composition scenarios, has also been evaluated; here as well, an increase in the number of VOs increases the time required to discover the objects. However, Figure 25b shows that VOs are discovered in less time than CVOs, because the CVO discovery process must additionally check rules involving conditions on the semantic ontology graphs and the semantic dependencies on the VO ontologies.
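The sketch below issues a VO discovery request with SPARQLWrapper against a local Jena Fuseki endpoint. The endpoint URL, dataset name, class name, and property name are assumptions for illustration and do not describe the actual catalog layout.

```python
# A hedged sketch of discovering VO ontologies from an ontology catalog
# exposed through a Jena Fuseki SPARQL endpoint.
from SPARQLWrapper import SPARQLWrapper, JSON

# Assumed local Fuseki dataset hosting the ontology catalog.
endpoint = SPARQLWrapper("http://localhost:3030/ontology-catalog/sparql")
endpoint.setReturnFormat(JSON)
endpoint.setQuery("""
    PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX base: <http://example.org/base-ontology#>

    SELECT ?vo ?modality WHERE {
        ?vo rdf:type      base:VirtualObject ;   # hypothetical VO class
            base:observes ?modality .            # hypothetical property
    } LIMIT 30
""")

# Print each discovered VO and the sensor modality it observes.
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["vo"]["value"], "->", row["modality"]["value"])
```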
The system has also been evaluated with respect to the semantic processing of the microservices by analyzing the execution time of their functional operations (Figure 26). These operations include (1) the semantic annotation (SAN) microservice for semantic annotation of the data, (2) the query processing (QPR) microservice, which involves the discovery of multiple VO and CVO ontologies, and (3) the semantic alignment (SAL) microservice for semantic mapping of ontology models. For each microservice task, instances have been created to accept service requests, and the average execution time of one and ten instances of each service is illustrated in Figure 26. The SAN execution time is lower than that of the other microservices but depends heavily on the volume of data; in this case, a single instance annotated more than 500 tuples. The QPR execution time depends on querying the SPARQL endpoint; the query processing time shown for a single service instance corresponds to an average discovery of 30 objects. The SAL service requires additional execution time because the alignment process includes sub-processes for model selection and execution to align the concepts or entities of the selected ontologies.

6. Conclusions

Nowadays, healthcare systems are continuously evolving from keeping simple records of patient observations to much more sophisticated services based on modern electronic health records. These records contain not only symptoms and clinical recommendations from doctors but also physiological healthcare conditions collected over time from diverse sources such as wearable devices and environmental sensors. Achieving data interoperability in such a scenario has become considerably more complicated. This study addresses the problem by focusing on the semantic interoperability aspects of healthcare services to mitigate the heterogeneity of information services and foster data sharing and integration. The proposed work contributes a semantic mediation model that provides a novel solution for data interoperability in heterogeneous healthcare applications. The model comprises several features, including semantic annotation of healthcare data collected from different sources, a semantic alignment mechanism based on deep representation learning for semantic mapping, and integration of healthcare concepts. The work also contributes a base ontology model that enables the mapping of healthcare concepts from diverse ontology models, and it provides a Web of Objects (WoO) based ontology catalog infrastructure for deploying semantic ontologies that support interoperable data management and processing in heterogeneous healthcare environments. The semantic mediation process has been demonstrated in an experimental setting, and details on the utilization of the proposed model have been furnished.

Author Contributions

The research work was conducted in collaboration with all authors. S.A. and I.C. defined the research theme and designed the proposed model. S.A. implemented the prototype and wrote the article. S.A. and I.C. discussed and evaluated the prototype outcomes. Both authors contributed to reading and validating the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2019R1F1A1063720).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Haluza, D.; Jungwirth, D. ICT and the future of healthcare: Aspects of pervasive health monitoring. Inform. Health Soc. Care 2018, 43, 1–11. [Google Scholar] [CrossRef] [PubMed]
  2. Iroju, O.; Soriyan, A.; Gambo, I.; Olaleke, J. Interoperability in Healthcare: Benefits, Challenges and Resolutions. Int. J. Innov. Appl. Stud. 2013, 3, 2028–9324. [Google Scholar]
  3. Interoperability: The Key To The Future Health Care System: Interoperability will bind together a wide network of real-time, life-critical data that not only transform but become health care. Health Aff. 2005. [CrossRef]
  4. Dixon, B.E.; Vreeman, D.J.; Grannis, S.J. The long road to semantic interoperability in support of public health: Experiences from two states. J. Biomed. Inform. 2014, 49, 3–8. [Google Scholar] [CrossRef] [PubMed]
  5. Arvanitis, T.N. Semantic interoperability in healthcare. Stud. Health Technol. Inform. 2014, 202, 5–8. [Google Scholar] [PubMed]
  6. Hu, W.; Qu, Y.; Cheng, G. Matching large ontologies: A divide-and-conquer approach. Data Knowl. Eng. 2008, 67, 140–160. [Google Scholar] [CrossRef]
  7. Adel, E.; El-Sappagh, S.; Barakat, S.; Elmogy, M. Ontology-based electronic health record semantic interoperability: A survey. In U-Healthcare Monitoring Systems; Elsevier: Amsterdam, The Netherlands, 2019; pp. 315–352. [Google Scholar]
  8. ISO/IEC 30182: 2017-Smart City Concept Model—Guidance for Establishing a Model for Data Interoperability. Available online: https://www.iso.org/standard/53302.html (accessed on 12 April 2018).
  9. Serrano, M.; Barnaghi, P.; Carrez, F.; Cousin, P.; Vermesan, O.; Friess, P. Internet of Things, IoT Semantic Interoperability: Research Challenges, Best Practices, Recommendations and Next Steps, European Research Cluster on the Internet of Things. 2015. Available online: http://www.eglobalmark.com/wp-content/uploads/2016/06/2015-03-IoT-Semantic-Interoperability-Research-Challenges-Best-Practices-Recommendations-and-Next-Steps.pdf (accessed on 20 August 2019).
  10. Yang, J.J.; Li, J.; Mulder, J.; Wang, Y.; Chen, S.; Wu, H.; Wang, Q.; Pan, H. Emerging information technologies for enhanced healthcare. Comput. Ind. 2015, 69, 3–11. [Google Scholar] [CrossRef]
  11. Miah, S.J.; Hasan, J.; Gammack, J.G. On-Cloud Healthcare Clinic: An e-health consultancy approach for remote communities in a developing country. Telemat. Inform. 2017. [Google Scholar] [CrossRef]
  12. Miah, S.J.; Hasan, N.; Hasan, R.; Gammack, J. Healthcare support for underserved communities using a mobile social media platform. Inf. Syst. 2017, 66, 1–12. [Google Scholar] [CrossRef]
  13. Wan, J.; Zou, C.; Ullah, S.; Lai, C.F.; Zhou, M.; Wang, X. Cloud-Enabled wireless body area networks for pervasive healthcare. IEEE Netw. 2013, 27, 56–61. [Google Scholar] [CrossRef]
  14. Duan, L.; Street, W.N.; Xu, E. Healthcare information systems: Data mining methods in the creation of a clinical recommender system. Enterp. Inf. Syst. 2011, 5, 169–181. [Google Scholar] [CrossRef]
  15. Batra, U.; Sachdeva, S.; Mukherjee, S. Implementing healthcare interoperability utilizing SOA and data interchange agent. Health Policy Technol. 2015, 4, 241–255. [Google Scholar] [CrossRef]
  16. Lee, J.A.; Choi, M.; Lee, S.A.; Jiang, N. Effective behavioral intervention strategies using mobile health applications for chronic disease management: A systematic review. BMC Med. Inform. Decis. Mak. 2018, 18, 12. [Google Scholar] [CrossRef] [PubMed]
  17. Mano, L.Y.; Faiçal, B.S.; Nakamura, L.H.V.; Gomes, P.H.; Libralon, G.L.; Meneguete, R.I.; Filho, G.P.R.; Giancristofaro, G.T.; Pessin, G.; Krishnamachari, B.; et al. Exploiting IoT technologies for enhancing Health Smart Homes through patient identification and emotion recognition. Comput. Commun. 2016, 89–90, 178–190. [Google Scholar] [CrossRef]
  18. Shu, L.; Xie, J.; Yang, M.; Li, Z.; Li, Z.; Liao, D.; Xu, X.; Yang, X. A Review of Emotion Recognition Using Physiological Signals. Sensors 2018, 8, 2074. [Google Scholar] [CrossRef]
  19. Kim, A.Y.; Jang, E.H.; Kim, S.; Choi, K.W.; Jeon, H.J.; Yu, H.Y.; Byun, S. Automatic detection of major depressive disorder using electrodermal activity. Sci. Rep. 2018, 8, 17030. [Google Scholar] [CrossRef]
  20. Garcia-Ceja, E.; Riegler, M.; Nordgreen, T.; Jakobsen, P.; Oedegaard, K.J.; Tørresen, J. Mental health monitoring with multimodal sensing and machine learning: A survey. Pervasive Mob. Comput. 2018, 51, 1–26. [Google Scholar] [CrossRef]
  21. Noura, M.; Atiquzzaman, M.; Gaedke, M. Interoperability in Internet of Things: Taxonomies and Open Challenges. Mob. Netw. Appl. 2019, 24, 796–809. [Google Scholar] [CrossRef]
  22. Tzirakis, P.; Trigeorgis, G.; Nicolaou, M.A.; Schuller, B.W.; Zafeiriou, S. End-to-End Multimodal Emotion Recognition Using Deep Neural Networks. IEEE J. Sel. Top. Signal Process. 2017, 11, 1301–1309. [Google Scholar] [CrossRef] [Green Version]
  23. Zualkernan, I.; Aloul, F.; Shapsough, S.; Hesham, A.; El-Khorzaty, Y. Emotion recognition using mobile phones. Comput. Electr. Eng. 2017, 60, 1–13. [Google Scholar] [CrossRef]
  24. Y.4452: Functional Framework of Web of Objects. Available online: http://www.itu.int/rec/T-REC-Y.4452-201609-P (accessed on 24 January 2017).
  25. Pan, E.; Walker, J.; Johnston, D.; Adler-Milstein, J.; Bates, D.W.; Middleton, B. The Value of Health Care Information Exchange and Interoperability. Health Aff. 2005. [Google Scholar] [CrossRef]
  26. Jaulent, M.C.; Leprovost, D.; Charlet, J.; Choquet, R. Semantic interoperability challenges to process large amount of data perspectives in forensic and legal medicine. J. Forensic Leg. Med. 2018, 57, 19–23. [Google Scholar] [CrossRef] [PubMed]
  27. Khan, W.A.; Khattak, A.M.; Hussain, M.; Amin, M.B.; Afzal, M.; Nugent, C.; Lee, S. An adaptive semantic based mediation system for data interoperability among health information systems systems-level quality improvement. J. Med. Syst. 2014, 38, 28. [Google Scholar] [CrossRef] [PubMed]
  28. Adel, E.; El-Sappagh, S.; Barakat, S.; Elmogy, M. Distributed electronic health record based on semantic interoperability using fuzzy ontology: A survey. Int. J. Comput. Appl. 2018, 40, 223–241. [Google Scholar] [CrossRef]
  29. Mezghani, E.; Exposito, E.; Drira, K.; da Silveira, M.; Pruski, C. A Semantic Big Data Platform for Integrating Heterogeneous Wearable Data in Healthcare. J. Med. Syst. 2015, 39, 185. [Google Scholar] [CrossRef] [PubMed]
  30. Otero-Cerdeira, L.; Rodríguez-Martínez, F.J.; Gómez-Rodríguez, A. Ontology matching: A literature review. Expert Syst. Appl. 2015, 42, 949–971. [Google Scholar] [CrossRef]
  31. Adel, E.; El-Sappagh, S.; Barakat, S.; Elmogy, M.; Adel, E.; El-Sappagh, S.; Barakat, S.; Elmogy, M. A unified fuzzy ontology for distributed electronic health record semantic interoperability. In U-Healthcare Monitoring Systems; Elsevier: Amsterdam, The Netherlands, 2018. [Google Scholar]
  32. Sensor Ontology 2009-Semantic Sensor Network Incubator Group. Available online: https://www.w3.org/2005/Incubator/ssn/wiki/SensorOntology2009 (accessed on 10 February 2019).
  33. Underbrink, A.; Witt, K.; Stanley, J.; Mandl, D. Autonomous Mission Operations for Sensor Webs. In American Geophysical Union Fall Meeting 2008, Abstract IN33C-05; American Geophysical Union: Washington, DC, USA, 2008. [Google Scholar]
  34. Cox, S.J.D. Ontology for observations and sampling features, with alignments to existing models. Semant. Web 2016, 8, 453–470. [Google Scholar] [CrossRef]
  35. oneM2M-oneM2M Ontologies. Available online: http://www.onem2m.org/technical/onem2m-ontologies (accessed on 8 February 2019).
  36. Lambrix, P.; Tan, H. SAMBO—A system for aligning and merging biomedical ontologies. J. Web Semant. 2006, 4, 196–206. [Google Scholar] [CrossRef]
  37. Nagy, M.; Vargas-Vera, M.; Motta, E. DSSim-ontology mapping with uncertainty. In Proceedings of the 1st International Workshop on Ontology Matching (OM-2006), Athens, GA, USA, 5 November 2006. [Google Scholar]
  38. Li, J.; Tang, J.; Li, Y.; Luo, Q. RiMOM: A Dynamic Multistrategy Ontology Alignment Framework. IEEE Trans. Knowl. Data Eng. 2009, 21, 1218–1232. [Google Scholar]
  39. Jean-Mary, Y.R.; Shironoshita, E.P.; Kabuka, M.R. Ontology matching with semantic verification. J. Web Semant. 2009, 7, 235–251. [Google Scholar] [CrossRef] [Green Version]
  40. OHDSI–Observational Health Data Sciences and Informatics. Available online: https://www.ohdsi.org/ (accessed on 30 April 2019).
  41. Gardner, D.; Knuth, K.H.; Abato, M.; Erde, S.M.; White, T.; DeBellis, R.; Gardner, E.P. Common Data Model for Neuroscience Data and Data Model Exchange. J. Am. Med. Inform. Assoc. 2001, 8, 17–33. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  42. Common Data Model (CDM) Specification. In The Book of OHDSI; Chapter 4; Observational Health Data Sciences and Informatics: New York, NY, USA, 2019.
  43. NCI Thesaurus (NCIt). Available online: https://ncit.nci.nih.gov/ncitbrowser/ (accessed on 16 March 2019).
  44. SNOMED CT Terminologies. Available online: http://bioportal.bioontology.org/ontologies/SNOMEDCT?p=classes&conceptid=285854004 (accessed on 22 March 2019).
  45. Mao, M.; Peng, Y.; Spring, M. Ontology Mapping: As a Binary Classification Problem. In Proceedings of the 2008 Fourth International Conference on Semantics, Knowledge and Grid, Beijing, China, 3–5 December 2008; pp. 20–25. [Google Scholar]
  46. Ngo, D.H.; Bellahsene, Z.; Ngo, D. YAM++: A multi-strategy based approach for Ontology matching task. In Proceedings of the 18th International Conference on Knowledge Engineering and Knowledge Management, Galway, Ireland, 8–12 October 2012. [Google Scholar]
  47. Doan, A.; Madhavan, J.; Domingos, P.; Halevy, A. Ontology Matching: A Machine Learning Approach. In Handbook on Ontologies; Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
  48. Zhang, Y.; Wang, X.; Lai, S.; He, S.; Liu, K.; Zhao, J.; Lv, X. Ontology Matching with Word Embeddings. In Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data. NLP-NABD 2014, CCL 2014; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar]
  49. Kolyvakis, P.; Kalousis, A.; Kiritsis, D. DeepAlignment: Unsupervised Ontology Matching with Refined Word Vectors. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers); Association for Computational Linguistics: Stroudsburg, PA, USA, 2018; pp. 787–798. [Google Scholar]
  50. Hinton, G.E. Learning multiple layers of representation. Rev. Trends Cogn. Sci. 2007, 11, 428–434. [Google Scholar] [CrossRef] [PubMed]
  51. Kibria, M.G.; Ali, S.; Jarwar, M.A.; Kumar, S.; Chong, I. Logistic Model to Support Service Modularity for the Promotion of Reusability in a Web Objects-Enabled IoT Environment. Sensors 2017, 17, 2180. [Google Scholar] [CrossRef] [PubMed]
  52. Kolyvakis, P.; Kalousis, A.; Smith, B.; Kiritsis, D. Biomedical ontology alignment: An approach based on representation learning. J. Biomed. Semant. 2018, 9, 21. [Google Scholar] [CrossRef] [PubMed]
  53. Apache Jena. A Free and Open Source Java Framework for Building Semantic Web and Linked Data Applications. Available online: http://jena.apache.org/ (accessed on 26 March 2017).
  54. Huang, J.; Dang, J.; Vidal, J.M.; Huhns, M.N. Ontology Matching Using an Artificial Neural Network to Learn Weights. Available online: https://pdfs.semanticscholar.org/cd6b/c9913f067968f113762febab025437d8dcaa.pdf (accessed on 03 June 2018).
  55. Wang, P.; Xu, B. LILY: The Results for the Ontology Alignment Contest OAEI 2007. In Proceedings of the 2nd International Workshop on Ontology Matching (OM-2007) Collocated with the 6th International Semantic Web Conference (ISWC-2007) and the 2nd Asian Semantic Web Conference (ASWC-2007), Busan, Korea, 11 November 2007. [Google Scholar]
  56. Bengio, Y.; Courville, A.; Vincent, P. Unsupervised Feature Learning and Deep Learning: A Review and New Perspectives. Available online: https://docs.huihoo.com/deep-learning/Representation-Learning-A-Review-and-New-Perspectives-v1.pdf (accessed on 20 August 2018).
  57. Hinton, G.E.; Zemel, R.S. Autoencoders, Minimum Description Length and Helmholtz Free Energy. In Proceedings of the 6th International Conference on Neural Information Processing Systems, Denver, CO, USA, 29 November–2 December 1993. [Google Scholar]
  58. Cui, L.; Zhang, D.; Liu, S.; Chen, Q.; Li, M.; Zhou, M.; Yang, A. Learning Topic Representation for SMT with Neural Networks. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers); Association for Computational Linguistics: Stroudsburg, PA, USA, 2014; pp. 133–143. [Google Scholar]
  59. Apache Jena-Apache Jena Fuseki. Available online: https://jena.apache.org/documentation/fuseki2/ (accessed on 27 February 2018).
  60. Miranda-Correa, J.A.; Abadi, M.K.; Sebe, N.; Patras, I. AMIGOS: A Dataset for Affect, Personality and Mood Research on Individuals and Groups. IEEE Trans. Affect. Comput. 2017. [Google Scholar] [CrossRef]
  61. Koelstra, S.; Muhl, C.; Soleymani, M.; Lee, J.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. DEAP: A database for emotion analysis; Using physiological signals. IEEE Trans. Affect. Comput. 2012, 3, 18–31. [Google Scholar] [CrossRef]
  62. Protégé. A Free, Open-Source Ontology Editor and Framework for Building Intelligent Systems. Available online: https://protege.stanford.edu/ (accessed on 9 February 2018).
  63. NRC Emotion Lexicon. Available online: https://saifmohammad.com/WebPages/NRC-Emotion-Lexicon.htm (accessed on 4 May 2019).
  64. Ontology Alignment Evaluation Initiative: Home. Available online: http://oaei.ontologymatching.org/ (accessed on 17 January 2019).
Figure 1. A view of the data interoperability in heterogeneous healthcare service environments.
Figure 2. High level design of interoperable healthcare services with the Web of Objects (WoO) model.
Figure 3. Representation model (R-M) of the Virtual Object (VO) to express Real World Object (RWO) and other information.
Figure 4. Semantic mediation model in heterogeneous healthcare service environments.
Figure 5. Developed ontology models of affective human health conditions.
Figure 6. Base ontology model conceptualization.
Figure 7. Flow chart of the semantic annotation algorithm.
Figure 8. Ontology graphs data descriptions.
Figure 9. Stepwise semantic alignment process.
Figure 10. Representation of the learning model.
Figure 11. Semantic interoperability processing with the base ontology catalog.
Figure 12. (a) A model of CDM as an Interoperable Shared Data Model (ISDM) with transformation process to shared data representation along with standard vocabulary, core and meta schemas. (b) An example of transformation to the Interoperable Shared Data Model (ISDM) through extracting entities from the domain-specific data model with their metadata description and their translation from the concepts of a domain to the generic entities.
Figure 13. Semantic interoperability implementation setup.
Figure 14. AMIGOS semantic ontology model.
Figure 15. DEAP semantic ontology model.
Figure 16. Base semantic ontology model.
Figure 17. Base ontology feature inputs to express data from multiple ontologies.
Figure 18. Semantic data mapping.
Figure 19. Code example of semantic annotation with the base ontology model through shared data mapping.
Figure 20. Ontology annotation mapping with the base ontology model.
Figure 21. Semantic annotation with the AMIGOS ontology model.
Figure 22. Semantic alignment of ontology models.
Figure 23. Semantic ontology alignment accuracy score on affective states ontologies.
Figure 24. Semantic ontology alignment accuracy on external data.
Figure 25. (a) CVOs semantic ontologies discovery, (b) VOs semantic ontologies discovery.
Figure 26. Microservices execution time analysis.
