Article

Ontology Merging Using the Weak Unification of Concepts

Department of Software Science, Tallinn University of Technology, 19086 Tallinn, Estonia
* Author to whom correspondence should be addressed.
Big Data Cogn. Comput. 2024, 8(9), 98; https://doi.org/10.3390/bdcc8090098
Submission received: 13 May 2024 / Revised: 17 August 2024 / Accepted: 23 August 2024 / Published: 27 August 2024
(This article belongs to the Special Issue Recent Advances in Big Data-Driven Prescriptive Analytics)

Abstract

Knowledge representation and manipulation in knowledge-based systems typically rely on ontologies. The aim of this work is to provide a novel weak unification-based method and an automatic tool for OWL ontology merging to ensure well-coordinated task completion in the context of collaborative agents. We employ a technique based on integrating string and semantic matching with the additional consideration of the structural heterogeneity of concepts. The tool is implemented in Prolog and makes use of its inherent unification mechanism. Experiments were run on an OAEI data set with a matching accuracy of 60% across 42 tests. Additionally, we ran the tool on several ontologies from the domain of robotics, producing a small, but generally accurate, set of matched concepts. These results clearly show a good capability of the method and the tool to match semantically similar concepts. The results also highlight the challenges related to the evaluation of ontology-merging algorithms without a definite ground truth.

1. Introduction

The interoperability between knowledge-based decision systems where autonomous agents have heterogeneous individual knowledge, partially built-in and partially acquired by learning and sharing with other agents during operation, presumes merging these knowledge fragments into a coherent whole when accomplishing cooperative missions. The aim of this work is to provide an algorithm and automatic tool to consolidate the knowledge, remove discrepancies, and optimize the knowledge bases of agents to ensure well-coordinated task completion in the knowledge context of their mission goals and constraints.
The prevailing knowledge representations and manipulation techniques in autonomous agent systems rely on ontologies and ontology operations. The key ontology operation we are focusing on is ontology merging, an operation where the concepts of two or more ontologies are compared based on some similarity measure and merged when their similarity exceeds some predefined threshold.
Applying different similarity measures such as the unifiability of concept constituent terms, similarity as an equivalence relation, similarity based on semantic distance, and many others could provide rather different and incomparable results. To address this challenge, this paper provides an integrated approach by combining multiple metrics and exploiting mainstream knowledge representation and manipulation frameworks such as RDF and OWL. This allows the easy porting of ontology operation results and their uniform usage with other knowledge platforms. The main novelty of this work is the elaboration of a weak unification principle for ontology merging and its integration with existing techniques implemented in RDF/OWL frameworks. Weak unification, inspired by ‘standard’ unification as applied in resolution-based reasoning methods, has been relaxed to provide better flexibility in determining whether two terms—or in our case, two concepts—can be unified when using them in SLD resolution, i.e., when reasoning about clauses whose semantics are defined using ontologies. Our method employs a structural approach where multi-criteria weighted similarity metrics are used to assess the unifiability of concepts by their structural composition and elements such as relations and their values. The comparison of individual values is based on string and semantic matching, making use of the WordNet lexical database and string similarity measures.
While in natural language, information can be successfully exchanged between participants even if they have a different or incomplete understanding of the same concept, the knowledge representation of the same concept may differ in terms of its structure, terminology, and level of detail, which makes its proper interpretation in artificial systems drastically more difficult. As such, our approach to ontology merging permits a degree of difference between semantically close concepts even if they substantially differ in syntactic representation. This is achieved by a multi-criteria minimal confidence threshold and context-sensitive meta-parameters.
We present our findings from two different perspectives. Firstly, we provide a quantitative validation based on the Ontology Alignment Evaluation Initiative (OAEI) 2016 benchmark test. The test data are designed so that each new test introduces some form of variation to the original reference ontology. Variations include changes in the value names, the structure of the concept, and the ontology in general. Secondly, we analyze the results received from merging actual ontologies from the domain of robotics. These ontologies vary in terms of scope, level of detail, and structure, and unlike OAEI benchmark tests, no ground truth is provided in advance, which makes the method evaluation based on them less conclusive.
The rest of the article is structured as follows. In Section 2, we introduce the core concepts of ontology theory and ontology operations. Section 3 highlights related work and the state of the art in the domain of ontology operations and positions our approach in that context. Section 4 presents the method developed in this work. Section 5 presents the experimental results to validate the method. The paper concludes with a discussion of the main results, their validity, and open questions for future work.

2. Ontologies and Their Operations

Ontology in the context of computer science is typically understood as a type of formal knowledge representation used to hold domain knowledge. This knowledge representation is usually expressed in the form of classes, properties, and relations between class members [1].
Euzenat and Shvaiko [2] have provided a formal definition for an ontology as a tuple o = ⟨ C , I , R , T , V , ⊑ , ⊥ , ∈ , = ⟩ where
  • C is the set of classes;
  • I is the set of individuals;
  • R is the set of relations;
  • T is the set of data types;
  • V is the set of values ( C , I , R , T , V being pairwise disjoint);
  • ⊑ is a relation on ( C × C ) ∪ ( R × R ) ∪ ( T × T ) called specialisation;
  • ⊥ is a relation on ( C × C ) ∪ ( R × R ) ∪ ( T × T ) called exclusion;
  • ∈ is a relation over ( I × C ) ∪ ( V × T ) called instantiation;
  • = is a relation over I × R × ( I ∪ V ) called assignment.
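To make the tuple definition above concrete, the following minimal sketch encodes the nine components as plain sets. This is an illustration only, with hypothetical names; the paper's tool is implemented in Prolog, not Python.

```python
from dataclasses import dataclass, field

# Illustrative encoding of the tuple <C, I, R, T, V, specialisation,
# exclusion, instantiation, assignment>. All field names are hypothetical.
@dataclass
class Ontology:
    classes: set = field(default_factory=set)         # C
    individuals: set = field(default_factory=set)     # I
    relations: set = field(default_factory=set)       # R
    datatypes: set = field(default_factory=set)       # T
    values: set = field(default_factory=set)          # V
    specialisation: set = field(default_factory=set)  # pairs (c1, c2) meaning c1 is-a c2
    exclusion: set = field(default_factory=set)       # pairs of mutually exclusive entities
    instantiation: set = field(default_factory=set)   # (individual, class) or (value, datatype)
    assignment: set = field(default_factory=set)      # (individual, relation, individual-or-value)

o = Ontology()
o.classes |= {"Robot", "Agent"}
o.specialisation.add(("Robot", "Agent"))  # Robot is a specialisation of Agent
o.individuals.add("r2d2")
o.instantiation.add(("r2d2", "Robot"))    # r2d2 is an instance of Robot
```

Note that the pairwise disjointness of C, I, R, T, and V is a constraint on the contents of these sets, not something the sketch enforces.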
Since the motivation of the current research is to provide tools for semantic reasoning on knowledge that is interpretable using ontologies, we consider ontologies as logic theories and apply logic programming and constraint logic programming as a natural framework for representing and reasoning about them. The language of logic consists of individuals, classes, functions, relations, and axioms. The exact list of logic predicates and modalities changes depending on the specific logic language one adopts. Usually, an ontology is specified in description logic (DL), an efficiently decidable subset of first-order logic (FOL), though some logic programming languages such as SWI-Prolog provide the means for expressing constructs that raise the abstraction level of ontology-based semantic reasoning to higher-order constructs such as predicate and function variables, enabling the construction and application of domain-agnostic meta-rules for semantic reasoning and meta-interpretive learning [3]. However, in the rest of this paper, we restrict ourselves to ontology constructs expressible in FOL and keep open the possibility of later expansion to higher-order logic.

2.1. Taxonomy of Ontologies

The types of ontologies can be differentiated in terms of two general perspectives: (1) by the degree of formality and (2) by the hierarchy of the ontologies [2].
With regards to the degree of formality, the least formal type of ontology is a collection of tags or folksonomies describing some body of knowledge (e.g., annotating pictures on Flickr). Conversely, the most formal type of ontology would be expressed in some form of description logic, implemented as a special-purpose ontology language (such as OWL) [2]. The motivation for using a formal language to express an ontology is that it prescribes the possible ways of interpretation, thus permitting unambiguous interpretation and automatic reasoning on the basis of the syntactic and semantic rules of that language.
With regard to the hierarchy of ontologies, differences can be made between foundational, upper-level, and domain ontologies. Additional categorizations such as reference ontology and application-level ontology are also used.
Foundational ontologies are ontologies that define fundamental concepts (e.g., concepts such as endurant and perdurant, which denote concepts we can perceive in their entirety at any given time and concepts that only partially exist at any given time) used in other ontologies. Foundational ontologies are complemented by upper-level ontologies, which define non-foundational commonplace concepts (such as vehicles and people) that are typically also used by other ontologies [2]. For this reason, both foundational and upper-level ontologies can be seen as horizontal ontologies, as they aim to cover a broad spectrum of concepts on an abstract level. The distinction between these two types of ontologies is not always clear. Examples of general-purpose foundational or upper-level ontologies are the Unified Foundational Ontology (UFO) [4] and the Suggested Upper Merged Ontology (SUMO) [5].
Domain-level ontologies are ontologies that encompass concepts relevant to a specific domain (e.g., robotics, bibliography, and medicine). For that reason, they can be seen as vertical ontologies [2].

2.2. Ontology Operations

The term ontology operations refers to a number of manipulations that are conducted to enhance the information contained in the ontologies. Usually, this involves a comparison of one or more ontologies, though not always. Maroun distinguishes the following ontology operations: mapping, alignment, merging, annotation, matching, and integration [6]. However, it should be noted that the definitions of these operations are not entirely univocal.
Mapping is a general term for “a formal expression describing the semantic relationship between two (or more) concepts belonging to two (or more) different ontologies” [6]. This semantic relationship could be, for instance, an equivalence, vertical (e.g., X is a child of Y), or horizontal (e.g., X is part of Y) relationship. Choi et al. distinguish additional contexts for ontology mapping: (1) ontology mapping between an integrated global ontology and local ontologies, (2) ontology mapping between local ontologies, and (3) ontology mapping in ontology merging and alignment [7]. In contrast, Euzenat and Shvaiko call this operation (i.e., the establishment of a relationship between concepts of different ontologies) correspondence and describe mapping as the directed version of alignment, strictly expressing an equivalence relationship [2].
Alignment is a “set of correspondences between two (or more) ontologies in the same domain or in related fields. These correspondences are called ‘mappings’” [6]. As a result of ontology alignment, source ontologies “become consistent with each other, but are kept separate” [7].
Merging is the combination of mapped concepts into one, creating a new ontology from source ontologies [6]. This merged ontology contains the knowledge from all source ontologies but leaves the knowledge unchanged [2,7].
Annotation is the process of creating metadata using ontology as a vocabulary [6].
Matching denotes finding correspondences between linguistically related concepts from different ontologies, which can symbolize equivalence or any other semantic relationship [6]. A more abstract description is provided by Doan et al. as “the problem of finding the semantic mappings between two given ontologies” [8]. Euzenat and Shvaiko call matching the process of “finding relationships or correspondences between entities of different ontologies” [2].
Integration: Maroun provides three definitions of ontology integration: the process of (1) building a new ontology by reusing other already-available ontologies, (2) building an ontology by merging several ontologies into one that unifies them all, or (3) building an application using one or more ontologies [6]. To differentiate this operation from ontology merging, the focus of integration is on the generation of a new functional ontology, whereas merging entails the mapping of relationships between ontologies and the reproduction of their results. This is also expressed by Euzenat and Shvaiko, noting that in ontology integration, “contrary to merging, the first ontology is unaltered while the second one is modified” [2].

Interpretation of Ontology Operations

The definitions of different ontology operations are not always fixed. Often, the terms have several meanings depending on the context of their use (e.g., mapping and integration). The definitions may be interdependent (e.g., mapping is an expression of relationships that are discerned from the process of alignment, whereas alignment is a set of correspondences called mappings). There is also interchangeable use of the terms on the general ontology level and on the level of individual concepts (e.g., mapping expresses relationships between concepts, but it can also be said that ontologies are being mapped). For the sake of clarity, we provide our own interpretation for key ontology operations.
Mapping occurs between individual concepts and is solely concerned with the equivalence relationship, as we attempt to identify concepts from two different ontologies that are similar to one another.
Matching occurs between individual concepts, resulting in a measure of how similar (equivalent) those two concepts are. The methodology for matching in this work is primarily based on semantic and string matching.
Alignment occurs between ontologies and is the product of matching every concept in both source ontologies, either to an equivalent concept in the other ontology or to none at all if no equivalent concept for a particular concept exists.
Merging occurs between ontologies after mapping and matching have identified equivalent concepts. As a result of merging, a new ontology is generated, where matched concepts are presented as a single concept. We also employ an additional specialization step, where this single concept only holds those properties and relations that are similar according to some form of metric, whereas separate sub-concepts are generated for those properties and relations that are different. If no equivalence is identified for a particular concept, it is generated as a separate concept with no additional relations.

2.3. Heterogeneity of Ontologies

One of the major challenges of implementing ontology operations is the different ways an ontology can be represented. Euzenat and Shvaiko [2] distinguish several types of heterogeneity as follows.
  • Syntactic heterogeneity, which is caused by modeling the ontology in different ontology languages.
  • Terminological heterogeneity, which is caused by variations in how concepts are specified. This may occur when ontologies are written on the basis of different natural languages, but also when synonyms or different phrasing is used in the same natural language.
  • Conceptual heterogeneity, which is caused by differences in how knowledge of the same domain has been modeled in different ontologies. These differences may occur in coverage (two ontologies have slightly different domains that partially overlap), granularity (two ontologies have a different level of detail of the same domain) and perspective (two ontologies describe the same domain with the same level of detail but from a different perspective).
  • Semiotic heterogeneity, which is caused by the interpretation of concepts by people. Heterogeneity in this context refers to how entities with the same semantic interpretation can be interpreted differently by humans depending on the context of their use.
In this research, we are mainly concerned with terminological heterogeneity and conceptual heterogeneity. We assume that concept names and other relation values are either written in English or are formulated on the basis of that. We aim to solve the terminological heterogeneity resulting from differences in the use of words or phrasing. As for conceptual heterogeneity, we aim to mitigate variations in how concepts are structured, as well as in their level of detail.

3. Related Work

Most ontology operations rely on the notion of concept similarity. Euzenat and Shvaiko propose three broad categories for how to measure the similarity of two concepts [2].
Name-based techniques, which analyze the name, comment or other linguistic data of the concept. A difference is made between methods that purely use character strings and those that employ some type of linguistic knowledge.
Internal structure-based techniques, which additionally take into account the structure of the concepts, such as the value and data type of their properties or the comparison of the concept to other concepts it is related to.
Extensional techniques, which measure similarity on the basis of individuals (instances) of concepts. For example, if two concepts depicting the notion of a book share an identical set of book titles, we can surmise that these two concepts are equivalent. Extensional techniques are further distinguished into three distinct cases:
  • Comparing instances that are common between two concepts.
  • Instance identification. This approach is used when an explicit common set of instances does not exist, but it is known in advance that the instances are the same (e.g., when integrating two databases that contain the same information).
  • Disjoint extension comparison. In this case, rather than directly using a common data set for both ontologies, approximation or statistical measures are used to compare concept extensions. This could be, for instance, assessing the property value of instances using some form of statistical measurement (e.g., mean, variance, etc.) to determine whether two disjoint subsets of instances are extensions of the same concept.

3.1. String and Semantic Matching

A common approach used in ontology matching is analyzing the string values of the concepts that are being compared. As this can be performed equally with both concept names and property values, we do not strictly distinguish a name-based approach from an internal structure-based approach.
By string matching, we mean any kind of technique that analyzes the structural similarity of two strings without considering the meaning of those strings in natural language. This can be purely mechanical (e.g., Levenshtein distance [9]) or on the basis of some larger corpus of text (e.g., calculating cosine similarity using GloVe [10]).
By semantic matching, we mean that the similarity of two strings is evaluated based on the natural language meaning of those words (for example, by comparing whether two English-language words are synonyms). This is typically performed with the help of some lexical database, such as WordNet for English, that links words based on their meaning, semantic and grammatical relations [11].
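The two families of techniques can be illustrated with a short sketch. This is not the paper's Prolog implementation: Levenshtein distance stands for the string side, and a tiny hand-written synonym table stands in for a WordNet lookup on the semantic side.

```python
# Illustrative sketch of string vs. semantic matching (hypothetical code).
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def string_similarity(a: str, b: str) -> float:
    # Normalized to [0, 1]; 1.0 means identical strings.
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

# Toy stand-in for a lexical database such as WordNet.
SYNONYMS = {frozenset({"car", "automobile"}), frozenset({"person", "human"})}

def semantic_similarity(a: str, b: str) -> float:
    # 1.0 for identical words or known synonyms, else 0.0.
    if a == b or frozenset({a, b}) in SYNONYMS:
        return 1.0
    return 0.0
```

Note how the two measures can disagree: `string_similarity("car", "automobile")` is low while `semantic_similarity` reports a synonym match, and vice versa for structurally similar but semantically unrelated words.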
String and semantic matching is extensively used, for example, in Robin and Uma [12], which employs a mixed strategy of both, choosing the best outcome from several metrics. Single-metric approaches also exist, such as in the work of Stoilos et al., which focuses solely on developing a single string-matching technique for ontology alignment [13]. String or semantic matching can also serve as a complementary technique to some other approach, for example, in Karimi and Kamandi [14], where trigram word similarity is used as part of an inductive logic programming approach to ontology alignment.
An obvious drawback of using string and semantic matching is that it assumes that the names and property values of a concept are in some form meaningful in natural language. The unconstrained use of string matching is prone to false positives (e.g., words such as correct and incorrect are structurally very similar, despite being semantically opposite to one another). Semantic matching may require the extensive pre-processing of the string (such as identifying and separating individual words from a longer string) and be further complicated by identical words having different meanings.

3.2. Formal Concept Analysis

One major extensional technique mentioned by Euzenat and Shvaiko is Formal Concept Analysis (FCA) [2], which deserves additional attention as it is also integrated into our approach. From a more general perspective, FCA is a data analysis technique that enables one to analyze the relationship between a set of objects and a set of attributes. The input data of FCA are typically represented as a cross table where the set of objects and the set of attributes are the dimensions and the presence of an attribute in an object is marked. Using these data, FCA produces (1) groups that represent “natural” (formal) concepts based on the attributes of the data and (2) a collection of implications describing specific dependencies that exist in the data [15]. It should be noted that concept in the context of FCA does not bear the same meaning as concept in the context of ontologies. In past literature, Stumme and Maedche [16] distinguished the two terms as formal concept and (ontology) concept, respectively, in order to avoid confusion. We will adopt this approach and will always refer to concepts in the context of FCA explicitly as formal concepts.
Euzenat and Shvaiko position FCA as an extensional ontology matching technique for when a common set of instances already exists. In such a case, attributes of the FCA lattice could be seen as the concepts where instances are known to belong, regardless of their source ontology. FCA would allow us to identify not only equivalence relationships but also sub-type correspondences [2]. This is a fundamentally different approach from the previously discussed semantic and string-matching techniques, which typically presuppose that the input ontologies and their concepts are defined in a formal ontology language such as OWL. Rather, an FCA-specific approach relies on two sets of instances (e.g., entries from a database or a collection of documents) as the primary input data and identifies similar formal concepts on that basis.
An example of such an approach in ontology matching is FCA-Merge, proposed by Stumme and Maedche [16]. FCA-Merge takes ontologies and a set of domain-specific text documents as input. Natural language processing techniques are used to extract instances from domain-specific texts. FCA is used to derive a concept lattice from this input data. Finally, a new merged ontology is generated on the basis of the interaction between the concept lattice and a human domain specialist [16].
In our approach, we use FCA in a more limited scope to pre-process the input ontologies. More specifically, FCA enables us to collect all inherited relations of each concept present in the ontology. The resulting lattice is a strict merger of input ontologies, where only absolutely identical relations (i.e., relations that have an identical IRI both as their type and value) are declared equivalent. The results of FCA (a set of identical and a set of different relations between each concept pair) are then used in the subsequent semantic and string-matching phase.
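The pre-processing step described above, collecting a concept's own and inherited relations, can be sketched as a transitive walk over the is-a hierarchy. This is a hedged Python illustration of the idea only; the data structures are invented for the example, and the paper's actual pipeline uses FCA in Prolog.

```python
# Illustrative sketch: gather a concept's relations together with all
# relations inherited from its is-a ancestors (hypothetical structures).
def inherited_relations(concept, parents, relations):
    """parents: dict concept -> list of direct superclasses.
       relations: dict concept -> set of (relation_type, value) pairs."""
    collected = set(relations.get(concept, set()))
    stack = list(parents.get(concept, []))
    seen = set()
    while stack:  # transitive closure over the is-a hierarchy
        c = stack.pop()
        if c in seen:
            continue
        seen.add(c)
        collected |= relations.get(c, set())
        stack.extend(parents.get(c, []))
    return collected

parents = {"Robot": ["Agent"], "Agent": ["Thing"]}
relations = {
    "Robot": {("hasPart", "Actuator")},
    "Agent": {("capableOf", "Action")},
    "Thing": set(),
}
```

With these toy inputs, "Robot" ends up with both its own relation and the one inherited from "Agent", which is the kind of expanded relation set the subsequent matching phase compares.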

3.3. Ontologies in Robotics

Although dealing with ontology operations can be seen as a universal, domain-agnostic problem, the underlying motivation for this paper lies in the domain of robotics and, more specifically, in the collaboration of autonomous agents in multi-robot systems. With that in mind, the following section will provide an overview of the intersection between ontologies and autonomous agents.
It is characteristic for multi-robot applications to have dynamic mission profiles that require interpreting data whose meaning depends on the context of the current situation and on the intentions/state of the knowledge of other collaborating agents. For instance, knowledge representation and interpretation of objects and the environment can be achieved through the use of situation context-specific ontologies and by merging them with ontologies of involved agents. The domain knowledge represented in ontologies improves the flexibility, re-usability, and adaptability of various robotic tasks [17].
To mitigate the heterogeneity of ontologies developed for robotics, IEEE has proposed a standard for knowledge representation and reasoning in autonomous robotics, the autonomous robot architecture (ROA) ontology, which provides a conceptual framework for sharing information about robot architectures [18]. ROA is built on top of the Suggested Upper Merged Ontology (SUMO) and the Core Ontology for Robotics and Automation (CORA), which pass on some of their concepts to ROA. CORA is an upper-level ontology specific to the field of robotics, providing a knowledge representation of concepts such as robot, robotic system, and robot part, among others [19]. Other upper-level ontologies relevant to robotics are the Descriptive Ontology for Linguistic and Cognitive Engineering (DOLCE) [20], Ontology-based Unified Robot Knowledge (OUR-K) [21], and the OpenRobots Common Sense Ontology (ORO) [22].
Manzoor et al. [17] describe a number of domain ontologies related to robots with social roles. These are the KnowRob [23] project and related SOMA ontology [24], Ontology for Robotic Orthopedic Surgery (OROSU) [25], CARESSES [26], Perception and Manipulation Knowledge (PMK) [27], search and rescue SARbot [28], indoor environmental quality (IEQ) [29], SmartRules [30], Adapting Robot Behavior based on Interaction (ARBI), Worker-cobot [31], and Agility Performance of Robotic Systems (APRS) [32].
Work has also been carried out to develop a universal approach for linking an OWL ontology with ROS middleware. One ongoing effort in this regard is the Java-based ROS Multi Ontology Reference (ARMOR) framework [33], which also has a publicly available repository with installation instructions and other tutorials [34].

3.4. Our Approach

Based on the techniques described above, our approach can be seen as a structure-based analysis relying primarily on the lexical similarity of relation values where relation values of matching relation types are compared for their similarity on the basis of string and semantic matching.
Euzenat and Shvaiko [35] cover a number of systems that employ a similar mix of structure-based and lexical approaches. These include, for example, SAMBO [36], which combines terminological matching (n-gram, edit distance, and comparison of word lists), structural analysis of is-a and part-of hierarchies, and background knowledge; Falcon [37], which uses a divide-and-conquer approach where structural proximity between concepts is the basis for subsequently partitioning ontologies into smaller clusters; and RiMOM [38], which employs linguistic data from WordNet as background knowledge and a variation of the Similarity Flooding Algorithm [39] to assess structural similarity.
In Table 1, we provide a bird’s eye comparison of our proposed method to previous work in the field. In addition to the methods already covered in this section, we have included some of the top-performing solutions from Section 5:
We can see from the table that most approaches employ some form of lexical and/or semantic comparison. The Similarity Flooding Algorithm is also often used as part of structure-based techniques. Other methods include the use of extensional techniques based on domain documents, the extraction of a concept's local context as virtual documents, the use of a knowledge base or other form of domain knowledge, the use of training data, and the use of artificial neural networks. A distinction can also be made between whether a particular approach is general-purpose or geared towards a specific domain (e.g., specialization to medical ontologies via the use of UMLS).
Our approach is distinguished from these earlier approaches by a more explicit treatment of asymmetric ontologies. It is common that two ontologies are not described in comparable detail (i.e., they have different levels of granularity) or from a comparable perspective. We aim to mitigate this issue by introducing a predetermined similarity value indicating the absence of a value in scenarios where two concepts do not share the same types of relations (e.g., one concept is defined to be a subclass of something, whereas the other concept is not a subclass of anything). If two concepts do share a relation type, string and semantic matching are used instead.
The motivation here is to successfully identify semantically similar or equivalent concepts that have been described in vastly different levels of detail while minimizing the occurrence of false positives. We call this approach the principle of weak unification as it can be seen as an extension of Prolog’s inherent unification mechanism with a threshold-based metric. What constitutes an acceptable threshold for a positive match, as well as the weight of different similarity measures, is determined as input parameters in the range of [0,1]. The method is intended to be automatic without the need for human intervention.
Furthermore, we aim to have a more agnostic approach towards the relations attributed to concepts. As such, values of each relation type constitute a disjoint subset that can only be compared to other values in that subset. Note that with this approach, the semantic context of the concept (e.g., the name, label or description of the concept) has exactly the same weight as any structural relation (e.g., being a subclass of some concept or belonging to some domain) of that concept. In addition to relation values that are directly attributed to a concept, relation values inherited from the concept’s ancestors (an is-a hierarchy) are used to assess the similarity of a concept pair. As mentioned in Section 3.2, we use FCA to collect inherited relations for all possible concept pairs.
With regards to string and semantic matching, we employ a dynamic approach that will either use the combination of both measures using predefined weights or exclusively prefer one method if the calculated similarity value is high enough (i.e., if it exceeds an upper threshold).
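The dynamic combination just described can be captured in a few lines. The following is a hedged sketch, not the paper's Prolog code; the parameter names and default values are hypothetical.

```python
# Illustrative sketch: if either the string or the semantic score clears an
# upper threshold, prefer that score alone; otherwise fall back to a
# predefined weighted combination of the two (hypothetical parameters).
def combined_similarity(string_score, semantic_score,
                        upper_threshold=0.9, string_weight=0.5):
    if max(string_score, semantic_score) >= upper_threshold:
        return max(string_score, semantic_score)
    return (string_weight * string_score
            + (1.0 - string_weight) * semantic_score)
```

For example, a near-perfect semantic match is taken at face value even when the strings differ, while two middling scores are averaged according to their weights.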

4. Proposed Methodology

On the most abstract level, ontology merging, the contribution that this paper is focusing on, can be seen as the comparison of every concept C i in ontology O 1 to every concept C j in ontology O 2 and matching the concept pairs that exceed some predefined similarity threshold.
The degree of how similar two concepts are is referred to as the confidence value of each concept pair in the range of [0,1], where 0 expresses the lowest and 1 expresses the highest possible degree of similarity. The minimal degree of similarity to declare two concepts as being similar is expressed as a parameter in the same value range and is referred to as the confidence threshold. A concept from ontology is matched to a concept from another ontology if neither of the concepts is already matched, their confidence value exceeds the confidence threshold, and it is also the highest available confidence value for both concepts in the concept pair. If a concept is not paired with any other concept, it is considered to be mismatched.
Let $C_i \in O_1$ and $C_j \in O_2$, and let $O$ be the merged output ontology.
The signature of the merging operation can formally be expressed as
$$merge: O_1 \times O_2 \to O$$
$$O = \begin{cases} O \cup \{match(C_i, C_j)\} & \text{if } \exists C_i \in O_1, C_j \in O_2 : \max_{k \in [1,m]} f(C_i, C_j)_k \geq f_U \wedge f(C_i, \cdot) \notin O \wedge f(\cdot, C_j) \notin O \\ O \cup \{C_i\} \cup \{C_j\} & \text{otherwise} \end{cases}$$
where
  • $C_i$ and $C_j$ are input concepts;
  • $O_1$ and $O_2$ are input ontologies;
  • $O$ is the merged output ontology;
  • $f(C_i, C_j)$ is the match of concepts $C_i$ and $C_j$;
  • $\max_{k \in [1,m]} f(C_i, C_j)_k$ is the maximum confidence value over all possible concept pairs $(C_i, C_j)_k$, where $k \in [1, m]$;
  • $f_U$ is the confidence threshold;
  • $m$ is the number of possible concept pairs from input ontologies $O_1$ and $O_2$, so that $m = |O_1| \times |O_2|$.
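The matching rule above can be pictured as a greedy, best-confidence-first pairing. The following Python sketch (the actual tool is implemented in Prolog; the function names and the `pair_confidence` callback are illustrative) shows the intended behavior: each concept participates in at most one match, only pairs above the confidence threshold are accepted, and everything else is carried over as mismatched.

```python
from itertools import product

def match_concepts(o1, o2, pair_confidence, threshold=0.7):
    """Greedy one-to-one matching sketch: pair the concepts whose confidence
    exceeds the threshold and is the best available for both sides.
    `pair_confidence` maps (c1, c2) to a value in [0, 1]."""
    # Score every cross-ontology pair, best pairs first.
    scored = sorted(
        ((pair_confidence(c1, c2), c1, c2) for c1, c2 in product(o1, o2)),
        reverse=True,
    )
    matched, used1, used2 = [], set(), set()
    for conf, c1, c2 in scored:
        if conf < threshold:
            break  # remaining pairs are below the confidence threshold
        if c1 in used1 or c2 in used2:
            continue  # each concept may participate in at most one match
        matched.append((c1, c2, conf))
        used1.add(c1)
        used2.add(c2)
    # Concepts without a pair are carried over unchanged (mismatched).
    mismatched = [c for c in o1 if c not in used1] + \
                 [c for c in o2 if c not in used2]
    return matched, mismatched
```

For instance, with confidences {(robot, automaton): 0.9, all others below 0.7}, only the robot/automaton pair is matched and the remaining concepts are reported as mismatched.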
The matched concept pairs are further refined by assessing the confidence value of each relation value pair in the matched concept pair. If the confidence value of a particular value pair falls below the confidence threshold, sub-concepts of the matched concept pair are created and the diverging relation type and value are added as a distinct sub-concept. This could be considered the specialization of matched concepts, denoted as ⊑. The new merged ontology is written after the completion of the specialization operation.
Let $rels$ be an operation that returns the set of all relations of a concept:
$$rels: C \to \{rel_1, \ldots, rel_n\}$$
then
$$rels(f(C_i, C_j)) = rels(C_i) \cup rels(C_j),$$
and the specialization operation is expressed as
$$\sqsubseteq(rels(f(C_i, C_j))) = (rels(C_i) \cap_f rels(C_j)) \cup \{rels(C'_i)\} \cup \{rels(C'_j)\}$$
where
  • $\sqsubseteq$ is the specialization operation;
  • the $\cap_f$ operator signifies the intersection of relation value pairs whose confidence value exceeds the confidence threshold;
  • $C'_i$ and $C'_j$ are the new sub-concepts created as a result of specialization;
  • $rels(C'_i) = rels(C_i) \setminus rels(C_j)$ and $rels(C'_j) = rels(C_j) \setminus rels(C_i)$, where $\setminus$ denotes set difference.

4.1. Similarity Matching as a Form of Weak Unification

In the context of logic programming, the unification of terms is the process of transforming logic clauses, using the mgu algorithm [48], into a form where some of their literals become equivalent modulo negation, allowing these literals to be eliminated from the derived clause by the SLD resolution rule. It is used in Prolog to evaluate the truth value of a goal on the basis of clauses, with the instantiation of their argument terms, provided in the knowledge base. In order to make better use of the Prolog unification mechanism, we apply the encapsulation of clauses [49]; i.e., we transform all concepts into a single uniform representation where the relation types found in both input ontologies are taken as arguments to be used for the grounding of concepts. The value of each argument is therefore represented as a list of relation values of that type; if a concept does not have a particular relation, the argument value is an empty list. The fundamental difference between strong syntactic unification and the weak unification applied in our method for term comparison and clause merging is the following: while syntactic unification transforms terms into a syntactically equivalent form, weak unification permits some difference in the structure of terms and values, up to a predefined threshold in some well-defined metric, with relation values compared by their calculated similarity.
Hence, in the case of strong unification, two concepts are matched only when their argument values are either identical or they are variables that can be uniformly instantiated on the basis of the Prolog knowledge base. We call the current approach weak unification because we use a similarity metric (confidence) instead of Prolog’s internal unification mechanism.
For example, the following two concepts named android would not be unified using strong unification due to the asymmetry of relations and differences in the values these relations hold. However, they could be unified in the sense of weak unification if the semantic connection between robot and automaton is made (using, for example, WordNet) and the absence of a value in the relation equivalentClass is permitted.
 
android(equivalentClass([]), subClassOf(['robot'])).
 
android(equivalentClass(['humanoid']), subClassOf(['automaton'])).
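The weak unification of these two encapsulated facts can be sketched in Python (the tool itself relies on Prolog); the synonym set below is an illustrative stand-in for the WordNet lookup, and the numeric parameters (confidence of absence 0.5, synonym confidence 0.9, threshold 0.7) follow the values used in our experiments:

```python
# Illustrative sketch of weak unification: two "android" concepts unify if
# the average similarity over their relation slots meets the threshold.
SYNONYMS = {frozenset({"robot", "automaton"})}  # assumed background knowledge

def value_similarity(v1, v2):
    if v1 == v2:
        return 1.0
    if frozenset({v1, v2}) in SYNONYMS:
        return 0.9  # confidence value of a synonym
    return 0.0

def weak_unify(c1, c2, absence=0.5, threshold=0.7):
    """c1, c2: dicts mapping a relation type to a list of values."""
    scores = []
    for rel in set(c1) | set(c2):
        vals1, vals2 = c1.get(rel, []), c2.get(rel, [])
        if not vals1 or not vals2:
            scores.append(absence)  # relation value missing on one side
        else:
            # Best pairwise similarity between the value lists.
            scores.append(max(value_similarity(a, b)
                              for a in vals1 for b in vals2))
    confidence = sum(scores) / len(scores)
    return confidence >= threshold, confidence

a1 = {"equivalentClass": [], "subClassOf": ["robot"]}
a2 = {"equivalentClass": ["humanoid"], "subClassOf": ["automaton"]}
```

Here the empty equivalentClass slot contributes the absence value 0.5 and the robot/automaton synonymy contributes 0.9, giving an average of 0.7, which just reaches the threshold.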

4.2. Calculating the Similarity Confidence of a Concept Pair

The similarity confidence of a concept pair is the average of the confidences calculated over each relation value pair of compared concepts. If a relation has more than one value, the values are matched using a best-first approach and each value pair is considered separately. The two concepts need not have an identical structure: either one of the concepts has a relation that the other does not, or the two concepts have a different number of values for the same relation. We use an additional input parameter referred to as the confidence value of an absence which determines what confidence value is attributed to a relation that does not have a counterpart in the concept it is compared with.
Relation values of any ancestral concept are also used in similarity calculation if the concept has an ancestral relation (e.g., in RDF expressed either as rdfs:subClassOf or rdfs:subPropertyOf) and there exists knowledge of the ancestor concept (e.g., it is either defined in the same ontology or provided as background knowledge).
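As a sketch of how inherited relation values might be gathered along an is-a chain (the paper performs this collection with FCA during pre-processing; the data shapes and names here are illustrative):

```python
def inherited_relations(concept, subclass_of, relations):
    """Collect the relation values a concept inherits through its is-a chain
    (e.g., rdfs:subClassOf links). `subclass_of` maps a concept to its parent;
    `relations` maps a concept to its own {relation: [values]} dict."""
    collected = {}
    seen = set()
    current = concept
    while current is not None and current not in seen:
        seen.add(current)  # guard against cycles in the hierarchy
        for rel, values in relations.get(current, {}).items():
            collected.setdefault(rel, []).extend(values)
        current = subclass_of.get(current)
    return collected
```

A concept thus accumulates both its explicit relation values and those of every ancestor, which is what the similarity assessment of a concept pair operates on.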
OWL class descriptions (e.g., owl:Restriction, owl:intersectionOf, owl:unionOf, owl:complementOf) are compared with the same principles as concepts but are constrained to matching relation types. If a pair of class descriptions does not have identical relation types, their similarity is considered to be 0, regardless of relation values.
The formal expression for calculating the similarity confidence of a concept pair is
$$S(C_1, C_2) = \frac{\sum_{i=1}^{n} S(R_1, R_2)_i}{n}$$
where
  • $C_1$ and $C_2$ are input concepts;
  • $S(C_1, C_2)$ is the similarity confidence of the concept pair;
  • $n$ is the total number of relations that exist in the input concepts;
  • $S(R_1, R_2)_i$ is the similarity confidence of the $i$-th relation value pair $(R_1, R_2)$.
To illustrate the above method, we provide an example of calculating the similarity between the following two concepts expressed in RDF/OWL.
Example A:
<owl:ObjectProperty rdf:about="iri1#isGridPowered">
    <rdfs:domain rdf:resource="iri1#device"/>
    <rdfs:domain rdf:resource="iri1#airplanes"/>
</owl:ObjectProperty>
Example B:
<owl:ObjectProperty rdf:about="iri2#has-grid-power">
    <rdfs:domain rdf:resource="iri1#airplanes"/>
    <rdfs:range rdf:resource="iri2#battery"/>
</owl:ObjectProperty>
In the example provided in Table 2, the confidence value of an absence is set to 0.5 and the following confidence values have been calculated for the relation value pairs:
In that case, the calculated similarity of the entire concept pair is $\frac{4 \cdot 1 + 2 \cdot 0.8 + 2 \cdot 0.5}{2 \cdot 4} = 0.825$. Note that the total number of relations is 8 because both the concept name and rdf:type are also considered relations. The structure of the concepts is not symmetric, as the concepts have a different number of relations of a particular type (2 vs. 1 for rdfs:domain and 0 vs. 1 for rdfs:range).
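The averaging step of Formula (3) can be reproduced with a short sketch; the individual pair confidences below are taken from the worked example (four exact matches at 1.0, two partial matches at 0.8, and two absences at 0.5 over 8 relation slots):

```python
def concept_similarity(pair_confidences, n_relations):
    """Average the per-relation-pair confidences over the total number of
    relation slots, as in Formula (3). The confidences are assumed to already
    include the concept name and rdf:type slots."""
    return sum(pair_confidences) / n_relations

# Assumed per-pair confidences for the worked example above.
confs = [1.0] * 4 + [0.8] * 2 + [0.5] * 2
```

With these inputs, `concept_similarity(confs, 8)` reproduces the 0.825 similarity computed in the text.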

4.3. Calculating the Similarity Confidence of a Relation Value Pair

We use semantic similarity and string similarity to calculate the similarity $S(R_1, R_2)$ of relation value pairs $(R_1, R_2)$. The weight $w_i$ of each method is implemented as an input parameter of the matching algorithm. Additionally, we make an exception in situations where the score of one metric is high enough (i.e., greater than or equal to the parameter $S_u$) to distinctly dominate over the values of the other metrics.
The calculation of the similarity confidence can be formally expressed as
$$S(R_1, R_2) = \begin{cases} \sum_{i=1}^{n} w_i \times S_i(R_1, R_2) & \text{if } \forall i : S_i(R_1, R_2) < S_u \\ S_k(R_1, R_2) & \text{if } \exists k \in [1, n] : \max_k(S_k(R_1, R_2)) \geq S_u \end{cases}$$
where
  • $R_1$ and $R_2$ are relation values;
  • $n$ is the number of value matching techniques (in the present case, $n = 2$);
  • $S(R_1, R_2)$ is the calculated similarity confidence of the relation value pair;
  • $w_i$ is the weight of method $i$;
  • $S_i(R_1, R_2)$ is the similarity confidence of the relation value pair using method $i$;
  • $S_u$ is the upper threshold of similarity confidence.
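A minimal Python sketch of this combination rule follows; the upper threshold value 0.95 for $S_u$ is an assumed placeholder, since the paper leaves the concrete value as an input parameter:

```python
def relation_similarity(scores, weights, upper=0.95):
    """Combine per-method similarity scores (sketch of Formula (4)): use the
    weighted average unless a single method is decisive, i.e. its score meets
    the upper threshold S_u, in which case that score is used alone."""
    best = max(scores)
    if best >= upper:
        return best  # one method dominates; ignore the weighted average
    return sum(w * s for w, s in zip(weights, scores))
```

With equal weights of 0.5, a synonym pair scored (semantic 1.0, string 0.0) yields 1.0 rather than the diluted average 0.5, which is the reinforcement behavior described in Section 4.3.3.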

4.3.1. Semantic Similarity

WordNet is used to examine the meaning of the words found within a relation value. Each relation value is split into a list of words that are reduced to their root form for normalized comparison. Every word in one list is then compared to every word in the other list to evaluate similarity in meaning. We distinguish between three types of similarity: the words are identical, they are synonymous, or they have some other positive semantic relation (e.g., hypernymy, meronymy, etc.). The weights of the latter two are referred to as the confidence value of a synonym and the confidence value of other semantic relations, respectively. The confidence of a relation value pair is the sum of all identified similarities divided by the total number of words in both compared relations. Conceptually, this is similar to Formula (3) and can formally be represented as follows:
$$Sem(R_1, R_2) = \frac{2(w_{id} + w_{syn} p_{syn} + w_{oth} p_{oth})}{n}$$
where
  • $R_1$ and $R_2$ are the compared relation values;
  • $n$ is the total number of words in both input values;
  • $w_{id}$ is the number of identical word pairs;
  • $w_{syn}$ is the number of synonymous word pairs;
  • $w_{oth}$ is the number of word pairs with any other semantic relation;
  • $p_{syn}$ is the confidence value of a synonym;
  • $p_{oth}$ is the confidence value of other semantic relations.
Note that w i d , w s y n and w o t h refer to the number of word pairs, while n refers to the number of individual words. To reconcile the two formats, we multiply the numerator of the fraction by 2.
For example, given 0.9 as the confidence value of a synonym and 0.45 as the confidence value of another semantic relation, the semantic similarity for the relation values #bike and #electricBike would be $\frac{2(1 + 0 \cdot 0.9 + 0 \cdot 0.45)}{3} = 0.667$. For the relation values #bike and #electric-Bicycle, where bike and bicycle are identified as synonyms, the semantic similarity would be $\frac{2(0 + 1 \cdot 0.9 + 0 \cdot 0.45)}{3} = 0.6$, and for #velocipede and #electricBicycle, where bicycle is a hypernym of velocipede, the semantic similarity would be $\frac{2(0 + 0 \cdot 0.9 + 1 \cdot 0.45)}{3} = 0.3$.
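The formula and the three worked examples can be reproduced with the following sketch, where the synonym and other-relation sets passed as arguments stand in for WordNet lookups:

```python
def semantic_similarity(words1, words2, pairs_syn, pairs_oth,
                        p_syn=0.9, p_oth=0.45):
    """Sketch of the WordNet-based measure: count identical, synonymous, and
    otherwise-related word pairs over the total number of words. `pairs_syn`
    and `pairs_oth` are sets of frozenset word pairs, standing in for WordNet."""
    n = len(words1) + len(words2)
    w_id = w_syn = w_oth = 0
    for a in words1:
        for b in words2:
            pair = frozenset({a, b})
            if a == b:
                w_id += 1
            elif pair in pairs_syn:
                w_syn += 1
            elif pair in pairs_oth:
                w_oth += 1
    # Multiply by 2 to reconcile pair counts with the individual word count n.
    return 2 * (w_id + w_syn * p_syn + w_oth * p_oth) / n
```

Splitting #electricBike into its constituent words and applying the function reproduces the 0.667, 0.6, and 0.3 values from the examples above.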

4.3.2. String Similarity

We use a string metric developed by Stoilos et al. [13] to determine how similar two relation values are without considering the possible meaning of the words in those values. The method is based on the length of common substrings, similar to the Levenshtein distance. The formal representation of the string metric proposed by Stoilos et al. is
$$sim(s_1, s_2) = comm(s_1, s_2) - diff(s_1, s_2) + winkler(s_1, s_2)$$
where
  • $comm(s_1, s_2)$ is the commonality between $s_1$ and $s_2$;
  • $diff(s_1, s_2)$ is the difference between $s_1$ and $s_2$;
  • $winkler(s_1, s_2)$ is the authors' improvement on the method introduced by Winkler [50].
In addition, the authors define commonality and difference as follows:
$$comm(s_1, s_2) = \frac{2 \sum_i length(maxComSubString_i)}{length(s_1) + length(s_2)}$$
$$diff(s_1, s_2) = \frac{uLen_{s_1} \cdot uLen_{s_2}}{p + (1 - p) \cdot (uLen_{s_1} + uLen_{s_2} - uLen_{s_1} \cdot uLen_{s_2})}$$
where
  • $p \in [0, \infty)$ determines the importance of the difference factor $diff(s_1, s_2)$ and was set to 0.6 as a result of the authors' experiments;
  • $uLen_{s_i}$ is the length of the unmatched substring from the initial string $s_i$, scaled by the string length $length(s_i)$.

4.3.3. Determining the Upper Threshold Values of Similarity Metrics

Both methods have a weight that is determined as an input parameter of the matching algorithm, and the similarity of a relation value pair is determined as a weighted average of the results of each method. However, when the result of a single method is greater than or equal to an upper threshold, only that particular method is used to assess the similarity of the relation value pair. This approach reinforces decisive matching results when the other similarity method provides a low score.
For example, when comparing Robot to Automaton, the semantic similarity method returns a similarity score that exceeds the upper threshold, as the two words are synonymous in WordNet. However, the string similarity method provides a similarity score of 0. Clearly, providing a high similarity value on the basis of semantic similarity alone is preferable to providing a considerably lower value as a weighted average of both similarity values.
The opposite also applies when semantic matching fails to identify either the words in the relation value (e.g., when the relation value is an abbreviation) or the underlying semantic connection, while string matching identifies the high similarity of the two values. For example, in the comparison of relation and relational, semantic matching returns a similarity score of 0, because even though both words are documented in WordNet, there is no explicit semantic relation between them. String matching using [13], on the other hand, identifies the similarity of these two values and provides a similarity score that exceeds the upper threshold.

4.3.4. Confidence Pre-Value

When two relation values both refer to a concept in their respective ontology, the last known confidence value of the already-compared concepts is used instead of the string comparison. We refer to this as the confidence pre-value, as it is the confidence value of the concept pair known from a previous comparison iteration; to avoid repeated evaluations, this existing estimate is reused.

5. Experimental Results

We employed two approaches to validate our results: (1) a quantitative approach on the basis of test data from the Ontology Alignment Evaluation Initiative OAEI and (2) a qualitative approach based on the results from comparing robotics-related ontologies outlined in [17].

5.1. OAEI-Based Evaluation

OAEI is an international initiative that hosts a number of test sets as well as annual competitions and workshops for ontology matching. We use the 2016 benchmark test set [51,52] that comprises 111 tests, where the base ontology from the domain of bibliography is compared to its different variations. The base ontology consists of 33 named classes, 24 object properties, 40 data properties, 56 named individuals, and 20 anonymous individuals. Each test introduces some form of variation to the original reference ontology. These variations include changes in the value names (e.g., different naming conventions, use of synonyms, and scrambling), in the structure of the concepts (e.g., omission of restrictions, comments, or other relations), and in the ontology in general (e.g., removal or addition of concepts). In addition, the data set includes four additional tests that are based on real bibliographic ontologies that only partially correspond to the reference ontology.
We ran our program on a subset of 42 base tests (all tests from 101 to 266 with whole-number identifiers) from the 111 total tests and on all 4 additional tests (tests 301 to 304) that were based on real ontologies. The full results for each test can be found in Table 3.
To conserve time and resources, we excluded all tests with hyphenated numbering (e.g., 201-2, 202-4, etc.), as these are partial adaptations of their non-hyphenated counterparts (for example, test 202 replaces concept names with a random string and removes all rdf:comment and dc:description relations that hold human-readable comments, whereas tests 202-2, 202-4, etc., change only some of the concepts), and sufficient data were obtained from the tests that implemented the change in full. In addition, we excluded results from tests 206, 207 and 210, as these tests compare ontologies in different languages (English and French in this case) and the current approach is limited in scope to English. While preliminary runs on these tests produced reasonably good results, this was largely due to the structural and etymological similarities of concept names in English and French, and the outcome would differ vastly depending on the specific language pair. Conversely, we retained results from all tests that involved the scrambling of concept names.
In order to achieve the best balance between minimizing false positives and permitting structural flexibility, we used the following input parameter values:
  • Minimal confidence threshold = 0.7;
  • Confidence value of a synonym = 0.9;
  • Confidence value of other semantic relation = 0.675;
  • Confidence value of an absence = 0.5;
  • Semantic similarity weight = 0.5;
  • String similarity weight = 0.5.
The overall average of correct matches was 60% across the 42 base tests, including 17 tests with a 100% match and 8 tests with a 0% match. Better results were achieved in tests that somehow simplified the original ontology: e.g., the simplification of the OWL language (tests 102, 103, 104), the removal of subclass assertions (test 221), individuals (test 224), property restrictions (test 225), and properties (test 228), and the general simplification of the hierarchy (test 222). The test changing concept names to semantically irrelevant strings was quite successful, with 93% accuracy (test 201), in all likelihood on account of identical comments in both ontologies compensating for the scrambling. In test 202, these identical comments were also removed, and the number of correct matches dropped to 23%. A marginal loss of accuracy also occurred when the hierarchy was extended with intermediate classes (test 223, 97%). From the perspective of semantic and string matching, test 209 is perhaps the most relevant, as it introduces the use of synonyms while removing the comments. The accuracy in this test was 58%, slightly below the overall average.
We can distinguish two types of weaknesses of the method in the matching process. The first type is the incorrect categorization of a concept as mismatched due to the concept not having any available pair over the minimal confidence threshold. The second type is the occurrence of false positives due to matching incorrect concepts in relation to ground truth. Lowering the minimal confidence threshold would reduce the occurrence of mistakes of the first type, but increase the occurrence of mistakes of the second type. With the current input parameters, the number of false positives remained relatively low in all tests, with the highest share being 8% for test 209.
To provide context for these results, we compared our findings to the final results of the OAEI 2011 campaign [53], which used tests from the same data set. As some tests we ran were not present in the OAEI 2011 campaign and vice versa, we analyzed the intersection of tests relying on the extended 2011 campaign result data available on the OAEI website [54].
Two metrics were used in the OAEI 2011 campaign to assess the effectiveness of a method: precision and recall. Precision indicates the fraction of identified relevant (correct) instances out of all identified instances, whereas recall is the fraction of identified relevant instances out of all relevant instances in the data set. In this case, relevant instances would be the set of matched concept pairs provided in the OAEI reference data, so the two metrics are, respectively,
$$precision = \frac{\text{identified correct matches}}{\text{identified correct matches} + \text{false positives}}$$
$$recall = \frac{\text{identified correct matches}}{\text{all correct matches}}$$
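These two metrics can be computed directly from sets of matched concept pairs; the following is a minimal sketch with illustrative names:

```python
def precision_recall(identified, reference):
    """Precision and recall of identified matches against a reference
    alignment; both arguments are sets of (concept1, concept2) pairs."""
    correct = identified & reference  # identified matches that are correct
    precision = len(correct) / len(identified) if identified else 0.0
    recall = len(correct) / len(reference) if reference else 0.0
    return precision, recall
```

For example, a method that identifies two matches of which one appears in a three-pair reference alignment scores a precision of 0.5 and a recall of one third.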
Using the given data set and metrics, our method achieved 73% precision and 51% recall, which both fall near the middle, as seen in Table 4.
For a more detailed insight, we also looked at the cumulative average over the test set for both metrics. This revealed that our method remained a top performer approximately until tests 246–247, as seen in Figure 1 and Figure 2.
Incidentally, all tests from 248 onward include the scrambling of relation values. As our method relies on relation values to contain semantically meaningful data, this drop in precision and recall is to be expected.
Additionally, it should be noted that as only a subset of OAEI 2011 campaign results were used to conduct this analysis, the comparative results provided for other methods vary slightly from what was reported in [54]. However, we believe the results are sufficiently robust to provide a general understanding of how our method performs in comparison.

5.2. Merging Robotics-Related Ontologies

In addition to the OAEI test set, we analyze the results gathered by comparing selected ontologies from the domain of robotics. More specifically, we compared the three ontologies introduced in Section 3.3: DOLCE, PMK, and SOMA. For comparability, we used the same input parameter values in these comparisons as with the OAEI test set.
In terms of ontology taxonomy, PMK and SOMA are both domain-level ontologies that differ in terms of subdomain and scope. PMK focuses on autonomous robot perception and manipulation [55], whereas SOMA focuses on the characterization of physical and social activity context [56]. DOLCE is a foundational ontology that aims to "model a commonsense view of reality" [20]. SOMA is partially based on the DOLCE foundational framework, whereas PMK has no direct connection to DOLCE. As such, we can expect SOMA and DOLCE to have the biggest and PMK and DOLCE to have the smallest overlap. The specific version of DOLCE we used for the comparison is version 397 of Dolce-Lite [57]. In terms of size, the SOMA ontology is the largest, consisting of 839 defined concepts and 6471 triples. The DOLCE ontology consists of 107 defined concepts and 872 triples, whereas PMK has 90 concepts and 306 triples. This clearly shows how ontologies differ not only in the number of concepts but also in their level of detail (average number of triples/relations per concept).
To have a better understanding of the matched concept pairs, we look at the following meta-properties of the concept pairs:
  • The number of explicit relations of a concept;
  • The number of inherited relations of a concept;
  • The number of relations in the concept pair that were considered similar in the refinement process;
  • The matching confidence of the concept pair.
The number of relations, both explicit and inherited, provides a general characterization of the matched concepts: a considerable difference in the number of relations indicates asymmetry in their structural definitions. Likewise, the number of similar relations indicates to what extent the refinement process (specialization process) consolidated the knowledge from each concept. It should be noted that, at the very minimum, each matched concept pair has at least two similar relations (type and name).
As seen in Table 5, the comparison of PMK and DOLCE resulted in only a single matched concept pair and 195 mismatched concepts. Such a low number of matches may indicate the different nature of PMK and DOLCE. In the case of the single matched pair, the names of the concepts are identical in both ontologies, but the level of detail of the concepts differs considerably. Based on the identical lexical content, we can assess that this is a valid match.
The comparison between PMK and SOMA produced 12 matched concept pairs and 905 mismatched concepts. In several cases, the concept name is an identical match. However, there are also matches that could be seen as a match between a specification and a general concept (e.g., #hasSensingComponent-#hasComponent, #ActionClass-#Action) and false positives (e.g., #PhysicalEnvironment-#PhysicalAgent and #WspaceClass-#SpaceRegion). The number of relations is relatively low in most of the matched concepts, even though the concepts in SOMA generally have a larger number of relations. Full details of the matched concept pairs can be found in Table 6.
The comparison between DOLCE and SOMA produced 13 positive matches and 920 mismatched concepts. In most cases, the concept name is an identical match, although the number of relations typically differs. We can explain this contrast as the same concept being described in full detail in DOLCE, but being considerably abridged in SOMA. Exceptions to this observation are the clear false positives #particular-#Item and #physical-quality-#PhysicalAttribute. With regard to the latter, the SOMA ontology seemingly does have a more accurate concept, named #PhysicalQuality, which is categorized as a mismatch instead. Full details of the matched concept pairs can be found in Table 7.

6. Discussion

As seen from the results of comparing the PMK, DOLCE, and SOMA ontologies, the number of matched concepts in all comparisons is remarkably low in relation to the total number of concepts in those ontologies. This could be either due to the inherent dissimilarity of the ontologies, the overly high accuracy of the matching process, or both. Determining the ground truth regarding the similarity of semantically close concepts in large weakly related ontologies is a challenge even for a human expert. This is not only because of the complexity of ontological structures but also because of the variety of contexts in which the meaning of a concept pair can exhaustively be interpreted. As such, it is a challenge to determine the extent of semantically equivalent concepts that could have been matched but were not, as this is specific to each ontology, and even ground truth often cannot be assessed with full certainty.
The comparison between DOLCE and SOMA provides some insight in this regard, as SOMA incorporates parts of DOLCE into its ontology. There are 15 concepts in total with identical names between DOLCE and SOMA. Nine of those concepts were matched correctly, and one was matched incorrectly, as discussed above. The remaining five concepts (#Set, #Feature, #DependentPlace, #TimeInterval, #RelevantPart) were not matched at all. Intuitively, this seems to be caused by differences in the relations these concepts have. It should also be noted that the DOLCE concepts that were incorporated into SOMA were re-defined within SOMA, meaning they are provided a new IRI as a SOMA concept. This considerably limits the potential usability of common IRIs as background knowledge. This can also be seen in the type of identical relations discovered by FCA pre-processing: in practice, only a very limited set of relations had a completely identical IRI, typically relations that expressed some kind of datatype defined in OWL or RDF vocabularies.
The number of relations of matched concepts provides another useful insight. Clearly, most matches occur between concepts with a low number of relations (often only the name and type of the concept), whereas concepts with a large number of relations tend to have a lower similarity value in general. Conversely, having a match between concepts with a high number of relations could be seen as an additional validation that these concepts are indeed similar beyond simply having a similar name. One such example from our results is the match between #has-quale and #hasQuale from the comparison of DOLCE and SOMA. It is notable that not a single OWL class description exceeded the minimal confidence threshold in matched concepts. In other words, none of the property restrictions or constructs expressing the set operations AND, OR, and NOT proved to be similar enough to be matched. As such matches did occur in the OAEI test set, this further highlights the inherent differences between ontologies that were specifically created for ontology merging benchmarking purposes and for the three robotics ontologies chosen for this experiment.

7. Conclusions

In this paper, we have introduced the motivation, principles, and methodology behind a new algorithmic tool for ontology merging that relies on semantic and string matching combined with structure-based analysis and a Prolog built-in syntactic unification mechanism, together called weak unification.
We have provided several experimental results to validate the accuracy of the merging process. Experiments run on the OAEI data set reveal relatively strong results of tests focusing on structural variations and synonyms while being less successful on tests where all semantically meaningful labels were removed.
Robustness experiments run on a relatively arbitrary selection of robotics ontologies produced a surprisingly small number of matched concepts; accordingly, no ground truth could be reliably established for this type of test. A more comprehensive set of comparisons is needed to assess whether this is due to the dissimilarity of the ontologies or the accuracy of the matching process. However, the concept pairs that were matched are generally of good quality, typically linking concepts that could be considered either semantically equivalent or a specialization/generalization of one another. In the comparison between the PMK and DOLCE ontologies, the single match is clearly between semantically equivalent concepts. In the comparison between the PMK and SOMA ontologies, 2 matches out of 12 (16.6%) are between semantically equivalent concepts, 8 matches (66.6%) are between concepts that could be considered a specialization/generalization of one another, and 2 matches (16.6%) are clear false positives. In the comparison between the DOLCE and SOMA ontologies, 10 matches out of 13 (77%) are between concepts that are clearly equivalent, 1 match (8%) can be considered a specialization/generalization pair, and 2 matches (15%) are clear false positives. As a future improvement of the method, the number of relations attributed to matched concepts and other metadata could be used to better analyze the nature of a particular match.
Several aspects of the tool can be improved to further enhance accuracy, in addition to improving the heuristics of the matching process. One of these is replacing the best-first approach in relation value matching with a more comprehensive search. This is primarily relevant when a concept has many relation values of the same type, where a best-first search can produce sub-optimal results.
Secondly, it could be beneficial to permit a more flexible comparison of types. The current implementation restricts concept matching to concepts of strictly the same type. However, more flexibility could be achieved by permitting the comparison of concepts between a sub-type and its generalization. This is particularly pertinent in the context of the OWL vocabulary, which introduces a hierarchical system of property types (e.g., owl:InverseFunctionalProperty, owl:TransitiveProperty and owl:SymmetricProperty, which are all defined as subclasses of owl:ObjectProperty).
Furthermore, the current tool works best under the assumption that input ontologies have a common set of relations defined in a shared vocabulary (RDF, OWL, FOAF, etc.). Relation types that are declared within the context of an input ontology introduce an additional layer of heterogeneity and need to be transformed to a more universal representation in order to be efficiently comparable.

Author Contributions

Conceptualization, N.K. and J.V.; methodology, N.K. and J.V.; software, N.K.; validation, N.K.; writing—original draft preparation, N.K.; writing—review and editing, J.V.; supervision, J.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article, further inquiries can be directed to the corresponding author/s.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gruber, T. Ontology. 2007. Available online: http://web.dfc.unibo.it/buzzetti/IUcorso2007-08/mdidattici/ontology-definition-2007.htm (accessed on 19 January 2024).
  2. Euzenat, J.; Shvaiko, P. Ontology Matching, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  3. Muggleton, S.; Lin, D.; Tamaddoni-Nezhad, A. Meta-interpretive learning of higher-order dyadic datalog: Predicate invention revisited. Mach. Learn. 2015, 100, 49–73. [Google Scholar] [CrossRef]
  4. Guizzardi, G.; Benevides, A.; Fonseca, C.; Porello, D.; Almeida, J.; Prince Sales, T. UFO: Unified Foundational Ontology. Appl. Ontol. 2022, 17, 167–210. [Google Scholar] [CrossRef]
  5. Suggested Upper Merged Ontology (SUMO). Available online: https://www.ontologyportal.org/ (accessed on 29 February 2012).
  6. Maroun, M. A Survey On Ontology Operations Techniques. Math. Softw. Eng. 2021, 7, 7–28. [Google Scholar] [CrossRef]
  7. Choi, N.; Song, I.Y.; Han, H. A Survey on Ontology Mapping. SIGMOD Rec. 2006, 35, 34–41. [Google Scholar] [CrossRef]
  8. Doan, A.; Madhavan, J.; Domingos, P.M.; Halevy, A.Y. Ontology Matching: A Machine Learning Approach. Handb. Ontol. 2004, 385–403. [Google Scholar]
  9. Huseby, K.H. How to Improve the Performance of a Machine Learning Model with Post Processing Employing Levenshtein Distance. Available online: https://towardsdatascience.com/how-to-improve-the-performance-of-a-machine-learning-model-with-post-processing-employing-b8559d2d670a (accessed on 9 March 2024).
  10. Pennington, J.; Socher, R.; Manning, C.D. GloVe: Global Vectors for Word Representation. In Proceedings of the Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 25–29 October 2014; pp. 1532–1543. [Google Scholar]
  11. Princeton University. About WordNet. 2010. Available online: https://wordnet.princeton.edu/ (accessed on 27 October 2023).
  12. Robin, R.; Uma, G. A Novel Algorithm for Fully Automated Ontology Merging Using Hybrid Strategy. Eur. J. Sci. Res. 2010, 47, 74–81. [Google Scholar]
  13. Stoilos, G.; Stamou, G.; Kollias, S. A string metric for ontology alignment. In Proceedings of the Semantic Web–ISWC 2005: 4th International Semantic Web Conference, ISWC 2005, Galway, Ireland, 6–10 November 2005; Proceedings 4. Springer: Berlin/Heidelberg, Germany, 2005; pp. 624–637. [Google Scholar]
  14. Karimi, H.; Kamandi, A. Ontology alignment using inductive logic programming. In Proceedings of the 2018 4th International Conference on Web Research (ICWR), Tehran, Iran, 25–26 April 2018; pp. 118–127. [Google Scholar] [CrossRef]
  15. Rocco, C.; Hernandez-Perdomo, E.; Mun, J. Introduction to Formal Concept Analysis and Its Applications in Reliability Engineering. Reliab. Eng. Syst. Saf. 2020, 202, 107002. [Google Scholar] [CrossRef]
  16. Stumme, G.; Maedche, A. FCA-Merge: Bottom-up merging of ontologies. IJCAI 2001, 1, 225–230. [Google Scholar]
  17. Manzoor, S.; Rocha, Y.; Joo, S.H.; Bae, S.H.; Kim, E.J.; Joo, K.J.; Kuc, T.Y. Ontology-Based Knowledge Representation in Robotic Systems: A Survey Oriented toward Applications. Appl. Sci. 2021, 11, 4324. [Google Scholar] [CrossRef]
  18. Olszewska, J.I.; Barreto, M.; Bermejo-Alonso, J.; Carbonera, J.; Chibani, A.; Fiorini, S.; Goncalves, P.; Habib, M.; Khamis, A.; Olivares, A.; et al. Ontology for autonomous robotics. In Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, 28 August–1 September 2017; pp. 189–194. [Google Scholar] [CrossRef]
  19. Prestes, E.; Fiorini, S.; Carbonera, J. Core Ontology for Robotics and Automation. 2014. Available online: https://www.researchgate.net/publication/273122687_Core_Ontology_for_Robotics_and_Automation (accessed on 27 October 2023).
  20. Borgo, S.; Ferrario, R.; Gangemi, A.; Guarino, N.; Masolo, C.; Porello, D.; Sanfilippo, E.M.; Vieu, L. DOLCE: A descriptive ontology for linguistic and cognitive engineering. Appl. Ontol. 2022, 17, 45–69. [Google Scholar] [CrossRef]
  21. Lim, G.H.; Suh, I.H.; Suh, H. Ontology-Based Unified Robot Knowledge for Service Robots in Indoor Environments. Syst. Man Cybern. Part A Syst. Humans IEEE Trans. 2011, 41, 492–509. [Google Scholar] [CrossRef]
  22. Lemaignan, S.; Ros, R.; Mösenlechner, L.; Alami, R.; Beetz, M. ORO, a knowledge management platform for cognitive architectures in robotics. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 3548–3553. [Google Scholar] [CrossRef]
  23. Beetz, M.; Beßler, D.; Haidu, A.; Pomarlan, M.; Bozcuoglu, A.; Bartels, G. Know Rob 2.0—A 2nd Generation Knowledge Processing Framework for Cognition-Enabled Robotic Agents. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 13 September 2018; pp. 512–519. [Google Scholar] [CrossRef]
  24. Beßler, D.; Porzel, R.; Pomarlan, M.; Vyas, A.; Höffner, S.; Beetz, M.; Malaka, R.; Bateman, J. Foundations of the Socio-physical Model of Activities (SOMA) for Autonomous Robotic Agents. arXiv 2020, arXiv:cs.RO/2011.11972. [Google Scholar]
  25. Gonçalves, P.J.; Torres, P.M. Knowledge representation applied to robotic orthopedic surgery. Robot. Comput.-Integr. Manuf. 2015, 33, 90–99. [Google Scholar] [CrossRef]
  26. Bruno, B.; Chong, N.Y.; Kamide, H.; Kanoria, S.; Lee, J.; Lim, Y.; Pandey, A.K.; Papadopoulos, C.; Papadopoulos, I.; Pecora, F.; et al. The CARESSES EU-Japan Project: Making Assistive Robots Culturally Competent. In Ambient Assisted Living; Casiddu, N., Porfirione, C., Monteriù, A., Cavallo, F., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 151–169. [Google Scholar]
  27. Diab, M.; Akbari, A.; Din, U.; Rosell, J. PMK-A Knowledge Processing Framework for Autonomous Robotics Perception and Manipulation. Sensors 2019, 19, 1166. [Google Scholar] [CrossRef]
  28. Sun, X.; Zhang, Y.; Chen, J. High-Level Smart Decision Making of a Robot Based on Ontology in a Search and Rescue Scenario. Future Internet 2019, 11, 230. [Google Scholar] [CrossRef]
  29. Ribino, P.; Bonomolo, M.; Lodato, C.; Vitale, G. A Humanoid Social Robot Based Approach for Indoor Environment Quality Monitoring and Well-Being Improvement. Int. J. Soc. Robot. 2021, 13, 277–296. [Google Scholar] [CrossRef]
  30. Sabri, L.; Bouznad, S.; Fiorini, S.; Chibani, A.; Prestes, E.; Amirat, Y. An integrated semantic framework for designing context-aware Internet of Robotic Things systems. Integr. Comput.-Aided Eng. 2017, 25, 1–20. [Google Scholar] [CrossRef]
  31. Sadik, A.R.; Urban, B. An Ontology-Based Approach to Enable Knowledge Representation and Reasoning in Worker-Cobot Agile Manufacturing. Future Internet 2017, 9, 90. [Google Scholar] [CrossRef]
  32. Kootbally, Z.; Kramer, T.; Schlenoff, C.; Gupta, S. Implementation of an Ontology-Based Approach to Enable Agility in Kit Building Applications. Int. J. Semant. Comput. 2018, 12, 5–24. [Google Scholar] [CrossRef]
  33. Buoncompagni, L.; Capitanelli, A.; Mastrogiovanni, F. A ROS multi-ontology references services: OWL reasoners and application prototyping issues. arXiv 2017, arXiv:1706.10151. [Google Scholar]
  34. EMARO Lab (Advanced Robotics Lab in Genoa). ARMOR. Available online: https://github.com/EmaroLab/armor (accessed on 29 February 2024).
  35. Shvaiko, P.; Euzenat, J. Ontology Matching: State of the Art and Future Challenges. Knowl. Data Eng. IEEE Trans. 2013, 25, 158–176. [Google Scholar] [CrossRef]
  36. Lambrix, P.; Tan, H. SAMBO—A system for aligning and merging biomedical ontologies. J. Web Semant. 2006, 4, 196–206, Semantic Web for Life Sciences. [Google Scholar] [CrossRef]
  37. Hu, W.; Qu, Y.; Cheng, G. Matching large ontologies: A divide-and-conquer approach. Data Knowl. Eng. 2008, 67, 140–160. [Google Scholar] [CrossRef]
  38. Li, J.; Tang, J.; Li, Y.; Luo, Q. RiMOM: A Dynamic Multistrategy Ontology Alignment Framework. IEEE Trans. Knowl. Data Eng. 2009, 21, 1218–1232. [Google Scholar] [CrossRef]
  39. Melnik, S.; Garcia-Molina, H.; Rahm, E. Similarity flooding: A versatile graph matching algorithm and its application to schema matching. In Proceedings of the 18th International Conference on Data Engineering, San Jose, CA, USA, 26 February–1 March 2002; pp. 117–128. [Google Scholar] [CrossRef]
  40. Cruz, I.; Antonelli, F.; Stroe, C. AgreementMaker: Efficient Matching for Large Real-World Schemas and Ontologies. Proc. VLDB Endow. 2009, 2, 1586–1589. [Google Scholar] [CrossRef]
  41. David, J.; Guillet, F.; Briand, H. Matching directories and OWL ontologies with AROMA. In Proceedings of the 15th ACM International Conference on Information and Knowledge Management, Arlington, VA, USA, 6–11 November 2006; pp. 830–831. [Google Scholar] [CrossRef]
  42. Schadd, F.; Roos, N. MaasMatch Results for OAEI 2012. 2011, Volume 946. Available online: https://dl.acm.org/doi/10.5555/2887596.2887610 (accessed on 27 October 2023).
  43. Jiménez-Ruiz, E.; Grau, B. LogMap: Logic-Based and Scalable Ontology Matching. In the Semantic Web–ISWC 2011, Proceedings of the 10th International Semantic Web Conference, Bonn, Germany, 23–27 October 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 273–288. [Google Scholar] [CrossRef]
  44. Ngo, D.; Bellahsene, Z. YAM++: A Multi-strategy Based Approach for Ontology Matching Task. In Knowledge Engineering and Knowledge Management, Proceedings of the 18th International Conference, EKAW 2012, Galway City, Ireland, 8–12 October 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 421–425. [Google Scholar] [CrossRef]
  45. Wang, P. Lily-LOM: An efficient system for matching large ontologies with non-partitioned method. In Proceedings of the 2010 International Conference on Posters & Demonstrations Track—Volume 658; CEUR-WS.org: Aachen, Germany, 2010; ISWC-PD’10; pp. 69–72. [Google Scholar]
  46. Shvaiko, P.; Euzenat, J.; Jiménez-Ruiz, E.; Hassanzadeh, O.; Trojahn, C. Proceedings of the 14th ISWC workshop on Ontology Matching (OM). 2020. Available online: https://hal.science/hal-02984947 (accessed on 27 October 2023).
  47. Gracia, J.; Bernad, J.; Mena, E. Ontology matching with CIDER: Evaluation report for OAEI 2011. Ontol. Matching 2011, 814. [Google Scholar]
  48. Baader, F.; Snyder, W.; Narendran, P.; Schmidt-Schauss, M.; Schulz, K. Unification Theory. In Handbook of Automated Reasoning; Elsevier: Amsterdam, The Netherlands, 2001; pp. 445–533. [Google Scholar] [CrossRef]
  49. Cropper, A.; Muggleton, S.H. Logical Minimisation of Meta-Rules Within Meta-Interpretive Learning. In Inductive Logic Programming; Davis, J., Ramon, J., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 62–75. [Google Scholar]
  50. Winkler, W. The State of Record Linkage and Current Research Problems. Statist. Med. 1999, 14. [Google Scholar]
  51. Ontology Alignment Evaluation Initiative. Benchmark Test. 2016. Available online: https://oaei.ontologymatching.org/2016/benchmarks/index.html (accessed on 27 October 2023).
  52. Ontology Alignment Evaluation Initiative. Test Library. Available online: https://oaei.ontologymatching.org/tests/ (accessed on 15 April 2023).
  53. Euzenat, J.; Ferrara, A.; van Hage, W.; Hollink, L.; Meilicke, C.; Nikolov, A.; Ritze, D.; Shvaiko, P.; Stuckenschmidt, H.; Šváb Zamazal, O.; et al. Final results of the Ontology Alignment Evaluation Initiative 2011. In Proceedings of the OM’11: Proceedings of the 6th International Conference on Ontology Matching—Volume 814, Bonn, Germany, 24 October 2011; Volume 814. [Google Scholar]
  54. Ontology Alignment Evaluation Initiative. Benchmark Final Results. Available online: https://oaei.ontologymatching.org/2011/results/benchmarks/index.html (accessed on 5 June 2023).
  55. Diab, M. PMK. Available online: https://github.com/MohammedDiab1/PMK (accessed on 15 March 2024).
  56. Bremen, I.U. SOMA. Available online: https://ease-crc.github.io/soma/ (accessed on 15 March 2024).
  57. Gangemi, A. DOLCE-Lite. Available online: https://github.com/iddi/sofia/blob/master/eu.sofia.adk.common/ontologies/foundational/DOLCE-Lite.owl/ (accessed on 4 April 2024).
Figure 1. OAEI 2011 cumulative average for precision.
Figure 2. OAEI 2011 cumulative average for recall.
Table 1. Comparison of different methods. Each entry lists the input data, the matching techniques, and whether the method is fully automatic.

Proposed (input: two OWL files):
  • extraction of inherited relation values using FCA
  • lexical: edit distance
  • semantic: WordNet
  • structural: calculated average of semantic and lexical similarity over all inherited relation values;
  • predetermined similarity value to indicate the absence of a relation value
  • absence of a relation as a separate parameter
  • weighted sum of various methods
Fully automatic: yes.

Robin and Uma [12] (input: two OWL files):
  • lexical: stemming and string comparison
  • semantic: WordNet
  • structural: numbers of similar and different relation values are compared
Fully automatic: yes.

Stoilos et al. [13] (input: not stated, but OWL-based OAEI tests were used):
  • lexical: string similarity metric
Fully automatic: yes.

FCA-Merge [16] (input: two ontologies and a set of natural language documents relevant to the ontologies):
  • instance extraction from natural language documents
  • FCA is used to generate a formal concept lattice
Fully automatic: no; the user manually creates output ontology concepts and their relations based on the FCA lattice.

SAMBO [36] (input: any number of OWL files):
  • lexical: n-gram, edit distance, stemming
  • semantic: hyperonymy using WordNet
  • structural: matching based on is-a and part-of hierarchies
  • use of domain knowledge: Metathesaurus and UMLS
  • learning matcher: naive Bayes classification algorithm using domain literature
  • weighted sum of various methods
Fully automatic: no; the user decides whether the suggested alignment is correct.

RiMOM [38] (input: two ontologies):
  • lexical: vector (cosine) distance as a form of edit distance using the concept’s name and other contextual information (comments, instances of the concept)
  • structural: similarity flooding algorithm
  • dynamic strategy selection
Fully automatic: yes.

AgreementMaker [40] (input: two ontologies):
  • lexical: edit distance, Jaro-Winkler distance, substring-based comparison, cosine similarity
  • semantic: WordNet
  • structural: Descendant’s Similarity Inheritance, Sibling’s Similarity Contribution
  • Linear Weighted Combination (LWC) matcher
  • shortest Augmenting Path algorithm
Fully automatic: yes.

AROMA [41] (input: web directories or ontologies in OWL format):
  • extensional and asymmetric
  • extraction of terms relevant to concepts from documents
  • discovery of implicative matching relations between concepts by evaluating association rules between their respective relevant terms sets
Fully automatic: yes (not stated explicitly, but the tool participated in the OAEI competition).

MaasMtch [42] (input: OWL ontologies with up to 2000 concepts per ontology):
  • syntactic: 3-grams of concept names and labels using Jaccard measure
  • structural: comparison of all of concept’s ancestors based on Levenshtein similarity
  • virtual documents: concept descriptions (e.g., name, label, comments) are collected into a virtual document and compared as a weighted document vector
  • lexical: WordNet; virtual document similarities are used to identify valid WordNet synsets
  • aggregation of similarity matrices
Fully automatic: yes (not stated explicitly, but the tool participated in the OAEI competition).

LogMap [43] (input: two ontologies, with emphasis on large-scale ontologies):
  • lexical indexation: index concept labels using WordNet or UMLS
  • structural indexation: interval labeling schema to represent the extended hierarchy of each ontology
  • anchor mappings: using lexical indexes, identify correspondences that are (lexically) almost exact
  • mapping repair and discovery: an iterative process that repairs inconsistencies in already established mappings and discovers new possibly valid mappings
Fully automatic: yes, with overlapping estimation to facilitate manual revision.

YAM++ [44] (input: two ontologies and a knowledge base (KB)):
  • various terminological metrics
  • machine learning support on the basis of predefined KB
  • Similarity Flooding algorithm
  • multilingual support using machine translation tools
Fully automatic: yes (not stated explicitly, but the tool participated in the OAEI competition).

Lily-LOM [45,46] (input: large ontologies):
  • semantic subgraphs: extraction of the local semantic meaning of ontology elements
  • Semantic Description Document (SDD) matcher: calculate the similarity of concepts’ hierarchies, related properties and instances
  • structural: modification of the similarity flood algorithm
  • ontology matching tuning: parameter optimization based on training data sets
Fully automatic: yes, but ontology matching tuning requires manual oversight.

CIDER [47] (input: two OWL ontologies and a threshold value):
  • linguistic similarity between terms using labels and descriptions
  • structural similarity of terms based on their ontological context and vector space modeling
  • weighing of different contributions to provide a final similarity degree
  • application of artificial neural networks
Fully automatic: yes.
Table 2. Example of relation value pairs with confidence.

Relation Type | Value A | Value B | Conf
rdf:type | ObjectProperty | ObjectProperty | 1
rdfs:domain | airplanes | airplanes | 1
rdfs:range | - | battery | 0.5
name | isGridPowered | has-grid-power | 0.8
rdfs:domain | device | - | 0.5
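For illustration, the per-relation confidences of Table 2 could be aggregated into a single concept match score as follows (a simplified sketch with names of our own; the actual tool combines methods in a weighted sum rather than the plain average shown here):

```python
# Simplified sketch (the names are ours, not the tool's API): aggregate the
# per-relation confidences from Table 2 into one concept match score.
MISSING_VALUE_CONF = 0.5  # predetermined similarity for an absent relation value

# (relation type, value in ontology A, value in ontology B, confidence)
pairs = [
    ("rdf:type", "ObjectProperty", "ObjectProperty", 1.0),
    ("rdfs:domain", "airplanes", "airplanes", 1.0),
    ("rdfs:range", None, "battery", MISSING_VALUE_CONF),
    ("name", "isGridPowered", "has-grid-power", 0.8),
    ("rdfs:domain", "device", None, MISSING_VALUE_CONF),
]

def concept_confidence(pairs):
    """Plain average of the per-pair confidences over all compared relations."""
    return sum(conf for *_, conf in pairs) / len(pairs)

print(round(concept_confidence(pairs), 2))  # 0.76
```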
Table 3. Results from OAEI test data.

Test | C/T | False Pos. | % | Comment
101 | 97/97 | 0 | 100% | Compares the ontology to itself.
102 | 0/0 | 0 | 100% | Compares the ontology to a totally irrelevant one.
103 | 96/97 | 0 | 99% | Compares the ontology with its generalisation in OWL Lite.
104 | 97/97 | 0 | 100% | Compares the ontology with its restriction in OWL Lite (where unavailable constraints have been discarded).
201 | 90/97 | 2 | 93% | Names and labels replaced by a random string.
202 | 22/97 | 0 | 23% | Based on 201; comments have been removed.
203 | 97/97 | 0 | 100% | Labels and comments have been removed.
204 | 97/97 | 0 | 100% | Names and labels are written in a variety of different naming conventions.
205 | 93/96 * | 2 | 97% | Names and labels are replaced by synonyms.
208 | 96/97 | 0 | 99% | Based on 204; comments have been removed.
209 | 56/96 | 8 | 58% | Based on 205; comments have been removed.
221 | 97/97 | 0 | 100% | No class hierarchy: all subclass assertions to named classes are removed.
222 | 93/93 | 0 | 100% | Reduced class hierarchy.
223 | 94/97 | 3 | 97% | Extended class hierarchy: numerous intermediate classes are introduced.
224 | 97/97 | 0 | 100% | All individuals have been removed.
225 | 97/97 | 0 | 100% | All local restrictions on properties have been removed.
228 | 33/33 | 0 | 100% | Properties and relations between objects have been completely removed.
232 | 97/97 | 0 | 100% | Combines 221 and 224.
233 | 33/33 | 0 | 100% | Combines 221 and 228.
236 | 33/33 | 0 | 100% | Combines 224 and 228.
237 | 93/93 | 1 | 100% | Combines 222 and 224.
238 | 94/97 | 3 | 97% | Combines 223 and 228.
239 | 29/29 | 0 | 100% | Combines 222 and 228.
240 | 9/33 | 2 | 27% | Combines 223 and 228.
241 | 33/33 | 0 | 100% | Combines 232, 233 and 236.
246 | 29/29 | 0 | 100% | Combines 236, 237 and 239.
247 | 10/33 | 2 | 30% | Combines 236, 238 and 240.
248 | 6/97 | 0 | 6% | Combines 202 and 221.
249 | 24/97 | 0 | 25% | Combines 202 and 224.
250 | 0/33 | 0 | 0% | Combines 202 and 228.
251 | 12/93 | 1 | 13% | Combines 202 and 222.
252 | 3/97 | 3 | 3% | Combines 202 and 223.
253 | 6/97 | 0 | 6% | Combines 202, 221 and 224.
254 | 0/33 | 0 | 0% | Combines 202, 221 and 225.
257 | 0/33 | 0 | 0% | Combines 202, 224 and 228.
258 | 12/93 | 1 | 13% | Combines 202, 222 and 224.
259 | 13/97 | 3 | 13% | Combines 202, 223 and 224.
260 | 0/29 | 0 | 0% | Combines 202, 222 and 228.
261 | 0/33 | 0 | 0% | Combines 202, 223 and 228.
262 | 0/33 | 2 | 0% | Combines 202, 221, 224 and 228.
265 | 0/29 | 0 | 0% | Combines 202, 222, 224 and 225.
266 | 0/33 | 0 | 0% | Combines 202, 223, 224 and 225.
301 | 9/52 | 1 | 17% | Real ontology: BibTeX/MIT.
302 | 0/35 | 0 | 0% | Real ontology: BibTeX/UMBC.
303 | 0/40 | 0 | 0% | Real ontology: Karlsruhe.
304 | 44/73 | 1 | 60% | Real ontology: INRIA.
* The reference document for test 205 lists 97 matches; however, one of the concepts in the test data appears to have been renamed after another concept, due to what seems to be a manual error, effectively making the two concepts indistinguishable. We have therefore omitted this unclear match in the reference document from the results of test 205.
Table 4. Comparison of results to those of the OAEI 2011 campaign.

System | Precision | Recall
Our Method | 73% | 51%
Edna | 48% | 50%
AgrMaker | 99% | 52%
Aroma | 98% | 49%
CSA | 62% | 59%
CIDER | 98% | 49%
CODI | 90% | 59%
LDOA | 37% | 49%
Lily | 98% | 49%
LogMap | 99% | 49%
MaasMtch | 99% | 47%
MapEVO | 29% | 20%
MapPSO | 61% | 59%
MapSSS | 76% | 58%
Optima | 81% | 50%
YAM | 99% | 50%
Table 5. Matched concepts from the PMK–DOLCE comparison.

PMK Name | Explicit | Inherited | DOLCE Name | Explicit | Inherited | Similar | Confidence
#Region | 3 | 0 | #region | 4 | 5 | 2 | 0.72
Table 6. Matched concepts from the PMK–SOMA comparison.

PMK Name | Explicit | Inherited | SOMA Name | Explicit | Inherited | Similar | Conf
#has-Sensing-Component | 2 | 0 | #has-Component | 2 | 0 | 2 | 0.96
#is-RelatedTo | 2 | 0 | #is-RelatedTo-Concept | 2 | 0 | 2 | 0.96
#Region | 3 | 0 | #Region | 2 | 0 | 2 | 0.91
#Task | 3 | 0 | #Task | 2 | 0 | 2 | 0.91
#Action-Class | 2 | 0 | #Action | 3 | 2 | 2 | 0.875
#objectType | 3 | 1 | #Object | 2 | 0 | 2 | 0.84
#Quality-Aggregation | 3 | 0 | #Quality | 2 | 0 | 2 | 0.82
#Physical-Environment | 3 | 0 | #Physical-Agent | 2 | 0 | 2 | 0.79
#Attributes | 3 | 0 | #Physical-Attribute | 2 | 0 | 2 | 0.79
#Situation | 3 | 0 | #Situation-Transition | 3 | 0 | 2 | 0.76
#Wspace-Class | 2 | 0 | #Space-Region | 2 | 0 | 2 | 0.75
#Context-Reasoning-Class | 2 | 0 | #Reasoning | 3 | 5 | 2 | 0.7
Table 7. Matched concepts from the DOLCE–SOMA comparison.

DOLCE Name | Explicit | Indirect | SOMA Name | Explicit | Indirect | Similar | Conf
#quality-space | 4 | 0 | #Quality | 2 | 0 | 2 | 0.84
#overlaps | 6 | 1 | #overlaps | 2 | 0 | 2 | 0.82
#has-quality | 6 | 2 | #hasQuality | 2 | 0 | 2 | 0.82
#particular | 2 | 0 | #Item | 3 | 2 | 2 | 0.82
#region | 4 | 5 | #Region | 2 | 0 | 2 | 0.81
#event | 3 | 9 | #Event | 2 | 0 | 2 | 0.79
#process | 3 | 10 | #Process | 2 | 0 | 2 | 0.79
#time-interval | 3 | 10 | #Time-Interval | 2 | 0 | 2 | 0.79
#physical-object | 5 | 17 | #Physical-Object | 2 | 0 | 2 | 0.77
#space-region | 5 | 13 | #Space-Region | 2 | 0 | 2 | 0.78
#has-quale | 6 | 8 | #hasQuale | 6 | 2 | 4 | 0.74
#physical-quality | 7 | 6 | #Physical-Attribute | 2 | 0 | 2 | 0.73
#state | 3 | 10 | #State | 3 | 0 | 2 | 0.71
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
