Article

Entity Alignment Method Based on Joint Learning of Entity and Attribute Representations

Cunxiang Xie, Limin Zhang and Zhaogen Zhong
1 Department of Information Fusion, Naval Aviation University, Yantai 264001, China
2 School of Aviation Basis, Naval Aviation University, Yantai 264001, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(9), 5748; https://doi.org/10.3390/app13095748
Submission received: 8 February 2023 / Revised: 25 April 2023 / Accepted: 27 April 2023 / Published: 6 May 2023

Featured Application

In practical applications, different fields have their own knowledge graphs, such as financial, commodity, and medical graphs. The entity alignment technique can be applied to the fusion of multiple knowledge graphs within a domain or even across domains.

Abstract

Entity alignment helps discover and link entities from different knowledge graphs (KGs) that refer to the same real-world entity, making it a critical technique for KG fusion. Most entity alignment methods are based on knowledge representation learning, which uses a mapping function to project entities from different KGs into a unified vector space and align them based on calculated similarities. However, this process requires sufficient pre-aligned entity pairs. To address this problem, this study proposes an entity alignment method based on joint learning of entity and attribute representations. Structural embeddings are learned using the triples modeling method based on TransE and PTransE and extracted from the embedding vector space utilizing semantic information from direct and multi-step relation paths. Simultaneously, attribute character embeddings are learned using the N-gram-based compositional function to encode a character sequence for the attribute values, followed by TransE to model attribute triples in the embedding vector space to obtain attribute character embedding vectors. By learning the structural and attribute character embeddings simultaneously, the structural embeddings of entities from different KGs can be transferred into a unified vector space. Lastly, the similarities between the structural embeddings of different entities are calculated to perform entity alignment. The experimental results showed that the proposed method performed well on the DBP15K and DWY100K datasets, outperforming currently available entity alignment methods by up to 16.8%, 27.5%, and 24.0% in precision, recall, and F1 measure, respectively.

1. Introduction

Knowledge graphs (KGs) are structured representations of different real-world entities and their relationships [1,2]. In essence, KG converts unstructured knowledge into a simple and well-defined collection of triples comprising a head entity, relation, and tail entity. KGs are widely used in many areas, such as recommendation systems [3], text classification [4], question-answering systems [5], semantic search [4], and knowledge-based reasoning [5].
The rapid development of KG techniques has led to the emergence of numerous large-scale KGs; this results in problems such as knowledge overlap between different KGs and ambiguous relationships, which are obstacles to KG integration on a semantic level. Multilingual KGs such as DBpedia [6], YAGO [7], and Freebase [8] contain large quantities of knowledge; however, their heterogeneous data sources make it difficult to fuse them into a comprehensive KG. Knowledge fusion aligns and merges heterogeneous and redundant knowledge in KGs, while entity alignment is used to discover entities in different KGs that refer to the same real-world entity. As each KG is developed independently, different KGs will have different knowledge sources, resulting in different surface forms, which poses a significant problem when fusing different KGs. Therefore, knowledge source fusion, integration of existing knowledge resources, and large-scale knowledge mapping are crucial endeavors.
Entity alignment can be used to link knowledge from different KGs and fuse KGs into larger, authoritative domain KGs, which can serve as a knowledge base for downstream applications. However, entity alignment requires complex calculations, and the conventional approach involves using entity-based labeling and handcrafted features, both of which are laborious and difficult to scale in terms of practical applications. Translation-based representation learning is a method for learning vector representations for entities in a dense low-dimensional space. TransE [9], a classic example of this approach, is the first method to utilize the structural embeddings of entities in a KG. Extensions of TransE, such as TransR [10], TransH [11], and TransD [12], are used to learn entity embedding representations and perform entity alignment or entity inference in the unified vector space, where the likelihood of alignment between any pair of entities is proportional to their semantic similarity, i.e., closeness in the embedding space. The most recent advances in KG entity alignment are embedding-based techniques, such as MTransE [13], a translation-based model for multilingual KG embeddings in which existing multilingual KGs are used for cross-lingual entity alignment. Herein, the entities and relations of each language are encoded in separate embedding spaces, whereas an alignment model embeds the multilingual vectors into an independent space where they are aligned by translation and linear transformations. MTransE aims to facilitate the construction of a coherent multilingual knowledge base and assist machines in dealing with expressions of entity relationships across different human languages. In IPTransE [14], entity alignment is performed using an iterative and parameter-sharing method for joint knowledge embeddings. As this method requires manual labeling to produce a seed set of pre-aligned entities, whose structure is projected to a low-dimensional vector space, it is highly dependent on the accuracy of the manually labeled seed set. If the labeling is incorrect, the iteration of the seed entity set will be extremely noisy, thereby decreasing the alignment accuracy. The JAPE model [15] jointly embeds the relationship structure of two KGs into a unified vector space and uses attribute correlations to cluster entities. Unlike other models, JAPE embeds attributes while learning entity embeddings and relationships between different KGs in a unified embedding space and uses attribute correlations to optimize entity embeddings. In other words, it is a joint attribute-preserving embedding model for cross-lingual entity alignment. However, if the attributes are heterogeneous and the KGs are ambiguously correlated, the effectiveness of attribute embedding can be significantly reduced. BootEA [16] adds bootstrapping to JAPE to reduce error accumulation in iteratively grown seed sets. Furthermore, it treats entity alignment as a classification problem and maximizes alignment likelihood over all labeled and unlabeled entities based on KG embeddings while using ε-truncated uniform negative sampling to improve the alignment performance. Additionally, labeled entities are edited or removed to reduce error accumulation during iterations. MultiKE [17] divides KG characteristics into multiple subsets, or views, and embeds each entity under specific views for complementary learning from multiple views, enhancing alignment accuracy.
KDCoE [18] uses the co-training approach to iteratively train two embedding models: one model is trained on the KG structure; the other is trained on entity descriptions for refining entity embeddings.
The embedding-based KG entity alignment methods described above utilize the TransE family of models to learn the structural embeddings of entities and relations. However, these models can only embed the entities and relations of a single KG into a unified vector space. As the entities in each KG are named using different naming schemes, their structural embeddings fall in different vector spaces, making it impossible to directly calculate entity similarities, which is one of the limitations of entity alignment. The current solution to this problem is to design a mapping function that projects entity embeddings from different KGs into a unified vector space. This mapping function can only be designed if a sufficient number of pre-aligned entity pairs is already available; however, these pre-aligned entities are collected manually, which can be problematic. Furthermore, it is difficult to determine whether the entity pairs are aligned accurately, considering that any inaccuracy can profoundly affect the entity alignment process.
To address the above issues, this study proposes to learn structural embeddings in parallel with attribute character embeddings. First, we calculate the similarities between the semantic embeddings of the relations and attributes. Relations and attributes with the same meaning are then renamed to the same name, with manual inspection and amendment, followed by the joint learning of structural embeddings and attribute character embeddings from the relation and attribute triples. Finally, the structural embeddings of the entities are transferred to a unified vector space, and entity alignment is performed by calculating the similarities of the structural embeddings.
The contributions of this study are summarized as follows:
(1)
We propose a relation and attribute alignment method based on cosine similarity. The cosine similarities of the semantic embeddings of relations and attributes are calculated, followed by manual inspection and amendment of the results. Relation and attribute alignment is then performed by renaming relations and attributes with the same meaning to the same name.
(2)
A learning model is proposed to learn the structural and attribute character embeddings from different KGs using their relation and attribute triples. Structural embeddings are learned via a triples modeling method using TransE and PTransE, which use the semantic information in direct and multi-step relation paths to extract structural embedding vectors from the embedding vector space. Attribute character embeddings are learned using the N-gram-based compositional function to encode a character sequence for the attribute values. Then, by using TransE to model attribute triples in the embedding vector space, we obtain the attribute character embedding vectors. Finally, the structural embeddings and attribute character embeddings are jointly learned to transfer the structural embedding vectors of entities from different KGs into a unified vector space.
(3)
We introduce a limit-based loss function that assigns absolutely low and high scores to positive and negative triples, respectively, to improve the loss function for embedding learning and prevent drift while mapping structural embeddings into the unified vector space.

2. Materials and Methods

2.1. Problem Formulation

In a knowledge graph, knowledge is represented as triples, which are either relation or attribute triples [19,20,21]. Relation and attribute triples are expressed as:
T_R = \{(h, r, t) \mid h, t \in E,\; r \in R\} (1)
T_A = \{(e, a, v) \mid e \in E,\; a \in A,\; v \in V\} (2)
where E is the set of entities, R is the set of relations, A is the set of attributes, V is the set of attribute values, T_R \subseteq E \times R \times E is the set of relation triples, and T_A \subseteq E \times A \times V is the set of attribute triples. Therefore, a KG can be expressed as KG = (E, R, A, V, T_R, T_A).
Based on this expression for KGs, entity alignment can be described as follows: given a pair of KGs, KG_1 and KG_2, the aim of entity alignment is to search for entity pairs (e_1, e_2), where e_1 \in KG_1 and e_2 \in KG_2 refer to the same real-world entity.
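For concreteness, the following minimal Python sketch shows the triple-based formulation above; the entity, relation, and attribute names are hypothetical examples, not data from the paper.

```python
# Minimal sketch of the KG = (E, R, A, V, T_R, T_A) formulation.
# All identifiers below are hypothetical, for illustration only.

relation_triples = {        # T_R: (head entity, relation, tail entity)
    ("Paris", "capital_of", "France"),
    ("France", "member_of", "European_Union"),
}
attribute_triples = {       # T_A: (entity, attribute, attribute value)
    ("Paris", "population", "2165423"),
    ("France", "area_km2", "643801"),
}

entities = {h for h, _, _ in relation_triples} \
    | {t for _, _, t in relation_triples} \
    | {e for e, _, _ in attribute_triples}
relations = {r for _, r, _ in relation_triples}
attributes = {a for _, a, _ in attribute_triples}
values = {v for _, _, v in attribute_triples}

kg = (entities, relations, attributes, values, relation_triples, attribute_triples)
```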

2.2. Relation and Attribute Alignment

If pre-aligned entity pairs are scarce, using them in the relation alignment module to transfer embeddings from different KGs into a unified vector space is not advisable. Instead, the relation and attribute triples of these KGs are extracted directly for relation and attribute alignment. In this study, this alignment was performed by calculating the similarities of the semantic embeddings of the relations and attributes, followed by manually inspecting and amending the results. Finally, the relations and attributes with the same meaning were renamed to the same name, allowing the two different KGs to be embedded in the same vector space and the KGs to be merged.
Here, similarity refers to cosine similarity [22], given as:
\cos(r_1, r_2) = \frac{r_1 \cdot r_2}{\|r_1\| \, \|r_2\|} (3)
\cos(a_1, a_2) = \frac{a_1 \cdot a_2}{\|a_1\| \, \|a_2\|} (4)
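As an illustration, the sketch below computes pairwise cosine similarities between relation (or attribute) name embeddings and collects candidate pairs above a similarity threshold for manual inspection. The embedding source and the 0.9 threshold are assumptions for this sketch, not values specified by Equations (3) and (4).

```python
import numpy as np

def cosine_similarity(x: np.ndarray, y: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (Equations (3)/(4))."""
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def candidate_alignments(names1, names2, embed, threshold=0.9):
    """Propose relation/attribute pairs whose semantic embeddings are similar.

    `embed` maps a relation or attribute name to a vector (e.g., from a
    pretrained word-embedding model); the threshold value is an assumption.
    The returned candidates are meant to be manually inspected and amended
    before matched items are renamed to a common name.
    """
    candidates = []
    for n1 in names1:
        for n2 in names2:
            sim = cosine_similarity(embed[n1], embed[n2])
            if sim >= threshold:
                candidates.append((n1, n2, sim))
    return sorted(candidates, key=lambda c: -c[2])
```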

2.3. Embedding Learning

Embedding learning includes the learning of structural embeddings and attribute character embeddings. Structural embeddings are learned from the relation triples T_R = T_R^{KG_1} \cup T_R^{KG_2}, whereas attribute character embeddings are learned from the attribute triples T_A = T_A^{KG_1} \cup T_A^{KG_2}.

2.3.1. Structural Embedding Learning

Structural embedding learning is performed using the TransE model, i.e., by minimizing the loss function LSE (Equation (5)):
L_{SE}^{direct} = \sum_{(h, r, t) \in T_R} \sum_{(h', r', t') \in T'_R} \left[ \gamma + \|h + r - t\|_{L_2} - \|h' + r' - t'\|_{L_2} \right]_+ (5)
where T_R is the set of valid relation triples; T'_R = \{(h', r, t) \mid h' \in E\} \cup \{(h, r, t') \mid t' \in E\} \cup \{(h, r', t) \mid r' \in R\} is the set of corrupted relation triples; and \gamma is the margin hyperparameter.
The TransE model has few parameters, low complexity, and good efficiency in the construction of large KGs. However, its simplicity limits its ability to model complex relations, fuse multisource information, and model relation paths. To address the inadequacies of TransE in handling complex KG relations, we embed entities and relations into two separate spaces: an entity space \mathbb{R}^m and a relation space \mathbb{R}^n (m and n are the dimensionalities of these spaces and may be equal). Each triple has entity vectors h, t \in \mathbb{R}^m and a relation vector r \in \mathbb{R}^n. First, the head and tail entities in the entity space are mapped to the relation space using the mapping matrix M_r \in \mathbb{R}^{m \times n}, which gives the head and tail entities in the relation space, h_r and t_r, expressed as:
h_r = h M_r, \quad t_r = t M_r (6)
With this, the loss function in Equation (5) can be replaced by the loss function in Equation (7) as:
L_{SE}^{direct} = \sum_{(h, r, t) \in T_R} \sum_{(h', r', t') \in T'_R} \left[ \gamma + \|h M_r + r - t M_r\|_{L_2} - \|h' M_r + r' - t' M_r\|_{L_2} \right]_+ (7)
When modeling triples, the aforementioned embedding learning method only considers direct entity relations. However, entities in large KGs often have multi-step relation paths that cannot be modeled by TransE. Therefore, the semantic information contained by these relation paths will be excluded, inevitably reducing the alignment performance. To address this problem, we use the PTransE [23] model to improve our method for embedding learning.
Equation (7) can be used to learn the direct relation r between the entity pair (h, t) of some triple (h, r, t). However, in the case of (h, P, t), where P(h, t) = \{p_1, p_2, \ldots, p_N\} is the set of multi-step relation paths between a pair of entities, with p = r_1 \rightarrow r_2 \rightarrow \cdots \rightarrow r_l being a path between the pair, the loss function shown in Equation (8) can be used to learn the triple of the entity pair as:
L_{SE}^{multi\text{-}step} = \sum_{(h, r, t) \in T_R} \left\{ \frac{1}{Z} \sum_{p \in P(h, t)} R(p \mid h, t) \sum_{(h, r', t) \in T'_R} \left[ \gamma + \|p - r\|_{L_2} - \|p - r'\|_{L_2} \right]_+ \right\} (8)
where R(p \mid h, t) is the plausibility of relation path p, Z = \sum_{p \in P(h, t)} R(p \mid h, t) is the normalization factor, and p is the embedding vector of the relation path, defined as p = r_1 + r_2 + \cdots + r_l in this study.
Combining Equations (7) and (8), the loss function for structural embedding learning can be expressed as:
L_{SE} = L_{SE}^{direct} + L_{SE}^{multi\text{-}step} (9)
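A simplified sketch of this structural-embedding objective follows. It scores a direct triple with the projected translation of Equation (7) and a relation path with a PTransE-style term in the spirit of Equation (8); mini-batching, negative sampling, and the path-plausibility weights R(p|h,t) are omitted for brevity, and the embedding dimensions are illustrative assumptions.

```python
import numpy as np

def projected_score(h, r, t, M_r):
    """||h M_r + r - t M_r||_2 for a direct triple (Equation (7))."""
    return np.linalg.norm(h @ M_r + r - t @ M_r, ord=2)

def path_score(path_relations, r):
    """||p - r||_2 with p = r_1 + r_2 + ... + r_l (Equation (8))."""
    p = np.sum(path_relations, axis=0)
    return np.linalg.norm(p - r, ord=2)

def margin_loss(pos_score, neg_score, gamma=1.0):
    """Margin-based ranking criterion [gamma + pos - neg]_+ of Equations (5)-(8)."""
    return max(0.0, gamma + pos_score - neg_score)

# Toy usage with random embeddings (dimensions and values are illustrative only).
rng = np.random.default_rng(0)
m, n = 100, 100
h, t = rng.normal(size=m), rng.normal(size=m)
r, r_neg = rng.normal(size=n), rng.normal(size=n)
M_r = rng.normal(size=(m, n))

direct = margin_loss(projected_score(h, r, t, M_r), projected_score(h, r_neg, t, M_r))
multi = margin_loss(path_score([r, r], r), path_score([r, r], r_neg))
total = direct + multi  # L_SE = L_SE^direct + L_SE^multi-step (Equation (9))
```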

2.3.2. Attribute Character Embedding Learning

For learning attribute character embeddings, we first define the character sequence of an attribute value, v = \{v_1, v_2, \ldots, v_t\}. The N-gram-based compositional function is then used to encode the character sequence of the attribute value:
f_a(v) = \sum_{n=1}^{N} \left( \frac{\sum_{i=1}^{t} \sum_{j=i}^{n} v_j}{t - i - 1} \right) (10)
where v_1, v_2, \ldots, v_t are the character embeddings of the attribute value.
Based on the TransE model, the attribute a in (e, a, v) is interpreted as a mapping from the entity to the attribute value. In other words, adding the attribute vector a to the entity vector e yields the attribute-value vector f_a(v). The attribute character embeddings are learned by minimizing the loss function L_CE, given as:
L_{CE} = \sum_{(e, a, v) \in T_A} \sum_{(e', a', v') \in T'_A} \left[ \gamma + \|e + a - f_a(v)\|_{L_2} - \|e' + a' - f_a(v')\|_{L_2} \right]_+ (11)
where T_A is the set of valid attribute triples and T'_A = \{(e', a, v) \mid e' \in E\} \cup \{(e, a, v') \mid v' \in V\} \cup \{(e, a', v) \mid a' \in A\} is the set of corrupted attribute triples.
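The sketch below illustrates the N-gram compositional encoding of an attribute value string and the attribute-triple distance used in Equation (11). It is one reasonable reading of Equation (10), in which character embeddings in each length-n window are averaged and the window summaries are summed; the character-embedding table, the value of N, and the example string are assumptions for illustration only.

```python
import numpy as np

def ngram_compose(char_vectors, N=3):
    """N-gram compositional encoding of an attribute value (one reading of Eq. (10)).

    For every n up to N, the character embeddings in each length-n window are
    averaged and all window summaries are added up. N = 3 is an assumption,
    not a value reported in the paper.
    """
    summary = np.zeros_like(char_vectors[0])
    t = len(char_vectors)
    for n in range(1, N + 1):
        for i in range(t - n + 1):
            summary += np.mean(char_vectors[i:i + n], axis=0)
    return summary

def attribute_score(e, a, f_v):
    """||e + a - f_a(v)||_2, the distance used inside L_CE (Equation (11))."""
    return np.linalg.norm(e + a - f_v, ord=2)

# Illustrative usage with a random character-embedding table (hypothetical data).
rng = np.random.default_rng(1)
dim = 100
char_table = {c: rng.normal(size=dim) for c in "0123456789."}
f_v = ngram_compose([char_table[c] for c in "2165423"])
```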

2.3.3. Improved Loss Function for Embedded Learning

The loss functions shown in Equations (5), (7), (8) and (11) use the margin-based ranking criterion to learn embeddings. The goal of the learning process is to score positive triples lower than the negative triples. However, the aforementioned loss functions make it impossible for positive triples to obtain absolutely low scores, resulting in drifts during the mapping of entity embeddings to the unified vector space. To address this problem, we propose the limit-based loss function. Modifying the loss function in Equation (5) to the limit-based criterion, we obtain:
L_{SE}^{direct} = \sum_{(h, r, t) \in T_R} \left[ \|h + r - t\|_{L_2} - \gamma_1 \right]_+ + \mu \sum_{(h', r', t') \in T'_R} \left[ \gamma_2 - \|h' + r' - t'\|_{L_2} \right]_+ (12)
where \gamma_1 > 0 and \gamma_2 > 0 are margin hyperparameters and \mu is the balancing factor. The loss function in Equation (12) not only ensures that positive triples are scored lower than negative triples but also drives positive and negative triples toward absolutely low and high scores, i.e., \|h + r - t\|_{L_2} \le \gamma_1 and \|h' + r' - t'\|_{L_2} \ge \gamma_2, respectively.
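A minimal sketch of the limit-based criterion of Equation (12) is shown below; the score function is the plain TransE translation norm, and the hyperparameter values mirror the experimental settings in Section 3.1 but are otherwise only examples.

```python
import numpy as np

def translation_score(h, r, t):
    """TransE score ||h + r - t||_2."""
    return np.linalg.norm(h + r - t, ord=2)

def limit_based_loss(pos_scores, neg_scores, gamma1=0.01, gamma2=2.0, mu=0.2):
    """Limit-based criterion (Equation (12)): push positive-triple scores below
    gamma1 and negative-triple scores above gamma2, instead of only enforcing
    a relative margin between them."""
    pos_term = sum(max(0.0, s - gamma1) for s in pos_scores)
    neg_term = sum(max(0.0, gamma2 - s) for s in neg_scores)
    return pos_term + mu * neg_term
```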
The modified versions of the loss functions in Equations (7), (8) and (11) are shown in Equations (13), (14) and (16), respectively.
L_{SE}^{direct} = \sum_{(h, r, t) \in T_R} \left[ \|h M_r + r - t M_r\|_{L_2} - \gamma_1 \right]_+ + \mu \sum_{(h', r', t') \in T'_R} \left[ \gamma_2 - \|h' M_r + r' - t' M_r\|_{L_2} \right]_+ (13)
L_{SE}^{multi\text{-}step} = \sum_{(h, r, t) \in T_R} \left\{ \frac{1}{Z} \sum_{p \in P(h, t)} R(p \mid h, t) \left[ \|p - r\|_{L_2} - \gamma_1 \right]_+ \right\} + \mu \sum_{(h, r', t) \in T'_R} \left\{ \frac{1}{Z} \sum_{p \in P(h, t)} R(p \mid h, t) \left[ \gamma_2 - \|p - r'\|_{L_2} \right]_+ \right\} (14)
L_{SE} = L_{SE}^{direct} + L_{SE}^{multi\text{-}step} (15)
L_{CE} = \sum_{(e, a, v) \in T_A} \left[ \|e + a - f_a(v)\|_{L_2} - \gamma_1 \right]_+ + \mu \sum_{(e', a', v') \in T'_A} \left[ \gamma_2 - \|e' + a' - f_a(v')\|_{L_2} \right]_+ (16)
The learning of structural embeddings and attribute character embeddings yields, for each entity in KG1 and KG2, a structural embedding e_SE and an attribute character embedding e_CE. In general, the structural embeddings tend to fall in different vector spaces, considering the entities in KG1 and KG2 are usually named using different naming schemes. In contrast, the vector representations of the attribute values in KG1 and KG2 always fall into the same vector space, considering the character embeddings are learned from attribute value strings. Consequently, e_CE falls into a unified vector space. Therefore, the learned attribute character embeddings can be used to transfer the structural embeddings of entities into a unified vector space. The joint learning of structural embeddings and attribute character embeddings is performed by minimizing L_SIM (Equation (17)).
L_{SIM} = \sum_{e \in KG_1 \cup KG_2} \left[ 1 - \frac{e_{SE} \cdot e_{CE}}{\|e_{SE}\| \, \|e_{CE}\|} \right] (17)
Finally, Equations (15)–(17) are combined to obtain the overall training loss function L (Equation (18)).
L = L_{SE} + L_{CE} + L_{SIM} (18)
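The joint-learning term of Equation (17) can be sketched as follows: for each entity of both KGs, one minus the cosine similarity between its structural embedding and its attribute character embedding is accumulated. How the per-entity vectors are produced is assumed to come from the embedding modules above.

```python
import numpy as np

def similarity_loss(struct_emb, char_emb):
    """L_SIM (Equation (17)): pull each entity's structural embedding toward
    its attribute character embedding via cosine similarity.

    Both arguments map entity identifiers (from KG1 and KG2) to vectors.
    """
    loss = 0.0
    for ent, e_se in struct_emb.items():
        e_ce = char_emb[ent]
        cos = np.dot(e_se, e_ce) / (np.linalg.norm(e_se) * np.linalg.norm(e_ce))
        loss += 1.0 - cos
    return loss

# The overall training objective of Equation (18) would then combine the
# structural, attribute, and similarity terms: L = L_SE + L_CE + L_SIM.
```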

2.4. Entity Alignment

By jointly learning the structural embeddings and attribute character embeddings, the entity embeddings of KG1 and KG2 can be placed into a unified vector space. Additionally, similar entities will have short semantic distances, defined as:
E(e_1, e_2) = \|e_1 - e_2\|_{L_2}, \quad e_1 \in E_1,\; e_2 \in E_2 (19)
Given an unaligned entity \hat{e}_1 in KG1, the aim of entity alignment is to identify the unaligned entity \hat{e}_2 in KG2 with the minimum semantic distance to \hat{e}_1, i.e., \hat{e}_2 = \arg\min_{e_2} \|\hat{e}_1 - e_2\|_{L_2}. Furthermore, we define a semantic distance threshold hyperparameter \theta: if \|\hat{e}_1 - \hat{e}_2\|_{L_2} < \theta, then \hat{e}_1 and \hat{e}_2 are deemed to refer to the same real-world entity. These considerations form the basis by which the entities in KG1 and KG2 are aligned.
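The alignment rule described above can be sketched as follows: for each unaligned entity in KG1, find its nearest unaligned entity in KG2 in the unified embedding space and accept the pair only if the distance falls below the threshold θ. The embedding dictionaries are assumed outputs of the joint learning step, and the greedy search is a simplification of the procedure.

```python
import numpy as np

def align_entities(emb_kg1, emb_kg2, theta=1.0):
    """Greedy entity alignment by semantic distance (Equation (19)) with threshold theta.

    `emb_kg1` and `emb_kg2` map entity identifiers to structural embeddings
    already placed in the unified vector space; theta = 1.0 follows the
    experimental setting reported in Section 3.1.
    """
    aligned = []
    for e1, v1 in emb_kg1.items():
        # nearest KG2 entity under the L2 distance of Equation (19)
        e2, dist = min(
            ((cand, np.linalg.norm(v1 - v2, ord=2)) for cand, v2 in emb_kg2.items()),
            key=lambda pair: pair[1],
        )
        if dist < theta:
            aligned.append((e1, e2, dist))
    return aligned
```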
To perform entity alignment, we first construct an aligned entity set M = \{(\hat{e}_1, \hat{e}_2)\} and minimize the loss function shown in Equation (20):
L_{EA} = \sum_{(\hat{e}_1, \hat{e}_2) \in M} \left( L_{EA}^{SE}(\hat{e}_1, \hat{e}_2) + L_{EA}^{SE}(\hat{e}_2, \hat{e}_1) + L_{EA}^{CE}(\hat{e}_1, \hat{e}_2) + L_{EA}^{CE}(\hat{e}_2, \hat{e}_1) \right) (20)
where the expressions of L_{EA}^{SE}(\hat{e}_1, \hat{e}_2), L_{EA}^{SE}(\hat{e}_2, \hat{e}_1), L_{EA}^{CE}(\hat{e}_1, \hat{e}_2), and L_{EA}^{CE}(\hat{e}_2, \hat{e}_1) are shown in Equations (21)-(24), respectively.
L_{EA}^{SE}(\hat{e}_1, \hat{e}_2) = \sum_{(\hat{e}_1, r, t)} \left[ \|\hat{e}_2 M_r + r - t M_r\|_{L_2} - \gamma_1 \right]_+ + \sum_{(\hat{e}_1, r, t)} \left\{ \frac{1}{Z} \sum_{p \in P(\hat{e}_2, t)} R(p \mid \hat{e}_2, t) \left[ \|\hat{e}_2 + p - t\|_{L_2} - \gamma_1 \right]_+ \right\} + \sum_{(h, r, \hat{e}_1)} \left[ \|h M_r + r - \hat{e}_2 M_r\|_{L_2} - \gamma_1 \right]_+ + \sum_{(h, r, \hat{e}_1)} \left\{ \frac{1}{Z} \sum_{p \in P(h, \hat{e}_2)} R(p \mid h, \hat{e}_2) \left[ \|h + p - \hat{e}_2\|_{L_2} - \gamma_1 \right]_+ \right\} (21)
L_{EA}^{SE}(\hat{e}_2, \hat{e}_1) = \sum_{(\hat{e}_2, r, t)} \left[ \|\hat{e}_1 M_r + r - t M_r\|_{L_2} - \gamma_1 \right]_+ + \sum_{(\hat{e}_2, r, t)} \left\{ \frac{1}{Z} \sum_{p \in P(\hat{e}_1, t)} R(p \mid \hat{e}_1, t) \left[ \|\hat{e}_1 + p - t\|_{L_2} - \gamma_1 \right]_+ \right\} + \sum_{(h, r, \hat{e}_2)} \left[ \|h M_r + r - \hat{e}_1 M_r\|_{L_2} - \gamma_1 \right]_+ + \sum_{(h, r, \hat{e}_2)} \left\{ \frac{1}{Z} \sum_{p \in P(h, \hat{e}_1)} R(p \mid h, \hat{e}_1) \left[ \|h + p - \hat{e}_1\|_{L_2} - \gamma_1 \right]_+ \right\} (22)
L_{EA}^{CE}(\hat{e}_1, \hat{e}_2) = \sum_{(\hat{e}_1, a, v)} \left[ \|\hat{e}_2 + a - f_a(v)\|_{L_2} - \gamma_1 \right]_+ (23)
L_{EA}^{CE}(\hat{e}_2, \hat{e}_1) = \sum_{(\hat{e}_2, a, v)} \left[ \|\hat{e}_1 + a - f_a(v)\|_{L_2} - \gamma_1 \right]_+ (24)

2.5. Datasets

Two dataset collections were used to evaluate the entity alignment methods:
DBP15K: This dataset comprises cross-lingual data extracted from DBpedia and includes Chinese-to-English (DBP15K ZH-EN), Japanese-to-English (DBP15K JA-EN), and French-to-English (DBP15K FR-EN) subsets. Each subset comprises 15,000 aligned entity pairs.
DWY100K: This dataset comprises monolingual data extracted from DBpedia, Wikidata, and YAGO3, and its subsets are DWY100K DBP-WD and DWY100K DBP-YG. Each subset comprises 100,000 aligned entity pairs.
A statistical summary of the DBP15K and DWY100K datasets is presented in Table 1.

3. Results

3.1. Evaluation Indicators and Experimental Setting

Hits@k, mean rank (MR), and mean reciprocal rank (MRR) were used to evaluate the link prediction performance while comparing the methods.
Hits@k is the ratio of correctly aligned entities ranked in the top k. The larger the Hits@k value, the better the performance of the model.
Hits@k = \frac{1}{|S|} \sum_{i=1}^{|S|} \mathbb{I}(rank_i \le k) (25)
where S is the set of triples, |S| is the number of triples, rank_i is the link prediction rank of the i-th triple, and \mathbb{I}(\cdot) is the indicator function, which equals 1 when the condition is true and 0 otherwise.
MR is the mean rank of correctly aligned entities; smaller values are indicative of better performance.
MR = \frac{1}{|S|} \sum_{i=1}^{|S|} rank_i = \frac{1}{|S|} \left( rank_1 + rank_2 + \cdots + rank_{|S|} \right) (26)
MRR is the mean of the reciprocal rank of the correctly aligned entities; larger values indicate better performance.
MRR = \frac{1}{|S|} \sum_{i=1}^{|S|} \frac{1}{rank_i} = \frac{1}{|S|} \left( \frac{1}{rank_1} + \frac{1}{rank_2} + \cdots + \frac{1}{rank_{|S|}} \right) (27)
The metrics used to evaluate entity alignment were precision, recall, and F1 measure.
Precision is the ratio of correctly aligned entities to all aligned entities, i.e., the accuracy of entity alignment, expressed as:
Precision = \frac{TN}{TN + FN} (28)
where TN is the number of correctly aligned entities in the experimental results and FN is the number of incorrectly aligned entities. Higher precision values indicate better performance.
Recall is the ratio of correctly aligned entities to all aligned entities in the dataset, given as:
Recall = \frac{TN}{AN} (29)
where AN is the total number of aligned entity pairs in the dataset. Larger recall values indicate better performance.
F1 measure is a combined measure of precision and recall, as shown in Equation (30):
F1 = \frac{2 \times Recall \times Precision}{Recall + Precision} (30)
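The evaluation metrics of Equations (25)-(30) can be computed directly from alignment ranks and counts, as in the sketch below; the example ranks and counts are hypothetical, not results from the paper.

```python
def hits_at_k(ranks, k=10):
    """Hits@k (Equation (25)): fraction of correct alignments ranked in the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

def mean_rank(ranks):
    """MR (Equation (26))."""
    return sum(ranks) / len(ranks)

def mean_reciprocal_rank(ranks):
    """MRR (Equation (27))."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def precision_recall_f1(correct, incorrect, total_gold_pairs):
    """Precision, recall, and F1 (Equations (28)-(30)) from alignment counts."""
    precision = correct / (correct + incorrect)
    recall = correct / total_gold_pairs
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Example with hypothetical ranks and counts.
ranks = [1, 2, 1, 5, 12, 3]
print(hits_at_k(ranks, 10), mean_rank(ranks), mean_reciprocal_rank(ranks))
print(precision_recall_f1(correct=90, incorrect=10, total_gold_pairs=120))
```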
In the proposed entity alignment model, optimization was performed using stochastic gradient descent (SGD). The model parameters were configured as follows. Margin hyperparameters: γ1 = 0.01 and γ2 = 2; balancing factor: µ = 0.2; learning rate = 0.001; number of iterations: {500, 1500, 2500, 3500, 4500}; θ = 1.0; search step length: k = 0.2; dimensionality of the entity, relation, attribute, and attribute value embeddings = 100.

3.2. Validation of the Proposed Entity Alignment Method

In this section, we investigate the effects of (1) including semantic information from multi-step relation paths in structural embedding learning (Section 2.3.1) and (2) using the improved loss function for embedding learning (Section 2.3.3) on the entity alignment performance of the proposed model. Furthermore, the proposed method is compared with the following models to verify its effectiveness in improving entity alignment performance.
Model I: The semantic information in multi-step relation paths is not considered when learning structural embeddings, and the conventional loss function (margin-based ranking criterion) is used for embedding learning.
Model II: The semantic information in multi-step relation paths is considered when learning structural embeddings, and the conventional loss function (margin-based ranking criterion) is used for embedding learning.
Model III: The semantic information in multi-step relation paths is not considered when learning structural embeddings, and the limit-based loss function proposed in Section 2.3.3 is used for embedding learning.
Table 2 and Table 3 show the entity alignment performances of the proposed method and Models I, II, and III on the DBP15K and DWY100K datasets, in terms of Hits@10, MR, and MRR.
As shown in Table 2 and Table 3, the proposed method outperforms Models I–III by varying degrees on the DBP15K and DWY100K datasets in all metrics. Furthermore, Models II and III outperform Model I on both datasets. Therefore, including semantic information from multi-step relation paths when learning structural embeddings and using the improved loss function both benefit the entity alignment performance, demonstrating the efficacy of the proposed method.
Figure 1 presents the data in Table 2 and Table 3 as bar charts to provide a better understanding of the experimental results. Figure 1a–c compare the proposed method to Models I–III in terms of Hits@10, MR, and MRR on the DBP15K and DWY100K datasets. As shown in Figure 1a,c, the proposed method exhibited significantly higher Hits@10 and MRR scores than Models I, II, and III. In addition, Figure 1b indicates that our method exhibited a significantly lower MR than Models I, II, and III, which provides further evidence of the efficacy of our method in improving entity alignment performance.
Table 4 and Table 5 show the entity alignment performances of our method and Models I–III in terms of precision, recall, and F1 measure on the DBP15K and DWY100K datasets.
According to the experimental results, on the ZH-EN dataset, the proposed method improved precision, recall, and F1 measure by 3.6–8.5%, 2.5–10.5%, and 2.2–9.8%, respectively, compared with Models I–III. On the JA-EN dataset, our method improved precision, recall, and F1 measure by 4.6–12.8%, 5.1–10.9%, and 2.7–11.5%, respectively. On the FR-EN dataset, our method improved precision, recall, and F1 measure by 5.7–13.1%, 3.4–11.3%, and 1.8–9.9%, respectively. On the DBP-WD dataset, our method improved precision, recall, and F1 measure by 5.4–12.2%, 3.4–7.5%, and 3.2–10.6%, respectively. On the DBP-YG dataset, our method improved precision, recall, and F1 measure by 4.0–13.4%, 8.3–14.6%, and 4.0–10.9%, respectively. These results indicate that our model improves the performance on all aforementioned datasets in all metrics, and hence, outperforms Models I–III.
The data in Table 4 and Table 5 are shown as bar charts in Figure 2 to facilitate a more intuitive understanding of the experimental results. Figure 2a–e compare our method to Models I–III on the ZH-EN, JA-EN, FR-EN, DBP-WD, and DBP-YG datasets, in terms of precision, recall, and F1 measure. The comparison results verify that our method significantly outperforms Models I–III on all three metrics, highlighting the effectiveness of our model in improving entity alignment performance.
The epoch-wise changes in the F1 measure of our method and Models I–III on the ZH-EN, JA-EN, FR-EN, DBP-WD, and DBP-YG datasets are shown in Figure 3. The F1 measure gradually improves and stabilizes with each epoch for all methods. Furthermore, our method exhibited the highest F1 measure of the aforementioned methods, highlighting its effectiveness in improving entity alignment performance.

3.3. Validating the Superiority of Our Method in Entity Alignment

In this section, we compare the proposed method to six other entity alignment methods: MTransE [13], IPTransE [14], JAPE [15], BootEA [16], MultiKE [17], and KDCoE [18]. Table 6 and Table 7 show the entity alignment performances of these methods on the DBP15K and DWY100K datasets, in terms of Hits@10, MR, and MRR.
According to the experimental results, the proposed method outperformed all other methods in all metrics on the DBP15K and DWY100K datasets. As expected, MTransE performed poorly in the entity alignment task, considering it loses information when learning the mapping between different embedding spaces. Consequently, it could not compute the similarities between entities from different KGs. IPTransE and JAPE outperformed MTransE, considering they can utilize relation paths and entity attributes. However, MTransE, IPTransE, and JAPE are reliant on the number of seed alignments, which constrains their ability to improve the entity alignment performance. Although KDCoE uses entity descriptions for entity alignment, the semantic information in these descriptions has limited applications. Furthermore, KDCoE adds newly labeled entities to the seed set during co-training, making it susceptible to error accumulation. As a result, the entity alignment performance of this model is inferior to that of the proposed model. Furthermore, compared with the MultiKE model, which utilizes the entity structure, names, and descriptions for entity alignment, our model exhibited higher Hits@10 and MRR scores and lower MR scores, indicating that our model better uses the entity, relation, and attribute information, and hence, performs better in entity alignment.
The experimental results in Table 6 and Table 7 are shown as bar charts in Figure 4. The Hits@10, MR, and MRR of the aforementioned methods on the DBP15K and DWY100K datasets are shown in Figure 4a–c. As shown in Figure 4a,c, our method exhibited higher Hits@10 and MRR scores compared to the six other methods. Figure 4b indicates that our method exhibited the lowest MR values. Therefore, our method is superior to all other methods in terms of entity alignment performance.
Table 8 and Table 9 compare our method to the other methods in terms of precision, recall, and F1 measure on the DBP15K and DWY100K datasets, respectively.
According to the experimental results, on the ZH-EN dataset, our model improved precision, recall, and F1 measure by 1.4–7.5%, 5.1–27.5%, and 6.0–19.6%, respectively. On the JA-EN dataset, our model improved precision, recall, and F1 measure by 3.0–16.8%, 3.9–26.7%, and 3.5–24.0%, respectively. On the FR-EN dataset, our model improved precision, recall, and F1 measure by 3.8–13.5%, 3.7–18.3%, and 7.0–17.5%, respectively. On the DBP-WD dataset, our model improved precision, recall, and F1 measure by 2.1–9.7%, 2.2–7.1%, and 3.6–10.3%, respectively. On the DBP-YG dataset, our model improved precision, recall, and F1 measure by 1.0–7.4%, 1.7–12.5%, and 2.4–9.6%, respectively. Therefore, our model significantly improved all three metrics on all datasets, thereby confirming that our model is superior to the other entity alignment models in terms of entity alignment performance.
In Figure 5, the data in Table 8 and Table 9 are shown in the form of bar charts. Figure 5a–e compare our method to six other methods in terms of precision, recall, and F1 measure on the ZH-EN, JA-EN, FR-EN, DBP-WD, and DBP-YG datasets. These figures clearly indicate that our method exhibited higher precision, recall, and F1 measure scores compared to all other methods, suggesting that our method is superior in terms of entity alignment performance.

4. Discussion

Currently, entity alignment is generally performed based on knowledge representation learning, which requires an adequate number of pre-aligned entity pairs to be collected in advance. Considering these pre-aligned entity pairs must be manually collected, the collection process may be plagued by various difficulties. In addition, it is difficult to ensure that the pre-aligned entity pairs are aligned accurately, considering that any inaccuracy can negatively affect the subsequent entity alignment process. Therefore, to address these issues, this study proposed an entity alignment method based on the joint learning of entity and attribute representations. We proposed a method for relation and attribute alignment based on cosine similarity by calculating the cosine similarities between the semantic embeddings of the relations and attributes, followed by manual inspection and amendments. The relations and attributes were then aligned by renaming the relations and attributes with the same meaning to the same name. Then, we proposed a learning model for the structural embeddings and attribute character embeddings of relation and attribute triples from different KGs. The structural embedding learning was performed via a triple modeling method using TransE and PTransE, which incorporated the semantic information contained by direct and multi-step relation paths, thereby extracting the structural embedding vectors from the embedding vector space. The attribute character embeddings were learned using the N-gram-based compositional function to encode a character sequence for the attribute values, with TransE used to model attribute triples in the embedding vector space to obtain attribute character embedding vectors. Additionally, the structural embeddings and attribute character embeddings were jointly learned to enable transferring structural embeddings from different KGs into a unified vector space. Entity alignment was then performed by calculating the similarities of the structural embeddings of entities. Moreover, a limit-based loss function was proposed to replace the conventional margin-based ranking criterion for learning structural and attribute character embeddings, which ensured that positive and negative triples received absolutely low and high scores, respectively, thereby preventing drifts during the projection of structural embeddings into the unified vector space. Finally, we experimentally demonstrated that the proposed entity alignment method performed well on the DBP15K and DWY100K datasets and outperformed all entity alignment methods of the same type by up to 16.8%, 27.5%, and 24.0% in precision, recall, and F1 measure, respectively.

Author Contributions

Conceptualization, C.X.; methodology, C.X.; software, C.X.; validation, C.X., L.Z. and Z.Z.; writing—original draft preparation, C.X.; writing—review and editing, C.X., L.Z. and Z.Z.; funding acquisition, L.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the National Natural Science Foundation of China under Grant 91538201, in part by the Taishan Scholar Project of Shandong Province under Grant ts201511020, and in part by the Chinese National Key Laboratory of Science and Technology on Information System Security under Grant 6142111190404.

Data Availability Statement

The data supporting the reported results are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Song, Q.; Wu, Y.; Lin, P.; Dong, L.X.; Sun, H. Mining Summaries for Knowledge Graph Search. IEEE Trans. Knowl. Data Eng. 2018, 30, 1887–1900. [Google Scholar] [CrossRef]
  2. Wang, Q.; Mao, Z.; Wang, B.; Guo, L. Knowledge Graph Embedding: A Survey of Approaches and Applications. IEEE Trans. Knowl. Data Eng. 2017, 29, 2724–2743. [Google Scholar] [CrossRef]
  3. Zhang, F.; Yuan, N.J.; Lian, D.; Xie, X.; Ma, W.Y. Collaborative Knowledge Base Embedding for Recommender Systems. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 353–362. [Google Scholar] [CrossRef]
  4. Wang, C.; Song, Y.; Li, H.; Zhang, M.; Han, J. Text Classification with Heterogeneous Information Network Kernels. Proc. AAAI Conf. Artif. Intell. 2016, 30, 2130–2136. [Google Scholar] [CrossRef]
  5. Zhang, Y.; Dai, H.; Kozareva, Z.; Smola, A.J.; Song, L. Variational Reasoning for Question Answering with Knowledge Graph. Proc. AAAI Conf. Artif. Intell. 2018, 32, 6069–6076. [Google Scholar] [CrossRef]
  6. Auer, S.; Bizer, C.; Kobilarov, G.; Lehmann, J.; Cyganiak, R.; Ives, Z.G. DBpedia: A Nucleus for a Web of Open Data. In Proceedings of the 6th International Semantic Web Conference (ISWC 2007), Busan, Republic of Korea, 11–15 November 2007; pp. 722–735. [Google Scholar]
  7. Suchanek, F.M.; Kasneci, G.; Weikum, G. Yago: A Core of Semantic Knowledge. In Proceedings of the 16th International Conference World Wide Web, Banff, AB, Canada, 8–12 May 2007; pp. 697–706. [Google Scholar]
  8. Bollacker, K.; Evans, C.; Paritosh, P.; Sturge, T.; Taylor, J. Freebase: A Collaboratively Created Graph Database for Structuring Human Knowledge. In Proceedings of the ACM SIGMOD International Conference on Management of Data (SIGMOD 2008), Vancouver, BC, Canada, 10–12 June 2008; pp. 1247–1250. [Google Scholar]
  9. Bordes, A.; Usunier, N.; García-Durán, A.; Weston, J.; Yakhnenko, O. Translating Embeddings for Modeling Multi-Relational Data. In Proceedings of the 26th International Conference on Neural Information Processing Systems (NIPS 2013), Lake Tahoe, NV, USA, 5–10 December 2013; pp. 2787–2795. [Google Scholar]
  10. Lin, Y.; Liu, Z.; Sun, M.; Liu, Y.; Zhu, X. Learning Entity and Relation Embeddings for Knowledge Graph Completion. Proc. AAAI Conf. Artif. Intell. 2015, 29, 2181–2187. [Google Scholar] [CrossRef]
  11. Wang, Z.; Zhang, J.; Feng, J.; Chen, Z. Knowledge Graph Embedding by Translating on Hyperplanes. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, Québec City, QC, Canada, 27–31 July 2014; Volume 28, pp. 1112–1119. [Google Scholar] [CrossRef]
  12. Ji, G.; He, S.; Xu, L.; Liu, K.; Zhao, J. Knowledge Graph Embedding via Dynamic Mapping Matrix. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, Beijing, China, 26–31 July 2015; pp. 687–696. [Google Scholar] [CrossRef]
  13. Chen, M.; Tian, Y.; Yang, M.; Zaniolo, C. Multilingual Knowledge Graph Embeddings for Cross-Lingual Knowledge Alignment. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, Melbourne, Australia, 19–25 August 2017; pp. 1511–1517. [Google Scholar] [CrossRef]
  14. Zhu, H.; Xie, R.; Liu, Z.; Sun, M. Iterative Entity Alignment via Joint Knowledge Embeddings. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, Melbourne, Australia, 19–25 August 2017; pp. 4258–4264. [Google Scholar]
  15. Sun, Z.; Hu, W.; Li, C. Cross-Lingual Entity Alignment via Joint Attribute-Preserving Embedding. In Proceedings of the International Semantic Web Conference, Vienna, Austria, 21–25 October 2017; pp. 628–644. [Google Scholar] [CrossRef]
  16. Sun, Z.; Hu, W.; Zhang, Q.; Qu, Y. Bootstrapping entity alignment with knowledge graph embedding. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 13–19 July 2018; pp. 4396–4402. [Google Scholar]
  17. Zhang, Q.; Sun, Z.; Hu, W.; Chen, M.; Guo, L.; Qu, Y. Multi-view Knowledge Graph Embedding for Entity Alignment. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, Macao, China, 10–16 August 2019; pp. 5429–5435. [Google Scholar]
  18. Chen, M.H.; Tian, Y.T.; Chang, K.W.; Skiena, S.; Zaniolo, C. Co-training Embeddings of Knowledge Graphs and Entity Descriptions for Cross-Lingual Entity Alignment. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 13–19 July 2018; AAAI Press: Stockholm, Sweden, 2018; pp. 3998–4004. [Google Scholar] [CrossRef]
  19. Wang, Z.; Lv, Q.; Lan, X.; Zhang, Y. Cross Lingual Knowledge Graph Alignment via Graph Convolutional Networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31 October–4 November 2018; pp. 349–357. [Google Scholar] [CrossRef]
  20. Wu, Y.; Liu, X.; Feng, Y.; Wang, Z.; Yan, R.; Zhao, D. Relation-Aware Entity Alignment for Heterogeneous Knowledge Graphs. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, Macao, China, 10–16 August 2019; pp. 5278–5284. [Google Scholar] [CrossRef]
  21. Xu, K.; Wang, L.; Yu, M.; Feng, Y.; Song, Y.; Wang, Z.; Yu, D. Cross-Lingual Knowledge Graph Alignment via Graph Matching Neural Network. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, 28 July–1 August 2019; pp. 3156–3161. [Google Scholar] [CrossRef]
  22. Wei, F.; Vijayakumar, P.; Kumar, N.; Zhang, R.; Cheng, Q. Privacy-Preserving Implicit Authentication Protocol Using Cosine Similarity for Internet of Things. IEEE Internet Things J. 2021, 8, 5599–5606. [Google Scholar] [CrossRef]
  23. Lin, Y.; Liu, Z.; Luan, H.; Sun, M.; Rao, S.; Liu, S. Modeling Relation Paths for Representation Learning of Knowledge Bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, 17–21 September 2015; pp. 705–714. [Google Scholar]
Figure 1. Comparison of the entity alignment performance between our method and Models I–III on the ZH-EN, JA-EN, FR-EN, DBP-WD, and DBP-YG datasets, in terms of (a) Hits@10; (b) MR; (c) MRR.
Figure 2. Comparison between our method and Models I–III in terms of precision, recall, and F1 measure on datasets: (a) ZH-EN; (b) JA-EN; (c) FR-EN; (d) DBP-WD; (e) DBP-YG.
Figure 3. Epoch-wise changes in the F1 measure of our method and Models I–III on: (a) ZH-EN; (b) JA-EN; (c) FR-EN; (d) DBP-WD; (e) DBP-YG datasets.
Figure 4. Comparison between our method and six other methods on the ZH-EN, JA-EN, FR-EN, DBP-WD, and DBP-YG datasets in terms of entity alignment performance: (a) Hits@10; (b) MR; (c) MRR.
Figure 5. Comparison between our method and six other methods in terms of entity alignment performance (precision, recall, and F1 measure) on datasets: (a) ZH-EN; (b) JA-EN; (c) FR-EN; (d) DBP-WD; (e) DBP-YG.
Table 1. DBP15K and DWY100K datasets.

Dataset | Subset | KG | Entities | Relations | Attributes | Relation Triples | Attribute Triples
DBP15K | ZH-EN | Chinese | 66,469 | 2830 | 8113 | 153,929 | 379,684
DBP15K | ZH-EN | English | 98,125 | 2317 | 7173 | 237,674 | 567,755
DBP15K | JA-EN | Japanese | 65,744 | 2043 | 5882 | 164,373 | 354,619
DBP15K | JA-EN | English | 95,680 | 2096 | 6066 | 233,319 | 497,230
DBP15K | FR-EN | French | 66,858 | 1379 | 4547 | 192,191 | 528,665
DBP15K | FR-EN | English | 105,889 | 2209 | 6422 | 278,590 | 576,543
DWY100K | DBP-WD | DBpedia | 100,000 | 330 | 351 | 463,294 | 381,166
DWY100K | DBP-WD | Wikidata | 100,000 | 220 | 729 | 448,774 | 789,815
DWY100K | DBP-YG | DBpedia | 100,000 | 302 | 334 | 428,952 | 451,646
DWY100K | DBP-YG | Yago3 | 100,000 | 31 | 23 | 502,563 | 118,376
Table 2. Entity alignment performance of our method and Models I, II, and III on the DBP15K dataset, in terms of Hits@10, MR, and MRR.

Method | ZH-EN (Hits@10 / MR / MRR) | JA-EN (Hits@10 / MR / MRR) | FR-EN (Hits@10 / MR / MRR)
Model I | 0.781 / 179.4 / 0.777 | 0.793 / 188.9 / 0.763 | 0.728 / 191.3 / 0.796
Model II | 0.868 / 142.3 / 0.837 | 0.854 / 98.5 / 0.862 | 0.874 / 102.7 / 0.884
Model III | 0.847 / 114.7 / 0.855 | 0.867 / 107.9 / 0.834 | 0.827 / 88.5 / 0.867
Our Method | 0.956 / 61.4 / 0.916 | 0.980 / 56.0 / 0.922 | 0.966 / 47.3 / 0.935
Table 3. Entity alignment performance of our method and Models I, II, and III on the DWY100K dataset, in terms of Hits@10, MR, and MRR.

Method | DBP-WD (Hits@10 / MR / MRR) | DBP-YG (Hits@10 / MR / MRR)
Model I | 0.843 / 147.7 / 0.785 | 0.822 / 133.7 / 0.804
Model II | 0.909 / 66.5 / 0.856 | 0.932 / 76.0 / 0.911
Model III | 0.877 / 92.1 / 0.872 | 0.914 / 68.3 / 0.925
Our Method | 0.971 / 30.2 / 0.939 | 0.982 / 27.8 / 0.980
Table 4. Entity alignment performance of our method and Models I, II, and III on the DBP15K dataset, in terms of precision, recall, and F1 measure.

Method | ZH-EN (Precision / Recall / F1) | JA-EN (Precision / Recall / F1) | FR-EN (Precision / Recall / F1)
Model I | 0.866 / 0.807 / 0.834 | 0.835 / 0.819 / 0.821 | 0.837 / 0.803 / 0.842
Model II | 0.906 / 0.887 / 0.910 | 0.903 / 0.868 / 0.893 | 0.900 / 0.870 / 0.907
Model III | 0.915 / 0.854 / 0.892 | 0.917 / 0.877 / 0.909 | 0.911 / 0.882 / 0.923
Our Method | 0.951 / 0.912 / 0.932 | 0.963 / 0.928 / 0.936 | 0.968 / 0.916 / 0.941
Table 5. Entity alignment performance of our method and Models I, II, and III on the DWY100K dataset, in terms of precision, recall, and F1 measure.

Method | DBP-WD (Precision / Recall / F1) | DBP-YG (Precision / Recall / F1)
Model I | 0.876 / 0.881 / 0.882 | 0.853 / 0.816 / 0.882
Model II | 0.935 / 0.914 / 0.942 | 0.947 / 0.879 / 0.951
Model III | 0.944 / 0.922 / 0.956 | 0.931 / 0.855 / 0.936
Our Method | 0.998 / 0.956 / 0.988 | 0.987 / 0.962 / 0.991
Table 6. Comparison between our method and other methods in entity alignment performance on the DBP15K dataset, using Hits@10, MR, and MRR.

Method | ZH-EN (Hits@10 / MR / MRR) | JA-EN (Hits@10 / MR / MRR) | FR-EN (Hits@10 / MR / MRR)
MTransE | 0.435 / 356.7 / 0.334 | 0.578 / 384.5 / 0.496 | 0.562 / 289.5 / 0.451
IPTransE | 0.559 / 208.2 / 0.508 | 0.452 / 329.6 / 0.384 | 0.441 / 224.3 / 0.343
JAPE | 0.662 / 127.4 / 0.600 | 0.449 / 240.2 / 0.408 | 0.689 / 191.2 / 0.635
BootEA | 0.629 / 109.8 / 0.589 | 0.661 / 109.4 / 0.623 | 0.762 / 117.6 / 0.719
MultiKE | 0.701 / 66.2 / 0.627 | 0.702 / 78.9 / 0.653 | 0.812 / 101.8 / 0.746
KDCoE | 0.668 / 78.0 / 0.611 | 0.652 / 91.1 / 0.611 | 0.817 / 92.5 / 0.802
Our Method | 0.956 / 61.4 / 0.916 | 0.980 / 56.0 / 0.922 | 0.966 / 47.3 / 0.935
Table 7. Comparison between our method and other methods in entity alignment performance on the DWY100K dataset, using Hits@10, MR, and MRR.

Method | DBP-WD (Hits@10 / MR / MRR) | DBP-YG (Hits@10 / MR / MRR)
MTransE | 0.610 / 212.3 / 0.555 | 0.513 / 202.7 / 0.320
IPTransE | 0.651 / 199.9 / 0.504 | 0.798 / 114.9 / 0.760
JAPE | 0.792 / 108.4 / 0.686 | 0.413 / 99.4 / 0.508
BootEA | 0.721 / 97.0 / 0.590 | 0.625 / 71.3 / 0.430
MultiKE | 0.633 / 74.5 / 0.507 | 0.725 / 83.7 / 0.654
KDCoE | 0.707 / 86.7 / 0.771 | 0.738 / 55.0 / 0.811
Our Method | 0.971 / 30.2 / 0.939 | 0.982 / 27.8 / 0.980
Table 8. Comparison between our method and six other methods in terms of entity alignment performance (precision, recall, and F1 measure) on the DBP15K dataset.

Method | ZH-EN (Precision / Recall / F1) | JA-EN (Precision / Recall / F1) | FR-EN (Precision / Recall / F1)
MTransE | 0.876 / 0.637 / 0.736 | 0.795 / 0.661 / 0.696 | 0.833 / 0.733 / 0.766
IPTransE | 0.892 / 0.646 / 0.750 | 0.872 / 0.726 / 0.793 | 0.894 / 0.749 / 0.834
JAPE | 0.924 / 0.659 / 0.766 | 0.911 / 0.809 / 0.817 | 0.851 / 0.821 / 0.857
BootEA | 0.928 / 0.803 / 0.862 | 0.921 / 0.815 / 0.857 | 0.921 / 0.847 / 0.861
MultiKE | 0.935 / 0.775 / 0.849 | 0.905 / 0.889 / 0.869 | 0.911 / 0.855 / 0.870
KDCoE | 0.937 / 0.861 / 0.872 | 0.933 / 0.874 / 0.901 | 0.930 / 0.879 / 0.871
Our Method | 0.951 / 0.912 / 0.932 | 0.963 / 0.928 / 0.936 | 0.968 / 0.916 / 0.941
Table 9. Comparison between our method and six other methods in terms of entity alignment performance (precision, recall, and F1 measure) on the DWY100K dataset.

Method | DBP-WD (Precision / Recall / F1) | DBP-YG (Precision / Recall / F1)
MTransE | 0.901 / 0.885 / 0.885 | 0.913 / 0.837 / 0.895
IPTransE | 0.923 / 0.886 / 0.894 | 0.918 / 0.864 / 0.910
JAPE | 0.952 / 0.892 / 0.919 | 0.933 / 0.910 / 0.922
BootEA | 0.955 / 0.907 / 0.933 | 0.955 / 0.937 / 0.954
MultiKE | 0.967 / 0.921 / 0.934 | 0.968 / 0.939 / 0.964
KDCoE | 0.977 / 0.934 / 0.952 | 0.977 / 0.945 / 0.967
Our Method | 0.998 / 0.956 / 0.988 | 0.987 / 0.962 / 0.991
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
