Article

Additive Biderivations of Incidence Algebras

College of Sciences, Northeastern University, Shenyang 110004, China
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(19), 3122; https://doi.org/10.3390/math13193122
Submission received: 31 August 2025 / Revised: 24 September 2025 / Accepted: 28 September 2025 / Published: 29 September 2025

Abstract

We characterize all additive biderivations on the incidence algebra I ( P , R ) of a locally finite poset P over a commutative ring with unity R . By decomposing P into its connected chains, we prove that any additive biderivation splits uniquely into a sum of inner biderivations and extremal ones determined by chain components. In particular, when every maximal chain of P is infinite, all additive biderivations are inner.
MSC:
16W25; 17C50; 16S90; 16W10; 06A11

1. Introduction

The concept of incidence algebras of partially ordered sets was first introduced by Rota in the 1960s [1] as a tool for solving combinatorial problems. Incidence algebras are fascinating objects that have been the subject of extensive research since their inception. For instance, Pierre and John [2] describe the relationships between the algebraic properties of incidence algebras and the combinatorial features of the partially ordered sets. In [3], Spiegel and O’Donnell provide a detailed analysis of the maximal and prime ideals, derivations and isomorphisms, radicals, and additional ring-theoretic properties of incidence algebras. Further developments on the structure of incidence algebras can be found in [4,5,6,7].
The study of derivations in the context of algebras is a valuable and significant endeavor. Yang provides a detailed account of the structure of nonlinear derivations on incidence algebras in [8], decomposing them into three more specific forms. Regarding the decomposition of derivations, two significant findings pertaining to their structure have been established. Notably, Baclawski demonstrated that every derivation of the incidence algebra I(P, R), when R is a field and P is a locally finite poset, can be expressed as the sum of an inner derivation and an additive induced derivation [9]. This result was extended by Spiegel and O’Donnell [3] to cases where R is a commutative ring. Additional insights into the structure of other special derivations on incidence algebras can be found in [10,11,12,13].
Building upon the aforementioned studies of derivations on incidence algebras, it is natural to investigate biderivations in this context. Kaygorodov and Khrypchenko delineate the structure of antisymmetric biderivations of finitary incidence algebras F I ( P , R ) , where P is an arbitrary poset and R is a commutative ring with unity, in [14]. In [15], Benkovič proves that every biderivation of a triangular algebra is the sum of an inner biderivation and an external biderivation. Later, Ghosseiri demonstrates that every biderivation of upper triangular matrix rings is the sum of an inner biderivation, an external biderivation, and a distinct category of biderivations [16]. Ghosseiri also presents particular instances where every biderivation is inner.
In the literature, many biderivations on different algebras have been shown to decompose into the sum of inner and extremal ones. Motivated by this, we ask whether general biderivations on incidence algebras admit a similar decomposition. Compared with [14], which characterizes antisymmetric biderivations on finitary incidence algebras FI(P, R), our work extends the type of biderivations under consideration from antisymmetric to general ones, while the algebraic framework is more restrictive. Moreover, although most existing studies assume linearity, we find this condition unnecessarily strong. Therefore, in this paper, we focus on additive biderivations of incidence algebras and determine their precise structure. Specifically, let R be a commutative ring with unity, and P a locally finite poset such that any maximal chain in P contains at least three elements. We show that every additive biderivation b is the sum of several inner biderivations and extremal biderivations. Furthermore, we show that when the number of elements in any maximal chain in P is infinite, b is an inner biderivation.

2. Preliminaries

2.1. Incidence Algebra

Throughout this paper, R denotes a commutative ring with unity. Recall that a relation ≤ is said to be a partial order on a set S if it satisfies the following conditions:
  • Reflexivity: a ≤ a for all a ∈ S;
  • Antisymmetry: if a ≤ b and b ≤ a, then a = b;
  • Transitivity: if a ≤ b and b ≤ c, then a ≤ c.
A partially ordered set (or poset) is a set equipped with a partial order. A poset (S, ≤) is locally finite if, for any x ≤ y in S, there are finitely many elements u ∈ S satisfying x ≤ u ≤ y. Let (P, ≤) be a locally finite poset. We use the notation x < y to mean that x ≤ y and x ≠ y. Now, let us recall the definition of incidence algebra.
Definition 1.
The incidence algebra I(P, R) is defined as the set of functions
$$ I(P, R) = \{\, f : P \times P \to R \mid f(x, y) = 0 \text{ if } x \nleq y \,\}, $$
with multiplication given by
$$ (f g)(x, y) = \sum_{x \le z \le y} f(x, z)\, g(z, y), \quad \text{for all } x, y \in P. $$
For each pair x ≤ y in P, let e_{xy} denote the element in I(P, R) defined by
$$ e_{xy}(u, v) = \begin{cases} 1, & \text{if } (x, y) = (u, v); \\ 0, & \text{otherwise.} \end{cases} $$
For brevity, we will use the notation α_{xy} to denote α(x, y), where α ∈ I(P, R) and x, y ∈ P. Consequently, any element α ∈ I(P, R) can be expressed as α = ∑_{x ≤ y} α_{xy} e_{xy}. It is evident that the multiplication in I(P, R) satisfies the following properties:
  • If y = u, then e_{xy} e_{uv} = e_{xv};
  • If y ≠ u, then e_{xy} e_{uv} = 0.
This relation allows us to derive the following formula, which will be used extensively in this article. For any f ∈ I(P, R) and x ≤ y, u ≤ v in P, we have
$$ e_{xy}\, f\, e_{uv} = f_{yu}\, e_{xv}. \tag{4} $$
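To make these conventions concrete, the following Python sketch (an illustration added here, not part of the original text) realizes the incidence algebra of the small chain 0 < 1 < 2 over R = Z as integer matrices supported on the relation ≤, and numerically checks the basis multiplication rule and formula (4); the helper names `e` and `f` are ad hoc.

```python
import numpy as np

# Incidence algebra of the chain 0 < 1 < 2 over R = Z, realized as integer
# matrices whose support lies on the relation <=.
n = 3
leq = np.array([[x <= y for y in range(n)] for x in range(n)])

def e(x, y):
    """Basis element e_{xy}, assuming x <= y."""
    m = np.zeros((n, n), dtype=int)
    m[x, y] = 1
    return m

rng = np.random.default_rng(0)
f = rng.integers(-5, 6, size=(n, n)) * leq   # a random element of I(P, Z)

for x in range(n):
    for y in range(x, n):
        for u in range(n):
            for v in range(u, n):
                # Basis rule: e_{xy} e_{uv} = e_{xv} if y = u, and 0 otherwise.
                expected = np.zeros((n, n), dtype=int)
                if y == u:
                    expected[x, v] = 1
                assert np.array_equal(e(x, y) @ e(u, v), expected)
                # Formula (4): e_{xy} f e_{uv} = f_{yu} e_{xv}.
                expected = np.zeros((n, n), dtype=int)
                expected[x, v] = f[y, u]
                assert np.array_equal(e(x, y) @ f @ e(u, v), expected)

print("basis multiplication rule and formula (4) verified on the chain 0 < 1 < 2")
```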

2.2. Additive Biderivations

A function d : I(P, R) → I(P, R) is a derivation of I(P, R) if, for any α, β ∈ I(P, R), it satisfies
$$ d(\alpha \beta) = \alpha\, d(\beta) + d(\alpha)\, \beta. $$
A function b : I(P, R) × I(P, R) → I(P, R) is said to be a biderivation of I(P, R) if it is a derivation when fixing any one of its arguments. This means that for every α, β, γ ∈ I(P, R), b satisfies
$$ b(\alpha \beta, \gamma) = \alpha\, b(\beta, \gamma) + b(\alpha, \gamma)\, \beta, \qquad b(\alpha, \beta \gamma) = \beta\, b(\alpha, \gamma) + b(\alpha, \beta)\, \gamma. $$
Moreover, b is an additive biderivation if b also satisfies
$$ b(\alpha + \beta, \gamma) = b(\alpha, \gamma) + b(\beta, \gamma), \qquad b(\alpha, \beta + \gamma) = b(\alpha, \beta) + b(\alpha, \gamma). $$
The so-called inner biderivation is defined by
$$ b(\alpha, \beta) = \lambda\, [\alpha, \beta], $$
where λ ∈ R is a fixed element, and [α, β] denotes the Lie bracket of α and β. It is known that in prime and semiprime rings, all biderivations are inner (Brešar [17]). However, in certain algebras, such as triangular algebras and upper triangular matrix algebras, non-inner biderivations do occur. To describe the structure of these non-inner parts, Benkovič [15] introduced the notion of extremal biderivations and showed that every biderivation on triangular algebras can be expressed as the sum of an inner biderivation and an extremal one.
More precisely, a biderivation b is called an extremal biderivation if there exists γ ∈ I(P, R) such that
$$ b(\alpha, \beta) = [\alpha, [\beta, \gamma]]. $$
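As a concrete and purely illustrative sanity check, the sketch below assumes the chain poset 0 < 1 < 2 over R = Z and a γ supported on the (minimal, maximal) position (so that [α, [β, γ]] really is a biderivation there), and verifies numerically that both λ[α, β] and [α, [β, γ]] satisfy the two biderivation identities; the function names are ad hoc and not from the paper.

```python
import numpy as np

# Incidence algebra of the chain 0 < 1 < 2 over Z (upper triangular matrices).
n = 3
leq = np.array([[x <= y for y in range(n)] for x in range(n)])
rng = np.random.default_rng(1)

def rand():
    """A random element of I(P, Z)."""
    return rng.integers(-5, 6, size=(n, n)) * leq

def comm(a, c):
    """Lie bracket [a, c]."""
    return a @ c - c @ a

lam = 3                              # a fixed scalar lambda in R = Z
gamma = np.zeros((n, n), dtype=int)
gamma[0, 2] = 2                      # gamma supported on the (minimal, maximal) position

def b_inner(a, c):                   # inner biderivation: lambda [a, c]
    return lam * comm(a, c)

def b_extremal(a, c):                # extremal biderivation: [a, [c, gamma]]
    return comm(a, comm(c, gamma))

def check_biderivation(b, trials=50):
    """Check the derivation property of b in each argument on random elements."""
    for _ in range(trials):
        a1, a2, c1, c2 = rand(), rand(), rand(), rand()
        assert np.array_equal(b(a1 @ a2, c1), a1 @ b(a2, c1) + b(a1, c1) @ a2)
        assert np.array_equal(b(a1, c1 @ c2), c1 @ b(a1, c2) + b(a1, c1) @ c2)

check_biderivation(b_inner)
check_biderivation(b_extremal)
print("inner and extremal maps satisfy both biderivation identities on random samples")
```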

2.3. Decomposition of Posets

Our research reveals a close relationship between the structure of additive biderivations in I ( P , R ) and that of the poset P. Accordingly, we proceed to decompose P in this section, which will subsequently be employed in constructing the structure of the additive biderivations. In detail, we will consider and decompose the cases where P is connected or not.
First, let us introduce some notation. Consider a relation ∼ on P, where x ∼ y indicates that x and y are comparable, i.e., either x ≤ y or y ≤ x. The notation x ≁ y indicates that x and y are not comparable. A totally ordered set is a poset in which every pair of elements is comparable.
Definition 2.
A chain in a poset P is a subset that is totally ordered with respect to ≤.
Example 1.
Consider the poset S = {a, b, c, d, e, f, g, h, i} represented by the following Hasse diagram, where an edge x → y denotes x ≤ y for x, y ∈ S. In this case, the pair of elements b and e is not comparable, and the subset {g, h, i} of S constitutes a chain.
[Hasse diagram of the poset S]
1. 
The first decomposition, when P is not connected.
First, let us introduce a relation between any two elements of P.
Definition 3.
Two elements x, y ∈ P are said to be connected if there exists a sequence u_0, u_1, …, u_n ∈ P such that x ∼ u_0, u_0 ∼ u_1, …, u_{n−1} ∼ u_n, and u_n ∼ y. A poset P is connected if any pair of elements in it are connected.
It is evident that the relation of being connected is an equivalence relation on P. Thus, we can decompose P into the union of its connected components:
$$ P = \bigcup_{i \in I} P^i, \tag{7} $$
where I is an index set and each P^i is a connected poset.
Example 2.
Consider the poset S defined in Example 1. It can be observed that S can be expressed as the union of two disjoint sets: S^1 = {a, b, c, d, e, f} and S^2 = {g, h, i}.
[Hasse diagram showing the connected components S^1 and S^2]
It is evident that P^i ∩ P^j = ∅ for i ≠ j. Therefore, for any α ∈ I(P, R), we have
$$ \alpha = \sum_{i \in I} \alpha^i, \qquad \alpha^i = \sum_{x \le y \in P^i} \alpha_{xy}\, e_{xy} \in I(P^i, R). \tag{8} $$
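For finite posets, the decomposition into connected components P^i can be computed mechanically. The following sketch is illustrative only; the poset used is hypothetical and the function names are ad hoc.

```python
from collections import defaultdict

def connected_components(elements, leq):
    """Partition a finite poset into connected components P^i.

    `leq(x, y)` returns True iff x <= y; two elements are linked when they
    are comparable, and the components are the equivalence classes of the
    transitive closure of that link relation.
    """
    comp = {}
    for start in elements:
        if start in comp:
            continue
        comp[start] = start          # use `start` as the component label
        stack = [start]
        while stack:
            x = stack.pop()
            for y in elements:
                if y not in comp and (leq(x, y) or leq(y, x)):
                    comp[y] = start
                    stack.append(y)
    groups = defaultdict(list)
    for x, label in comp.items():
        groups[label].append(x)
    return list(groups.values())

# A hypothetical poset: one component {a, b, c} with a < b and a < c,
# plus a separate chain {g, h} with g < h.
order = {("a", "b"), ("a", "c"), ("g", "h")}
elements = ["a", "b", "c", "g", "h"]
leq = lambda x, y: x == y or (x, y) in order
print(connected_components(elements, leq))   # [['a', 'b', 'c'], ['g', 'h']]
```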
2.
The second decomposition, when P is connected.
Next, we present a second decomposition of P when P is connected. We begin by defining a key term.
Definition 4.
A chain in P is called maximal if adding any element from P to it would result in it no longer being a chain.
In this context, the notation {x, y, …}^+ will be used to denote a maximal chain in P that contains {x, y, …}.
We proceed to define a set of subsets of P:
$$ L = \{\, l \subseteq P \mid l \text{ is a maximal chain} \,\}. $$
Definition 5.
For any pair of elements l, l′ ∈ L, we define the relation l ∼ l′ if there exist elements x < y in P such that x, y ∈ l ∩ l′. We say l and l′ are connected if there exist chains l_0, l_1, …, l_n ∈ L such that l ∼ l_0, l_0 ∼ l_1, …, l_{n−1} ∼ l_n, l_n ∼ l′.
With respect to the connectedness relation on L, we can decompose L into the union of its equivalence classes:
$$ L = \bigcup_{j \in J} L_j, \tag{10} $$
where J is an index set corresponding to the decomposition of L.
For each j ∈ J, define a subset of P:
$$ P_j = \{\, x \in P \mid \text{there exists } l \in L_j \text{ such that } x \in l \,\}. \tag{11} $$
It is evident that P = ⋃_{j ∈ J} P_j. However, it is not necessarily the case that P_i ∩ P_j = ∅ for all i ≠ j in J.
Example 3.
To illustrate this decomposition, consider S^1 = {a, b, c, d, e, f} from Example 2. The set S^1 can be expressed as the union of two sets: S^1_1 = {a, b, d} and S^1_2 = {a, c, e, f}.
[Hasse diagram showing the chain components S^1_1 and S^1_2]
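The chain-component decomposition L = ⋃_j L_j and the sets P_j can likewise be computed by brute force on a small poset. The sketch below is purely illustrative: it uses a hypothetical connected poset (with relations a < b < d, a < c, c < e, c < f, not necessarily the poset drawn in the omitted figure), lists all maximal chains, groups them under the relation ∼ of Definition 5, and prints the resulting sets P_j.

```python
from itertools import combinations

# A hypothetical connected poset: a < b < d and a < c, c < e, c < f.
elements = ["a", "b", "c", "d", "e", "f"]
strictly_less = {("a", "b"), ("b", "d"), ("a", "d"),
                 ("a", "c"), ("c", "e"), ("c", "f"),
                 ("a", "e"), ("a", "f")}
leq = lambda x, y: x == y or (x, y) in strictly_less

def is_chain(subset):
    return all(leq(x, y) or leq(y, x) for x, y in combinations(subset, 2))

# All chains, then the maximal ones (the set L).
chains = [set(s) for r in range(1, len(elements) + 1)
          for s in combinations(elements, r) if is_chain(s)]
maximal = [l for l in chains if not any(l < m for m in chains)]

# l ~ l' iff l and l' share a comparable pair x < y; group L into classes L_j.
def related(l, m):
    return any(x != y and (leq(x, y) or leq(y, x))
               for x in l & m for y in l & m)

classes = []
for l in maximal:
    merged = [c for c in classes if any(related(l, m) for m in c)]
    new_class = [l] + [m for c in merged for m in c]
    classes = [c for c in classes if c not in merged] + [new_class]

# P_j is the union of the chains in L_j.
print([sorted(set().union(*c)) for c in classes])
# e.g. [['a', 'b', 'd'], ['a', 'c', 'e', 'f']]
```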
An element x ∈ P is said to be maximal if there is no element y ∈ P such that x < y, and minimal if there is no element y ∈ P such that y < x.
Lemma 1.
Let i ≠ j in J, where J is defined in (10); then there does not exist a pair of elements x < y in P such that x, y ∈ P_i ∩ P_j. Additionally, any element of P_i ∩ P_j is either a minimal or a maximal element of P.
Proof. 
We begin by asserting that any element in P_i ∩ P_j is isolated, meaning that no other element in P_i ∩ P_j can be compared with it. Suppose, for contradiction, that there exists a pair of elements x < y contained in P_i ∩ P_j. Then, there exist chains l_1 ∈ L_i and l_2 ∈ L_j such that x, y ∈ l_1 and x, y ∈ l_2. It follows that l_1 ∼ l_2, implying P_i = P_j and i = j, which contradicts the assumption that i ≠ j. Therefore, any element in P_i ∩ P_j is isolated.
Now, suppose there exists an element x ∈ P_i ∩ P_j that is neither maximal nor minimal. Then there exist elements u < x < v in P. According to the definition of P_i and P_j and the assumption that any maximal chain of P has at least three elements, there exist elements y_i ∈ P_i and y_j ∈ P_j that can be compared with x, and there exist maximal chains l_i ∈ L_i and l_j ∈ L_j such that x, y_i ∈ l_i and x, y_j ∈ l_j. Without loss of generality, suppose x < y_i. It is evident that {u, x, v}^+ ∼ {u, x, y_i}^+ ∼ l_i, so u, x, v ∈ P_i. Similarly, u, x, v ∈ P_j. This implies that the pairs u < x and x < v both lie in P_i ∩ P_j, which contradicts the previous conclusion. Thus, any element in P_i ∩ P_j must be either minimal or maximal.  □
By the lemma above, for any α ∈ I(P, R), where P is connected, we can decompose it as
$$ \alpha = \sum_{j \in J} \alpha_j + \alpha_D, \quad \text{where } \alpha_j = \sum_{x < y \in P_j} \alpha_{xy}\, e_{xy}, \quad \alpha_D = \sum_{z \in P} \alpha_{zz}\, e_{zz}. \tag{12} $$
If i ≠ j and the product α_i β_j = ∑_{x < y ∈ P_i} ∑_{u < v ∈ P_j} α_{xy} β_{uv} e_{xy} e_{uv} is non-zero, then there exist x < y in P_i and u < v in P_j such that e_{xy} e_{uv} ≠ 0. Therefore, we conclude that y = u, implying that y ∈ P_i ∩ P_j and x < y < v, which contradicts Lemma 1. This leads to the following corollary.
Corollary 1.
For any i ≠ j in J, let α_i ∈ I(P_i, R) and β_j ∈ I(P_j, R); then α_i β_j = 0.

3. Additive Derivations of I ( P , R )

In the study of additive biderivations, it is necessary to discuss some properties of additive derivations of I ( P , R ) , since an additive biderivation becomes an additive derivation when one of its arguments is fixed. Let P be a locally finite poset, let R be a commutative ring with unity, and let I ( P , R ) be the incidence algebra of P over R . Suppose d is an additive derivation of I ( P , R ) .
We write d_{xy}(α) to denote d(α)(x, y) for any α ∈ I(P, R) and x, y ∈ P. We begin by demonstrating a lemma that describes the structure of the value d(r e_{xy}).
Lemma 2.
Let x ≤ y in P and r ∈ R; then
$$ d(r e_{xy}) = \sum_{p \le x} d_{py}(r e_{xy})\, e_{py} + \sum_{y < q} d_{xq}(r e_{xy})\, e_{xq}. \tag{13} $$
Proof. 
Consider a fixed pair x ≤ y in P and an arbitrary element r ∈ R. Suppose u ≤ v in P with u ≠ x and v ≠ y. We examine the expression e_{uu} d(r e_{xy}) e_{vv} by multiplying d(r e_{xy}) on the left by e_{uu} and on the right by e_{vv}:
$$ e_{uu}\, d(r e_{xy})\, e_{vv} = e_{uu}\, d(e_{xx} \cdot r e_{xy})\, e_{vv} = e_{uu} e_{xx}\, d(r e_{xy})\, e_{vv} + e_{uu}\, d(e_{xx})\, r e_{xy}\, e_{vv} = 0. $$
Since e_{uu} d(r e_{xy}) e_{vv} = d_{uv}(r e_{xy}) e_{uv}, it follows that d_{uv}(r e_{xy}) = 0 unless u = x or v = y. This establishes the desired equality (13).  □
The proof of the aforementioned lemma allows us to derive the following corollary.
Corollary 2.
Let x ≤ y and u ≤ v in P; then d_{uv}(r e_{xy}) = 0 unless the elements x, y, u, v satisfy one of the following cases: (1) u = x, v ≥ y; (2) u ≤ x, v = y.
In describing the structure of biderivations, we will need to show that certain of their coefficients coincide. To do this, we introduce a number of instrumental lemmas, as described below.
Lemma 3.
Let x ≤ y in P and r ∈ R; then
(1)
For any p < x, d_{py}(r e_{xy}) = r d_{py}(e_{xy}) = r d_{px}(e_{xx});
(2)
For any q > y, d_{xq}(r e_{xy}) = r d_{xq}(e_{xy}) = r d_{yq}(e_{yy}).
Proof. 
We will demonstrate part (1), and the proof of part (2) follows analogously. Consider fixed elements x ≤ y in P and r ∈ R. For any p < x, using Equation (4), we obtain
$$ d_{py}(r e_{xy})\, e_{py} = e_{pp}\, d(r e_{xy})\, e_{yy} = e_{pp}\bigl( e_{xx}\, d(r e_{xy}) + d(e_{xx})\, r e_{xy} \bigr) e_{yy} = r\, d_{px}(e_{xx})\, e_{py}. $$
Therefore, we deduce that d_{py}(r e_{xy}) = r d_{px}(e_{xx}). Setting r to be the unity element of R yields d_{py}(e_{xy}) = d_{px}(e_{xx}), thereby establishing part (1).  □
Lemma 4.
Let x < y in P; then d_{xy}(e_{xx}) + d_{xy}(e_{yy}) = 0.
Proof. 
Let x < y in P be fixed. Observe that
$$ 0 = e_{xx}\, d(e_{xx} e_{yy})\, e_{yy} = e_{xx}\, d(e_{xx})\, e_{yy} + e_{xx}\, d(e_{yy})\, e_{yy} = \bigl( d_{xy}(e_{xx}) + d_{xy}(e_{yy}) \bigr) e_{xy}. $$
Since the above expression equals zero and e_{xy} is a non-zero element of the incidence algebra, it follows that d_{xy}(e_{xx}) + d_{xy}(e_{yy}) = 0.  □
Lemma 5.
Let x < y < z in P; then d_{xz}(e_{xz}) = d_{xy}(e_{xy}) + d_{yz}(e_{yz}).
Proof. 
Let x < y < z in P be fixed. Then,
$$ d_{xz}(e_{xz})\, e_{xz} = e_{xx}\, d(e_{xy} e_{yz})\, e_{zz} = e_{xx}\, d(e_{xy})\, e_{yz} + e_{xy}\, d(e_{yz})\, e_{zz} = \bigl( d_{xy}(e_{xy}) + d_{yz}(e_{yz}) \bigr) e_{xz}. $$
Since e_{xz} is a non-zero element of the incidence algebra, we deduce that d_{xz}(e_{xz}) = d_{xy}(e_{xy}) + d_{yz}(e_{yz}).  □
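As a quick illustration (added here, not part of the original argument), the following sketch checks Lemmas 4 and 5 numerically for an inner derivation d = [·, a] on the chain 0 < 1 < 2 over Z; the element a is random and the helper names are ad hoc.

```python
import numpy as np

# Incidence algebra of the chain 0 < 1 < 2 over Z; d(f) = [f, a] is an
# additive (indeed inner) derivation, so Lemmas 4 and 5 must hold for it.
n = 3
leq = np.array([[x <= y for y in range(n)] for x in range(n)])
a = np.random.default_rng(2).integers(-5, 6, size=(n, n)) * leq

def e(x, y):
    m = np.zeros((n, n), dtype=int)
    m[x, y] = 1
    return m

def d(f):
    return f @ a - a @ f

# Lemma 4: d_{xy}(e_{xx}) + d_{xy}(e_{yy}) = 0 for every pair x < y.
for x in range(n):
    for y in range(x + 1, n):
        assert d(e(x, x))[x, y] + d(e(y, y))[x, y] == 0

# Lemma 5: d_{xz}(e_{xz}) = d_{xy}(e_{xy}) + d_{yz}(e_{yz}) for x < y < z.
x, y, z = 0, 1, 2
assert d(e(x, z))[x, z] == d(e(x, y))[x, y] + d(e(y, z))[y, z]
print("Lemmas 4 and 5 hold for the inner derivation d = [., a]")
```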

4. Additive Biderivations of I ( P , R )

In this section, we employ the properties of additive derivations derived in Section 3 to prove our main theorem (Theorem 4), which elucidates the structure of additive biderivations on the incidence algebra I(P, R). Let P be a locally finite poset, and let R be a commutative ring with unity. Throughout, we use the notation b_{xy}(α, β) to denote the value b(α, β)(x, y) for any α, β ∈ I(P, R) and x, y ∈ P.
We begin by considering a corollary that can be readily extended from Lemma 2.
Corollary 3.
Let x ≤ y, u ≤ v in P, and r_1, r_2 ∈ R; then
$$ b(r_1 e_{xy}, r_2 e_{uv}) = \begin{cases} b_{xv}(r_1 e_{xy}, r_2 e_{uv})\, e_{xv} + b_{uy}(r_1 e_{xy}, r_2 e_{uv})\, e_{uy}, & \text{if } x \ne u,\ y \ne v; \\[2pt] \sum_{y \le q,\ v \le q} b_{xq}(r_1 e_{xy}, r_2 e_{xv})\, e_{xq}, & \text{if } x = u,\ y \ne v; \\[2pt] \sum_{p \le x,\ p \le u} b_{py}(r_1 e_{xy}, r_2 e_{uy})\, e_{py}, & \text{if } x \ne u,\ y = v; \\[2pt] \sum_{p \le x} b_{py}(r_1 e_{xy}, r_2 e_{xy})\, e_{py} + \sum_{y < q} b_{xq}(r_1 e_{xy}, r_2 e_{xy})\, e_{xq}, & \text{if } x = u,\ y = v. \end{cases} \tag{18} $$
As a first step, we reduce the complexity of the structure of b by establishing a sufficient condition under which b vanishes.
Lemma 6.
Let x ≤ y, u ≤ v in P; if at least one pair of elements among {x, y, u, v} is not comparable, then b(r_1 e_{xy}, r_2 e_{uv}) = 0 for any r_1, r_2 ∈ R.
Proof. 
Consider x ≤ y, u ≤ v in P. Suppose at least one pair among {x, y, u, v} is not comparable. This situation can be divided into four cases: (1) x ≁ u; (2) x ≁ v; (3) y ≁ u; (4) y ≁ v.
Case 1:
Suppose x ≁ u. We consider two subcases: y ≠ v and y = v.
If y ≠ v, by Corollary 3, we have
$$ b(r_1 e_{xy}, r_2 e_{uv}) = b_{xv}(r_1 e_{xy}, r_2 e_{uv})\, e_{xv} + b_{uy}(r_1 e_{xy}, r_2 e_{uv})\, e_{uy}. $$
According to Corollary 2, both b_{xv}(r_1 e_{xy}, r_2 e_{uv}) and b_{uy}(r_1 e_{xy}, r_2 e_{uv}) are zero because x ≁ u.
If y = v, we obtain that
$$ b(r_1 e_{xy}, r_2 e_{uy}) = \sum_{p \le x,\ p \le u} b_{py}(r_1 e_{xy}, r_2 e_{uy})\, e_{py} = \sum_{p < x,\ p < u} b_{py}(r_1 e_{xy}, r_2 e_{uy})\, e_{py} + b_{xy}(r_1 e_{xy}, r_2 e_{uy})\, e_{xy} + b_{uy}(r_1 e_{xy}, r_2 e_{uy})\, e_{uy}. \tag{20} $$
In (20), both b_{xy}(r_1 e_{xy}, r_2 e_{uy}) e_{xy} and b_{uy}(r_1 e_{xy}, r_2 e_{uy}) e_{uy} are zero by Corollary 2. For the remaining terms in (20), using Lemma 3 on the first argument of each term, we have
$$ \sum_{p < x,\ p < u} b_{py}(r_1 e_{xy}, r_2 e_{uy})\, e_{py} = \sum_{p < x,\ p < u} b_{px}(r_1 e_{xx}, r_2 e_{uy})\, e_{py} = 0. \tag{21} $$
Noting that p < u and x < y (if x = y, then u ≤ v = y = x, which contradicts x ≁ u), each b_{px}(r_1 e_{xx}, r_2 e_{uy}) in (21) is zero by Corollary 2. Therefore, the lemma is proved when x ≁ u.
Case 2:
If x ≁ v, then x ≠ u and y ≠ v. From Corollary 3, we have
$$ b(r_1 e_{xy}, r_2 e_{uv}) = b_{uy}(r_1 e_{xy}, r_2 e_{uv})\, e_{uy}. $$
We assert that either u < x or u ≁ x, because if x ≤ u, then x ≤ u ≤ v, which contradicts x ≁ v. For b_{uy}(r_1 e_{xy}, r_2 e_{uv}), it is zero by Lemma 2 if u ≁ x. If u < x, by Lemma 3 and Corollary 2, we get
$$ b_{uy}(r_1 e_{xy}, r_2 e_{uv}) = b_{ux}(r_1 e_{xx}, r_2 e_{uv}) = 0. $$
Therefore, the lemma is proved when x ≁ v.
Cases 3 and 4:
If y ≁ u or y ≁ v, the proof is similar to Cases 1 and 2.
Thus, the lemma is proved.  □
Remark 1.
Intuitively, the lemma asserts that an additive biderivation must vanish on any pair of basis elements that are incomparable. The multiplication rules of incidence basis elements (where e x y e u v = 0 for many incomparable configurations), together with the derivation property in each argument, force all such coefficients to be zero. Hence, nontrivial behavior can only occur along comparable pairs (chains), which is the crucial reduction used in the sequel.
According to the decomposition (8), we can suppose that for any α, β ∈ I(P, R),
$$ \alpha = \sum_{i \in I} \alpha^i, \qquad \beta = \sum_{i \in I} \beta^i, $$
where α^i, β^i ∈ I(P^i, R). By applying the decomposition of P, as defined in (7), and the aforementioned lemma, it is straightforward to conclude that b(α^{i_1}, β^{i_2}) = 0 if i_1 ≠ i_2 in I, thereby yielding the following result:
$$ b(\alpha, \beta) = \sum_{i \in I} b(\alpha^i, \beta^i). \tag{24} $$
Accordingly, this decomposition permits us to limit our analysis to the case where P is connected. We will then present theorems which, for connected P, describe the conditions under which certain terms in Equation (18) are equal to zero. Prior to this, we present an instrumental lemma.
Lemma 7.
Let α, β, γ, δ ∈ I(P, R); then
$$ b(\alpha, \beta)\, [\gamma, \delta] = [\alpha, \beta]\, b(\gamma, \delta). $$
Proof. 
Let α, β, γ, δ ∈ I(P, R) be arbitrary. Since b is a biderivation, it satisfies the derivation property in each argument separately. We begin by applying the derivation property to the first argument of b(αγ, βδ), followed by applying it to the second argument. This yields
$$ b(\alpha\gamma, \beta\delta) = \alpha\, b(\gamma, \beta\delta) + b(\alpha, \beta\delta)\, \gamma = \alpha\beta\, b(\gamma, \delta) + \alpha\, b(\gamma, \beta)\, \delta + \beta\, b(\alpha, \delta)\, \gamma + b(\alpha, \beta)\, \delta\gamma. \tag{26} $$
Conversely, we first apply the derivation property to the second argument and then to the first argument, obtaining
$$ b(\alpha\gamma, \beta\delta) = \beta\, b(\alpha\gamma, \delta) + b(\alpha\gamma, \beta)\, \delta = \beta\alpha\, b(\gamma, \delta) + \beta\, b(\alpha, \delta)\, \gamma + \alpha\, b(\gamma, \beta)\, \delta + b(\alpha, \beta)\, \gamma\delta. \tag{27} $$
Subtracting Equation (26) from Equation (27) gives
$$ 0 = b(\alpha, \beta)\, [\gamma, \delta] - [\alpha, \beta]\, b(\gamma, \delta), $$
which rearranges to the desired identity:
$$ b(\alpha, \beta)\, [\gamma, \delta] = [\alpha, \beta]\, b(\gamma, \delta). $$
The proof is completed.  □
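The identity of Lemma 7 can also be checked numerically. The sketch below is illustrative only (all names are ad hoc): it uses the sample biderivation b = 2[·, ·] + [·, [·, γ]] on the chain 0 < 1 < 2, with γ supported on the (minimal, maximal) position so that the extremal part is a genuine biderivation, and tests the identity on random quadruples.

```python
import numpy as np

# Check of Lemma 7:  b(alpha, beta) [gamma, delta] = [alpha, beta] b(gamma, delta)
# for a sample biderivation on the chain 0 < 1 < 2 over Z.
n = 3
leq = np.array([[x <= y for y in range(n)] for x in range(n)])
rng = np.random.default_rng(3)

def rand():
    return rng.integers(-5, 6, size=(n, n)) * leq

def comm(a, c):
    return a @ c - c @ a

g = np.zeros((n, n), dtype=int)
g[0, 2] = 1                       # the element gamma of the extremal part

def b(u, v):                      # sample biderivation: b = 2[u, v] + [u, [v, g]]
    return 2 * comm(u, v) + comm(u, comm(v, g))

for _ in range(100):
    al, be, ga, de = rand(), rand(), rand(), rand()
    assert np.array_equal(b(al, be) @ comm(ga, de), comm(al, be) @ b(ga, de))
print("Lemma 7 verified on 100 random quadruples")
```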
In accordance with the aforementioned lemma, it is possible to select specific values for x, y, u, v in order to demonstrate that b(e_{xy}, e_{uv}) = 0. For x, y, u, v ∈ P that are pairwise comparable and satisfy [e_{xy}, e_{uv}] = 0, the following two subcases can be identified: (1) x = y = u = v; (2) x ≠ v, y ≠ u. The following two theorems will address these subcases in greater detail.
Theorem 1.
Let x ∈ P; then
$$ b(e_{xx}, e_{xx}) = \begin{cases} \sum_{x < y,\ y\ \text{maximal}} b_{xy}(e_{xx}, e_{xx})\, e_{xy}, & x \text{ is a minimal element}; \\[2pt] \sum_{y < x,\ y\ \text{minimal}} b_{yx}(e_{xx}, e_{xx})\, e_{yx}, & x \text{ is a maximal element}; \\[2pt] 0, & \text{otherwise.} \end{cases} $$
Proof. 
We prove the theorem by considering three cases based on the position of x in P: minimal, maximal, and neither.
Case 1:
x is a minimal element in P. By Corollary 3, we have
$$ b(e_{xx}, e_{xx}) = \sum_{x \le q} b_{xq}(e_{xx}, e_{xx})\, e_{xq}. $$
For any pair x ≤ y where y is not a maximal element, let z > y. By Lemma 7, we know that
$$ b_{xy}(e_{xx}, e_{xx})\, e_{xz} = b(e_{xx}, e_{xx})\, [e_{yz}, e_{zz}] = [e_{xx}, e_{xx}]\, b(e_{yz}, e_{zz}) = 0. $$
Thus, when x is a minimal element, b(e_{xx}, e_{xx}) reduces to the given form
$$ \sum_{x < y,\ y\ \text{maximal}} b_{xy}(e_{xx}, e_{xx})\, e_{xy} $$
(the term with y = x does not occur, since x cannot be both minimal and maximal when every maximal chain has at least three elements).
Case 2:
x is a maximal element in P. The procedure is similar to Case 1.
Case 3:
x is neither a minimal nor a maximal element. In this case, there exist y, z such that y < x < z. For any p ≤ x, we have
$$ e_{pp}\, b(e_{xx}, e_{xx})\, e_{xz} = e_{pp}\, b(e_{xx}, e_{xx})\, [e_{xz}, e_{zz}] = e_{pp}\, [e_{xx}, e_{xx}]\, b(e_{xz}, e_{zz}) = 0. $$
This implies b_{px}(e_{xx}, e_{xx}) = 0. Similarly, for any x ≤ q, we can deduce b_{xq}(e_{xx}, e_{xx}) = 0. Hence, when x is neither a minimal nor a maximal element, we have b(e_{xx}, e_{xx}) = 0.
Combining the three cases, the theorem is proven.  □
Remark 2.
The theorem implies that the values of a biderivation on diagonal basis elements e x x can be non-zero only at boundary points of the poset, that is, at minimal or maximal elements. Intuitively, this follows from inserting b ( e x x , e x x ) between other basis elements and applying the commutation identities, which eliminate contributions from interior points: whenever a third comparable element exists, the corresponding terms vanish. Thus, the diagonal component of a biderivation is necessarily supported at the endpoints of the poset, providing the foundation for the extremal components constructed later.
Theorem 2.
Let P be a connected poset such that any maximal chain has at least three elements, and let r_1, r_2 ∈ R and x ≤ y, u ≤ v in P be such that x, y, u, v are pairwise comparable and x ≠ v, y ≠ u. Then the additive biderivation b satisfies b(r_1 e_{xy}, r_2 e_{uv}) = 0, except in the case where x = y ≠ u = v and one of x and u is a maximal element of P and the other is a minimal element.
Proof. 
Let r_1, r_2 ∈ R and x, y, u, v ∈ P satisfy the conditions of the theorem. We further divide this case into four subcases:
Case 1:
If x ≠ u and y ≠ v, except in the case where x = y ≠ u = v and one of x and u is the maximal element in P and the other is the minimal element, then by Corollary 3, we have
$$ b(r_1 e_{xy}, r_2 e_{uv}) = b_{xv}(r_1 e_{xy}, r_2 e_{uv})\, e_{xv} + b_{uy}(r_1 e_{xy}, r_2 e_{uv})\, e_{uy}. $$
We have b_{xv}(r_1 e_{xy}, r_2 e_{uv}) = b_{yv}(r_1 e_{yy}, r_2 e_{uv}) by using Lemma 3 in the first argument when x ≤ y < v. If v < y, this is also correct because b_{xv}(r_1 e_{xy}, r_2 e_{uv}) = b_{yv}(r_1 e_{yy}, r_2 e_{uv}) = 0 by Corollary 3. By the same reasoning in the second argument, we obtain b_{yv}(r_1 e_{yy}, r_2 e_{uv}) = b_{yu}(r_1 e_{yy}, r_2 e_{uu}). Without loss of generality, we can assume that y < u. It is evident that y is not minimal or u is not maximal; otherwise, the case is excluded. Thus, by Lemma 4 and Theorem 1, we have
$$ b_{yu}(r_1 e_{yy}, r_2 e_{uu}) = -\,r_1 r_2\, b_{yu}(e_{yy}, e_{yy}) = 0. $$
Similarly, b_{uy}(r_1 e_{xy}, r_2 e_{uv}) is also equal to 0. Therefore, we have b(r_1 e_{xy}, r_2 e_{uv}) = 0 in this case.
Case 2:
If x = u and y ≠ v, we consider two subcases: v < y and y < v.
(a)
If v < y, we have
$$ b(r_1 e_{xy}, r_2 e_{xv}) = \sum_{v < y \le q} b_{xq}(r_1 e_{xy}, r_2 e_{xv})\, e_{xq} \overset{\text{Lemma 3}}{=} \sum_{v < y \le q} b_{vq}(r_1 e_{xy}, r_2 e_{vv})\, e_{xq}. $$
Since x = u < v and y ≤ q, b_{vq}(r_1 e_{xy}, r_2 e_{vv}) = 0 by Lemma 2 and Corollary 2. Therefore, b(r_1 e_{xy}, r_2 e_{xv}) = 0.
(b)
If y < v , the proof is similar.
Case 3:
If x ≠ u and y = v.
The proof is similar to Case 2.
Case 4:
If x = u and y = v, implying x = u < y = v. By Corollary 3, we have
$$ b(r_1 e_{xy}, r_2 e_{xy}) = \sum_{p < x} b_{py}(r_1 e_{xy}, r_2 e_{xy})\, e_{py} + \sum_{y < q} b_{xq}(r_1 e_{xy}, r_2 e_{xy})\, e_{xq} + b_{xy}(r_1 e_{xy}, r_2 e_{xy})\, e_{xy}. $$
For any pair p < x, the term b_{py}(r_1 e_{xy}, r_2 e_{xy}) e_{py} is equal to b_{px}(r_1 e_{xx}, r_2 e_{xy}) e_{py} by Lemma 3. Considering its second argument, because p ≠ x and x ≠ y, it is equal to 0 by Corollary 2. In the same way, for any y < q, we have b_{xq}(r_1 e_{xy}, r_2 e_{xy}) e_{xq} = 0. Thus,
$$ b(r_1 e_{xy}, r_2 e_{xy}) = b_{xy}(r_1 e_{xy}, r_2 e_{xy})\, e_{xy}. $$
By the assumption on P, there exists an element z ≠ x, y such that z is comparable with x and y. We consider three cases:
  • If z < x < y, applying Lemma 7, we obtain
    $$ b(e_{zx}, e_{xx})\, [r_1 e_{xy}, r_2 e_{xy}] = [e_{zx}, e_{xx}]\, b(r_1 e_{xy}, r_2 e_{xy}). $$
    The left-hand side is zero, and the right-hand side equals b_{xy}(r_1 e_{xy}, r_2 e_{xy}) e_{zy}, implying b(r_1 e_{xy}, r_2 e_{xy}) = 0.
  • If x < y < z, the proof is similar to the previous case.
  • If x < z < y, applying Lemma 5 to the first argument, we have
    $$ b_{xy}(r_1 e_{xy}, r_2 e_{xy}) = b_{xz}(r_1 e_{xz}, r_2 e_{xy}) + b_{zy}(r_1 e_{zy}, r_2 e_{xy}) = 0, $$
    where the last equality follows by considering the second arguments of b_{xz}(r_1 e_{xz}, r_2 e_{xy}) and b_{zy}(r_1 e_{zy}, r_2 e_{xy}), respectively: because x < z < y, both vanish by Corollary 2. This implies b(r_1 e_{xy}, r_2 e_{xy}) = 0.
Considering all the above cases, the theorem is proved.  □
Remark 3.
The theorem shows—after a case-by-case analysis—that most configurations of comparable indices force the corresponding biderivation entries to vanish; only special “endpoint–endpoint” configurations, where one index is minimal and the other maximal, may yield nontrivial terms. The underlying mechanism is the repeated application of the commutation identity (Lemma 7) together with the vanishing on incomparable pairs: whenever an auxiliary comparable element exists, the relevant coefficients are forced to zero. Consequently, the theorem confines all possible non-zero patterns to the finite boundary situations that generate the extremal biderivations.
The next objective is to show that certain components of Equation (18) are equal. In particular, it will be demonstrated that b_{xy}(e_{xy}, e_{xx}) = b_{uv}(e_{uv}, e_{uu}) when x, y, u, v ∈ P satisfy specified conditions. Prior to this, two lemmas will be proven.
Lemma 8.
Let x < y < z in P; then
$$ b_{xy}(e_{xy}, e_{yy}) = b_{yz}(e_{yy}, e_{yz}) \quad \text{and} \quad b_{xy}(e_{yy}, e_{xy}) = b_{yz}(e_{yz}, e_{yy}). $$
Proof. 
For any x < y < z in P, applying Lemma 3 to the first or the second argument of b_{xz}(e_{xy}, e_{yz}) separately, we have
$$ b_{yz}(e_{yy}, e_{yz}) = b_{xz}(e_{xy}, e_{yz}) = b_{xy}(e_{xy}, e_{yy}). $$
Thus, we have b_{xy}(e_{xy}, e_{yy}) = b_{yz}(e_{yy}, e_{yz}). Using a similar process for b_{xz}(e_{yz}, e_{xy}), we obtain the other equality.  □
Lemma 9.
Let x < y < z in P; then
(1) 
b_{xy}(e_{xy}, e_{xx}) = b_{yz}(e_{yz}, e_{yy}) = b_{xz}(e_{xz}, e_{xx});
(2) 
b_{xy}(e_{xx}, e_{xy}) = b_{yz}(e_{yy}, e_{yz}) = b_{xz}(e_{xx}, e_{xz}).
Proof. 
Let x < y < z in P. For b_{xz}(e_{xz}, e_{xx}) and b_{xz}(e_{xz}, e_{zz}), using Lemma 5 for the first argument, we obtain
$$ b_{xz}(e_{xz}, e_{xx}) = b_{xy}(e_{xy}, e_{xx}) + b_{yz}(e_{yz}, e_{xx}) = b_{xy}(e_{xy}, e_{xx}); \qquad b_{xz}(e_{xz}, e_{zz}) = b_{xy}(e_{xy}, e_{zz}) + b_{yz}(e_{yz}, e_{zz}) = b_{yz}(e_{yz}, e_{zz}), \tag{43} $$
noting that b_{yz}(e_{yz}, e_{xx}) = b_{xy}(e_{xy}, e_{zz}) = 0 by Corollary 3. Additionally, we have b_{xz}(e_{xz}, e_{xx}) = −b_{xz}(e_{xz}, e_{zz}) by Lemma 4. Plugging the results from (43) into this, we find that
$$ b_{xy}(e_{xy}, e_{xx}) = -\,b_{yz}(e_{yz}, e_{zz}), $$
which, again by Lemma 4, implies b_{xy}(e_{xy}, e_{xx}) = b_{yz}(e_{yz}, e_{yy}). Then b_{xy}(e_{xy}, e_{xx}) = b_{yz}(e_{yz}, e_{yy}) = b_{xz}(e_{xz}, e_{xx}). Similarly, we can obtain the other equality.  □
Corollary 4.
Let x < y in P; then b_{xy}(e_{xy}, e_{xx}) = −b_{xy}(e_{xx}, e_{xy}).
Proof. 
Let x < y in P. By the assumption that any maximal chain in P has at least three elements, there exists z ∈ P with z ≠ x, y that is comparable with x and y. We consider three cases.
If x < y < z, according to Lemma 8, we have
$$ b_{xy}(e_{xy}, e_{yy}) = b_{yz}(e_{yy}, e_{yz}). \tag{45} $$
From Lemma 9, we have
$$ b_{xy}(e_{xx}, e_{xy}) = b_{yz}(e_{yy}, e_{yz}). \tag{46} $$
Considering (45) and (46) together with Lemma 4, we have
$$ b_{xy}(e_{xy}, e_{xx}) = -\,b_{xy}(e_{xy}, e_{yy}) = -\,b_{yz}(e_{yy}, e_{yz}) = -\,b_{xy}(e_{xx}, e_{xy}). $$
For the remaining two cases x < z < y and z < x < y, a similar process yields the same result.  □
Theorem 3.
Let x < y and u < v in P_j, j ∈ J, where J is the index set defined in (10); then
$$ b_{xy}(e_{xy}, e_{xx}) = b_{uv}(e_{uv}, e_{uu}). \tag{48} $$
Proof. 
Let x < y and u < v in P_j, j ∈ J. If P_j is a totally ordered set, it is evident from Lemma 9 that b_{xy}(e_{xy}, e_{xx}) = b_{uv}(e_{uv}, e_{uu}). If P_j is not a totally ordered set, then, according to the construction of P_j, there exist chains l_0, l_1, …, l_n ∈ L_j defined in (10) such that x, y ∈ l_0, u, v ∈ l_n, and l_i ∼ l_{i+1} for any i ∈ {0, 1, …, n−1}. Therefore, there exist elements x_i < y_i in l_i ∩ l_{i+1}. Because each l_i is a totally ordered set, using Lemma 9, we thus have
$$ b_{xy}(e_{xy}, e_{xx}) = b_{x_0 y_0}(e_{x_0 y_0}, e_{x_0 x_0}) = \cdots = b_{x_{n-1} y_{n-1}}(e_{x_{n-1} y_{n-1}}, e_{x_{n-1} x_{n-1}}) = b_{uv}(e_{uv}, e_{uu}). $$
This proves (48).  □
Remark 4.
The theorem shows that a certain family of coefficients (for instance, b x y ( e x y , e x x ) ) remains constant along any connected chain component. Intuitively, this follows from propagating local equalities along adjacent links of a chain using Lemmas 8 and 9, so that the local relations, through this chain-wise propagation, ultimately yield a global constancy on the entire component. Consequently, one can associate a single scalar λ j with each chain component, and these scalars serve as the weights for the inner biderivation contributions.
For any j ∈ J, let x_j < y_j in P_j, and define
$$ \lambda_j = b_{x_j y_j}(e_{x_j x_j}, e_{x_j y_j}) = -\,b_{x_j y_j}(e_{x_j y_j}, e_{x_j x_j}). \tag{50} $$
A crucial conclusion is that for any pair u < v in P_j, where j ∈ J, we have b_{uv}(e_{uu}, e_{uv}) = λ_j and b_{uv}(e_{uv}, e_{uu}) = −λ_j, as demonstrated by the aforementioned theorem together with Corollary 4.
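As a quick consistency check (added here, and assuming nothing beyond the definitions above), suppose b is the inner biderivation b(α, β) = λ[α, β] for some λ ∈ R. Then for any x < y,
$$ \lambda_j = b_{xy}(e_{xx}, e_{xy}) = \lambda\,\bigl(e_{xx}e_{xy} - e_{xy}e_{xx}\bigr)(x, y) = \lambda\, e_{xy}(x, y) = \lambda, \qquad b_{xy}(e_{xy}, e_{xx}) = \lambda\,\bigl(e_{xy}e_{xx} - e_{xx}e_{xy}\bigr)(x, y) = -\lambda, $$
so the two expressions in (50) indeed differ by a sign, in agreement with Corollary 4, and the weight λ_j recovers the scalar λ of the inner biderivation.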
Now that the requisite preparations have been completed, we may proceed with the proof of the final theorem.
Theorem 4.
Let R be a commutative ring with unity, and let P be a locally finite poset such that any maximal chain in P contains at least three elements. Then every additive biderivation b of the incidence algebra I(P, R) is the sum of several inner biderivations and extremal biderivations.
Proof. 
We proceed by first considering the case where P is connected. The general case will follow by extending this result to each connected component of P.
Case 1: P is connected.
By the decomposition (12), we can write P as a union of subsets P_j:
$$ P = \bigcup_{j \in J} P_j. $$
For any α, β ∈ I(P, R), we can decompose them as follows:
$$ \alpha = \sum_{j \in J} \alpha_j + \alpha_D, \quad \text{where } \alpha_j = \sum_{x < y \in P_j} \alpha_{xy}\, e_{xy}, \quad \alpha_D = \sum_{z \in P} \alpha_{zz}\, e_{zz}, $$
$$ \beta = \sum_{j \in J} \beta_j + \beta_D, \quad \text{where } \beta_j = \sum_{u < v \in P_j} \beta_{uv}\, e_{uv}, \quad \beta_D = \sum_{w \in P} \beta_{ww}\, e_{ww}. $$
Using these decompositions, we expand the biderivation b(α, β):
$$ b(\alpha, \beta) = b(\alpha_D, \beta_D) + \sum_{j, j' \in J} b(\alpha_j, \beta_{j'}) + \sum_{j \in J} b(\alpha_j, \beta_D) + \sum_{j \in J} b(\alpha_D, \beta_j). \tag{53} $$
I.
Evaluating b(α_D, β_D):
Consider
$$ b(\alpha_D, \beta_D) = \sum_{z, w \in P} b(\alpha_{zz} e_{zz}, \beta_{ww} e_{ww}). $$
From Lemma 6 and Theorems 1 and 2, we have
$$ b(\alpha_D, \beta_D) = \sum_{z \sim w \in P} b(\alpha_{zz} e_{zz}, \beta_{ww} e_{ww}) = \sum_{\substack{z < w \\ z\,\min,\ w\,\max}} \bigl( b(\alpha_{zz} e_{zz}, \beta_{ww} e_{ww}) + b(\alpha_{ww} e_{ww}, \beta_{zz} e_{zz}) \bigr) + \sum_{z\ \min\text{ or }\max} b(\alpha_{zz} e_{zz}, \beta_{zz} e_{zz}). \tag{54} $$
By Corollary 3 and Lemma 4, the first part of (54) can be expressed as
$$ \sum_{\substack{z < w \\ z\,\min,\ w\,\max}} \bigl( b(\alpha_{zz} e_{zz}, \beta_{ww} e_{ww}) + b(\alpha_{ww} e_{ww}, \beta_{zz} e_{zz}) \bigr) = \sum_{\substack{z < w \\ z\,\min,\ w\,\max}} \bigl( \alpha_{zz}\beta_{ww}\, b_{zw}(e_{zz}, e_{ww}) + \alpha_{ww}\beta_{zz}\, b_{zw}(e_{ww}, e_{zz}) \bigr) e_{zw} = \sum_{\substack{z < w \\ z\,\min,\ w\,\max}} (\alpha_{zz}\beta_{ww} + \alpha_{ww}\beta_{zz})\, b_{zw}(e_{zz}, e_{ww})\, e_{zw}. \tag{55} $$
By Theorem 1 and Lemma 4, the other part of (54) can be expressed as
$$ \sum_{z\ \min\text{ or }\max} b(\alpha_{zz} e_{zz}, \beta_{zz} e_{zz}) = \sum_{\substack{z < w \\ z\,\min,\ w\,\max}} \bigl( \alpha_{zz}\beta_{zz}\, b_{zw}(e_{zz}, e_{zz}) + \alpha_{ww}\beta_{ww}\, b_{zw}(e_{ww}, e_{ww}) \bigr) e_{zw} = -\sum_{\substack{z < w \\ z\,\min,\ w\,\max}} (\alpha_{zz}\beta_{zz} + \alpha_{ww}\beta_{ww})\, b_{zw}(e_{zz}, e_{ww})\, e_{zw}. \tag{56} $$
Substituting (55) and (56) back into Equation (54), we obtain
$$ b(\alpha_D, \beta_D) = \sum_{\substack{z < w \in P \\ z\,\min,\ w\,\max}} (\alpha_{zz}\beta_{ww} + \alpha_{ww}\beta_{zz} - \alpha_{zz}\beta_{zz} - \alpha_{ww}\beta_{ww})\, b_{zw}(e_{zz}, e_{ww})\, e_{zw} = \sum_{\substack{z < w \in P \\ z\,\min,\ w\,\max}} \bigl[ \alpha_{zz}e_{zz} + \alpha_{ww}e_{ww},\ [\beta_{zz}e_{zz} + \beta_{ww}e_{ww},\ -b_{zw}(e_{zz}, e_{ww})\, e_{zw}] \bigr] = [\hat{\alpha}, [\hat{\beta}, T]], \tag{57} $$
where
$$ \hat{\alpha} = \sum_{z\ \min\text{ or }\max} \alpha_{zz}\, e_{zz}, \qquad \hat{\beta} = \sum_{z\ \min\text{ or }\max} \beta_{zz}\, e_{zz}, \qquad T = -\sum_{\substack{z < w \\ z\,\min,\ w\,\max}} b_{zw}(e_{zz}, e_{ww})\, e_{zw}. $$
II.
Evaluating ∑_{j ≠ j′ ∈ J} b(α_j, β_{j′}):
we have
$$ \sum_{j \ne j' \in J} b(\alpha_j, \beta_{j'}) = \sum_{j \ne j' \in J}\ \sum_{x < y \in P_j}\ \sum_{u < v \in P_{j'}} b(\alpha_{xy} e_{xy}, \beta_{uv} e_{uv}). $$
Consider elements x < y in P_j and u < v in P_{j′} such that each pair among x, y, u, v is comparable. Then there exists a maximal chain l = {x, y, u, v}^+ in P that contains x, y, u, v. There also exist l′ ∈ L_j and l″ ∈ L_{j′}, where L_j and L_{j′} are defined in (10), such that x, y ∈ l′ and u, v ∈ l″. It is obvious that l′ ∼ l ∼ l″, so l would belong to both L_j and L_{j′}. Thus, we get x, y, u, v ∈ l ⊆ P_j ∩ P_{j′}, so the pair x < y lies in P_j ∩ P_{j′}, which contradicts Lemma 1.
Hence, some pair of elements among x, y, u, v is not comparable, and by Lemma 6, it follows that
$$ b(\alpha_{xy} e_{xy}, \beta_{uv} e_{uv}) = 0. $$
Therefore,
$$ \sum_{j \ne j' \in J} b(\alpha_j, \beta_{j'}) = 0. \tag{58} $$
III.
Evaluating ∑_{j ∈ J} b(α_j, β_j):
we have
$$ \sum_{j \in J} b(\alpha_j, \beta_j) = \sum_{j \in J}\ \sum_{x < y,\ u < v \in P_j} b(\alpha_{xy} e_{xy}, \beta_{uv} e_{uv}). $$
From Lemma 6 and Theorem 2, the term b(α_{xy} e_{xy}, β_{uv} e_{uv}) in the above equation is non-zero only when every pair among x, y, u, v is comparable and [e_{xy}, e_{uv}] ≠ 0, which implies either x < y = u < v or u < v = x < y. Thus, the sum simplifies to
$$ \sum_{j \in J} b(\alpha_j, \beta_j) = \sum_{j \in J}\ \sum_{x < y < z \in P_j} \bigl( b(\alpha_{xy} e_{xy}, \beta_{yz} e_{yz}) + b(\alpha_{yz} e_{yz}, \beta_{xy} e_{xy}) \bigr) = \sum_{j \in J}\ \sum_{x < y < z \in P_j} \bigl( b_{xz}(\alpha_{xy} e_{xy}, \beta_{yz} e_{yz}) + b_{xz}(\alpha_{yz} e_{yz}, \beta_{xy} e_{xy}) \bigr) e_{xz}. \tag{59} $$
Applying Lemmas 3, 8 and 9 together with Theorem 3, we obtain
$$ b_{xz}(\alpha_{xy} e_{xy}, \beta_{yz} e_{yz})\, e_{xz} + b_{xz}(\alpha_{yz} e_{yz}, \beta_{xy} e_{xy})\, e_{xz} = \bigl( \alpha_{xy}\beta_{yz}\, b_{xy}(e_{xy}, e_{yy}) + \alpha_{yz}\beta_{xy}\, b_{yz}(e_{yz}, e_{yy}) \bigr) e_{xz} = \lambda_j\, (\alpha_{xy}\beta_{yz} - \alpha_{yz}\beta_{xy})\, e_{xz}, $$
where λ_j is defined by (50). Consequently, Equation (59) becomes
$$ \sum_{j \in J} b(\alpha_j, \beta_j) = \sum_{j \in J} \lambda_j \sum_{x < y < z \in P_j} (\alpha_{xy}\beta_{yz} - \alpha_{yz}\beta_{xy})\, e_{xz}. \tag{61} $$
IV.
Evaluating ∑_{j ∈ J} b(α_j, β_D) + ∑_{j ∈ J} b(α_D, β_j):
First, consider b(α_j, β_D):
$$ b(\alpha_j, \beta_D) = \sum_{x < z \in P_j} b(\alpha_{xz} e_{xz}, \beta_D). $$
From Lemma 6 and Theorem 2, b(α_{xz} e_{xz}, β_{ww} e_{ww}) = 0 unless w = x or w = z. Therefore,
$$ b(\alpha_j, \beta_D) = \sum_{x < z \in P_j} \bigl( \alpha_{xz}\beta_{xx}\, b_{xz}(e_{xz}, e_{xx}) + \alpha_{xz}\beta_{zz}\, b_{xz}(e_{xz}, e_{zz}) \bigr) e_{xz} = \sum_{x < z \in P_j} (\alpha_{xz}\beta_{xx} - \alpha_{xz}\beta_{zz})\, b_{xz}(e_{xz}, e_{xx})\, e_{xz} = \lambda_j \sum_{x < z \in P_j} (\alpha_{xz}\beta_{zz} - \alpha_{xz}\beta_{xx})\, e_{xz}, $$
using Lemma 4 and the fact that b_{xz}(e_{xz}, e_{xx}) = −λ_j. Similarly, for b(α_D, β_j), we obtain
$$ b(\alpha_D, \beta_j) = \lambda_j \sum_{x < z \in P_j} (\alpha_{xx}\beta_{xz} - \alpha_{zz}\beta_{xz})\, e_{xz}. $$
Combining these results, we have
$$ \sum_{j \in J} b(\alpha_j, \beta_D) + \sum_{j \in J} b(\alpha_D, \beta_j) = \sum_{j \in J} \lambda_j \sum_{x < z \in P_j} (\alpha_{xz}\beta_{zz} - \alpha_{xz}\beta_{xx} + \alpha_{xx}\beta_{xz} - \alpha_{zz}\beta_{xz})\, e_{xz}. \tag{63} $$
Combining All Components:
Substituting Equations (57), (58), (61), and (63) into Equation (53), we obtain
$$ b(\alpha, \beta) - b(\alpha_D, \beta_D) = \sum_{j \in J} \lambda_j \Bigl( \sum_{x < y < z \in P_j} (\alpha_{xy}\beta_{yz} - \alpha_{yz}\beta_{xy})\, e_{xz} + \sum_{x < z \in P_j} (\alpha_{xz}\beta_{zz} - \alpha_{xz}\beta_{xx} + \alpha_{xx}\beta_{xz} - \alpha_{zz}\beta_{xz})\, e_{xz} \Bigr) = \sum_{j \in J} \lambda_j \sum_{x \le y \le z \in P_j} (\alpha_{xy}\beta_{yz} - \alpha_{yz}\beta_{xy})\, e_{xz} = \sum_{j \in J} \lambda_j\, [\tilde{\alpha}_j, \tilde{\beta}_j], $$
$$ \text{where } \tilde{\alpha}_j = \sum_{x \le y \in P_j} \alpha_{xy}\, e_{xy}, \qquad \tilde{\beta}_j = \sum_{x \le y \in P_j} \beta_{xy}\, e_{xy}. $$
Combining the above with (57), we thus deduce the following result:
$$ b(\alpha, \beta) = \sum_{j \in J} \lambda_j\, [\tilde{\alpha}_j, \tilde{\beta}_j] + [\hat{\alpha}, [\hat{\beta}, T]]. $$
Case 2: P is not connected.
When P is disconnected, we utilize the decomposition (7):
$$ P = \bigcup_{i \in I} P^i, $$
where each P^i is a connected component of P. Further, each connected component P^i can be decomposed as
$$ P^i = \bigcup_{j \in J_i} P^i_j $$
using decomposition (11).
For any α, β ∈ I(P, R), write
$$ \alpha = \sum_{i \in I} \alpha^i \quad \text{and} \quad \beta = \sum_{i \in I} \beta^i, $$
where α^i, β^i ∈ I(P^i, R), as per decomposition (8).
By Equation (24), the biderivation b satisfies
$$ b(\alpha, \beta) = \sum_{i \in I} b(\alpha^i, \beta^i). $$
Since each P^i is connected, applying the result from Case 1, we obtain
$$ b(\alpha^i, \beta^i) = \sum_{j \in J_i} \lambda^i_j\, [\tilde{\alpha}^i_j, \tilde{\beta}^i_j] + [\hat{\alpha}^i, [\hat{\beta}^i, T^i]], $$
where λ^i_j, the restricted elements and T^i are defined as in Case 1, relative to the component P^i. Therefore, combining all components, we conclude that
$$ b(\alpha, \beta) = \sum_{i \in I} \Bigl( \sum_{j \in J_i} \lambda^i_j\, [\tilde{\alpha}^i_j, \tilde{\beta}^i_j] + [\hat{\alpha}^i, [\hat{\beta}^i, T^i]] \Bigr), $$
as desired. It is evident that b is the sum of several inner biderivations and extremal biderivations.  □
Remark 5.
The decomposition in the main theorem can be understood in three intuitive steps. First, Lemma 6 reduces the problem to comparable pairs, so the analysis localizes to connected chain components of the poset. Second, on each chain component, the constancy result (Theorem 3) produces scalar weights that result in the inner biderivation terms. Third, Theorems 1 and 2 show that only finite maximal chains produce additional terms that cannot be absorbed into inner parts; these survive as [ α ^ i , [ β ^ i , T i ] ] , i.e., the extremal biderivations. Thus, the "inner + extremal" decomposition reflects a structural constraint imposed by the chain decomposition of the poset.
It is evident that if there is no pair x < y in P with x a minimal element and y a maximal element, then T^i = 0 for every i, so the extremal parts vanish. Consequently, the next corollary holds.
Corollary 5.
Let P be a locally finite poset, and let R be a commutative ring with unity. If the number of elements in any maximal chain in P is infinite, then every additive biderivation of the incidence algebra of P over R is the sum of several inner biderivations.
Example 4.
We conclude by describing the structure of an arbitrary additive biderivation b on I(S^1, R), where S^1 is the poset introduced in Example 3. Recall that
$$ S^1 = \bigcup_{j \in J_1} S^1_j, \qquad J_1 = \{1, 2\}, \qquad S^1_1 = \{a, b, d\}, \qquad S^1_2 = \{a, c, e, f\}. $$
For any α, β ∈ I(S^1, R), the biderivation takes the form
$$ b(\alpha, \beta) = \lambda^1_1\, [\tilde{\alpha}^1_1, \tilde{\beta}^1_1] + \lambda^1_2\, [\tilde{\alpha}^1_2, \tilde{\beta}^1_2] + [\hat{\alpha}^1, [\hat{\beta}^1, T^1]], $$
where
$$ \tilde{\alpha}^1_j = \sum_{x \le y \in S^1_j} \alpha_{xy}\, e_{xy}, \qquad \tilde{\beta}^1_j = \sum_{x \le y \in S^1_j} \beta_{xy}\, e_{xy}, \qquad \hat{\alpha}^1 = \sum_{z \in S^1\ \text{min or max}} \alpha_{zz}\, e_{zz}, \qquad \hat{\beta}^1 = \sum_{z \in S^1\ \text{min or max}} \beta_{zz}\, e_{zz}, \qquad T^1 = -\sum_{\substack{z < w \in S^1 \\ z\ \text{min},\ w\ \text{max}}} b_{zw}(e_{zz}, e_{ww})\, e_{zw}. $$
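Finally, the decomposition of Theorem 4 can be tested numerically on a small example. The sketch below is illustrative only: it uses the chain 0 < 1 < 2 rather than S^1, and all helper names are ad hoc. It builds a biderivation b = λ[·, ·] + [·, [·, γ]] with γ supported on the (minimal, maximal) position, extracts λ_1, the hat parts and T as prescribed above, and confirms that b(α, β) = λ_1[α, β] + [α̂, [β̂, T]] on random inputs (for a chain there is a single chain component, so the restricted element coincides with α itself).

```python
import numpy as np

# Numerical check of the decomposition of Theorem 4 on the chain 0 < 1 < 2 over Z.
n = 3
leq = np.array([[x <= y for y in range(n)] for x in range(n)])
rng = np.random.default_rng(4)

def rand():
    return rng.integers(-5, 6, size=(n, n)) * leq

def comm(a, c):
    return a @ c - c @ a

def e(x, y):
    m = np.zeros((n, n), dtype=int)
    m[x, y] = 1
    return m

lam, gamma = 3, 5 * e(0, 2)        # gamma sits on the (minimal, maximal) position

def b(u, v):                       # the biderivation to be decomposed
    return lam * comm(u, v) + comm(u, comm(v, gamma))

# Extract the data of Theorem 4.  Here P has one chain component P_1 = P,
# with minimal element 0 and maximal element 2.
lam1 = b(e(0, 0), e(0, 1))[0, 1]            # lambda_1 = b_{xy}(e_{xx}, e_{xy})
T = -b(e(0, 0), e(2, 2))[0, 2] * e(0, 2)    # T = -sum b_{zw}(e_{zz}, e_{ww}) e_{zw}

def hat(u):                                  # diagonal part at minimal/maximal elements
    return u[0, 0] * e(0, 0) + u[2, 2] * e(2, 2)

for _ in range(100):
    al, be = rand(), rand()
    reconstructed = lam1 * comm(al, be) + comm(hat(al), comm(hat(be), T))
    assert np.array_equal(b(al, be), reconstructed)
print("Theorem 4's decomposition reproduces b on 100 random pairs; lambda_1 =", lam1)
```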

Author Contributions

Writing—original draft, Z.G.; Writing—review and editing, C.Z.; Supervision, C.Z.; Project administration, C.Z.; Funding acquisition, C.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by The National Natural Science Foundation of China grant number 12101111.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Rota, G.-C. On the foundations of combinatorial theory: I. theory of Möbius functions. In Classic Papers in Combinatorics; Springer: Berlin/Heidelberg, Germany, 1964; pp. 332–360. [Google Scholar]
  2. Haiman, M.; Schmitt, W. Incidence algebra antipodes and Lagrange inversion in one and several variables. J. Comb. Theory Ser. A 1989, 50, 172–185. [Google Scholar] [CrossRef]
  3. Spiegel, E.; O’Donnell, C. Incidence Algebras; CRC Press: Boca Raton, FL, USA, 1997; Volume 206. [Google Scholar]
  4. Stanley, R.P. Structure of incidence algebras and their automorphism groups. Bull. Am. Math. Soc. 1970, 76, 1236–1239. [Google Scholar] [CrossRef]
  5. Pierre, L.; John, S. Structure of incidence algebras of graphs. Commun. Algebra 1981, 9, 1479–1517. [Google Scholar] [CrossRef]
  6. Cibils, C. Cohomology of incidence algebras and simplicial complexes. J. Pure Appl. Algebra 1989, 56, 221–232. [Google Scholar] [CrossRef]
  7. Schmitt, W.R. Incidence Hopf algebras. J. Pure Appl. Algebra 1994, 96, 299–330. [Google Scholar] [CrossRef]
  8. Xiao, Z. Jordan derivations of incidence algebras. Rocky Mt. J. Math. 2015, 45, 1357–1368. [Google Scholar] [CrossRef]
  9. Baclawski, K. Automorphisms and derivations of incidence algebras. Proc. Am. Math. Soc. 1972, 36, 351–356. [Google Scholar] [CrossRef]
  10. Yang, Y. Nonlinear derivations of incidence algebras. Acta Math. Hung. 2020, 162, 52–61. [Google Scholar] [CrossRef]
  11. Fornaroli, É.Z.; Khrypchenko, M. Skew derivations of incidence algebras. Collect. Math. 2025, 76, 113–132. [Google Scholar] [CrossRef]
  12. Yang, Y. Nonlinear Lie derivations of incidence algebras. Oper. Matrices 2021, 15, 275–292. [Google Scholar] [CrossRef]
  13. Zhang, X.; Khrypchenko, M. Lie derivations of incidence algebras. Linear Algebra Its Appl. 2017, 513, 69–83. [Google Scholar] [CrossRef]
  14. Kaygorodov, I.; Khrypchenko, M. Poisson structures on finitary incidence algebras. J. Algebra 2021, 578, 402–420. [Google Scholar] [CrossRef]
  15. Benkovič, D. Biderivations of triangular algebras. Linear Algebra Its Appl. 2009, 431, 1587–1602. [Google Scholar] [CrossRef]
  16. Ghosseiri, N.M. On biderivations of upper triangular matrix rings. Linear Algebra Its Appl. 2013, 438, 250–260. [Google Scholar] [CrossRef]
  17. Brešar, M. Commuting maps: A survey. Taiwan J. Math. 2004, 8, 361–397. [Google Scholar] [CrossRef]