1. Introduction
The study of determinants continues as a relevant mathematical area pursued for its utility in various fields of mathematics, engineering, and the sciences. The invention of the determinant as a unique number associated with a square array of numbers by Seki Kowa in the late 17th century predated the introduction of matrices in the 19th century by James Sylvester and Charles Dodgson (who preferred the term blocks) and, consequently, the formal study of linear algebra [1,2]. With the development of linear algebra, determinants and matrices have become fundamental and significant mathematical tools due to their efficiency in representing quantities in compact form. The solution of systems of linear equations, matrix inversion, computation of areas, and representation of complicated expressions in the applied sciences and in mechanical and electrical systems can all be efficiently modeled by determinants and matrices [3,4,5]. Consequently, exploring various methods of computing determinants, whether of explicit numbers or of symbolic representations, has become a significant mathematical pursuit [4,6].
The methods of computing determinants usually presented in elementary linear algebra texts for instructional purposes include the application of Leibniz's definition of determinants, Laplace's co-factor expansion, LU decomposition, and reduction by elementary row or column operations [5,7,8]. The research literature also offers a plethora of methods, such as Chio's condensation, the QR decomposition method, Cholesky decomposition, Dodgson's condensation method, Hajrizaj's method, Salihu and Gjonbalaj's method, and Sobamowo's extension of Sarrus' rule to n × n matrices [2,4,9,10,11]. Other methods are proposed with the aim of reducing built-in computational errors or of handling specialized matrix forms; these include division-free algorithms for computing the determinants of quasi-tridiagonal matrices [12,13], determinant formulas for special matrices involving symmetry, such as block matrices [14], breakdown-free algorithms for computing the determinants of periodic tridiagonal matrices [15], and block diagonalization-based algorithms for block k-tridiagonal matrices [16].
While computer programs provide the fastest, most efficient, and highly accurate platforms for computing determinants, there are significant reasons why manually computing determinants remains essential [4]. In low-technology environments, or in controlled environments such as licensure certifications where the use of advanced technological tools is restricted, one must rely on computational fluency to carry out problem solving. Hence, familiarity with and mastery of efficient methods of manual computation are paramount. The goal of this paper is to develop a straightforward and efficient algorithm for the manual computation of determinants with pedagogical implications in mind. Mathematics education entails the development of procedural knowledge, or action sequences for solving problems, as well as conceptual knowledge, or an understanding of the principles that govern relationships between pieces of knowledge [17]. With an efficient method that promotes computational fluency through optimized instructional time and cognitive effort, greater appreciation and deeper understanding of the underlying concepts is likely to be facilitated.
1.1. Determinants by Definition
Definition 1. The determinant of an $n \times n$ matrix $A$, denoted by $\det(A)$ or $|A|$, is defined as
$$\det(A) = \sum \pm\, a_{1j_1}a_{2j_2}\cdots a_{nj_n},$$
where $(j_1, j_2, \ldots, j_n)$ are column numbers derived from the permutations of the set $\{1, 2, \ldots, n\}$.
The sign is taken as $+$ for even permutations and $-$ for odd permutations [5,7]. Accordingly, the determinants of simple matrices can be readily computed. The determinant of a single-entry matrix equals the entry itself, $\det[a_{11}] = a_{11}$. The butterfly method for $2 \times 2$ matrices is so called because the movement of the computation suggests butterfly wings, that is, $\det(A) = a_{11}a_{22} - a_{12}a_{21}$. By Definition 1, the determinant of a $3 \times 3$ matrix is expanded as
$$\det(A) = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33}.$$
Sarrus' rule is derived from this expansion: by extending the first two columns of the matrix to the right, the downward arrows give the positive terms and the upward arrows give the negative terms in the expansion of the formula. Applying the definition when computing the determinant of an $n \times n$ matrix requires $n!$ terms, each containing $n$ factors involving $n-1$ multiplications.
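To make Definition 1 concrete, the following is a minimal sketch of a brute-force evaluation over all $n!$ permutations (illustrative code, not part of the original paper; function names are assumptions):

```python
# Brute-force determinant by Definition 1 (Leibniz expansion): sum over all n!
# permutations of signed products. Illustrative sketch only.
from itertools import permutations

def perm_sign(perm):
    """+1 for an even permutation, -1 for an odd one (counted by inversions)."""
    inversions = sum(1 for i in range(len(perm))
                     for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return 1 if inversions % 2 == 0 else -1

def det_by_definition(a):
    n = len(a)
    total = 0
    for cols in permutations(range(n)):
        prod = 1
        for row, col in enumerate(cols):
            prod *= a[row][col]          # one factor from each row, columns given by the permutation
        total += perm_sign(cols) * prod
    return total

print(det_by_definition([[1, 2], [3, 4]]))                    # -2, the "butterfly" 2 x 2 case
print(det_by_definition([[2, 1, 5], [2, 3, 2], [1, -1, 4]]))  # agrees with Sarrus' rule
```

The factorial growth in the number of terms is exactly why the methods discussed next are preferred for larger matrices.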
1.2. Determinants by Co-factor Expansion
In the computation of determinants by co-factor expansion, two concepts defined below are essential.
Definition 2. The minor $M_{ij}$ of an entry $a_{ij}$ of an $n \times n$ matrix $A$ is the determinant of the $(n-1)\times(n-1)$ submatrix of $A$ obtained by deleting the $i$th row and $j$th column of $A$. The co-factor of $a_{ij}$ is a real number denoted by $C_{ij}$ and defined by $C_{ij} = (-1)^{i+j}M_{ij}$.
After computing the minors and their associated co-factors, we compute the determinant by expanding about the $i$th row (the row containing the entries $a_{i1}, a_{i2}, \ldots, a_{in}$):
$$\det(A) = a_{i1}C_{i1} + a_{i2}C_{i2} + \cdots + a_{in}C_{in},$$
or about the $j$th column:
$$\det(A) = a_{1j}C_{1j} + a_{2j}C_{2j} + \cdots + a_{nj}C_{nj}.$$
In co-factor expansion, we “expand” the $n \times n$ matrix, about a certain row or column, into $n$ submatrices, each with $n-1$ rows and $n-1$ columns. This means that, at the first expansion, we compute $n$ minors and $n$ co-factors. For larger matrices, we may not readily compute the minors; we have to perform a second expansion of each $(n-1)\times(n-1)$ submatrix into $n-1$ submatrices, each with $n-2$ rows and $n-2$ columns. The process is repeated over the resulting submatrices until smaller submatrices with readily computed determinants are obtained. The determinant of the matrix is then computed by working backward.
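A minimal sketch of this repeated expansion in code (expanding about the first row at every level; illustrative names, not the authors' implementation):

```python
# Recursive co-factor expansion about the first row: n submatrices are formed at
# the first level, n-1 at the next, and so on. Illustrative sketch only.
def det_by_cofactors(a):
    n = len(a)
    if n == 1:
        return a[0][0]                                    # single-entry matrix
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in a[1:]]  # delete row 1 and column j+1
        total += (-1) ** j * a[0][j] * det_by_cofactors(minor)
    return total

print(det_by_cofactors([[2, 1, 5, 2], [2, 3, 2, 3], [1, -1, 4, 2], [1, 2, 4, 1]]))  # -24
```

The $4 \times 4$ matrix used here is the one verified later in Illustration 1.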
1.3. Determinants by Elementary Row (Column) Operations
The Gaussian method of computing the determinants employs elementary row (column) operations to put the matrix into upper or lower triangular form, in which the resulting entries in the main diagonal yield the determinant. The three elementary row (column) operations and their effects on the determinant of a given matrix are stated as follows. For proofs of the theorems, see [5,7].
Type 1. Multiplying a row (column) through by a nonzero constant.
Type 2. Interchanging two rows (columns).
Type 3. Adding a constant times one row (column) to another.
Theorem 1. If $B$ is the matrix that results when a single row (column) of $A$ is multiplied by a scalar $k$, then $\det(B) = k\det(A)$.
Theorem 2. If $B$ is the matrix that results when two rows (columns) of $A$ are interchanged, then $\det(B) = -\det(A)$.
Theorem 3. If $B$ is the matrix that results when a multiple of one row (column) of $A$ is added to another row (column), then $\det(B) = \det(A)$.
By reducing a matrix into upper or lower triangular form and applying co-factor expansion, the determinant of the matrix is determined by the product of the entries in the main diagonal, as stated in the following theorem.
Theorem 4. If $A = [a_{ij}]$ is an $n \times n$ upper (lower) triangular matrix, then $\det(A) = a_{11}a_{22}\cdots a_{nn}$; that is, the determinant of a triangular matrix is the product of the elements on the main diagonal.
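A minimal sketch of this Gaussian approach (Theorems 1–4) in code, using exact fractions so that the Type 3 operations introduce no rounding (illustrative names, not the paper's):

```python
# Reduce to upper triangular form with row swaps and Type 3 operations, then
# multiply the diagonal entries (Theorem 4). Illustrative sketch only.
from fractions import Fraction

def det_by_row_reduction(a):
    m = [[Fraction(x) for x in row] for row in a]
    n, sign = len(m), 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)                 # no pivot available: determinant is 0
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            sign = -sign                       # Theorem 2: a row swap flips the sign
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= factor * m[col][c]  # Theorem 3: determinant unchanged
    prod = Fraction(1)
    for i in range(n):
        prod *= m[i][i]                        # Theorem 4: product of the diagonal
    return sign * prod

print(det_by_row_reduction([[2, 1, 5, 2], [2, 3, 2, 3], [1, -1, 4, 2], [1, 2, 4, 1]]))  # -24
```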
1.4. Dodgson’s Condensation Method
Dodgson's condensation method, attributed to the English clergyman Rev. Charles Lutwidge Dodgson (1832–1898), famously known as Lewis Carroll for his literary works Alice in Wonderland and Through the Looking Glass, is found to be more efficient than Leibniz's definition, especially for larger matrices [2,18,19]. Dodgson's method aims to ‘condense’ the determinant by producing an $(n-1)\times(n-1)$ matrix from an $n \times n$ matrix, followed by an $(n-2)\times(n-2)$ matrix, until a $1 \times 1$ matrix is obtained. The condensation method is founded on Jacobi's theorem [19].
Theorem 5. Jacobi's Theorem. Let $A$ be an $n \times n$ matrix, let $M$ be an $m \times m$ minor of $A$, where $m < n$, let $M'$ be the corresponding minor of the adjugate of $A$, and let $M^*$ be the complementary $(n-m)\times(n-m)$ minor of $M$ in $A$. Then:
$$M' = \det(A)^{m-1}\,M^*.$$
With $m = 2$ and the minors taken at the corners, Dodgson realized that the determinant of $A$ can be readily computed with
$$\det(A)\,\det(A^{\mathrm{int}}) = \det(A_{\mathrm{NW}})\det(A_{\mathrm{SE}}) - \det(A_{\mathrm{NE}})\det(A_{\mathrm{SW}}),$$
where $A^{\mathrm{int}}$ is the interior of $A$ and $A_{\mathrm{NW}}, A_{\mathrm{NE}}, A_{\mathrm{SW}}, A_{\mathrm{SE}}$ are the $(n-1)\times(n-1)$ submatrices of $A$ obtained by deleting the last or first row and column accordingly.
His algorithm consists of the following steps.
1. Check the interior of $A$ for zero entries. The interior of $A$ is the $(n-2)\times(n-2)$ matrix that remains when the first row, last row, first column, and last column of $A$ are deleted. We perform elementary row operations to remove all zeros from the interior of $A$.
2. Compute the determinant of every four adjacent entries to form a new $(n-1)\times(n-1)$ matrix $B$.
3. Repeat Step 2 to produce an $(n-2)\times(n-2)$ matrix. We then divide each entry by the corresponding entry in the interior of the original matrix $A$ to obtain matrix $C$.
4. We continue the process of condensation with the succeeding matrices, at each step dividing by the corresponding interior entries of the matrix obtained two steps earlier, until a $1 \times 1$ matrix is obtained, which gives $\det A$.
Note that Dodgson's method employs division of entries in the succeeding matrices beginning with the $(n-2)\times(n-2)$ matrix. This step may magnify computational error, especially for matrices with non-integral entries.
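A minimal sketch of the condensation steps in code, assuming exact arithmetic and that no zeros appear in any interior (so that the preliminary row operations of Step 1 are unnecessary); names are illustrative:

```python
# Dodgson's condensation: repeatedly condense 2 x 2 blocks; from the second
# condensation onward, divide by the interior of the matrix two steps back.
from fractions import Fraction

def det_by_condensation(a):
    prev = None                                    # matrix from two steps ago (supplies divisors)
    cur = [[Fraction(x) for x in row] for row in a]
    while len(cur) > 1:
        m = len(cur)
        nxt = [[cur[i][j] * cur[i+1][j+1] - cur[i][j+1] * cur[i+1][j]
                for j in range(m - 1)] for i in range(m - 1)]
        if prev is not None:
            # Divide each entry by the corresponding interior entry of the earlier matrix.
            nxt = [[nxt[i][j] / prev[i+1][j+1] for j in range(m - 1)] for i in range(m - 1)]
        prev, cur = cur, nxt
    return cur[0][0]

print(det_by_condensation([[2, 1, 5, 2], [2, 3, 2, 3], [1, -1, 4, 2], [1, 2, 4, 1]]))  # -24
```

This is where the division noted above enters; a zero in an interior would make the step impossible without first applying row operations.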
2. Development of Proposed Algorithm
2.1. Reduction by Cross-Multiplication
We start with an $n \times n$ matrix denoted by $A$ with real entries $a_{i,j}$, where the first entries $a_{1,1}, a_{2,1}, \ldots, a_{n,1}$ are non-zero real numbers, to be transformed into triangular form in order to apply Theorem 4. A more straightforward way of zeroing out the entries under $a_{1,1}$ in column 1, starting from the bottom row $R_n$, is through the replacement $R_n \to a_{n-1,1}R_n - a_{n,1}R_{n-1}$. By Theorems 1 and 3, the process introduces the factor $a_{n-1,1}$ into the determinant. Working our way up, we zero out the first entry of $R_{n-1}$ through $R_{n-1} \to a_{n-2,1}R_{n-1} - a_{n-1,1}R_{n-2}$, which then introduces the factor $a_{n-2,1}$ into the determinant. By repeating the process upward until $R_2$, we have introduced the factors $a_{1,1}, a_{2,1}, \ldots, a_{n-1,1}$ into the determinant of $A$.
At this stage, the transformed rows $R_2, R_3, \ldots, R_n$ yield an $(n-1)\times(n-1)$ submatrix $B_1$ by excluding the first column at the left, whose entries are now zero:
$$B_1 = \begin{bmatrix}
a_{1,1}a_{2,2}-a_{2,1}a_{1,2} & a_{1,1}a_{2,3}-a_{2,1}a_{1,3} & \cdots & a_{1,1}a_{2,n}-a_{2,1}a_{1,n}\\
a_{2,1}a_{3,2}-a_{3,1}a_{2,2} & a_{2,1}a_{3,3}-a_{3,1}a_{2,3} & \cdots & a_{2,1}a_{3,n}-a_{3,1}a_{2,n}\\
\vdots & \vdots & \ddots & \vdots\\
a_{n-1,1}a_{n,2}-a_{n,1}a_{n-1,2} & a_{n-1,1}a_{n,3}-a_{n,1}a_{n-1,3} & \cdots & a_{n-1,1}a_{n,n}-a_{n,1}a_{n-1,n}
\end{bmatrix}.$$
For the purpose of this algorithm, we do away with the usual matrix notation and write the rows of $B_1$ directly under the rows of $A$. We denote the entries of the succeeding matrices by $b^{(k)}_{i,j}$, where $k$ is the number of reductions performed, $i, j = 1, 2, \ldots, n-k$, and $b^{(0)}_{i,j} = a_{i,j}$. In determinant form, we can also write
$$b^{(k)}_{i,j} = \begin{vmatrix} b^{(k-1)}_{i,1} & b^{(k-1)}_{i,j+1} \\ b^{(k-1)}_{i+1,1} & b^{(k-1)}_{i+1,j+1} \end{vmatrix} = b^{(k-1)}_{i,1}b^{(k-1)}_{i+1,j+1} - b^{(k-1)}_{i+1,1}b^{(k-1)}_{i,j+1},$$
which reflects the symmetric (butterfly) movement of the algorithm, hence the term cross-multiplication.
The next stage is obtaining the rows for $B_2$, which is equivalent to zeroing out the entries under $b^{(1)}_{1,1}$ by following a process similar to that used in obtaining the rows for $B_1$. The process introduces the factors $b^{(1)}_{1,1}, b^{(1)}_{2,1}, \ldots, b^{(1)}_{n-2,1}$ into the determinant. Assuming no first entries of each $B_k$ are zero, continuing the process of reduction leads to a $2 \times 2$ submatrix $B_{n-2}$, with the corresponding factors introduced at each stage, and, eventually, to a $1 \times 1$ matrix $B_{n-1}$ with entry $b^{(n-1)}_{1,1}$. By taking the first row from each submatrix, we can reconstruct an $n \times n$ matrix in upper triangular form that is row equivalent to $A$:
$$\begin{bmatrix}
a_{1,1} & a_{1,2} & a_{1,3} & \cdots & a_{1,n}\\
0 & b^{(1)}_{1,1} & b^{(1)}_{1,2} & \cdots & b^{(1)}_{1,n-1}\\
0 & 0 & b^{(2)}_{1,1} & \cdots & b^{(2)}_{1,n-2}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & \cdots & b^{(n-1)}_{1,1}
\end{bmatrix}$$
Since this upper triangular matrix is row equivalent to $A$, and the factors $b^{(t)}_{i,1}$ ($t = 0, 1, \ldots, n-2$; $i = 1, 2, \ldots, n-t-1$) are introduced into the determinant of $A$ in the process of forming it, then
$$\prod_{t=0}^{n-2}\prod_{i=1}^{n-t-1} b^{(t)}_{i,1}\,\det(A) = a_{1,1}\,b^{(1)}_{1,1}\,b^{(2)}_{1,1}\cdots b^{(n-1)}_{1,1},$$
from which
$$\det(A) = \frac{b^{(n-1)}_{1,1}}{\displaystyle\prod_{t=0}^{n-3}\prod_{i=2}^{n-t-1} b^{(t)}_{i,1}}.\qquad(1)$$
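As a sketch of Equation (1) in code, assuming every submatrix has nonzero first entries (the special cases of Section 2.3 are treated separately below); names are illustrative, not the authors' implementation:

```python
# Cross-multiplication reduction: each stage forms one submatrix of cross products
# and accumulates only the "in-between" first entries as divisors (Equation (1)).
from fractions import Fraction

def det_by_cross_multiplication(a):
    cur = [[Fraction(x) for x in row] for row in a]
    divisor = Fraction(1)
    while len(cur) > 1:
        m = len(cur)
        for i in range(1, m - 1):
            divisor *= cur[i][0]          # first entries of rows 2 .. m-1
        cur = [[cur[i][0] * cur[i+1][j+1] - cur[i+1][0] * cur[i][j+1]
                for j in range(m - 1)] for i in range(m - 1)]
    return cur[0][0] / divisor

print(det_by_cross_multiplication([[2, 1, 5, 2], [2, 3, 2, 3],
                                   [1, -1, 4, 2], [1, 2, 4, 1]]))    # -24 (Illustration 1)
print(det_by_cross_multiplication([[2, 2, 1, 3, 1], [1, 3, -1, 1, 2], [1, 2, 4, -2, 3],
                                   [2, 2, 3, 2, 1], [1, 3, 2, 1, 5]]))  # 48 (Illustration 2)
```

Exact rational arithmetic is used here so that the single division at the end introduces no rounding.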
2.2. Alternate Proof of Cross-Multiplication Method
Suppose that $B$ is an $n \times n$ matrix in which all entries under the first element of the first row are zeros. By co-factor expansion about the first column, $\det(B) = b_{11}M_{11}$. If $n$ is large, then computing the determinant of the resulting $(n-1)\times(n-1)$ submatrix still requires a full co-factor expansion or elementary row operations. In deriving the proposed method, we first transform $A$ into such a matrix $B$ through elementary row operations; then, we apply the definition of co-factor expansion by expanding along the first column. We then repeat the process with the succeeding submatrices until a relatively small matrix is obtained whose determinant we can readily compute. By introducing the cross-multiplication, we form only one submatrix at each step instead of the $n$ submatrices formed in co-factor expansion.
Let $A$ be an $n \times n$ matrix with real entries, where $a_{1,1}, a_{2,1}, \ldots, a_{n,1}$ are nonzero. We zero out the entries under $a_{1,1}$ through the process illustrated in this paper: given two successive rows $R_i$ and $R_{i+1}$, with first entries $a_{i,1}$ and $a_{i+1,1}$, respectively, the replacement $R_{i+1} \to a_{i,1}R_{i+1} - a_{i+1,1}R_i$, applied from the bottom row upward, produces a matrix $B$ whose first column is zero below $a_{1,1}$. In the process, the factors $a_{1,1}, a_{2,1}, \ldots, a_{n-1,1}$ are introduced into $\det(A)$. Thus, by Theorem 1,
$$\det(B) = a_{1,1}a_{2,1}\cdots a_{n-1,1}\det(A).$$
By co-factor expansion about the first column,
$$\det(B) = a_{1,1}\det(B_1),$$
where $B_1 = \left[b^{(1)}_{i,j}\right]$ with $b^{(1)}_{i,j} = a_{i,1}a_{i+1,j+1} - a_{i+1,1}a_{i,j+1}$.
We repeat the process of reduction on $B_1$ to form the matrix $B_2$, for which
$$b^{(1)}_{1,1}b^{(1)}_{2,1}\cdots b^{(1)}_{n-2,1}\,\det(B_1) = b^{(1)}_{1,1}\det(B_2).$$
Continuing the reduction on the subsequent submatrices, we obtain the analogous relation at each stage. Working backwards through these relations leads to Equation (1).
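For instance, in the notation above and assuming $a_{1,1}, a_{2,1}, a_{3,1}$ are nonzero, the backward substitution for $n = 3$ reads as follows (a sketch consistent with the $3 \times 3$ case treated in the next section):

```latex
% Backward substitution for n = 3 with nonzero first entries.
\begin{align*}
a_{1,1}a_{2,1}\det(A) &= \det(B) = a_{1,1}\det(B_1)
   && \text{(factors $a_{1,1}$, $a_{2,1}$; co-factor expansion about column 1)}\\
\det(B_1) &= b^{(2)}_{1,1}
   && \text{($B_1$ is $2\times 2$)}\\
\det(A) &= \frac{b^{(2)}_{1,1}}{a_{2,1}},
   && \text{in agreement with Equation (1).}
\end{align*}
```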
2.3. Matrices with Zero First Entries
To generalize the determinant formula, we consider cases where there are zero first entries in any of the submatrices $B_k$. If the $i$th row of a submatrix has a zero first entry, the reduction involving that row is not possible, since this would introduce a zero factor into the divisor of Equation (1). We reduce the entire submatrix by first moving the rows with zero first entries to the bottom; then, we perform the reduction on the rows with nonzero first entries. The rows with zero first entries are treated as “standby rows” and are then carried over to the next submatrix in the same placement. By Theorem 2, interchanging the $i$th row and the last row changes the sign of the determinant, so we introduce the factor $(-1)^k$, where $k$ indicates the number of row interchanges.
If column interchanges result in a matrix with nonzero first entries, we incorporate the factor $(-1)^k$, where $k$ is the number of column interchanges, into Equation (1) by applying the Type 2 operation.
If only one row of a submatrix has a nonzero first entry, we place this row as the first row. No reduction is performed, since the first column already takes the reduced form. By Theorem 4, its first entry is a diagonal entry; thus, no factor is introduced into the determinant.
If all rows of a submatrix have zero first entries, then no reduction is performed, and $\det(A) = 0$.
Taking into account cases where there are zero first entries, the general determinant formula can now be expressed as
$$\det(A) = \frac{(-1)^k \displaystyle\prod_{t=0}^{n-1} b^{(t)}_{1,1}}{\displaystyle\prod_{t=0}^{n-2}\,\prod_{i=1}^{r-1} b^{(t)}_{i,1}},$$
where $b^{(0)}_{i,j} = a_{i,j}$, the numerator is the product of the diagonal entries of the reconstructed triangular matrix (the first entries of the first rows of the successive submatrices, after any rearrangement), $r$ = number of rows with nonzero first entries in the submatrix $B_t$ (so that the inner product runs over the first entries of the rows actually used in the reduction), and $k$ = number of row permutations.
If the first entries are all nonzero (after permuting columns or applying Type 3 operations), we can use Equation (1) directly and incorporate $(-1)^k$ if column permutations are applied, to get
$$\det(A) = \frac{(-1)^k\, b^{(n-1)}_{1,1}}{\displaystyle\prod_{t=0}^{n-3}\prod_{i=2}^{n-t-1} b^{(t)}_{i,1}},$$
where $k$ = number of column permutations.
It can be noted that the factors in the denominators are the in-between first entries of the successive submatrices.
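The adjustments above can also be traced in code. The sketch below follows the standby-row idea under the stated assumptions (rows with zero first entries are moved to the bottom and the sign of that permutation is tracked); the bookkeeping is one consistent choice, not necessarily the authors' exact implementation, and it reproduces the determinants of Illustrations 3 and 4 in Section 4:

```python
# Cross-multiplication with standby rows (Section 2.3). Illustrative sketch only.
from fractions import Fraction

def permutation_sign(order):
    """Sign of a permutation given as a list of original row indices."""
    inversions = sum(1 for i in range(len(order))
                     for j in range(i + 1, len(order)) if order[i] > order[j])
    return -1 if inversions % 2 else 1

def cross_det(rows):
    rows = [[Fraction(x) for x in row] for row in rows]
    if len(rows) == 1:
        return rows[0][0]
    pivot_idx = [i for i, row in enumerate(rows) if row[0] != 0]
    standby_idx = [i for i, row in enumerate(rows) if row[0] == 0]
    if not pivot_idx:
        return Fraction(0)                           # all first entries zero: determinant is 0
    sign = permutation_sign(pivot_idx + standby_idx)  # Type 2 operations: standby rows move down
    pivots = [rows[i] for i in pivot_idx]
    if len(pivots) == 1:                             # single pivot row: it supplies a diagonal entry
        return sign * pivots[0][0] * cross_det([rows[i][1:] for i in standby_idx])
    reduced = [[pivots[i][0] * pivots[i+1][j] - pivots[i+1][0] * pivots[i][j]
                for j in range(1, len(rows))] for i in range(len(pivots) - 1)]
    divisor = Fraction(1)
    for i in range(1, len(pivots) - 1):              # "in-between" first entries of the pivot rows
        divisor *= pivots[i][0]
    nxt = reduced + [rows[i][1:] for i in standby_idx]  # standby rows transfer, leading zero dropped
    return sign * cross_det(nxt) / divisor

print(cross_det([[2, 1, 5, 2], [0, 3, 2, 3], [1, -1, 4, 2], [0, 2, 4, 1]]))  # -55 (Illustration 3)
print(cross_det([[0, 2, 1, 3, 1], [0, 0, -2, 1, 1], [3, 3, 4, 1, 5],
                 [0, 2, 5, 2, 1], [0, 3, 2, 2, 5]]))                         # -99 (Illustration 4)
```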
3. Cross-Multiplication and Dodgson’s Condensation Method in 3 × 3 Matrices
We now show the equivalence of Dodgson’s condensation method and cross-multiplication.
Given a $3 \times 3$ matrix $A = [a_{ij}]$, where $a_{21}$ and $a_{22}$ are nonzero, by Dodgson's condensation method we take four adjacent entries at a time and compute their determinants to form a $2 \times 2$ matrix,
$$\begin{bmatrix} a_{11}a_{22}-a_{12}a_{21} & a_{12}a_{23}-a_{13}a_{22}\\ a_{21}a_{32}-a_{22}a_{31} & a_{22}a_{33}-a_{23}a_{32} \end{bmatrix}.$$
We then compute the determinant of the resulting matrix to obtain a $1 \times 1$ matrix and divide the resulting entry by the corresponding entry of the interior of $A$, which is $a_{22}$:
$$\frac{(a_{11}a_{22}-a_{12}a_{21})(a_{22}a_{33}-a_{23}a_{32}) - (a_{12}a_{23}-a_{13}a_{22})(a_{21}a_{32}-a_{22}a_{31})}{a_{22}}.$$
By Definition 1, this quotient equals $a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33}$. Thus, the condensation reproduces $\det(A)$. By exchanging columns 1 and 2 of $A$, we obtain a matrix $A'$, and it can be shown that $\det(A') = -\det(A)$.
By cross-multiplication,
$$\begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{bmatrix}
\;\longrightarrow\;
\begin{bmatrix} a_{11}a_{22}-a_{21}a_{12} & a_{11}a_{23}-a_{21}a_{13}\\ a_{21}a_{32}-a_{31}a_{22} & a_{21}a_{33}-a_{31}a_{23} \end{bmatrix}
\;\longrightarrow\;
b^{(2)}_{1,1},$$
where
$$b^{(2)}_{1,1} = (a_{11}a_{22}-a_{21}a_{12})(a_{21}a_{33}-a_{31}a_{23}) - (a_{21}a_{32}-a_{31}a_{22})(a_{11}a_{23}-a_{21}a_{13}).$$
Here, $a_{21}$ is the common factor; hence, $\det(A) = b^{(2)}_{1,1}/a_{21}$, which is consistent with Equation (1).
Similar to Dodgson's approach, each entry $b_{i,j}$ is the determinant of the $2 \times 2$ matrix formed by the first entries of two adjacent rows and the pair of corresponding entries in the $(j+1)$th column, that is, $b_{i,j} = a_{i,1}a_{i+1,j+1} - a_{i+1,1}a_{i,j+1}$. Here, we fix the first entries of two adjacent rows as the pivot in computing the determinants of the consecutive $2 \times 2$ matrices. By doing so, the in-between first entry is the factor introduced into the original determinant, which is analogous to the corresponding entry of the interior matrix in the condensation method. By fixing the pivot at the first entries, we can limit the number of divisions to be performed in preserving the determinant.
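As a quick numerical illustration of this equivalence (the example matrix and names are assumptions, not taken from the paper), both computations below return the same determinant:

```python
# 3 x 3 comparison: Dodgson's condensation (divide by the interior entry a22)
# versus cross-multiplication (divide by the first entry a21). Illustrative only.
from fractions import Fraction

A = [[2, 1, 5], [2, 3, 2], [1, -1, 4]]   # a22 = 3 and a21 = 2 are nonzero
a = [[Fraction(x) for x in row] for row in A]

# Dodgson: condense to 2 x 2, condense again, divide by the interior entry.
b = [[a[i][j] * a[i+1][j+1] - a[i][j+1] * a[i+1][j] for j in range(2)] for i in range(2)]
dodgson = (b[0][0] * b[1][1] - b[0][1] * b[1][0]) / a[1][1]

# Cross-multiplication: pivot on first entries of adjacent rows, divide by a21.
c = [[a[i][0] * a[i+1][j] - a[i+1][0] * a[i][j] for j in range(1, 3)] for i in range(2)]
cross = (c[0][0] * c[1][1] - c[0][1] * c[1][0]) / a[1][0]

print(dodgson, cross)  # both equal the determinant of A
```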
4. Verifications of Proposed Methods
We provide illustrative examples for the proposed methods, and we verify the results using co-factor expansion, Dodgson's condensation method, and Matlab (see Appendix A).
Illustration 1. All first entries are nonzero ($4 \times 4$ matrix). The solution is found by the cross-multiplication method,
| 2 | 1 | 5 | 2 | |
2 | 3 | 2 | 3 | |
1 | −1 | 4 | 2 | |
1 | 2 | 4 | 1 | |
| | 4 | −6 | 2 | |
| −5 | 6 | 1 | |
| 3 | 0 | −1 | |
| | | −6 | 14 | |
| | −18 | 2 | |
| | | | 240 | 240/(2)(1)(−5) |
det(A) | | | | −24 | |
by Dodgson’s condensation method,
A | 2 | 1 | 5 | 2 | |
2 | 3 | 2 | 3 | |
1 | −1 | 4 | 2 | |
1 | 2 | 4 | 1 | |
B | | 4 | −13 | 11 | |
| −5 | 14 | −8 | |
| 3 | −12 | −4 | |
C | | | −9 | −50 | |
| | 18 | −152 | |
D | | | −3 | −25 | Division by the interior entries in A |
| | −18 | −38 | |
E | | | | −24 | |
and by co-factor expansion.
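For an independent machine check of Illustration 1 (a stand-alone alternative to the Matlab verification in Appendix A, assuming NumPy is available):

```python
# Quick numerical check of Illustration 1 with NumPy. Illustrative only.
import numpy as np

A = np.array([[2, 1, 5, 2],
              [2, 3, 2, 3],
              [1, -1, 4, 2],
              [1, 2, 4, 1]])
print(round(float(np.linalg.det(A))))  # -24
```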
Illustration 2. All first entries are nonzero ($5 \times 5$ matrix). The determinant is computed by the cross-multiplication method,
| 2 | 2 | 1 | 3 | 1 | |
1 | 3 | −1 | 1 | 2 | |
1 | 2 | 4 | −2 | 3 | |
2 | 2 | 3 | 2 | 1 | |
1 | 3 | 2 | 1 | 5 | |
| | 4 | −3 | −1 | 3 | |
| −1 | 5 | −3 | 1 | |
| −2 | −5 | 6 | −5 | |
| 4 | 1 | 0 | 9 | |
| | | 17 | −13 | 7 | |
| | 15 | −12 | 7 | |
| | 18 | −24 | 2 | |
| | | | −9 | 14 | |
| | | −144 | −96 | |
| | | | | 2880 | 2880/(1)(1)(2)(−1)(−2)(15) |
det(B) | | | | | 48 | |
by Dodgson's condensation method,
A | 2 | 2 | 1 | 3 | 1 | |
1 | 3 | −1 | 1 | 2 | |
1 | 2 | 4 | −2 | 3 | |
2 | 2 | 3 | 2 | 1 | |
1 | 3 | 2 | 1 | 5 | |
B | | 4 | −5 | 4 | 5 | |
| −1 | 14 | −2 | 7 | |
| −2 | −2 | 14 | −8 | |
| 4 | −5 | −1 | 9 | |
C | | | 51 | −46 | 38 | |
| | 30 | 192 | −82 | |
| | 18 | 72 | 118 | |
D | | | 17 | 46 | 38 | Division by interior entries in A |
| | 15 | 48 | 41 |
| | 9 | 24 | 59 |
E | | | | 126 | 62 | |
| | | −72 | 1848 | |
F | | | | 9 | −31 | Division by interior entries in B |
| | | 36 | 132 |
G | | | | | 48 | |
and with verification of the results by co-factor expansion. We compute the determinant of each submatrix separately; thus, $\det(B) = 48$.
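The co-factor verification can also be scripted; a minimal, self-contained check of Illustration 2 (illustrative code, not the paper's) gives the same value:

```python
# Co-factor expansion check of Illustration 2. Illustrative only.
def det(a):
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det([row[:j] + row[j+1:] for row in a[1:]])
               for j in range(len(a)))

B = [[2, 2, 1, 3, 1],
     [1, 3, -1, 1, 2],
     [1, 2, 4, -2, 3],
     [2, 2, 3, 2, 1],
     [1, 3, 2, 1, 5]]
print(det(B))  # 48
```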
Illustration 3. Some first entries are zeros
| 2 | 1 | 5 | 2 | |
0 | 3 | 2 | 3 | |
1 | −1 | 4 | 2 | |
0 | 2 | 4 | 1 | |
| 2 | 1 | 5 | 2 | |
1 | −1 | 4 | 2 | Interchanged row (−1) |
0 | 3 | 2 | 3 | Standby row |
0 | 2 | 4 | 1 | Standby row |
| | −3 | 3 | 2 | |
| 3 | 2 | 3 | Transferred standby row |
| 2 | 4 | 1 | Transferred standby row |
| | | −15 | −15 | |
| | 8 | −3 | |
| | | | 165 | 165/(3)(−1) |
det(D) | | | | −55 | |
We apply a Type 2 column operation by interchanging column 1 and column 2,
| 2 | 1 | 5 | 2 | |
0 | 3 | 2 | 3 | |
1 | −1 | 4 | 2 | |
0 | 2 | 4 | 1 | |
| 1 | 2 | 5 | 2 | Column 1 and column 2 interchange (−1) |
3 | 0 | 2 | 3 | |
−1 | 1 | 4 | 2 | |
2 | 0 | 4 | 1 | |
| | −6 | −13 | −3 | |
| 3 | 14 | 9 | |
| −2 | −12 | −5 | |
| | | −45 | −45 | |
| | −8 | 3 | |
| | | | −495 | −495(−1)/(3)(−1)(3) |
det(D) | | | | −55 | −495(−1)/(3)(−1)(3) |
Dodgson’s condensation method,
A | 2 | 1 | 5 | 2 | |
0 | 3 | 2 | 3 | |
1 | −1 | 4 | 2 | |
0 | 2 | 4 | 1 | |
B | | 6 | −13 | 11 | |
| −3 | 14 | −8 | |
| 2 | −12 | −4 | |
C | | | 45 | −50 | |
| | 8 | −152 | |
D | | | 15 | −25 | Division by interior entries of A |
| | −8 | −38 |
E | | | | −55 | |
and by co-factor expansion, where we can expand about column 1.
Illustration 4. Only one row has a nonzero first entry
| 0 | 2 | 1 | 3 | 1 | |
0 | 0 | −2 | 1 | 1 | |
3 | 3 | 4 | 1 | 5 | |
0 | 2 | 5 | 2 | 1 | |
0 | 3 | 2 | 2 | 5 | |
| 3 | 3 | 4 | 1 | 5 | Interchanged row (−1) |
0 | 0 | −2 | 1 | 1 | Standby row |
0 | 2 | 1 | 3 | 1 | Interchanged row |
0 | 2 | 5 | 2 | 1 | Standby row |
0 | 3 | 2 | 2 | 5 | Standby row |
| | 0 | −2 | 1 | 1 | Transferred/Standby row |
| 2 | 1 | 3 | 1 | Transferred row |
| 2 | 5 | 2 | 1 | Transferred row |
| 3 | 2 | 2 | 5 | Transferred row |
| | | −2 | 1 | 1 | Transferred row |
| | 8 | −2 | 0 | |
| | −11 | −2 | 7 | |
| | | | −4 | −8 | |
| | | −38 | 56 | |
| | | | | 528 | 528(3)/(2)(8)(−1) |
det(A) | | | | | −99 | |
We interchange column 1 and column 3, and we proceed with the usual cross-multiplication.
| 0 | 2 | 1 | 3 | 1 | |
0 | 0 | −2 | 1 | 1 | |
3 | 3 | 4 | 1 | 5 | |
0 | 2 | 5 | 2 | 1 | |
0 | 3 | 2 | 2 | 5 | |
| 1 | 2 | 0 | 3 | 1 | Column 1 and column 3 exchange (−1) |
−2 | 0 | 0 | 1 | 1 | |
4 | 3 | 3 | 1 | 5 | |
5 | 2 | 0 | 2 | 1 | |
2 | 3 | 0 | 2 | 5 | |
| | 4 | 0 | 7 | 3 | |
| −6 | −6 | −6 | −14 | |
| −7 | −15 | 3 | −21 | |
| 11 | 0 | 6 | 23 | |
| | | −24 | 18 | −38 | |
| | 48 | −60 | 28 | |
| | 165 | −75 | 70 | |
| | | | 576 | 1152 | |
| | | 6300 | −1260 | |
| | | | | −7,983,360 | −7,983,360(−1)/(−2)(4)(5)(−6)(−7)(48) |
det(E) | | | | | −99 | |
By Dodgson's condensation, we first move row 2 to the bottom (an odd number of row interchanges), so that the interior contains no zeros.
A | 0 | 2 | 1 | 3 | 1 | |
3 | 3 | 4 | 1 | 5 | |
0 | 2 | 5 | 2 | 1 | |
0 | 3 | 2 | 2 | 5 | |
0 | 0 | −2 | 1 | 1 | |
B | | −6 | 5 | −11 | 14 | |
| 6 | 7 | 3 | −9 | |
| 0 | −11 | 6 | 8 | |
| 0 | −6 | 6 | −3 | |
C | | | −72 | 92 | 57 | |
| | −66 | 75 | 78 | |
| | 0 | −30 | −66 | |
D | | | −24 | 23 | 57 | Division by inner entries in A |
| | −33 | 15 | 39 |
| | 0 | −15 | −33 |
E | | | | 399 | 42 | |
| | | 495 | 90 | |
F | | | | 57 | 14 | Division by inner entries in B |
| | | −45 | 15 |
G | | | | | 99 | |
| | | | | −99 | Multiplication of −1 by an exchange of row |
By co-factor expansion, we can expand about column 1, where the only nonzero entry is $a_{31} = 3$, so that $\det(E) = 3\,C_{31} = 3\,M_{31}$. Then, we compute the $4 \times 4$ minor $M_{31}$ by another co-factor expansion.
The above illustrations suggest the comparisons of the cross-multiplication method, Dodgson's condensation, and co-factor expansion summarized in Table 1. Operation and matrix counts are based on cases where there are no zero first entries in the rows for cross-multiplication, no zeros in the interior matrix for Dodgson's condensation, and none in the expanded row or column for co-factor expansion.
5. Conclusions
The cross-multiplication method of computing the determinants of $n \times n$ matrices is a strategic use of elementary row operations in reducing a matrix to upper triangular form, using the nonzero first entries of adjacent rows as row multipliers. This technique gives an expression for each entry of the reduced matrix,
$$b^{(k)}_{i,j} = b^{(k-1)}_{i,1}b^{(k-1)}_{i+1,j+1} - b^{(k-1)}_{i+1,1}b^{(k-1)}_{i,j+1},$$
where $k$ = number of reductions performed, $i, j = 1, 2, \ldots, n-k$, and $b^{(0)}_{i,j} = a_{i,j}$, or, in determinant form,
$$b^{(k)}_{i,j} = \begin{vmatrix} b^{(k-1)}_{i,1} & b^{(k-1)}_{i,j+1} \\ b^{(k-1)}_{i+1,1} & b^{(k-1)}_{i+1,j+1} \end{vmatrix}.$$
This approach is consistent with Dodgson's condensation method, taking the first entries of adjacent rows as pivot entries in the computation of the determinants of the adjacent $2 \times 2$ submatrices. The algorithm proceeds by computing each entry to produce a submatrix that is one row and one column smaller than the preceding submatrix, until a $1 \times 1$ matrix with entry $b^{(n-1)}_{1,1}$ is obtained.
When some first entries in the rows are zero, any of the following adjustments can be performed: applying a Type 2 row operation, where rows with zero first entries are transferred to the bottom of the submatrix and then carried over to the next submatrix; applying a Type 2 column operation, interchanging the first column with another column having nonzero entries; or applying Type 3 row (column) operations to remove the zero first entries. The determinant is then computed for the following cases.
1. The general determinant formula is given by
$$\det(A) = \frac{(-1)^k \displaystyle\prod_{t=0}^{n-1} b^{(t)}_{1,1}}{\displaystyle\prod_{t=0}^{n-2}\,\prod_{i=1}^{r-1} b^{(t)}_{i,1}},$$
where $b^{(0)}_{i,j} = a_{i,j}$, the numerator is the product of the diagonal entries of the reconstructed triangular matrix, $r$ = number of rows with nonzero first entries in $B_t$, and $k$ = number of row permutations.
2. If the first entries are nonzero, the determinant formula is given by
$$\det(A) = \frac{(-1)^k\, b^{(n-1)}_{1,1}}{\displaystyle\prod_{t=0}^{n-3}\,\prod_{i=2}^{n-t-1} b^{(t)}_{i,1}},$$
where $k$ = number of column permutations, if applicable.
3. When all rows of a submatrix have zero first entries, then $\det(A) = 0$.
Compared to Dodgson’s condensation and co-factor expansion, the cross-multiplication method is the most efficient in terms of the number of matrices generated and the operations employed.
Both the cross-multiplication and Dodgson's condensation methods apply Type 2 or Type 3 row (column) operations when there are zeros in the first entries or in the interior matrix. These two methods have a symmetric algorithmic movement, which makes the execution of operations relatively easier and faster, and at $n = 3$, both methods are equivalent. Unlike Dodgson's condensation, the cross-multiplication method can be generalized through formulas.