The prior subsection defines a function to be a determinant if it
satisfies four conditions and
shows that there is at most one determinant function for each $n$.
What is left is to show that for each $n$ such a function exists.
How could such a function not exist?
After all, we have done computations that start with a square matrix,
follow the conditions, and end with a number.
The difficulty is that, as far as we know,
the computation might not give a well-defined result.
To illustrate this possibility,
suppose that we were to change the second condition in the definition of
determinant to be that the value of a determinant does not change on
a row swap.
By Remark 2.2 we know that this conflicts with the
first and third conditions.
Here is an instance of the conflict: here are
two Gauss' method reductions of the same matrix, the first without any row swap
$$\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \xrightarrow{-3\rho_1+\rho_2} \begin{pmatrix} 1 & 2 \\ 0 & -2 \end{pmatrix}$$
and the second with a swap.
$$\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \xrightarrow{\rho_1\leftrightarrow\rho_2} \begin{pmatrix} 3 & 4 \\ 1 & 2 \end{pmatrix} \xrightarrow{-(1/3)\rho_1+\rho_2} \begin{pmatrix} 3 & 4 \\ 0 & 2/3 \end{pmatrix}$$
Following Definition 2.1 gives that both
calculations yield the determinant $-2$, since in the second one
we keep track of the fact that the row swap changes the sign of the result
of multiplying down the diagonal.
But if we follow the supposition and
change the second condition then the two calculations yield
different values, $-2$ and $2$.
That is, under the supposition the outcome would not be well-defined; no
function exists that satisfies the changed second condition
along with the other three.
Of course, observing that Definition 2.1 does the right thing
in this one instance is not enough; what we will do in the rest of this
section is to show that there is never a conflict.
The natural way to try this would be
to define the determinant function with: "The value of the
function is the result of doing Gauss' method,
keeping track of row swaps, and finishing by multiplying down the diagonal".
(Since Gauss' method allows for some variation, such as a
choice of which row to use when swapping,
we would have to fix an explicit algorithm.)
Then we would be done if we verified that
this way of computing the determinant satisfies the four properties.
For instance, if $T$ and $\hat{T}$
are related by a row swap then we would need to show that this
algorithm returns determinants that are negatives of each other.
However, how to verify this is not evident.
So the development below will not proceed in this way.
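Still, to make the supposed algorithm concrete, here is a minimal sketch in Python (an illustration of the approach just described, not the development that this section follows): it runs Gauss's method, counts the row swaps, and finishes by multiplying down the diagonal.

```python
def gauss_det(mat):
    """Determinant by Gauss's method: reduce to echelon form while
    counting row swaps, then multiply down the diagonal.
    A sketch only; `mat` is a list of rows of numbers."""
    a = [row[:] for row in mat]               # work on a copy
    n = len(a)
    sign = 1
    for col in range(n):
        # Find a row at or below this one with a nonzero entry in the column.
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return 0                          # singular matrix
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign                      # each row swap changes the sign
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    det = sign
    for i in range(n):
        det *= a[i][i]
    return det

print(gauss_det([[1, 2], [3, 4]]))            # -2.0, matching the reductions above
```

Verifying that such a procedure satisfies the four conditions no matter which choices it makes along the way is exactly the sticking point.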
Instead, in this subsection we will define a different way to compute
the value of a determinant, a formula, and we will use this way to
prove that the conditions are satisfied.
The formula that we shall use is based on an insight gotten from
property (3) of the definition of determinants.
This property shows that determinants are not linear.
- Example 3.1
For this matrix $A$, $\det(2A) \neq 2 \cdot \det(A)$.
$$A = \begin{pmatrix} 2 & 1 \\ -1 & 3 \end{pmatrix}$$
Instead, the scalar comes out of each of the two rows.
$$\begin{vmatrix} 4 & 2 \\ -2 & 6 \end{vmatrix} = 2 \cdot \begin{vmatrix} 2 & 1 \\ -2 & 6 \end{vmatrix} = 4 \cdot \begin{vmatrix} 2 & 1 \\ -1 & 3 \end{vmatrix}$$
Since scalars come out a row at a time, we might guess that
determinants are linear a row at a time.
- Definition 3.2
Let $V$ be a vector space.
A map $f:V^n \to \mathbb{R}$ is
multilinear
if
- $f(\vec{\rho}_1,\dots,\vec{v}+\vec{w},\dots,\vec{\rho}_n) = f(\vec{\rho}_1,\dots,\vec{v},\dots,\vec{\rho}_n) + f(\vec{\rho}_1,\dots,\vec{w},\dots,\vec{\rho}_n)$
- $f(\vec{\rho}_1,\dots,k\vec{v},\dots,\vec{\rho}_n) = k \cdot f(\vec{\rho}_1,\dots,\vec{v},\dots,\vec{\rho}_n)$

for $\vec{v},\vec{w} \in V$ and $k \in \mathbb{R}$.
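As a quick numeric illustration (a sketch that assumes the familiar $2 \times 2$ formula $ad - bc$, which Example 3.10 below derives), we can spot-check both conditions on the rows of a $2 \times 2$ matrix.

```python
# Spot-check the two multilinearity conditions for the 2x2 determinant,
# computed with the familiar formula ad - bc (an illustration, not a proof).
def det2(r1, r2):
    return r1[0] * r2[1] - r1[1] * r2[0]

v, w, rho = (1, 2), (3, -1), (4, 5)
k = 7

# Condition (1): additivity in one row, the other row held fixed.
vw = (v[0] + w[0], v[1] + w[1])
assert det2(vw, rho) == det2(v, rho) + det2(w, rho)

# Condition (2): a scalar multiplying one row comes out of the determinant.
kv = (k * v[0], k * v[1])
assert det2(kv, rho) == k * det2(v, rho)
```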
- Lemma 3.3
Determinants are multilinear.
- Proof
The definition of determinants gives property (2)
(Lemma 2.3 following that definition
covers the $k=0$ case) so we need only check property (1).
$$\det(\vec{\rho}_1,\dots,\vec{v}+\vec{w},\dots,\vec{\rho}_n) = \det(\vec{\rho}_1,\dots,\vec{v},\dots,\vec{\rho}_n) + \det(\vec{\rho}_1,\dots,\vec{w},\dots,\vec{\rho}_n)$$
If the set $\{\vec{\rho}_1,\dots,\vec{\rho}_{i-1},\vec{\rho}_{i+1},\dots,\vec{\rho}_n\}$ of the other rows
is linearly dependent then all three matrices are singular and so all
three determinants are zero and the equality is trivial.
Therefore assume that the set is linearly independent.
This set of $n$-wide row vectors has $n-1$ members, so we can make a basis by
adding one more vector: $\langle \vec{\rho}_1,\dots,\vec{\rho}_{i-1},\vec{\beta},\vec{\rho}_{i+1},\dots,\vec{\rho}_n \rangle$.
Express $\vec{v}$ and $\vec{w}$ with respect to this basis
$$\vec{v} = v_1\vec{\rho}_1 + \dots + v_{i-1}\vec{\rho}_{i-1} + v_i\vec{\beta} + v_{i+1}\vec{\rho}_{i+1} + \dots + v_n\vec{\rho}_n$$
$$\vec{w} = w_1\vec{\rho}_1 + \dots + w_{i-1}\vec{\rho}_{i-1} + w_i\vec{\beta} + w_{i+1}\vec{\rho}_{i+1} + \dots + w_n\vec{\rho}_n$$
giving this.
$$\vec{v} + \vec{w} = (v_1+w_1)\vec{\rho}_1 + \dots + (v_i+w_i)\vec{\beta} + \dots + (v_n+w_n)\vec{\rho}_n$$
By the definition of determinant, the value of
$\det(\vec{\rho}_1,\dots,\vec{v}+\vec{w},\dots,\vec{\rho}_n)$ is unchanged by the pivot operation of
adding $-(v_1+w_1)\vec{\rho}_1$ to $\vec{v}+\vec{w}$.
$$\vec{v} + \vec{w} - (v_1+w_1)\vec{\rho}_1 = (v_2+w_2)\vec{\rho}_2 + \dots + (v_i+w_i)\vec{\beta} + \dots + (v_n+w_n)\vec{\rho}_n$$
Then, to the result, we can add $-(v_2+w_2)\vec{\rho}_2$, etc.
Thus
$$\begin{aligned}
\det(\vec{\rho}_1,\dots,\vec{v}+\vec{w},\dots,\vec{\rho}_n)
&= \det(\vec{\rho}_1,\dots,(v_i+w_i)\cdot\vec{\beta},\dots,\vec{\rho}_n) \\
&= (v_i+w_i)\cdot\det(\vec{\rho}_1,\dots,\vec{\beta},\dots,\vec{\rho}_n) \\
&= v_i\cdot\det(\vec{\rho}_1,\dots,\vec{\beta},\dots,\vec{\rho}_n) + w_i\cdot\det(\vec{\rho}_1,\dots,\vec{\beta},\dots,\vec{\rho}_n)
\end{aligned}$$
(using (2) for the second equality). To finish, bring $v_i$ and $w_i$
back inside, in front of $\vec{\beta}$,
and use pivoting again, this time to reconstruct the expressions of
$\vec{v}$ and $\vec{w}$
in terms of the basis, e.g., start with the pivot operations of adding
$v_1\vec{\rho}_1$ to $v_i\vec{\beta}$ and $w_1\vec{\rho}_1$ to $w_i\vec{\beta}$, etc.
Multilinearity allows us to expand a determinant into a sum of
determinants, each of which involves a simple matrix.
- Example 3.4
We can use multilinearity to split this determinant into two,
first breaking up the first row
$$\begin{vmatrix} 2 & 1 \\ 4 & 3 \end{vmatrix} = \begin{vmatrix} 2 & 0 \\ 4 & 3 \end{vmatrix} + \begin{vmatrix} 0 & 1 \\ 4 & 3 \end{vmatrix}$$
and then separating each of those two, breaking along the second rows.
$$\begin{vmatrix} 2 & 1 \\ 4 & 3 \end{vmatrix} = \begin{vmatrix} 2 & 0 \\ 4 & 0 \end{vmatrix} + \begin{vmatrix} 2 & 0 \\ 0 & 3 \end{vmatrix} + \begin{vmatrix} 0 & 1 \\ 4 & 0 \end{vmatrix} + \begin{vmatrix} 0 & 1 \\ 0 & 3 \end{vmatrix}$$
We are left with four determinants such that in each row of each matrix there is a single entry from the original matrix.
- Example 3.5
In the same way,
a $3 \times 3$ determinant separates into a sum of
many simpler determinants.
We start by splitting along the first row, producing three determinants
(the zero in the $1,3$ position is underlined to set it off visually
from the zeroes that appear in the splitting).
$$\begin{vmatrix} 2 & 1 & -1 \\ 4 & 3 & \underline{0} \\ 2 & 1 & 5 \end{vmatrix} = \begin{vmatrix} 2 & 0 & 0 \\ 4 & 3 & \underline{0} \\ 2 & 1 & 5 \end{vmatrix} + \begin{vmatrix} 0 & 1 & 0 \\ 4 & 3 & \underline{0} \\ 2 & 1 & 5 \end{vmatrix} + \begin{vmatrix} 0 & 0 & -1 \\ 4 & 3 & \underline{0} \\ 2 & 1 & 5 \end{vmatrix}$$
Each of these three will itself split in three along the second row.
Each of the resulting nine splits in three along the third row, resulting
in twenty-seven determinants
such that each row contains a single entry from the starting matrix.
So an $n \times n$ determinant expands into a
sum of $n^n$ determinants where
each row of each summand contains a single entry from the
starting matrix.
However, many of these summand determinants are zero.
- Example 3.6
In each of these three matrices from the above expansion,
two of the rows have their entry from the starting matrix in the same column,
e.g.,
in the first matrix, the $2$ and the $4$
both come from the first column.
$$\begin{vmatrix} 2 & 0 & 0 \\ 4 & 0 & 0 \\ 0 & 1 & 0 \end{vmatrix} \qquad \begin{vmatrix} 0 & 0 & -1 \\ 0 & 3 & 0 \\ 0 & 0 & 5 \end{vmatrix} \qquad \begin{vmatrix} 0 & 1 & 0 \\ 0 & 0 & \underline{0} \\ 0 & 0 & 5 \end{vmatrix}$$
Any such matrix is singular because in each,
one row is a multiple of the other (or is a zero row).
Thus, any such determinant is zero, by Lemma 2.3.
Therefore the above expansion of the $3 \times 3$ determinant into
the sum of the twenty-seven determinants simplifies to the sum of these six.
$$\begin{vmatrix} 2 & 1 & -1 \\ 4 & 3 & 0 \\ 2 & 1 & 5 \end{vmatrix} = \begin{vmatrix} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 5 \end{vmatrix} + \begin{vmatrix} 2 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \end{vmatrix} + \begin{vmatrix} 0 & 1 & 0 \\ 4 & 0 & 0 \\ 0 & 0 & 5 \end{vmatrix} + \begin{vmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 2 & 0 & 0 \end{vmatrix} + \begin{vmatrix} 0 & 0 & -1 \\ 4 & 0 & 0 \\ 0 & 1 & 0 \end{vmatrix} + \begin{vmatrix} 0 & 0 & -1 \\ 0 & 3 & 0 \\ 2 & 0 & 0 \end{vmatrix}$$
We can bring out the scalars.
$$= (2)(3)(5)\begin{vmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{vmatrix} + (2)(0)(1)\begin{vmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{vmatrix} + (1)(4)(5)\begin{vmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{vmatrix} + (1)(0)(2)\begin{vmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{vmatrix} + (-1)(4)(1)\begin{vmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{vmatrix} + (-1)(3)(2)\begin{vmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{vmatrix}$$
To finish, we evaluate those six determinants by row-swapping them
to the identity matrix,
keeping track of the resulting sign changes.
$$= 30\cdot(+1) + 0\cdot(-1) + 20\cdot(-1) + 0\cdot(+1) + (-4)\cdot(+1) + (-6)\cdot(-1) = 12$$
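As a sanity check on the arithmetic (a sketch; numpy is assumed to be available, and its built-in determinant stands in for the function this section is constructing), the six signed products do reproduce the determinant.

```python
# Numeric check of Example 3.6: the six surviving signed products sum to 12.
import numpy as np

T = np.array([[2, 1, -1],
              [4, 3, 0],
              [2, 1, 5]])

# (product of the chosen entries, determinant of the permutation matrix),
# in the order the six terms appear above.
terms = [(2 * 3 * 5, +1), (2 * 0 * 1, -1), (1 * 4 * 5, -1),
         (1 * 0 * 2, +1), (-1 * 4 * 1, +1), (-1 * 3 * 2, -1)]
print(sum(p * s for p, s in terms))   # 12
print(round(np.linalg.det(T)))        # 12, in agreement
```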
That example illustrates the key idea.
We've applied multilinearity to a $3 \times 3$
determinant to get $3^3 = 27$
separate determinants, each with one distinguished entry per row.
We can drop most of these new determinants because the matrices are singular,
with one row a multiple of another.
We are left with those one-entry-per-row determinants that
also have only one entry per column (one entry from the original matrix,
that is).
And, since we can factor scalars out, we can further reduce to
only considering determinants of
one-entry-per-row-and-column matrices where the entries are ones.
These are permutation matrices.
Thus, the determinant can be computed in this
three-step way:
(Step 1) for each permutation matrix, multiply together
the entries from the original matrix
where that permutation matrix has ones,
(Step 2) multiply that by the determinant of the permutation matrix,
and (Step 3) do that for all permutation matrices
and sum the results together.
To state this as a formula, we introduce a notation for permutation matrices.
Let $\iota_j$ be the row vector that is all zeroes except for a one in its
$j$-th entry, so that the four-wide $\iota_2$
is $\begin{pmatrix} 0 & 1 & 0 & 0 \end{pmatrix}$.
We can construct permutation matrices by
permuting (that is, scrambling) the numbers
$1$, $2$, ..., $n$,
and using them as indices on the $\iota$'s.
For instance, to get a $4 \times 4$
permutation matrix, we can scramble the numbers from
$1$ to $4$ into
this sequence $\langle 3,2,1,4 \rangle$
and take the corresponding row vector $\iota$'s.
$$\begin{pmatrix} \iota_3 \\ \iota_2 \\ \iota_1 \\ \iota_4 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
- Definition 3.7
An $n$-permutation is a sequence consisting of an arrangement of the numbers $1$, $2$, ..., $n$.
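In code, the construction is direct (a sketch; the helper names are mine, not the book's): build the matrix whose $i$-th row is $\iota_{\phi(i)}$, and evaluate its determinant by counting the row swaps needed to sort the sequence.

```python
# Build a permutation matrix from a sequence phi, and compute its
# determinant by counting row swaps (a sketch; names are illustrative).
def perm_matrix(phi):
    n = len(phi)
    return [[1 if c == phi[r] - 1 else 0 for c in range(n)]
            for r in range(n)]

def perm_det(phi):
    seq, sign = list(phi), 1
    for i in range(len(seq)):
        while seq[i] != i + 1:            # each swap changes the sign
            j = seq[i] - 1
            seq[i], seq[j] = seq[j], seq[i]
            sign = -sign
    return sign

for row in perm_matrix((3, 2, 1, 4)):     # the <3,2,1,4> example above
    print(row)
print(perm_det((3, 2, 1, 4)))             # -1: one swap sorts the sequence
```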
- Example 3.8
The $2$-permutations are
$\phi_1 = \langle 1,2 \rangle$ and $\phi_2 = \langle 2,1 \rangle$.
These are the associated permutation matrices.
$$P_{\phi_1} = \begin{pmatrix} \iota_1 \\ \iota_2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \qquad P_{\phi_2} = \begin{pmatrix} \iota_2 \\ \iota_1 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$
We sometimes write permutations as functions, e.g.,
$\phi_2(1) = 2$, and $\phi_2(2) = 1$.
Then the rows of $P_{\phi_2}$
are $\iota_{\phi_2(1)} = \iota_2$ and $\iota_{\phi_2(2)} = \iota_1$.
The $3$-permutations are
$\phi_1 = \langle 1,2,3 \rangle$,
$\phi_2 = \langle 1,3,2 \rangle$,
$\phi_3 = \langle 2,1,3 \rangle$,
$\phi_4 = \langle 2,3,1 \rangle$,
$\phi_5 = \langle 3,1,2 \rangle$, and
$\phi_6 = \langle 3,2,1 \rangle$.
Here are two of the associated permutation matrices.
$$P_{\phi_2} = \begin{pmatrix} \iota_1 \\ \iota_3 \\ \iota_2 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \qquad P_{\phi_5} = \begin{pmatrix} \iota_3 \\ \iota_1 \\ \iota_2 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$$
For instance, the rows of $P_{\phi_5}$ are
$\iota_{\phi_5(1)} = \iota_3$,
$\iota_{\phi_5(2)} = \iota_1$, and
$\iota_{\phi_5(3)} = \iota_2$.
- Definition 3.9
The permutation expansion for determinants is
$$\begin{vmatrix} t_{1,1} & t_{1,2} & \ldots & t_{1,n} \\ t_{2,1} & t_{2,2} & \ldots & t_{2,n} \\ & \vdots \\ t_{n,1} & t_{n,2} & \ldots & t_{n,n} \end{vmatrix} = \begin{array}{l} t_{1,\phi_1(1)} t_{2,\phi_1(2)} \cdots t_{n,\phi_1(n)} \left|P_{\phi_1}\right| \\ \quad + t_{1,\phi_2(1)} t_{2,\phi_2(2)} \cdots t_{n,\phi_2(n)} \left|P_{\phi_2}\right| \\ \quad \vdots \\ \quad + t_{1,\phi_k(1)} t_{2,\phi_k(2)} \cdots t_{n,\phi_k(n)} \left|P_{\phi_k}\right| \end{array}$$
where $\phi_1, \dots, \phi_k$ are all of the $n$-permutations.
This formula is often written in summation notation
$$|T| = \sum_{\text{permutations }\phi} t_{1,\phi(1)} t_{2,\phi(2)} \cdots t_{n,\phi(n)} \left|P_\phi\right|$$
read aloud as
"the sum, over all permutations $\phi$, of terms having the form
$t_{1,\phi(1)} t_{2,\phi(2)} \cdots t_{n,\phi(n)} \left|P_\phi\right|$".
This phrase is just a restating of the three-step process:
(Step 1) for each permutation matrix, compute
$t_{1,\phi(1)} t_{2,\phi(2)} \cdots t_{n,\phi(n)}$,
(Step 2) multiply that by $\left|P_\phi\right|$,
and (Step 3) sum all such terms together.
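For concreteness, here is the three-step process written out as a minimal sketch (itertools and the helper names are mine; the swap count plays the role of $\left|P_\phi\right|$).

```python
# Permutation expansion of the determinant (a sketch, not this book's
# development): sum, over all n-permutations, of the entry products
# weighted by the determinants of the permutation matrices.
from itertools import permutations

def perm_det(phi):
    """|P_phi|, computed by counting the row swaps that sort phi."""
    seq, sign = list(phi), 1
    for i in range(len(seq)):
        while seq[i] != i + 1:
            j = seq[i] - 1
            seq[i], seq[j] = seq[j], seq[i]
            sign = -sign
    return sign

def expansion_det(t):
    n = len(t)
    total = 0
    for phi in permutations(range(1, n + 1)):
        prod = 1
        for i in range(n):               # Step 1: t_{1,phi(1)} ... t_{n,phi(n)}
            prod *= t[i][phi[i] - 1]
        total += prod * perm_det(phi)    # Step 2; the loop does Step 3
    return total

print(expansion_det([[2, 1, -1], [4, 3, 0], [2, 1, 5]]))   # 12, as in Example 3.6
```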
- Example 3.10
The familiar formula for the determinant of a $2 \times 2$
matrix can be derived in this way.
$$\begin{vmatrix} t_{1,1} & t_{1,2} \\ t_{2,1} & t_{2,2} \end{vmatrix} = t_{1,1}t_{2,2}\cdot\left|P_{\phi_1}\right| + t_{1,2}t_{2,1}\cdot\left|P_{\phi_2}\right| = t_{1,1}t_{2,2}\cdot\begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix} + t_{1,2}t_{2,1}\cdot\begin{vmatrix} 0 & 1 \\ 1 & 0 \end{vmatrix} = t_{1,1}t_{2,2} - t_{1,2}t_{2,1}$$
(the second permutation matrix takes one row swap to pass to the
identity).
Similarly, the formula for the determinant of a $3 \times 3$
matrix is this.
$$\begin{vmatrix} t_{1,1} & t_{1,2} & t_{1,3} \\ t_{2,1} & t_{2,2} & t_{2,3} \\ t_{3,1} & t_{3,2} & t_{3,3} \end{vmatrix} = t_{1,1}t_{2,2}t_{3,3}\left|P_{\phi_1}\right| + t_{1,1}t_{2,3}t_{3,2}\left|P_{\phi_2}\right| + t_{1,2}t_{2,1}t_{3,3}\left|P_{\phi_3}\right| + t_{1,2}t_{2,3}t_{3,1}\left|P_{\phi_4}\right| + t_{1,3}t_{2,1}t_{3,2}\left|P_{\phi_5}\right| + t_{1,3}t_{2,2}t_{3,1}\left|P_{\phi_6}\right| = t_{1,1}t_{2,2}t_{3,3} - t_{1,1}t_{2,3}t_{3,2} - t_{1,2}t_{2,1}t_{3,3} + t_{1,2}t_{2,3}t_{3,1} + t_{1,3}t_{2,1}t_{3,2} - t_{1,3}t_{2,2}t_{3,1}$$
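As a cross-check of the $2 \times 2$ derivation (a sketch; sympy is assumed to be available), a computer algebra system expands to the same formula.

```python
# Symbolic check that the 2x2 determinant matches t11*t22 - t12*t21.
import sympy

t11, t12, t21, t22 = sympy.symbols("t11 t12 t21 t22")
T = sympy.Matrix([[t11, t12], [t21, t22]])
assert sympy.expand(T.det() - (t11 * t22 - t12 * t21)) == 0
print(T.det())   # t11*t22 - t12*t21
```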
Computing a determinant by permutation expansion usually takes longer than
Gauss' method.
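One way to quantify that (a rough sketch): the expansion has $n!$ summands, while Gauss's method needs on the order of $n^3$ arithmetic operations.

```python
# n! summands in the permutation expansion versus roughly n^3 operations
# for Gauss's method (an illustration of growth rates only).
from math import factorial

for n in (2, 5, 10, 15):
    print(n, factorial(n), n ** 3)
# At n = 15 the expansion already has over a trillion summands.
```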
However, here we are not trying to do the computation efficiently;
we are instead
trying to give a determinant formula that we can prove to be well-defined.
While the permutation expansion is impractical for computations,
it is useful in proofs.
In particular, we can use it for the result that we are after.
- Theorem 3.11
For each $n$ there is an $n \times n$ determinant function.
The proof is deferred to the following subsection.
Also there is the proof of the next result (they share some features).
- Theorem 3.12
The determinant of a matrix equals the determinant of its transpose.
The consequence of this theorem is that,
while we have so far stated results in terms of rows
(e.g., determinants are multilinear in their rows, row swaps change the
sign, etc.),
all of the results also hold in terms of columns.
The final result gives examples.
- Corollary 3.13
A matrix with two equal columns is singular. Column swaps change the sign of a determinant. Determinants are multilinear in their columns.
- Proof
For the first statement, transposing the matrix results in a matrix with the same determinant, and with two equal rows, and hence a determinant of zero. The other two are proved in the same way.
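A quick numeric illustration of the corollary (a sketch; numpy is assumed to be available):

```python
# Two equal columns give determinant zero; a column swap changes the sign.
import numpy as np

A = np.array([[1.0, 2.0, 2.0],
              [3.0, 5.0, 5.0],
              [4.0, 7.0, 7.0]])           # second and third columns equal
print(round(np.linalg.det(A), 10))        # 0.0

B = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(np.linalg.det(B))                   # approximately -2
print(np.linalg.det(B[:, ::-1]))          # approximately +2 after the swap
```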
We finish with a summary
(although the final subsection contains the unfinished business of
proving the two theorems).
Determinant functions exist, are unique, and we know how to compute them.
As for what determinants are about, perhaps these lines
(Kemp 1982) help make it memorable.
Determinant none,
Solution: lots or none.
Determinant some,
Solution: just one.
Exercises
These summarize the notation used in this book for the
$2$- and $3$-permutations.
$$\begin{array}{c|cc} i & 1 & 2 \\ \hline \phi_1(i) & 1 & 2 \\ \phi_2(i) & 2 & 1 \end{array} \qquad \begin{array}{c|ccc} i & 1 & 2 & 3 \\ \hline \phi_1(i) & 1 & 2 & 3 \\ \phi_2(i) & 1 & 3 & 2 \\ \phi_3(i) & 2 & 1 & 3 \\ \phi_4(i) & 2 & 3 & 1 \\ \phi_5(i) & 3 & 1 & 2 \\ \phi_6(i) & 3 & 2 & 1 \end{array}$$
- This exercise is recommended for all readers.
- Problem 1
Compute the determinant by using the permutation expansion.
-
-
- This exercise is recommended for all readers.
- Problem 2
Compute these both with Gauss' method and with the permutation
expansion formula.
-
-
- This exercise is recommended for all readers.
- Problem 3
Use the permutation expansion formula to derive
the formula for $3 \times 3$ determinants.
- Problem 4
List all of the $4$-permutations.
- Problem 6
Prove that $f:V^n \to \mathbb{R}$
is multilinear if and only if for all $\vec{v}_1, \vec{v}_2 \in V$
and $k_1, k_2 \in \mathbb{R}$, this holds.
$$f(\vec{\rho}_1,\dots,k_1\vec{v}_1 + k_2\vec{v}_2,\dots,\vec{\rho}_n) = k_1 \cdot f(\vec{\rho}_1,\dots,\vec{v}_1,\dots,\vec{\rho}_n) + k_2 \cdot f(\vec{\rho}_1,\dots,\vec{v}_2,\dots,\vec{\rho}_n)$$
- Problem 7
Find the only nonzero term in the permutation expansion of
this matrix.

Compute that determinant by finding the signum of the associated
permutation.
- Problem 8
How would determinants change if we changed property (4) of the
definition to read that $|I| = 2$?
- This exercise is recommended for all readers.
- Problem 10
Show that if an $n \times n$
matrix has a nonzero determinant
then any column vector $\vec{v} \in \mathbb{R}^n$
can be expressed as a linear combination
of the columns of the matrix.
- Problem 11
True or false: a matrix whose entries are only zeros or ones has a determinant equal to zero, one, or negative one. (Strang 1980)
- Problem 12
- Show that there are $n!$
terms in the permutation
expansion formula of an $n \times n$ matrix.
- How many are sure to be zero if the $1,2$
entry is zero?
- Problem 13
How many $n$-permutations are there?
- This exercise is recommended for all readers.
- Problem 15
What is the smallest number of zeros, and the placement of
those zeros, needed to ensure that a $4 \times 4$ matrix has a
determinant of zero?
- This exercise is recommended for all readers.
- Problem 16
If we have $n$
data points $(x_1,y_1), (x_2,y_2), \dots, (x_n,y_n)$
and want to find a
polynomial $p(x) = a_{n-1}x^{n-1} + \dots + a_1x + a_0$
passing through those points
then we can plug in the points to get an $n$
equation/$n$ unknown
linear system.
The matrix of coefficients for that system is called the
Vandermonde matrix. Prove that the determinant of the transpose
of that matrix of coefficients
$$\begin{vmatrix} 1 & 1 & \ldots & 1 \\ x_1 & x_2 & \ldots & x_n \\ x_1^2 & x_2^2 & \ldots & x_n^2 \\ & \vdots \\ x_1^{n-1} & x_2^{n-1} & \ldots & x_n^{n-1} \end{vmatrix}$$
equals the product, over all indices $i,j \in \{1,\dots,n\}$
with $i < j$, of terms of the form $x_j - x_i$.
(This shows that
the determinant is zero, and the linear system has no solution,
if and only if the $x_i$'s in the data are not distinct.)
- This exercise is recommended for all readers.
- ? Problem 19
The nine positive digits can be arranged into $3 \times 3$
arrays in $9!$ ways.
Find the sum of the determinants of these arrays. (Trigg 1963)
- Problem 20
Show that

(Silverman & Trigg 1963)
- ? Problem 22
Show that the determinant of the $n^2$
elements in the upper left
corner of the Pascal triangle
$$\begin{array}{ccccc} 1 & 1 & 1 & 1 & \ldots \\ 1 & 2 & 3 & & \\ 1 & 3 & & & \\ 1 & & & & \\ \vdots & & & & \end{array}$$
has the value unity.
(Rupp & Aude 1931)
References
- Kemp, Franklin (1982), "Linear Equations", American Mathematical Monthly, Mathematical Association of America: 608.
- Silverman, D. L. (proposer); Trigg, C. W. (solver) (1963), "Quickie 237", Mathematics Magazine, Mathematical Association of America, 36 (1).
- Strang, Gilbert (1980), Linear Algebra and its Applications (2nd ed.), Harcourt Brace Jovanovich.
- Trigg, C. W. (proposer) (1963), "Quickie 307", Mathematics Magazine, Mathematical Association of America, 36 (1): 77.
- Trigg, C. W. (proposer); Walker, R. J. (solver) (1949), "Elementary Problem 813", American Mathematical Monthly, Mathematical Association of America, 56 (1).
- Rupp, C. A. (proposer); Aude, H. T. R. (solver) (1931), "Problem 3468", American Mathematical Monthly, Mathematical Association of America, 37 (6): 355.