# Multiplication of Matrices

*Matrices* are rectangular arrays with entries from an arbitrary field. An *m × n* ("*m* by *n*") matrix is thus an array (a_{ik}), where *i* ranges from 1 through *m* while *k* ranges from 1 through *n*. More explicitly,

a_{11} a_{12} a_{13} ... a_{1n}
a_{21} a_{22} a_{23} ... a_{2n}
a_{31} a_{32} a_{33} ... a_{3n}
...
a_{m1} a_{m2} a_{m3} ... a_{mn}

which has *m* rows and *n* columns.

With every *m × n* matrix A we may associate *m* row (1 × n) vectors **a**_{i,} and *n* column (m × 1) vectors **a**_{,k}. Given two matrices, an *m × n* matrix A = (a_{ik}) and an *n × p* matrix B = (b_{ks}), their *product* AB is defined as the *m × p* matrix C = (c_{is}), where

c_{is} = (**a**_{i,} · **b**_{,s})

and inside the parentheses I used the scalar (*dot*) product of two vectors: row *i* of A with column *s* of B.
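The definition above can be sketched in a few lines of Python; `dot` and `matmul` are illustrative names of this sketch, not part of the article, and matrices are represented as lists of rows.

```python
# Sketch of the definition: the (i, s) entry of C = AB is the dot
# product of row i of A with column s of B.

def dot(u, v):
    """Scalar (dot) product of two equal-length vectors."""
    return sum(x * y for x, y in zip(u, v))

def matmul(A, B):
    """Multiply an m x n matrix A by an n x p matrix B (lists of rows)."""
    assert len(A[0]) == len(B), "column dim of A must equal row dim of B"
    cols_B = list(zip(*B))                     # columns of B as tuples
    return [[dot(row, col) for col in cols_B] for row in A]

A = [[1, 2, 3],
     [4, 5, 6]]                # a 2 x 3 matrix
B = [[7, 8],
     [9, 10],
     [11, 12]]                 # a 3 x 2 matrix
print(matmul(A, B))            # → [[58, 64], [139, 154]], a 2 x 2 matrix
```

Note that the column dimension of A (3) matches the row dimension of B (3), exactly as the next paragraph requires.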

The first thing to note is that not just any two matrices can be multiplied. To carry out the multiplication, the *column dimension* of the left factor must equal the *row dimension* of the right factor. Nonetheless, wherever it is defined, the product is associative and distributive with respect to the standard matrix addition. Matrix multiplication changes dimensions, which makes it hard to talk about a *unit element* in general. A fruitful approach is to confine the study to square (m = n) matrices of the same dimension. So let's assume for a while that all matrices below have dimension *n × n*. The benefit is immediate: any two such matrices can be multiplied, and the product is a matrix of the same dimension. This may be expressed by saying that the set of all *n × n* matrices is *closed* under two operations: *addition* and *multiplication*.
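A quick numerical check (not a proof, of course) of associativity and distributivity on 2 × 2 matrices; `madd` and `matmul` are helper names assumed for this sketch.

```python
# Spot-check that (A + B)C = AC + BC and (AB)C = A(BC) for 2 x 2 matrices.

def madd(A, B):
    """Entrywise sum of two matrices of the same dimensions."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def matmul(A, B):
    """Row-by-column matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [0, 2]]

assert matmul(madd(A, B), C) == madd(matmul(A, C), matmul(B, C))  # distributive
assert matmul(matmul(A, B), C) == matmul(A, matmul(B, C))         # associative
```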

With respect to addition, this set is an abelian group. Adding
multiplication makes it a ring. The *unit element* is uniquely defined as E = (δ_{ik}), where δ_{ik} is the *Kronecker* symbol: δ_{ik} = 1 if i = k, and δ_{ik} = 0 otherwise. (E is also known as the *identity matrix*. All elements of an *identity matrix* are zero, except for the main diagonal, where all elements are 1.)

Not all square matrices are invertible. But, if both A and B are, then so is their product AB. Furthermore,

(AB)^{-1} = B^{-1}A^{-1}

This is verified formally:

(B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}EB = B^{-1}(EB) = B^{-1}B = E,

and in a similar manner (AB)(B^{-1}A^{-1}) = E. However, in general, AB ≠ BA. For example, take A with rows (1, 1), (0, 1) and B with rows (1, 0), (1, 1); then AB has rows (2, 1), (1, 1), whereas BA has rows (1, 1), (1, 2).
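Both claims can be verified on a concrete pair of 2 × 2 matrices; `matmul` and `inv2` are helper names assumed for this sketch, with `Fraction` used to keep the inverses exact.

```python
from fractions import Fraction

# Check on a 2 x 2 example that AB != BA, yet (AB)^-1 = B^-1 A^-1.

def matmul(A, B):
    """Row-by-column matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def inv2(M):
    """Inverse of an invertible 2 x 2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]

assert matmul(A, B) != matmul(B, A)                     # AB != BA
assert inv2(matmul(A, B)) == matmul(inv2(B), inv2(A))   # (AB)^-1 = B^-1 A^-1
```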

Thus the set of all invertible matrices does not form a field. To get a *field* we might restrict the discussion even further, to the set of invertible *diagonal* matrices. A matrix A is diagonal if all its off-diagonal elements are zero: a_{ik} = 0 whenever i ≠ k; it is invertible when, in addition, a_{ii} ≠ 0 for all i.
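Since everything off the diagonal is zero, a diagonal matrix can be represented by its diagonal alone (an assumption of this sketch, with helper names `dmul` and `dinv`); its products and inverses then reduce to entrywise operations, which in particular makes multiplication commute.

```python
from fractions import Fraction

# Diagonal matrices, represented only by their diagonal entries.

def dmul(d1, d2):
    """Product of two diagonal matrices: diagonals multiply entrywise."""
    return [x * y for x, y in zip(d1, d2)]

def dinv(d):
    """Inverse of a diagonal matrix; defined since every a_ii != 0."""
    return [1 / x for x in d]    # exact, as entries are Fractions

D1 = [Fraction(2), Fraction(3)]
D2 = [Fraction(5), Fraction(1, 2)]

assert dmul(D1, D2) == dmul(D2, D1)        # multiplication commutes
assert dmul(D1, dinv(D1)) == [1, 1]        # D1 * D1^-1 = E
```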

To get a feeling for the definitions and facts claimed above, it's best to consider circumstances in which the theory
acquires an intuitive meaning. In short, matrices give numerical expression to *linear transformations* of vector spaces.


Copyright © 1996-2018 Alexander Bogomolny