Addition of Vectors and Matrices

The term vector applies to elements of spaces on which two operations are defined - addition and multiplication by a scalar. The definition appears to be circular but actually is not. First one sets up the axioms of a vector space for the two operations defined on its elements. Then, sometime later (and in passing), it's mentioned that it's customary to call the elements of a vector space vectors.

An essential part of the study of vector spaces is devoted to the existence of bases and representation of vectors in different bases. For a given vector space, all bases have the same cardinality. When the bases are finite (and, consequently, have the same number of elements) the space is said to be finite-dimensional, and its elements can be identified with $n-\mbox{tuples}$ $(x_{1}, x_{2},\ldots ,x_{n}).$

It would be much less interesting, but still possible, to start with n-tuples and define addition componentwise, as has been done for complex numbers. With this definition, it's immediately apparent that complex numbers may be identified with $2-\mbox{vectors}.$ Componentwise addition has a very simple physical and geometric interpretation. If a vector is looked at as an arrow emanating from one of its endpoints, then to add one vector to another, one slides the first vector until its beginning coincides with the end of the second. The sum of the two is the vector that joins their free ends - the beginning of one to the end of the other. This is known as the parallelogram rule. The parallelogram rule implies that the addition of vectors is commutative.
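In code, the componentwise definition is a one-liner; here is a minimal Python sketch (the helper name add_vectors is illustrative, not from the article):

    def add_vectors(u, v):
        # Componentwise sum of two vectors given as equal-length tuples.
        if len(u) != len(v):
            raise ValueError("vectors must have the same dimension")
        return tuple(a + b for a, b in zip(u, v))

    # Complex numbers as 2-vectors: (3 + 2i) + (1 + 5i) = 4 + 7i.
    print(add_vectors((3, 2), (1, 5)))  # (4, 7)
    print(add_vectors((1, 5), (3, 2)))  # (4, 7): the addition is commutative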

Matrices are vectors whose components are arranged in a rectangular array instead of a single row or column. An $m\times n$ (read "$m$ by $n$") matrix is thus an array $(a_{ij})$ where $i$ changes from $1$ through $m$ whereas $j$ ranges from $1$ through $n.$ More explicitly,

$\begin{array}{ccccc} a_{11} & a_{12} & a_{13} & \ldots & a_{1n}\\ a_{21} & a_{22} & a_{23} & \ldots & a_{2n}\\ a_{31} & a_{32} & a_{33} & \ldots & a_{3n}\\ \vdots & \vdots & \vdots & & \vdots\\ a_{m1} & a_{m2} & a_{m3} & \ldots & a_{mn} \end{array}$

which has $m$ rows and $n$ columns. It's clear that if we define matrix addition, again componentwise, the operation will be both associative and commutative. The zero matrix is the one with all components $0.$
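A similar Python sketch for matrices, stored as lists of rows (the name add_matrices is again illustrative):

    def add_matrices(A, B):
        # Componentwise sum of two m-by-n matrices stored as lists of rows.
        if len(A) != len(B) or any(len(r) != len(s) for r, s in zip(A, B)):
            raise ValueError("matrices must have the same shape")
        return [[a + b for a, b in zip(r, s)] for r, s in zip(A, B)]

    A = [[1, 2, 3],
         [4, 5, 6]]
    Z = [[0, 0, 0],
         [0, 0, 0]]                 # the zero matrix
    print(add_matrices(A, Z) == A)  # True: Z is the additive identity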

Defining only vector and matrix addition does an injustice to both vector and matrix spaces; both have much deeper algebraic structures. As I have mentioned, vectors can be multiplied by scalars. In addition, there are scalar, vector, and tensor products. For matrices, we have matrix and tensor products, as well as multiplication by a vector.


Direct Sum of Vector Spaces

Collecting numbers into n-tuples may be abstracted in yet another way. Let there be two vector spaces $X_{1}$ and $X_{2}.$ Then we may consider pairs of elements $(x_{1},x_{2})$ with the first component from the first space and the second from the second. The set of all such pairs is known as the direct sum of the spaces $X_{1}$ and $X_{2}.$ The number of spaces may, of course, be arbitrary. If we decide to consider a tuple of tuples as a single long tuple obtained by omitting parentheses from the "inner" tuples, the operation of direct sum becomes associative. If, in addition, we agree to identify vector spaces whose bases have the same cardinality, the operation of direct sum becomes commutative as well. Elsewhere we looked into the direct sum of Boolean Algebras.
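A small Python sketch of this identification, with elements of the direct sum modeled as pairs and the omission of inner parentheses as a flatten helper (both names are mine):

    def direct_sum(x, y):
        # An element of the direct sum: a pair with one component from each space.
        return (x, y)

    def flatten(t):
        # Identify a tuple of tuples with one long tuple (drop inner parentheses).
        return tuple(c for part in t for c in part)

    x, y, z = (1, 2), (3,), (4, 5)
    left = flatten(direct_sum(flatten(direct_sum(x, y)), z))
    right = flatten(direct_sum(x, flatten(direct_sum(y, z))))
    print(left == right)  # True: (1, 2, 3, 4, 5) either way, so the operation is associative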


Addition of Functions

A function is a correspondence $f$ between elements of a space $X$ and those of a space $Y$ such that any element $x$ of $X$ has a unique corresponding element $y$ of $Y,$ denoted $y=f(x).$ If $Y$ is a set of numbers, the function is called numeric. If $Y=\mathbb{R},$ the set of all real numbers, the function is called real. The following is a widely used shorthand for "a function $f$ from $X$ to $Y$":

$f: X\,\rightarrow Y.$

Two functions $f$ and $g$ are equal if they define the same correspondence, $f(x) = g(x),$ for all $x\in X.$ Numeric functions can be added. For example, let

$f, g:\,X \rightarrow \mathbb{R}$

be two real functions. Then

$f + g:\,X \rightarrow \mathbb{R}$

is, by definition, another real function $(f + g)$ such that

$(f + g)(x) = f(x) + g(x).$

The value of the sum is the sum of the values. Note that, in general, for an arbitrary function $f,$ its value at one point does not depend in any way on its values at other points. The sum of functions is said to be defined pointwise. Because of this, some properties of the addition of numbers are inherited by the addition of functions. Commutativity is one example:

$(f + g)(x) = f(x) + g(x) = g(x) + f(x) = (g + f)(x)$.

Therefore,

$f + g = g + f$.

Associativity is shown in a similar manner. It's also easy to define $-f$, the inverse element for $f$. Indeed, if $(-f)(x) = -f(x),$ then $f + (-f)=0,$ where $0$ is the zero function, i.e. the function that takes on a single value $0$ for all $x:$ $0(x) = 0.$
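The pointwise definitions translate directly into code; here is a minimal Python sketch (the names add, neg, and zero are illustrative):

    import math

    def add(f, g):
        # Pointwise sum: (f + g)(x) = f(x) + g(x).
        return lambda x: f(x) + g(x)

    def neg(f):
        # Pointwise inverse: (-f)(x) = -f(x).
        return lambda x: -f(x)

    zero = lambda x: 0              # the zero function: 0(x) = 0

    f, g = math.sin, math.cos
    print(add(f, g)(1.0) == f(1.0) + g(1.0))  # True: the value of the sum is the sum of the values
    print(add(f, g)(1.0) == add(g, f)(1.0))   # True: commutativity
    print(add(f, neg(f))(1.0) == zero(1.0))   # True: f + (-f) = 0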

It is worth noting that vectors are functions defined on finite sets. If $\mathbf{f} = (f_{1}, \ldots , f_{n}),$ then,

$\mathbf{f}: \{1, \ldots , n\} \rightarrow Y,$

and we may consider $f_{i},$ $i = 1, \ldots , n,$ as notation more common in this context than $f(i).$

In fact, the pointwise and componentwise definitions are actually the same. Indeed, assume $X = \{1, 2, \ldots , n\}.$ Then a function $f: X \rightarrow \mathbb{R}$ is uniquely defined by its $n$ values $f(1), f(2), \ldots , f(n).$ It's up to us to denote them $f_{1}, f_{2}, \ldots , f_{n}$ and write them in a row as a vector $(f_{1}, f_{2}, \ldots , f_{n}).$ If we do, pointwise function addition reduces to the componentwise addition of vectors. The notations reinforce the analogy: the space of $n-\mbox{tuples}$ of elements from $Y$ is denoted $Y^{n},$ whereas the set of all functions $f:\,X \rightarrow Y$ is denoted $Y^{X}.$
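A quick check of this equivalence in Python, with a function on $\{1, \ldots , n\}$ written out as its row of values (the helper as_tuple is hypothetical):

    def as_tuple(f, n):
        # The n values f(1), ..., f(n) written in a row as a vector.
        return tuple(f(i) for i in range(1, n + 1))

    f = lambda i: i * i             # f: {1, ..., n} -> R
    g = lambda i: 10 * i

    n = 4
    pointwise = as_tuple(lambda i: f(i) + g(i), n)  # (f + g) written as a vector
    componentwise = tuple(a + b for a, b in zip(as_tuple(f, n), as_tuple(g, n)))
    print(pointwise == componentwise)  # True: the two definitions agree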

When $Y = \{0, 1\},$ the Boolean Algebra with two elements, the notation for the set of all functions from $X$ to $Y$ is $2^{X}.$ These functions take on only the values $0$ and $1.$ If $f$ is such a function, then $X_{f} = \{x:\,f(x) = 1\}$ is the support of $f.$ $f,$ in turn, is known as the characteristic function of $X_{f}.$ Thus there is a 1-1 correspondence between subsets of $X$ and their characteristic functions. For this reason the set of all subsets of $X$ is often denoted $2^{X}.$
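A Python sketch of this correspondence (the helpers characteristic and support are illustrative):

    X = {'a', 'b', 'c'}

    def characteristic(S):
        # Characteristic function of a subset S of X: 1 on S, 0 elsewhere.
        return lambda x: 1 if x in S else 0

    def support(f):
        # Recover the subset X_f = {x : f(x) = 1} from the function.
        return {x for x in X if f(x) == 1}

    S = {'a', 'c'}
    f = characteristic(S)
    print([f(x) for x in sorted(X)])  # [1, 0, 1]: the values are only 0 and 1
    print(support(f) == S)            # True: the correspondence is 1-1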

It must also be noted that functions become interesting when the spaces $X$ and $Y$ are topological, in which case it's possible to stipulate that the values a function takes at nearby points of $X$ are near each other in $Y.$ Functions that satisfy this condition are called continuous. For continuous functions, the values $f(x)$ are no longer independent. The sum of two continuous functions is again continuous.
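For real functions on the line, the last claim is the familiar $\epsilon/2$ argument: given $\epsilon \gt 0,$ choose $\delta \gt 0$ so small that both $|f(x) - f(x_{0})| \lt \epsilon/2$ and $|g(x) - g(x_{0})| \lt \epsilon/2$ whenever $|x - x_{0}| \lt \delta.$ Then, by the triangle inequality, $|(f + g)(x) - (f + g)(x_{0})| \le |f(x) - f(x_{0})| + |g(x) - g(x_{0})| \lt \epsilon.$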


What Can Be Added?

  1. What Is Addition?
  2. Addition of Chains
  3. Addition of Equations
  4. Addition of Functions
  5. Addition of Numbers
  6. Addition of Sets
  7. Addition of Shapes
  8. Addition of Spaces
  9. Addition of Strings
  10. Addition of Vectors


Copyright © 1996-2018 Alexander Bogomolny
