
A vector is a mathematical object with magnitude and direction used to represent items such as the velocity, acceleration, or momentum of an object. A vector v can be represented by an n-tuple of real numbers:

v = [v_{i}] = (v_{1}, v_{2}, . . . . . , v_{n})

considered to be elements (points) of R^{n}, an n-dimensional real space.

If the n-tuples are complex numbers, then v is an element of C^{n}, an n-dimensional complex space.

The 2-dimensional vector v = (v_{1}, v_{2}) in the diagram below has magnitude
12.81, the distance from the origin to (8, 10), and direction, the orientation of the arrow from the origin to (8, 10). Notice that the v_{1} component is measured along the x-axis, while the v_{2} component is measured along the y-axis.


The 3-dimensional vector a = (a_{1}, a_{2}, a_{3}) in the diagram below has magnitude
17.83, the distance from the origin to (13, 10, 7), and direction, the orientation of the line from the origin to (13, 10, 7). Notice that the a_{1} component is measured along the x-axis, the a_{2} component is measured along the y-axis, and the a_{3} component is measured along the z-axis.

For example, the acceleration of an object at 17.83 meters per second^{2} in the (13, 10, 7) direction can be represented by the line in the preceding diagram.

Rules of Operations on Vectors.

Scalar Multiplication of a Vector.

The product of a vector v = [v_{i}] in R^{n} by a number s is [s*v_{i}]. A real number scalar is an element of R^{1} = R.

To multiply a scalar by a vector, simply multiply each component of the vector by the scalar.

For example, the linear momentum p of an object is the product of its velocity, a vector v, and its mass, a scalar m:

p = m * v.
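As a minimal sketch in Python (the helper name and the mass/velocity values are hypothetical, not from the text), scalar multiplication just scales each component:

```python
def scale(s, v):
    # Multiply each component of the vector v by the scalar s.
    return [s * vi for vi in v]

# Momentum p = m * v for a hypothetical mass m = 2 kg and velocity v = (8, 10) m/s.
m = 2.0
v = [8.0, 10.0]
p = scale(m, v)
print(p)  # [16.0, 20.0]
```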

Addition of Vectors.

Given vectors v = [v_{i}] and w = [w_{i}] in R^{n}, their sum v + w is another vector u = [u_{i}] in R^{n}, where:

u_{i} = v_{i} + w_{i} , for i = 1, . . . n .

To add two vectors, simply add their corresponding components.

Subtraction of Vectors.

Given vectors v and w of dimension n, their difference w - v is another vector u of dimension n, where:

u = w - v = w + (-1) * v.
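Componentwise addition and subtraction can be sketched in a few lines of Python (helper names are our own), using the vectors v = (6, 3) and w = (2, 7):

```python
def add(v, w):
    # Componentwise sum: u_i = v_i + w_i.
    return [vi + wi for vi, wi in zip(v, w)]

def sub(w, v):
    # w - v = w + (-1) * v, again componentwise.
    return [wi - vi for wi, vi in zip(w, v)]

v = [6, 3]
w = [2, 7]
print(add(v, w))  # [8, 10]
print(sub(w, v))  # [-4, 4]
```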

The next diagram shows v + w in red, and w - v in green, where v = (6, 3), and w = (2, 7).


3-d Example.

The next diagram shows the resultant F of two forces, F_{1} = (12, 0, 3) and F_{2} = (1, 10, 4) in kg * meters per second^{2}(newtons), acting on an object of mass 1 kg.

F = F_{1} + F_{2} = (13, 10, 7)

Recalling that force = mass * acceleration, the resultant acceleration a = F / m of the object is 17.83 meters per second^{2} in the direction (13, 10, 7).
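The resultant force and the magnitude of the acceleration can be checked numerically; a minimal Python sketch (variable names are our own):

```python
import math

# Resultant of the two forces from the example, acting on a 1 kg mass.
F1 = [12, 0, 3]
F2 = [1, 10, 4]
F = [a + b for a, b in zip(F1, F2)]          # (13, 10, 7)
m = 1.0
a = [Fi / m for Fi in F]                      # acceleration, since F = m * a
magnitude = math.sqrt(sum(ai * ai for ai in a))
print(F, round(magnitude, 2))  # [13, 10, 7] 17.83
```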

Zero Vector.

The n dimensional vector o = (0, 0, . . . . ,0) is called the zero vector.

Properties under Scalar Multiplication and Vector Addition.

For scalars s_{1} and s_{2}, and vectors u, v, and w in R^{n}:

v + o = v

v - v = o

v + w = w + v

(u + v) + w = u + (v + w)

v + v = 2 * v

0 * v = o

1 * v = v

-1 * v = -v

s_{1} * (v + w) = s_{1} * v + s_{1} * w

(s_{1} + s_{2}) * v = s_{1} * v + s_{2} * v

The n-dimensional space R^{n} is a vector space, if its elements, vectors, obey the rules above.

Dot (Inner or Scalar) Product of two Vectors.

Given vectors v = [v_{i}] and w = [w_{i}] of dimension n, their dot product v * w is a scalar determined by:

v * w = v_{1} * w_{1} + v_{2} * w_{2} + . . . . + v_{n} * w_{n},

ie the sum of the products of the vectors' corresponding components.
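The dot product is a one-line sum in Python (the helper name is our own); for example, with v = (12, 2) and w = (5, 9):

```python
def dot(v, w):
    # Sum of the products of corresponding components.
    return sum(vi * wi for vi, wi in zip(v, w))

print(dot([12, 2], [5, 9]))  # 12*5 + 2*9 = 78
```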

Length (Norm) of a vector.

Given a vector v = [v_{i}] of dimension n, the length of v, denoted by ||v||_{2}, is the square root of its dot product with itself:

||v||_{2} = sqrt(v * v) = sqrt(v_{1}^{2} + v_{2}^{2} + . . . . + v_{n}^{2}).

The ||.||_{2} norm is called the Euclidean norm of a vector. If v is a 2-dimensional vector, its formula is identical to the one used to find the hypotenuse of a right angled triangle.

If v = s, ie v is a 1-dimensional vector, then ||v||_{2} = |s|, the absolute value of the scalar s.

More generally, for p >= 1 the p-norm is defined by ||v||_{p} = (|v_{1}|^{p} + |v_{2}|^{p} + . . . . + |v_{n}|^{p})^{1/p}.

For p = 1 one gets ||v||_{1} = |v_{1}| + |v_{2}| + . . . . + |v_{n}|, ie the sum of the absolute values of the vector's components.

For p = infinity one gets ||v||_{inf} = max_{1 <= i <= n} {|v_{i}|}, ie the maximum absolute value among the vector's components.
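The three norms can be sketched in Python (helper names are our own; the test vector (3, -4) is an illustrative choice):

```python
def p_norm(v, p):
    # General p-norm; p = 2 gives the Euclidean norm.
    return sum(abs(vi) ** p for vi in v) ** (1.0 / p)

def inf_norm(v):
    # Limit as p -> infinity: the largest absolute component.
    return max(abs(vi) for vi in v)

v = [3, -4]
print(p_norm(v, 2))   # 5.0
print(p_norm(v, 1))   # 7.0
print(inf_norm(v))    # 4
```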

Properties of Vector Norms.

For a given vector norm || . ||, any scalar s, and vectors v, w in R^{n}:

||v|| = 0 if and only if v = o, the zero vector.

||v|| > 0 for any non-zero vector v.

||s * v|| = |s| * ||v||.

||v + w|| <= ||v|| + ||w||.

The magnitude of a vector |v| = ||v||_{2}, ie the length of v.

Geometric Interpretation of the Dot Product.

If ß is the angle from the vector v to the vector w, the dot product v * w is equivalent to:

v * w = |v| * |w| * cos(ß).

For vectors v = (12, 2) and w = (5, 9), ß = 0.9.

|v| * |w| * cos(ß) = 12.17 * 10.3 * cos(0.9) = 78

v * w = (12, 2) * (5, 9) = 78;

From the diagram, one sees that |w| * cos(ß) is the length of the projection of w = (5, 9) onto v = (12, 2). Therefore, v * w is the length of vector p = (6.32, 1.05), times the length of v.

The vector p obtained by projecting w onto v can be obtained from the formula:

p = ( (v * w) / |v|) * (v / |v|) = ( (v * w) / |v|^{2}) * v = (6.32, 1.05).
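The projection formula can be sketched directly in Python (helper names are our own), reproducing p = (6.32, 1.05) for v = (12, 2) and w = (5, 9):

```python
def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

def project(w, v):
    # p = ((v . w) / |v|^2) * v, the projection of w onto v.
    s = dot(v, w) / dot(v, v)
    return [s * vi for vi in v]

v = [12, 2]
w = [5, 9]
p = project(w, v)
print([round(pi, 2) for pi in p])  # [6.32, 1.05]
```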


Work equals Dot Product.

The work done by a constant force F = w, magnitude |w| = 10.3 newtons in the direction (5, 9), in displacing an object of mass 1 kg along the length 12.17 meters of the vector v = (12, 2) is v * w.

W = v * w = |w| newtons * |v| meters * cos(ß) = 78 newton meters (joules);

Properties of the Dot Product.

For any scalar s and vectors u, v, and w in R^{n}:

(u + v) * w = u * w + v * w

v * w = w * v

(s * v) * w = s * (v * w)

v * v >= 0

v * v = 0 if and only if v = o, the zero vector.

Euclidean Space.

The space R^{n} with the specified properties of scalar multiplication, vector addition, and inner product of vectors is called a Euclidean space of dimension n, denoted by E^{n}.

Vectors and Matrices.

Column and Row Vectors.

A vector v = (v_{1}, v_{2}, . . . . . , v_{n}) in R^{n} can be specified as a column or row vector. By convention, v is an n by 1 array (a column vector), while its transpose v^{T} is a 1 by n array (a row vector).

Matrices.

Given k vectors {v_{1}, v_{2}, . . . . v_{k}} in R^{n}, where the components of the i^{th} vector are given by:

v_{i} = (v_{1, i}, v_{2, i}, . . . . v_{n, i})

the arrays

v_{1}, v_{2}, . . . . v_{k}

=

v_{1,1}  v_{1,2}  . . .  v_{1,k-1}  v_{1,k}
v_{2,1}  v_{2,2}  . . .  v_{2,k-1}  v_{2,k}
  .        .     . . .     .          .
v_{n,1}  v_{n,2}  . . .  v_{n,k-1}  v_{n,k}

= V

are alternative ways of writing the matrix V. Thus V is an n by k array of scalars consisting of k n by 1 column vectors.

To understand the properties of matrices, and how matrices interact with vectors see the linear algebra: matrices web page.

Orthogonal Vectors.

Vectors v and w in R^{n} are said to be orthogonal if their inner product is zero. Since,

v * w = |v| * |w| * cos(ß),

where ß is the angle from v to w, non-zero vectors are orthogonal if and only if they are perpendicular to each other, ie when cos(ß) = 0.

Orthogonal vectors v and w are called orthonormal if they are of length one, ie v * v = 1, and w * w = 1.

Linear Dependence.

A set of k vectors {v_{1}, v_{2}, . . . . v_{k}} in R^{n} is linearly dependent if a set of scalars {s_{1}, s_{2}, . . . . s_{k}} can be found, not all zero, such that:

s_{1} * v_{1} + s_{2} * v_{2} + . . . . + s_{k} * v_{k} = o.

Otherwise, the vectors are linearly independent.

The vector u = (14.25, 16.5) is a linear combination of the vectors v = (11, 4) and w = (4, 9) given by:

u = 0.75 * v + 1.5 * w = (14.25, 16.5).
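The linear combination above can be checked componentwise in Python (the helper name is our own):

```python
def combine(s1, v, s2, w):
    # u = s1 * v + s2 * w, componentwise.
    return [s1 * vi + s2 * wi for vi, wi in zip(v, w)]

u = combine(0.75, [11, 4], 1.5, [4, 9])
print(u)  # [14.25, 16.5]
```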


Vectors {v_{1}, v_{2}, . . . . v_{k}} in R^{n} are linearly dependent if and only if one vector can be expressed as a linear combination of the remaining vectors.

The vector v depends linearly on the vector w if v = s * w for some scalar s.

The Span of a Set of Vectors.

Let V be a set of vectors {v_{1}, v_{2}, . . . . v_{k}}. The span of the set of vectors in V, span(V), is the set of all linear combinations of the vectors in V:

span(V) = s_{1} * v_{1} + s_{2} * v_{2} + . . . . + s_{k} * v_{k}, for all possible sets of scalars {s_{1}, s_{2}, . . . . s_{k}}.

Basis Vectors.

Unit Coordinate Vectors.

Write e_{i} as the vector in R^{n} whose components are 0's except for the ith component which is a 1.

The vectors {e_{1}, e_{2}, . . . . e_{n}}, called the unit coordinate vectors, are orthonormal since the vectors satisfy e_{i} * e_{i} = 1, and e_{i} * e_{j} = 0 if i and j are different.

For example, in R^{3}:

e_{1} = (1, 0, 0),

e_{2} = (0, 1, 0),

e_{3} = (0, 0, 1).

The vectors {e_{1}, e_{2}, . . . . e_{n}} in R^{n} are said to form a basis for R^{n}, since any vector v = (v_{1}, v_{2}, . . . . . , v_{n}) in R^{n} can be expressed as a linear combination of the {e_{1}, e_{2}, . . . . e_{n}} vectors:

v = v_{1} * e_{1} + v_{2} * e_{2} + . . . . + v_{n} * e_{n},

ie the sum of the products of each component of v with the corresponding basis vector.

The linearly independent set of vectors {e_{1}, e_{2}, . . . . e_{n}} is said to span the n dimensional space R^{n}.

Other Basis Systems.

Theorem: Any basis of R^{n} consists of exactly n linearly independent vectors in R^{n}.

Theorem: Any n linearly independent vectors in R^{n} are a basis for R^{n}.

2-d Example.

Any two linearly independent vectors in R^{2} are a basis. Any three vectors in R^{2} are linearly dependent since any one of the three vectors can be expressed as a linear combination of the other two vectors.


Change in Basis.

In the {e_{1}, e_{2}} basis, the vector u = u_{1} * e_{1} + u_{2} * e_{2} = 4 * e_{1} + 7 * e_{2}.

In the {v, w} basis, the vector u = s_{1} * v + s_{2} * w = 1.38 * v + 0.72 * w.

To move from one basis to the other, write the basis vectors v and w as the columns of the matrix:

A = [v, w] =

5  -4
3   4

Writing the coordinates of u in the {v, w} basis as the vector s = (s_{1}, s_{2})^{T}, the product of the matrix A and the vector s equals u in the {e_{1}, e_{2}} basis:

u = A * s.

The matrix A has an inverse matrix, A^{-1}, if and only if the vectors v and w are linearly independent. Then:

s = A^{-1} * u

s =

 0.125    0.125        4        1.38
-0.0938   0.1563   *   7   =    0.72
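The change of basis can be sketched in Python using the 2 by 2 adjugate inverse formula (helper names are our own):

```python
def inv2(A):
    # Inverse of a 2 x 2 matrix via the adjugate formula.
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(A, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in A]

A = [[5, -4], [3, 4]]   # columns are the basis vectors v and w
u = [4, 7]
s = matvec(inv2(A), u)
print([round(si, 2) for si in s])  # [1.38, 0.72]
```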

Linear Transformations.

A linear transformation T from an n-dimensional space R^{n} to an m-dimensional space R^{m} is a function defined by an m by n matrix A such that:

y = T(x) = A * x, for each x in R^{n}.

For example, the 2 by 2 change of basis matrix A in the 2-d example above generates a linear transformation from R^{2} to R^{2}.

T(x) =

5  -4       x_{1}       5 * x_{1} - 4 * x_{2}       y_{1}
3   4   *   x_{2}   =   3 * x_{1} + 4 * x_{2}   =   y_{2}   =   y

Transpose Linear Transformation.

The transpose transformation T^{T} is a function from R^{m} to R^{n} obtained from the transpose of the matrix A used to define T:

T^{T}(y) = A^{T} * y.

Properties of Linear Transformations.

For any scalar s and vectors v and w in R^{n}, a linear transformation T from R^{n} to R^{m} obeys:

T(v + w) = T(v) + T(w),

T(s * v) = s * T(v).
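The additivity property can be verified numerically for the change-of-basis matrix from the 2-d example; the test vectors here are our own illustrative choices:

```python
def matvec(A, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in A]

def T(x):
    # The change-of-basis matrix from the 2-d example above.
    return matvec([[5, -4], [3, 4]], x)

v = [1, 2]
w = [3, -1]
lhs = T([vi + wi for vi, wi in zip(v, w)])
rhs = [a + b for a, b in zip(T(v), T(w))]
print(lhs == rhs)  # True: T(v + w) = T(v) + T(w)
```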

Inverse Transformation T^{-1}.

A linear transformation T has an inverse transformation T^{-1} if and only if the matrix A that defines T has an inverse matrix A^{-1}.

If A^{-1} exists, then

x = T^{-1}(y) = A^{-1} * y.

Note: T^{-1} = T^{T} if and only if the matrix A used to define T is an orthogonal matrix.

The Image of a Transformation T.

Let A be an m by n matrix that defines a linear transformation T from R^{n} to R^{m}. Write A = {a_{1}, a_{2}, ... a_{n}} as the set of m by 1 column vectors of A, ie A = [a_{1}, a_{2}, ... a_{n}].

The image of T, im(T), is the span of the columns of A. Formally:

im(T) = span(A) = {y in R^{m} | y = A * x, for some x in R^{n}}.

Sometimes this is referred to as the image of the matrix A, im(A).

The Kernel of a Transformation T.

Define the linear transformation T(x) = A * x for A an m by n matrix. The kernel of T, ker(T), is the set of all vectors x in R^{n} for which T(x) = o, the zero vector in R^{m}. Formally:

ker(T) = {x in R^{n} | A * x = o in R^{m}}

Sometimes this is referred to as the kernel of the matrix A, ker(A).

3-d Example.

Define the linear transformation T from R^{3} to R^{3} by T(x) = A * x where:

T(x) = A * x =

12    1   0       x_{1}
 8   10   0   *   x_{2}
 1    3   0       x_{3}

The image of T, im(T), is the plane (2 dimensional) in R^{3} spanned by the vectors of the first two columns of A as displayed in the following diagram.

The kernel of T, ker(T), coincides with the z-axis, since the product of A and any vector x^{T} = (0, 0, x_{3}) equals o in R^{3}.
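That the z-axis lies in ker(T) can be confirmed by multiplying A by a vector of the form (0, 0, x_{3}) (the value 5 is an arbitrary illustrative choice):

```python
A = [[12, 1, 0],
     [8, 10, 0],
     [1, 3, 0]]

def matvec(A, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in A]

# Any vector along the z-axis is mapped to the zero vector.
print(matvec(A, [0, 0, 5]))  # [0, 0, 0]
```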

The transpose linear transformation T^{T} from R^{3} to R^{3} is defined by A^{T} as:

T^{T}(y) = A^{T} * y =

12    8   1       y_{1}
 1   10   3   *   y_{2}
 0    0   0       y_{3}

The im(T^{T}) coincides with the x-y plane in R^{3}, orthogonal to ker(T).

The ker(T^{T}) coincides with the line through the vector y^{T} = (1.2, -3, 9.6), orthogonal to im(T).

Subspaces of R^{n}.

Let V be a subset of the set of all vectors in R^{n}. V is said to be a vector subspace of R^{n} if V has the following properties:

V contains the zero vector o,

If v and w are in V, then v + w is in V,

If s is a scalar and v is in V, then s * v is in V.

Examples of Vector Subspaces.

a. If T is a linear transformation from R^{n} to R^{m}, then im(T) and ker(T^{T}) are vector subspaces of R^{m}, while ker(T) and im(T^{T}) are vector subspaces of R^{n}.

b. If V is any set of k vectors {v_{1}, v_{2}, . . . . v_{k}} in R^{n}, then span(V) is a vector subspace of R^{n}.

Gram-Schmidt Process.

Given a set of k linearly independent vectors {v_{1}, v_{2}, . . . . v_{k}} that span a vector subspace V of R^{n}, the Gram-Schmidt process generates a set of k orthogonal vectors {q_{1}, q_{2}, . . . . q_{k}} that are a basis for V.

The Gram-Schmidt process is based on an idea contained in the following diagram.

From the diagram above, the vector p obtained by projecting of w = (5, 9) onto v = (12, 2) is p = (6.32, 1.05).

The projection vector is obtained by:

p = ( (v * w) / (v * v) ) * v =
0.527 * v = (6.32, 1.05).

The vector u perpendicular to v is obtained by subtracting the projection of w onto v from w leaving that part of w that is perpendicular to v as the vector u:

u = w - p = (5, 9) - (6.32, 1.05) =
(-1.32, 7.95).
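This projection-and-subtraction step, the core of the Gram-Schmidt process, can be sketched in Python (helper names are our own):

```python
def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

v = [12, 2]
w = [5, 9]
s = dot(v, w) / dot(v, v)               # 78 / 148
p = [s * vi for vi in v]                # projection of w onto v
u = [wi - pi for wi, pi in zip(w, p)]   # the part of w perpendicular to v
print([round(ui, 2) for ui in u])       # [-1.32, 7.95]
print(abs(dot(u, v)) < 1e-9)            # True -- u is orthogonal to v
```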


3-d Example.

Use the Gram-Schmidt process to obtain an orthogonal basis for the two vectors v and w that span the im(T) subspace:

v, w =

12    1
 8   10
 1    3

The projection of w on v is the vector p given by:

p = ((v * w) / (v * v)) * v = 0.4545 * v

Subtracting p from w yields u, a vector perpendicular to v.

u =

 1                 12         -4.4545
10   - 0.4545 *     8    =     6.3636
 3                  1          2.5455

The vectors v and u are an orthogonal basis for im(T). Dividing these two vectors by their norms yields an orthonormal basis for the vector subspace.
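The whole 3-d computation, including the final normalization, can be sketched in Python (helper names are our own):

```python
import math

def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

v = [12, 8, 1]
w = [1, 10, 3]
s = dot(v, w) / dot(v, v)                      # 95 / 209, approx. 0.4545
u = [wi - s * vi for wi, vi in zip(w, v)]      # approx. (-4.4545, 6.3636, 2.5455)
q1 = [vi / math.sqrt(dot(v, v)) for vi in v]   # normalize v
q2 = [ui / math.sqrt(dot(u, u)) for ui in u]   # normalize u
print(abs(dot(q1, q2)) < 1e-9)                 # True: q1, q2 are orthonormal
```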

Orthonormal Basis.

It is desirable to obtain an orthonormal basis for a vector subspace because it is easy to work with an orthonormal basis. On the matrices web page, the Gram-Schmidt process is used to construct an orthonormal basis from a set of linearly independent vectors.

Specifically, the orthonormal basis matrix Q was obtained for R^{4}:

Q = [q_{1}, q_{2}, q_{3}, q_{4}] =

0.2132    0.6617   -0.6199    0.3638
0.4264    0.0389   -0.31     -0.8489
0.2132    0.6617    0.7085   -0.1213
0.8528   -0.3503    0.1328    0.3638

Orthonormal Basis Representation of a Vector.

Since the set {q_{1}, q_{2}, q_{3}, q_{4}} is a basis for R^{4}, any vector b^{T} = (b_{1}, b_{2}, b_{3}, b_{4}) in R^{4} can be written as a linear combination of the {q_{k}} basis vectors.

b = s_{1}*q_{1} + s_{2}*q_{2} + s_{3}*q_{3} + s_{4}*q_{4}.

To obtain each scalar s_{k}, notice that q_{i} * q_{j} = 0 if i and j are different, while q_{k} * q_{k} = 1. Therefore, multiplying the expression for b by each basis vector q_{k}^{T} one obtains each scalar multiplier s_{k}:

q_{k}^{T} * b = s_{k}.

Substituting these values for s_{k} into the expression for b:

b = (q_{1} * b) * q_{1} + (q_{2} * b) * q_{2} + (q_{3} * b) * q_{3} + (q_{4} * b) * q_{4}.

Let the vector b^{T} = (1, 1, 1, 1). The vector of scalar multipliers is s = (1.7056, 1.012, -0.0886, -0.2426). In terms of the orthonormal basis vectors:

b = 1.7056 * q_{1} + 1.012 * q_{2} - 0.0886 * q_{3} - 0.2426 * q_{4}.
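The scalar multipliers s_{k} = q_{k} * b can be computed directly from the columns of Q (the column values are the rounded entries of the matrix Q above; the helper name is our own):

```python
Q_cols = [
    [0.2132, 0.4264, 0.2132, 0.8528],    # q1
    [0.6617, 0.0389, 0.6617, -0.3503],   # q2
    [-0.6199, -0.31, 0.7085, 0.1328],    # q3
    [0.3638, -0.8489, -0.1213, 0.3638],  # q4
]
b = [1, 1, 1, 1]

def dot(v, w):
    return sum(a * c for a, c in zip(v, w))

s = [dot(q, b) for q in Q_cols]
print([round(sk, 4) for sk in s])  # [1.7056, 1.012, -0.0886, -0.2426]
```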

Cross Product of two Vectors.

While the vector dot product of two vectors produces a scalar, the vector cross product combines two vectors in R^{3} to produce a third vector perpendicular to the first two. Using the conventions of Analytic Geometry, define the unit coordinate vectors in R^{3} by

i = (1, 0, 0),

j = (0, 1, 0),

k = (0, 0, 1).

Let v = v_{1}i + v_{2}j + v_{3}k and w = w_{1}i + w_{2}j + w_{3}k be two vectors in R^{3}. Their cross product v X w is the vector:

u = v X w = (v_{2} w_{3} - v_{3} w_{2})i + (v_{3} w_{1} - v_{1} w_{3})j + (v_{1} w_{2} - v_{2} w_{1})k.

3-d Example.


Given vectors:

v = 12i + 3j + 7k, w = 1i + 10j + 3k,

their vector cross product is:

u = v X w = (3 * 3 - 7 * 10)i + (7 * 1 - 12 * 3)j + (12 * 10 - 3 * 1)k = -61i - 29j + 117k.
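The componentwise formula translates directly into Python (helper names are our own); the result is perpendicular to both inputs:

```python
def cross(v, w):
    # u = v x w, perpendicular to both v and w.
    return [v[1] * w[2] - v[2] * w[1],
            v[2] * w[0] - v[0] * w[2],
            v[0] * w[1] - v[1] * w[0]]

def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

v = [12, 3, 7]
w = [1, 10, 3]
u = cross(v, w)
print(u)                      # [-61, -29, 117]
print(dot(u, v), dot(u, w))   # 0 0
```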
