
In elementary mathematics, physics, and engineering, a vector (sometimes called a geometric or spatial vector) is a geometric object that has both a magnitude (or length) and a direction. A vector is frequently represented by a line segment with a definite direction, or graphically as an arrow, connecting an initial point A with a terminal point B, and denoted by

- $\overrightarrow{AB}.$

The magnitude of the vector is the length of the segment and the direction characterizes the displacement of B relative to A: how much one should move the point A to "carry" it to the point B.

Many algebraic operations on real numbers have close analogues for vectors. Vectors can be added, subtracted, multiplied by a number, and flipped around so that the direction is reversed. These operations obey the familiar algebraic laws: commutativity, associativity, distributivity. The sum of two vectors with the same initial point can be found geometrically using the parallelogram law. Multiplication by a positive number, commonly called a scalar in this context, amounts to changing the magnitude of the vector, that is, stretching or compressing it while keeping its direction; multiplication by -1 preserves the magnitude of the vector but reverses its direction.

Cartesian coordinates provide a systematic way of describing vectors and operations on them. A vector becomes a tuple of real numbers, its scalar components. Addition of vectors and multiplication of a vector by a scalar are simply done component by component; see coordinate vector.

Vectors play an important role in physics: velocity and acceleration of a moving object and forces acting on a body are all described by vectors. Many other physical quantities can be usefully thought of as vectors. The mathematical representation of a physical vector depends on the coordinate system used to describe it. Other vector-like objects that describe physical quantities and transform in a similar way under changes of the coordinate system include pseudovectors and tensors.

When a vector is thought of as an arrow in Euclidean space, it possesses a definite initial point and terminal point. Such a vector is called a bound vector. In other situations, when only the magnitude and direction of the vector matter, the particular initial point is of no importance, and the vector is called a free vector. Thus two arrows $\overrightarrow{AB}$ and $\overrightarrow{A'B'}$ in space represent the same free vector if they have the same magnitude and direction: equivalently, they are equivalent if the quadrilateral ABB′A′ is a parallelogram. If the Euclidean space is equipped with a choice of origin, then any free vector is equivalent to a bound vector whose initial point is the origin.

The term vector also has generalizations to larger dimensions and to more formal approaches with much wider applications. Such generalizations are found in other articles. See the See also links and links within the text below.


Typically in Cartesian coordinates, one considers primarily bound vectors. A bound vector is determined by the coordinates of the terminal point, its initial point always having the coordinates of the origin O = (0,0,0). Thus the bound vector represented by (1,0,0) is a vector of unit length pointing from the origin up the positive x-axis.

The coordinate representation of vectors allows the algebraic features of vectors to be expressed in a convenient numerical fashion. For example, the sum of the vectors (1,2,3) and (-2,0,4) is the vector

- $(1, 2, 3) + (-2, 0, 4) = (1 - 2, 2 + 0, 3 + 4) = (-1, 2, 7).$
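The component-wise rule can be sketched in a few lines of Python (the tuple representation and the helper name `vec_add` are choices of this example, not a standard API):

```python
def vec_add(a, b):
    """Add two vectors of equal dimension component by component."""
    return tuple(x + y for x, y in zip(a, b))

s = vec_add((1, 2, 3), (-2, 0, 4))
print(s)  # (-1, 2, 7)
```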

However, it is not always possible or desirable to define the length of a vector in a natural way. This more general type of spatial vector is the subject of vector spaces (for bound vectors) and affine spaces (for free vectors).

In mathematics, a vector is considered more than a representation of a physical quantity. In general, a vector is any element of a vector space over some field, often represented as a coordinate vector. The vectors described in this article are a very special case of this general definition (they are not simply any element of R^{d} in d dimensions), which includes a variety of mathematical objects (algebras, the set of all functions from a given domain to a given linear range, and linear transformations). Note that under this definition, a tensor is a special vector.

Vectors are usually denoted in lowercase boldface, as a, or lowercase italic boldface, as a. (Uppercase letters are typically used to represent matrices.) Other conventions include $\vec{a}$ or __a__, especially in handwriting. Alternately, some use a tilde (~) or a wavy underline drawn beneath the symbol, which is a convention for indicating boldface type. If the vector represents a directed distance or displacement from a point A to a point B (see figure), it can also be denoted as $\overrightarrow{AB}$ or __AB__. The hat symbol (^) is typically used to denote unit vectors (vectors with unit length), as in $\hat{\mathbf{a}}$.

Vectors are usually shown in graphs or other diagrams as arrows (directed line segments), as illustrated in the figure. Here the point A is called the origin, tail, base, or initial point; point B is called the head, tip, endpoint, terminal point or final point. The length of the arrow is proportional to the vector's magnitude, while the direction in which the arrow points indicates the vector's direction.

On a two-dimensional diagram, sometimes a vector perpendicular to the plane of the diagram is desired. These vectors are commonly shown as small circles. A circle with a dot at its centre (Unicode U+2299 ⊙) indicates a vector pointing out of the front of the diagram, towards the viewer. A circle with a cross inscribed in it (Unicode U+2297 ⊗) indicates a vector pointing into and behind the diagram. These can be thought of as viewing the tip of an arrow head-on and viewing the vanes of an arrow from behind.

In order to calculate with vectors, the graphical representation may be too cumbersome. Vectors in an n-dimensional Euclidean space can be represented in a Cartesian coordinate system. The endpoint of a vector can be identified with an ordered list of n real numbers (n-tuple). As an example in two dimensions (see figure), the vector from the origin O = (0,0) to the point A = (2,3) is simply written as

- $\mathbf{a} = (2, 3).$

The notion that the tail of the vector coincides with the origin is implicit and easily understood. Thus, the more explicit notation $\overrightarrow{OA}$ is usually not deemed necessary and very rarely used.

In three-dimensional Euclidean space (or R^{3}), vectors are identified with triples of numbers corresponding to the Cartesian coordinates of the endpoint (a, b, c):

- $\mathbf{a} = (a, b, c).$

These numbers are often arranged into a column vector or row vector, particularly when dealing with matrices, as follows:

- $\mathbf{a} = \begin{bmatrix} a \\ b \\ c \end{bmatrix}$

or

- $\mathbf{a} = [a\ b\ c].$

Another way to express a vector in three dimensions is to introduce the three standard basis vectors:

- $\mathbf{e}_1 = (1,0,0),\quad \mathbf{e}_2 = (0,1,0),\quad \mathbf{e}_3 = (0,0,1).$

Any vector in three dimensions can then be written in terms of them:

- $(a,b,c) = a(1,0,0) + b(0,1,0) + c(0,0,1) = a\mathbf{e}_1 + b\mathbf{e}_2 + c\mathbf{e}_3.$

In introductory physics classes, these three special vectors are often instead denoted $\mathbf{i}, \mathbf{j}, \mathbf{k}$ (or $\hat{\mathbf{x}}, \hat{\mathbf{y}}, \hat{\mathbf{z}}$), but such notation clashes with the index notation and the summation convention commonly used in higher level mathematics, physics, and engineering.

The use of Cartesian versors such as $\hat{\mathbf{x}}, \hat{\mathbf{y}}, \hat{\mathbf{z}}$ as a basis in which to represent a vector is not mandated. Vectors can also be expressed in terms of cylindrical unit vectors $\hat{\mathbf{r}}, \hat{\boldsymbol{\theta}}, \hat{\mathbf{z}}$ or spherical unit vectors $\hat{\mathbf{r}}, \hat{\boldsymbol{\theta}}, \hat{\boldsymbol{\phi}}$. The latter two choices are more convenient for solving problems which possess cylindrical or spherical symmetry, respectively.

In the remainder of this section, vectors are expressed in an orthonormal basis

- $\mathbf{e}_1 = (1,0,0),\quad \mathbf{e}_2 = (0,1,0),\quad \mathbf{e}_3 = (0,0,1),$

so that a vector $\mathbf{a}$ takes the form

- $\mathbf{a} = a_1\mathbf{e}_1 + a_2\mathbf{e}_2 + a_3\mathbf{e}_3.$

Two vectors are said to be equal if they have the same magnitude and direction. Equivalently they will be equal if their coordinates are equal. So two vectors

- $\mathbf{a} = a_1\mathbf{e}_1 + a_2\mathbf{e}_2 + a_3\mathbf{e}_3$

- $\mathbf{b} = b_1\mathbf{e}_1 + b_2\mathbf{e}_2 + b_3\mathbf{e}_3$

are equal if and only if

- $a_1 = b_1,\quad a_2 = b_2,\quad a_3 = b_3.$

The sum of a and b is

- $\mathbf{a} + \mathbf{b} = (a_1 + b_1)\mathbf{e}_1 + (a_2 + b_2)\mathbf{e}_2 + (a_3 + b_3)\mathbf{e}_3.$

The addition may be represented graphically by placing the start of the arrow b at the tip of the arrow a, and then drawing an arrow from the start of a to the tip of b. The new arrow drawn represents the vector a + b, as illustrated below:

This addition method is sometimes called the parallelogram rule because a and b form the sides of a parallelogram and a + b is one of the diagonals. If a and b are bound vectors that have the same base point, it will also be the base point of a + b. One can check geometrically that a + b = b + a and (a + b) + c = a + (b + c).
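Commutativity and associativity can also be spot-checked on coordinates; a minimal sketch, assuming vectors are modelled as plain tuples (integer coordinates keep the comparisons exact):

```python
def vec_add(a, b):
    """Component-wise vector addition."""
    return tuple(x + y for x, y in zip(a, b))

a, b, c = (1, 2, 3), (-2, 0, 4), (5, -1, 2)
assert vec_add(a, b) == vec_add(b, a)                          # a + b = b + a
assert vec_add(vec_add(a, b), c) == vec_add(a, vec_add(b, c))  # (a+b)+c = a+(b+c)
```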

The difference of a and b is

- $\mathbf{a} - \mathbf{b} = (a_1 - b_1)\mathbf{e}_1 + (a_2 - b_2)\mathbf{e}_2 + (a_3 - b_3)\mathbf{e}_3.$

Subtraction of two vectors can be geometrically defined as follows: to subtract b from a, place the ends of a and b at the same point, and then draw an arrow from the tip of b to the tip of a. That arrow represents the vector a − b, as illustrated below:

A vector may also be multiplied, or re-scaled, by a real number r. In the context of conventional vector algebra, these real numbers are often called scalars (from scale) to distinguish them from vectors. The operation of multiplying a vector by a scalar is called scalar multiplication. The resulting vector is

- $r\mathbf{a} = (ra_1)\mathbf{e}_1 + (ra_2)\mathbf{e}_2 + (ra_3)\mathbf{e}_3.$

Intuitively, multiplying by a scalar r stretches a vector out by a factor of r. Geometrically, this can be visualized (at least in the case when r is an integer) as placing r copies of the vector in a line where the endpoint of one vector is the initial point of the next vector.

If r is negative, then the vector changes direction: it flips around by an angle of 180°. Two examples (r = -1 and r = 2) are given below:

Scalar multiplication is distributive over vector addition in the following sense: r(a + b) = ra + rb for all vectors a and b and all scalars r. One can also show that a - b = a + (-1)b.
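Both identities are easy to verify numerically; a small sketch (the tuple representation and helper names are choices of this example, and integer coordinates make equality exact):

```python
def scale(r, a):
    """Multiply each component of a by the scalar r."""
    return tuple(r * x for x in a)

def vec_add(a, b):
    return tuple(x + y for x, y in zip(a, b))

a, b, r = (1, 2, 3), (-2, 0, 4), 2
# distributivity: r(a + b) = ra + rb
assert scale(r, vec_add(a, b)) == vec_add(scale(r, a), scale(r, b))
# subtraction as a + (-1)b
diff = vec_add(a, scale(-1, b))
assert diff == (3, 2, -1)
```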

The length of the vector a can be computed with the Euclidean norm

- $\left|\mathbf{a}\right| = \sqrt{a_1^2 + a_2^2 + a_3^2},$

which is a consequence of the Pythagorean theorem since the basis vectors e_{1}, e_{2}, e_{3} are orthogonal unit vectors.

This happens to be equal to the square root of the dot product of the vector with itself:

- $\left|\mathbf{a}\right| = \sqrt{\mathbf{a} \cdot \mathbf{a}}.$
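A minimal sketch of both formulas, assuming tuple-based vectors:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    """Euclidean length, computed as the square root of a . a."""
    return math.sqrt(dot(a, a))

a = (3.0, 4.0, 0.0)
print(norm(a))  # 5.0, by the Pythagorean theorem
```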

The dot product of two vectors a and b (sometimes called the inner product, or, since its result is a scalar, the scalar product) is denoted by a ∙ b and is defined as:

- $\mathbf{a} \cdot \mathbf{b} = \left|\mathbf{a}\right|\left|\mathbf{b}\right|\cos\theta,$

where θ is the measure of the angle between a and b (see trigonometric function for an explanation of cosine). Geometrically, this means that a and b are drawn with a common start point, and then the length of a is multiplied by the length of that component of b which points in the same direction as a.

The dot product can also be defined as the sum of the products of the components of each vector as

- $\mathbf{a} \cdot \mathbf{b} = a_1 b_1 + a_2 b_2 + a_3 b_3.$
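The two definitions agree, which can be checked numerically (a sketch; the helper names are illustrative):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))   # component-wise definition

def norm(a):
    return math.sqrt(dot(a, a))

a, b = (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)
theta = math.acos(dot(a, b) / (norm(a) * norm(b)))  # angle between a and b
# the geometric definition |a||b|cos(theta) recovers the component-wise sum
assert math.isclose(norm(a) * norm(b) * math.cos(theta), dot(a, b))
```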

A unit vector is any vector with a length of one; normally unit vectors are used simply to indicate direction. A vector of arbitrary length can be divided by its length to create a unit vector. This is known as normalizing a vector. A unit vector is often indicated with a hat, as in â.

To normalize a vector a = [a_{1}, a_{2}, a_{3}], scale the vector by the reciprocal of its length ||a||. That is:

- $\hat{\mathbf{a}} = \frac{\mathbf{a}}{\left|\mathbf{a}\right|} = \frac{a_1}{\left|\mathbf{a}\right|}\mathbf{e}_1 + \frac{a_2}{\left|\mathbf{a}\right|}\mathbf{e}_2 + \frac{a_3}{\left|\mathbf{a}\right|}\mathbf{e}_3.$
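For instance (a sketch with a hand-rolled `normalize`; numerical libraries provide equivalents):

```python
import math

def normalize(a):
    """Scale a nonzero vector by the reciprocal of its length."""
    n = math.sqrt(sum(x * x for x in a))
    return tuple(x / n for x in a)

a_hat = normalize((3.0, 4.0, 0.0))
print(a_hat)  # (0.6, 0.8, 0.0) -- a unit vector in the direction of a
```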

The null vector (or zero vector) is the vector with length zero. Written out in coordinates, the vector is (0,0,0), and it is commonly denoted $\vec{0}$, or 0, or simply 0. Unlike any other vector, it does not have a direction, and cannot be normalized (that is, there is no unit vector which is a multiple of the null vector). The sum of the null vector with any vector a is a (that is, 0 + a = a).

The cross product (also called the vector product or outer product) is only meaningful in three dimensions. The cross product differs from the dot product primarily in that the result of the cross product of two vectors is a vector. The cross product, denoted a × b, is a vector perpendicular to both a and b and is defined as

- $\mathbf{a} \times \mathbf{b} = \left|\mathbf{a}\right|\left|\mathbf{b}\right|\sin(\theta)\,\mathbf{n},$

where θ is the measure of the angle between a and b, and n is a unit vector perpendicular to both a and b which completes a right-handed system. The right-handedness constraint is necessary because there exist two unit vectors that are perpendicular to both a and b, namely, n and (–n).

The cross product a × b is defined so that a, b, and a × b also become a right-handed system (although a and b are not necessarily orthogonal). This is the right-hand rule.

The length of a × b can be interpreted as the area of the parallelogram having a and b as sides.

The cross product can be written as

- $\mathbf{a} \times \mathbf{b} = (a_2 b_3 - a_3 b_2)\mathbf{e}_1 + (a_3 b_1 - a_1 b_3)\mathbf{e}_2 + (a_1 b_2 - a_2 b_1)\mathbf{e}_3.$
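A sketch of the component formula, with a check that the result is perpendicular to both factors (tuple vectors and the helper names are assumptions of the example):

```python
def cross(a, b):
    """Cross product from the component formula."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

e1, e2 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
c = cross(e1, e2)
print(c)  # (0.0, 0.0, 1.0) -- the right-handed e3
assert dot(c, e1) == 0.0 and dot(c, e2) == 0.0  # perpendicular to both
```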

For arbitrary choices of spatial orientation (that is, allowing for left-handed as well as right-handed coordinate systems) the cross product of two vectors is a pseudovector instead of a vector (see below).

The scalar triple product (also called the box product) combines the dot and cross products of three vectors:

- $(\mathbf{a}\ \mathbf{b}\ \mathbf{c}) = \mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}).$

It has three primary uses. First, the absolute value of the box product is the volume of the parallelepiped which has edges that are defined by the three vectors. Second, the scalar triple product is zero if and only if the three vectors are linearly dependent, which can be easily proved by considering that in order for the three vectors to not make a volume, they must all lie in the same plane. Third, the box product is positive if and only if the three vectors a, b and c are right-handed.

In components (with respect to a right-handed orthonormal basis), if the three vectors are thought of as rows (or columns, but in the same order), the scalar triple product is simply the determinant of the 3-by-3 matrix having the three vectors as rows

- $(\mathbf{a}\ \mathbf{b}\ \mathbf{c}) = \left|\begin{pmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{pmatrix}\right|.$
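The determinant form makes the triple product easy to compute; a sketch via a · (b × c), which equals that determinant:

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triple(a, b, c):
    """Scalar triple product a . (b x c)."""
    return dot(a, cross(b, c))

a, b, c = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 2.0)
assert triple(a, b, c) == 2.0                 # volume of the parallelepiped
assert triple(a, b, (1.0, 1.0, 0.0)) == 0.0   # coplanar vectors give zero
assert triple(b, a, c) == -2.0                # swapping two entries flips the sign
```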

The scalar triple product is linear in all three entries and anti-symmetric in the following sense:

- $(\mathbf{a}\ \mathbf{b}\ \mathbf{c}) = (\mathbf{b}\ \mathbf{c}\ \mathbf{a}) = (\mathbf{c}\ \mathbf{a}\ \mathbf{b}) = -(\mathbf{a}\ \mathbf{c}\ \mathbf{b}) = -(\mathbf{b}\ \mathbf{a}\ \mathbf{c}) = -(\mathbf{c}\ \mathbf{b}\ \mathbf{a}).$

All examples thus far have dealt with vectors expressed in terms of the same basis, namely, e_{1},e_{2},e_{3}. However, a vector can be expressed in terms of any number of different bases that are not necessarily aligned with each other, and still remain the same vector. For example, using the vector a from above,

- $\mathbf{a} = u\mathbf{n}_1 + v\mathbf{n}_2 + w\mathbf{n}_3,$

where n_{1},n_{2},n_{3} form another orthonormal basis not aligned with e_{1},e_{2},e_{3}. The values of u, v, and w are such that the resulting vector sum is exactly a.

It is not uncommon to encounter vectors known in terms of different bases (for example, one basis fixed to the Earth and a second basis fixed to a moving vehicle). In order to perform many of the operations defined above, it is necessary to know the vectors in terms of the same basis. One simple way to express a vector known in one basis in terms of another uses column matrices that represent the vector in each basis along with a third matrix containing the information that relates the two bases. For example, in order to find the values of u, v, and w that define a in the n_{1},n_{2},n_{3} basis, a matrix multiplication may be employed in the form

- $\begin{bmatrix} u \\ v \\ w \end{bmatrix} = \begin{bmatrix} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & c_{23} \\ c_{31} & c_{32} & c_{33} \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix},$

where each matrix element c_{ij} is the direction cosine relating n_{i} to e_{j}. The term direction cosine refers to the cosine of the angle between two unit vectors, which is also equal to their dot product.

By referring collectively to e_{1},e_{2},e_{3} as the e basis and to n_{1},n_{2},n_{3} as the n basis, the matrix containing all the c_{ij} is known as the "transformation matrix from e to n", or the "rotation matrix from e to n" (because it can be imagined as the "rotation" of a vector from one basis to another), or the "direction cosine matrix from e to n" (because it contains direction cosines).

The properties of a rotation matrix are such that its inverse is equal to its transpose. This means that the "rotation matrix from e to n" is the transpose of the "rotation matrix from n to e".

By applying several matrix multiplications in succession, any vector can be expressed in any basis so long as the set of direction cosines is known relating the successive bases.
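As an illustration, here is a hypothetical direction-cosine matrix for an n basis obtained by rotating the e basis about its third axis; the inverse-equals-transpose property means transforming back recovers the original components (a sketch, not a general-purpose routine):

```python
import math

# Hypothetical example: the n basis is the e basis rotated by theta
# about e3, so the direction-cosine matrix has entries c_ij = n_i . e_j.
theta = math.radians(30.0)
C = [[ math.cos(theta), math.sin(theta), 0.0],
     [-math.sin(theta), math.cos(theta), 0.0],
     [ 0.0,             0.0,             1.0]]

def mat_vec(M, x):
    """Multiply a 3x3 matrix by a 3-component column."""
    return tuple(sum(M[i][j] * x[j] for j in range(3)) for i in range(3))

a_e = (1.0, 2.0, 3.0)        # components of a in the e basis
u, v, w = mat_vec(C, a_e)    # components of the same vector in the n basis

# Transforming back with the transpose recovers the e components.
Ct = [[C[j][i] for j in range(3)] for i in range(3)]
back = mat_vec(Ct, (u, v, w))
assert all(math.isclose(x, y) for x, y in zip(back, a_e))
```

Note that the rotation also preserves the vector's length, which gives another quick consistency check.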

The basis for the Cartesian coordinate system used above is an orthonormal basis, where the basis vectors are orthogonal and of unit length. The above results also apply for other orthonormal bases, such as cylindrical, with unit vectors $\hat{\mathbf{r}}, \hat{\boldsymbol{\theta}}, \hat{\mathbf{z}}$, or spherical, with unit vectors $\hat{\mathbf{r}}, \hat{\boldsymbol{\theta}}, \hat{\boldsymbol{\phi}}$.

Addition, subtraction, and scalar multiplication also generalise naturally if the basis vectors are linearly independent. The dot product can also be defined for such bases; however, its interpretation as a length does not follow.

With the exception of the cross and triple products, the above formulas generalise to two and to higher dimensions. For example, addition generalises to two dimensions as

- $(a_1\mathbf{e}_1 + a_2\mathbf{e}_2) + (b_1\mathbf{e}_1 + b_2\mathbf{e}_2) = (a_1 + b_1)\mathbf{e}_1 + (a_2 + b_2)\mathbf{e}_2$

and in four dimensions as

- $\begin{align}(a_1\mathbf{e}_1 + a_2\mathbf{e}_2 + a_3\mathbf{e}_3 + a_4\mathbf{e}_4) &+ (b_1\mathbf{e}_1 + b_2\mathbf{e}_2 + b_3\mathbf{e}_3 + b_4\mathbf{e}_4) \\ &= (a_1 + b_1)\mathbf{e}_1 + (a_2 + b_2)\mathbf{e}_2 + (a_3 + b_3)\mathbf{e}_3 + (a_4 + b_4)\mathbf{e}_4.\end{align}$

The cross product generalises to the exterior product, whose result is a bivector, which in general is not a vector. In two dimensions this is simply a scalar

- $(a_1\mathbf{e}_1 + a_2\mathbf{e}_2) \wedge (b_1\mathbf{e}_1 + b_2\mathbf{e}_2) = a_1 b_2 - a_2 b_1.$

If a vector is a function of one or more scalar variables, the vector function can be differentiated with respect to those variables. The result will be a vector function with the same number of dimensions as the original. For example,

- $\frac{\mathrm{d}\mathbf{v}(t)}{\mathrm{d}t} = \mathbf{a}(t).$

The partial derivative of a vector function a with respect to a scalar variable q is defined as

- $\frac{\partial\mathbf{a}}{\partial q} = \sum_{i=1}^{3}\frac{\partial a_i}{\partial q}\mathbf{e}_i,$

where a_{i} is the scalar component of a in the direction of e_{i}. It is also called the direction cosine of a and e_{i} or their dot product. The vectors e_{1},e_{2},e_{3} form an orthonormal basis fixed in the reference frame in which the derivative is being taken.

If a is regarded as a vector function of a single scalar variable, such as time t, then the equation above reduces to the first ordinary time derivative of a with respect to t,

- $\frac{\mathrm{d}\mathbf{a}}{\mathrm{d}t} = \sum_{i=1}^{3}\frac{\mathrm{d}a_i}{\mathrm{d}t}\mathbf{e}_i.$
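This component-by-component rule can be checked against a finite-difference approximation; a sketch for the made-up example curve a(t) = (cos t, sin t, t) in a fixed basis:

```python
import math

# Example curve (an assumption of this sketch), resolved in a fixed basis.
def a(t):
    return (math.cos(t), math.sin(t), t)

def da_dt(t, h=1e-6):
    """Central finite difference of each scalar component."""
    return tuple((p - m) / (2 * h) for m, p in zip(a(t - h), a(t + h)))

t = 0.5
exact = (-math.sin(t), math.cos(t), 1.0)   # differentiate component-wise
assert all(math.isclose(n, e, abs_tol=1e-6) for n, e in zip(da_dt(t), exact))
```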

If the vector a is a function of a number n of scalar variables q_{r} (r = 1...n), and each q_{r} is only a function of time t, then the ordinary derivative of a with respect to t can be expressed, in a form known as the total derivative, as

- $\frac{\mathrm{d}\mathbf{a}}{\mathrm{d}t} = \sum_{r=1}^{n}\frac{\partial\mathbf{a}}{\partial q_r}\frac{\mathrm{d}q_r}{\mathrm{d}t} + \frac{\partial\mathbf{a}}{\partial t}.$

Some authors prefer to use capital D to indicate the total derivative operator, as in D/Dt. The total derivative differs from the partial time derivative in that the total derivative accounts for changes in a due to the time variance of the variables q_{r}.

The above formulas for the derivative of a vector function rely on the assumption that the basis vectors e_{1},e_{2},e_{3} are constant, and therefore have a derivative of zero. This often holds true for problems dealing with vector fields in a fixed coordinate system, or for simple problems in physics. However, many complex problems involve the derivative of a vector function in multiple moving reference frames, which means that the basis vectors will not necessarily be constant. In such a case where the basis vectors e_{1},e_{2},e_{3} are fixed in reference frame E, but not in reference frame N, the more general formula for the ordinary time derivative of a vector in reference frame N is

- $\frac{{}^\mathrm{N}\mathrm{d}\mathbf{a}}{\mathrm{d}t} = \sum_{i=1}^{3}\frac{\mathrm{d}a_i}{\mathrm{d}t}\mathbf{e}_i + \sum_{i=1}^{3}a_i\frac{{}^\mathrm{N}\mathrm{d}\mathbf{e}_i}{\mathrm{d}t},$

where the superscript N to the left of the derivative operator indicates the reference frame in which the derivative is taken. As shown previously, the first term on the right hand side is equal to the derivative of a in the reference frame where e_{1},e_{2},e_{3} are constant, reference frame E. It also can be shown that the second term on the right hand side is equal to the relative angular velocity of the two reference frames cross multiplied with the vector a itself. Thus, after substitution, the formula relating the derivative of a vector function in two reference frames is

- $\frac{{}^\mathrm{N}\mathrm{d}\mathbf{a}}{\mathrm{d}t} = \frac{{}^\mathrm{E}\mathrm{d}\mathbf{a}}{\mathrm{d}t} + {}^\mathrm{N}\boldsymbol{\omega}^\mathrm{E} \times \mathbf{a},$

where ^{N}ω^{E} is the angular velocity of the reference frame E relative to the reference frame N.
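A numerical sanity check of this formula (a sketch with made-up values): let frame E rotate about the z-axis of N at rate ω, and let a be constant in E, so the E-frame derivative vanishes and the N-frame derivative should equal ω × a.

```python
import math

w = 0.7                      # angular rate of E in N (made-up value)
a_E = (1.0, 2.0, 0.5)        # components of a, constant in frame E

def a_in_N(t):
    """Components of a resolved in N: a rotated by the angle w*t about z."""
    c, s = math.cos(w * t), math.sin(w * t)
    return (c * a_E[0] - s * a_E[1], s * a_E[0] + c * a_E[1], a_E[2])

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

t, h = 1.3, 1e-6
# N-frame derivative by central finite differences
num = tuple((p - m) / (2 * h) for m, p in zip(a_in_N(t - h), a_in_N(t + h)))
omega = (0.0, 0.0, w)        # N_omega_E
expected = cross(omega, a_in_N(t))
assert all(math.isclose(n, e, abs_tol=1e-6) for n, e in zip(num, expected))
```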

One common example where this formula is used is to find the velocity of a space-borne object, such as a rocket, in the inertial reference frame using measurements of the rocket's velocity relative to the ground. The velocity ^{N}v^{R} in inertial reference frame N of a rocket R located at position r^{R} can be found using the formula

- ${}^\mathrm{N}\mathbf{v}^\mathrm{R} = {}^\mathrm{E}\mathbf{v}^\mathrm{R} + {}^\mathrm{N}\boldsymbol{\omega}^\mathrm{E} \times \mathbf{r}^\mathrm{R},$

where ^{E}v^{R} is the velocity vector of the rocket as measured from a reference frame E that is fixed to the Earth, and ^{N}ω^{E} is the angular velocity of the Earth relative to the inertial frame N. Since velocity is the derivative of position, ^{N}v^{R} and ^{E}v^{R} are the derivatives of r^{R} in reference frames N and E, respectively.

The derivative of the products of vector functions behaves similarly to the derivative of the products of scalar functions. Specifically, in the case of scalar multiplication of a vector, if p is a scalar variable function of q,

- $\frac{\partial}{\partial q}(p\mathbf{a}) = \frac{\partial p}{\partial q}\mathbf{a} + p\frac{\partial\mathbf{a}}{\partial q}.$

In the case of dot multiplication, for two vectors a and b that are both functions of q,

- $\frac{\partial}{\partial q}(\mathbf{a} \cdot \mathbf{b}) = \frac{\partial\mathbf{a}}{\partial q} \cdot \mathbf{b} + \mathbf{a} \cdot \frac{\partial\mathbf{b}}{\partial q}.$

Similarly, the derivative of the cross product of two vector functions is

- $\frac{\partial}{\partial q}(\mathbf{a} \times \mathbf{b}) = \frac{\partial\mathbf{a}}{\partial q} \times \mathbf{b} + \mathbf{a} \times \frac{\partial\mathbf{b}}{\partial q}.$

Vectors have many uses in physics and other sciences.

The position of a point p = (p_{1}, p_{2}, p_{3}) in space can be represented as a position vector whose base point is the origin:

- $\mathbf{p} = p_1\mathbf{e}_1 + p_2\mathbf{e}_2 + p_3\mathbf{e}_3.$

Given two points p=(p_{1}, p_{2}, p_{3}), q=(q_{1}, q_{2}, q_{3}) their displacement is a vector

- $\mathbf{q} - \mathbf{p} = (q_1 - p_1)\mathbf{e}_1 + (q_2 - p_2)\mathbf{e}_2 + (q_3 - p_3)\mathbf{e}_3.$

The velocity v of a point or particle is a vector whose length gives the speed. For constant velocity, the position at time t will be

- $\mathbf{p}_t = t\mathbf{v} + \mathbf{p}_0,$

where $\mathbf{p}_0$ is the position at time $t = 0$.

Acceleration a of a point is a vector which is the time derivative of velocity. Its dimensions are length/time^{2}.

Force is a vector; Newton's second law relates it to mass and acceleration by the scalar multiplication

- $\mathbf{F} = m\mathbf{a}.$

Work, a form of energy, is the dot product of force and displacement:

- $E = \mathbf{F} \cdot (\mathbf{p}_2 - \mathbf{p}_1).$
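For example (a sketch; the force and endpoints are made-up values):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

F = (3.0, 0.0, 0.0)                     # constant force
p1, p2 = (0.0, 0.0, 0.0), (2.0, 1.0, 0.0)
d = tuple(b - a for a, b in zip(p1, p2))  # displacement p2 - p1
E = dot(F, d)
print(E)  # 6.0 -- only the displacement along F contributes
```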

A component of a vector is the influence of that vector in a given direction. Components are themselves vectors.

A vector is often described by a fixed number of components that sum up into this vector uniquely and totally. When used in this role, the choice of their constituting directions is dependent upon the particular coordinate system being used, such as Cartesian coordinates, spherical coordinates or polar coordinates. For example, an axial component of a vector is a component whose direction is determined by a projection onto one of the Cartesian coordinate axes, whereas radial and tangential components relate to the radius of rotation of an object as their direction of reference. The former is parallel to the radius and the latter is orthogonal to it. Both remain orthogonal to the axis of rotation at all times. (In two dimensions this requirement becomes redundant as the axis degenerates to a point of rotation.) The choice of a coordinate system doesn't affect properties of a vector or its behaviour under transformations.

A vector may also be defined as a directional derivative. Given a function $f$ of the coordinates $x^\alpha$ and a parametrised curve $x^\alpha(\tau)$, the derivative of $f$ along the curve is

- $\frac{df}{d\tau} = \sum_{\alpha=1}^n \frac{dx^\alpha}{d\tau}\frac{\partial f}{\partial x^\alpha},$

where the index $\alpha$ is summed over the appropriate number of dimensions (for example, from 1 to 3 in 3-dimensional Euclidean space, from 0 to 3 in 4-dimensional spacetime, etc.). Then consider a vector tangent to $x^\alpha(\tau)$:

- $t^\alpha = \frac{dx^\alpha}{d\tau}.$

The directional derivative can be rewritten in differential form (without a given function $f$) as

- $\frac{d}{d\tau} = \sum_\alpha t^\alpha\frac{\partial}{\partial x^\alpha}.$

Therefore any directional derivative can be identified with a corresponding vector, and any vector can be identified with a corresponding directional derivative. A vector can therefore be defined precisely as

- $\mathbf{a} \equiv a^\alpha \frac{\partial}{\partial x^\alpha}.$
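The identification can be verified numerically: contracting the tangent components t^α with the partial derivatives of f reproduces df/dτ along the curve. A sketch, where f and the curve are made-up examples:

```python
import math

def f(x):
    """Example scalar function of the coordinates (an assumption)."""
    return x[0] ** 2 + math.sin(x[1]) + x[2]

def x_of(tau):
    """Example curve x(tau) (an assumption)."""
    return (tau, 2.0 * tau, tau ** 2)

h, tau = 1e-6, 0.8
x = x_of(tau)
# tangent components t^alpha = dx^alpha/dtau, by central differences
tangent = tuple((b - a) / (2 * h) for a, b in zip(x_of(tau - h), x_of(tau + h)))

def partial(i):
    """Partial derivative of f with respect to x^i, by central differences."""
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(tuple(xp)) - f(tuple(xm))) / (2 * h)

lhs = sum(tangent[i] * partial(i) for i in range(3))        # t^a df/dx^a
rhs = (f(x_of(tau + h)) - f(x_of(tau - h))) / (2 * h)       # df/dtau
assert math.isclose(lhs, rhs, abs_tol=1e-4)
```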

In the language of differential geometry, the requirement that the components of a vector transform according to the same matrix of the coordinate transition is equivalent to defining a vector to be a tensor of contravariant rank one. However, in differential geometry and other areas of mathematics such as representation theory, the "coordinate transitions" need not be restricted to rotations. Other notions of spatial vector correspond to different choices of symmetry group.

As a particular case where the symmetry group is important, all of the above examples are vectors which "transform like the coordinates" under both proper and improper rotations. An example of an improper rotation is a mirror reflection. That is, these vectors are defined in such a way that, if all of space were flipped around through a mirror (or otherwise subjected to an improper rotation), that vector would flip around in exactly the same way. Vectors with this property are called true vectors, or polar vectors. However, other vectors are defined in such a way that, upon flipping through a mirror, the vector flips in the same way, but also acquires a negative sign. These are called pseudovectors (or axial vectors), and most commonly occur as cross products of true vectors.

One example of an axial vector is angular velocity. Driving in a car, and looking forward, each of the wheels has an angular velocity vector pointing to the left. If the world is reflected in a mirror which switches the left and right side of the car, the reflection of this angular velocity vector points to the right, but the actual angular velocity vector of the wheel still points to the left, corresponding to the minus sign. Other examples of pseudovectors include magnetic field, torque, or more generally any cross product of two (true) vectors. This distinction between vectors and pseudovectors is often ignored, but it becomes important in studying symmetry properties. See parity (physics).

- Affine space, which distinguishes between vectors and points
- Four-vector, the specialization to space-time in relativity
- Normal vector
- Null vector
- Pseudovector
- Tangential and normal components (of a vector)
- Unit vector
- Vector calculus
- Vector bundle
- Vector notation
- Function space
- Banach space
- Hilbert space
- Coordinate system
- Complex number
- Quaternion
- Tensor
- Covariance and contravariance of vectors



Wikipedia, the free encyclopedia © 2001-2006 Wikipedia contributors (Disclaimer)

This article is licensed under the GNU Free Documentation License.

Last updated on Saturday October 11, 2008 at 18:23:25 PDT (GMT -0700)

