
*Algebraic structure in linear algebra*

From Wikipedia, the free encyclopedia

In mathematics and physics, a **vector space** (also called a **linear space**) is a set whose elements, often called *vectors*, can be added together and multiplied ("scaled") by numbers called *scalars*. The operations of vector addition and scalar multiplication must satisfy certain requirements, called *vector axioms*. **Real vector spaces** and **complex vector spaces** are kinds of vector spaces based on different kinds of scalars: real numbers and complex numbers. Scalars can also be, more generally, elements of any field.

Vector spaces generalize Euclidean vectors, which allow modeling of physical quantities, such as forces and velocity, that have not only a magnitude, but also a direction. The concept of vector spaces is fundamental for linear algebra, together with the concept of matrices, which allows computing in vector spaces. This provides a concise and synthetic way for manipulating and studying systems of linear equations.

Vector spaces are characterized by their dimension, which, roughly speaking, specifies the number of independent directions in the space. This means that, for two vector spaces over a given field and with the same dimension, the properties that depend only on the vector-space structure are exactly the same (technically the vector spaces are isomorphic). A vector space is *finite-dimensional* if its dimension is a natural number. Otherwise, it is *infinite-dimensional*, and its dimension is an infinite cardinal. Finite-dimensional vector spaces occur naturally in geometry and related areas. Infinite-dimensional vector spaces occur in many areas of mathematics. For example, polynomial rings are countably infinite-dimensional vector spaces, and many function spaces have the cardinality of the continuum as a dimension.

Many vector spaces that are considered in mathematics are also endowed with other structures. This is the case of algebras, which include field extensions, polynomial rings, associative algebras and Lie algebras. This is also the case of topological vector spaces, which include function spaces, inner product spaces, normed spaces, Hilbert spaces and Banach spaces.

In this article, vectors are represented in boldface to distinguish them from scalars.^{[nb 1]}^{[1]}

A vector space over a field F is a non-empty set V together with a binary operation and a binary function that satisfy the eight axioms listed below. In this context, the elements of V are commonly called *vectors*, and the elements of F are called *scalars*.^{[2]}

- The binary operation, called *vector addition* or simply *addition*, assigns to any two vectors **v** and **w** in V a third vector in V, which is commonly written as **v** + **w** and called the *sum* of these two vectors.

- The binary function, called *scalar multiplication*, assigns to any scalar a in F and any vector **v** in V another vector in V, which is denoted *a***v**.^{[nb 2]}

To have a vector space, the eight following axioms must be satisfied for every **u**, **v** and **w** in V, and a and b in F.^{[3]}

| Axiom | Statement |
|---|---|
| Associativity of vector addition | **u** + (**v** + **w**) = (**u** + **v**) + **w** |
| Commutativity of vector addition | **u** + **v** = **v** + **u** |
| Identity element of vector addition | There exists an element **0** ∈ V, called the *zero vector*, such that **v** + **0** = **v** for all **v** ∈ V. |
| Inverse elements of vector addition | For every **v** ∈ V, there exists an element −**v** ∈ V, called the *additive inverse* of **v**, such that **v** + (−**v**) = **0**. |
| Compatibility of scalar multiplication with field multiplication | a(b**v**) = (ab)**v**^{[nb 3]} |
| Identity element of scalar multiplication | 1**v** = **v**, where 1 denotes the multiplicative identity in F. |
| Distributivity of scalar multiplication with respect to vector addition | a(**u** + **v**) = a**u** + a**v** |
| Distributivity of scalar multiplication with respect to field addition | (a + b)**v** = a**v** + b**v** |
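As a quick illustration (not part of the standard exposition), the eight axioms can be spot-checked numerically for the familiar case V = **R**^{2}, F = **R**. This is only a sketch that tests a few sample values, not a proof; the helper names `add` and `smul` are illustrative.

```python
# Spot-check the eight vector space axioms for V = R^2, F = R.
# Vectors are modeled as 2-tuples of floats; the sample values are
# chosen so every operation is exact in floating point.

def add(v, w):
    return (v[0] + w[0], v[1] + w[1])

def smul(a, v):
    return (a * v[0], a * v[1])

u, v, w = (1.0, 2.0), (3.0, -1.0), (0.5, 4.0)
a, b = 2.0, -3.0
zero = (0.0, 0.0)

assert add(u, add(v, w)) == add(add(u, v), w)             # associativity
assert add(u, v) == add(v, u)                             # commutativity
assert add(v, zero) == v                                  # identity element
assert add(v, smul(-1.0, v)) == zero                      # additive inverse
assert smul(a, smul(b, v)) == smul(a * b, v)              # compatibility
assert smul(1.0, v) == v                                  # scalar identity
assert smul(a, add(u, v)) == add(smul(a, u), smul(a, v))  # distributivity over vector addition
assert add(smul(a, v), smul(b, v)) == smul(a + b, v)      # distributivity over field addition
```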

When the scalar field is the real numbers, the vector space is called a *real vector space*, and when the scalar field is the complex numbers, the vector space is called a *complex vector space*.^{[4]} These two cases are the most common ones, but vector spaces with scalars in an arbitrary field F are also commonly considered. Such a vector space is called an F-*vector space* or a *vector space over F*.^{[5]}

An equivalent definition of a vector space can be given, which is much more concise but less elementary: the first four axioms (related to vector addition) say that a vector space is an abelian group under addition, and the four remaining axioms (related to the scalar multiplication) say that this operation defines a ring homomorphism from the field *F* into the endomorphism ring of this group.^{[6]}

Subtraction of two vectors can be defined as **v** − **w** = **v** + (−**w**).

Direct consequences of the axioms include that, for every a ∈ F and every **v** ∈ V, one has

- 0**v** = **0**
- a**0** = **0**
- (−1)**v** = −**v**
- a**v** = **0** implies a = 0 or **v** = **0**.

Even more concisely, a vector space is a module over a field.^{[7]}
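As an illustration of how such consequences follow from the axioms, here is the short derivation of the identity 0**v** = **0**, using distributivity over field addition, the additive inverse, and associativity:

```latex
0\mathbf{v} = (0 + 0)\mathbf{v} = 0\mathbf{v} + 0\mathbf{v}
\quad\Longrightarrow\quad
\mathbf{0} = 0\mathbf{v} + \bigl(-(0\mathbf{v})\bigr)
           = \bigl(0\mathbf{v} + 0\mathbf{v}\bigr) + \bigl(-(0\mathbf{v})\bigr)
           = 0\mathbf{v}.
```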

- Linear combination
- Given a set G of elements of an F-vector space V, a *linear combination* of elements of G is an element of V of the form *a*_{1}**g**_{1} + *a*_{2}**g**_{2} + ⋯ + *a*_{k}**g**_{k}, where *a*_{1}, …, *a*_{k} ∈ F and **g**_{1}, …, **g**_{k} ∈ G. The scalars *a*_{1}, …, *a*_{k} are called the *coefficients* of the linear combination.^{[8]}

- Linear independence
- The elements of a subset G of an F-vector space V are said to be *linearly independent* if no element of G can be written as a linear combination of the other elements of G. Equivalently, they are linearly independent if two linear combinations of elements of G define the same element of V if and only if they have the same coefficients. Also equivalently, they are linearly independent if a linear combination results in the zero vector if and only if all its coefficients are zero.^{[9]}

- Linear subspace
- A *linear subspace* or *vector subspace* W of a vector space V is a non-empty subset of V that is closed under vector addition and scalar multiplication; that is, the sum of two elements of W and the product of an element of W by a scalar belong to W.^{[10]} This implies that every linear combination of elements of W belongs to W. A linear subspace is a vector space for the induced addition and scalar multiplication; this means that the closure property implies that the axioms of a vector space are satisfied.^{[11]} The closure property also implies that *every intersection of linear subspaces is a linear subspace*.^{[11]}

- Linear span
- Given a subset G of a vector space V, the *linear span* or simply the *span* of G is the smallest linear subspace of V that contains G, in the sense that it is the intersection of all linear subspaces that contain G. The span of G is also the set of all linear combinations of elements of G. If W is the span of G, one says that G *spans* or *generates* W, and that G is a *spanning set* or a *generating set* of W.^{[12]}

- Basis and dimension
- A subset of a vector space is a *basis* if its elements are linearly independent and span the vector space.^{[13]} Every vector space has at least one basis, and in general many (see Basis (linear algebra) § Proof that every vector space has a basis).^{[14]} Moreover, all bases of a vector space have the same cardinality, which is called the *dimension* of the vector space (see Dimension theorem for vector spaces).^{[15]} This is a fundamental property of vector spaces, which is detailed in the remainder of the section.
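Linear independence can be tested mechanically by row-reducing the vectors with exact rational arithmetic: the vectors are independent exactly when the reduced matrix has full row rank. The sketch below (the function name is illustrative) works over **Q**.

```python
from fractions import Fraction

def linearly_independent(vectors):
    """Test linear independence over Q by Gaussian elimination.

    `vectors` is a non-empty list of equal-length tuples of
    ints/Fractions. They are independent iff no row of the
    row-reduced matrix vanishes, i.e. the rank equals the number
    of vectors.
    """
    rows = [[Fraction(x) for x in v] for v in vectors]
    rank, ncols = 0, len(rows[0])
    for col in range(ncols):
        # Find a pivot at or below the current rank row.
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col] != 0), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        # Eliminate this column from all other rows.
        for r in range(len(rows)):
            if r != rank and rows[r][col] != 0:
                factor = rows[r][col] / rows[rank][col]
                rows[r] = [a - factor * b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank == len(rows)

assert linearly_independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)])  # standard basis of Q^3
assert not linearly_independent([(1, 2, 3), (2, 4, 6)])         # second = 2 * first
```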

*Bases* are a fundamental tool for the study of vector spaces, especially when the dimension is finite. In the infinite-dimensional case, the existence of infinite bases, often called Hamel bases, depends on the axiom of choice. It follows that, in general, no basis can be explicitly described.^{[16]} For example, the real numbers form an infinite-dimensional vector space over the rational numbers, for which no specific basis is known.

Consider a basis (**b**_{1}, **b**_{2}, …, **b**_{n}) of a vector space V of dimension n over a field F. The definition of a basis implies that every **v** ∈ V may be written **v** = *a*_{1}**b**_{1} + ⋯ + *a*_{n}**b**_{n}, with *a*_{1}, …, *a*_{n} in F, and that this decomposition is unique. The scalars *a*_{1}, …, *a*_{n} are called the *coordinates* of **v** on the basis. They are also said to be the *coefficients* of the decomposition of **v** on the basis. One also says that the n-tuple of the coordinates is the coordinate vector of **v** on the basis, since the set of the n-tuples of elements of F is a vector space for componentwise addition and scalar multiplication, whose dimension is n.

The one-to-one correspondence between vectors and their coordinate vectors maps vector addition to vector addition and scalar multiplication to scalar multiplication. It is thus a vector space isomorphism, which allows translating reasonings and computations on vectors into reasonings and computations on their coordinates.^{[17]}
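Concretely, finding the coordinates of a vector on a basis amounts to solving a linear system. For the two-dimensional case this can be done with Cramer's rule; the function name below is illustrative and the basis is a sample.

```python
# Coordinates of a vector on a basis of R^2, via Cramer's rule.
# A sketch for the 2-dimensional case only.

def coordinates_2d(v, b1, b2):
    """Solve a1*b1 + a2*b2 = v for the coordinates (a1, a2)."""
    det = b1[0] * b2[1] - b2[0] * b1[1]
    if det == 0:
        raise ValueError("b1, b2 are linearly dependent: not a basis")
    a1 = (v[0] * b2[1] - b2[0] * v[1]) / det
    a2 = (b1[0] * v[1] - v[0] * b1[1]) / det
    return (a1, a2)

# The coordinates of (5, 4) on the sample basis ((1, 1), (1, -1)):
a1, a2 = coordinates_2d((5, 4), (1, 1), (1, -1))
# The decomposition a1*b1 + a2*b2 reconstructs the vector:
assert (a1 * 1 + a2 * 1, a1 * 1 + a2 * (-1)) == (5, 4)
```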

Vector spaces stem from affine geometry, via the introduction of coordinates in the plane or three-dimensional space. Around 1636, French mathematicians René Descartes and Pierre de Fermat founded analytic geometry by identifying solutions to an equation of two variables with points on a plane curve.^{[18]} To achieve geometric solutions without using coordinates, Bolzano introduced, in 1804, certain operations on points, lines, and planes, which are predecessors of vectors.^{[19]} Möbius (1827) introduced the notion of barycentric coordinates.^{[20]} Bellavitis (1833) introduced an equivalence relation on directed line segments that share the same length and direction which he called equipollence.^{[21]} A Euclidean vector is then an equivalence class of that relation.^{[22]}

Vectors were reconsidered with the presentation of complex numbers by Argand and Hamilton and the inception of quaternions by the latter.^{[23]} They are elements in **R**^{2} and **R**^{4}; treating them using linear combinations goes back to Laguerre in 1867, who also defined systems of linear equations.

In 1857, Cayley introduced the matrix notation which allows for harmonization and simplification of linear maps. Around the same time, Grassmann studied the barycentric calculus initiated by Möbius. He envisaged sets of abstract objects endowed with operations.^{[24]} In his work, the concepts of linear independence and dimension, as well as scalar products, are present. Grassmann's 1844 work also went beyond the framework of vector spaces, since his consideration of multiplication led him to what are today called algebras. Italian mathematician Peano was the first to give the modern definition of vector spaces and linear maps in 1888,^{[25]} although he called them "linear systems".^{[26]} Peano's axiomatization allowed for vector spaces with infinite dimension, but Peano did not develop that theory further. In 1897, Salvatore Pincherle adopted Peano's axioms and made initial inroads into the theory of infinite-dimensional vector spaces.^{[27]}

An important development of vector spaces is due to the construction of function spaces by Henri Lebesgue. This was later formalized by Banach and Hilbert, around 1920.^{[28]} At that time, algebra and the new field of functional analysis began to interact, notably with key concepts such as spaces of *p*-integrable functions and Hilbert spaces.^{[29]}

The first example of a vector space consists of arrows in a fixed plane, starting at one fixed point. This is used in physics to describe forces or velocities.^{[30]} Given any two such arrows, **v** and **w**, the parallelogram spanned by these two arrows contains one diagonal arrow that starts at the origin, too. This new arrow is called the *sum* of the two arrows, and is denoted **v** + **w**. In the special case of two arrows on the same line, their sum is the arrow on this line whose length is the sum or the difference of the lengths, depending on whether the arrows have the same direction. Another operation that can be done with arrows is scaling: given any positive real number *a*, the arrow that has the same direction as **v**, but is dilated or shrunk by multiplying its length by *a*, is called *multiplication* of **v** by *a*. It is denoted *a***v**. When *a* is negative, *a***v** is defined as the arrow pointing in the opposite direction instead.^{[31]}

The following shows a few examples: if *a* = 2, the resulting vector *a***w** has the same direction as **w**, but is stretched to the double length of **w** (the second image). Equivalently, 2**w** is the sum **w** + **w**. Moreover, (−1)**v** = −**v** has the opposite direction and the same length as **v** (blue vector pointing down in the second image).

A second key example of a vector space is provided by pairs of real numbers x and y. The order of the components x and y is significant, so such a pair is also called an ordered pair. Such a pair is written as (*x*, *y*). The sum of two such pairs and the multiplication of a pair with a number is defined as follows:^{[32]}

(*x*_{1}, *y*_{1}) + (*x*_{2}, *y*_{2}) = (*x*_{1} + *x*_{2}, *y*_{1} + *y*_{2}) and *a*(*x*, *y*) = (*ax*, *ay*).

The first example above reduces to this example if an arrow is represented by a pair of Cartesian coordinates of its endpoint.
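The componentwise rules above can be packaged in a small class with overloaded operators; this is a minimal sketch, and the class name `Pair` is illustrative.

```python
# A minimal class for the ordered-pair vector space R^2, with
# componentwise addition and scalar multiplication.

class Pair:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):
        # (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2)
        return Pair(self.x + other.x, self.y + other.y)

    def __rmul__(self, a):
        # a * (x, y) = (a*x, a*y), written with the scalar on the left
        return Pair(a * self.x, a * self.y)

    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)

    def __repr__(self):
        return f"({self.x}, {self.y})"

assert Pair(1, 2) + Pair(3, 4) == Pair(4, 6)
assert 2 * Pair(1, 2) == Pair(2, 4)
```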

The simplest example of a vector space over a field *F* is the field *F* itself with its addition viewed as vector addition and its multiplication viewed as scalar multiplication. More generally, all *n*-tuples (sequences of length *n*) (*a*_{1}, *a*_{2}, …, *a*_{n}) of elements *a*_{i} of *F* form a vector space that is usually denoted *F*^{n} and called a **coordinate space**.^{[33]}
The case *n* = 1 is the above-mentioned simplest example, in which the field *F* is also regarded as a vector space over itself. The case *F* = **R** and *n* = 2 (so **R**^{2}) reduces to the previous example.

The set of complex numbers **C**, numbers that can be written in the form *x* + *iy* for real numbers *x* and *y* where *i* is the imaginary unit, form a vector space over the reals with the usual addition and multiplication: (*x* + *iy*) + (*a* + *ib*) = (*x* + *a*) + *i*(*y* + *b*) and *c* ⋅ (*x* + *iy*) = (*c* ⋅ *x*) + *i*(*c* ⋅ *y*) for real numbers *x*, *y*, *a*, *b* and *c*. The various axioms of a vector space follow from the fact that the same rules hold for complex number arithmetic. The example of complex numbers is essentially the same as (that is, it is *isomorphic* to) the vector space of ordered pairs of real numbers mentioned above: if we think of the complex number *x* + *i* *y* as representing the ordered pair (*x*, *y*) in the complex plane then we see that the rules for addition and scalar multiplication correspond exactly to those in the earlier example.
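This isomorphism can be checked directly: the map *x* + *iy* ↦ (*x*, *y*) carries complex addition and real scaling to the corresponding pair operations. A small sketch on sample values (the helper name is illustrative):

```python
# Check that z -> (Re z, Im z) maps complex addition and scaling by a
# real number to the componentwise pair operations. Sample values
# only; this illustrates the isomorphism, it does not prove it.

def to_pair(z):
    return (z.real, z.imag)

z, w = complex(1, 2), complex(3, -4)
c = 2.5

# Addition is preserved: to_pair(z + w) is the componentwise sum.
assert to_pair(z + w) == (z.real + w.real, z.imag + w.imag)
# Scalar multiplication by a real c is preserved as well.
assert to_pair(c * z) == (c * z.real, c * z.imag)
```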

More generally, field extensions provide another class of examples of vector spaces, particularly in algebra and algebraic number theory: a field *F* containing a smaller field *E* is an *E*-vector space, by the given multiplication and addition operations of *F*.^{[34]} For example, the complex numbers are a vector space over **R**, and any field extension of **Q** is a vector space over **Q**.

Functions from any fixed set Ω to a field *F* also form vector spaces, by performing addition and scalar multiplication pointwise. That is, the sum of two functions *f* and *g* is the function *f* + *g* given by (*f* + *g*)(*w*) = *f*(*w*) + *g*(*w*), and similarly for multiplication. Such function spaces occur in many geometric situations, when Ω is the real line or an interval, or other subsets of **R**. Many notions in topology and analysis, such as continuity, integrability or differentiability, are well-behaved with respect to linearity: sums and scalar multiples of functions possessing such a property still have that property.^{[35]} Therefore, the sets of such functions are vector spaces, whose study belongs to functional analysis.
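The pointwise operations are easy to write down concretely; the sketch below defines them for functions into **R** (the names `fadd` and `fsmul` are illustrative).

```python
# Pointwise addition and scaling of functions into R, the operations
# that make a set of functions a vector space.

def fadd(f, g):
    # (f + g)(x) = f(x) + g(x)
    return lambda x: f(x) + g(x)

def fsmul(a, f):
    # (a * f)(x) = a * f(x)
    return lambda x: a * f(x)

square = lambda x: x * x
double = lambda x: 2 * x

h = fadd(square, fsmul(3, double))  # h(x) = x^2 + 6x
assert h(2) == 16
```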

Systems of homogeneous linear equations are closely tied to vector spaces.^{[36]} For example, the solutions of

*a* + 3*b* + *c* = 0
4*a* + 2*b* + 2*c* = 0

are given by triples with arbitrary *a*, *b* = *a*/2, and *c* = −5*a*/2. They form a vector space: sums and scalar multiples of such triples still satisfy the same ratios of the three variables; thus they are solutions, too. Matrices can be used to condense multiple linear equations as above into one vector equation, namely

*A***x** = **0**,

where *A* is the matrix containing the coefficients of the given equations, **x** is the vector (*a*, *b*, *c*), *A***x** denotes the matrix product, and **0** = (0, 0) is the zero vector. In a similar vein, the solutions of homogeneous *linear differential equations* form vector spaces. For example,

*f*′′(*x*) + 2*f*′(*x*) + *f*(*x*) = 0

yields *f*(*x*) = *ae*^{−*x*} + *bxe*^{−*x*}, where *a* and *b* are arbitrary constants, and *e*^{*x*} is the natural exponential function.
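The closure of the solution set of a homogeneous linear system under addition and scaling can be verified directly; the coefficient matrix below is a sample chosen for illustration.

```python
# Verify that solutions of a homogeneous linear system A x = 0 are
# closed under addition and scalar multiplication. The matrix A is a
# sample; its solutions are the triples (a, a/2, -5a/2).

A = [[1, 3, 1],
     [4, 2, 2]]

def is_solution(x):
    # x solves the system iff every row of A dotted with x is zero.
    return all(sum(a * xi for a, xi in zip(row, x)) == 0 for row in A)

s1 = (2, 1, -5)     # a = 2
s2 = (4, 2, -10)    # a = 4
assert is_solution(s1) and is_solution(s2)

# Their sum and any scalar multiple are again solutions:
s_sum = tuple(u + v for u, v in zip(s1, s2))
s_scaled = tuple(3 * u for u in s1)
assert is_solution(s_sum) and is_solution(s_scaled)
```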

The relation of two vector spaces can be expressed by *linear maps* or *linear transformations*. They are functions that reflect the vector space structure, that is, they preserve sums and scalar multiplication:

*f*(**v** + **w**) = *f*(**v**) + *f*(**w**) and *f*(*a* · **v**) = *a* · *f*(**v**)

for all **v** and **w** in V, and all *a* in F.^{[37]}

An *isomorphism* is a linear map *f* : *V* → *W* such that there exists an inverse map *g* : *W* → *V*, which is a map such that the two possible compositions *f* ∘ *g* : *W* → *W* and *g* ∘ *f* : *V* → *V* are identity maps. Equivalently, *f* is both one-to-one (injective) and onto (surjective).^{[38]} If there exists an isomorphism between *V* and *W*, the two spaces are said to be *isomorphic*; they are then essentially identical as vector spaces, since all identities holding in *V* are, via *f*, transported to similar ones in *W*, and vice versa via *g*.

For example, the arrows in the plane and the ordered pairs of numbers vector spaces in the introduction above (