Infinite sum

In mathematics, a **series** is, roughly speaking, an addition of infinitely many quantities, one after the other.^{[1]} The study of series is a major part of calculus and its generalization, mathematical analysis. Series are used in most areas of mathematics, even for studying finite structures (such as in combinatorics) through generating functions. The mathematical properties of infinite series make them widely applicable in other quantitative disciplines such as physics, computer science, statistics and finance.

For a long time, the idea that a potentially infinite summation could produce a finite result was considered paradoxical. This paradox was resolved using the concept of a limit during the 17th century. Zeno's paradox of Achilles and the tortoise illustrates this counterintuitive property of infinite sums: Achilles runs after a tortoise, but when he reaches the position of the tortoise at the beginning of the race, the tortoise has reached a second position; when he reaches this second position, the tortoise is at a third position, and so on. Zeno concluded that Achilles could *never* reach the tortoise, and thus that movement does not exist. Zeno divided the race into infinitely many sub-races, each requiring a finite amount of time, so that the total time for Achilles to catch the tortoise is given by a series. The resolution of the paradox is that, although the series has an infinite number of terms, it has a finite sum, which gives the time necessary for Achilles to catch up with the tortoise.

In modern terminology, any (ordered) infinite sequence $(a_1, a_2, a_3, \ldots)$ of terms (that is, numbers, functions, or anything that can be added) defines a series, which is the operation of adding the *a*_{i} one after the other. To emphasize that there are an infinite number of terms, a series may be called an **infinite series**. Such a series is represented (or denoted) by an expression like

$$a_1 + a_2 + a_3 + \cdots,$$

or, using the summation sign,

$$\sum_{i=1}^{\infty} a_i.$$
The infinite sequence of additions implied by a series cannot be effectively carried on (at least in a finite amount of time). However, if the set to which the terms and their finite sums belong has a notion of limit, it is sometimes possible to assign a value to a series, called the sum of the series. This value is the limit as *n* tends to infinity (if the limit exists) of the finite sums of the *n* first terms of the series, which are called the *n*th **partial sums** of the series. That is,

$$\sum_{i=1}^{\infty} a_i = \lim_{n\to\infty} \sum_{i=1}^{n} a_i.$$
When this limit exists, one says that the series is **convergent** or **summable**, or that the sequence is **summable**. In this case, the limit is called the **sum** of the series. Otherwise, the series is said to be **divergent**.^{[2]}
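The limit of partial sums can be watched numerically. The following Python sketch (the helper name `partial_sums` is illustrative) computes the partial sums of the convergent series $\sum_{i\ge 1} 1/2^i$, whose sum is 1:

```python
# Illustrative sketch: the partial sums s_n of the series 1/2 + 1/4 + 1/8 + ...
# approach the sum of the series, which is 1.
def partial_sums(terms):
    """Yield the running partial sums s_1, s_2, ... of an iterable of terms."""
    total = 0.0
    for t in terms:
        total += t
        yield total

# First 20 partial sums of 1/2 + 1/4 + 1/8 + ...
sums = list(partial_sums(1 / 2**i for i in range(1, 21)))
print(sums[-1])  # very close to the limit 1
```

Each partial sum is $1 - 2^{-n}$, so the sequence of partial sums converges to 1, and the series is summable in the sense just defined.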

The notation $\sum_{i=1}^{\infty} a_i$ denotes both the series—that is the implicit process of adding the terms one after the other indefinitely—and, if the series is convergent, the sum of the series—the result of the process. This is a generalization of the similar convention of denoting by $a + b$ both the addition—the process of adding—and its result—the *sum* of *a* and *b*.

Commonly, the terms of a series come from a ring, often the field of the real numbers or the field of the complex numbers. In this case, the set of all series is itself a ring (and even an associative algebra), in which the addition consists of adding the series term by term, and the multiplication is the Cauchy product.

An infinite series or simply a series is an infinite sum, represented by an infinite expression of the form^{[3]}

$$a_0 + a_1 + a_2 + \cdots,$$

where $(a_0, a_1, a_2, \ldots)$ is any ordered sequence of terms, such as numbers, functions, or anything else that can be added (for instance elements of any abelian group in abstract algebra). This is an expression that is obtained from the list of terms by laying them side by side, and conjoining them with the symbol "+". A series may also be represented by using summation notation, such as

$$\sum_{n=0}^{\infty} a_n.$$
If an abelian group *A* of terms has a concept of limit (e.g., if it is a metric space), then some series, the convergent series, can be interpreted as having a value in *A*, called the *sum of the series*. This includes the common cases from calculus, in which the group is the field of real numbers or the field of complex numbers. Given a series $\sum_{n=0}^{\infty} a_n$, its *k*th **partial sum** is^{[2]}

$$s_k = \sum_{n=0}^{k} a_n = a_0 + a_1 + \cdots + a_k.$$

By definition, the series *converges* to the limit *L* (or simply *sums* to *L*), if the sequence of its partial sums has a limit *L*.^{[3]} In this case, one usually writes

$$L = \sum_{n=0}^{\infty} a_n.$$
A series is said to be *convergent* if it converges to some limit, or *divergent* when it does not. The value of this limit, if it exists, is then the value of the series.

A series Σ*a*_{n} is said to converge or to *be convergent* when the sequence (*s*_{k}) of partial sums has a finite limit. If the limit of *s*_{k} is infinite or does not exist, the series is said to diverge.^{[4]}^{[2]} When the limit of partial sums exists, it is called the value (or sum) of the series:

$$\sum_{n=0}^{\infty} a_n = \lim_{k\to\infty} s_k.$$

An easy way that an infinite series can converge is if all the *a*_{n} are zero for *n* sufficiently large. Such a series can be identified with a finite sum, so it is only infinite in a trivial sense.

Working out the properties of the series that converge, even if infinitely many terms are nonzero, is the essence of the study of series. Consider the example

$$1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots + \frac{1}{2^n} + \cdots.$$
It is possible to "visualize" its convergence on the real number line: we can imagine a line of length 2, with successive segments marked off of lengths 1, 1/2, 1/4, etc. There is always room to mark the next segment, because the amount of line remaining is always the same as the last segment marked: When we have marked off 1/2, we still have a piece of length 1/2 unmarked, so we can certainly mark the next 1/4. This argument does not prove that the sum is *equal* to 2 (although it is), but it does prove that it is *at most* 2. In other words, the series has an upper bound. Given that the series converges, proving that it is equal to 2 requires only elementary algebra. If the series is denoted *S*, it can be seen that

$$\frac{S}{2} = \frac{1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots}{2} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = S - 1.$$

Therefore,

$$S = 2.$$
The idiom can be extended to other, equivalent notions of series. For instance, a recurring decimal, as in

$$x = 0.111\ldots,$$

encodes the series

$$\sum_{n=1}^{\infty} \frac{1}{10^n}.$$
Since these series always converge to real numbers (because of what is called the completeness property of the real numbers), to talk about the series in this way is the same as to talk about the numbers for which they stand. In particular, the decimal expansion 0.111... can be identified with 1/9. This leads to an argument that 9 × 0.111... = 0.999... = 1, which only relies on the fact that the limit laws for series preserve the arithmetic operations; for more detail on this argument, see 0.999....
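The identification of 0.111... with 1/9 can be checked with exact rational arithmetic. The following Python sketch (the helper name is illustrative) computes partial sums of $\sum 1/10^n$ as fractions and measures their distance to 1/9:

```python
# Illustrative sketch: partial sums of 1/10 + 1/100 + ... approach 1/9 exactly,
# so the decimal expansion 0.111... stands for the number 1/9.
from fractions import Fraction

def repeating_decimal_partial_sum(digit, n):
    """Exact partial sum digit/10 + digit/100 + ... + digit/10^n."""
    return sum(Fraction(digit, 10**k) for k in range(1, n + 1))

s = repeating_decimal_partial_sum(1, 10)   # 0.1111111111 as an exact fraction
gap = Fraction(1, 9) - s                   # distance to the limit 1/9
print(s, gap)
```

The gap to 1/9 shrinks by a factor of 10 with each extra digit, which is the convergence the completeness property guarantees.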

- A *geometric series* is one where each successive term is produced by multiplying the previous term by a constant number (called the common ratio in this context). For example:^{[2]}

$$1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = \sum_{n=0}^{\infty} \frac{1}{2^n} = 2.$$
In general, a geometric series with initial term $a$ and common ratio $r$,

$$\sum_{n=0}^{\infty} a r^n,$$

converges if and only if $|r| < 1$, in which case it converges to $\frac{a}{1-r}$.
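The closed form for the geometric sum can be checked numerically. A minimal Python sketch (helper name illustrative), comparing a long partial sum against $a/(1-r)$:

```python
# Illustrative sketch: for |r| < 1, a long partial sum of a + a*r + a*r^2 + ...
# agrees with the closed form a / (1 - r).
def geometric_partial_sum(a, r, n):
    """Sum of the first n terms a * r^k, k = 0 .. n-1."""
    return sum(a * r**k for k in range(n))

a, r = 3.0, 0.5
approx = geometric_partial_sum(a, r, 60)
exact = a / (1 - r)        # 6.0 by the closed form
print(approx, exact)
```

With *r* = 1/2 the tail halves at each step, so 60 terms already agree with the closed form to machine precision.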

- The *harmonic series* is the series^{[5]}

$$1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \cdots = \sum_{n=1}^{\infty} \frac{1}{n}.$$
The harmonic series is divergent.
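The divergence is very slow: the partial sums $H_n$ grow roughly like $\ln n$ (a standard fact, quoted here without proof). A short Python sketch:

```python
# Illustrative sketch: partial sums of the harmonic series keep growing without
# bound, but only logarithmically (H_n is close to ln n plus a constant).
import math

def harmonic(n):
    """n-th partial sum H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 1000, 100000):
    print(n, harmonic(n), math.log(n))
```

The difference $H_n - \ln n$ settles toward the Euler–Mascheroni constant, about 0.5772, so the partial sums exceed any bound only for astronomically large *n*.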

- An *alternating series* is a series where terms alternate signs. Examples:

$$1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \cdots = \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n} = \ln 2,$$

the alternating harmonic series, and

$$-1 + \frac{1}{3} - \frac{1}{5} + \frac{1}{7} - \cdots = \sum_{n=1}^{\infty} \frac{(-1)^n}{2n-1} = -\frac{\pi}{4}.$$

- A *telescoping series*

$$\sum_{n=1}^{\infty} (b_n - b_{n+1})$$

converges if the sequence *b*_{n} converges to a limit *L* as *n* goes to infinity. The value of the series is then *b*_{1} − *L*.

- An *arithmetico-geometric series* is a generalization of the geometric series, which has coefficients of the common ratio equal to the terms in an arithmetic sequence. Example:

$$3 + \frac{5}{2} + \frac{7}{4} + \frac{9}{8} + \frac{11}{16} + \cdots = \sum_{n=0}^{\infty} \frac{3 + 2n}{2^n}.$$

- The Dirichlet series

$$\sum_{n=1}^{\infty} \frac{1}{n^p}$$

converges for *p* > 1 and diverges for *p* ≤ 1, which can be shown with the integral criterion described below in convergence tests. As a function of *p*, the sum of this series is Riemann's zeta function.

- Hypergeometric series and their generalizations (such as basic hypergeometric series and elliptic hypergeometric series) frequently appear in integrable systems and mathematical physics.^{[6]}

- There are some elementary series whose convergence is not yet known/proven. For example, it is unknown whether the Flint Hills series

$$\sum_{n=1}^{\infty} \frac{1}{n^3 \sin^2 n}$$

converges or not. The convergence depends on how well $\pi$ can be approximated with rational numbers (which is unknown as of yet). More specifically, the values of *n* with large numerical contributions to the sum are the numerators of the continued fraction convergents of $\pi$, a sequence beginning with 1, 3, 22, 333, 355, 103993, ... (sequence A046947 in the OEIS). These are integers *n* that are close to $m\pi$ for some integer *m*, so that $\sin n$ is close to 0 and its reciprocal is large.


Partial summation takes as input a sequence, (*a*_{n}), and gives as output another sequence, (*S*_{N}). It is thus a unary operation on sequences. Further, this function is linear, and thus is a linear operator on the vector space of sequences, denoted Σ. The inverse operator is the finite difference operator, denoted Δ. These behave as discrete analogues of integration and differentiation, only for series (functions of a natural number) instead of functions of a real variable. For example, the sequence (1, 1, 1, ...) has series (1, 2, 3, 4, ...) as its partial summation, which is analogous to the fact that

$$\int_0^x 1 \, \mathrm{d}y = x.$$
In computer science, it is known as prefix sum.
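Partial summation and the finite difference operator can be demonstrated directly on lists; in Python the standard library already provides the prefix sum as `itertools.accumulate` (the inverse helper below is illustrative):

```python
# Illustrative sketch: partial summation (prefix sum) and its inverse, the
# finite difference operator, acting on plain Python lists.
from itertools import accumulate

def finite_difference(s):
    """Inverse of prefix sum: recover a_1 = s_1 and a_n = s_n - s_(n-1)."""
    return [s[0]] + [s[i] - s[i - 1] for i in range(1, len(s))]

a = [1, 1, 1, 1]
s = list(accumulate(a))        # prefix sums: [1, 2, 3, 4]
print(s, finite_difference(s)) # round-trips back to [1, 1, 1, 1]
```

The round trip mirrors the relationship between Σ and Δ described above: applying the difference operator to the partial sums recovers the original sequence.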

Series are classified not only by whether they converge or diverge, but also by the properties of the terms a_{n} (absolute or conditional convergence); type of convergence of the series (pointwise, uniform); the class of the term a_{n} (whether it is a real number, arithmetic progression, trigonometric function); etc.

When *a*_{n} is a non-negative real number for every *n*, the sequence *S*_{N} of partial sums is non-decreasing. It follows that a series Σ*a*_{n} with non-negative terms converges if and only if the sequence *S*_{N} of partial sums is bounded.

For example, the series

$$\sum_{n=1}^{\infty} \frac{1}{n^2}$$

is convergent, because the inequality

$$\frac{1}{n^2} \le \frac{1}{n-1} - \frac{1}{n}, \quad n \ge 2,$$

and a telescopic sum argument imply that the partial sums are bounded by 2. The exact value of the original series is $\frac{\pi^2}{6}$; finding it is the Basel problem.

When the terms of a series are grouped, no reordering occurs, so the Riemann series theorem does not apply. The partial sums of the grouped series form a subsequence of the partial sums of the original series, so if the original series converges, the grouped series converges to the same sum. For a divergent series this can fail: for example, 1 − 1 + 1 − 1 + ... grouped in pairs gives the series 0 + 0 + 0 + ..., which is convergent. Conversely, divergence of the grouped series implies that the original series diverges, which is sometimes useful, as in Oresme's proof of the divergence of the harmonic series.
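The example of 1 − 1 + 1 − 1 + ... (Grandi's series) grouped in pairs can be made concrete in a few lines of Python:

```python
# Illustrative sketch: Grandi's series 1 - 1 + 1 - 1 + ... has partial sums that
# oscillate between 1 and 0 (divergent); grouping its terms in pairs gives the
# series 0 + 0 + 0 + ..., whose partial sums are constantly 0 (convergent).
from itertools import accumulate

terms = [(-1)**n for n in range(10)]                        # 1, -1, 1, -1, ...
grouped = [terms[i] + terms[i + 1] for i in range(0, len(terms), 2)]

print(list(accumulate(terms)))    # oscillates: 1, 0, 1, 0, ...
print(list(accumulate(grouped)))  # constant:   0, 0, 0, 0, 0
```

Grouping can thus turn a divergent series into a convergent one, but never the other way around, in line with the subsequence argument above.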

A series

$$\sum_{n=0}^{\infty} a_n$$

*converges absolutely* if the series of absolute values

$$\sum_{n=0}^{\infty} |a_n|$$

converges. This is sufficient to guarantee not only that the original series converges to a limit, but also that any reordering of it converges to the same limit.

A series of real or complex numbers is said to be **conditionally convergent** (or **semi-convergent**) if it is convergent but not absolutely convergent. A famous example is the alternating series

$$\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots,$$

which is convergent (and its sum is equal to $\ln 2$), but the series formed by taking the absolute value of each term is the divergent harmonic series. The Riemann series theorem says that any conditionally convergent series can be reordered to make a divergent series, and moreover, if the *a*_{n} are real and *S* is any real number, one can find a reordering so that the reordered series converges with sum equal to *S*.

Abel's test is an important tool for handling semi-convergent series. If a series has the form

$$\sum a_n = \sum \lambda_n b_n,$$

where the partial sums $B_N = b_0 + \cdots + b_N$ are bounded, $\lambda_n$ has bounded variation, and $\lim_{n\to\infty} \lambda_n B_n$ exists:

$$\sup_N \left| \sum_{n=0}^{N} b_n \right| < \infty, \qquad \sum_n |\lambda_{n+1} - \lambda_n| < \infty, \qquad \lambda_n B_n \ \text{converges},$$

then the series $\sum a_n$ is convergent. This applies to the point-wise convergence of many trigonometric series, as in

$$\sum_{n=2}^{\infty} \frac{\sin(nx)}{\ln n}$$

with $0 < x < 2\pi$. Abel's method consists in writing $b_{n+1} = B_{n+1} - B_n$, and in performing a transformation similar to integration by parts (called summation by parts), that relates the given series $\sum \lambda_n b_n$ to the absolutely convergent series

$$\sum_n (\lambda_n - \lambda_{n+1}) B_n.$$
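The summation-by-parts identity behind Abel's method, $\sum_{n} \lambda_n b_n = \lambda_N B_N - \sum_{n<N} (\lambda_{n+1} - \lambda_n) B_n$, can be verified numerically. A Python sketch (0-based indexing, random data; purely illustrative):

```python
# Illustrative sketch: numerically verify the finite summation-by-parts identity
#   sum_{n=0}^{N-1} lam_n * b_n
#     = lam_{N-1} * B_{N-1} - sum_{n=0}^{N-2} (lam_{n+1} - lam_n) * B_n,
# where B_n are the partial sums of the b's.
import random

random.seed(0)
N = 50
lam = [random.random() for _ in range(N)]
b = [random.random() for _ in range(N)]

B = []                       # partial sums B_0, B_1, ... of the b's
total = 0.0
for x in b:
    total += x
    B.append(total)

lhs = sum(lam[n] * b[n] for n in range(N))
rhs = lam[N - 1] * B[N - 1] - sum((lam[n + 1] - lam[n]) * B[n] for n in range(N - 1))
print(abs(lhs - rhs))        # agrees up to floating-point rounding
```

The identity is exact; only floating-point rounding separates the two sides. It is the discrete analogue of integration by parts mentioned above.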

The evaluation of truncation errors is an important procedure in numerical analysis (especially validated numerics and computer-assisted proof).

When the conditions of the alternating series test are satisfied by $s := \sum_{m=0}^{\infty} (-1)^m u_m$, there is an exact error evaluation.^{[7]} Set $s_n$ to be the partial sum $s_n := \sum_{m=0}^{n} (-1)^m u_m$ of the given alternating series $s$. Then the next inequality holds:

$$|s - s_n| \le u_{n+1}.$$

Taylor's theorem is a statement that includes the evaluation of the error term when the Taylor series is truncated.

By using the ratio, we can obtain the evaluation of the error term when the hypergeometric series is truncated.^{[8]}

For the matrix exponential:

$$\exp(X) := \sum_{k=0}^{\infty} \frac{1}{k!} X^k, \quad X \in \mathbb{C}^{n \times n},$$
the following error evaluation holds (scaling and squaring method):^{[9]}^{[10]}^{[11]}

There exist many tests that can be used to determine whether particular series converge or diverge.

- *n*-th term test: If $\lim_{n\to\infty} a_n \ne 0$, then the series diverges; if $\lim_{n\to\infty} a_n = 0$, then the test is inconclusive.
- Comparison test 1 (see Direct comparison test): If $\sum b_n$ is an absolutely convergent series such that $|a_n| \le C |b_n|$ for some number *C* and for sufficiently large *n*, then $\sum a_n$ converges absolutely as well. If $\sum |b_n|$ diverges, and $|a_n| \ge |b_n|$ for all sufficiently large *n*, then $\sum a_n$ also fails to converge absolutely (though it could still be conditionally convergent, for example, if the $a_n$ alternate in sign).
- Comparison test 2 (see Limit comparison test): If $\sum b_n$ is an absolutely convergent series such that $\left|\tfrac{a_{n+1}}{a_n}\right| \le \left|\tfrac{b_{n+1}}{b_n}\right|$ for sufficiently large *n*, then $\sum a_n$ converges absolutely as well. If $\sum |b_n|$ diverges, and $\left|\tfrac{a_{n+1}}{a_n}\right| \ge \left|\tfrac{b_{n+1}}{b_n}\right|$ for all sufficiently large *n*, then $\sum a_n$ also fails to converge absolutely (though it could still be conditionally convergent, for example, if the $a_n$ alternate in sign).
- Ratio test: If there exists a constant $C < 1$ such that $\left|\tfrac{a_{n+1}}{a_n}\right| \le C$ for all sufficiently large *n*, then $\sum a_n$ converges absolutely. When the ratio is less than 1, but not less than a constant less than 1, convergence is possible but this test does not establish it.
- Root test: If there exists a constant $C < 1$ such that $|a_n|^{1/n} \le C$ for all sufficiently large *n*, then $\sum a_n$ converges absolutely.
- Integral test: if $f(x)$ is a positive monotone decreasing function defined on the interval $[1, \infty)$ with $f(n) = a_n$ for all *n*, then $\sum a_n$ converges if and only if the integral $\int_1^{\infty} f(x)\,\mathrm{d}x$ is finite.
- Cauchy's condensation test: If $a_n$ is non-negative and non-increasing, then the two series $\sum a_n$ and $\sum 2^k a_{2^k}$ are of the same nature: both convergent, or both divergent.
- Alternating series test: A series of the form $\sum (-1)^n a_n$ (with $a_n > 0$) is called *alternating*. Such a series converges if the sequence $a_n$ is monotone decreasing and converges to 0. The converse is in general not true.
- For some specific types of series there are more specialized convergence tests, for instance for Fourier series there is the Dini test.
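The ratio test can be illustrated numerically. The following Python sketch (a heuristic, not a proof: it merely samples the ratio at a large index, whereas the test requires a bound for all sufficiently large *n*) compares a geometric-type series with the harmonic series:

```python
# Illustrative sketch of the ratio test as a numerical heuristic: estimate the
# limiting ratio |a(n+1)/a(n)| by evaluating it at a single large index.
def ratio_estimate(a, n=1000):
    """Approximate the limiting ratio |a(n+1)/a(n)| at a large index n."""
    return abs(a(n + 1) / a(n))

print(ratio_estimate(lambda n: 1 / 2**n))   # well below 1: converges
print(ratio_estimate(lambda n: 1 / n))      # approaches 1: inconclusive
```

For $a_n = 1/2^n$ the ratio is exactly 1/2, so the test gives absolute convergence; for the divergent harmonic series the ratio tends to 1, illustrating the inconclusive case described above.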

A series of real- or complex-valued functions

$$\sum_{n=0}^{\infty} f_n(x)$$

converges pointwise on a set *E*, if the series converges for each *x* in *E* as an ordinary series of real or complex numbers. Equivalently, the partial sums

$$s_N(x) = \sum_{n=0}^{N} f_n(x)$$

converge to *ƒ*(*x*) as *N* → ∞ for each *x* ∈ *E*.

A stronger notion of convergence of a series of functions is the uniform convergence. A series converges uniformly if it converges pointwise to the function *ƒ*(*x*), and the error in approximating the limit by the *N*th partial sum,

$$|s_N(x) - f(x)|,$$

can be made arbitrarily small *independently* of *x* by choosing a sufficiently large *N*.

Uniform convergence is desirable for a series because many properties of the terms of the series are then retained by the limit. For example, if a series of continuous functions converges uniformly, then the limit function is also continuous. Similarly, if the *ƒ*_{n} are integrable on a closed and bounded interval *I* and converge uniformly, then the series is also integrable on *I* and can be integrated term-by-term. Tests for uniform convergence include the Weierstrass' M-test, Abel's uniform convergence test, Dini's test, and the Cauchy criterion.
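The idea behind the Weierstrass M-test can be sketched numerically. For the series $\sum \cos(nx)/n^2$ (an example chosen here; the cutoffs below are arbitrary), each term is dominated by $1/n^2$, so the tail $\sum_{n>N} 1/n^2$ bounds the approximation error uniformly in *x*:

```python
# Illustrative sketch of the Weierstrass M-test: |cos(n x)/n^2| <= 1/n^2, so the
# tail sum of 1/n^2 bounds the truncation error at every x simultaneously.
import math

def tail_bound(N, cutoff=10**6):
    """Numerical bound on sum_{n > N} 1/n^2 (truncated at an arbitrary cutoff)."""
    return sum(1.0 / n**2 for n in range(N + 1, cutoff))

def partial(x, N):
    """Partial sum of cos(n x) / n^2 up to n = N."""
    return sum(math.cos(n * x) / n**2 for n in range(1, N + 1))

N = 100
bound = tail_bound(N)
errs = [abs(partial(x, 10000) - partial(x, N)) for x in (0.0, 0.5, 1.0, 2.0)]
print(max(errs), bound)   # every sampled error lies below the uniform bound
```

Because the dominating series $\sum 1/n^2$ converges, the same *N* works for every *x*, which is exactly the uniformity that preserves continuity of the limit.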

More sophisticated types of convergence of a series of functions can also be defined. In measure theory, for instance, a series of functions converges almost everywhere if it converges pointwise except on a certain set of measure zero. Other modes of convergence depend on a different metric space structure on the space of functions under consideration. For instance, a series of functions **converges in mean** on a set *E* to a limit function *ƒ* provided

$$\int_E \left| s_N(x) - f(x) \right|^2 \, \mathrm{d}x \to 0$$

as *N* → ∞.

A **power series** is a series of the form

$$\sum_{n=0}^{\infty} a_n (x - c)^n.$$

The Taylor series at a point *c* of a function is a power series that, in many cases, converges to the function in a neighborhood of *c*. For example, the series

$$\sum_{n=0}^{\infty} \frac{x^n}{n!}$$

is the Taylor series of $e^x$ at the origin and converges to it for every *x*.
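The convergence of the partial sums of this Taylor series can be checked against the built-in exponential. A minimal Python sketch (helper name illustrative):

```python
# Illustrative sketch: a 20-term truncation of the Taylor series sum x^n / n!
# already matches math.exp to near machine precision for moderate x.
import math

def exp_taylor(x, terms=20):
    """Truncated Taylor series of e^x at the origin."""
    total, term = 0.0, 1.0       # term holds x^n / n!
    for n in range(terms):
        total += term
        term *= x / (n + 1)      # advance x^n/n! -> x^(n+1)/(n+1)!
    return total

print(exp_taylor(1.0), math.exp(1.0))
```

Because the factorial in the denominator eventually dominates any power of *x*, the radius of convergence is infinite, as stated above.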

Unless it converges only at *x*=*c*, such a series converges on a certain open disc of convergence centered at the point *c* in the complex plane, and may also converge at some of the points of the boundary of the disc. The radius of this disc is known as the radius of convergence, and can in principle be determined from the asymptotics of the coefficients *a*_{n}. The convergence is uniform on closed and bounded (that is, compact) subsets of the interior of the disc of convergence: to wit, it is uniformly convergent on compact sets.

Historically, mathematicians such as Leonhard Euler operated liberally with infinite series, even if they were not convergent. When calculus was put on a sound and correct foundation in the nineteenth century, rigorous proofs of the convergence of series were always required.

While many uses of power series refer to their sums, it is also possible to treat power series as *formal sums*, meaning that no addition operations are actually performed, and the symbol "+" is an abstract symbol of conjunction which is not necessarily interpreted as corresponding to addition. In this setting, the sequence of coefficients itself is of interest, rather than the convergence of the series. Formal power series are used in combinatorics to describe and study sequences that are otherwise difficult to handle, for example, using the method of generating functions. The Hilbert–Poincaré series is a formal power series used to study graded algebras.

Even if the limit of the power series is not considered, if the terms support appropriate structure then it is possible to define operations such as addition, multiplication, derivative, antiderivative for power series "formally", treating the symbol "+" as if it corresponded to addition. In the most common setting, the terms come from a commutative ring, so that the formal power series can be added term-by-term and multiplied via the Cauchy product. In this case the algebra of formal power series is the total algebra of the monoid of natural numbers over the underlying term ring.^{[12]} If the underlying term ring is a differential algebra, then the algebra of formal power series is also a differential algebra, with differentiation performed term-by-term.

Laurent series generalize power series by admitting terms into the series with negative as well as positive exponents. A Laurent series is thus any series of the form

$$\sum_{n=-\infty}^{\infty} a_n x^n.$$
If such a series converges, then in general it does so in an annulus rather than a disc, and possibly some boundary points. The series converges uniformly on compact subsets of the interior of the annulus of convergence.

A Dirichlet series is one of the form

$$\sum_{n=1}^{\infty} \frac{a_n}{n^s},$$

where *s* is a complex number. For example, if all *a*_{n} are equal to 1, then the Dirichlet series is the Riemann zeta function

$$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}.$$
Like the zeta function, Dirichlet series in general play an important role in analytic number theory. Generally a Dirichlet series converges if the real part of *s* is greater than a number called the abscissa of convergence. In many cases, a Dirichlet series can be extended to an analytic function outside the domain of convergence by analytic continuation. For example, the Dirichlet series for the zeta function converges absolutely when Re(*s*) > 1, but the zeta function can be extended to a holomorphic function defined on $\mathbb{C} \setminus \{1\}$ with a simple pole at 1.

This series can be directly generalized to general Dirichlet series.

A series of functions in which the terms are trigonometric functions is called a **trigonometric series**:

$$\frac{1}{2} A_0 + \sum_{n=1}^{\infty} \left( A_n \cos nx + B_n \sin nx \right).$$
The most important example of a trigonometric series is the Fourier series of a function.

Greek mathematician Archimedes produced the first known summation of an infinite series with a method that is still used in the area of calculus today. He used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave a remarkably accurate approximation of π.^{[13]}^{[14]}

Mathematicians from the Kerala school were studying infinite series c. 1350 CE.^{[15]}

In the 17th century, James Gregory worked in the new decimal system on infinite series and published several Maclaurin series. In 1715, a general method for constructing the Taylor series for all functions for which they exist was provided by Brook Taylor. In the 18th century, Leonhard Euler developed the theory of hypergeometric series and q-series.

The investigation of the validity of infinite series is considered to begin with Gauss in the 19th century. Euler had already considered the hypergeometric series

$$1 + \frac{\alpha\beta}{1\cdot\gamma}x + \frac{\alpha(\alpha+1)\beta(\beta+1)}{1\cdot 2\cdot \gamma(\gamma+1)}x^2 + \cdots,$$

on which Gauss published a memoir in 1812. It established simpler criteria of convergence, and the questions of remainders and the range of convergence.

Cauchy (1821) insisted on strict tests of convergence; he showed that if two series are convergent their product is not necessarily so, and with him begins the discovery of effective criteria. The terms *convergence* and *divergence* had