Calculates a sum containing a finite number of terms (for infinite series, see nsum()). The terms will be converted to mpmath numbers. For len(terms) > 2, this function is generally faster and produces more accurate results than the built-in Python function sum().
>>> from mpmath import *
>>> mp.dps = 15
>>> fsum([1, 2, 0.5, 7])
mpf('10.5')
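To illustrate the accuracy claim, here is a small sketch (the values are chosen for illustration, and the exact outputs assume fsum() performs a single final rounding at the default precision set above) in which stepwise mpf addition loses the small term to intermediate rounding, while fsum() retains it:
>>> xs = [mpf(10)**20, 1, -mpf(10)**20]
>>> sum(xs)
mpf('0.0')
>>> fsum(xs)
mpf('1.0')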
With squared=True each term is squared, and with absolute=True the absolute value of each term is used.
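For instance (an illustrative sketch):
>>> fsum([1, -2, 3], absolute=True)
mpf('6.0')
>>> fsum([1, -2, 3], squared=True)
mpf('14.0')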
Computes the sum
S = \sum_{k=a}^b f(k)
where (a, b) = interval, and where a = -\infty and/or b = \infty are allowed. Two examples of infinite series that can be summed by nsum(), where the first converges rapidly and the second converges slowly, are:
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> nsum(lambda n: 1/fac(n), [0, inf])
2.71828182845905
>>> nsum(lambda n: 1/n**2, [1, inf])
1.64493406684823
When appropriate, nsum() applies convergence acceleration to accurately estimate the sums of slowly convergent series. If the sum is finite, nsum() currently does not attempt to perform any extrapolation, and simply calls fsum().
Options
The keyword arguments tol, method, maxterms, steps and verbose respectively control the target accuracy, the choice of summation algorithm (see below), the maximum number of terms used, the sequence of partial-sum sizes tried, and diagnostic output. The same options are accepted by limit().
Methods
Unfortunately, an algorithm that can efficiently sum any infinite series does not exist. nsum() implements several different algorithms that each work well in different cases. The method keyword argument selects a method.
The default method is 'r+s', i.e. both Richardson extrapolation and the Shanks transformation are attempted. A slower method that handles more cases is 'r+s+e'. For very high precision summation, or if the summation needs to be fast (for example if multiple sums need to be evaluated), it is a good idea to investigate which method works best and use only that.
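For example, a method can be selected explicitly (a small sketch; since 'r+s' is the default, this matches the evaluation of the same series above):
>>> nsum(lambda n: 1/n**2, [1, inf], method='r+s')
1.64493406684823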
Basic examples
A finite sum:
>>> nsum(lambda k: 1/k, [1, 6])
2.45
Summation of a series going to negative infinity and a doubly infinite series:
>>> nsum(lambda k: 1/k**2, [-inf, -1])
1.64493406684823
>>> nsum(lambda k: 1/(1+k**2), [-inf, inf])
3.15334809493716
nsum() handles sums of complex numbers:
>>> nsum(lambda k: (0.5+0.25j)**k, [0, inf])
(1.6 + 0.8j)
The following sum converges very rapidly, so it is most efficient to sum it by disabling convergence acceleration:
>>> mp.dps = 1000
>>> a = nsum(lambda k: -(-1)**k * k**2 / fac(2*k), [1, inf],
... method='direct')
>>> b = (cos(1)+sin(1))/4
>>> abs(a-b) < mpf('1e-998')
True
Examples with Richardson extrapolation
Richardson extrapolation works well for sums over rational functions, as well as their alternating counterparts:
>>> mp.dps = 50
>>> nsum(lambda k: 1 / k**3, [1, inf],
... method='richardson')
1.2020569031595942853997381615114499907649862923405
>>> zeta(3)
1.2020569031595942853997381615114499907649862923405
>>> nsum(lambda n: (n + 3)/(n**3 + n**2), [1, inf],
... method='richardson')
2.9348022005446793094172454999380755676568497036204
>>> pi**2/2-2
2.9348022005446793094172454999380755676568497036204
>>> nsum(lambda k: (-1)**k / k**3, [1, inf],
... method='richardson')
-0.90154267736969571404980362113358749307373971925537
>>> -3*zeta(3)/4
-0.90154267736969571404980362113358749307373971925538
Examples with Shanks transformation
The Shanks transformation works well for geometric series and typically provides excellent acceleration for Taylor series near the border of their disk of convergence. Here we apply it to a series for \log(2), which can be seen as the Taylor series for \log(1+x) with x = 1:
>>> nsum(lambda k: -(-1)**k/k, [1, inf],
... method='shanks')
0.69314718055994530941723212145817656807550013436025
>>> log(2)
0.69314718055994530941723212145817656807550013436025
Here we apply it to a slowly convergent geometric series:
>>> nsum(lambda k: mpf('0.995')**k, [0, inf],
... method='shanks')
200.0
Finally, Shanks’ method works very well for alternating series where f(k) = (-1)^k g(k), and often does so regardless of the exact decay rate of g(k):
>>> mp.dps = 15
>>> nsum(lambda k: (-1)**(k+1) / k**1.5, [1, inf],
... method='shanks')
0.765147024625408
>>> (2-sqrt(2))*zeta(1.5)/2
0.765147024625408
The following slowly convergent alternating series has no known closed-form value. Evaluating the sum a second time at higher precision indicates that the value is probably correct:
>>> nsum(lambda k: (-1)**k / log(k), [2, inf],
... method='shanks')
0.924299897222939
>>> mp.dps = 30
>>> nsum(lambda k: (-1)**k / log(k), [2, inf],
... method='shanks')
0.92429989722293885595957018136
Examples with Euler-Maclaurin summation
The sum in the following example has the wrong rate of convergence for either Richardson or Shanks to be effective.
>>> f = lambda k: log(k)/k**2.5
>>> mp.dps = 15
>>> nsum(f, [1, inf], method='euler-maclaurin')
0.38734195032621
>>> -diff(zeta, 2.5)
0.38734195032621
Increasing steps improves speed at higher precision:
>>> mp.dps = 50
>>> nsum(f, [1, inf], method='euler-maclaurin', steps=[250])
0.38734195032620997271199237593105101319948228874688
>>> -diff(zeta, 2.5)
0.38734195032620997271199237593105101319948228874688
Divergent series
The Shanks transformation is able to sum some divergent series. In particular, it is often able to sum Taylor series beyond their radius of convergence (this is due to a relation between the Shanks transformation and Padé approximations; see pade() for an alternative way to evaluate divergent Taylor series).
Here we apply it to \log(1+x) far outside the region of convergence:
>>> mp.dps = 50
>>> nsum(lambda k: -(-9)**k/k, [1, inf],
... method='shanks')
2.3025850929940456840179914546843642076011014886288
>>> log(10)
2.3025850929940456840179914546843642076011014886288
A particular type of divergent series that can be summed using the Shanks transformation is the geometric series. The result is the same as that given by the closed-form formula for an infinite geometric series:
>>> mp.dps = 15
>>> for n in arange(-8, 8):
...     if n == 1:
...         continue
...     print(n, 1/(1-n), nsum(lambda k: n**k, [0, inf],
...         method='shanks'))
...
-8.0 0.111111111111111 0.111111111111111
-7.0 0.125 0.125
-6.0 0.142857142857143 0.142857142857143
-5.0 0.166666666666667 0.166666666666667
-4.0 0.2 0.2
-3.0 0.25 0.25
-2.0 0.333333333333333 0.333333333333333
-1.0 0.5 0.5
0.0 1.0 1.0
2.0 -1.0 -1.0
3.0 -0.5 -0.5
4.0 -0.333333333333333 -0.333333333333333
5.0 -0.25 -0.25
6.0 -0.2 -0.2
7.0 -0.166666666666667 -0.166666666666667
Uses the Euler-Maclaurin formula to compute an approximation accurate to within tol (which defaults to the epsilon of the working precision) of the sum
S = \sum_{k=a}^b f(k)
where (a,b) are given by interval and a or b may be infinite. The approximation is
S \sim \int_a^b f(x) \,dx + \frac{f(a)+f(b)}{2} + \sum_{k=1}^{\infty} \frac{B_{2k}}{(2k)!} \left(f^{(2k-1)}(b)-f^{(2k-1)}(a)\right).
The last sum in the Euler-Maclaurin formula is not generally convergent (a notable exception is if f is a polynomial, in which case Euler-Maclaurin actually gives an exact result).
The summation is stopped as soon as the quotient between two consecutive terms falls below reject. That is, by default (reject = 10), the summation is continued as long as each term adds at least one decimal.
Although the series itself is not generally convergent, convergence to a given tolerance can often be “forced” if b = \infty by summing directly up to a+N and then applying the Euler-Maclaurin formula to the tail over the range (a+N+1, \ldots, \infty). This procedure is implemented by nsum().
By default, numerical quadrature and differentiation are used. If the symbolic values of the integral and endpoint derivatives are known, it is more efficient to pass the value of the integral explicitly as integral and the derivatives explicitly as adiffs and bdiffs. The derivatives should be given as iterables that yield f(a), f'(a), f''(a), \ldots (and the equivalent for b).
Examples
Summation of an infinite series, with automatic and symbolic integral and derivative values (the second should be much faster):
>>> from mpmath import *
>>> mp.dps = 50; mp.pretty = True
>>> sumem(lambda n: 1/n**2, [32, inf])
0.03174336652030209012658168043874142714132886413417
>>> I = mpf(1)/32
>>> D = ((-1)**n*fac(n+1)*32**(-2-n) for n in range(999))
>>> sumem(lambda n: 1/n**2, [32, inf], integral=I, adiffs=D)
0.03174336652030209012658168043874142714132886413417
An exact evaluation of a finite polynomial sum:
>>> sumem(lambda n: n**5-12*n**2+3*n, [-100000, 200000])
10500155000624963999742499550000.0
>>> print(sum(n**5-12*n**2+3*n for n in range(-100000, 200001)))
10500155000624963999742499550000
Calculates a product containing a finite number of factors (for infinite products, see nprod()). The factors will be converted to mpmath numbers.
>>> from mpmath import *
>>> mp.dps = 15
>>> fprod([1, 2, 0.5, 7])
mpf('7.0')
Computes the product
P = \prod_{k=a}^b f(k)
where (a, b) = interval, and where a = -\infty and/or b = \infty are allowed.
This function is essentially equivalent to applying nsum() to the logarithm of the product (which, of course, becomes a series). All keyword arguments passed to nprod() are forwarded verbatim to nsum().
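To make the correspondence concrete, here is a quick consistency check (an illustrative sketch, not the actual implementation, which handles signs and convergence more carefully; the comparison tolerance is deliberately loose):
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> a = nprod(lambda k: 1 + 1/k**2, [1, inf])
>>> b = exp(nsum(lambda k: log(1 + 1/k**2), [1, inf]))
>>> abs(a - b) < mpf('1e-10')
True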
Examples
A simple finite product:
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> nprod(lambda k: k, [1, 4])
24.0
A large number of infinite products have known exact values, and can therefore be used as a reference. Most of the following examples are taken from MathWorld [1].
A few infinite products with simple values are:
>>> 2*nprod(lambda k: (4*k**2)/(4*k**2-1), [1, inf])
3.14159265358979
>>> nprod(lambda k: (1+1/k)**2/(1+2/k), [1, inf])
2.0
>>> nprod(lambda k: (k**3-1)/(k**3+1), [2, inf])
0.666666666666667
>>> nprod(lambda k: (1-1/k**2), [2, inf])
0.5
Next, several more infinite products with more complicated values:
>>> nprod(lambda k: exp(1/k**2), [1, inf])
5.18066831789712
>>> exp(pi**2/6)
5.18066831789712
>>> nprod(lambda k: (k**2-1)/(k**2+1), [2, inf])
0.272029054982133
>>> pi*csch(pi)
0.272029054982133
>>> nprod(lambda k: (k**4-1)/(k**4+1), [2, inf])
0.8480540493529
>>> pi*sinh(pi)/(cosh(sqrt(2)*pi)-cos(sqrt(2)*pi))
0.8480540493529
>>> nprod(lambda k: (1+1/k+1/k**2)**2/(1+2/k+3/k**2), [1, inf])
1.84893618285824
>>> 3*sqrt(2)*cosh(pi*sqrt(3)/2)**2*csch(pi*sqrt(2))/pi
1.84893618285824
>>> nprod(lambda k: (1-1/k**4), [2, inf])
0.919019477593744
>>> sinh(pi)/(4*pi)
0.919019477593744
>>> nprod(lambda k: (1-1/k**6), [2, inf])
0.982684277742192
>>> (1+cosh(pi*sqrt(3)))/(12*pi**2)
0.982684277742192
>>> nprod(lambda k: (1+1/k**2), [2, inf])
1.83803895518749
>>> sinh(pi)/(2*pi)
1.83803895518749
>>> nprod(lambda n: (1+1/n)**n * exp(1/(2*n)-1), [1, inf])
1.44725592689037
>>> exp(1+euler/2)/sqrt(2*pi)
1.44725592689037
The following two products are equivalent and can be evaluated in terms of a Jacobi theta function. Here \pi can be replaced by any value (as long as convergence is preserved):
>>> nprod(lambda k: (1-pi**-k)/(1+pi**-k), [1, inf])
0.383845120748167
>>> nprod(lambda k: tanh(k*log(pi)/2), [1, inf])
0.383845120748167
>>> jtheta(4,0,1/pi)
0.383845120748167
This product does not have a known closed-form value:
>>> nprod(lambda k: (1-1/2**k), [1, inf])
0.288788095086602
References
1. E. W. Weisstein, “Infinite Product”, MathWorld. http://mathworld.wolfram.com/InfiniteProduct.html
Computes an estimate of the limit
\lim_{t \to x} f(t)
where x may be finite or infinite.
For finite x, limit() evaluates f(x + d/n) for consecutive integer values of n, where the approach direction d may be specified using the direction keyword argument. For infinite x, limit() evaluates values of f(\mathrm{sign}(x) \cdot n).
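For example, one-sided limits at a finite point can be computed by choosing the sign of d (a small sketch using sign(); here direction=1 approaches from above and direction=-1 from below):
>>> from mpmath import *
>>> mp.pretty = True
>>> limit(lambda t: sign(t), 0, direction=1)
1.0
>>> limit(lambda t: sign(t), 0, direction=-1)
-1.0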
If the approach to the limit is not sufficiently fast to give an accurate estimate directly, limit() attempts to find the limit using Richardson extrapolation or the Shanks transformation. You can select between these methods using the method keyword (see documentation of nsum() for more information).
Options
The following options are available with essentially the same meaning as for nsum(): tol, method, maxterms, steps, verbose.
If the option exp=True is set, f will be sampled at exponentially spaced points n = 2^1, 2^2, 2^3, \ldots instead of the linearly spaced points n = 1, 2, 3, \ldots. This can sometimes improve the rate of convergence, allowing limit() to return a more accurate answer faster. Note, however, that this can only be used if f supports fast and accurate evaluation for arguments that are extremely close to the limit point (or, if the limit point is infinite, for very large arguments).
Examples
A basic evaluation of a removable singularity:
>>> from mpmath import *
>>> mp.dps = 30; mp.pretty = True
>>> limit(lambda x: (x-sin(x))/x**3, 0)
0.166666666666666666666666666667
Computing the exponential function using its limit definition:
>>> limit(lambda n: (1+3/n)**n, inf)
20.0855369231876677409285296546
>>> exp(3)
20.0855369231876677409285296546
A limit for \pi:
>>> f = lambda n: 2**(4*n+1)*fac(n)**4/(2*n+1)/fac(2*n)**2
>>> limit(f, inf)
3.14159265358979323846264338328
Calculating the coefficient in Stirling’s formula:
>>> limit(lambda n: fac(n) / (sqrt(n)*(n/e)**n), inf)
2.50662827463100050241576528481
>>> sqrt(2*pi)
2.50662827463100050241576528481
Evaluating Euler’s constant \gamma using the limit representation
\gamma = \lim_{n \rightarrow \infty } \left[ \left( \sum_{k=1}^n \frac{1}{k} \right) - \log(n) \right]
(which converges notoriously slowly):
>>> f = lambda n: sum([mpf(1)/k for k in range(1,n+1)]) - log(n)
>>> limit(f, inf)
0.577215664901532860606512090082
>>> +euler
0.577215664901532860606512090082
With default settings, the following limit converges too slowly to be evaluated accurately. Changing to exponential sampling however gives a perfect result:
>>> f = lambda x: sqrt(x**3+x**2)/(sqrt(x**3)+x)
>>> limit(f, inf)
0.992518488562331431132360378669
>>> limit(f, inf, exp=True)
1.0
The following functions provide a direct interface to extrapolation algorithms. nsum() and limit() essentially work by calling the following functions with an increasing number of terms until the extrapolated limit is accurate enough.
The following functions may be useful to call directly if the precise number of terms needed to achieve a desired accuracy is known in advance, or if one wishes to study the convergence properties of the algorithms.
Given a list seq of the first N elements of a slowly convergent infinite sequence, richardson() computes the N-term Richardson extrapolate for the limit.
richardson() returns (v, c) where v is the estimated limit and c is the magnitude of the largest weight used during the computation. The weight provides an estimate of the precision lost to cancellation. Due to cancellation effects, the sequence must typically be computed at a much higher precision than the target accuracy of the extrapolation.
Applicability and issues
The N-step Richardson extrapolation algorithm used by richardson() is described in [1].
Richardson extrapolation only works for a specific type of sequence, namely one converging like partial sums of P(1)/Q(1) + P(2)/Q(2) + \ldots where P and Q are polynomials. When the sequence does not converge at such a rate, richardson() generally produces garbage.
Richardson extrapolation has the advantage of being fast: the N-term extrapolate requires only O(N) arithmetic operations, and usually produces an estimate that is accurate to O(N) digits. Contrast with the Shanks transformation (see shanks()), which requires O(N^2) operations.
richardson() is unable to produce an estimate for the approximation error. One way to estimate the error is to perform two extrapolations with slightly different N and compare the results.
Richardson extrapolation does not work for oscillating sequences. As a simple workaround, richardson() detects if the last three elements do not differ monotonically, and in that case applies extrapolation only to the even-index elements.
Example
Applying Richardson extrapolation to the Leibniz series for \pi:
>>> from mpmath import *
>>> mp.dps = 30; mp.pretty = True
>>> S = [4*sum(mpf(-1)**n/(2*n+1) for n in range(m))
... for m in range(1,30)]
>>> v, c = richardson(S[:10])
>>> v
3.2126984126984126984126984127
>>> nprint([v-pi, c])
[7.11058e-2, 2.0]
>>> v, c = richardson(S[:30])
>>> v
3.14159265468624052829954206226
>>> nprint([v-pi, c])
[1.09645e-9, 20833.3]
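As suggested above, the error can be estimated by comparing two extrapolates of slightly different length (a rough consistency check rather than a rigorous bound; the tolerance here is chosen loosely):
>>> v1, c1 = richardson(S[:28])
>>> v2, c2 = richardson(S[:30])
>>> abs(v1 - v2) < mpf('1e-6')
True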
References
Given a list seq of the first N elements of a slowly convergent infinite sequence (A_k), shanks() computes the iterated Shanks transformation S(A), S(S(A)), \ldots, S^{N/2}(A). The Shanks transformation often provides strong convergence acceleration, especially if the sequence is oscillating.
The iterated Shanks transformation is computed using the Wynn epsilon algorithm (see [1]). shanks() returns the full epsilon table generated by Wynn’s algorithm. For convenience, the extrapolation is stopped at an odd index so that shanks(seq)[-1][-1] always gives an estimate of the limit.
Optionally, an existing table can be passed to shanks(). This can be used to efficiently extend a previous computation after new elements have been appended to the sequence. The table will then be updated in-place.
The Shanks transformation
The Shanks transformation is defined as follows (see [2]): given the input sequence (A_0, A_1, \ldots), the transformed sequence is given by
S(A_k) = \frac{A_{k+1}A_{k-1}-A_k^2}{A_{k+1}+A_{k-1}-2 A_k}
The Shanks transformation gives the exact limit A_{\infty} in a single step if A_k = A + a q^k. Note in particular that it extrapolates the exact sum of a geometric series in a single step.
Applying the Shanks transformation once often improves convergence substantially for an arbitrary sequence, but the optimal effect is obtained by applying it iteratively: S(S(A_k)), S(S(S(A_k))), \ldots.
Wynn’s epsilon algorithm provides an efficient way to generate the table of iterated Shanks transformations. It reduces the computation of each element to essentially a single division, at the cost of requiring dummy elements in the table. See [1] for details.
Precision issues
Due to cancellation effects, the sequence must typically be computed at a much higher precision than the target accuracy of the extrapolation.
If the Shanks transformation converges to the exact limit (such as if the sequence is a geometric series), then a division by zero occurs. By default, shanks() handles this case by terminating the iteration and returning the table it has generated so far. With randomized=True, it will instead replace the zero by a pseudorandom number close to zero. (TODO: find a better solution to this problem.)
Examples
We illustrate by applying the Shanks transformation to the Leibniz series for \pi:
>>> from mpmath import *
>>> mp.dps = 50
>>> S = [4*sum(mpf(-1)**n/(2*n+1) for n in range(m))
... for m in range(1,30)]
>>>
>>> T = shanks(S[:7])
>>> for row in T:
... nprint(row)
...
[-0.75]
[1.25, 3.16667]
[-1.75, 3.13333, -28.75]
[2.25, 3.14524, 82.25, 3.14234]
[-2.75, 3.13968, -177.75, 3.14139, -969.937]
[3.25, 3.14271, 327.25, 3.14166, 3515.06, 3.14161]
The extrapolated accuracy is about 4 digits, and about 4 digits may have been lost due to cancellation:
>>> L = T[-1]
>>> nprint([abs(L[-1] - pi), abs(L[-1] - L[-3]), abs(L[-2])])
[2.22532e-5, 4.78309e-5, 3515.06]
Now we extend the computation:
>>> T = shanks(S[:25], T)
>>> L = T[-1]
>>> nprint([abs(L[-1] - pi), abs(L[-1] - L[-3]), abs(L[-2])])
[3.75527e-19, 1.48478e-19, 2.96014e+17]
The value for \pi is now accurate to 18 digits. About 18 digits may also have been lost to cancellation.
Here is an example with a geometric series, where the convergence is immediate (the sum is exactly 1):
>>> mp.dps = 15
>>> for row in shanks([0.5, 0.75, 0.875, 0.9375, 0.96875]):
... nprint(row)
[4.0]
[8.0, 1.0]
References