Convergence acceleration of series
1 Introduction
Numerous mathematical constants are obtained as limits of series, and many of
those series converge very slowly, which makes methods to accelerate their
convergence necessary. In this section, we recall some definitions and
elementary results on well-known series.
For any series \sum_k a_k, we denote by s_n its partial sums, that is

    s_n = a_0 + a_1 + \cdots + a_n = \sum_{k=0}^{n} a_k.
We are interested in the behavior of s_n as n tends to infinity (that is,
in infinite series). Series whose terms have alternating signs are said to be
alternating series; we write them as

    s_n = a_0 - a_1 + \cdots + (-1)^n a_n = \sum_{k=0}^{n} (-1)^k a_k,

where the (a_k) are positive numbers.
Definition 1
An infinite series \sum a_k is said to be convergent if its partial sums
(s_n) tend to a limit S (called the sum of the series) as n tends to
infinity. Otherwise the series is said to be divergent.
The following examples are elementary and occur frequently, and it is usual to
compare a given series to one of them.
Example 2
(Geometric series) For the geometric series, defined by a_k = x^k, the partial sum s_n is given by

    s_n = 1 + x + x^2 + \cdots + x^n = (1 - x^{n+1})/(1 - x),

hence it converges for |x| < 1 to S = 1/(1-x) and diverges otherwise.
Example 3
(Harmonic series) The harmonic series is defined for k > 0 by a_k = 1/k, and the partial sum s_n (sometimes denoted H_n and
called the harmonic number) is

    s_n = 1 + 1/2 + 1/3 + \cdots + 1/n.
We have, for n = 2^p,

    s_{2^p} = (1 + 1/2) + (1/3 + 1/4) + (1/5 + \cdots + 1/8) + \cdots + (1/(2^{p-1}+1) + \cdots + 1/2^p)
            \ge 1\cdot(1/2) + 2\cdot(1/4) + 4\cdot(1/8) + \cdots + 2^{p-1}\cdot(1/2^p) = p/2,

therefore the partial sums are unbounded and the harmonic series is
divergent (this important result was known to Jakob Bernoulli (1654-1705) and, much earlier, to the French bishop Nicolas Oresme (1323-1382)).
Example 4
(Riemann Zeta series) A natural generalization of the harmonic series is to
consider the partial sum

    s_n = 1 + 1/2^s + 1/3^s + \cdots + 1/n^s,

which converges to the Riemann Zeta function \zeta(s) for any complex
number s such that \Re(s) > 1. The case s = 1 is the harmonic series, and
the limits for even arguments s = 2p have been well known since Euler, who established
for example that \zeta(2) = \pi^2/6, \zeta(4) = \pi^4/90, ...
2 Kummer's acceleration method
Ernst Kummer's (1810-1893) method is very natural to understand and may be
used to accelerate the convergence of many series. It goes back to 1837, and
the idea is to subtract from a given convergent series \sum a_k another
equivalent series \sum b_k whose sum C = \sum_{k \ge 0} b_k is
well known. More precisely, suppose that

    \lim_{k\to\infty} (a_k/b_k) = \rho \ne 0;

then the transformation is given by

    \sum_{k=0}^{\infty} a_k = \rho \sum_{k=0}^{\infty} b_k + \sum_{k=0}^{\infty} (1 - \rho b_k/a_k) a_k = \rho C + \sum_{k=0}^{\infty} (1 - \rho b_k/a_k) a_k.    (1)

The convergence of the latter series is faster because 1 - \rho b_k/a_k
tends to 0 as k tends to infinity [1].
Example 5
Consider S = \sum_{k \ge 1} 1/k^2. Since

    \sum_{k=1}^{\infty} 1/(k(k+1)) = \sum_{k=1}^{\infty} (1/k - 1/(k+1)) = 1,

we have \rho = C = 1 and Kummer's transformation (1) becomes

    S = 1 + \sum_{k=1}^{\infty} (1 - k^2/(k(k+1))) (1/k^2) = 1 + \sum_{k=1}^{\infty} 1/(k^2(k+1)).

The process can be repeated, this time with

    \sum_{k=1}^{\infty} 1/(k(k+1)(k+2)) = 1/4,

giving

    S = 5/4 + 2 \sum_{k=1}^{\infty} 1/(k^2(k+1)(k+2)).

After N applications of the transformation,

    S = \sum_{k=1}^{N} 1/k^2 + N! \sum_{k=1}^{\infty} 1/(k^2(k+1)(k+2)\cdots(k+N)),

whose convergence may be rapid enough for suitable values of N (this
example is due to Knopp [8]). Note that we have used the following
family of series to improve the convergence:

    \sum_{k=1}^{\infty} b_k = \sum_{k=1}^{\infty} 1/(k(k+1)\cdots(k+N)) = 1/(N \cdot N!).
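Knopp's repeated transformation above is easy to check numerically; the Python sketch below (the helper name and the truncation at 30 terms are our own choices) applies it to S = \zeta(2) = \pi^2/6.

```python
import math

def zeta2_kummer(N, terms=30):
    """Knopp's repeated Kummer transformation for S = sum 1/k^2:
    S = sum_{k=1}^N 1/k^2 + N! * sum_{k>=1} 1/(k^2 (k+1)...(k+N))."""
    head = sum(1.0 / k**2 for k in range(1, N + 1))
    tail = 0.0
    for k in range(1, terms + 1):
        prod = k * k
        for j in range(1, N + 1):
            prod *= k + j          # k^2 (k+1)(k+2)...(k+N)
        tail += 1.0 / prod
    return head + math.factorial(N) * tail

# 30 terms of the transformed series with N = 6 give about 8 correct digits,
# whereas 30 terms of the raw series sum 1/k^2 only give about 2.
print(abs(zeta2_kummer(6) - math.pi**2 / 6))
```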

3 Richardson Extrapolation method
Suppose that the partial sum satisfies the relation

    S = s_n + a/n^p + O(1/n^{p+1}),    (2)

where S is the limit of the sum, p a known integer and a a
constant which we do not need to determine. Then, following the ideas of
Lewis Fry Richardson (1881-1953) [10], it is natural to define
the transformation

    s_n^{(1)} = (2^p s_{2n} - s_n)/(2^p - 1) = s_{2n} + (s_{2n} - s_n)/(2^p - 1)

in order to eliminate the term in a from relation (2). The process may be repeated on the new series:

    s_n^{(2)} = (2^{p+1} s_{2n}^{(1)} - s_n^{(1)})/(2^{p+1} - 1),   s_n^{(3)} = (2^{p+2} s_{2n}^{(2)} - s_n^{(2)})/(2^{p+2} - 1),   ...

Example 6
Suppose that you have computed, by geometric means as Archimedes did,
the perimeters of the circumscribed polygons with respectively 6, 12, 24, 48
and 96 sides on a unit circle. In this case S = \pi and p = 2; since the
error expansion contains only even powers of 1/n, the successive steps of
Richardson's process here become

    s_n^{(1)} = s_{2n} + (s_{2n} - s_n)/3,
    s_n^{(2)} = s_{2n}^{(1)} + (s_{2n}^{(1)} - s_n^{(1)})/15,
    s_n^{(3)} = s_{2n}^{(2)} + (s_{2n}^{(2)} - s_n^{(2)})/63,
    ...

Applying this to the five computed perimeters (the number of sides being 6n,
with n = 1, 2, 4, 8, 16), we finally find \pi \approx s_1^{(4)} = 3.141592653(637...), which
is correct to nearly 10 digits and therefore much more accurate than the
two-digit value estimated by Archimedes!
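The whole computation fits in a few lines; this Python sketch uses m*tan(\pi/m) for the half-perimeter of the circumscribed m-gon (a shortcut in place of Archimedes' geometric construction) and reproduces the repeated extrapolation with the factors 1/3, 1/15, 1/63, 1/255.

```python
import math

# Half-perimeter of the circumscribed regular polygon with m sides
# around the unit circle: m*tan(pi/m) = pi + O(1/m^2), even powers only.
sides = [6 * 2**i for i in range(5)]                 # 6, 12, 24, 48, 96
s = [m * math.tan(math.pi / m) for m in sides]

# Repeated Richardson extrapolation: step j eliminates the 1/m^(2j) term,
# with factor 1/(4^j - 1), i.e. 1/3, 1/15, 1/63, 1/255.
for j in range(1, 5):
    s = [s[i] + (s[i] - s[i - 1]) / (4**j - 1) for i in range(1, len(s))]

print(abs(s[0] - math.pi))   # of order 1e-10: nearly 10 correct digits
```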
4 Aitken's acceleration and related methods
4.1 Aitken's \Delta^2 process
In 1926, Alexander Aitken (1895-1967) found a new procedure to accelerate
the rate of convergence of a series, the Aitken \Delta^2 process [2]. It consists in constructing a new series S^{(1)},
whose partial sums s_n^{(1)} are given by

    s_n^{(1)} = s_{n+1} - (s_{n+1} - s_n)^2/(s_{n+1} - 2s_n + s_{n-1}) = (s_{n-1}s_{n+1} - s_n^2)/(s_{n+1} - 2s_n + s_{n-1}),    (3)

where (s_{n-1}, s_n, s_{n+1}) are three successive partial sums of the
original series. Of course, it is possible to repeat the process on the new
series S^{(1)} to obtain S^{(2)}, and so on. This process was in fact known to
the Japanese mathematician Takakazu Seki Kowa (1642-1708), who used it to
compute 10 correct digits of the constant \pi by accelerating the
convergence of the polygon algorithm.
For example, for the convergent geometric series with 0 < x < 1,

    s_n = 1 + x + x^2 + \cdots + x^n = (1 - x^{n+1})/(1 - x),  with limit S = 1/(1-x),

hence

    s_n = (1 - x^{n+1}) S = S - x^{n+1} S,

and if we replace (s_{n-1}, s_n, s_{n+1}) by this expression in (3), we find s_n^{(1)} = S: the exact limit is given by just
three evaluations of the original series. This indicates that for an
almost geometric series, Aitken's process may be very efficient.
Example 7
If we consider the extremely slowly (logarithmic rate) converging series

    S = \lim_{n\to\infty} s_n = \lim_{n\to\infty} \sum_{k=0}^{n} (-1)^k/(k+1) = \log(2),

three consecutive applications of Aitken's process to the first 7 computed
terms of this series already give several correct digits.
A formal expression may be obtained in this case; the expressions of (s_{n-1}, s_n, s_{n+1}) are

    s_n = s_{n-1} + (-1)^n/(n+1),   s_{n+1} = s_n + (-1)^{n+1}/(n+2),

so that

    s_n^{(1)} = s_{n+1} + (-1)^n (n+1)/((n+2)(2n+3)).
For n = 1000, this single application improves the error from about 1/(2n) for s_n to order 1/n^3.
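A minimal sketch of the iterated \Delta^2 process in Python (the helper name and layout are ours); applied to the first 7 partial sums of the series for log(2), three applications recover several correct digits:

```python
import math

def aitken(s):
    # One application of formula (3) to a list of partial sums;
    # the result has two fewer entries than the input.
    return [s[i + 1] - (s[i + 1] - s[i])**2 / (s[i + 1] - 2*s[i] + s[i - 1])
            for i in range(1, len(s) - 1)]

# Partial sums of sum (-1)^k/(k+1) -> log(2)
s, total = [], 0.0
for k in range(7):
    total += (-1)**k / (k + 1)
    s.append(total)

for _ in range(3):          # three consecutive applications: 7 -> 5 -> 3 -> 1
    s = aitken(s)
print(abs(s[0] - math.log(2)))   # a few 1e-6, far better than s_6 itself
```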
4.2 The \epsilon-algorithm
In 1956, Peter Wynn obtained a generalization of Aitken's algorithm which is
called the \epsilon-algorithm [11]. It is defined by the
transformation

    \epsilon_n^{(k+1)} = \epsilon_{n+1}^{(k-1)} + 1/(\epsilon_{n+1}^{(k)} - \epsilon_n^{(k)}),    (4)

starting with \epsilon_n^{(-1)} = 0 and \epsilon_n^{(0)} = s_n,
where (s_n) are the partial sums of the series to be accelerated.
Example 8
Again we take the series

    \lim_{n\to\infty} \sum_{k=0}^{n} (-1)^k/(k+1) = \log(2);

applying Wynn's process (4) consecutively to the same 7 first computed terms
as in the previous Example 7 finally gives \epsilon_n^{(6)} = 0.6931524547...
Note that the odd values of the index are intermediate quantities.
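Wynn's recurrence (4) is equally short to implement; this Python sketch (our own arrangement of the table, built column by column) returns the last entry of the \epsilon-table:

```python
import math

def wynn_epsilon(s):
    """Apply (4): eps^(k+1)_n = eps^(k-1)_{n+1} + 1/(eps^(k)_{n+1} - eps^(k)_n),
    starting from eps^(-1)_n = 0 and eps^(0)_n = s_n."""
    prev = [0.0] * len(s)        # column k-1 (initially eps^(-1))
    cur = list(s)                # column k   (initially eps^(0))
    for _ in range(len(s) - 1):
        nxt = [prev[i + 1] + 1.0 / (cur[i + 1] - cur[i])
               for i in range(len(cur) - 1)]
        prev, cur = cur, nxt
    return cur[0]                # even columns estimate the limit

# Partial sums of sum (-1)^k/(k+1) -> log(2)
s, total = [], 0.0
for k in range(7):
    total += (-1)**k / (k + 1)
    s.append(total)
print(abs(wynn_epsilon(s) - math.log(2)))
```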
5 EulerMaclaurin summation formula
Suppose that the (a_k) may be written as the values of a given
differentiable function f, that is a_k = f(k), and so

    s_n = \sum_{k=0}^{n} a_k = \sum_{k=0}^{n} f(k).

The following famous result, due to Leonhard Euler (1732) and Colin Maclaurin
(1742, [9]), states that

    s_n = \int_0^n f(t)\,dt + (1/2)(f(0) + f(n)) + \sum_{k=1}^{m} (B_{2k}/(2k)!) (f^{(2k-1)}(n) - f^{(2k-1)}(0)) + \epsilon_{m,n},

where \epsilon_{m,n} is a remainder given by

    \epsilon_{m,n} = (1/(2m+1)!) \int_0^n B_{2m+1}(x - \lfloor x \rfloor) f^{(2m+1)}(x)\,dx,

where the B_{2k} are Bernoulli's numbers and the B_i(x) are Bernoulli's
polynomials (see the Bernoulli numbers essay for more details, or [7]
for a proof).
Example 9
Suppose that s > 1 and f(x) = 1/x^s; then the formula (applied between 1 and n) gives

    1 + 1/2^s + \cdots + 1/n^s = (1/(s-1)) (1 - 1/n^{s-1}) + (1/2)(1 + 1/n^s) + (B_2/2) \binom{s}{1} (1 - 1/n^{s+1}) + \cdots
        + (B_{2m}/2m) \binom{s+2m-2}{2m-1} (1 - 1/n^{s+2m-1}) + \epsilon_{m,n};

letting n \to \infty in this identity, we find the relation

    \zeta(s) = \sum_{k=1}^{\infty} 1/k^s = 1/(s-1) + 1/2 + (B_2/2)\binom{s}{1} + \cdots + (B_{2m}/2m)\binom{s+2m-2}{2m-1} + \epsilon_{m,\infty},
and by computing \zeta(s) - s_n we find the useful algorithm to
estimate \zeta(s):

    \zeta(s) = 1 + 1/2^s + \cdots + 1/n^s + 1/((s-1) n^{s-1}) - 1/(2 n^s) + \sum_{i=1}^{m} (B_{2i}/2i) \binom{s+2i-2}{2i-1} (1/n^{s+2i-1}) + (\epsilon_{m,\infty} - \epsilon_{m,n}),

and for any given integer m, the remainder tends to zero as n grows (Exercise: check
this with care!).
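This algorithm is straightforward to put into code; the following Python sketch (with the first few Bernoulli numbers hard-coded, and the cut-offs n and m chosen arbitrarily) estimates \zeta(s) for real s > 1:

```python
import math

B = {2: 1/6, 4: -1/30, 6: 1/42, 8: -1/30, 10: 5/66}   # Bernoulli numbers B_{2i}

def zeta_euler_maclaurin(s, n=10, m=5):
    """zeta(s) ~ sum_{k<=n} 1/k^s + n^(1-s)/(s-1) - 1/(2 n^s)
                 + sum_i B_{2i}/(2i) * binom(s+2i-2, 2i-1) / n^(s+2i-1)."""
    total = sum(k ** -s for k in range(1, n + 1))
    total += n ** (1 - s) / (s - 1) - 0.5 * n ** -s
    for i in range(1, m + 1):
        binom = 1.0          # binom(s+2i-2, 2i-1) = s(s+1)...(s+2i-2)/(2i-1)!
        for j in range(2 * i - 1):
            binom *= s + j
        binom /= math.factorial(2 * i - 1)
        total += B[2 * i] / (2 * i) * binom * n ** (-s - 2 * i + 1)
    return total

print(abs(zeta_euler_maclaurin(2.0) - math.pi**2 / 6))   # far below 1e-9
```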
6 Convergence improvement of alternating series
Very efficient methods exist to accelerate the convergence of an alternating
series

    S = \sum_{k=0}^{\infty} (-1)^k a_k;

one of the first is due to Euler. Usually the transformed series converges
geometrically, with a rate of convergence depending on the method.
Because the speed of convergence may be dramatically improved, there is a trick
due to Van Wijngaarden which permits the transformation of any series \sum b_k
with positive terms b_k into an alternating series. It may be written

    \sum_{k=0}^{\infty} b_k = \sum_{k=0}^{\infty} (-1)^k a_k

with

    a_k = \sum_{j=0}^{\infty} 2^j b_{2^j(k+1)-1} = b_k + 2 b_{2k+1} + 4 b_{4k+3} + 8 b_{8k+7} + \cdots,

and because 2^j(k+1) - 1 increases very rapidly with j, only a few terms
of this sum are required.
Sometimes a closed form exists for the a_k; for example, if b_k = 1/(k+1)^2,
then

    a_k = \sum_{j=0}^{\infty} 2^j b_{2^j(k+1)-1} = \sum_{j=0}^{\infty} 2^j/(2^j(k+1))^2 = (1/(k+1)^2) \sum_{j=0}^{\infty} 1/2^j = 2/(k+1)^2,

and

    \sum_{k=0}^{\infty} 1/(k+1)^2 = 2 \sum_{k=0}^{\infty} (-1)^k/(k+1)^2.
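The transformation is easy to check numerically; in this Python sketch (truncating the j-sum at 60 terms, far more than double precision needs) we verify the closed form a_k = 2/(k+1)^2 derived above:

```python
def van_wijngaarden_term(k, b, jmax=60):
    # a_k = sum_j 2^j * b(2^j (k+1) - 1); the indices grow geometrically,
    # so very few terms matter when b(k) decays like 1/k^2.
    return sum(2**j * b(2**j * (k + 1) - 1) for j in range(jmax))

b = lambda k: 1.0 / (k + 1)**2          # terms of sum 1/(k+1)^2

for k in range(5):
    # closed form from the text: a_k = 2/(k+1)^2
    print(abs(van_wijngaarden_term(k, b) - 2.0 / (k + 1)**2))
```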
6.1 Euler's method
In 1755, Euler gave a first version of what is now called Euler's
transformation. To establish it, observe that the sum of the alternating
series may be written as

    2S = a_0 + (a_0 - a_1) - (a_1 - a_2) + (a_2 - a_3) - \cdots = a_0 + S_1,

with

    S_1 = \Delta^1 a_0 - \Delta^1 a_1 + \Delta^1 a_2 - \cdots + (-1)^k \Delta^1 a_k + \cdots,

where the first difference operator is denoted by

    \Delta^1 a_k = a_k - a_{k+1}.

The new series S_1 is also alternating and the process may be applied again:

    2S_1 = \Delta^1 a_0 + (\Delta^1 a_0 - \Delta^1 a_1) - (\Delta^1 a_1 - \Delta^1 a_2) + (\Delta^1 a_2 - \Delta^1 a_3) - \cdots = \Delta^1 a_0 + T_1,

with this time

    T_1 = \Delta^2 a_0 - \Delta^2 a_1 + \Delta^2 a_2 - \cdots + (-1)^k \Delta^2 a_k + \cdots,

where the second difference operator is denoted by

    \Delta^2 a_k = \Delta^1 a_k - \Delta^1 a_{k+1}.

The following theorem is then easily deduced by repeating the process (we
omit some details, given in [8]).
Theorem 10
(Euler) Let S = \sum_{k=0}^{\infty} (-1)^k a_k be a convergent alternating
series; then

    S = \lim_{n\to\infty} s_n = \lim_{n\to\infty} \left( (1/2) \sum_{k=0}^{n} \Delta^k a_0/2^k \right),    (5)

where \Delta is the difference operator defined by

    \Delta^k a_0 = \Delta^{k-1} a_0 - \Delta^{k-1} a_1 = \sum_{j=0}^{k} (-1)^j \binom{k}{j} a_j.

The first values of \Delta^k a_0 are

    \Delta^1 a_0 = a_0 - a_1,
    \Delta^2 a_0 = \Delta^1 a_0 - \Delta^1 a_1 = a_0 - 2a_1 + a_2,
    ...
    \Delta^k a_0 = a_0 - k a_1 + (k(k-1)/2!) a_2 - (k(k-1)(k-2)/3!) a_3 + \cdots + (-1)^k a_k.
Example 11
Sometimes it is possible to find a closed form for the transformation (5). Take

    \pi/4 = 1 - 1/3 + 1/5 - 1/7 + \cdots = \sum_{k=0}^{\infty} (-1)^k a_k,  with a_k = 1/(2k+1);

the rate of convergence of this famous series is extremely slow, but Euler's
method gives

    \Delta^1 a_0 = 1 - 1/3 = (1\cdot2)/(1\cdot3),
    \Delta^2 a_0 = \Delta^1 a_0 - \Delta^1 a_1 = (1\cdot2)/(1\cdot3) - (1\cdot2)/(3\cdot5) = (1\cdot2\cdot4)/(1\cdot3\cdot5),
    \Delta^3 a_0 = \Delta^2 a_0 - \Delta^2 a_1 = (1\cdot2\cdot4)/(1\cdot3\cdot5) - (1\cdot2\cdot4)/(3\cdot5\cdot7) = (1\cdot2\cdot4\cdot6)/(1\cdot3\cdot5\cdot7);
this suggests the expression (prove it by induction!)

    \Delta^n a_0 = (2\cdot4\cdots(2n))/(1\cdot3\cdot5\cdots(2n+1)),
and

    1 - 1/3 + 1/5 - 1/7 + \cdots = (1/2) \left( 1 + ((1\cdot2)/(1\cdot3))(1/2) + ((1\cdot2\cdot4)/(1\cdot3\cdot5))(1/2^2) + \cdots \right)
        = (1/2) \left( 1 + 1/3 + (1\cdot2)/(3\cdot5) + (1\cdot2\cdot3)/(3\cdot5\cdot7) + \cdots \right) = (1/2) \sum_{k=0}^{\infty} 2^k k!^2/(2k+1)!,

a relation also given by Euler.
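Formula (5) translates directly into a short program; the Python sketch below builds the difference table row by row (the truncation at n terms is our choice) and applies it to the series for \pi/4:

```python
import math

def euler_transform(a, n):
    """Euler's transformation (5): S ~ (1/2) sum_{k<n} Delta^k a_0 / 2^k,
    where Delta^k a_0 = Delta^{k-1} a_0 - Delta^{k-1} a_1."""
    d = list(a)                          # current row Delta^k a_j
    total = 0.0
    for k in range(n):
        total += d[0] / 2**k
        d = [d[j] - d[j + 1] for j in range(len(d) - 1)]
    return total / 2

a = [1.0 / (2 * k + 1) for k in range(20)]   # a_k for pi/4 = sum (-1)^k a_k
print(abs(euler_transform(a, 15) - math.pi / 4))
```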
6.2 Improvement
In 1991, a generalization of Euler's method was given in [4] (this
work in fact generalizes a technique that was used to obtain irrationality
measures of some classical constants, like log(2) for example). This
algorithm is very general, powerful and easy to understand. For an
alternating series \sum (-1)^k a_k, we assume that there exists a
positive function w(x) such that

    a_k = \int_0^1 x^k w(x)\,dx.
This relation permits us to write the sum of the series as

    \sum_{k=0}^{\infty} (-1)^k a_k = \int_0^1 \left( \sum_{k=0}^{\infty} (-1)^k x^k \right) w(x)\,dx = \int_0^1 w(x)/(1+x)\,dx.

Now, for any sequence of polynomials P_n(x) of degree n with P_n(-1) \ne 0, we denote by S_n the number

    S_n = (1/P_n(-1)) \int_0^1 ((P_n(-1) - P_n(x))/(1+x)) w(x)\,dx.

Notice that S_n is a linear combination of the numbers (a_k)_{0 \le k < n}, since if we write P_n(x) = \sum_{k=0}^{n} p_k (-x)^k, we easily have

    S_n = (1/P_n(-1)) \sum_{k=0}^{n-1} c_k (-1)^k a_k,  with  c_k = \sum_{j=k+1}^{n} p_j.    (6)
The number S_n satisfies

    S_n = \int_0^1 w(x)/(1+x)\,dx - (1/P_n(-1)) \int_0^1 P_n(x) w(x)/(1+x)\,dx = S - (1/P_n(-1)) \int_0^1 P_n(x) w(x)/(1+x)\,dx.
Therefore we deduce

    |S_n - S| \le (1/|P_n(-1)|) \int_0^1 |P_n(x)| w(x)/(1+x)\,dx \le (M_n/|P_n(-1)|) S,   M_n = \sup_{x \in [0,1]} |P_n(x)|.

This inequality suggests choosing polynomials with a small value of M_n/|P_n(-1)|.
6.2.1 Choice of a family of polynomials
- A first possible choice is

    P_n(x) = (1-x)^n = \sum_{k=0}^{n} \binom{n}{k} (-x)^k,  for which P_n(-1) = 2^n, M_n = 1.

It leads to the acceleration |S_n - S| \le S/2^n, where

    S_n = (1/2^n) \sum_{k=0}^{n-1} (-1)^k c_k a_k,   c_k = \sum_{j=k+1}^{n} \binom{n}{j}.

This choice corresponds in fact to Euler's method.
- Another choice is

    P_n(x) = (1-2x)^n = \sum_{k=0}^{n} 2^k \binom{n}{k} (-x)^k,  for which P_n(-1) = 3^n, M_n = 1.

It leads to the acceleration |S_n - S| \le S/3^n, where

    S_n = (1/3^n) \sum_{k=0}^{n-1} (-1)^k c_k a_k,   c_k = \sum_{j=k+1}^{n} 2^j \binom{n}{j}.
- Chebyshev's polynomials (see [1]), shifted to [0,1],
provide a more efficient acceleration. They satisfy the relation

    P_n(x) = T_n(1-2x) = \cos(n \arccos(1-2x)),   x \in [0,1],    (7)

and explicitly write in the form

    P_n(x) = \sum_{j=0}^{n} (n/(n+j)) \binom{n+j}{2j} 4^j (-x)^j.

The relation (7) shows that M_n = 1 and P_n(-1) \ge (1/2)(3+\sqrt{8})^n > 5.828^n/2. This family
leads to the following acceleration process:

    |S_n - S| \le 2S/5.828^n,

where

    S_n = (1/P_n(-1)) \sum_{k=0}^{n-1} (-1)^k c_k a_k,   c_k = \sum_{j=k+1}^{n} (n/(n+j)) \binom{n+j}{2j} 4^j.
- Other families of orthogonal polynomials, such as Legendre's
polynomials or Niven's polynomials, may give interesting accelerations. More
details and results may be found in the very interesting paper [4].
Once the choice of a sequence of polynomials is made, it can be applied to
compute the value of many alternating series, such as

    \log(2) = \sum_{k=0}^{\infty} (-1)^k/(k+1),   a_k = 1/(k+1) = \int_0^1 x^k\,dx;

    \pi/4 = \sum_{k=0}^{\infty} (-1)^k/(2k+1),   a_k = 1/(2k+1) = \int_0^1 x^k (x^{-1/2}/2)\,dx;

    (1 - 2^{1-s})\zeta(s) = \sum_{k=0}^{\infty} (-1)^k/(k+1)^s,   a_k = 1/(k+1)^s = (1/\Gamma(s)) \int_0^1 x^k |\log(x)|^{s-1}\,dx.

Notice that this last method is very efficient and may be used
to compute the value of the Zeta function at values of s with
\Re(s) > 0 (see [3]). Another beautiful alternating
series whose convergence can be accelerated in this way is

    \log(2) (\gamma - (1/2)\log(2)) = \log(2)/2 - \log(3)/3 + \log(4)/4 - \log(5)/5 + \cdots,

where \gamma is Euler's constant (see [6] for a proof of
this formula).
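For the Chebyshev choice, the authors of [4] give a remarkably compact algorithm in which the coefficients c_k are generated by a simple recurrence; here is a Python transcription of it (a sketch, names ours), applied to log(2) = \sum (-1)^k/(k+1):

```python
import math

def alternating_sum_cvz(a, n):
    """Chebyshev-accelerated summation of sum_k (-1)^k a(k), following the
    compact algorithm of Cohen, Rodriguez Villegas and Zagier [4]."""
    d = (3 + math.sqrt(8)) ** n
    d = (d + 1 / d) / 2          # d = P_n(-1) = ((3+sqrt8)^n + (3-sqrt8)^n)/2
    b, c, s = -1.0, -d, 0.0
    for k in range(n):
        c = b - c                # running coefficient (up to sign)
        s += c * a(k)
        b *= (k + n) * (k - n) / ((k + 0.5) * (k + 1))
    return s / d

approx = alternating_sum_cvz(lambda k: 1.0 / (k + 1), 20)
print(abs(approx - math.log(2)))   # error bounded by 2*S/5.828^20
```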
7 Acceleration with the Zeta function
If you know how to evaluate the Zeta function at integer values, there is
an easy and convenient way to transform your original series into a
geometrically converging one (based on [5]). Suppose that you want to estimate
a series which has the form

    S = A + \sum_{k=2}^{\infty} f(1/k),    (8)

where A is a constant and where the analytic function f may be written

    f(z) = \sum_{n=2}^{\infty} a_n z^n;

then

    S = A + \sum_{k=2}^{\infty} \left( \sum_{n=2}^{\infty} a_n/k^n \right) = A + \sum_{n=2}^{\infty} a_n \left( \sum_{k=2}^{\infty} 1/k^n \right),

hence (8) may be transformed to

    S = A + \sum_{n=2}^{\infty} a_n (\zeta(n) - 1).    (9)
Observe that

    \zeta(n) - 1 = \sum_{k=2}^{\infty} 1/k^n \sim 1/2^n,

therefore the transformed series (9) has a geometric rate
of convergence. This rate may be improved if a few terms of the original
series are computed; this time the limit is given by

    S = A + \sum_{k=2}^{M} f(1/k) + \sum_{k=M+1}^{\infty} f(1/k) = A + \sum_{k=2}^{M} f(1/k) + \sum_{n=2}^{\infty} a_n \left( \sum_{k=M+1}^{\infty} 1/k^n \right),
and again

    S = A + f(1/2) + \cdots + f(1/M) + \sum_{n=2}^{\infty} a_n \left( \zeta(n) - 1 - 1/2^n - \cdots - 1/M^n \right),

but this time

    \zeta(n) - 1 - 1/2^n - \cdots - 1/M^n = \zeta(n, M+1) \sim 1/(M+1)^n,

where \zeta(s,a) is the Hurwitz Zeta function. The rate of
convergence remains geometric, but it may be made as large as desired
by taking a large enough value for M.
7.1 Examples
- The first natural example comes with the (almost) defining series
for Euler's constant

    S = \gamma = 1 + \sum_{k=2}^{\infty} \left( 1/k + \log(1 - 1/k) \right);

here

    f(z) = z + \log(1-z) = -\sum_{n=2}^{\infty} z^n/n,

and the transformed series is

    \gamma = 1 - \sum_{n=2}^{\infty} (\zeta(n) - 1)/n.
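Since \zeta(n) - 1 can itself be computed cheaply (here by a direct sum plus a short Euler-Maclaurin tail correction, a rough sketch with parameters of our choosing), this transformed series gives \gamma to high accuracy in a few dozen terms:

```python
def zeta_minus_1(n, K=200):
    # zeta(n) - 1 = sum_{k>=2} 1/k^n; the tail beyond K is approximated by
    # an Euler-Maclaurin correction, accurate to roughly 1e-13 for these K, n.
    s = sum(float(k) ** -n for k in range(2, K + 1))
    return s + K ** (1 - n) / (n - 1) - K ** -n / 2 + n * K ** (-n - 1) / 12

# gamma = 1 - sum_{n>=2} (zeta(n)-1)/n; the terms decay like 1/2^n
gamma = 1.0 - sum(zeta_minus_1(n) / n for n in range(2, 60))
print(gamma)    # 0.57721566... (Euler's constant)
```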
- Another interesting series is Mercator's relation:

    S = \log(2) = 1 - 1/2 + 1/3 - 1/4 + \cdots = 1/2 + \sum_{k=2}^{\infty} 1/((2k-1)2k),

and this time

    f(z) = z^2/(2(2-z)) = \sum_{n=2}^{\infty} z^n/2^n,

giving

    \log(2) = 1/2 + \sum_{n=2}^{\infty} (\zeta(n) - 1)/2^n,

and if we compute two more terms,

    \log(2) = 37/60 + \sum_{n=2}^{\infty} (\zeta(n) - 1 - 1/2^n - 1/3^n)/2^n.
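The improvement is easy to see numerically; this Python sketch (evaluating the Hurwitz tail \zeta(n, 4) directly, with arbitrary cut-offs of our choosing) recovers log(2) from the 37/60 formula:

```python
import math

def hurwitz_tail(n, M=3, K=200):
    # zeta(n) - 1 - 1/2^n - ... - 1/M^n = zeta(n, M+1) = sum_{k>M} 1/k^n,
    # with an Euler-Maclaurin correction for the tail beyond K (a sketch).
    s = sum(float(k) ** -n for k in range(M + 1, K + 1))
    return s + K ** (1 - n) / (n - 1) - K ** -n / 2 + n * K ** (-n - 1) / 12

log2 = 37.0 / 60 + sum(hurwitz_tail(n) / 2 ** n for n in range(2, 50))
print(abs(log2 - math.log(2)))   # the terms now decay like (1/8)^n
```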

- A check relation
Observe that if we take for the function f the coefficients a_n = 1 for every n,

    f(z) = \sum_{n=2}^{\infty} z^n = 1/(1-z) - 1 - z = z^2/(1-z),

so that, for k > 1,

    f(1/k) = 1/(k(k-1)) = 1/(k-1) - 1/k,

and because clearly

    S = \sum_{k=2}^{\infty} f(1/k) = \sum_{k=2}^{\infty} \left( 1/(k-1) - 1/k \right) = 1,

we find the relation

    \sum_{n=2}^{\infty} (\zeta(n) - 1) = 1.

With the same method come the two other relations

    \sum_{n=1}^{\infty} (\zeta(2n) - 1) = 3/4,   \sum_{n=1}^{\infty} (\zeta(2n+1) - 1) = 1/4,

which may be used to check the evaluations of the Zeta function at
consecutive integer values.
References
 [1] M. Abramowitz and I. Stegun, Handbook of Mathematical Functions, Dover, New York, (1964)
 [2] A.C. Aitken, On Bernoulli's numerical solution of algebraic equations, Proc. Roy. Soc. Edinburgh, (1926), vol. 46, pp. 289-305
 [3] P. Borwein, An efficient algorithm for the Riemann Zeta function, (1995)
 [4] H. Cohen, F. Rodriguez Villegas and D. Zagier, Convergence acceleration of alternating series, Bonn, (1991)
 [5] P. Flajolet and I. Vardi, Zeta Function Expansions of Classical Constants, (1996)
 [6] X. Gourdon and P. Sebah, Numbers, Constants and Computation, World Wide Web site at the address http://numbers.computation.free.fr/Constants/constants.html, (1999)
 [7] R.L. Graham, D.E. Knuth and O. Patashnik, Concrete Mathematics, Addison-Wesley, (1994)
 [8] K. Knopp, Theory and Application of Infinite Series, Blackie & Son, London, (1951)
 [9] C. Maclaurin, A Treatise of Fluxions, Edinburgh, (1742)
 [10] L.F. Richardson, The Deferred Approach to the Limit, Philosophical Transactions of the Royal Society of London, (1927), series A, vol. 226
 [11] P. Wynn, On a device for computing the e_m(S_n) transformation, MTAC, (1956), vol. 10, pp. 91-96
File translated from TeX by TtH, version 3.01, on 8 Jan 2002, 16:57.