The story of logarithms really began with the Scot John Napier (1550-1617) in a work published in 1614. This treatise was written in Latin and entitled Mirifici logarithmorum canonis descriptio [7]. However, it should be pointed out that the Swiss clockmaker Jost Bürgi (1552-1632) independently invented logarithms, but his work remained unpublished until 1620.
Thanks to the possibility of replacing painful multiplications and divisions by additions and subtractions respectively, this invention received an extraordinary welcome and spread rapidly on the Continent (in particular thanks to the enthusiasm of astronomers like Kepler). George Gibson wrote during the Napier Tercentenary Exhibition: ''the invention of logarithms marks an epoch in the history of science''.
Very soon, tables of logarithms were published. One of the first was due to the English mathematician Henry Briggs (1561-1631) who, in 1624, published his Arithmetica logarithmica [4]; the tables gave, to an accuracy of fourteen digits, the common logarithms of the integers up to 20,000 and from 90,000 to 101,000. The remaining gap was completed [12] by the Dutchman Adrian Vlacq (1600-1667) in 1628.
Mathematicians prefer to use the so-called natural or hyperbolic logarithm of a number (denoted log or ln), that is, logarithms to base e=2.7182818284..., and the following definition makes it easy to derive the main properties of logarithms.
Definition 1 Let x > 0,
$$\log x=\int_{1}^{x}\frac{dt}{t}.$$
It may be interpreted as the area under the hyperbola y=1/t with t going from 1 to x (this geometric interpretation was shown by the Jesuit Grégoire de Saint-Vincent (1584-1667) in 1647).
Therefore, log 2 will be defined by
$$\log 2=\int_{1}^{2}\frac{dt}{t}.$$
Theorem 2 log 2 is an irrational and transcendental number.
From the integral representation of log 2 we may deduce some formulae. The first one is deduced from the rectangular approximation of the integral (here h=(b-a)/n)
$$\int_{a}^{b}f(t)\,dt\approx h\sum_{k=1}^{n}f(a+kh),$$
which, applied to f(t)=1/t on [1,2], gives
$$\log 2\approx\sum_{k=1}^{n}\frac{1}{n+k}.$$
This sequence is increasing. Here are the numerical values of some partial sums for different values of n.
|
The convergence is very slow, but it can be improved if we use the trapezoidal rule
$$\int_{a}^{b}f(t)\,dt\approx h\left(\frac{f(a)}{2}+\sum_{k=1}^{n-1}f(a+kh)+\frac{f(b)}{2}\right),$$
which, for log 2, becomes
$$\log 2\approx\frac{3}{4n}+\sum_{k=1}^{n-1}\frac{1}{n+k},$$
and some iterates are
|
The rate of convergence has been improved but it remains logarithmic. We now give a last improvement on the computation of the integral, known as Simpson's rule (n even)
$$\int_{a}^{b}f(t)\,dt\approx\frac{h}{3}\Bigl(f(a)+4f(a+h)+2f(a+2h)+4f(a+3h)+\cdots+4f(b-h)+f(b)\Bigr),$$
|
and the new iterates
|
From those examples we can see that such formulae may only be used to compute a few digits of log 2; the rate of convergence is far too slow to obtain the constant with high precision.
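To make the comparison concrete, here is a small numerical sketch (in Python, with illustrative function names of our own) of the three quadrature rules written above, applied to the integral defining log 2:

    import math

    def f(t):
        return 1.0 / t            # integrand of the log 2 integral

    def rectangle(n):
        # right-endpoint rectangles on [1, 2]: this is the sum of 1/(n+k)
        h = 1.0 / n
        return h * sum(f(1 + k * h) for k in range(1, n + 1))

    def trapezoid(n):
        h = 1.0 / n
        return h * (0.5 * f(1) + sum(f(1 + k * h) for k in range(1, n)) + 0.5 * f(2))

    def simpson(n):
        # composite Simpson rule; n must be even
        h = 1.0 / n
        s = f(1) + f(2)
        s += 4 * sum(f(1 + k * h) for k in range(1, n, 2))
        s += 2 * sum(f(1 + k * h) for k in range(2, n, 2))
        return h * s / 3

    for n in (10, 100, 1000):
        # signed errors of the three rules
        print(n, rectangle(n) - math.log(2), trapezoid(n) - math.log(2), simpson(n) - math.log(2))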
It is of interest to observe that logarithms may also be computed by some very simple formulae that are not directly related to a geometrical area and that involve only square root extractions.
We know that the constant e may be defined by the famous limit
$$e=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{n},$$
and it is interesting to note that a similar formula exists for the logarithm constant:
$$\log 2=\lim_{n\to\infty}n\left(2^{1/n}-1\right)$$
(when n=2^k, the value 2^{1/n} is obtained by k successive square root extractions);
it is obviously deduced from the expansion
$$2^{1/n}=e^{(\log 2)/n}=1+\frac{\log 2}{n}+O\!\left(\frac{1}{n^{2}}\right).$$
With n=1,000, we find
|
A better formula is given by
|
again, with n=1,000
|
the error is O(1/n^2). To conclude this section, we give the unusual infinite product published by Seidel in 1871
$$\frac{x-1}{\log x}=\prod_{k=1}^{\infty}\frac{1+x^{1/2^{k}}}{2},\qquad x>0,\ x\neq1;$$
we apply it for x=2:
$$\log 2=\prod_{k=1}^{\infty}\frac{2}{1+2^{1/2^{k}}}.$$
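As an illustration, the following small sketch evaluates, in ordinary double precision, both the square-root based limit and Seidel's product in the form reconstructed above (the helper names are ours):

    import math

    def log2_by_roots(k):
        # log 2 ~ n*(2**(1/n) - 1) with n = 2**k; 2**(1/n) is obtained
        # by k successive square roots
        y = 2.0
        for _ in range(k):
            y = math.sqrt(y)
        return (2 ** k) * (y - 1.0)

    def log2_by_seidel(terms):
        # Seidel's product: log 2 = prod_{k>=1} 2 / (1 + 2**(1/2**k))
        p = 1.0
        y = 2.0
        for _ in range(terms):
            y = math.sqrt(y)
            p *= (1.0 + y) / 2.0
        return 1.0 / p

    print(log2_by_roots(20) - math.log(2))    # error decreases like 1/n, with n = 2**20
    print(log2_by_seidel(40) - math.log(2))   # each additional factor roughly halves the error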
In 1668, Nicolas Mercator (1620-1687) published in his Logarithmotechnia [6] the well known series expansion for the logarithmic function:
Theorem 3 (Mercator) For -1 < x ≤ 1,
$$\log(1+x)=x-\frac{x^{2}}{2}+\frac{x^{3}}{3}-\cdots=\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k}x^{k}.$$
The Mercator series is very similar to another series studied by James Gregory (1638-1675) at the same time:
$$\arctan x=x-\frac{x^{3}}{3}+\frac{x^{5}}{5}-\cdots=\sum_{k=0}^{\infty}\frac{(-1)^{k}}{2k+1}x^{2k+1}.$$
One of the first mathematicians to use Mercator's series was Isaac Newton (1643-1727), who made various computations of logarithms in the celebrated ''De Methodis Serierum et Fluxionum'', written in 1671 but only published in 1736, in John Colson's English translation from the Latin. He made several accurate computations for small values of x, like ±0.1, ±0.2, and was then able to compute logarithms of larger numbers using relations like log 2 = 2log(1.2) - log(0.9) - log(0.8).
Applying the Mercator series with x=1 leads to the famous and very slowly convergent series
$$\log 2=1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots=\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k},$$
whose first values, taking respectively 10, 100, 1000 and 10000 terms, are:
|
A much better idea is to apply the series with x=-1/2; this produces a formula that was often used for numerical computation:
$$\log 2=\sum_{k=1}^{\infty}\frac{1}{k\,2^{k}}.\qquad(1)$$
The rate of convergence is now geometric: each term gains about log10(2) ≈ 0.3 decimal digits, so roughly 10/3 additional terms are needed to obtain one more digit.
|
Applying the formula up to k=1000 gives more than 300 digits for log 2. The number of iterations k needed to obtain d decimal digits is given by (with N=2 in the previous formula)
$$k\log_{10}N+\log_{10}k\approx d,$$
which gives for large values of k
$$k\approx\frac{d}{\log_{10}N}.$$
If we use the trivial decomposition 2=(4/3)(3/2) and take the logarithm of both sides of the equality, we find a new formula
$$\log 2=\sum_{k=1}^{\infty}\frac{1}{k}\left(\frac{1}{3^{k}}+\frac{1}{4^{k}}\right).\qquad(2)$$
The partial sum of this series up to k=1000 will give more than 470 digits.
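A possible fixed-point implementation, with Python integers, of relation (1) and of the series obtained from 2=(4/3)(3/2) could look as follows (the routine names and the five guard digits are arbitrary choices of ours):

    def log2_series1(digits):
        # relation (1): log 2 = sum_{k>=1} 1/(k*2**k), fixed point
        scale = 10 ** (digits + 5)          # five guard digits
        total, k = 0, 1
        term = scale // 2                   # scale / 2**k for k = 1
        while term:
            total += term // k
            term //= 2
            k += 1
        return total // 10 ** 5

    def log2_series2(digits):
        # log 2 = sum_{k>=1} (1/3**k + 1/4**k)/k, fixed point
        scale = 10 ** (digits + 5)
        total, k = 0, 1
        t3, t4 = scale // 3, scale // 4
        while t3 or t4:
            total += (t3 + t4) // k
            t3 //= 3
            t4 //= 4
            k += 1
        return total // 10 ** 5

    # both should agree with log 2 to about 50 decimals (printed as integers)
    print(log2_series1(50))
    print(log2_series2(50))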
From the relation log 2 = -(1/2) log(1 - 1/4) + arctanh(1/2) (see the next paragraph for the definition of arctanh), we deduce the relations
|
Those formulae can also be used to compute directly the n-th binary digit of log 2 without computing the previous ones.
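The relations alluded to above are not reproduced here, but the digit-extraction idea can already be illustrated on relation (1): the fractional part of 2^d log 2 can be accumulated term by term, with the large powers of two reduced modulo the denominators k. A rough sketch, accurate for moderate positions only:

    def binary_digits_of_log2(position, count=16):
        # 'count' binary digits of log 2 starting just after 'position',
        # using log 2 = sum_{k>=1} 1/(k*2**k); the earlier digits are never computed
        frac = 0.0
        # terms with k <= position: only the fractional part of 2**(position-k)/k matters
        for k in range(1, position + 1):
            frac += pow(2, position - k, k) / k
            frac -= int(frac)
        # a few terms with k > position (already smaller than 1)
        for k in range(position + 1, position + 60):
            frac += 2.0 ** (position - k) / k
        frac -= int(frac)
        return "".join(str(int(frac * 2 ** (i + 1)) % 2) for i in range(count))

    # the binary expansion of log 2 is 0.10110001011100100001011111110...
    print(binary_digits_of_log2(0))
    print(binary_digits_of_log2(10))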
Using Mercator's series has greatly improved our ability to compute many digits of log 2 but, just as for the number π, it is possible to go further. We recall the definition of the inverse hyperbolic tangent.
Definition 4 Let |x| < 1,
$$\operatorname{arctanh}(x)=\frac{1}{2}\log\frac{1+x}{1-x}=\sum_{k=0}^{\infty}\frac{x^{2k+1}}{2k+1}.$$
For x=1/3, this leads easily to the basic formula log 2 = 2 arctanh(1/3), which is similar to the formula π/4 = arctan(1). We rewrite it as:
$$\log 2=2\sum_{k=0}^{\infty}\frac{1}{(2k+1)\,3^{2k+1}}.$$
Two iterates are then:
|
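A small fixed-point sketch, with Python integers, of the computation of log 2 = 2 arctanh(1/3) (the scaling and the number of guard digits are arbitrary choices):

    def log2_arctanh3(digits):
        # log 2 = 2*arctanh(1/3) = 2 * sum_{k>=0} 1/((2k+1)*3**(2k+1))
        scale = 10 ** (digits + 5)
        power = scale // 3          # scale / 3**(2k+1), starting at k = 0
        total, k = 0, 0
        while power:
            total += power // (2 * k + 1)
            power //= 9             # next odd power of 3
            k += 1
        return 2 * total // 10 ** 5

    # compare with 0.693147180559945309417232121458...
    print(log2_arctanh3(60))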
Applying the formula up to k=1000 gives more than 950 digits for log 2. Is it possible to do better? For π there is a famous formula, known as Machin's formula, given by the celebrated relation:
$$\frac{\pi}{4}=4\arctan\frac{1}{5}-\arctan\frac{1}{239}.$$
The idea is to look for relations like this one for log 2. If we denote by (a_1, a_2, ..., a_n) a set of rational numbers and by (p_1, p_2, ..., p_n) a set of increasing integers, we are looking for relations like
$$\log 2=\sum_{i=1}^{n}a_{i}\operatorname{arctanh}\frac{1}{p_{i}}.$$
We also want the formula to be efficient. A good relation is a compromise between a small number of terms (n must be small) and large values of the p_i. Following Lehmer, we define the efficiency E by
Definition 5 (Lehmer's measure). Let p_i > 0 be a set of n numbers; then
$$E=\sum_{i=1}^{n}\frac{1}{\log_{10}\left(p_{i}^{2}\right)}$$
(the smaller the measure, the more efficient the relation).
For example the efficiency of log 2 = 2 arctanh(1/3) (n=1, p_1=3) is 1.05. With the same definition the efficiency of Machin's formula is 0.926 (n=2, p_1=5, p_2=239), so it is a little bit more efficient than the formula for log 2.
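Assuming Lehmer's measure in the form written above, a tiny helper reproduces the two quoted values:

    from math import log10

    def lehmer_measure(ps):
        # E = sum of 1/log10(p**2); the arctanh(1/p) series gains
        # about log10(p**2) digits per term
        return sum(1 / log10(p * p) for p in ps)

    print(round(lehmer_measure([3]), 3))        # log 2 = 2 arctanh(1/3): about 1.05
    print(round(lehmer_measure([5, 239]), 3))   # Machin's formula: 0.926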
It is possible to show the following decomposition theorem:
Theorem 6 Let p > 1:
|
Applying this theorem for p=3 gives relations with n=2:
|
The efficiencies are respectively E=1.307, E=1.121, E=0.998 and E=0.893. Using other decomposition formulae and computer calculations it is possible to find many other relations of this nature (given in [8]). To illustrate this, we select another formula with 3 terms and two others with 4 terms.
Theorem 8 (1997). The constant log 2 may be obtained by one of the following fast converging series:
$$\log 2=18\operatorname{arctanh}\frac{1}{26}-2\operatorname{arctanh}\frac{1}{4801}+8\operatorname{arctanh}\frac{1}{8749},\qquad(6)$$
$$\log 2=144\operatorname{arctanh}\frac{1}{251}+54\operatorname{arctanh}\frac{1}{449}-38\operatorname{arctanh}\frac{1}{4801}+62\operatorname{arctanh}\frac{1}{8749},\qquad(7)$$
$$\log 2=72\operatorname{arctanh}\frac{1}{127}+54\operatorname{arctanh}\frac{1}{449}+34\operatorname{arctanh}\frac{1}{4801}-10\operatorname{arctanh}\frac{1}{8749}.\qquad(8)$$
The efficiencies are respectively E=0.616, E=0.659 and E=0.689. The two last formulae were used to compute more than 10^8 digits of log 2: one was used for the computation and the other for the verification and, as you can see, only 5 different arctanh values had to be computed because of the similarity of the two formulae. Thanks to relation (6), the record was increased to more than 5·10^8 digits a few years later.
Note that it is also possible to perform separately the computation of each term, making those relations parallelisable.
We give, for example, the first iterates of the second formula (the one with 251):
|
and each iteration will add about 4.8 digits.
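A generic fixed-point evaluator for such arctanh combinations might look as follows; it is applied here to formula (7) in the form reconstructed above (the coefficients were checked algebraically, but the numbering is our reading of the text):

    def arctanh_inv(p, scale):
        # arctanh(1/p) = sum_{k>=0} 1/((2k+1)*p**(2k+1)), in fixed point
        power = scale // p
        total, k = 0, 0
        while power:
            total += power // (2 * k + 1)
            power //= p * p
            k += 1
        return total

    def machin_like(terms, digits):
        # evaluate sum a*arctanh(1/p) for a list of (a, p) pairs
        scale = 10 ** (digits + 10)
        value = sum(a * arctanh_inv(p, scale) for a, p in terms)
        return value // 10 ** 10

    # formula (7) as reconstructed above; each arctanh(1/251) term adds about 4.8 digits
    formula7 = [(144, 251), (54, 449), (-38, 4801), (62, 8749)]
    print(machin_like(formula7, 100))   # about the first 100 decimals of log 2, as an integer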
It may be convenient to look for relations with rational numbers as arguments of the arctanh function, that is, identities of the form:
$$\log 2=\sum_{i=1}^{n}a_{i}\operatorname{arctanh}\frac{m_{i}}{p_{i}},$$
with the m_i being integers not necessarily equal to 1. Lehmer's measure must be adapted to take this into account,
$$E=\sum_{i=1}^{n}\frac{1}{\log_{10}\left(p_{i}^{2}/m_{i}^{2}\right)},$$
but it only gives a rough estimation.
|
with respective efficiencies E=0.784 and E=0.722.
Note that relation (10) was used for several high precision computations of the constant.
Gauss studied the following family of series
$$F(a,b;c;z)=\sum_{k=0}^{\infty}\frac{(a)_{k}(b)_{k}}{(c)_{k}}\frac{z^{k}}{k!},\qquad(a)_{k}=a(a+1)\cdots(a+k-1),$$
where a, b, c are real numbers. The series converges for |z| < 1; on the circle |z|=1, it converges when c > a+b. Those functions are called hypergeometric functions [1]. Many usual functions can be represented as hypergeometric functions with suitable values of a, b, c.
Example 10
$$\log(1+z)=z\,F(1,1;2;-z),\qquad\operatorname{arctanh}(z)=z\,F\!\left(\tfrac{1}{2},1;\tfrac{3}{2};z^{2}\right).$$
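Both representations given in Example 10 (as reconstructed above) can be checked numerically by a direct partial summation of the hypergeometric series; the helper below is a straightforward sketch:

    import math

    def hyp2f1(a, b, c, z, terms=60):
        # partial sum of F(a,b;c;z) = sum_k (a)_k (b)_k / (c)_k * z**k / k!
        total, term = 0.0, 1.0
        for k in range(terms):
            total += term
            term *= (a + k) * (b + k) / ((c + k) * (k + 1)) * z
        return total

    z = 0.5
    print(z * hyp2f1(1, 1, 2, -z), math.log(1 + z))        # log(1+z) = z*F(1,1;2;-z)
    print(z * hyp2f1(0.5, 1, 1.5, z * z), math.atanh(z))   # arctanh(z) = z*F(1/2,1;3/2;z**2)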
In 1797, Johann Friedrich Pfaff (1765-1825), a friend of Gauss, gave the interesting relation
$$F(a,b;c;z)=(1-z)^{-b}\,F\!\left(c-a,b;c;\frac{z}{z-1}\right).$$
If we apply this identity to the arctanh function
$$\operatorname{arctanh}(z)=z\,F\!\left(\tfrac{1}{2},1;\tfrac{3}{2};z^{2}\right),$$
we find
$$\operatorname{arctanh}(z)=\frac{z}{1-z^{2}}\,F\!\left(1,1;\tfrac{3}{2};\frac{z^{2}}{z^{2}-1}\right),$$
giving a new series for this function
$$\operatorname{arctanh}(z)=\frac{z}{1-z^{2}}\sum_{k=0}^{\infty}\frac{(2k)!!}{(2k+1)!!}\left(\frac{-z^{2}}{1-z^{2}}\right)^{k},$$
and, with log 2 = 2 arctanh(1/3), after some easy manipulations comes the series:
Corollary 11
$$\log 2=\frac{3}{4}\sum_{k=0}^{\infty}\frac{(-1)^{k}\,k!}{(2k+1)!!\,4^{k}}.$$
Each new term of this series is obtained by multiplying the previous one by -k/(8k+4), where k=1,2,... The same kind of formula was given by Euler to compute π. One nice application of this corollary is that it makes it feasible to write a tiny code to compute log 2, just as was done for π.
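Here is one possible version of such a tiny code, based directly on the term ratio -k/(8k+4) of Corollary 11 (fixed point with Python integers; the scaling is an arbitrary choice):

    def log2_tiny(digits):
        # Corollary 11: first term 3/4, then each term is the previous one
        # multiplied by -k/(8k+4), k = 1, 2, ...
        scale = 10 ** (digits + 5)          # five guard digits
        term = 3 * scale // 4               # k = 0 term
        total, k, sign = term, 1, 1
        while term:
            term = term * k // (8 * k + 4)  # magnitude of the next term
            sign = -sign
            total += sign * term
            k += 1
        return total // 10 ** 5

    print(log2_tiny(50))                    # first 50 decimals of log 2, as an integer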
The rate of convergence of all the previous series was logarithmic or geometric. It is possible to find for log 2, just as for π, a sequence with quadratic convergence, based on the AGM (Arithmetic-Geometric Mean) (see [2]).
The most classical among the AGM-based quadratically convergent algorithms for log 2 is the following. Starting from a_0 and b_0 > 0, we consider the AGM iteration
$$a_{n+1}=\frac{a_{n}+b_{n}}{2},\qquad b_{n+1}=\sqrt{a_{n}b_{n}},$$
whose common limit is denoted AGM(a_0,b_0):
$$AGM(a_{0},b_{0})=\lim_{n\to\infty}a_{n}=\lim_{n\to\infty}b_{n}.$$
Theorem 12 Let N ≥ 3 and 1/2 ≤ x ≤ 1,
|
(11) |
By choosing x=1/2, this algorithm gives about 2N decimal digits of log 2. The convergence is quadratic and the iteration should be stopped when 2^{n-1}(a_n^2 - b_n^2) is less than 10^{-2N}, which occurs for n ≈ log_2(N).
Notice that this algorithm is not specific to the computation of log 2 and can be used to evaluate log x for any real number x. Using FFT multiplication, its complexity is O(n log^2 n) to obtain n digits of log x.
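The exact statement of (11) is not reproduced above, so the sketch below only illustrates the mechanism with the classical large-argument approximation log s ≈ π/(2 AGM(1, 4/s)), at double precision; it is enough to see the quadratic convergence of the iteration:

    import math

    def agm(a, b, iterations=30):
        # arithmetic-geometric mean; the number of correct digits
        # roughly doubles at each step
        for _ in range(iterations):
            a, b = (a + b) / 2.0, math.sqrt(a * b)
        return a

    def log_agm(s):
        # classical large-argument approximation log s ~ pi/(2*AGM(1, 4/s)),
        # with an error of order (4/s)**2; not claimed to be the exact
        # statement of (11), only the same mechanism
        return math.pi / (2.0 * agm(1.0, 4.0 / s))

    N = 30
    print(log_agm(2.0 ** N) / N)   # recover log 2 from log(2**N) = N*log 2
    print(math.log(2.0))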
Starting with a good approximation of log 2, it is easy to find the logarithm of all the integers thanks to the relation
|
thus we deduce that:
Theorem 13 Let p > 0,
|
This may be useful when you need to compute the logarithms of many successive integers.
Example 14 With p=2, p=4 and p=7
|
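Whatever the exact statement of Theorem 13, identities of this family are easy to check numerically; the following sketch derives log 3, log 5 and log 7 from log 2 with three such identities (chosen by us; they need not be the ones of Example 14):

    import math

    atanh = math.atanh
    log2 = math.log(2.0)                          # assume log 2 is already known

    log3 = 2 * log2 - 2 * atanh(1 / 7)            # since 2*arctanh(1/7) = log(4/3)
    log5 = 4 * log2 - log3 - 2 * atanh(1 / 31)    # since 2*arctanh(1/31) = log(16/15)
    log7 = 2 * log2 + 0.5 * log3 + atanh(1 / 97)  # since 2*arctanh(1/97) = log(49/48)

    for value, n in ((log3, 3), (log5, 5), (log7, 7)):
        print(n, value, value - math.log(n))      # the differences should be about 1e-16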
Integral and series formulae for log 2 can be found in Collection of formulae for log 2.
Number of digits | When | Who | Notes |
16 | ~ 1671 | I. Newton | He used log 2=2log(1.2)-log(0.9)-log(0.8) and Mercator's series for log. |
25 | 1748 | L. Euler | Relation (2) was used. Euler also computed the natural logarithms of the integers from 3 to 10 with 25 digits [5]. |
137 | 1853 | W. Shanks | Shanks also computed log 3, log 5 and log 10 with 137 digits. His work was published with a π calculation [9]. |
273 | 1878 | Adams | Adams also computed log 3, log 5 and log 7 with the same precision. |
330 | 1940 | H. S. Uhler | Uhler also computed log 3, log 5, log 7 and log 17 with the same precision [11]. |
3 683 | 1962 | D.W. Sweeney | The computation of log 2 was necessary for a computation of Euler's constant up to 3566 digits in the same article. The formula used for log 2 was the expansion (1) (see [10]). |
2 015 926 | 1997 | P. Demichel | The computation used a classical approach on formula (2). It took 16 hours on a HP9000/871 at 160MHz. |
5 039 926 | 1997 | P. Demichel | The same technique was used and the computation took 130 hours on the same computer. |
10 079 926 | 1997 | P. Demichel | Same technique, 400 hours on the same computer. |
29 243 200 | 1997 Dec | X. Gourdon | The AGM formula (11) was used (with FFT techniques). The computation was done on an SGI R10000 workstation and took 20 hours 58 minutes. |
58 484 499 | 1997 Dec | X. Gourdon | Same AGM technique, 83 hours on the same computer. |
108 000 000 | 1998 Sep | X. Gourdon | The four-term formula (7) was used together with a binary splitting process. Formula (8) was used for the verification. The computation took 47 hours on a PII 450 with 256 MB. |
200 001 000 | 2001 Sep | X. Gourdon & S. Kondo | Formulae (6) and (5) were used. The computation took 8 hours on an Athlon 1300 with 2 GB. |
240 000 000 | 2001 Sep | X. Gourdon & P. Sebah | Formulae (6) and (10) were used. The computation took 14 hours on a PIV 1400 with 1024 MB. |
500 000 999 | 2001 Sep | X. Gourdon & S. Kondo | Formulae (6) and (10) were used. The computation took 28 hours on a PIV 1400 with 1024 MB. |