The basic numerical data types provided in most languages (C, C++, Fortran, ...) are limited to a given precision which depends on the data type itself and the machine. As a consequence, the computation of a numerical constant with more than (say) 10 or 20 digits requires dedicated routines to handle high precision numbers.
Several mathematical software packages (Maple, Mathematica, ...) directly offer the possibility of working with unlimited precision. They can be used, for example, to prototype algorithms that compute constants, but they are highly inefficient since they are not dedicated to numerical computation. Computer libraries (in C or C++ for example) exist that are much more efficient for practical use (see links below). Most of them are dedicated to computations with a reasonable number of digits (say, less than several thousand) and suffer from the fact that efficient algorithms on very large numbers are not implemented. A few isolated packages are dedicated to the computation of huge numbers.
Nevertheless, it remains of interest to know how to handle high precision numbers, either to write your own package and have more control over it (local optimizations, memory management, ...) or to get an idea of the complexity of the problem.
Arbitrary precision computation first requires a choice of representation for large numbers.
Usually, a big integer N is handled through its representation in a given base B (B usually depends on the maximal size of the basic data types):

    N = x_{0} + x_{1}B + x_{2}B^{2} + ... + x_{n}B^{n},   0 <= x_{i} < B.   (*)
The choice of the base B is important for several reasons: a larger base reduces the number of digits n needed to store a given number (and thus the number of elementary operations); B must be small enough that operations on the coefficients (in particular the products arising during multiplication) fit in the basic data type; and taking B as a power of 10 makes conversion to and from decimal notation much easier.
As for the choice of the basic data type used to store the coefficients x_{i}, it seems most natural to take the type long (in C). Nevertheless, long is usually a 32-bit data type, and using the C double type (usually 53 bits of precision) provides a larger basic storage. Moreover, some processors (like the Pentium) are often optimized for operations on double, making this choice better on such architectures.
Big negative integers can be represented either as big integers with an associated sign, or only in the form (*) with x_{n} < 0 and 0 <= x_{i} < B for i < n.
Real numbers can be represented in two ways: either in a fixed point representation

    x = N/B^{k},

where N is a big integer and the scaling exponent k is the same for all numbers, or in a floating point representation

    x = N B^{e},

where N is a big integer (the mantissa) and the exponent e depends on the number x.
The basic operations on large numbers are the sum, difference and product of large numbers, and the division by a short integer. The division by a large number is treated in the next section.


Starting from i = 0, it consists essentially of Euclidean divisions of x_{i} by B and reporting the carry on x_{i+1}.
Given two big integers X and Y in canonical form

    X = x_{0} + x_{1}B + ... + x_{n}B^{n},   Y = y_{0} + y_{1}B + ... + y_{n}B^{n},

their sum X + Y is obtained by adding the coefficients x_{i} + y_{i} and normalizing the result with the carry propagation process just described.
Let q be an integer such that qB fits in the basic storage data type of the coefficients. The product of a large integer X by q is obtained by multiplying each x_{i} by q and reporting the carry, starting at i = 0.


For large integers of size n, the operations +, - and the product or division by a small integer all have a complexity of O(n). That means that the number of basic operations on the basic data storage type is proportional to n. As for the classical product, the complexity is O(n^{2}) for multiplying two integers of size n. When n becomes big, this cost becomes very large compared to O(n). When handling huge integers, it is then of interest to find a faster algorithm for multiplication.
Karatsuba was the first to observe that multiplication of large integers can be made faster than O(n^{2}).
The technique is recursive. Starting from two large integers X and Y of size n, we cut X and Y in two parts

    X = X_{0} + X_{1}B^{m},   Y = Y_{0} + Y_{1}B^{m},   m = n/2,

where X_{0}, X_{1}, Y_{0}, Y_{1} are of size m. The product XY can then be recovered from only three products of integers of size m, instead of the four of the classical method, thanks to the identity

    XY = X_{0}Y_{0} + [(X_{0}+X_{1})(Y_{0}+Y_{1}) - X_{0}Y_{0} - X_{1}Y_{1}]B^{m} + X_{1}Y_{1}B^{2m}.
The operation can be continued recursively, so finally the cost C(n) satisfies

    C(n) = 3C(n/2) + O(n),

which gives

    C(n) = O(n^{a}),   a = log(3)/log(2) = 1.585...,

much better than the O(n^{2}) of the classical product.
Some care should be taken in the implementation of the Karatsuba multiplication in order to make it really efficient. The recursion should be stopped at a size m that depends on the implementation and the machine; under size m, the classical product is used. In practice, Karatsuba multiplication starts to outperform the classical multiplication on integers with more than several hundred digits.
The Karatsuba algorithm can be generalized: instead of cutting in two parts, one can cut in k parts and perform 2k-1 products to recover the final product XY, leading to a complexity of O(n^{a}) with a = log(2k-1)/log(k). Nevertheless, these generalizations are difficult to implement and often worse than the FFT method presented below.
The product of two large integers of size n can be done in time O(n log(n)) thanks to Fast Fourier Transform techniques. See FFT based multiplication of large numbers for more details.
GMP home page : the most popular multiprecision package, with impressive performance on large numbers.
CLN : The multiprecision number library CLN from Bruno Haible.
Multiprecision page from Paul Zimmermann. Includes links and comparisons between several multiprecision packages.