Modern iterative algorithms are among the most efficient ways to compute billions of digits of π. The first of these algorithms were discovered during the 1970s. We give, without proof, a selection and brief description of some important iterations.
Before the discovery of the extremely fast modern iterations, a few algorithms were already known, but with slow rates of convergence.
Based on the well-known formula discovered by François Viète (1540–1603)

an algorithm may be written as

and

then

It converges at a geometric rate and, for example, it produces

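The formulas above were lost in extraction, but Viète's product has a standard nested-radical form, c_1 = √2, c_{k+1} = √(2 + c_k), π = 2·∏(2/c_k), which can be sketched in double-precision Python (the variable names are mine):

```python
import math

def viete(iterations):
    """Approximate pi via Viete's nested-radical product.

    Each factor 2/c_k, with c_1 = sqrt(2) and c_{k+1} = sqrt(2 + c_k),
    multiplies the running estimate; convergence is geometric.
    """
    c = 0.0
    pi = 2.0
    for _ in range(iterations):
        c = math.sqrt(2.0 + c)   # nested radical sqrt(2 + sqrt(2 + ...))
        pi *= 2.0 / c            # one more factor of Viete's product
    return pi

print(viete(30))   # close to math.pi, limited only by double-precision rounding
```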
Archimedes' method is equivalent to the Pfaff algorithm published in 1800 (Pfaff was Gauss's teacher).

starting with

then

again, the rate of convergence is geometric, with an error O(4^{-k}) (that is, roughly five iterations are needed to obtain three new digits). It can be accelerated by

which also converges geometrically, but at a faster rate (the error is O(16^{-k})). The first iterates of the two sequences are

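Since the formulas were lost, here is a double-precision sketch assuming the standard form of the Archimedes/Pfaff iteration: harmonic and geometric means starting from the hexagon (circumscribed half-perimeter a_0 = 2√3, inscribed b_0 = 3), together with the accelerated combination (2b + a)/3:

```python
import math

def archimedes(iterations):
    """Pfaff's form of Archimedes' method: harmonic and geometric means.

    a_k (circumscribed) decreases to pi and b_k (inscribed) increases to pi,
    both with error O(4^-k); the combination (2*b + a)/3 cancels the leading
    error term and converges with error O(16^-k).
    """
    a = 2.0 * math.sqrt(3.0)   # 6*tan(pi/6): circumscribed hexagon
    b = 3.0                    # 6*sin(pi/6): inscribed hexagon
    for _ in range(iterations):
        a = 2.0 * a * b / (a + b)   # harmonic mean
        b = math.sqrt(a * b)        # geometric mean
    return a, b, (2.0 * b + a) / 3.0

a, b, accel = archimedes(10)   # accel is far more accurate than a or b
```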
There are a few quadratic algorithms converging to the constant π. Quadratic convergence means that the number of correct digits is doubled at each iteration. The existence of such extraordinarily rapid algorithms using only basic operations (multiplication, addition, division) and square root extraction is a remarkable fact.
The first known quadratic algorithm was published in 1976 by Eugene Salamin [6] and was discovered independently by Richard Brent the same year [5]. It is based on the real arithmetic-geometric mean (AGM) iteration.
Algorithm

with the initial values

then

The first four iterations are

and computing π_{30} would provide more than a billion digits of π! So the computation of only 30 square roots and fewer than a hundred multiplications is required to find a huge number of digits.
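The missing formulas can be reconstructed from the standard statement of the Salamin-Brent (Gauss-Legendre) iteration: a_0 = 1, b_0 = 1/√2, t_0 = 1/4, p_0 = 1, with arithmetic and geometric means at each step. A sketch using Python's decimal module for the multiprecision arithmetic (the function name and the choice of 10 guard digits are mine):

```python
from decimal import Decimal, getcontext

def salamin_brent(iterations, digits):
    """AGM-based quadratic iteration of Salamin and Brent.

    a_n and b_n follow the arithmetic-geometric mean iteration;
    the estimate (a + b)^2 / (4*t) doubles its correct digits each step.
    """
    getcontext().prec = digits + 10        # working precision with guard digits
    a = Decimal(1)
    b = Decimal(2).sqrt() / 2              # 1/sqrt(2)
    t = Decimal(1) / 4
    p = Decimal(1)
    for _ in range(iterations):
        a_next = (a + b) / 2               # arithmetic mean
        b = (a * b).sqrt()                 # geometric mean
        t -= p * (a - a_next) ** 2
        p *= 2
        a = a_next
    return (a + b) ** 2 / (4 * t)

pi_100 = salamin_brent(7, 100)   # 7 iterations give well over 100 digits
```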
This second algorithm is due to Jonathan and Peter Borwein and was published in 1984 [1].
Algorithm

starting with

then

The rate of convergence is also quadratic and the first iterations are

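A sketch of what I believe is the 1984 Borwein iteration in question (x_0 = √2, π_0 = 2 + √2, y_1 = 2^{1/4}; π_n decreases monotonically to π), again using Python's decimal module:

```python
from decimal import Decimal, getcontext

def borwein_quadratic(iterations, digits):
    """Quadratic iteration of J. and P. Borwein (1984).

    x_{n+1} = (sqrt(x_n) + 1/sqrt(x_n)) / 2,
    y_{n+1} = (y_n*sqrt(x_n) + 1/sqrt(x_n)) / (y_n + 1)  (for n >= 1),
    pi_{n+1} = pi_n * (x_{n+1} + 1) / (y_{n+1} + 1).
    """
    getcontext().prec = digits + 10
    x = Decimal(2).sqrt()
    pi = 2 + x                         # pi_0 = 2 + sqrt(2)
    y = x.sqrt()                       # y_1 = 2^(1/4)
    for n in range(iterations):
        sx = x.sqrt()
        if n > 0:                      # y_1 is given; update y from n = 1 on
            y = (y * sx + 1 / sx) / (y + 1)
        x = (sx + 1 / sx) / 2
        pi = pi * (x + 1) / (y + 1)
    return pi
```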
In [2], the following algorithm is given:
Algorithm

starting with

then

The rate of convergence is again quadratic (the number of digits is doubled at each iteration). The first iterates are given by

and 1/a_{5} has 40 correct digits, 1/a_{6} has 83 correct digits, ...
In [3], a cubic algorithm is described:
Algorithm

with the initial values

then

and the rate of convergence is now cubic (the number of digits is tripled at each iteration). Again, let us take a look at the first iterations

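A cubic iteration of this kind due to the Borweins (possibly the one intended here, since the formulas were lost) starts from a_0 = 1/3, s_0 = (√3 − 1)/2, with 1/a_n converging to π. A sketch with Python's decimal module:

```python
from decimal import Decimal, getcontext

def borwein_cubic(iterations, digits):
    """Cubic iteration of J. and P. Borwein: 1/a_n -> pi, digits tripled.

    r = 3 / (1 + 2*(1 - s^3)^(1/3)),  s <- (r - 1)/2,
    a <- r^2 * a - 3^n * (r^2 - 1).
    """
    getcontext().prec = digits + 10
    one_third = Decimal(1) / 3
    a = one_third                            # a_0 = 1/3
    s = (Decimal(3).sqrt() - 1) / 2          # s_0 = (sqrt(3) - 1)/2
    pow3 = Decimal(1)                        # running power 3^n
    for _ in range(iterations):
        cbrt = (1 - s ** 3) ** one_third     # cube root via fractional power
        r = 3 / (1 + 2 * cbrt)
        s = (r - 1) / 2
        a = r * r * a - pow3 * (r * r - 1)
        pow3 *= 3
    return 1 / a
```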
Combining two iterations of the third quadratic algorithm gives a new iteration [2]:
Algorithm

starting with

then

and it has the impressive quartic rate of convergence (each successive iteration quadruples the number of correct digits). The first two iterates are

This algorithm is among the most efficient and is often used together with a quadratic one to compute billions of digits of the constant π (one is used for the computation, the other to check the result of the first).
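The Borweins' quartic iteration, which appears to be the one described here, starts from y_0 = √2 − 1, a_0 = 6 − 4√2, with 1/a_n converging to π. A sketch with Python's decimal module:

```python
from decimal import Decimal, getcontext

def borwein_quartic(iterations, digits):
    """Quartic iteration of J. and P. Borwein: 1/a_n -> pi, digits x4.

    y <- (1 - (1 - y^4)^(1/4)) / (1 + (1 - y^4)^(1/4)),
    a <- a*(1 + y)^4 - 2^(2n+3) * y * (1 + y + y^2).
    """
    getcontext().prec = digits + 10
    sqrt2 = Decimal(2).sqrt()
    y = sqrt2 - 1                        # y_0 = sqrt(2) - 1
    a = 6 - 4 * sqrt2                    # a_0 = 6 - 4*sqrt(2)
    quarter = Decimal(1) / 4
    for n in range(iterations):
        root = (1 - y ** 4) ** quarter   # fourth root via fractional power
        y = (1 - root) / (1 + root)
        a = a * (1 + y) ** 4 - Decimal(2) ** (2 * n + 3) * y * (1 + y + y * y)
    return 1 / a
```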
In [4], this remarkable algorithm is given:
Algorithm

starting with

we have

and the rate of convergence is quintic (the number of digits is multiplied by five at each iteration).

More complex algorithms of higher order are available, such as septic (order 7) or nonic (order 9) algorithms (in the nonic iteration the number of correct digits is multiplied by 9 at each step!).
Starting with any approximation of π, the following algorithm, given by Beeler in 1972, has cubic convergence

and for example using Archimedes' approximation

it gives

and a_{3} has 88 correct digits,...
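The Beeler iteration is simply a_{n+1} = a_n + sin(a_n): writing a_n = π − e, we have sin(a_n) = e − e³/6 + …, so the error is cubed (up to a factor 1/6) at each step. A double-precision sketch:

```python
import math

def beeler(x, iterations):
    """Cubic iteration x <- x + sin(x), converging to pi.

    With x = pi - e, one step gives x + sin(x) = pi - e^3/6 + O(e^5),
    so each step roughly triples the number of correct digits.
    """
    for _ in range(iterations):
        x = x + math.sin(x)
    return x

print(beeler(3.0, 3))   # already at the limit of double precision
```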
It is possible to find algorithms of this kind at any order, and the following iterations have, respectively, quintic, septic, nonic, ... orders of convergence

the coefficients are those of the series expansion of the arcsine function. If one prefers to evaluate tangent functions, we also have

with respectively cubic, quintic, septic and nonic convergence; the coefficients are those of the expansion of the arctangent function. For example, starting with the famous approximation a_{0} = 355/113, just one iteration of the nonic formula gives 60 correct digits of π! Similar formulae exist with cosines.
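Truncating the arctangent series after its first term gives the cubic member of this family, a_{n+1} = a_n − tan(a_n), which is Newton's method applied to sin x = 0. A double-precision sketch (for a starting point near π, so that the iteration converges to π):

```python
import math

def newton_sin(x, iterations):
    """Cubic iteration x <- x - tan(x), converging to pi for x near pi.

    Writing x = pi + e, tan(x) = e + e^3/3 + ..., so
    x - tan(x) = pi - e^3/3 + O(e^5): the error is cubed at each step.
    """
    for _ in range(iterations):
        x = x - math.tan(x)
    return x

# One step from 355/113 (error ~2.7e-7) already exhausts double precision,
# since the new error ~e^3/3 is about 6e-21.
print(newton_sin(355 / 113, 1))
```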
The major drawback of these algorithms is the need to evaluate the sine or tangent function to a given accuracy, which is not so easy.