Consider the problem of computing the exponential of a given number. We would like a procedure that takes as arguments a base b and a positive integer exponent n and computes b^n. One way to do this is via the recursive definition

    b^n = b · b^(n-1)
    b^0 = 1
which translates readily into the procedure
(define (expt b n)
  (if (= n 0)
      1
      (* b (expt b (- n 1)))))
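A hand trace of (expt 2 4) (added here as an illustration; it is not output of the interpreter) makes the shape of the process visible: each call defers a multiplication, so a chain of pending operations builds up before contracting.

```scheme
; Hand reduction of (expt 2 4): the chain of deferred
; multiplications grows in step with the exponent.
(expt 2 4)
(* 2 (expt 2 3))
(* 2 (* 2 (expt 2 2)))
(* 2 (* 2 (* 2 (expt 2 1))))
(* 2 (* 2 (* 2 (* 2 (expt 2 0)))))
(* 2 (* 2 (* 2 (* 2 1))))
16
```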
This is a linear recursive process, which requires Θ(n) steps and Θ(n) space. Just as with factorial, we can readily formulate an equivalent linear iteration:
(define (expt b n)
  (expt-iter b n 1))

(define (expt-iter b counter product)
  (if (= counter 0)
      product
      (expt-iter b
                 (- counter 1)
                 (* b product))))
This version requires Θ(n) steps and Θ(1) space.
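A corresponding hand trace of the iterative version (again an added illustration, not interpreter output) shows the entire state carried in the three arguments, with no deferred operations:

```scheme
; Hand trace of (expt 2 4): counter falls, product accumulates.
(expt 2 4)
(expt-iter 2 4 1)
(expt-iter 2 3 2)
(expt-iter 2 2 4)
(expt-iter 2 1 8)
(expt-iter 2 0 16)
16
```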
We can compute exponentials in fewer steps by using successive squaring. For instance, rather than computing b^8 as

    b · (b · (b · (b · (b · (b · (b · b))))))
we can compute it using three multiplications:

    b^2 = b · b
    b^4 = b^2 · b^2
    b^8 = b^4 · b^4
This method works fine for exponents that are powers of 2. We can also take advantage of successive squaring in computing exponentials in general if we use the rule

    b^n = (b^(n/2))^2      if n is even
    b^n = b · b^(n-1)      if n is odd
We can express this method as a procedure:
(define (fast-expt b n)
  (cond ((= n 0) 1)
        ((even? n) (square (fast-expt b (/ n 2))))
        (else (* b (fast-expt b (- n 1))))))
where the predicate to test whether an integer is even is defined in terms of the primitive procedure remainder by
(define (even? n)
  (= (remainder n 2) 0))
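A hand reduction of (fast-expt 2 10) (an added illustration, assuming the square procedure of section 1.1.4) shows how the exponent collapses by halving:

```scheme
; Hand reduction: even exponents halve, odd exponents decrement.
(fast-expt 2 10)
(square (fast-expt 2 5))
(square (* 2 (fast-expt 2 4)))
(square (* 2 (square (fast-expt 2 2))))
(square (* 2 (square (square (fast-expt 2 1)))))
(square (* 2 (square (square (* 2 1)))))
(square (* 2 (square (square 2))))
(square (* 2 (square 4)))
(square (* 2 16))
(square 32)
1024
```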
The process evolved by fast-expt grows logarithmically with n
in both space and number of steps. To see this, observe that
computing b^(2n) using fast-expt requires only one more
multiplication than computing b^n. The size of the exponent we can
compute therefore doubles (approximately) with every new
multiplication we are allowed. Thus, the number of multiplications
required for an exponent of n grows about as fast as the logarithm
of n to the base 2. The process has Θ(log n) growth.

Note: More precisely, the number of multiplications required is equal to 1 less than the log base 2 of n plus the number of ones in the binary representation of n. This total is always less than twice the log base 2 of n. The arbitrary constants k1 and k2 in the definition of order notation imply that, for a logarithmic process, the base to which logarithms are taken does not matter, so all such processes are described as Θ(log n).
The difference between Θ(log n) growth and Θ(n) growth becomes striking as n becomes large. For example, fast-expt for n = 1000 requires only 14 multiplications. (Note: You may wonder why anyone would care about raising numbers to the 1000th power. See section 1.2.6.) It is also possible to use the idea of
successive squaring to devise an iterative algorithm that computes
exponentials with a logarithmic number of steps
(see exercise 1.16), although, as is often the case with iterative algorithms, this is not written down so straightforwardly as the recursive algorithm.

Note: This iterative algorithm is ancient. It appears in the Chandah-sutra by Áchárya Pingala, written before 200 B.C. See Knuth 1981, section 4.6.3, for a full discussion and analysis of this and other methods of exponentiation.
Exercise 1.17. The exponentiation algorithms in this section are based on performing exponentiation by means of repeated multiplication. In a similar way, one can perform integer multiplication by means of repeated addition. The following multiplication procedure (in which it is assumed that our language can only add, not multiply) is analogous to the expt procedure:
(define (* a b)
  (if (= b 0)
      0
      (+ a (* a (- b 1)))))
This algorithm takes a number of steps that is linear in b. Now suppose we include, together with addition, operations double, which doubles an integer, and halve, which divides an (even) integer by 2. Using these, design a multiplication procedure analogous to fast-expt that uses a logarithmic number of steps.
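As a sketch only (this is the substance of the exercise, so the shape below is one possibility rather than the intended answer; the names double, halve, and fast-mul are illustrative), the successive-doubling idea fits the fast-expt template directly:

```scheme
; Sketch: multiplication by doubling and halving, following the
; fast-expt template. double and halve are defined here so the
; sketch is self-contained.
(define (double x) (+ x x))
(define (halve x) (/ x 2))

(define (fast-mul a b)
  (cond ((= b 0) 0)
        ((even? b) (double (fast-mul a (halve b))))  ; a*b = 2*(a*(b/2))
        (else (+ a (fast-mul a (- b 1))))))          ; a*b = a + a*(b-1)
```

Like fast-expt, this halves b at every other step, so the number of additions grows as Θ(log b).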
Exercise 1.19. There is a clever algorithm for computing the Fibonacci numbers in a logarithmic number of steps. Recall the transformation of the state variables a and b in the fib-iter process of section 1.2.2: a ← a + b and b ← a. Call this transformation T, and observe that applying T over
and over again n times, starting with 1 and 0, produces the pair
Fib(n + 1) and Fib(n). In other words, the Fibonacci
numbers are produced by applying T^n, the nth power of the
transformation T, starting with the pair (1,0). Now consider T
to be the special case of p = 0 and q = 1 in a family of
transformations Tpq, where Tpq transforms the pair (a,b)
according to a ← bq + aq + ap and b ← bp + aq. Show
that if we apply such a transformation Tpq twice, the effect is
the same as using a single transformation Tp'q' of the same form,
and compute p' and q' in terms of p and q. This gives us an
explicit way to square these transformations, and thus we can compute
T^n using successive squaring, as in the fast-expt
procedure. Put this all together to complete the following procedure,
which runs in a logarithmic number of steps: (Note: This exercise was suggested to us by Joe Stoy, based on an example in Kaldewaij 1990.)
(define (fib n)
  (fib-iter 1 0 0 1 n))

(define (fib-iter a b p q count)
  (cond ((= count 0) b)
        ((even? count)
         (fib-iter a
                   b
                   <??>      ; compute p'
                   <??>      ; compute q'
                   (/ count 2)))
        (else (fib-iter (+ (* b q) (* a q) (* a p))
                        (+ (* b p) (* a q))
                        p
                        q
                        (- count 1)))))