Question

Write 6 big paragraphs describing the algorithms that are actually used by modern computers to add, subtract, multiply, and divide positive integers.

Homework Answers

Answer #1

Addition: here is an algorithm for integer addition. In the algorithm, d is a carry bit. Our algorithms are given in a language that mixes mathematical notation with syntax similar to that found in many high-level computer languages; it should be straightforward to translate them into a language such as C. Note that ":=" indicates a definition and "←" indicates assignment. Line numbers are included where we need to refer to individual lines in the description or analysis of an algorithm.

Algorithm (integer addition). Given A = Σ_{0≤i<n} a_i β^i, B = Σ_{0≤i<n} b_i β^i, and an input carry d_in ∈ {0, 1}, compute C = Σ_{0≤i<n} c_i β^i and an output carry d ∈ {0, 1} satisfying A + B + d_in = dβ^n + C:

1: d ← d_in
2: for i from 0 to n − 1 do
3:     s ← a_i + b_i + d
4:     (d, c_i) ← (s div β, s mod β)
5: return C, d

Let T be the number of different values taken by the data type representing the coefficients a_i, b_i. (Clearly β ≤ T, but equality does not necessarily hold; for example, β = 10^9 and T = 2^32.) At step 3, the value of s can be as large as 2β − 1, which is not representable if β = T. Several workarounds are possible: either use a machine instruction that gives the possible carry of a_i + b_i, or use the fact that, if a carry occurs in a_i + b_i, then the computed sum, if performed modulo T, equals t := a_i + b_i − T < a_i; thus comparing t and a_i will determine whether a carry occurred. A third solution is to keep a bit in reserve, taking β ≤ T/2.
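As a concrete illustration, here is a minimal C sketch of word-by-word addition using the comparison trick above to detect carries. It assumes β = T = 2^64 (one 64-bit machine word per coefficient); the function name bignum_add is illustrative, not from the text.

```c
#include <stdint.h>
#include <stddef.h>

/* Adds the n-word integers a and b (least-significant word first),
 * writing n words to c and returning the outgoing carry (0 or 1).
 * Here beta = 2^64, so a_i + b_i is computed modulo 2^64 and the
 * carry is recovered by the comparison t < a_i described above. */
uint64_t bignum_add(uint64_t *c, const uint64_t *a, const uint64_t *b, size_t n)
{
    uint64_t d = 0;                  /* carry bit, d in {0, 1} */
    for (size_t i = 0; i < n; i++) {
        uint64_t t = a[i] + b[i];    /* wraps modulo 2^64 on overflow */
        uint64_t carry = (t < a[i]); /* overflow iff wrapped sum < a_i */
        c[i] = t + d;                /* add the incoming carry */
        carry |= (c[i] < t);         /* adding d may also overflow */
        d = carry;
    }
    return d;
}
```

Note that at most one of the two comparisons can trigger on a given word, since a_i + b_i + d ≤ 2·(2^64 − 1) + 1 < 2·2^64, so the outgoing carry is always 0 or 1.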

The subtraction code is very similar. Step 3 simply becomes s ← a_i − b_i + d, where d ∈ {−1, 0} is the borrow of the subtraction, and −β ≤ s < β. The other steps are unchanged, with the invariant A − B + d_in = dβ^n + C. We use the arithmetic complexity model, where cost is measured by the number of machine instructions performed or, equivalently (up to a constant factor), by the time on a single processor. Addition and subtraction of n-word integers cost O(n), which is negligible compared to the multiplication cost. However, it is worth trying to reduce the constant factor implicit in this O(n) cost: we shall see in §1.3 that "fast" multiplication algorithms are obtained by replacing multiplications with additions (usually more additions than the multiplications they replace). Thus, the faster the additions are, the smaller the thresholds for changing over to the "fast" algorithms will be.
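A matching C sketch for subtraction, under the same assumptions as the addition code above (β = 2^64, illustrative name bignum_sub). Machine implementations usually keep the borrow as 0 or 1 rather than the 0 or −1 convention used in the algorithm description:

```c
#include <stdint.h>
#include <stddef.h>

/* Subtracts the n-word integer b from a, writing n words to c and
 * returning the outgoing borrow (0 or 1).  A borrow occurs in
 * a_i - b_i exactly when the wrapped difference exceeds a_i. */
uint64_t bignum_sub(uint64_t *c, const uint64_t *a, const uint64_t *b, size_t n)
{
    uint64_t d = 0;                   /* borrow bit, 0 or 1 */
    for (size_t i = 0; i < n; i++) {
        uint64_t t = a[i] - b[i];     /* wraps modulo 2^64 if b_i > a_i */
        uint64_t borrow = (t > a[i]); /* wrapped iff difference > a_i */
        c[i] = t - d;                 /* subtract the incoming borrow */
        borrow |= (c[i] > t);         /* subtracting d may also wrap */
        d = borrow;
    }
    return d;
}
```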

Multiplication:

A nice application of large integer multiplication is the Kronecker–Schönhage trick, also called segmentation or substitution by some authors. Assume we want to multiply two polynomials, A(x) and B(x), with non-negative integer coefficients. Assume both polynomials have degree less than n and coefficients bounded by ρ. Now take a power X = β^k > nρ^2 of the base β, and multiply the integers a = A(X) and b = B(X) obtained by evaluating A and B at x = X. If C(x) = A(x)B(x) = Σ c_i x^i, we clearly have C(X) = Σ c_i X^i. Since the c_i are bounded by nρ^2 < X, the coefficients c_i can be retrieved by simply "reading" blocks of k words in C(X). Assume, for example, that we want to compute (6x^5 + 6x^4 + 4x^3 + 9x^2 + x + 3)(7x^4 + x^3 + 2x^2 + x + 7), with degree less than n = 6 and coefficients bounded by ρ = 9. We can take X = 10^3 > nρ^2 and perform the integer multiplication 6006004009001003 × 7001002001007 = 42048046085072086042070010021, from which we can read off the product 42x^9 + 48x^8 + 46x^7 + 85x^6 + 72x^5 + 86x^4 + 42x^3 + 70x^2 + 10x + 21.
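To make the example concrete, here is a small C sketch of the segmentation step: it performs the single integer multiplication above and reads the coefficients of C(x) back off in blocks of k = 3 decimal digits. The 128-bit type unsigned __int128 is a GCC/Clang extension, assumed here only so that the 29-digit product fits in one variable.

```c
#include <stdio.h>

int main(void)
{
    /* a = A(X) and b = B(X), evaluated at X = 10^3 as in the text. */
    unsigned long long a = 6006004009001003ULL; /* packs 6,6,4,9,1,3 */
    unsigned long long b = 7001002001007ULL;    /* packs 7,1,2,1,7   */
    unsigned __int128 c = (unsigned __int128)a * b;

    /* Read off the coefficients of C(x) = A(x)B(x): each block of
     * k = 3 digits of c is one coefficient, least significant first,
     * since every c_i <= n*rho^2 = 486 < X = 1000. */
    printf("coefficients of C(x), low degree first:");
    while (c > 0) {
        printf(" %u", (unsigned)(c % 1000));
        c /= 1000;
    }
    printf("\n"); /* prints: 21 10 70 42 86 72 85 46 48 42 */
    return 0;
}
```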

Conversely, suppose we want to multiply two integers a = Σ_{0≤i<n} a_i β^i and b = Σ_{0≤j<n} b_j β^j. Multiply the polynomials A(x) = Σ_{0≤i<n} a_i x^i and B(x) = Σ_{0≤j<n} b_j x^j, obtaining a polynomial C(x), then evaluate C(x) at x = β to obtain ab. Note that the coefficients of C(x) may be larger than β; in fact they may be as large as about nβ^2. For example, with a = 123, b = 456, and β = 10, we obtain A(x) = x^2 + 2x + 3 and B(x) = 4x^2 + 5x + 6, with product C(x) = 4x^4 + 13x^3 + 28x^2 + 27x + 18, and C(10) = 56088. These examples demonstrate the analogy between operations on polynomials and integers, and also show the limits of the analogy. A common and very useful notation is to let M(n) denote the time to multiply n-bit integers, or polynomials of degree n − 1, depending on the context. In the polynomial case, we assume that the cost of multiplying coefficients is constant; this is known as the arithmetic complexity model, whereas the bit complexity model also takes into account the cost of multiplying coefficients, and thus their bit-size.
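The converse direction fits in a few lines of C as well. This sketch follows the a = 123, b = 456, β = 10 example above; the layout (schoolbook polynomial product, then Horner evaluation) is one straightforward choice, not the only one:

```c
#include <stdio.h>

/* Multiplies a = 123 and b = 456 by (1) forming the digit polynomials
 * A(x), B(x) in base beta = 10, (2) computing the schoolbook product
 * C(x) = A(x)B(x), and (3) evaluating C(beta) with Horner's rule. */
int main(void)
{
    long A[] = {3, 2, 1};    /* A(x) = x^2 + 2x + 3, low degree first */
    long B[] = {6, 5, 4};    /* B(x) = 4x^2 + 5x + 6 */
    long C[5] = {0};         /* product has degree < 3 + 3 - 1 = 5 */
    long beta = 10;

    for (int i = 0; i < 3; i++)       /* schoolbook polynomial product */
        for (int j = 0; j < 3; j++)
            C[i + j] += A[i] * B[j];  /* coefficients may exceed beta  */

    long ab = 0;
    for (int k = 4; k >= 0; k--)      /* Horner evaluation at x = beta */
        ab = ab * beta + C[k];

    printf("%ld\n", ab);              /* prints 56088 = 123 * 456 */
    return 0;
}
```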

Division:

Many algorithms have been developed for implementing division in hardware. These algorithms differ in many aspects, including quotient convergence rate, fundamental hardware primitives, and mathematical formulation. A useful taxonomy classifies division algorithms by their hardware implementations and their impact on system design. Under it, division algorithms fall into five classes: digit recurrence, functional iteration, very high radix, table look-up, and variable latency; many practical division algorithms are hybrids of several of these classes. Comparing the classes, one finds that digit recurrence algorithms are suitable for low-cost implementations where chip area must be minimized; that an implementation of division by functional iteration can provide the lowest latency for typical multiplier latencies; and that variable latency algorithms show promise for simultaneously minimizing average latency and area.

Generally, the exact quotient of two integers is a fractional number. One can stop the division process at any stage and return the approximate quotient together with the remainder. It will be convenient to assume that the quotient is expanded as

q = q_0 + q_{−1} β^{−1} + q_{−2} β^{−2} + . . . ,        (18)

where 0 ≤ q_j < β are the digits. Some division algorithms use generalized digits that satisfy a more relaxed condition such as −β < q_j < β. In order to have the quotient in the above form, we assume that a and b are positive and normalized, such that

a = a_0 + a_{−1} β^{−1} + a_{−2} β^{−2} + . . . ,    b = b_0 + b_{−1} β^{−1} + b_{−2} β^{−2} + . . . ,        (19)

with a_0 > 0 and b_0 > 0; that is, we assume that 1 ≤ a < β and 1 ≤ b < β. This normalization can be achieved by pre-scaling, and one then makes the corresponding adjustments to the final quotient after the division has been done.
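For β = 2, pre-scaling is just a binary shift. A minimal C sketch, assuming 64-bit operands (the name normalize is illustrative): shift each operand left until its top bit is set, remember the shift counts, and adjust the final quotient by their difference.

```c
#include <stdint.h>

/* Normalizes a nonzero 64-bit operand x so that its most significant
 * bit is set, i.e. so that, read as a fixed-point value with one
 * integer bit, it lies in [1, 2).  Returns the shift count; if a was
 * shifted by s_a and b by s_b, the final quotient must be scaled by
 * 2^(s_b - s_a) to undo the pre-scaling. */
static int normalize(uint64_t *x)
{
    int s = 0;
    while (!(*x & (1ULL << 63))) {  /* shift until the top bit is 1 */
        *x <<= 1;
        s++;
    }
    return s;
}
```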

Restoring division. The division algorithms we discuss here are only effective for small values of the radix β, so for simplicity we consider the case β = 2 first. The first algorithm is called restoring division, and it works as follows. We compute a − b, and if a − b ≥ 0, then we set q_0 = 1 and a′ = a − b. In this case, since a < 2 and b ≥ 1, we have a′ = a − b < 1 ≤ b. On the other hand, if a − b < 0, then we set q_0 = 0 and a′ = a. In this case, we also have a′ = a < b. Now we apply the same process to a′ and b′ = b/2 to compute the digit q_{−1}. This process is repeated until the partial remainder becomes small enough. More formally, the algorithm sets α_0 = a, and for j = 0, 1, . . ., performs

(α_{j+1}, q_{−j}) = (α_j − 2^{−j} b, 1)   if α_j − 2^{−j} b ≥ 0,
                    (α_j, 0)              if α_j − 2^{−j} b < 0.        (20)

The quantities α_j are called partial remainders. We have 0 ≤ α_0 = a < 2 ≤ 2b. Taking 0 ≤ α_j < 2^{1−j} b as the induction hypothesis, we have

0 ≤ α_{j+1} = α_j − 2^{−j} b < 2^{1−j} b − 2^{−j} b = 2^{−j} b        (21)

for the first branch of (20), while for the second branch 0 ≤ α_{j+1} = α_j < 2^{−j} b holds trivially. By construction, we have

α_j = q_{−j} 2^{−j} b + α_{j+1},        (22)

and so

a = α_0 = q_0 b + α_1 = q_0 b + q_{−1} 2^{−1} b + α_2 = . . . = q_0 b + q_{−1} 2^{−1} b + . . . + q_{−n} 2^{−n} b + α_{n+1} = qb + α_{n+1},        (23)

where q = q_0 + q_{−1} 2^{−1} + . . . + q_{−n} 2^{−n}. In light of the bound 0 ≤ α_{n+1} < 2^{−n} b, this implies

0 ≤ a/b − q < 2^{−n},        (24)

meaning that the digits of q are precisely the first n + 1 digits of the quotient a/b. Since the algorithm generates one correct bit per step, it terminates after n + 1 steps, one for each of the quotient digits q_0, q_{−1}, . . . , q_{−n}. Each step costs a bit shift, a subtraction, and a comparison.

In practical implementations of (20), it is more convenient to scale up the partial remainders than to scale down the divisor. Thus, substituting α_j = 2^{−j} r_j into (20), we get the recurrence

(r_{j+1}, q_{−j}) = (2(r_j − b), 1)   if r_j − b ≥ 0,
                    (2 r_j, 0)        if r_j − b < 0,        (25)

for j = 0, 1, . . .. On the other hand, the scaling α_j = 2^{1−j} r_j would give

(r_{j+1}, q_{−j}) = (2 r_j − b, 1)   if 2 r_j − b ≥ 0,
                    (2 r_j, 0)       if 2 r_j − b < 0,        (26)

for j = 0, 1, . . ., which is more standard in the literature. Note that in the latter case, the initialization becomes r_0 = 2^{−1} a (or, equivalently, b is replaced by 2b). If one computes the difference 2 r_j − b and overwrites its value into 2 r_j, then in the case 2 r_j − b < 0 one has to "restore" the value 2 r_j by adding b back. This is the reason for the name "restoring division."
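To tie this together, here is a minimal C sketch of the standard recurrence (26), assuming operands already normalized to [1, 2) and stored as 64-bit fixed-point values scaled by 2^61 (two spare integer bits, since the trial value 2 r_j stays below 2b < 4). The scaling choice and the name restoring_div are illustrative, not a production divider.

```c
#include <stdint.h>
#include <stdio.h>

#define ONE (1ULL << 61)  /* fixed-point 1.0, scale factor 2^61 */

/* Restoring division following recurrence (26): produces the first
 * nbits quotient digits q_0, q_{-1}, ... of a/b for fixed-point
 * operands 1 <= a, b < 2, packed into an integer MSB-first. */
uint64_t restoring_div(uint64_t a, uint64_t b, int nbits)
{
    uint64_t r = a >> 1;          /* r_0 = a/2, per the text */
    uint64_t q = 0;               /* quotient digits, MSB first */
    for (int j = 0; j < nbits; j++) {
        q <<= 1;
        if (2 * r >= b) {         /* digit q_{-j} = 1: keep 2r - b */
            q |= 1;
            r = 2 * r - b;
        } else {                  /* digit q_{-j} = 0: the overwritten
                                   * difference would be restored by
                                   * adding b back, hence the name */
            r = 2 * r;
        }
    }
    return q;  /* q * 2^{-(nbits-1)} approximates a/b */
}

int main(void)
{
    /* Example: a = 1.5, b = 1.25, so a/b = 1.2 = 1.0011001... in binary. */
    uint64_t q = restoring_div(ONE + ONE / 2, ONE + ONE / 4, 8);
    printf("quotient bits: 0x%llx\n", (unsigned long long)q); /* 0x99 = 10011001 */
    return 0;
}
```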
