The docstring in the code sets the stage: we're looking for the number of B-digits the arithmetic carries, and the power of B at which the ones place falls off the right. This calculation works on most machines, where the precision is a whole number of digits. On a machine with fractional precision, such as a hexadecimal machine with 22 significant bits (five hex digits plus two extra significant bits), the computed precision will be the floor of the true precision. See the exercise below.
def find_precision_big_B_to_nth(b):
    """Compute the number of B-digits in the arithmetic and the power
    of B sufficient to have the ones place fall off the right.

    Args:
        b: the global radix B, accepted as an argument

    Returns:
        precision: number of B digits in arithmetic
        big_b: power of B such that the low-order digit is the B's place
    """
The loop proceeds until the added 1.0 rounds off or rounds up into the B's place of y.
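Both outcomes are easy to see on a concrete format. The probe below uses ordinary IEEE 754 doubles (53 significant bits, B = 2) rather than the book's arithmetic types, purely as an illustration:

```python
# IEEE 754 doubles carry 53 significant bits (B = 2).
big_b = 2.0 ** 52
print((big_b + 1.0) - big_b)   # 1.0: the added 1.0 still fits exactly

big_b = 2.0 ** 53
print((big_b + 1.0) - big_b)   # 0.0: the 1.0 rounded off

# The loop only ever sees powers of b, but a nearby value shows the
# other failure mode: round-to-even can push the 1.0 up a full ulp.
big_b = 2.0 ** 53 + 2.0
print((big_b + 1.0) - big_b)   # 2.0: rounded up into the 2's place
```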
In log arithmetic, most small whole numbers cannot be represented exactly. What seem like obvious computations, such as adding 1.0 to small whole numbers, are inexact. That's why we skip this function altogether in log arithmetic.
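To make the inexactness concrete, here is a toy log-number-system sketch; the helper names are illustrative and not from the book's arithmetic classes:

```python
import math

# Toy log-number system: a positive value x is stored as log(x).
def to_log(x):
    return math.log(x)

def add_one(lx):
    # x + 1 in the log domain: log(exp(lx) + 1), i.e. log1p(exp(lx)).
    return math.log1p(math.exp(lx))

# 3.0 is already inexact in the log domain: log(3) is irrational, so the
# stored value is a rounded double, and so is the log(4) produced here.
l3 = to_log(3.0)
l4 = add_one(l3)       # represents 4.0, again only approximately
print(math.exp(l4))    # close to 4.0, within a few ulps
```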
    big_b = ONE
    precision = ZERO
    while True:
        precision = precision + ONE
        big_b = big_b * b
        y = big_b + ONE
        if y - big_b != ONE:
            break
    return precision, big_b
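Assuming ONE and ZERO stand for the arithmetic's constants 1 and 0, the same loop can be sketched with ordinary Python floats; on IEEE 754 doubles it reports 53 binary digits:

```python
def find_precision_big_b_to_nth(b):
    # Plain-float sketch of the loop above: count base-b digits until
    # an added 1.0 no longer survives the round trip.
    big_b = 1.0
    precision = 0
    while True:
        precision = precision + 1
        big_b = big_b * b
        y = big_b + 1.0
        if y - big_b != 1.0:
            break
    return precision, big_b

print(find_precision_big_b_to_nth(2.0))   # (53, 9007199254740992.0)
```

With b = 2.0 the loop breaks at 2**53, the first power of two where 2**53 + 1.0 rounds back to 2**53, so big_b is exactly the power of B whose ones place has fallen off the right.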
Exercise: The hexadecimal diagram below shows the loop in progress. There are five and a fraction digits; that is, the low-order hex digit has one, two, or three low-order bits always zero (and not represented in the storage format). Work out what might happen to the 1 added to a hex digit that has a hard-wired 0 there. What is the computed precision?
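One way to experiment with the exercise is to simulate such a machine. The model below is an assumption on my part (hex-normalized significands, 22 stored bits, round-half-even), and the helper names are hypothetical; it reproduces the floor behavior claimed above:

```python
import math

HEX_BITS = 22   # five full hex digits plus two extra bits (assumed model)

def hex_round(x):
    # Round x to a hex-normalized significand of HEX_BITS bits: find the
    # hex exponent e with 16**(e-1) <= x < 16**e, then round x to a
    # multiple of the machine's ulp at that exponent.
    if x == 0.0:
        return 0.0
    _, binexp = math.frexp(x)      # x = m * 2**binexp, 0.5 <= m < 1
    e = -(-binexp // 4)            # ceil(binexp / 4): hex exponent
    ulp = 16.0 ** e / 2.0 ** HEX_BITS
    return round(x / ulp) * ulp

def find_precision(b=16.0):
    big_b, precision = 1.0, 0
    while True:
        precision += 1
        big_b = hex_round(big_b * b)
        y = hex_round(big_b + 1.0)
        if y - big_b != 1.0:
            break
    return precision

print(find_precision())   # 5, the floor of the 5.5-hex-digit precision
```

At big_b = 16**5 the ulp in this model is 4, so the added 1 lands on a hard-wired 0 and either rounds off entirely or rounds up by 4; either way the test fails and the loop reports 5 digits, the floor of the true precision.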