Arithmazium

find_big_2_to_nth()

This function seeks the smallest value \(2^{n}\) such that \( | ( ( 2^{n} + 1 ) - 2^{n} ) - 1 | \geq 1 \). For small values of \(n\), the expression evaluates to zero, because the sum \( 2^{n} + 1 \) is exact. For large enough values of \(n\), the partial sum \( 2^{n} + 1 \) has to round. Either rounding outcome satisfies the inequality, as we'll now see.

The binary case is easy to visualize. Intuitively, the target is the smallest power of \(2\) for which the \(+ 1\) falls off the right-hand end of the digit block. Usually the \(1\) will just round off, but some arithmetics might round it up to the next bit position, in which case the rounded sum is \( 2^{n} + 2 \). In the diagram, the binary point is aligned to put the 1's place to the right of the digit block.

y = x + ONE                  010 ... 001   Largest exact sum 2**n + 1
y = x + ONE   # must round    10 ... 0?1   Will the trailing 1 round off or up to 2?
y = x + TWO   # rounded up    10 ... 010   In the unlikely event of rounding up, the 1 is in the 2's place
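As a concrete sketch (not part of Paranoia itself), the same picture in IEEE-754 doubles, which carry 53 significant bits:

```python
# 2**52 + 1 still fits in 53 bits, so the sum is exact.
x = 2.0 ** 52
print((x + 1.0) - x)   # 1.0

# 2**53 + 1 does not fit; the trailing 1 rounds off (ties-to-even).
x = 2.0 ** 53
print((x + 1.0) - x)   # 0.0

# Had the 1 rounded up instead, the result would have been 2**53 + 2,
# which is exactly representable.
print((x + 2.0) - x)   # 2.0
```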

Octal and hexadecimal arithmetic behave very much the same way. Here are the values at the step before \( 2^{n} + 1 \) must be rounded; the first line is octal, the second hexadecimal.

y = x + ONE    040 ... 001   Largest exact sum 2**n + 1, octal
y = x + ONE    080 ... 001   Largest exact sum 2**n + 1, hexadecimal

Our technique works in decimal, or radix 36 or 100, too. As in the octal and hex cases, repeated doubling produces a 1 digit in the leading position in any radix whenever a carry creates a new place. Here is what happens with 5 significant decimal digits.

y = x + ONE    032769   2**15 + 1, exact in 5 digits
y = x + ONE    065537   2**16 + 1, exact
y = x + ONE    131073   2**17 + 1, with the 3 to be rounded off
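The decimal walk can be reproduced with Python's decimal module set to 5 significant digits (an illustration, not the article's code):

```python
from decimal import Decimal, getcontext

getcontext().prec = 5   # five significant decimal digits

print(Decimal(2 ** 15) + 1)   # 32769     -- exact in 5 digits
print(Decimal(2 ** 16) + 1)   # 65537     -- exact
print(Decimal(2 ** 17) + 1)   # 1.3107E+5 -- the trailing 3 rounds off
```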

You may notice that while the mathematical description of the calculation involves a comparison against \(1\), the implementation below compares against \(0\). Historically, comparison against zero was less likely to be anomalous, hence a preference for such comparisons throughout Paranoia.

# =============================================
# Computation of fundamental system constants like radix and precision.
# =============================================
def find_big_2_to_nth():
    """Compute 2**n satisfying |((2**n + 1)-2**n)-1| >= 1."""
    x = ONE
    while True:
        x = x + x
        y = x + ONE
        z = y - x
        y = z - ONE
        if MINUS_ONE + fabs(y) >= ZERO: break
    # ... now x is just big enough that |((x+1)-x)-1| >= 1 ...
    return x
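For concreteness, here is a self-contained run of the same loop in IEEE-754 doubles, with the constants bound to ordinary Python floats (an assumption; in Paranoia they hold whatever type is under test):

```python
from math import fabs

# Assumed bindings for the constants the routine relies on.
ONE, ZERO, MINUS_ONE = 1.0, 0.0, -1.0

def find_big_2_to_nth():
    """Compute 2**n satisfying |((2**n + 1) - 2**n) - 1| >= 1."""
    x = ONE
    while True:
        x = x + x              # march up the powers of two
        y = x + ONE
        z = y - x
        y = z - ONE            # zero until x + ONE must round
        if MINUS_ONE + fabs(y) >= ZERO:
            break
    return x

# With 53 significant bits, the loop stops at 2**53.
print(find_big_2_to_nth() == 2.0 ** 53)   # True
```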


A more subtle issue that can arise in octal or hexadecimal arithmetic is fractional precision, which we'll see again. For example, a hex system with 26 bits available for precision would have what we call 6.5 significant hex digits, where the low-order half-digit may be thought of as a full hex digit taking just the values 0, 4, 8, or C. That is, its low-order two bits are always zero. The analysis above applies here, too.

What could go wrong?

Can you work through the loop in the next diagram, first with 6 hex digits, then with 6 digits plus a low-order half-digit? It can be useful to think in 26-bit binary, remembering that when a carry out happens on the left-hand side, a whole new hex digit comes into play. The trailing digit on the first two lines is the half-digit, of which only the top two bits can hold final values.

y = x + ONE    04000010   2**22 + 1, hex
y = x + ONE    08000010   2**23 + 1
y = x + ONE    1000001    2**24 + 1, the 1 must round off, whether the precision is 6 or 6.5 hex digits
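One way to check the 6.5-digit claim is with a hypothetical helper (not from Paranoia) that rounds a positive integer to 7 hex digits of significand whose low-order two bits are zero; round-to-nearest with ties away from zero is assumed for simplicity:

```python
def round_to_6_5_hex_digits(x):
    """Round positive int x to 6.5 significant hex digits: 7 hex digits
    whose last (half) digit is restricted to 0, 4, 8, or C."""
    ndigits = (x.bit_length() + 3) // 4    # hex digits needed to write x
    if ndigits <= 6:
        return x                           # fits in 6 full digits: exact
    step = 4 * 16 ** (ndigits - 7)         # spacing of representable values
    return (x + step // 2) // step * step  # round to nearest, ties away

print(hex(round_to_6_5_hex_digits(2 ** 22 + 1)))   # 0x400001  -- exact
print(hex(round_to_6_5_hex_digits(2 ** 24 + 1)))   # 0x1000000 -- the 1 rounds off
```

Note that 2**24 + 1 loses its trailing 1 even though it needs only 25 bits: the significand is aligned on hex-digit boundaries, so the 1 lands in the seventh digit, where only 0, 4, 8, and C are available.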