This website is supported by the iPhone app called Frrraction.
Current with Frrraction Version 0.96.13
Modular numbers, or residues, are a generalization of circular measurements such as hourly time: 1 through 24 o'clock on an analog clock (residues modulo 24); or compass heading: 1 through 360 degrees on a circle (residues modulo 360).
Modular arithmetic is about position and distance around such circles, adjusted to ignore full trips around the circle in either direction, as in: "What time is 17 hours later than 10 o'clock?" or "What time is 4 hours earlier than 3 o'clock?" The answers can be calculated by ordinary arithmetic as 10 + 17 = 27 and 3 − 4 = −1 — and then adjusted for the circle to get the normal residue answers: 27 − 24 = 3 (wee hours) and −1 + 24 = 23 o'clock (late evening).
Frrraction treats only the most interesting version of modular arithmetic, the exact integer version.
Real numbers are well-represented by positions and distances along an infinitely long straight line, the real line. Modular numbers are similarly represented by positions and distances around a circle, the modular circle.
The modular circle brings a wrinkle to the position metaphor: Each position around the modular circle, from zero to just shy of the modulus m, represents a unique residue modulo m. But whereas each position on the real line has a unique label, each position on the modular circle logically admits infinitely many labels. The labels are integers, subject to integer arithmetic, while the positions they label are residues, subject to modular arithmetic. Adding integer multiples of the integer m to any label yields an alias for that label—another equivalent label for the identical residue position.
It is customary but not mandatory to refer to residues by their remainders. For instance it would not be wrong to say that the sum 16 + 11 modulo 24 is 27, but — as is standard usage in familiar examples like time and compass heading — we prefer to use the remainder, 3, even though that requires a bit of extra computation.
Virtually every computer language supports that custom by providing a special and very efficient instruction — often expressed something like r=mod(i,m) — which replaces any integer i by its modulo-m remainder r. Frrraction uses such a function at the end of almost all its modular computations.
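As a concrete illustration — a minimal sketch in Python, whose built-in % operator plays the role of the mod(i, m) instruction — the two clock questions above each reduce to one remainder operation:

```python
# The two clock questions above, on a 24-hour clock (modulus m = 24).
# Python's % operator plays the role of the mod(i, m) instruction.
m = 24
print((10 + 17) % m)   # 3  -- "17 hours later than 10 o'clock" (wee hours)
print((3 - 4) % m)     # 23 -- "4 hours earlier than 3 o'clock" (late evening)
```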
Negative positions on the number line are shown to the left of the 0-position. In contrast, a negative label on the modular circle, say t−m where t < m, requires no new space on the circle—it's simply an alias for the label t.
The arithmetic of addition and subtraction plays well with such positional metaphors — treating numbers as displacements — on the modular circle as well as on the real number line: To add a positive number to a position is to move that distance to a new position (to the right on the line, clockwise on the circle); subtraction moves linearly left or circularly counterclockwise.
Modular multiplication, too, is mostly straightforward, but needs extra interpretation. Division has always been and remains the trickiest math op.
When 'op' is '+' or '−', at most one of the operands of the infix expression "a op b" can be a position; the other (or both) must be a distance (displacement). If one operand is a position the result is a position, and it doesn't matter which of the two operands a or b is the distance. If both operands are distances, the result is a distance.
Multiplication and division require careful interpretation, especially in the modular case. They operate on and produce only scalars and distances, not positions.
What could 3 o'clock times 10 o'clock or "three 10 o'clocks" mean? Nothing, certainly not along the lines of "30 square o'clocks". But, treating 3 as a scalar operating on 10 and 30 as distances, the arithmetic 3⋅10=30 hours later has clear meaning "on the clock"—and is the same "on the clock" as 6 hours later, and the same as 18 hours earlier, etc. This makes perfect sense of the formula for modular multiplication: multiply two residues by performing integer multiplication on their remainders, then adjust via multiples of the modulus to get the remainder of the result.
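A one-line sketch of that formula (Python again; the % reduction is the "adjust via multiples of the modulus" step):

```python
m = 24
print((3 * 10) % m)   # 6 -- three 10-hour displacements: 30 hours later is 6 hours later
print(-18 % m)        # 6 -- and 6 hours later is the same position as 18 hours earlier
```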
So. Multiplying or dividing a scalar by a scalar yields a scalar. Multiplying or dividing a distance by a scalar yields a distance. Dividing a distance by a distance yields a scalar.
Operationally, you don't need to be thinking about the scalar/distance/position distinctions all the time—they really are all just residues.
Division always has been more devious than multiplication.
The original idea of division was to share some given quantity in equal portions among several recipients. The numerator n was a measure of the given quantity; the denominator d specified the intended number of recipients; and the quotient q was a measure of each of the resulting equal portions. The language suggests that d is a counting number — an answer to the question "How many?" — but usage evolved as arithmetic grew more abstract.
Real numbers use distances along the real line to model all three aspects: The measure of the given quantity, the count of the desired number of portions, and the measure of each resulting portion. Modular numbers do the same with distances around the modular circle.
Generically, denominator d is said to divide numerator n, and we write d | n, if there exists a solution of the equation d⋅q = n. If a solution for q exists and is unique, the solution is called the quotient and we write q = n/d.
The existence of quotients is more easily resolved for residues than for the euclidean integers, which actually stumble over it: All integers except 1 and −1 (the units) suffer a severe restriction on which real integers they can divide. One answer to the general question of which integers a given integer can divide is easily stated: d | n if and only if n is in the trace of d. (The trace of any number is defined to be the set of all multiples of the number. Unfortunately, q⋅d is such a multiple of d, so traces rather beg the question.) The Fundamental Theorem of Arithmetic provides constructive help when it's feasible to factor n and d: It offers that d | n if the list of factors of n includes all the prime power factors of d.
Existence of residue quotients is decisively solved by traces, even though traces are unhelpful for real integers.
Uniqueness remains a big issue for modular division. It proved to be no obstacle for real integer division because integers have primes and a property called The Fundamental Theorem of Arithmetic. Problematically, the residue systems of most moduli do not completely support such a theorem.
The (euclidean-)integer division rule is: Find a solution of d⋅q + r = n, and make up side conditions such as Euclid's 0 ≤ r < d to make q be the quotient. That may seem a bit lame, but the world has survived productively ever since. Interestingly, Euclid's remainders — a mere trick for getting unique integer quotients — actually originated our topic: remainders as a new type of number. And the idea of adding side conditions to the basic equation of division helps with modular quotients as well.
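Python's built-in divmod returns exactly that Euclid-style quotient/remainder pair (for a positive divisor), so the side condition 0 ≤ r < d can be seen directly:

```python
n, d = 27, 24
q, r = divmod(n, d)     # Euclid's rule: n = d*q + r with 0 <= r < d
print(q, r)             # 1 3
assert d * q + r == n and 0 <= r < d
```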
For modular division, the founders (Euler and Gauss) parlayed real units along technical lines rather than intuitive lines. For instance, the result creates oddities like numbers equal to their own reciprocals: 1÷17 ≡ 17, and 1÷37 ≡ 37 (modulo 72). This has striking consequences, like: for any residue r modulo 72, r⋅17 ≡ r÷17, and r⋅17² ≡ r. You don't see things like that every day in the "real" world but they're common in the residue world. Such quirks help make residues much more interesting than integers.
What the ur-theorists were struck by was the existence of real integers that can divide all real integers: the so-called units, 1 and -1. They appear to have entirely ignored the fact that each real integer i is part of an infinitely large trace of real integers — the multiples of i — which i can perfectly well divide. A special thing about the units 1 and -1 is that their trace contains all the real integers.
The founders then proceeded to notice that unit-like residues are also present in every residue system: each such residue has a reciprocal, and the product of the residue and its reciprocal is congruent to 1. Gauss called those unit-like residues units. Because of the recent discovery of more units, this website calls the originals euler units and the others residual units (or modal units because of their deep connection with the modulus). In this section we'll stick with euler units.
Gauss based his version of residue division upon that unit-fluke.
* Residue Division a la Euler and Gauss: They must have reasoned something like this: Let the quotient n ÷ d be some residue q. The most fundamental property of any quotient is that it is a solution of an equation or congruence like d⋅q ≡ n. Starting with that, we have n ≡ q⋅d, so multiplying both sides by the reciprocal d⁻¹ gives n⋅d⁻¹ ≡ q⋅(d⋅d⁻¹) ≡ q.
Thus, they defined dividing n by d to mean multiplying n by the reciprocal of d, i.e. n/d ≡ n⋅d⁻¹ (mod m).
Good news: Euler found some fast and efficient ways to find the value of d⁻¹ which are still good to this day.
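In Python 3.8+ the value is available directly as pow(d, -1, m) whenever gcd(d, m) = 1; a small sketch reproducing the 1÷17 ≡ 17 (mod 72) oddity mentioned above:

```python
# Euler–Gauss division n/d ≡ n * inv(d) (mod m), valid when gcd(d, m) = 1.
# Python 3.8+ computes the modular reciprocal as pow(d, -1, m).
m = 72
inv17 = pow(17, -1, m)
print(inv17)                  # 17 -- 17 is its own reciprocal modulo 72
print((1 * inv17) % m)        # 17 -- so 1 ÷ 17 ≡ 17 (mod 72)
assert (17 * inv17) % m == 1
```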
This surely should be making sense, since most everything said so far has been fully established number theory for two centuries. It helps if you remember that residues and integers are very different number-systems, and the ancient arithmetic verbs "add", "subtract", "multiply", and "divide" are misnomers in the case of residues.
Consider, for example: The result of adding residues [5] and [7] modulo 10 is [2] — but doesn't adding something result in more? Is 2 really more than 5?
Similarly: If you wanted to share 2 things among 3 people, you'd calculate 2 divided by 3, right? If we do it modulo 10 we get [2]/[3] ≡ [4] (mod 10). Try to interpret THAT as sharing 2 things among 3 people! Are we to give each person 4 of the 2 things? Well, yes, in the residue way: If we take 4 things from the original 2, then 2 minus 4 things remain—and 2−4 ≡ 8 (mod 10); we can do it twice more, leaving 8-4-4 ≡ 0, none left—and we did manage to deliver three sets of 4 to our three people, a total of 4 + 4 + 4 ≡ 2 things—which indeed matches the number we originally had.
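The sharing arithmetic above can be checked mechanically — a tiny Python sketch (Python 3.8+ for the modular reciprocal):

```python
m = 10
share = (2 * pow(3, -1, m)) % m   # 2 divided by 3, modulo 10
print(share)                      # 4 -- each of the 3 recipients gets [4]
print((4 + 4 + 4) % m)            # 2 -- three shares of [4] add back up to [2]
```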
This may sound like double-talk but it is no embarrassment within the mathematics because actually, residues are technically NOT "an ordered set". "Equal" is well defined (as "congruence") but notions like "smaller" and "larger", "less" and "more", "above" and "below" are undefined.
A mathematician's takeaway from this is that residues and reals are different kinds of numbers, effective to use in different applications. Quite specifically, if accumulation can occur, if less and more are notions that matter, beware of residues. But if residuals, leftovers, periodicity are in some way paramount — then residues might be the numbers to use.
A ship's captain must understand that changing the ship's course by 450 degrees is the same as making a 90-degree right turn: Navigational headings are residues. The ship's First Mate, on the other hand, as logistics officer, must understand that having no fuel oil is different from having m gallons—no matter what modulus m they might be tempted to consider as a way to save money.
So. The answer to the question "Does this all make sense?" is "Well...yes, and sometimes residues are the right numbers to use, and sometimes euclidean integers are the right numbers to use."
Notation
Zm "division ring Z/mZ"
also called "residue system modulo m"
PPS(i,p) a "maximal prime-power stack" of integer i:
p, p², ... , pʲ, ... , pᵏ for 1 ≤ j ≤ k
where p is prime and pᵏ divides i but pᵏ⁺¹ does not.
a | b "a divides b" (residues)
a .|. b "a divides b" (euclidean integers)
a / b "a over b" (residues)
a . /. b or a .÷. b "a over b" (euclidean integers)
= equals (generic)
.=. equals (euclidean integers)
≡ (mod m) "is congruent to" (residues)
≡e: Gauss's "euler-units-only quotients"
≡iu: general "iu-quotient"
Cr cluster containing r
cr center gcd(r,m) of Cr
|S| "size of a set S" (number of members)
The original solution of the congruence d⋅q ≡ n (mod m) was Gauss's quotient q = n/d ≡ n⋅inv(d) (mod m) where the numerator n can be any residue modulo m but the denominator d is restricted to be an euler unit†.
† Euler units comprise the algebraic group C1 defined by gcd(d, m)=1.
It was an excellent beginning for modular division, but the situation in that corner of number theory has languished for nearly two centuries. Even the best attempts to permit other denominators have until now been universally deprecated due to an endemic ambiguity whenever gcd(d, m) > 1. The ambiguity occurs because that number g = gcd(d, m) is also the number of distinct residues q that solve the defining congruence d⋅q ≡ n (mod m) — and quotients should never be ambiguous. We feel we've solved the problem.
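That ambiguity is easy to exhibit by brute force; a small Python sketch (values chosen only for illustration) counts the solutions of d⋅q ≡ n (mod m) and compares the count with gcd(d, m):

```python
from math import gcd

# When gcd(d, m) > 1 and the congruence d*q ≡ n (mod m) is solvable at all,
# it has exactly gcd(d, m) distinct solutions q -- an ambiguous "quotient".
m, d, n = 12, 8, 4
solutions = [q for q in range(m) if (d * q) % m == n % m]
print(solutions)                   # [2, 5, 8, 11]
print(len(solutions), gcd(d, m))   # 4 4
```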
Residue systems have two distinct kinds of residues: unit-like residues and integer-like residues. That's the key.
To see the distinction, assume d divides n and consider the division sequence (DSQ):
DSQ(n,d): q₁ = n/d, q₂ = q₁/d, q₃ = q₂/d, ...
If d is integer-like then that sequence ends at some quotient which d does not divide — and the sequence stops; that's the way common integers behave, e.g. DSQ(24,2): 24/2=12, 12/2=6, 6/2=3, 3/2=undefined (in the integers). If d is unit-like, however, then the sequence of quotients continues forever with valid quotients. (In the real integers there are only the two units: 1 and –1 — while residue systems typically have many units.)
In Z96, for example, powers of 2 up thru 2⁴ and their associates† are integer-like, and — using the iu-version of modular division developed in this article — we have
iu-DSQ(40,2): 40/2≡20, 20/2≡10, 10/2≡5, 5/2≡undefined,
which is perfectly integer-like behavior.
On the other hand, 3 and its associates in Z48 are unit-like:
iu-DSQ(15,3): 15/3≡21, 21/3≡39, 39/3≡45, 45/3≡15,...
which has looped back and obviously continues indefinitely with all quotients well defined: clearly that is unit-like, not integer-like.
† The associates of any residue r are the residues computable as s≡r⋅e where e is any euler unit.
Theorem 1: Residue r is unit-like if and only if gcd(r⋅r, m)= gcd(r, m); otherwise† r is integer-like.
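Theorem 1 translates directly into a one-line test; the sketch below (Python; the helper name is_unit_like is ours) reproduces the classifications used in the surrounding examples:

```python
from math import gcd

def is_unit_like(r, m):
    # Theorem 1: r is unit-like exactly when gcd(r*r, m) == gcd(r, m).
    return gcd(r * r, m) == gcd(r, m)

print(is_unit_like(2, 96), is_unit_like(32, 96))   # False True
print(is_unit_like(3, 48))                         # True
```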
Subsequently, we drop the annoying "-like" terminology in favor of calling them simply units and ints (reserving the noun "integer" for its classical sense).
Definition: For each prime divisor p of the modulus m there is a maximal† prime-power stack PPS(m, p) = p, p², ..., pᵏ.
† "Maximal" in the sense that, while pʲ divides m for all 1 ≤ j ≤ k, pᵏ⁺¹ does not.
Corollary 1a:
In that notation, pᵏ is the center of a unit-group, and (if k ≠ 1) each pʲ for 1 ≤ j < k is the center of a separate non-group int-cluster of Zm.
The total collection of the system's residues — all m of the units, ints, and composites — consists of the euler units together with these prime-powers for each prime divisor of m, plus all modular products of these with each other.
(As a formula for generating the members of Zm, Corollary 1a is of course severely inefficient, as it tends to generate each member many times over.)
Division sequences, and divisors shared with the modulus, are two ways that a residue r reveals its style of division. Another way is its power sequence (PSQ):
PSQ(r) = r, r², r³, ...
This is significant because PSQ's make no reference, direct or indirect, to division: The existence of ints and units is simply a fact about residues, not an artifact of any particular notion of modular division.
Theorem 2: Residue r is a unit if and only if its PSQ eventually reaches an exponent u > 1 where rᵘ ≡ r.
Every PSQ eventually loops back to some earlier power, creating a cycle that repeats continually thereafter. That's because there are only a finite number of residues in the system. Sometimes the cycle for PSQ(r) includes the starting residue r, sometimes not; that property distinguishes units from non-units.
For example, modulo 96: PSQ(2) = 2,4,8,16,32,64,32,64,32,... which loops back and repeats the 32,64-subsequence forever, so obviously the PSQ can never return to 2, a fact that shows 2 to be an int; 4 and 8 and 16 are also only seen once in their power sequences, so they too are seen to be ints; but PSQ(32) = 32,64,32,... and this reveals 32 to be a unit with 32ᵘ ≡ 32 for u = 3; similarly PSQ(3) = 3,9,27,81,51,57,75,33,3,... which returns to 3 at 3ᵘ with u = 9.
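The same experiment is easy to run mechanically; a Python sketch (the helper name returns_to_r is ours) follows a PSQ until its first repeated power and reports whether that repeat is r itself, which by Theorem 2 marks r as a unit:

```python
def returns_to_r(r, m):
    # Walk r, r^2, r^3, ... (mod m); the first repeated value closes the cycle.
    seen, x = set(), r
    while x not in seen:
        seen.add(x)
        x = (x * r) % m
    return x == r

print(returns_to_r(2, 96))    # False -- 2 is an int modulo 96
print(returns_to_r(3, 96))    # True  -- 3 is a unit; 3^9 ≡ 3 (mod 96)
print(returns_to_r(32, 96))   # True  -- 32^3 ≡ 32 (mod 96)
```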
Curiously, the cycle itself always consists solely of units. This causes a peculiarity: Somewhere along the PSQ for any non-unit, the division-character of the powers in the sequence changes from non-unit into unit. More generally, for every int r there are ints s (only in the same power-stack as r) such that r ⋅ s is a unit. Products of units × units always yield units, but ints can "shape-shift".
Corollary 1b: Residue r is a member of Cr. All and only the members of Cr can be expressed as modular products s = r ⋅ e where e is some not necessarily unique euler unit for each s. In other words, Cr consists of r and all of its associates.
(As a formula for generating the members of Cr, Corollary 1b is inefficient; it generates each member several times, specifically EulerPhi(m)/EulerPhi(m/cr) times, which is always 2 or greater if cr>2.)
Corollary 1c: All members of Cr have the same division-character as r, unit-like or integer-like. This includes its center cr.
Division clusters provide residue systems with a comprehensive structure and deserve an article of their own. Their importance is established by the following: Residue r is a unit if and only if the division cluster Cr is an algebraic group with multiplication (mod m) as its group-operation.
The key parts of the proof are within the sequence PSQ(r): r⋅rᵘ⁻¹ ≡ rᵘ ≡ r, so the unit-group's identity is ωr = rᵘ⁻¹; and r⋅rᵘ⁻² ≡ rᵘ⁻¹ ≡ ωr, so the inverse of r is inv(r) = rᵘ⁻².
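That recipe can be made concrete; a Python sketch (the helper name unit_identity_and_inverse is ours, and it assumes r really is a unit so that its PSQ returns to r), checked against the PSQ(3) example modulo 96:

```python
def unit_identity_and_inverse(r, m):
    # Find the exponent u > 1 with r^u ≡ r (mod m); then
    # ω_r = r^(u-1) is the cluster-group identity and inv(r) = r^(u-2).
    u, x = 2, (r * r) % m
    while x != r:               # terminates because r is assumed to be a unit
        u, x = u + 1, (x * r) % m
    return pow(r, u - 1, m), pow(r, u - 2, m)

omega, inv3 = unit_identity_and_inverse(3, 96)
print(omega, inv3)                  # 33 75
print((3 * inv3) % 96 == omega)     # True -- r * inv(r) ≡ ω_r
```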
Division by unit denominator
Definition: When d is a unit we define q=n/d≡n⋅inv(d) if n and d satisfy the iu-divisibility condition n⋅ωd≡n (mod m).
Note: Inv(d) is ωd/d — i.e. n/d with n=ωd; the divisibility requirement n⋅ωd≡n is thus ωd⋅ωd≡ωd which is true.
The quotient 1/d is undefined unless d is an euler unit.
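Putting the definition to work — a self-contained Python sketch that reads ωd and inv(d) off the denominator's power sequence, applies the divisibility test, and reproduces the first step of iu-DSQ(15,3) in Z48:

```python
def divide_by_unit(n, d, m):
    # Unit-denominator rule: q = n * inv(d) (mod m), accepted only when the
    # iu-divisibility condition n * ω_d ≡ n (mod m) holds.
    u, x = 2, (d * d) % m
    while x != d:               # assumes d is a unit, so its PSQ returns to d
        u, x = u + 1, (x * d) % m
    omega, inv_d = pow(d, u - 1, m), pow(d, u - 2, m)
    if (n * omega) % m != n % m:
        raise ValueError("d does not iu-divide n")
    return (n * inv_d) % m

print(divide_by_unit(15, 3, 48))   # 21 -- the first step of iu-DSQ(15,3) above
```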
Division by int denominator
Definition: When d is an int at the center of its cluster, we define q = n/d = n . /. d (where the symbol . /. designates euclidean integer division). The int-divisibility condition d .|. n is that the euclidean remainder after dividing n by d be 0.
[Note: In both cases of pure denominator (unit or int divisor of modulus) the quotients are unique when n and d are both unambiguously known.]
iu-factorization: Division by composite denominator
Theorem 5: Any residue r can be expressed as the product of an int ir and a unit ur: r ≡ ir ⋅ ur.
If r is a unit then its int-factor is ir=1.
If r is an int then its unit-factor ur is an euler unit: gcd(ur,m)=1. In this case ir is the center of its cluster Cr of ints — the unique member of Cr that actually divides modulus m. More specifically, in the prime-power decomposition of m, ir always appears as pʲ for some j < k, in one of m's prime-power stacks PPS(m,p) = p, p², ... , pʲ, ... , pᵏ.
The int-factor is unique, but ir as a common integer is actually the number of distinct units ur that solve the defining congruence.
There are simple algorithms to construct ir, and the value is unique. With ir in hand, the natural unit factor r . /. ir is a specific one of the possible values of ur. Just as Euclid's side condition on the remainder makes euclidean quotients unique, we choose the euclidean division ur = r . /. ir to make iu-factorization unique.
Unfortunately, the quotient-ambiguity problem for n/d remains unless the int-factor id of the denominator is 1.
That's because for any int i there is an algebraic group Ëi of euler units ë that are transparent† to i. The number |Ëi| of residues in Ëi is the integer gcd(i,m). This means that wherever an int-factor i occurs, it is congruent to and can be replaced by i⋅ë where ë is an unknowable member of Ëi. That seems harmless enough — until we apply it in the intuitive "raw" version of iu:division defined below, which turns out to be vulnerable to it.
† "Transparent" is used here in the sense that if ë is in Ëir then ë⋅r ≡ r and ë⋅ir ≡ ir .
Transparency Lemma: If n is divisible by d, and if ë is transparent to id, then ë⁻¹ is transparent to in.
This is because Ëid is a group, so ë⁻¹ is also in Ëid. Also, if n is divisible by d then in is integer-divisible by id, so there exists an int j such that in .=. j ⋅ id. Thus
in ⋅ ë⁻¹ ≡ j ⋅ (id ⋅ ë⁻¹) ≡ j ⋅ id .=. in.
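A numeric spot-check of the lemma, using the modulus-24 example developed below (id = 2, in = 4, ë = 13):

```python
m = 24
print((2 * 13) % m)      # 2  -- ë = 13 is transparent to i_d = 2
print(pow(13, -1, m))    # 13 -- 13 is its own inverse modulo 24
print((4 * 13) % m)      # 4  -- and ë⁻¹ = 13 is transparent to i_n = 4
```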
Modular iu:division, the raw idea
The plan of the raw version of iu:division is to separately check the divisibility condition for ints, id .|. in, and for units, ud | un. Calculated separately, the two divisions produce partial quotients qi = in . /. id and qu ≡ un/ud. Multiplying the two partial quotients yields
q = n/d ≡ qi⋅qu.
Natural, straightforward, intuitive. The very kind of modular division that has eluded number theory for so long.
But, despite the uniqueness of natural iu-factorizations, the persistent ambiguity has asserted itself in a new form: As observed above, every residue must be considered to be r = ir⋅ur ≡ (ir⋅ë)⋅ur ≡ ir⋅(ë⋅ur) where ë is an unknown and unknowable member of Ëir — this is all true, but it does not follow that ë⋅ur is always congruent to ur. An example should clarify:
e.g. Consider the quotient 20/14 ≡iu: q (mod 24):
The relevant natural iu-factorizations are numerator n = 20 = in⋅un = 4⋅5, and denominator d = 14 = id⋅ud = 2⋅7. The numerator's int-transparency group is Ë4 = {1,7,13,19}, and d's is Ë2 = {1,13}, so 4⋅1 ≡ 4⋅7 ≡ 4⋅13 ≡ 4⋅19 ≡ 4 and 2⋅1 ≡ 2⋅13 ≡ 2. But — as forewarned — the transparency of those Ë-groups to ints does not always extend to the corresponding units; ë = 1 is always transparent to both, of course, but counterexamples include un⋅7 = 5⋅7 ≡ 11 ≢ un and ud⋅13 = 7⋅13 ≡ 19 ≢ ud, which show that ë = 7 is not transparent to un and ë = 13 is not transparent to ud.
The consequences of this new variant of ambiguity show up in the proposed raw division process, which calls for doing iu-factorization and separating the ints from the units. In the present example, the factorizations are:
n = 4⋅5 = (4⋅ën)⋅5 = 4⋅(ën⋅5), and
d = 2⋅7 = (2⋅ëd)⋅7 = 2⋅(ëd⋅7).
When you separate the ints from the units, the int factor you get for the quotient is thus qi = in . /. id = 4 . /. 2 = 2 (uniquely) but the unit factor is qu ≡ (ën⋅un)/(ëd⋅ud), where the ë's are those unknowable members of Ë4 or Ë2. The unit portion of the raw division can thus turn out to be any of 5, 11, 17, or 23, so — integer-multiplying each of those by 2 — we see that the complete raw quotient q = qi⋅qu = 2⋅qu can be either 10 or 22.
Spoiler alert: As we'll see below, the true iu-quotient is
q=20/14 ≡iu: 10 (mod 24).
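Both the ambiguity and the spoiler can be seen by brute force over the defining congruence (a one-line Python check):

```python
m = 24
print([q for q in range(m) if (14 * q) % m == 20])   # [10, 22]
```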
Modular iu:division, refined
Fortunately, there is finally a way to immunize division against the ambiguity that has afflicted modular division ever since Gauss. The opportunity is provided by the same fact that caused the problem in the first place: id absorbs all members of its associated group Ëid of transparent eulers. We can use it to neutralize the problem at the problem's very source!
The key is the Transparency Lemma: in shares the absorbency of id, since in is euclidean-divisible by id — which means that in contains id as a factor. But if the id portion of in were stripped from in in the process of separately calculating qi = in . /. id, then n, deprived of its id portion, would no longer be protected against the ambiguity caused by Ëid. In the raw division process, that ambiguity got carried up to the no-longer-protected numerator when multiplying by inv(ëd⋅ud) during the separate production of qu.
The final solution: Do not iu-factor n before multiplying by inv(ud), and don't iu-factor that result before euclidean-dividing by id. Leaving the numerator intact in this way is supported by the
Lemma: If i is an int and i .|. a, then i .|. (a⋅b) provided that gcd(i,b)=1.†
† In the real integers, the lemma's conclusion is true without qualification. In the residual case some qualification is necessary to avoid the product of two ints (such as the int-factors of a and b) in the same prime-power stack becoming a unit — because ints (such as i in the lemma) do not iu-divide units.
In summary, the solution to the Ëid and Ëin ambiguity problems is threefold: Use the unique natural unit factor for ud; keep in unchanged and still connected to un within n — until after un has incorporated inv(ëd⋅ud) while still connected to (never disconnected from) in; finally, do the euclidean division by id to complete the production of q. Formally: q = n/d ≡iu: (n ⋅ inv(ud)) . /. id, defined whenever ud iu-divides un and id euclidean-divides in.
Note: There are minor variations of this algorithm which are not entirely equivalent but might deceptively seem to be. The present refined iu-division algorithm stands quite alone.
e.g. modulus 24: n=20 ≡iu: 4⋅5, d=14 ≡iu: 2⋅7, inv(7)≡7.
ud does divide un: ωud = 1, un⋅ωud = 5⋅1 ≡ 5 = un;
and id does euclidean-divide in: 4 . /. 2 = 2 with 0 remainder.
so q≡iu: (n / 7) . /. 2 ≡ (20) . /. 2 = 10. Unambiguously.
e.g. modulus 360: n=60 ≡iu:12⋅5, d=105 ≡iu: 3⋅35, inv(35)≡35.
ud does divide un: ωud=145, un⋅ωud=5⋅145≡5=un;
and id does euclidean-divide in: 12 . /. 3 = 4 with 0 remainder.
so q ≡iu: (n / 35) . /. 3 ≡ (300) . /. 3 = 100. Unambiguously.
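Both worked examples can be reproduced by a short sketch of the refined procedure (Python; the iu-factorizations of n and d are supplied by hand exactly as above rather than computed here, and the helper name iu_divide is ours):

```python
def iu_divide(n, i_n, u_n, i_d, u_d, m):
    # Identity and inverse of the denominator's unit factor u_d,
    # read off its power sequence as in the earlier sketches.
    u, x = 2, (u_d * u_d) % m
    while x != u_d:
        u, x = u + 1, (x * u_d) % m
    omega_d, inv_ud = pow(u_d, u - 1, m), pow(u_d, u - 2, m)
    if (u_n * omega_d) % m != u_n % m:
        raise ValueError("u_d does not iu-divide u_n")
    if i_n % i_d != 0:
        raise ValueError("i_d does not euclidean-divide i_n")
    return ((n * inv_ud) % m) // i_d   # keep n intact, then euclidean-divide by i_d

print(iu_divide(20, 4, 5, 2, 7, 24))     # 10
print(iu_divide(60, 12, 5, 3, 35, 360))  # 100
```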
Progress on q ↔ d symmetry
A longtime nemesis of modular division has been the failure of various formulations to have the following widely expected "cancellation property":
a⋅b ≡ a⋅c and a ≢ 0 imply b ≡ c.
An equivalent statement is: The congruence a⋅x ≡ a⋅b has x ≡ b as its unique solution if a ≢ 0.
Another is: If r ≡ a⋅b ≢ 0 then r/a ≡ b and r/b ≡ a.
Failure of the first two variations is an unavoidable property of any residue system whose modulus is not prime — that is, any residue system that has ints and/or more than one class of units. The third one notably shares with division the responsibility for its failure, which provides a partial escape for more diverse residue systems.
Better framed in the context of modular division is a property we call
quotient ↔ denominator symmetry
and abbreviate as q ↔ d:
Definition: The ordered triplet (n,d,q) is said to show q ↔ d if d and q each divide n, with n/d ≡ q and n/q ≡ d.
[Note: In the case of iu-division, if either d or q divides n then both divide n with unique quotients, so we often use n/d instead of the triplet (n,d,q) and say "n/d shows (or does not show) q ←iu→ d"].
In past versions of modular division, violations of q ↔ d were common and — for most systems Zm unless m were prime — the violations were incoherently scattered among n/d ratios.
The issue is not new to modular arithmetic. In the Gauss "e formulation" of division, only euler units (residues co-prime with the modulus) could be denominators, but q ←e→ d then required both n and d to be euler units because if n isn't euler, then q ≡ n⋅inv(d) is also not euler, so q didn't e-divide n and q ←e→ d was out of the question. The present "iu formulation" of division has finally rationalized the issue. We can now prove at least three classes of cases — Theorems 6, 7, and 8:
Theorem 6: If n and d are in the same division cluster (either ints or units) then q ←iu→ d.
That's because, with euler denominator, the quotient q is in the same non-euler cluster as n, so q cannot possibly be congruent to the euler d.
The broadest case so far, my favorite, is
Theorem 8: If the centers cn and cd of the division clusters Cn and Cd are in Zm's same prime-power stack ( p, p², ⋅⋅⋅, pᵏ ) with p ≤ cd ≤ cn < pᵏ, then n/d has q ←iu→ d symmetry if and only if d is one of the Size(Cn)-integer-smallest members of Cd.
It's an awkward mouthful to be sure, but it streamlines our understanding of the vast majority of all n/d ratios.
[Note that pᵏ — the top of the p-prime-power stack — is always a unit, and ints never iu-divide units, so the strict inequality cn < pᵏ is required in Theorem 8. The special case of cd = 1 (sic) and cn = pᵏ is covered by Th.7. The special case of cd = cn = pᵏ is covered by Th.6.]
The residues, as presented in a ring's "Chinese Cube", can be rearranged — sorted by integer-size top to bottom within columns, left to right within rows (separately within each division cluster) — as shown below. This makes Th.8 easy to visualize: check the size |Cn| of the numerator's division cluster Cn; then any denominator d among that number |Cn| of integer-smallest residues in the denominator's cluster Cd will show q ←iu→ d symmetry:
e.g. Consider the quotient q ≡ 20/26 modulo 72. The cube shows many things: It shows that the numerator n=20 is an int, not a unit; and the numerator's cluster C20 is closer to the lower-right corner than the denominator's is, so 26 does iu-divide 20; further, numerator n=20 and denominator d=26 have centers in the same prime-power stack 2,4,8; it also shows for the numerator that C20 has |C20| = six residues; and residue 26 is among the smallest six members of C26. Thus, Theorem 8 proves that 20/26 shows q ←iu→ d symmetry; the same is true of 20/2, 20/10, 20/14, 20/22, and 20/34 … but not 20/d for d=38 nor for any of the other five residues integer-greater-than 38 in C26.
Moreover, everything we just said about n=20 is equally true for all the residues in C20. For instance, 28/14 shows q ←iu→ d, but 28/38 does not.
There are other cases of qd-symmetry, involving mixed int⋅unit residues, but they are in the minority. I think the principles are already all in play but the details are tricky.
Conclusion
What iu-division amounts to is not just a small pointless collection of isolated definitions. It is a major natural extension of a thread in number theory initiated by Carl Friedrich Gauss in the early 1800s: the treatment of residues as a number system in its own right, not just an adjunct to the system of real integers. It might even find its way into the growing body of applications of modular number theory.
The following sections of this webpage explore it with greater attention to detail but are not yet fully up to date with this overview. Where differences appear, the overview has it right.
One of the following sections examines residues from the original point of view: Each residue is just an infinite set of euclidean integers, and Zm is a collection of m such sets.
Myself, I prefer to think of residues the way Gauss pointed: Residues are a richly structured category of numbers, essentially independent of the integers.
The older point of view is analogous to focussing on real numbers as infinite series of rational numbers: It's a tool, and sheds some insight, but I find it awkward and distracting.