You've probably heard at least in passing about Loader's number, a truly massive googological monster. If not, here's a brief explanation: Loader's number is one of the largest numbers ever to appear in a serious computational context, and it's famous within the googology community. It was produced in 2002 by programmer Ralph Loader, whose entry won the Bignum Bakeoff, a contest for writing a C program of at most 512 characters that outputs the largest possible number. Loader's program does this by diagonalizing over the Calculus of Constructions, a powerful typed lambda calculus.
Why is it so famous? Not just because it's big, but because it's maximally efficient. Loader's program was so optimized that, in the opinion of many specialists, it approaches the practical limit of what a computable function can express in so few characters. It produces what is likely the largest computable number ever explicitly generated by such a compact program.
The foundation matters too. Although the entry is written in C, what it implements is the Calculus of Constructions, a system that underlies functional programming, proof assistants, and computability theory itself, which gives the number immense "mathematical density." And as the cherry on top, it dwarfs other giants: Loader's number is incomparably larger than many other famous "large numbers," such as the widely publicized Graham's number, or numbers generated at the low levels of the fast-growing hierarchy. Its power resides at very high ordinals.
How Large Is It?
Attempts to describe its size in familiar terms are meaningless. Even writing it as a power tower or in Knuth's up-arrow notation would require a volume many times greater than the observable universe. Its "size" is usually described by its position in the Fast-Growing Hierarchy (FGH).
If f₀(n) = n + 1 is the successor level and f_ω(n) already grows like the Ackermann function, then the function generating Loader's number is estimated at around f_{ψ(Ω_ω)}: a level defined through fundamental sequences far beyond even the extended Veblen functions. This is an incredibly high and complex level of growth, difficult to comprehend.
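To make the hierarchy concrete, here is a minimal sketch of its lowest levels in Python. This is a hypothetical illustration of mine, not part of Loader's program; the names f and f_omega are my own, and the diagonal step uses the standard fundamental sequence ω[n] = n.

```python
# Minimal sketch of the fast-growing hierarchy at its lowest levels.
# f_0(n) = n + 1, f_{k+1}(n) = f_k applied n times to n, f_omega(n) = f_n(n).
def f(k: int, n: int) -> int:
    if k == 0:
        return n + 1
    result = n
    for _ in range(n):  # f_{k+1} iterates f_k exactly n times
        result = f(k - 1, result)
    return result

def f_omega(n: int) -> int:
    return f(n, n)  # diagonalization: already Ackermann-like growth

print(f(1, 3), f(2, 3))  # 6 and 24: f_1(n) = 2n, f_2(n) = n * 2^n
```

Every level up iterates the previous level, and the diagonal f_ω leaps over all the finite levels at once; that is the basic engine behind everything discussed below.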
In summary, in the world of googology it's a kind of "holy grail" among computable numbers: a number obtained by possibly the most efficient method ever devised, representing, at least for now, the practical limit of how large a number a short program in a fundamental computational system can define.
I won't reproduce its C code here, as it's easily searchable, but I will provide my own code and explain why it, too, is a computational monster. It's not as "efficient" or numerically colossal as the giant described above, but it can certainly claim a place in the top ten of the bestiary of googological fantastic beasts. Unlike Loader's code, the code below is deliberately simple and readable; nevertheless, let's analyze it step by step.
Let's go. 🚀 Python BOOM.py
Architecture
A self-sustaining recursive system reaching θ(ε_{Ω+1}) in the fast-growing hierarchy - above TREE(3) and SCG(13)* (growth-rate commentary below)
Function Analysis
1. conway_chain(chain)
Computes Conway chained arrow notation ([a, b, c] stands for a→b→c). This is the basic building block of the code: its core hyperoperation mechanism. Its growth is the most modest of any function here (a length-3 chain a→b→c equals a↑^c b in Knuth's up-arrows, which already climbs toward f_ω(n)), but it's only the beginning.
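As a quick sanity check of what these chains mean, here are two tiny worked reductions. They assume the conway_chain implementation given in full at the end of the post; anything bigger than these toy values is already out of reach.

```python
# 2 -> 3 -> 2 = 2 -> (2 -> 2 -> 2) -> 1 = 2 ** 4 = 16   (i.e. 2↑↑3)
# 3 -> 2 -> 2 = 3 -> (3 -> 1 -> 2) -> 1 = 3 ** 3 = 27   (i.e. 3↑↑2)
assert conway_chain([2, 3, 2]) == 16
assert conway_chain([3, 2, 2]) == 27
```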
2. _nested(step, depth, value)
Triply nested recursion with self-application: two nested loops, each bounded by calls to C(), around recursion on step - 1. The combined growth is approximately f_{ε₀^ω}(n) (just above ε₀, the first fixed point of α ↦ ω^α). Its role in the code is to create exponential amplification through nesting.
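Here's a toy version of mine showing the same control flow, with C replaced by a harmless stand-in (the name _toy_nested and the stand-in are hypothetical; the real C makes this intractable immediately):

```python
def _toy_nested(step, value, C=lambda x: x + 1):
    # Same shape as _nested: two C-bounded loops around recursion on step - 1.
    if step <= 0:
        return value
    cur = value
    for _ in range(C(step)):
        tmp = cur
        for _ in range(C(step)):
            tmp = _toy_nested(step - 1, C(C(tmp)), C)
        cur = _toy_nested(step - 1, tmp, C)
    return cur

print(_toy_nested(2, 1))  # finishes only because the stand-in C is tame
```

Even with C(x) = x + 1, each extra step multiplies how much the level below gets iterated; with the real C, every loop bound is itself astronomically large.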
3. re(n)
A convenient recursive wrapper over _nested (self-diagonalization via _nested). The resulting growth is around f_{ε₀}(n), already far above the Ackermann-Péter function, which sits near f_ω.
4. hyper_scaling(n, iteration)
Exponential scaling through power towers: n^n iterations, each amplified by a tower(math.e) of e's. The growth can be estimated as f_{ω^ω}(n), far beyond simple exponentials. This function significantly amplifies numbers before chains are built from them.
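The tower() helper inside hyper_scaling is easy to feel out in isolation; this standalone copy of it shows that even height 4 already overflows IEEE floats:

```python
import math

def tower(h, b=math.e):
    # b ** b ** ... ** b, h times (right-associative power tower)
    return 1 if h <= 0 else b ** tower(h - 1, b)

print(tower(2))  # e^e ≈ 15.15
print(tower(3))  # e^(e^e) ≈ 3.8 million
# tower(4) raises OverflowError: e^3814279... does not fit in a float
```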
5. scale(n, depth)
Recursive scaling with adaptive chain length: it creates chains of variable complexity, with the chain length itself determined recursively via scale(). Contribution to growth: f_{ω^ω + ω}(n), a combination of scaling and recursion.
6. meta_conway(chain, depth, labels)
As a result, meta_conway builds meta-trees with dynamic labels, in the spirit of TREE. To preempt any thought of plagiarism: this is not a wrapper around the known TREE function, just a mechanism of a similar flavor for trees :). Mechanics: each element spawns subtrees with recursively computed labels, and the total growth reaches f_{φ(ω,0)}(n), the level of combinatorial tree structures. The result: dynamic labels create a combinatorial explosion.
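To get a feel for the branching without evaluating anything huge, here is a hypothetical counter of mine that keeps meta_conway's tree shape but replaces every value computation with a leaf count:

```python
def count_leaves(chain, depth=1, max_depth=3, labels=3):
    # Mirrors meta_conway's branching: each element x spawns `labels` subtrees
    # of size x + label, recursing with one fewer label.
    if depth > max_depth or len(chain) == 1:
        return 1
    total = 0
    for x in chain:
        for label in range(1, labels + 1):
            total += count_leaves([x + label] * (x + label),
                                  depth + 1, max_depth, max(1, labels - 1))
    return total

print(count_leaves([2, 2]))  # 272 leaves already, with depth capped at 3
```

In the real function the depth cap is n^n and every leaf is a full Conway chain evaluation, not a 1.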
7. hyper_conway(n, depth)
Self-sustaining hyper-chains. labels = boom(n - 1) creates a full cyclic dependency, which enables iterative self-amplification (for s in range(strong_chain)) and a growth rate of around f_{θ(Ω^Ω)}(n) in the fast-growing hierarchy, the territory of ordinal collapsing functions.
8. C(n) (a central hub of sorts)
A universal amplifier: it builds recursive iterations of C(n - 1) with chain_up amplification, and it links all components of the program together. The total growth is estimated as f_{φ(Γ₁,0)}(n), essentially a composition of all the amplification mechanisms.
9. boom(n, depth, mode)
The entry point and the final meta-recursive function. Three modes (init/iter/boom) with cascading calls. The recursion depth is itself C(re(n)), so the function is self-sustaining, and the growth level is f_{θ(ε_{Ω+1})}(n), which EXCEEDS TREE(3) and SCG(13).
Brief Functional Summary
- Fully self-sustaining architecture: boom → hyper_conway → boom
- Meta-trees with dynamic labels: a combinatorial explosion almost like in TREE
- Iterative self-amplification: chains are recreated from their own results
- Multi-level diagonalization: re → C → hyper_conway → meta_conway
- Recursion everywhere it's possible and impossible (if you really want to, you can) :)
📊 Fast-Growing Hierarchy (FGH) Comparison Table
Level | Function | Ordinal | Status
---|---|---|---
1 | conway_chain | ω | Base
2 | hyper_scaling | ω^ω | Hyperoperations
3 | re | ε₀ | Self-diagonalization
4 | meta_conway | φ(ω,0) | TREE-level
5 | hyper_conway | θ(Ω^Ω) | Breakthrough
6 | boom | θ(ε_{Ω+1}) | EXCEEDS TREE
🌌 Why This?
This code is a conceptual model: an attempt to implement computations of values from the world of advanced googology in plain Python. The author, of course, does not claim any significant achievement in computability theory or mathematics. The goal is to show how, from basic functions, loops, and basic arithmetic (well, slightly more than basic if we count the Conway chain mechanism, but not fundamentally so), without external libraries or complex constructs, one can create monsters of the highest order of large numbers in 100+ lines of code.
You can run the code with boom(n) for n >= 2 if you really want to, but you won't live to see the result: this code is effectively written for an idealized Turing machine with unbounded time and memory (real Python will sooner hit its recursion limit than finish).
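For the record, a sketch of the only invocations of the code below that actually return (everything else never comes back):

```python
print(boom(0))  # -> 1, via the n <= 1 base case
print(boom(1))  # -> 1, same base case; this is what the __main__ block runs
# boom(2) immediately needs C(re(2)), which tries to materialize numbers
# on the order of 3 -> 3 -> 3 as exact integers. Good luck.
```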
The Code
```python
import math
from typing import List


def _nested(step: int, depth: int, value: int) -> int:
    # Double loop (both bounds come from C(step)) around recursion on step - 1.
    if step <= 0:
        return value
    cur = value
    for i in range(C(step)):
        temp_cur = cur
        for j in range(C(step)):
            number = C(temp_cur)
            temp_cur = _nested(step - 1, depth + 1, C(number))
        cur = _nested(step - 1, depth + 1, temp_cur)
    return cur
def re(n):
    # Self-diagonalization: feeds re(n - 1) into every slot of _nested.
    if n <= 1:
        return n
    if n == 2:
        return C(_nested(C(n), C(n), C(n)))
    return C(_nested(re(n - 1), re(n - 1), re(n - 1)))
def hyper_scaling(n, iteration=0):
    # Power-tower amplification: n ** n rounds of raising cur to a tower-of-e height.
    if iteration > n and n > 0:
        return n

    def scale_map(x, level):
        if level == 0:
            return x

        def tower(h, b=math.e):
            return 1 if h <= 0 else b ** tower(h - 1, b)

        scale_height = max(1, int(abs(x)))
        # int() keeps results integral so downstream range() calls don't choke
        return int(tower(scale_height))

    cur = n
    steps = n ** n
    for i in range(steps):
        cur_scale = scale_map(cur, i)
        next_val = cur ** cur_scale
        cur = hyper_scaling(next_val, iteration + 1)
    return cur
def scale(n, depth=0):
    # Adaptive chain length: the chain's length is itself computed by scale().
    if depth > n:
        return n
    scaled_n = hyper_scaling(n)
    raw_length = conway_chain([scaled_n] * n)
    chain_length = scale(raw_length, depth + 1) or 1
    chain = []
    for i in range(chain_length):
        element = scale(scaled_n + i, depth + 1)
        chain.append(element)
    return conway_chain(chain)
def hyper_conway(n, depth=0):
    # Cyclic dependency: labels come from boom(n - 1), which itself calls C -> hyper_conway.
    if depth > n ** n:
        return n
    labels = boom(n - 1, mode="boom") if n > 1 else n
    chain = [meta_conway([n] * n, depth + 1, max_depth=n ** n, labels=labels)] * n
    strong_chain = conway_chain(chain)
    for s in range(strong_chain):
        chain = [meta_conway([strong_chain] * n, depth + 1, max_depth=n ** n, labels=labels)] * n
    return conway_chain(chain)
def meta_conway(chain: List[int], depth: int = 1, max_depth: int = None, labels: int = 3) -> int:
    # Meta-trees: every element spawns labelled subtrees, and the labels
    # themselves are computed by a recursive meta_conway call.
    if max_depth is None:
        max_depth = len(chain) ** len(chain)
    if depth > max_depth:
        return conway_chain(chain)
    if len(chain) == 1:
        return conway_chain([chain[0]] * chain[0])
    meta_elements = []
    for x in chain:
        sub_arrays = []
        dynamic_labels = meta_conway([x] * x, depth + 1, max_depth, labels) if x > 1 else labels
        for label in range(1, min(dynamic_labels, labels) + 1):
            sub_chain = [(x + label)] * (x + label)
            sub_val = meta_conway(sub_chain, depth + 1, max_depth, labels - 1 if labels > 1 else 1)
            sub_arrays.append(sub_val)
        tree_val = conway_chain(sub_arrays)
        meta_elements.append(tree_val)
    return conway_chain(meta_elements)
def conway_chain(chain: List[int]) -> int:
    # Conway chained arrows, evaluated by the standard reduction rules:
    #   a            =  a
    #   a -> b       =  a ** b
    #   X -> 1       =  X          (a trailing 1 is dropped)
    #   X -> 1 -> z  =  X
    #   X -> b -> z  =  X -> (X -> (b - 1) -> z) -> (z - 1)   for b, z >= 2
    if len(chain) == 1:
        return chain[0]
    if chain[-1] == 1:
        return conway_chain(chain[:-1])
    if len(chain) == 2:
        return chain[0] ** chain[1]
    head, b, z = chain[:-2], chain[-2], chain[-1]
    if b == 1:
        return conway_chain(head)
    inner = conway_chain(head + [b - 1, z])
    return conway_chain(head + [inner, z - 1])
def C(n: int):
    # Universal amplifier: C(n) iterates scale(hyper_conway(...)) C(n - 1) times,
    # the same shape as moving one level up in the fast-growing hierarchy.
    if n <= 1:
        return scale(hyper_conway(n))
    chain_up = n
    for s in range(C(n - 1)):
        chain_up = scale(hyper_conway(chain_up))
    return chain_up
def boom(n, depth=0, mode="boom", is_main_boom=False):
    # Entry point: three cascading modes, recursion depth bounded by C(re(n)).
    if n <= 1:
        return 1
    if depth > C(re(n)) and mode == "init":
        return n
    r = n
    if mode == "init":
        for _ in range(C(re(n))):
            r = C(re(boom(r - 1, depth + 1, "init")))
        return r
    elif mode == "iter":
        cur = boom(C(re(n)), 0, "init")
        for s in range(C(re(cur))):
            cur = boom(C(re(s)), 0, "init")
        return cur
    else:  # "boom"
        for _ in range(C(re(n))):
            r = C(re(boom(r - 1, depth + 1, "boom")))
        cur = r
        for step in range(C(re(n))):
            cur = boom(C(re(step)), 0, "boom")
        if is_main_boom and n > 1:
            for _ in range(boom(C(re(n)), mode="iter")):
                cur = boom(C(re(cur)), mode="boom", is_main_boom=False)
        return cur


if __name__ == "__main__":
    # n = 1 hits the base case and prints 1 instantly; n >= 2 never returns.
    print(boom(1, mode="boom", is_main_boom=True))
```
https://github.com/homoastricus/transrecursivegrowth/blob/main/code/BOOM.py