I built VedaRta entirely on a 3.4GB Android phone with Termux. No GPU. No cloud.
Six novel Vedic mathematical algorithms replace operations that would normally require a GPU:
Sphota Attention — 1,308× faster than O(n²) softmax attention
Urdhva Matmul — 10.2× faster than BLAS on ARM64 (the sutra behind the name is sketched after this list)
Tri-Nadi Activation — Converges where SiLU explodes (loss 0.12 vs ∞)
Shunyam Norm — Zero-centered, no DC drift
Chitta KV Cache — 80% memory reduction
Katapayadi Encoder — Phoneme to vector (the classical digit mapping is sketched after this list)
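For anyone unfamiliar with the sutra behind the matmul's name: Urdhva Tiryagbhyam ("vertically and crosswise") builds each column of a product as the sum of the digit products whose positions add up to that column, with one carry pass at the end. The Python below is a toy illustration of the classical sutra only, not the actual ARM64 kernel in the repo, and the function name is just for the sketch.

```python
def urdhva_multiply(a: int, b: int) -> int:
    """Classical Urdhva Tiryagbhyam ("vertically and crosswise") multiplication.

    Illustrative sketch only; not taken from the VedaRta repo.
    Column k of the product is the sum of all digit products
    da[i] * db[j] with i + j == k, carries applied afterwards.
    """
    da = [int(d) for d in str(a)[::-1]]  # least-significant digit first
    db = [int(d) for d in str(b)[::-1]]

    # "Crosswise" step: independent column sums, no carries yet.
    cols = [0] * (len(da) + len(db) - 1)
    for i, x in enumerate(da):
        for j, y in enumerate(db):
            cols[i + j] += x * y

    # "Vertical" step: a single carry-propagation pass.
    digits, carry = [], 0
    for c in cols:
        carry, d = divmod(c + carry, 10)
        digits.append(d)
    while carry:
        carry, d = divmod(carry, 10)
        digits.append(d)

    return int("".join(map(str, digits[::-1])))


assert urdhva_multiply(1234, 5678) == 1234 * 5678
```

The appeal of the pattern is that the column sums are independent of one another, which lends itself to SIMD; this toy version is only meant to show the sutra itself.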
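Likewise, Katapayadi is the classical Sanskrit scheme that assigns a digit 0 to 9 to each consonant. As a minimal sketch of the phoneme-to-vector idea (the romanized phoneme spellings, the 10-dimensional one-hot layout, and the function name are choices made for this sketch, not the actual encoder in the repo):

```python
# Minimal sketch of a Katapayadi-style phoneme encoder (illustration only).
# The digit table is the classical Katapayadi consonant mapping; everything
# else here is an assumption made for the sketch, not the repo's encoder.
KATAPAYADI_DIGITS = {
    "ka": 1, "kha": 2, "ga": 3, "gha": 4, "nga": 5,
    "ca": 6, "cha": 7, "ja": 8, "jha": 9, "nya": 0,
    "tta": 1, "ttha": 2, "dda": 3, "ddha": 4, "nna": 5,
    "ta": 6, "tha": 7, "da": 8, "dha": 9, "na": 0,
    "pa": 1, "pha": 2, "ba": 3, "bha": 4, "ma": 5,
    "ya": 1, "ra": 2, "la": 3, "va": 4,
    "sha": 5, "ssa": 6, "sa": 7, "ha": 8,
}


def encode_phonemes(phonemes: list[str]) -> list[list[float]]:
    """Map each phoneme to a 10-dim one-hot vector of its Katapayadi digit.

    Phonemes without a consonant value (e.g. standalone vowels) get a zero
    vector in this sketch.
    """
    vectors = []
    for p in phonemes:
        vec = [0.0] * 10
        digit = KATAPAYADI_DIGITS.get(p)
        if digit is not None:
            vec[digit] = 1.0
        vectors.append(vec)
    return vectors


print(encode_phonemes(["ka", "ta", "pa", "ya", "a"]))
```

In the classical system a consonant cluster conventionally takes the value of its final consonant and standalone vowels read as zero; a real encoder has to make those choices explicit.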
To be clear: Sphota Attention is O(n) linear approximate attention; it trades full cross-token interaction for mobile efficiency. Different operation, different trade-off. Honest science matters.
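For readers who haven't seen linear attention before, the generic trick is to apply a feature map phi to the queries and keys and reassociate the matrix products, so the n×n attention matrix is never formed and the cost grows linearly in sequence length. The NumPy sketch below uses the common elu(x)+1 feature map; it is a textbook kernelized-attention illustration of that trade-off, not Sphota Attention itself.

```python
import numpy as np


def linear_attention(q, k, v, eps=1e-6):
    """Generic O(n) kernelized attention (illustration, not Sphota Attention).

    q, k: (n, d) queries/keys; v: (n, d_v) values.
    softmax(Q K^T) V is replaced by phi(Q) @ (phi(K)^T V) with a positive
    feature map phi, so the (n, n) attention matrix is never materialized.
    """
    def phi(x):
        # elu(x) + 1: a common positive feature map for linear attention
        return np.where(x > 0, x + 1.0, np.exp(x))

    q, k = phi(q), phi(k)
    kv = k.T @ v                    # (d, d_v), costs O(n * d * d_v)
    z = k.sum(axis=0)               # (d,) normalizer accumulator
    return (q @ kv) / (q @ z + eps)[:, None]   # (n, d_v)


n, d = 512, 64
q, k, v = (np.random.randn(n, d) for _ in range(3))
out = linear_attention(q, k, v)
assert out.shape == (n, d)
```

With d_v = d = 64 and n = 4096, that works out to roughly 1.7e7 multiply-adds per head instead of about 1.1e9 for the quadratic form, and a causal variant keeps kv and z as running prefix sums over the sequence.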
"Aham Brahmasmi" produces PHI (1.6188) resonance from embeddings.
I trained a 49KB specialist model on the phone in 43 seconds.
GitHub: github.com/divineearthly/VedaRta
Model: huggingface.co/divinesouljoy/VedaRta-0.5B
I'm here to answer questions.