zayoka

I built a custom Deep Learning framework in pure Rust just to simulate Arknights: Endfield gacha luck (Talos-XII)

Hello everyone,

I've been working on Talos-XII, which started as a simple idea to simulate gacha pulls for Arknights: Endfield, but eventually turned into a massive optimisation rabbit hole.

Instead of just using standard Python bindings or a basic RNG, I decided to over-engineer the hell out of it. I built a custom Deep Learning engine entirely in Rust to run the simulation agents.

The goal? To use reinforcement learning (PPO and DQN agents) to find the best pulling strategies for F2P and monthly-card players.
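
To give a flavour of what "learning a pulling strategy" means, here is a deliberately tiny stand-in: a tabular, bandit-style value update on a two-action "pull now vs. save" decision. Talos-XII uses PPO/DQN with function approximation; this sketch only shows the shape of the value update, and every name in it is mine, not the project's API.

```rust
// Toy value-learning sketch (NOT Talos-XII's PPO/DQN implementation).
// One-step bandit-style update: Q(a) <- Q(a) + alpha * (r - Q(a)).
fn q_update(q: &mut [f64; 2], action: usize, reward: f64, alpha: f64) {
    q[action] += alpha * (reward - q[action]);
}

fn main() {
    let mut q = [0.0f64; 2]; // q[0] = "save", q[1] = "pull now"
    // Pretend average payoffs: saving yields 0.5, pulling now yields 0.2.
    // (Made-up numbers purely for the demo.)
    for _ in 0..1000 {
        q_update(&mut q, 0, 0.5, 0.1);
        q_update(&mut q, 1, 0.2, 0.1);
    }
    // The agent's value estimates converge toward the true payoffs,
    // so it learns that saving is the better action in this toy setup.
    assert!(q[0] > q[1]);
    println!("Q = {q:?}");
}
```

The real agents replace the lookup table with a network and the fixed rewards with simulated pull outcomes, but the update rule's error-driven shape is the same.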

Some technical implementation details for the Rustaceans here:

No Python: The core engine is pure Rust. I wrote a custom reverse-mode Autograd system that feels a bit like PyTorch but without the bloat.
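
For anyone curious what "reverse-mode autograd" boils down to, here is a minimal tape-based scalar sketch. This is my own illustration, not Talos-XII's actual code or API: record each op with its local gradients, then walk the tape backwards applying the chain rule.

```rust
// Minimal tape-based reverse-mode autograd for scalars (illustrative only).
#[derive(Clone, Copy)]
struct Node {
    parents: [usize; 2],     // indices of the inputs on the tape
    local_grads: [f64; 2],   // d(output)/d(input) for this op
}

struct Tape {
    nodes: Vec<Node>,
    values: Vec<f64>,
}

impl Tape {
    fn new() -> Self {
        Tape { nodes: Vec::new(), values: Vec::new() }
    }

    fn leaf(&mut self, value: f64) -> usize {
        self.push(value, [0, 0], [0.0, 0.0])
    }

    fn push(&mut self, value: f64, parents: [usize; 2], local_grads: [f64; 2]) -> usize {
        self.nodes.push(Node { parents, local_grads });
        self.values.push(value);
        self.nodes.len() - 1
    }

    fn add(&mut self, a: usize, b: usize) -> usize {
        let v = self.values[a] + self.values[b];
        self.push(v, [a, b], [1.0, 1.0]) // d(a+b)/da = d(a+b)/db = 1
    }

    fn mul(&mut self, a: usize, b: usize) -> usize {
        let (va, vb) = (self.values[a], self.values[b]);
        self.push(va * vb, [a, b], [vb, va]) // d(ab)/da = b, d(ab)/db = a
    }

    // Reverse pass: accumulate d(output)/d(node) back along the tape.
    fn grad(&self, output: usize) -> Vec<f64> {
        let mut grads = vec![0.0; self.nodes.len()];
        grads[output] = 1.0;
        for i in (0..=output).rev() {
            let node = self.nodes[i];
            for k in 0..2 {
                let contrib = node.local_grads[k] * grads[i];
                grads[node.parents[k]] += contrib;
            }
        }
        grads
    }
}

fn main() {
    // f(x, y) = x * y + x  =>  df/dx = y + 1, df/dy = x
    let mut t = Tape::new();
    let x = t.leaf(3.0);
    let y = t.leaf(4.0);
    let xy = t.mul(x, y);
    let f = t.add(xy, x);
    let g = t.grad(f);
    println!("f = {}, df/dx = {}, df/dy = {}", t.values[f], g[x], g[y]);
    // f = 15, df/dx = 5, df/dy = 3
}
```

A real engine generalises the same idea to tensors and many more ops, but the tape-then-reverse-walk structure is the whole trick.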

Performance: I'm abusing Rayon to parallelise tensor ops, and I hand-wrote SIMD kernels (AVX2 on x86, NEON on ARM) to speed up the critical paths.
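
The data-parallel pattern here is "split the tensor into chunks, fan them out across threads". A stdlib-only sketch of that pattern (Rayon's par_chunks_mut does the same thing with far less ceremony, plus work-stealing; this is my illustration, not the project's kernel code):

```rust
// Chunk-parallel element-wise ReLU using only std scoped threads.
// Illustrative stand-in for what Rayon's par_chunks_mut automates.
fn parallel_relu(data: &mut [f32], n_threads: usize) {
    let n = n_threads.max(1);
    let chunk = ((data.len() + n - 1) / n).max(1); // ceil-divide into n chunks
    std::thread::scope(|s| {
        for slice in data.chunks_mut(chunk) {
            // Each thread gets a disjoint &mut slice, so no locking is needed.
            s.spawn(move || {
                for x in slice.iter_mut() {
                    *x = x.max(0.0);
                }
            });
        }
    }); // scope joins all threads before returning
}

fn main() {
    let mut v = vec![-1.0f32, 2.0, -3.0, 4.0];
    parallel_relu(&mut v, 2);
    println!("{:?}", v); // [0.0, 2.0, 0.0, 4.0]
}
```

The SIMD kernels then vectorise the inner loop of each chunk (8 f32 lanes at a time with AVX2), so the two optimisations compose: threads across chunks, lanes within a chunk.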

The Model: It uses a Deep Belief Network (DBN) for environment noise simulation and a Transformer backbone for the agent.

Optimisation: I actually implemented some ideas from the DeepSeek mHC (Manifold-Constrained Hyper-Connections) paper for the optimiser design, which was a fun challenge to port over.

It basically simulates millions of pulls to estimate how likely you are to get the UP character using only free resources (a Neural Luck Optimiser, essentially).
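
The core Monte Carlo question ("what's the chance I get the rate-up within my budget?") can be sketched in a few lines. The rates below (0.8% base 6★, hard pity at 80, a 50/50 rate-up, no soft pity) are placeholder assumptions for the demo, not Endfield's published numbers, and the tiny LCG is only there to keep the sketch dependency-free:

```rust
// Monte Carlo estimate of P(get the rate-up within `budget` pulls).
// All rates are ILLUSTRATIVE placeholders, not real Endfield rates.
struct Lcg(u64);

impl Lcg {
    // Simple 64-bit LCG: fine for a demo, not for serious statistics.
    fn next_f64(&mut self) -> f64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

fn p_rate_up_within(budget: u32, trials: u32, seed: u64) -> f64 {
    let mut rng = Lcg(seed);
    let mut hits = 0u32;
    for _ in 0..trials {
        let mut pity = 0; // pulls since the last 6-star
        let mut got = false;
        for _ in 0..budget {
            pity += 1;
            // Assumed: 0.8% base rate, guaranteed 6-star at pity 80.
            let six_star = pity >= 80 || rng.next_f64() < 0.008;
            if six_star {
                pity = 0;
                if rng.next_f64() < 0.5 {
                    got = true; // won the assumed 50/50 rate-up
                    break;
                }
            }
        }
        if got {
            hits += 1;
        }
    }
    hits as f64 / trials as f64
}

fn main() {
    let p = p_rate_up_within(120, 100_000, 42);
    println!("P(rate-up within 120 pulls) ~= {p:.3}");
}
```

Swap in the real published rates (and soft pity, if any) and this becomes the "how much Unpolished Crystal do I need" calculator; the RL layer then sits on top, deciding *when* to spend that budget.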

If you are interested in Rust DL frameworks or just want to see how much "Unpolished Crystal" you need to save, feel free to check it out.

Note: It is CLI-only for now. I haven't built a GUI. Apologies!

Repo: https://github.com/zayokami/Talos-XII

Reference paper: https://arxiv.org/abs/2512.24880

(Big shoutout to the DeepSeek team for their mHC paper—it was a huge reference for the project's optimiser design!)
