Hey Dev Community! 👋
So… I built something. Something atomic. Something quantum. Something that makes your laptop fan spin like it’s about to take off to Mars. 🌌
Meet Atoqu — the Atomic + Quantum-inspired search engine core in C++.
It’s not just a search engine. It’s THE search engine.
👉 GitHub: https://github.com/OverLab-Group/atoqu
Go star ⭐, fork 🍴, open issues 🐛, submit PRs 🔥, and let’s make history together.
☢️ Atomic? Like Nuclear? But Safer (and Cooler)
“It’s Atomic, like Nuclear, but not! 💣”
No mushroom clouds here — just Atomic LEGOs.
- Each search mode is an independent atomic unit.
- You can snap them together like LEGO bricks 🧱.
- Want keyword search? Plug in LiteralMode.
- Want embeddings? Snap in VectorMode.
- Want both? Smash them together in HybridMode.
Atomic = modular, composable, indestructible.
Basically, if LEGO and Iron Man had a baby, it would be Atoqu. 🦾
⚛️ Quantum? Schrödinger’s Search Engine 🐱
Quantum-inspired means parallel evaluation.
Multiple modes exist at once — like Schrödinger’s cat. 🐱⚛️
- Atoqu evaluates all modes in parallel.
- Then collapses them into a weighted ranking.
- Your query decides which mode “lives” and which “dies.”
Quantum = superposition of search strategies.
It’s like having Google, Bing, DuckDuckGo, and your grandma’s recipe book all answer at once. 🍲
🛠️ Core Architecture (v1.2)
Engine:
- AtoquEngine
Stores:
- DocumentStore (tags + metadata)
- VectorStore (cosine similarity, CPU + optional GPU acceleration)
Modes:
- NormalMode — browser-style, classic search
- LiteralMode — keyword search
- VectorMode — embedding-based search
- HybridMode — combined scoring
- BM25Mode — term-based ranking
- RecencyMode — time-aware ranking
- TagBoostMode — tag-aware ranking
Embeddings:
- HashEmbeddingProvider — deterministic, dependency-free
- LlmEmbeddingProvider — config-driven, LLM-ready
GPU Backends:
- CUDA — production-ready cosine similarity
- OpenCL — skeleton, safe CPU fallback
- Vulkan, Metal, SYCL, HIP — safe stubs, fail-closed
⚡ GPU Acceleration (Turbo Mode)
Here’s the magic: Atoqu can offload heavy vector similarity operations to your GPU.
That means faster searches, less CPU pressure, and more bragging rights. 🏎️
Step 1: Configure GPU backend
Edit config/gpu.json:
```json
{
  "backend": "cuda",
  "dimension": 384,
  "max_docs_per_batch": 8192
}
```
Supported values:
- "none" (default, CPU-only)
- "cuda" (NVIDIA GPUs)
- "opencl" (cross-vendor)
- "vulkan" (stub)
- "metal" (stub)
- "sycl" (stub)
- "hip" (stub for AMD)
If initialization fails, Atoqu falls back to CPU-only vector search.
No crashes, no drama. 🎭
🎛️ Modes Configuration (So Simple It’s Almost Funny)
All modes are controlled by config/modes.json.
Here’s the default setup:
```json
[
  { "name": "NormalMode",   "enabled": true,  "weight": 1.0 },
  { "name": "HybridMode",   "enabled": true,  "weight": 1.0 },
  { "name": "LiteralMode",  "enabled": true,  "weight": 0.8 },
  { "name": "VectorMode",   "enabled": true,  "weight": 0.8 },
  { "name": "BM25Mode",     "enabled": true,  "weight": 0.9 },
  { "name": "RecencyMode",  "enabled": false, "weight": 0.5 },
  { "name": "TagBoostMode", "enabled": false, "weight": 0.5 }
]
```
Want to boost fresh results? Flip "RecencyMode" to true.
Want tags to matter more? Enable "TagBoostMode".
It’s literally editing a JSON file. That’s it. 📝
🧑💻 Building Atoqu (All Methods)
Method 1: Classic CMake (CPU-only)
```bash
mkdir -p build
cd build
cmake ..
cmake --build .
./atoqu --http 8080
```
Method 2: CMake with GPU support
```bash
mkdir -p build
cd build
cmake -DUSE_GPU=ON ..
cmake --build .
./atoqu --http 8080
```
Make sure your GPU backend is set in config/gpu.json.
Method 3: Docker (CPU-only)
```bash
docker build -t atoqu:cpu .
docker run -p 8080:8080 atoqu:cpu
```
Method 4: Docker (GPU-enabled with NVIDIA)
```bash
docker build -t atoqu:gpu .
docker run --gpus all -p 8080:8080 atoqu:gpu
```
Method 5: Docker Compose (CPU + GPU variants)
```bash
docker-compose -f docker-compose.cpu.yml up
docker-compose -f docker-compose.gpu.yml up
```
Method 6: GitHub Actions / GitLab CI
CI pipelines are already set up.
Push your fork, and tests will run automatically:
- Unit tests
- Integration tests
- GPU-aware tests
- Sanitizers
- Static analysis
📚 Documentation
Generate docs with one flag:
```bash
mkdir -p build
cd build
cmake -DATOQU_BUILD_DOCS=ON ..
cmake --build . --target docs
```
Docs will appear in:
- Doxygen HTML: docs/doxygen/html/
- Sphinx HTML: docs/sphinx/_build/html/
🗺️ Roadmap
- Short-term: finish GPU kernels, expand docs.
- Mid-term: smarter HybridMode, pluggable storage backends.
- Long-term: full-featured search service, benchmarks vs existing engines, plugin ecosystem.
Basically: today it’s a search engine core. Tomorrow it’s Skynet. 🤖 (but friendly, promise).
😂 Fun Facts
- If your laptop fan spins up, that’s Atoqu saying “hi” in CUDA.
- If it doesn’t, that’s Atoqu saying “hi” in CPU mode.
- Either way, it’s polite. 😎
🎯 Call to Action (Do It Now!)
Atoqu is open-source, reference-grade, and waiting for your contributions.
- ⭐ Star it — show support.
- 🍴 Fork it — experiment with modes.
- 🛠️ Open issues — ask questions, propose features.
- 🔥 Submit PRs — tests, docs, kernels, optimizations.
- 📊 Benchmark it — share your results.
👉 https://github.com/OverLab-Group/atoqu
💡 Final thought:
Atoqu is Atomic (like LEGOs), Quantum (like Schrödinger’s cat), and GPU-accelerated (like a rocket).
It’s more powerful than nuclear bombs — but instead of destruction, it builds the future of search. 💥🧪🚀