🚀 Atoqu — The Atomic Quantum Search Engine

Yes! It's a Manifesto for the Next Generation of Search

(and yes… v1.2 is coming very soon)

Hey Dev Community 👋

This is not just another “we built a search engine” post.

This is a manifesto — a declaration of intent, a spark, a challenge to the old world of search.

Atoqu is the name.

Atomic + Quantum.

A search engine core built from scratch by OverLab Group — a young, creative, fearless team that believes the future of search should be fast, open, independent, and beautifully engineered.

Let’s dive in.


⚛️ Atomic — True Atomic Architecture

Atoqu is built on a simple philosophy:

“Every component should be small, independent, testable, and replaceable.”

This means:

  • Zero external dependencies
  • No legacy baggage
  • No heavy frameworks
  • No JVM, no Python, no Lucene, no FAISS
  • Pure C++17
  • GPU kernels written by hand
  • Every module is an atomic unit

This is something Elasticsearch, Solr, Vespa, and Lucene simply cannot offer.
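
To make the "atomic unit" idea concrete, here is a minimal sketch of small, independent, replaceable modules behind a narrow interface in plain C++17. This is not Atoqu's actual code; the `Scorer`, `LiteralScorer`, and `Engine` names are invented purely for illustration.

```cpp
// Hypothetical sketch: not the real Atoqu API, just an illustration of
// "small, independent, testable, replaceable" modules behind one narrow contract.
#include <memory>
#include <string>

// Each ranking component exposes a single minimal, unit-testable interface.
struct Scorer {
    virtual ~Scorer() = default;
    virtual double score(const std::string& query,
                         const std::string& document) const = 0;
};

// A trivial literal-match scorer: no framework, no external dependencies.
struct LiteralScorer final : Scorer {
    double score(const std::string& query,
                 const std::string& document) const override {
        return document.find(query) != std::string::npos ? 1.0 : 0.0;
    }
};

// The engine only sees the interface, so any unit can be swapped in isolation.
class Engine {
public:
    explicit Engine(std::unique_ptr<Scorer> scorer) : scorer_(std::move(scorer)) {}
    double rank(const std::string& q, const std::string& d) const {
        return scorer_->score(q, d);
    }
private:
    std::unique_ptr<Scorer> scorer_;
};
```

The point of the sketch is the shape, not the logic: every piece is small enough to test alone and boring enough to replace without touching its neighbors.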


🧬 Quantum — Multi‑Mode, Probabilistic, Future‑Ready Search

Quantum in Atoqu is not a buzzword.

It’s a design principle:

  • Multi‑Mode Ranking
  • Literal + Vector + Hybrid + BM25 + Recency + TagBoost
  • Probabilistic scoring
  • Collapse‑style ranking
  • Parallel evaluation
  • Embedding‑aware search
  • LLM‑ready architecture

Atoqu doesn’t just search — it interprets, evaluates, and adapts.
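
As a rough illustration of what multi-mode, collapse-style ranking can mean in practice, here is a hedged sketch. The struct names, weights, and the recency formula below are assumptions for illustration, not Atoqu's real scoring code.

```cpp
// Hypothetical sketch of multi-mode ranking: each mode produces a partial
// score, and the engine blends them with configurable weights.
#include <cmath>

struct PartialScores {
    double bm25      = 0.0;  // lexical relevance
    double vector    = 0.0;  // embedding cosine similarity, in [0, 1]
    double recency   = 0.0;  // decayed freshness signal, in [0, 1]
    double tag_boost = 0.0;  // editorial / tag-based boost
};

struct HybridWeights {
    double bm25 = 0.5, vector = 0.3, recency = 0.1, tag = 0.1;
};

// Reading "collapse-style" as: evaluate all modes in parallel, then collapse
// them into a single scalar per document.
inline double collapse(const PartialScores& s, const HybridWeights& w) {
    return w.bm25 * s.bm25
         + w.vector * s.vector
         + w.recency * s.recency
         + w.tag * s.tag_boost;
}

// One possible recency signal: exponential decay over document age in days.
inline double recency_score(double age_days, double half_life_days = 30.0) {
    return std::exp(-std::log(2.0) * age_days / half_life_days);
}
```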


⚡ Speed — Beyond Google Core, Beyond Elasticsearch

Let’s be honest:

Google has optimized its search core for 20–30 years.

Elasticsearch has been tuned for more than a decade.

But they carry massive legacy weight.

Atoqu doesn’t.

Atoqu is:

  • Pure C++
  • Zero dependency
  • GPU‑accelerated
  • CUDA kernels
  • OpenCL backend
  • Zero‑copy flattening
  • Memory reuse
  • Atomic architecture

The result?

Atoqu is built like a Tesla Model X Plaid.

Google Core is a 30‑year‑old gasoline car with a turbo strapped on.

One is born fast.

The other is trying to stay fast.
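
To picture what "zero-copy flattening" and "memory reuse" from the list above buy you, here is a small illustrative sketch. The types are hypothetical, not Atoqu's.

```cpp
// Hypothetical sketch of "memory reuse": a scratch buffer allocated once and
// recycled across queries, so the hot path never hits the allocator.
#include <cstddef>
#include <vector>

class ScratchBuffer {
public:
    // Returns a pointer to n floats, growing capacity only if needed.
    float* acquire(std::size_t n) {
        if (data_.size() < n) data_.resize(n);  // grows rarely, then stabilizes
        return data_.data();                    // reused on every call
    }
private:
    std::vector<float> data_;
};

// "Zero-copy flattening": embeddings kept in one contiguous row-major block
// (num_vectors x dim), so the same pointer can be handed straight to a GPU
// upload or a SIMD loop without repacking.
struct FlatVectors {
    std::size_t dim = 0;
    std::vector<float> data;  // size == count() * dim
    std::size_t count() const { return dim ? data.size() / dim : 0; }
    const float* row(std::size_t i) const { return data.data() + i * dim; }
};
```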


🎮 GPU Support — Real, Native, Production‑Ready

Atoqu is one of the very few modern open‑source search engine cores with real GPU support:

  • ✅ CUDA backend (production‑ready)
  • ✅ OpenCL backend (production‑ready host side)
  • ✅ GPU‑aware VectorStore
  • ✅ Cosine similarity on GPU
  • ✅ Batch processing
  • ✅ Buffer reuse
  • ✅ Safe stubs for Vulkan / Metal / SYCL / HIP

Elasticsearch? No.

Lucene? No.

DuckDuckGo? No.

Brave? No.

Google? Yes — but behind closed doors.

Atoqu brings GPU search to the open‑source world.
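
For readers who have not written GPU search code before, here is a minimal sketch of batched cosine similarity as a hand-written CUDA kernel. It illustrates the technique only; it is not Atoqu's production kernel, and the function names are invented.

```cpp
// Hypothetical CUDA sketch: one thread scores one document vector against the
// query. Query and documents are flat row-major float arrays on the device.
#include <cuda_runtime.h>

__global__ void cosine_similarity(const float* __restrict__ query,  // [dim]
                                  const float* __restrict__ docs,   // [n * dim]
                                  float* __restrict__ out,          // [n]
                                  int n, int dim) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    const float* d = docs + static_cast<long long>(i) * dim;
    float dot = 0.f, qq = 0.f, dd = 0.f;
    for (int k = 0; k < dim; ++k) {
        dot += query[k] * d[k];
        qq  += query[k] * query[k];
        dd  += d[k] * d[k];
    }
    out[i] = dot / (sqrtf(qq) * sqrtf(dd) + 1e-12f);
}

// Host side: cover all documents with enough blocks, reusing pre-allocated
// device buffers (d_query, d_docs, d_out) across queries instead of
// reallocating per request.
void score_batch(const float* d_query, const float* d_docs, float* d_out,
                 int n, int dim, cudaStream_t stream) {
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    cosine_similarity<<<blocks, threads, 0, stream>>>(d_query, d_docs, d_out, n, dim);
}
```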


🧠 LLM‑Ready — Built for the Future

Atoqu includes:

  • HashEmbeddingProvider
  • LlmEmbeddingProvider
  • Semantic search
  • Hybrid ranking
  • Embedding‑aware scoring

Plug in any embedding model — local or cloud — and Atoqu adapts instantly.
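
Since the actual HashEmbeddingProvider / LlmEmbeddingProvider interfaces are not published yet, here is a hypothetical sketch of what a pluggable embedding provider could look like. Every signature below is an assumption for illustration only.

```cpp
// Hypothetical sketch of a pluggable embedding provider. The real Atoqu
// interfaces may differ; this only shows the "plug in any model" idea.
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

struct EmbeddingProvider {
    virtual ~EmbeddingProvider() = default;
    virtual std::size_t dim() const = 0;
    virtual std::vector<float> embed(const std::string& text) const = 0;
};

// A dependency-free baseline: hash bytes (FNV-1a) into a fixed-size vector.
// Useful for tests and cold starts before a real model is wired in.
struct HashEmbedding final : EmbeddingProvider {
    explicit HashEmbedding(std::size_t d = 256) : d_(d) {}
    std::size_t dim() const override { return d_; }
    std::vector<float> embed(const std::string& text) const override {
        std::vector<float> v(d_, 0.f);
        std::uint64_t h = 0xcbf29ce484222325ull;
        for (unsigned char c : text) {
            h = (h ^ c) * 0x100000001b3ull;
            v[h % d_] += 1.f;   // bump one bucket per byte
        }
        return v;
    }
private:
    std::size_t d_;
};
```

A cloud- or local-LLM-backed provider would implement the same interface and simply call out to the model; the rest of the engine never needs to know which one is plugged in.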


🔮 OLFCE — OverLab Forward Compatibility Engine

This is one of Atoqu’s secret weapons.

Big companies struggle with:

  • Legacy APIs
  • Backward compatibility
  • Old systems they can’t remove
  • Technical debt that grows forever

Atoqu solves this with OLFCE:

  • Old APIs can be removed safely
  • New APIs automatically get aliases
  • Old code keeps working
  • New code stays clean
  • Zero legacy debt
  • Zero backward‑compatibility nightmares

This is something Google, Bing, Elasticsearch, and Brave simply cannot do.
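
OLFCE's internals are not public yet, so the following is purely a guess at the shape of the idea: a registry where old API names become thin aliases over the current implementation, keeping old call sites alive without polluting new code. All names below are hypothetical.

```cpp
// Hypothetical sketch of a forward-compatibility alias layer: old entry-point
// names resolve to the current handler, so they can be retired without
// breaking callers. Not OLFCE's real mechanism.
#include <functional>
#include <stdexcept>
#include <string>
#include <unordered_map>

class ApiRegistry {
public:
    using Handler = std::function<std::string(const std::string&)>;

    void register_api(const std::string& name, Handler h) {
        handlers_[name] = std::move(h);
    }
    // An alias is just a second name resolving to the same current handler.
    void register_alias(const std::string& old_name, const std::string& new_name) {
        handlers_[old_name] = handlers_.at(new_name);
    }
    std::string call(const std::string& name, const std::string& arg) const {
        auto it = handlers_.find(name);
        if (it == handlers_.end()) throw std::runtime_error("unknown API: " + name);
        return it->second(arg);
    }
private:
    std::unordered_map<std::string, Handler> handlers_;
};

// Usage sketch: "search_v1" keeps working, but it is only an alias.
// registry.register_api("search", [](const std::string& q) { return run(q); });
// registry.register_alias("search_v1", "search");
```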


🧱 Full Feature List (v1.2)

✅ Core Engine

  • DocumentStore
  • VectorStore (GPU‑aware)
  • Multi‑mode ranking
  • Embedding providers
  • Optimizer
  • Indexer

✅ GPU Backends

  • CUDA
  • OpenCL
  • Vulkan (stub)
  • Metal (stub)
  • SYCL (stub)
  • HIP (stub)

✅ Modes

  • NormalMode
  • LiteralMode
  • VectorMode
  • HybridMode
  • BM25Mode
  • RecencyMode
  • TagBoostMode

✅ Tooling

  • CMake
  • Makefile
  • Docker (CPU/GPU)
  • GitHub Actions
  • GitLab CI
  • Sanitizers
  • Static Analysis
  • Benchmarks

✅ Documentation

  • Doxygen
  • Sphinx
  • Breathe
  • Architecture docs
  • Full changelog

✅ License

Apache 2.0 (tentative — see community questions below)


🗣️ Community Questions

Atoqu is not released yet — and we want your voice in shaping it.

✅ 1) What license should Atoqu use?

We’re considering:

  • Apache 2.0
  • MPL 2.0
  • GPLv3
  • AGPLv3

Which one do you think fits a next‑generation search engine core?


✅ 2) How fast do you think Atoqu is compared to Google Core?

We know our internal benchmarks.

But we want to hear your predictions:

  • 2× faster?
  • 5× faster?
  • 10× faster?
  • 20× faster?
  • More?

Drop your guess in the comments — let’s see what the community thinks.


✅ 3) What features do you want in v2.0?

We’re planning:

  • Distributed indexing
  • Web crawler
  • Sharding
  • Query planner
  • Ranking pipeline
  • Caching layer
  • Quantum‑inspired search algorithms

But we want your ideas too.


🤝 Join the Atoqu Community

When Atoqu v1.2 is released, we invite you to:

✅ Explore the code

✅ Read the architecture

✅ Run the benchmarks

✅ Test GPU kernels

✅ Open issues

✅ Submit PRs

✅ Star the repo

✅ Join discussions

✅ Help shape the future of search

Atoqu is not just a project.

It’s a movement.


🙏 Thank You — and See You Soon

Thanks for reading this manifesto.

Thanks for being part of this journey.

Thanks for believing that search can be faster, cleaner, smarter, and more open.

Atoqu v1.2 is coming soon.

Stay atomic. Stay quantum. Stay curious.

— OverLab Group

— Atoqu Project

🚀💛
