AI models like LLMs don’t interpret words as text. Each word (or token) is converted into a numeric vector called an embedding — a dense representation in a high-dimensional vector space, often with hundreds or thousands of dimensions.
🔢 For example:
“cat” → [0.2, -1.3, 4.7, …]
“car” → [0.1, -1.2, 4.5, …]
Vectors that are close together indicate words with similar meanings or contexts. So “king” and “queen” have similar vectors, while “airplane” is distant from “dog.”
This vectorization enables calculations like cosine similarity and vector arithmetic for analogies (e.g. “king” − “man” + “woman” ≈ “queen”). It’s the foundation that lets models generate text and code, and answer questions coherently.
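A minimal sketch of those two operations, using tiny hand-made 4-dimensional vectors (real embeddings come from a trained model and have far more dimensions — these values are invented purely for illustration):

```python
import numpy as np

# Toy "embeddings" — made-up values for illustration only.
emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.2]),
    "queen": np.array([0.9, 0.1, 0.8, 0.2]),
    "man":   np.array([0.1, 0.9, 0.0, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9, 0.1]),
}

def cosine_similarity(a, b):
    # cos(theta) = (a · b) / (|a| · |b|), ranges from -1 to 1;
    # closer to 1 means the vectors point in similar directions.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Vector arithmetic for the analogy: king - man + woman ≈ ?
target = emb["king"] - emb["man"] + emb["woman"]

# Find the word whose vector is closest to the target.
best = max(emb, key=lambda w: cosine_similarity(emb[w], target))
print(best)  # → queen (with these toy vectors)
```

With real embeddings the idea is the same, just in a much higher-dimensional space: nearby vectors share meaning, and directions in the space can capture relationships like gender or tense.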
✨ Ultimately, AI is pure math transforming language into complex numerical patterns.