IMO, it is very fast, considering the performance against OpenAI's Rust implementation (wrapped in a Python package). Both the pure JS and WebAssembly (WASM) ports have decent speed; even the interpreted JS version stays roughly within a 10% margin on small and medium texts.
Below are the execution times for text-to-tokens encoding (smaller is better).
- Small text (68 tokens)
Python/Rust (tiktoken 0.5.2) ████ (0.04ms)
Pure JS (js-tiktoken 1.0.8) █████ (0.05ms)
JS/WASM (tiktoken 1.0.11) ██████████ (0.11ms)
@dqbd/WASM 1.0.7 ██████████████████ (0.18ms)
- Medium text (1068 tokens)
Python/Rust (tiktoken 0.5.2) ██████ (0.54ms)
JS/WASM (tiktoken 1.0.11) █████████ (0.78ms)
@dqbd/WASM 1.0.7 █████████ (0.80ms)
Pure JS (js-tiktoken 1.0.8) ██████████ (0.96ms)
- Large text (923942 tokens)
Python/Rust (tiktoken 0.5.2) ████████████████ (359.49ms)
@dqbd/WASM 1.0.7 ████████████████████ (421.71ms)
JS/WASM (tiktoken 1.0.11) ██████████████████████ (451.92ms)
Pure JS (js-tiktoken 1.0.8) █████████████████████████████████████ (1005.69ms)
Tested on an Apple Silicon M1 Pro, with Python 3.11.6 and Node.js 21.2.0. Here's the repo.
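For reference, here is a minimal sketch of how one text-to-tokens timing run can be reproduced in Node.js with the pure JS port. The sample text and measurement code are illustrative assumptions, not the exact harness from the linked repo.

```ts
// Minimal timing sketch, assuming the pure JS port (js-tiktoken).
// The sample text is illustrative; the benchmark repo uses its own fixtures.
import { getEncoding } from "js-tiktoken";

const enc = getEncoding("cl100k_base");            // BPE encoding used by GPT-3.5/GPT-4 models
const text = "The quick brown fox jumps over the lazy dog. ".repeat(100);

const start = performance.now();
const tokens = enc.encode(text);                   // text -> array of token ids
const elapsed = performance.now() - start;

console.log(`${tokens.length} tokens in ${elapsed.toFixed(3)} ms`);
```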
Dependencies tested:
- OpenAI's reference tokeniser (tiktoken 0.5.2) - https://github.com/openai/tiktoken
- Pure JS (js-tiktoken 1.0.8) - https://www.npmjs.com/package/js-tiktoken
- JS/WASM (tiktoken 1.0.11) - https://www.npmjs.com/package/tiktoken
- @dqbd/WASM 1.0.7 - https://www.npmjs.com/package/@dqbd/tiktoken
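A note on the WASM variants: unlike the pure JS package, the WASM-backed `tiktoken` package allocates the encoder in WASM memory and expects an explicit `free()` call when you are done. A short sketch of that usage, assuming the `cl100k_base` encoding:

```ts
// Usage sketch for the WASM-backed `tiktoken` npm package; the explicit
// free() call is the main practical difference from the pure JS port.
import { get_encoding } from "tiktoken";

const enc = get_encoding("cl100k_base");
const tokens = enc.encode("hello world");   // returns a Uint32Array of token ids
enc.free();                                 // release the WASM-side encoder memory
```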