Tabnine Adds Native Support for Apple Silicon (M1)

Last week we released native support for Apple Silicon (M1), bringing our efficient inference engine to the latest Apple architecture. You can read Apple’s M1 announcement here.

The Tabnine Neural Engine running locally (a.k.a. deep-local) is our own implementation of efficient neural inference built on low-level vector intrinsics. The original version of the engine targeted x86 vector instructions (FMA, SSE/AVX).
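To give a concrete picture of what building on such intrinsics looks like, here is a minimal, purely illustrative sketch of an FMA-accelerated dot product, the kind of primitive an inference engine is composed of. The function name and the simplifying assumption that `n` is a multiple of 8 are ours for brevity; this is not our production code:

```c
/* Illustrative x86 path: FMA-accelerated dot product using AVX intrinsics.
 * Compile with -mavx -mfma. Assumes n is a multiple of 8 for brevity. */
#include <immintrin.h>
#include <stddef.h>

float dot_avx_fma(const float *a, const float *b, size_t n) {
    __m256 acc = _mm256_setzero_ps();           /* 8 x float accumulator */
    for (size_t i = 0; i < n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);     /* load 8 floats from a */
        __m256 vb = _mm256_loadu_ps(b + i);     /* load 8 floats from b */
        acc = _mm256_fmadd_ps(va, vb, acc);     /* acc += va * vb (fused) */
    }
    /* Horizontal sum of the 8 lanes. */
    __m128 lo = _mm256_castps256_ps128(acc);
    __m128 hi = _mm256_extractf128_ps(acc, 1);
    __m128 s  = _mm_add_ps(lo, hi);
    s = _mm_hadd_ps(s, s);
    s = _mm_hadd_ps(s, s);
    return _mm_cvtss_f32(s);
}
```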

With the release of Apple Silicon, we extended the engine to support the vector instructions of the M1 processor.

The M1 processor is based on Arm's 128-bit Neon SIMD technology. While Neon registers are narrower than x86's 256-bit AVX registers, the overall throughput for (our kind of) vector operations on Neon is superior to Intel's.
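For comparison, here is the same illustrative dot product written with Neon intrinsics. Each 128-bit register holds 4 floats instead of AVX's 8, which is exactly the register-width difference mentioned above. Again, this is a simplified sketch rather than our production kernel:

```c
/* Illustrative Neon path: the same dot product on 128-bit Neon registers
 * (4 floats per register). Assumes n is a multiple of 4 for brevity. */
#include <arm_neon.h>
#include <stddef.h>

float dot_neon_fma(const float *a, const float *b, size_t n) {
    float32x4_t acc = vdupq_n_f32(0.0f);        /* 4 x float accumulator */
    for (size_t i = 0; i < n; i += 4) {
        float32x4_t va = vld1q_f32(a + i);      /* load 4 floats from a */
        float32x4_t vb = vld1q_f32(b + i);      /* load 4 floats from b */
        acc = vfmaq_f32(acc, va, vb);           /* acc += va * vb (fused) */
    }
    return vaddvq_f32(acc);                     /* horizontal sum (AArch64) */
}
```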

While earlier versions of Tabnine can run under Rosetta and use Tabnine Cloud, running the engine locally on an M1 requires the latest version of Tabnine.
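As an aside, macOS exposes a documented sysctl key, `sysctl.proc_translated`, that lets a process check whether it is running natively or under Rosetta translation. Here is a minimal sketch of that check (our illustration, not necessarily how Tabnine detects it; the key does not exist on older macOS versions):

```c
/* Minimal Rosetta 2 detection sketch using the documented
 * sysctl.proc_translated key on macOS. */
#include <sys/sysctl.h>
#include <stdio.h>

/* Returns 1 if running under Rosetta translation, 0 if native,
 * and -1 on systems where the key does not exist. */
int running_under_rosetta(void) {
    int translated = 0;
    size_t size = sizeof(translated);
    if (sysctlbyname("sysctl.proc_translated", &translated, &size, NULL, 0) == -1)
        return -1;
    return translated;
}

int main(void) {
    printf("translated: %d\n", running_under_rosetta());
    return 0;
}
```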

Most official Tabnine plugins have already been updated to support the M1.

Note that you need to run the native M1 build of your editor for the engine to correctly detect the M1 processor. See the instructions for your IDE below.

