David Haley

Using TensorFlow 2.8 on an Apple Silicon arm64 chip

My computer recently had an unfortunate interface with dihydrogen monoxide. To be determined whether it will come back to life, but it's not looking good. So, I bought a new MacBook, which means the M3 chip (arm64). I had a nice experience using the M1 at my previous job, so I was looking forward to it.

Of course 😩 the x86 vs arm architecture issues started immediately when I tried using TensorFlow.

Here's how I fixed it. The pull request: deepcell-imaging#229

DeepCell uses TF 2.8, so that's what we have to use. Unfortunately, the 2.8.4 package doesn't come with ARM binaries. Incidentally, TF 2.16.1 does have arm64 binaries ... but I can't use it here 😑
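
If you want to confirm what architecture your Python is actually running on, a one-liner does it (just a sanity check, nothing DeepCell-specific):

python -c "import platform; print(platform.machine())"

A native arm64 Python prints arm64; under Rosetta it prints x86_64, which explains a lot of mysterious wheel errors.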

Apple has some documentation for installing TensorFlow and the "metal" plugin. In particular,

For TensorFlow version 2.12 or earlier:

python -m pip install tensorflow-macos

In our case we need tensorflow-macos==2.8.0, as found in the tensorflow-macos release history. Unfortunately, the files list reveals there's no Python 3.10 distribution, so I need to downgrade to Python 3.9.
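
Any Python 3.9 works; here's one way to get there, assuming you use pyenv (conda or a direct installer is fine too):

pyenv install 3.9.19
pyenv local 3.9.19
python --version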

As for tensorflow-metal, the package documentation says we need v0.4.0.

I packaged a new requirements file for Mac arm64 users:

$ cat requirements-mac-arm64.txt
tensorflow-macos==2.8.0
tensorflow-metal==0.4.0

Then you install the Mac requirements:

pip install -r requirements-mac-arm64.txt

Of course, the shenanigans don't stop! Running pip install -r requirements.txt fails to install DeepCell, because it depends on tensorflow – not tensorflow-macos (which provides the same Python module, tensorflow).
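
The distinction is between the distribution name pip resolves and the module name you import. You can see it with importlib.metadata (Python 3.8+): tensorflow-macos is installed, but the tensorflow distribution DeepCell asks for isn't.

python -c "from importlib.metadata import version; print(version('tensorflow-macos'))"
# prints 2.8.0
python -c "from importlib.metadata import version; print(version('tensorflow'))"
# raises PackageNotFoundError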

So I ran it this way, installing what we could first and then skipping the dependency checks:

pip install -r requirements-mac-arm64.txt
pip install -r requirements.txt
pip install -r requirements.txt --no-deps

Then I got an interesting protobuf failure.

If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
 1. Downgrade the protobuf package to 3.20.x or lower.
 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates

Quick fix: grab the most recent 3.20.x protobuf version, 3.20.3.
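
Concretely, that's one more pin (you could also add protobuf==3.20.3 to requirements-mac-arm64.txt so it survives the next rebuild):

pip install protobuf==3.20.3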

Apple provides a test script:

import tensorflow as tf

cifar = tf.keras.datasets.cifar100
(x_train, y_train), (x_test, y_test) = cifar.load_data()
model = tf.keras.applications.ResNet50(
    include_top=True,
    weights=None,
    input_shape=(32, 32, 3),
    classes=100,)

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)
model.compile(optimizer="adam", loss=loss_fn, metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=64)

One 180 MB model download later … we're golden.

2024-06-05 23:10:30.794862: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.

Just to confirm, let's check Activity Monitor – and yes, it's using the GPU. 🎉 😤

Screenshot of the Activity Monitor showing GPU % usage and GPU time used.
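
If you'd rather confirm from the terminal than from Activity Monitor, TensorFlow can list its visible devices (the Metal plugin shows up as a GPU:0 device); it should print something like [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]:

python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"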

Phew. Well, hopefully this is a one-time thing. Most of our development is in the cloud, which is x86, the more common architecture.

Until our next adventure with binaries ✌

Cover image by Kari Shea on Unsplash
