Tinygrad: A Simple Deep Learning Framework

Introduction to tinygrad: A Simple Deep Learning Framework

tinygrad is a deep learning framework that aims to provide a balance between the simplicity of karpathy/micrograd and the functionality of PyTorch. Maintained by tiny corp, tinygrad is designed to be an easy-to-use framework for adding new accelerators and supports both inference and training. While it may not be the most advanced deep learning framework available, it offers a straightforward and accessible solution for developing machine learning models. In this article, we will explore the features, architecture, and potential applications of tinygrad.

Features

LLaMA and Stable Diffusion

One notable feature of tinygrad is that it can run the LLaMA language model and the Stable Diffusion image generator. Running full-scale models like these demonstrates that the framework is usable well beyond toy examples. To learn more, refer to the LLaMA showcase and Stable Diffusion showcase in the documentation.
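
Both showcases are driven by scripts in the repository's examples/ directory. Assuming you have cloned the repo and obtained the required model weights, they are typically launched along these lines (exact paths and flags vary between tinygrad versions, so treat this as a sketch rather than the canonical invocation):

python3 examples/stable_diffusion.py # generate an image with Stable Diffusion
python3 examples/llama.py # run LLaMA inference (weights must be downloaded separately)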

Laziness

tinygrad leverages laziness to optimize computations. For example, the matrix multiplication (matmul) below is written as a broadcasted elementwise multiply followed by a sum, yet tinygrad fuses the whole expression into a single kernel, resulting in efficient execution. Consider the following code snippet:

DEBUG=3 python3 -c "from tinygrad.tensor import Tensor;
N = 1024; a, b = Tensor.rand(N, N), Tensor.rand(N, N);
c = (a.reshape(N, 1, N) * b.permute(1,0).reshape(1, N, N)).sum(axis=2);
print((c.numpy() - (a.numpy() @ b.numpy())).mean())"

By running the code with the DEBUG environment variable set to 3, you can see that the whole expression executes as a single fused kernel. Increasing DEBUG to 4 additionally prints the generated kernel code. This laziness allows tinygrad to optimize deep learning models efficiently.
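
As a quick illustration of the higher verbosity level, a similar one-liner can be rerun with DEBUG=4; the exact output format differs between tinygrad versions, but it should include the generated kernel source:

# DEBUG=4 also dumps the generated kernel code for each fused operation
DEBUG=4 python3 -c "from tinygrad.tensor import Tensor; print((Tensor.rand(8, 8) + Tensor.rand(8, 8)).relu().numpy())"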

Neural Networks

tinygrad recognizes that a significant portion of building neural networks relies on a reliable autograd/tensor library. With tinygrad, you can easily construct neural networks using the available tensor operations and autograd capabilities. The following example demonstrates the construction and training of a neural network using tinygrad:

from tinygrad.tensor import Tensor
import tinygrad.nn.optim as optim

class TinyBobNet:
  def __init__(self):
    self.l1 = Tensor.uniform(784, 128)
    self.l2 = Tensor.uniform(128, 10)

  def forward(self, x):
    return x.dot(self.l1).relu().dot(self.l2).log_softmax()

model = TinyBobNet()
optim = optim.SGD([model.l1, model.l2], lr=0.001)

# ... complete data loader here

out = model.forward(x)
loss = out.mul(y).mean()
optim.zero_grad()
loss.backward()
optim.step()

In this example, a simple neural network, TinyBobNet, is defined with two linear layers (l1 and l2). The forward method specifies the forward pass of the network. The example also demonstrates the usage of an optimizer, SGD, and the typical training loop involving forward and backward passes.
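
The x and y used in the training step come from whatever data loader you plug in. As a minimal sketch of filling that gap (the batch size, shapes, and the negative one-hot encoding of y are assumptions chosen to match the log_softmax/mul/mean loss above, not code from the tinygrad repository), random data could stand in like this:

import numpy as np
from tinygrad.tensor import Tensor

batch_size, num_classes = 64, 10

# fake batch of flattened 28x28 inputs
x = Tensor(np.random.rand(batch_size, 784).astype(np.float32))

# negative one-hot labels, so out.mul(y).mean() acts as a negative log-likelihood loss
labels = np.random.randint(0, num_classes, size=batch_size)
y_np = np.zeros((batch_size, num_classes), dtype=np.float32)
y_np[range(batch_size), labels] = -1.0 * num_classes
y = Tensor(y_np)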

Accelerators

tinygrad supports various accelerators out of the box, including CPU, GPU (OpenCL), C Code (Clang), LLVM, METAL, CUDA, Triton, and even PyTorch. These backends provide hardware acceleration for the computations performed by tinygrad, improving the performance and efficiency of deep learning models. Adding support for additional accelerators is also straightforward: an accelerator only needs to implement a small set of low-level operations, totaling 26 (optionally 27). For more information, consult the documentation on adding new accelerators.
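
In practice, the backend is usually chosen with an environment variable when running a script. The exact variable names depend on the tinygrad version, so the following is illustrative only:

# run the same computation on different backends (variable names may differ by version)
CPU=1 python3 -c "from tinygrad.tensor import Tensor; print(Tensor.rand(2, 2).numpy())"
GPU=1 python3 -c "from tinygrad.tensor import Tensor; print(Tensor.rand(2, 2).numpy())"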

Installation

To install tinygrad, the recommended method is to build it from source. Follow the steps below to install tinygrad on your system:

From Source

  1. Clone the tinygrad repository:
git clone https://github.com/geohot/tinygrad.git
cd tinygrad

  2. Install the package using pip:
python3 -m pip install -e . # or `py3 -m pip install -e .` if you are on windows

Remember to include the . at the end of the command.
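
As a quick sanity check that the install worked (not an official verification step, just a one-liner that exercises the import and a basic op):

python3 -c "from tinygrad.tensor import Tensor; print(Tensor.ones(2, 2).numpy())"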

Documentation

The tinygrad documentation, including a quick start guide, can be found in the docs/ directory. The documentation provides detailed information on the various aspects of using tinygrad, such as tensors, autograd, optimizers, and more.

Quick Example Comparing to PyTorch

Here is a quick example that demonstrates the usage of tinygrad and compares it to the equivalent code in PyTorch:

from tinygrad.tensor import Tensor

x = Tensor.eye(3, requires_grad=True)
y = Tensor([[2.0,0,-2.0]], requires_grad=True)
z = y.matmul(x).sum()
z.backward()

print(x.grad.numpy()) # dz/dx
print(y.grad.numpy()) # dz/dy

The equivalent code in PyTorch:

import torch

x = torch.eye(3, requires_grad=True)
y = torch.tensor([[2.0,0,-2.0]], requires_grad=True)
z = y.matmul(x).sum()
z.backward()

print(x.grad.numpy()) # dz/dx
print(y.grad.numpy()) # dz/dy

This example demonstrates the similarity between tinygrad and PyTorch in terms of tensor operations and autograd functionality.
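
For reference, both snippets should print the same gradients. Working them out by hand (my own derivation, not output copied from either framework): z = sum(y @ x), so dz/dx[i][j] = y[0][i], and dz/dy[0][i] is the i-th row sum of x, which is 1 for the identity matrix.

# expected gradients (hand-derived):
# x.grad -> [[ 2.  2.  2.]
#            [ 0.  0.  0.]
#            [-2. -2. -2.]]
# y.grad -> [[1. 1. 1.]]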

Contributing

tinygrad has received significant interest from the community, and contributions are welcome. If you're interested in contributing to the project, here are some guidelines to follow:

  • Bug fixes are highly appreciated and always welcome. If you encounter a bug, feel free to submit a fix.
  • When modifying the code, make sure you understand the changes you're making.
  • Code golf pull requests will be closed, but conceptual cleanups are encouraged.
  • If you're adding new features, please include appropriate tests to ensure their correctness.
  • Improving test coverage is highly beneficial. Reliable and non-brittle tests are encouraged.

For more detailed guidelines, refer to the CONTRIBUTING.md file in the repository.

Running Tests

To install the testing dependencies and run the test suite, either in full or for a specific test, use commands like the following:

python3 -m pip install -e '.[testing]'
python3 -m pytest
python3 -m pytest -v -k TestTrain
python3 ./test/models/test_train.py TestTrain.test_efficientnet

The first command installs the testing dependencies; the second runs the full suite, and the last two show how to target a specific test class or an individual test.
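
Tests can also be pointed at a particular backend by combining pytest with the backend-selection environment variables mentioned in the Accelerators section (again, variable names and test paths may differ between versions):

# example: run the op tests on the CPU backend (illustrative only)
CPU=1 python3 -m pytest test/test_ops.py -v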

Conclusion

tinygrad offers a simple yet powerful framework for deep learning. Its ease of use, support for various accelerators, and familiar PyTorch-like API make it an attractive option for developers and researchers. Whether you're building neural networks, running models like LLaMA and Stable Diffusion, or experimenting with new accelerators, tinygrad provides a solid foundation. With ongoing development and community contributions, tinygrad is poised to grow into a valuable tool in the machine learning ecosystem.

To learn more about tinygrad, visit the GitHub repository and the official website.
