Adedoyinsola Ogungbesan

ML unification is all about family

In a never-ending series of twists and turns, we haven't seen the last of Dom and his ever-growing family.

Machine learning, like the Fast and Furious franchise, keeps getting more fascinating by the day.

ML unification, according to OpenAI's ChatGPT, refers to the convergence and integration of various machine learning (ML) techniques, methodologies, and frameworks into a unified framework or ecosystem. It aims to create a standardized and cohesive approach to ML development, deployment, and management.

The need for ML unification arises from the growing complexity and diversity of ML models, algorithms, and tools. ML practitioners often work with different frameworks and libraries for specific tasks, such as deep learning, reinforcement learning, or natural language processing. This fragmented landscape can lead to inefficiencies, interoperability challenges, and duplication of efforts.

ML unification addresses these challenges by providing a unified platform combining multiple ML techniques, frameworks, and tools. It involves creating common standards, interfaces, and protocols that enable seamless integration and collaboration across different ML domains.

Dominic Toretto is to Fast and Furious as Ivy is to ML unification.

Ivy is both an ML transpiler and a framework, currently supporting JAX, TensorFlow, PyTorch and NumPy.

According to Kapa.ai, Ivy unifies all ML frameworks, enabling you not only to write code that can be used with any of these frameworks as the backend, but also to convert any function, model or library written in any of them to your preferred framework. This makes it broadly applicable to a wide range of applications, from cutting-edge deep learning to more conventional machine learning, general numerical computing, and data analytics.
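
To give a flavour of what "write once, run on any backend" looks like, here is a minimal sketch. It assumes Ivy's `ivy.set_backend`, `ivy.array`, `ivy.mean` and `ivy.std` functions, whose exact names may differ slightly between Ivy versions; Ivy's transpiler (`ivy.transpile`) for converting existing framework-specific code is not shown here.

```python
import ivy

# A framework-agnostic function, written once against the Ivy API.
def normalize(x):
    mean = ivy.mean(x)
    std = ivy.std(x)
    return (x - mean) / std

# The same code runs on different frameworks simply by switching the backend.
ivy.set_backend("torch")   # could also be "jax", "tensorflow", or "numpy"
print(normalize(ivy.array([1.0, 2.0, 3.0, 4.0])))

ivy.set_backend("numpy")
print(normalize(ivy.array([1.0, 2.0, 3.0, 4.0])))
```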

TWIST
Dan Fu of Stanford University spoke about FlashAttention at the Ivy paper reading group.

As stated in the FlashAttention paper, published on arXiv in 2022, FlashAttention is an IO-aware exact attention algorithm that uses tiling to reduce the number of memory reads/writes between GPU high-bandwidth memory (HBM) and GPU on-chip SRAM.
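
As a rough illustration of the tiling idea, here is a toy NumPy sketch of the online-softmax trick for a single query (my own simplified illustration, not the fused CUDA kernel from the paper): the keys and values are processed one block at a time, and only small running statistics are kept, so the full score matrix is never materialized.

```python
import numpy as np

def tiled_attention_row(q, K, V, block_size=64):
    """Attention output for a single query q, computed block by block.

    Only one block of scores is in memory at a time; a running max (m),
    running softmax denominator (l) and running weighted sum (acc) are
    updated as each key/value block is processed.
    """
    d = q.shape[0]
    m = -np.inf
    l = 0.0
    acc = np.zeros(V.shape[1])
    for start in range(0, K.shape[0], block_size):
        Kb, Vb = K[start:start + block_size], V[start:start + block_size]
        s = Kb @ q / np.sqrt(d)        # scores for this block only
        m_new = max(m, s.max())
        scale = np.exp(m - m_new)      # rescale previously accumulated results
        p = np.exp(s - m_new)
        l = l * scale + p.sum()
        acc = acc * scale + p @ Vb
        m = m_new
    return acc / l
```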

Basically, transformer models are essential building blocks for natural language processing and image classification. They have grown larger and deeper, but equipping them with longer context remains difficult, because the self-attention module at their core has time and memory complexity quadratic in the sequence length. In other words, it becomes slow and memory-inefficient as sequences grow.
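
To make the quadratic cost concrete, here is a minimal NumPy sketch of standard scaled dot-product attention (my own illustration, not code from the paper): it materializes the full N × N score matrix, so memory and compute grow quadratically with the sequence length N.

```python
import numpy as np

def naive_attention(Q, K, V):
    """Standard scaled dot-product attention.

    Q, K, V: (N, d) arrays. The score matrix S is (N, N), so memory
    and compute scale quadratically with the sequence length N.
    """
    d = Q.shape[-1]
    S = Q @ K.T / np.sqrt(d)               # (N, N) score matrix -- the quadratic term
    P = np.exp(S - S.max(axis=-1, keepdims=True))
    P /= P.sum(axis=-1, keepdims=True)      # row-wise softmax
    return P @ V                            # (N, d) output

# Doubling N quadruples the size of S: N=1024 -> ~1M scores, N=2048 -> ~4M.
```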

FlashAttention trains Transformers faster than existing baselines: 15% end-to-end wall-clock speedup on BERT-large (seq. length 512) compared to the MLPerf 1.1 training speed record, 3× speedup on GPT-2 (seq. length 1K), and 2.4× speedup on long-range arena (seq. length 1K-4K).

FlashAttention and block-sparse FlashAttention enable longer context in Transformers, yielding higher-quality models (0.7 better perplexity on GPT-2 and 6.4 points of lift on long-document classification) and entirely new capabilities: the first Transformers to achieve better-than-chance performance on the Path-X challenge (seq. length 16K, 61.4% accuracy) and Path-256 (seq. length 64K, 63.1% accuracy).

EXIT
With more and more major advances being reported every day in the vast world of machine learning, accelerating your AI with one line of code is here to stay.

Additional Reading
unify.ai

Reference(s)
Dao, T., Fu, D. Y., Ermon, S., Rudra, A., & Ré, C. (2022). FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness. arXiv:2205.14135.
