
Julien Simon

Originally published at Medium

Amazon EC2 c5n and p3dn: more network bandwidth for Deep Learning


Training a Deep Learning model isn’t only a compute-intensive task: it also requires a lot of I/O. Let’s see why.

Loading training and inference data

Large datasets are usually stored on network storage, such as Amazon S3. Thus, during the training process, data needs to be loaded from network storage to instance RAM. This data loading process needs to happen as fast and as steadily as possible to keep CPUs and GPUs busy. As they are blazingly fast, any delay or unexpected latency in loading data is likely to stall them and to waste valuable training time.
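As an illustration, here is a minimal input pipeline sketch (my addition, assuming a recent version of TensorFlow with S3 filesystem support; the bucket and file names are placeholders): it reads TFRecord shards in parallel and prefetches batches so that CPUs and GPUs are not left waiting on I/O.

```python
import tensorflow as tf

# Placeholder bucket and prefix; reading s3:// paths requires TensorFlow
# built with S3 filesystem support (or the tensorflow-io plugin).
filenames = tf.io.gfile.glob("s3://my-bucket/train/shard-*.tfrecord")

dataset = (
    tf.data.TFRecordDataset(filenames, num_parallel_reads=8)  # read shards concurrently
    .shuffle(buffer_size=10000)   # randomize sample order
    .batch(256)                   # build training batches
    .prefetch(2)                  # overlap data loading with compute
)

for batch in dataset.take(1):
    print(batch.shape)            # (256,) serialized tf.Example records
```

The exact knobs (number of parallel reads, shuffle buffer, prefetch depth) depend on the instance and the dataset, but the principle is the same: keep the next batches ready before the accelerator asks for them.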

I/O speed and latency are also critical to inference performance. Although many applications predict one sample at a time, overall throughput is likely to suffer if I/O isn’t consistently fast.

Exchanging information during distributed training

The purpose of training a Deep Learning model is to gradually discover the optimal set of weights (aka parameters) for that model, i.e. the set of weights that minimizes a specific metric (usually the validation error).

This involves computing gradients of a loss function, which measures the difference between ground truth and predictions, and applying an optimization algorithm (SGD or one of its many variants) to update the weights accordingly. When training on a distributed cluster of nodes, each node receives a batch of data, forwards it through the model and computes the gradients for that batch. Each node then pushes its gradients to a parameter server, where the results from all nodes are consolidated. Before processing a new batch, a node first pulls the latest consolidated values, which guarantees that all nodes share the same state.

This is a general description and there are nuances to this behavior. If you’re curious about the details, you can read about how distributed training works in Apache MXNet and TensorFlow.
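To make the push/pull pattern concrete, here is a minimal sketch (my addition, not taken from either framework’s documentation) using Apache MXNet’s key-value store, which plays the parameter-server role described above:

```python
import mxnet as mx

# "local" runs the key-value store in-process; a real cluster would use
# "dist_sync" or "dist_async" together with MXNet's distributed launcher.
kv = mx.kv.create("local")

shape = (1000,)                     # hypothetical gradient/parameter shape
kv.init(0, mx.nd.zeros(shape))      # key 0 holds one parameter array

grad = mx.nd.ones(shape)            # stand-in for the gradients of one batch
kv.push(0, grad)                    # push local gradients to the store
weights = mx.nd.zeros(shape)
kv.pull(0, out=weights)             # pull back the consolidated values
```

In a real training job, the framework issues these calls for you behind the scenes; the point is simply that every batch triggers a push and a pull of gradient-sized arrays across the network.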

Gradients for large models can be huge: ResNet-50 has roughly 25.6 million parameters, so a full set of 32-bit gradients weighs about 97MB. That’s a lot of data that each node has to push and pull again and again. This puts a lot of strain on network bandwidth and can become a serious performance bottleneck. A number of techniques have been designed to compress and quantize gradients, and they help reduce the amount of data that needs to be exchanged [1, 2]. Still, network performance remains a very important factor in speeding up large distributed training jobs.
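For instance, MXNet exposes the 2-bit quantization scheme described in [2] as a single KVStore setting. The snippet below is a sketch (my addition); it assumes the job is started with MXNet’s distributed launcher so that workers and parameter servers actually exist.

```python
import mxnet as mx

# Assumes the job was started with MXNet's distributed launcher, so that
# workers and parameter servers are running.
kv = mx.kv.create("dist_sync")

# 2-bit quantization: gradient values above the threshold are sent as
# +threshold, values below -threshold as -threshold, everything else as 0.
# The quantization error is kept locally and added to the next gradient.
kv.set_gradient_compression({"type": "2bit", "threshold": 0.5})
```

Most of the bandwidth savings come from sending 2 bits instead of 32 per value, while the locally accumulated residual keeps the training signal from being lost.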

c5n and p3dn: 100 Gbit networking on Amazon EC2

The newly announced c5n and p3dn instances support 100 Gbit networking, bringing low latency and high throughput to demanding applications like HPC and Deep Learning. Give them a try!
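If you want to experiment, here is a hypothetical launch sketch with boto3 (my addition; the AMI ID and key pair name are placeholders to replace with your own):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: use a Deep Learning AMI ID
    InstanceType="c5n.18xlarge",       # largest c5n size, 100 Gbit networking
    KeyName="my-key-pair",             # placeholder key pair name
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```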

Happy to answer any questions! Please follow me on Twitter for similar news and content.

[1] “Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training”, Yujun Lin, Song Han, Huizi Mao, Yu Wang, William J. Dally, 2017

[2] “Gradient Compression”, Apache MXNet.
