<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Aaron Langford</title>
    <description>The latest articles on DEV Community by Aaron Langford (@aaronlangford31).</description>
    <link>https://dev.to/aaronlangford31</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F125584%2Fc7a68b79-c586-4f0a-86d9-4f5831d718c1.jpeg</url>
      <title>DEV Community: Aaron Langford</title>
      <link>https://dev.to/aaronlangford31</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aaronlangford31"/>
    <language>en</language>
    <item>
      <title>Lessons Learned from Using Torch Inductor For Inference</title>
      <dc:creator>Aaron Langford</dc:creator>
      <pubDate>Sat, 16 Nov 2024 03:45:53 +0000</pubDate>
      <link>https://dev.to/aaronlangford31/lessons-learned-from-using-torch-inductor-for-inference-1ma7</link>
      <guid>https://dev.to/aaronlangford31/lessons-learned-from-using-torch-inductor-for-inference-1ma7</guid>
      <description>&lt;p&gt;The purpose of this blog post is to give an intro to compiling models using Torch Inductor along with some helpful advice to avoid pitfalls. I began using Torch Inductor this year as a part of a dive into optimization of PyTorch models for inference. My team needed a way to cut latency down for a proprietary diffusion architecture. This is when I stumbled on Torch's Inductor tool kit. Since then I've been able to help 3 different teams at Amazon speed up their inference using Torch Inductor.&lt;/p&gt;

&lt;h2&gt;
  
  
  Just In Time compilation with Inductor
&lt;/h2&gt;

&lt;p&gt;I first played with Inductor via the "Just In Time" method (not to be confused with the JIT tracing feature available in Torch). Here, you wrap a model with &lt;code&gt;torch.compile()&lt;/code&gt; and let the compiler find optimizations as inference requests progress. Torch's documentation currently steers users heavily toward this approach to compilation.&lt;/p&gt;
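&lt;p&gt;As a minimal sketch of the API (assuming a recent Torch 2.x; the default backend is Inductor, but the debugging &lt;code&gt;"eager"&lt;/code&gt; backend is used here so the snippet runs without a GPU or a C++ toolchain):&lt;/p&gt;

```python
import torch
import torch.nn as nn

# A tiny stand-in model; any nn.Module is wrapped the same way.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

# The first call triggers tracing and compilation; later calls with the
# same shapes reuse the optimized code. Dropping backend= selects Inductor.
compiled = torch.compile(model, backend="eager")

x = torch.randn(2, 8)
out = compiled(x)
assert out.shape == (2, 4)
assert torch.allclose(out, model(x))
```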

&lt;h3&gt;
  
  
  Advantages
&lt;/h3&gt;

&lt;p&gt;The main advantage of &lt;code&gt;torch.compile()&lt;/code&gt; is its ease of use. I recommend that anyone just getting started use this approach to explore how much Torch can accelerate their models. Here are some of the big benefits I found:&lt;/p&gt;

&lt;h4&gt;
  
  
  1) More Forgiving Compilation with Graph Breaks
&lt;/h4&gt;

&lt;p&gt;Compilation through this API is more forgiving than other APIs because of a feature called "graph breaks". When Torch compiles code, it first creates a graph representation of the model. It's helpful to picture this as a sequence of tasks chained together. A compiler scans this chain of tasks and produces more optimal versions of each link in the chain (or node in the graph). For some of these links, the compiler just can't figure out how to compile successfully. Instead of erroring out, it leaves the original Python in place for that node and optimizes every node in the graph that it can manage.&lt;/p&gt;

&lt;h4&gt;
  
  
  2) Dynamic Shapes
&lt;/h4&gt;

&lt;p&gt;Additionally, if inputs come in different shapes (like image size or the number of frames in a video), Torch can recompile the network on the fly to handle those shapes.&lt;/p&gt;
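&lt;p&gt;A small sketch of the idea (again using the &lt;code&gt;"eager"&lt;/code&gt; backend so it runs anywhere): &lt;code&gt;dynamic=True&lt;/code&gt; asks the compiler to treat varying dimensions symbolically rather than recompiling per shape.&lt;/p&gt;

```python
import torch
import torch.nn as nn

layer = nn.Linear(8, 8)
compiled = torch.compile(layer, dynamic=True, backend="eager")

# Different batch sizes flow through the same compiled wrapper.
for batch in (1, 3, 7):
    x = torch.randn(batch, 8)
    assert compiled(x).shape == (batch, 8)
```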

&lt;h4&gt;
  
  
  3) Decoupled From Weights
&lt;/h4&gt;

&lt;p&gt;This is not applicable in all cases, but a popular feature in AI services is letting customers swap in their own weights for a given architecture, or change the default weights through fine-tuning offered by the model provider. Swapping weights is easy with Just In Time compilation because compilation happens right before an inference request is served. Ahead-of-time approaches make this harder because they typically embed the weights in the executable, requiring a recompilation of the same model for each set of weights.&lt;/p&gt;

&lt;h4&gt;
  
  
  4) Multiple Backends
&lt;/h4&gt;

&lt;p&gt;With the &lt;code&gt;torch.compile()&lt;/code&gt; interface, there are multiple backends to choose from. While Inductor is the default, NVIDIA's TensorRT can also be used as a backend. This allows quick evaluation of multiple optimization platforms for a Torch model.&lt;/p&gt;

&lt;p&gt;Hopefully, in the future this will be a path to easily compile for more hardware platforms (like Amazon's Neuron platform or Google's TPU platform).&lt;/p&gt;

&lt;h4&gt;
  
  
  5) Flexible Targets
&lt;/h4&gt;

&lt;p&gt;A model with dozens of layers can be compiled, as can a single attention layer. Compilation can therefore be applied to a model in many ways, even if that model does more complex things like caching, dynamic runtime pruning, or cross-device communication.&lt;/p&gt;

&lt;h3&gt;
  
  
  Disadvantages
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1) No way to "save" an optimized model
&lt;/h4&gt;

&lt;p&gt;The disadvantage of this approach is that all the optimization decisions are gone once the process ends, so the optimal model must be rediscovered each time an inference server starts.&lt;/p&gt;

&lt;h4&gt;
  
  
  2) Slow start for Inference
&lt;/h4&gt;

&lt;p&gt;Inference servers using &lt;code&gt;torch.compile()&lt;/code&gt; will serve their first requests slowly unless warm-up requests are run before the server takes real traffic.&lt;/p&gt;
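&lt;p&gt;A common mitigation is a warm-up pass at startup, along these lines (a sketch; the &lt;code&gt;"eager"&lt;/code&gt; backend stands in for Inductor so it runs anywhere):&lt;/p&gt;

```python
import torch
import torch.nn as nn

model = torch.compile(nn.Linear(8, 8), backend="eager")

# Push representative shapes through the model before accepting traffic,
# so compilation cost is paid at startup instead of on a user request.
with torch.no_grad():
    for batch in (1, 4, 8):
        model(torch.randn(batch, 8))
```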

&lt;h4&gt;
  
  
  3) Recompilations Introduce Latency Spikes
&lt;/h4&gt;

&lt;p&gt;Any recompilation that Torch deems necessary results in higher latency for some requests. Torch should only need to recompile when input shapes or control flow in the forward pass change.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ahead of Time Compilation with AOTInductor
&lt;/h2&gt;

&lt;p&gt;I was motivated enough by these disadvantages to look for an "Ahead of Time" version of &lt;code&gt;torch.compile()&lt;/code&gt;. Sure enough, Torch has one! A module called &lt;code&gt;AOTInductor&lt;/code&gt;, used with the &lt;code&gt;torch.export&lt;/code&gt; package, can do much of what the Just In Time approach does.&lt;/p&gt;

&lt;h3&gt;
  
  
  How AOTInductor works
&lt;/h3&gt;

&lt;p&gt;AOTInductor takes any &lt;code&gt;torch.nn.Module&lt;/code&gt; (something with a &lt;code&gt;forward&lt;/code&gt; function) or a plain Python function. First, the arbitrary Python code is traced with Torch's Dynamo module, and the execution is translated into a Torch FX graph: an intermediate representation of the code to be optimized.&lt;/p&gt;

&lt;p&gt;The Torch FX graph version of the model is then handed off to Inductor, which generates an optimized version of the model composed of Triton kernels and a C++ orchestrator. The kernels are compiled to &lt;code&gt;.cubin&lt;/code&gt; files and the C++ to a &lt;code&gt;.so&lt;/code&gt; file. Torch provides both C++ and Python wrappers that can be called the same way the original function was.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benefits
&lt;/h3&gt;

&lt;p&gt;In my experience with AOTInductor, I was able to cut 30-40% of the latency of the eager PyTorch forward pass for UNet- and UViT-based diffusion models.&lt;/p&gt;

&lt;p&gt;Additionally, the AOTInductor compiler enables an automated build pipeline, so scientists can stay in PyTorch, NumPy, etc., and have an automated step produce an optimized, inference-ready version of their model.&lt;/p&gt;

&lt;h3&gt;
  
  
  Best Practices
&lt;/h3&gt;

&lt;p&gt;I put together a list of best practices for writing models that the AOTInductor compiler can handle.&lt;/p&gt;

&lt;h4&gt;
  
  
  1) Only use torch.Tensor types as input to the model.
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import torch
import torch.nn as nn

class IncorrectModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(10, 10)

    def forward(self, input_tensor, config_dict, mode="default"):
        # Check the config_dict and mode string to determine model behavior
        if mode == "special" and config_dict.get("use_special", False):
            # Apply some custom logic based on the non-tensor config
            x = input_tensor * 2
        else:
            x = input_tensor

        return self.layer(x)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Prefer to keep non-tensor inputs as properties of the module. If the model's forward pass requires something that's fundamentally not a tensor, lift it out of the forward pass: make a wrapper and compile only the logic that does not depend on that kind of parameter. If all else fails, try boxing parameters in a tensor. But beware: in my opinion this is a bit of an anti-pattern, and a smell of an overcomplicated forward pass.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class CorrectModel(nn.Module):
    def __init__(self, use_special=False, mode="default"):
        super().__init__()
        self.layer = nn.Linear(10, 10)
        self.use_special = use_special
        self.mode = mode

    def forward(self, input_tensor):
        # Use model properties instead of passing non-tensor inputs
        if self.mode == "special" and self.use_special:
            x = input_tensor * 2
        else:
            x = input_tensor

        return self.layer(x)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  2) Avoid using optional arguments, kwargs, and argument expansions anywhere in the forward pass.
&lt;/h4&gt;

&lt;p&gt;Prefer explicit arguments everywhere in forward passes: either the tensor is there or it isn't.&lt;/p&gt;

&lt;h4&gt;
  
  
  3) Avoid problematic Python code in the forward pass
&lt;/h4&gt;

&lt;p&gt;Calls to things like &lt;code&gt;inspect.isfunction&lt;/code&gt; will break the compiler.&lt;/p&gt;

&lt;p&gt;Calls to regex libraries also tend to break the compiler. This tends to come up when people write code that checks whether certain versions of packages (like &lt;code&gt;xformers&lt;/code&gt;) are installed before using one implementation of a layer versus another.&lt;/p&gt;

&lt;p&gt;Instead, take care of selecting layers in initialization logic.&lt;/p&gt;

&lt;h4&gt;
  
  
  4) Forward pass should be a pure function.
&lt;/h4&gt;

&lt;p&gt;A model that will be compiled should be stateless. This means that &lt;code&gt;forward(x)&lt;/code&gt; should produce the same output no matter how many times it is called, or what else is called between calls to &lt;code&gt;forward(x)&lt;/code&gt;, as long as &lt;code&gt;x&lt;/code&gt; is the same. State can be managed outside of the compiled model.&lt;/p&gt;

&lt;p&gt;When the forward call changes the state of the model, this causes a few problems. For example, writes to &lt;code&gt;self&lt;/code&gt; are completely disallowed by the Inductor compiler.&lt;/p&gt;

&lt;p&gt;Another common pattern I've seen is caching between forward passes. This is common in diffusion models, where one image or video requires many steps and each step requires a forward pass, so state is frequently shared from step to step. Here's an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class IncorrectCachingModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Linear(10, 10)
        self.layer2 = nn.Linear(10, 10)

        # Register a buffer for caching, marked as non-persistent
        self.register_buffer("cached_output", None, persistent=False)

    def forward(self, x):
        # Check if there's a cached output to reuse
        if self.cached_output is not None:
            print("Using cached output")
            return self.cached_output

        # Perform the full forward pass and cache the result
        x = self.layer1(x)
        x = torch.relu(x)
        x = self.layer2(x)

        # Mutating state between calls is exactly what the compiler disallows
        self.cached_output = x
        return x
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Instead, lift the tensors that need to be cached out of the forward pass, or make them a parameter of the class if they can be computed ahead of time.&lt;/p&gt;

&lt;p&gt;Sometimes, the caching is of data that can't be known until after a forward pass takes place. For example, the DeepCache paper suggests that for UNet based diffusion, the middle blocks of the network may be skipped. In the place of the output of those middle blocks, the output of the middle blocks from a prior step is retrieved from a cache.&lt;/p&gt;

&lt;p&gt;In this case, I'd recommend making the part of the network that needs to be cached an output of the forward pass, perhaps concatenated with the actual output. The downside is that making the shapes of the cached data match the output of the forward pass can be awkward.&lt;/p&gt;

&lt;p&gt;Another approach to consider could be compiling individual blocks of the model, and coordinating the caching in Python (or whatever runtime is used to execute the network).&lt;/p&gt;
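&lt;p&gt;A rough sketch of that last approach (hypothetical names, plain eager PyTorch): the compiled target stays a pure function of its tensor inputs, while a DeepCache-style refresh schedule lives in ordinary Python around it.&lt;/p&gt;

```python
import torch
import torch.nn as nn

# The block that would be compiled stays stateless.
block = nn.Sequential(nn.Linear(10, 10), nn.ReLU())

cache = {}

def step(x, step_idx, refresh_every=2):
    # Recompute the block only on refresh steps; otherwise reuse the
    # activation cached by the surrounding Python driver.
    if step_idx % refresh_every == 0 or "mid" not in cache:
        cache["mid"] = block(x)
    return cache["mid"] + x

x = torch.randn(2, 10)
a = step(x, 0)  # computes the block and fills the cache
b = step(x, 1)  # reuses the cached activation
assert torch.equal(a, b)
```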

&lt;h4&gt;
  
  
  5) Minimize complexity in the forward pass.
&lt;/h4&gt;

&lt;p&gt;Most forward-pass complexity I've seen stems from branching. The compiler can handle branching, and is getting better at it in recent versions of Torch, but it will usually just try to pick the right branch at compile time.&lt;/p&gt;

&lt;p&gt;Most of this trouble comes from engineers trying to create highly configurable models with loads of flags. Configurability is a desirable property, especially in a rapidly evolving model: scientists want to be able to add a new optional architecture feature and test it in combination with previous ones.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import torch
import torch.nn as nn

class ComplexConfigurableModel(nn.Module):
    def __init__(self, use_special_layer=False, apply_dropout=False, activation="relu"):
        super().__init__()
        self.use_special_layer = use_special_layer
        self.apply_dropout = apply_dropout
        self.activation = activation

        # Define layers, some of which are optional
        self.layer1 = nn.Linear(10, 20)
        self.special_layer = nn.Linear(20, 20) if use_special_layer else None
        self.dropout = nn.Dropout(p=0.5) if apply_dropout else None
        self.layer2 = nn.Linear(20, 10)

    def forward(self, x):
        # Forward pass with branching logic based on configuration flags
        x = self.layer1(x)

        # Apply special layer if specified
        if self.use_special_layer and self.special_layer is not None:
            x = self.special_layer(x)

        # Apply specified activation function
        if self.activation == "relu":
            x = torch.relu(x)
        elif self.activation == "sigmoid":
            x = torch.sigmoid(x)
        elif self.activation == "tanh":
            x = torch.tanh(x)

        # Apply dropout if specified
        if self.apply_dropout and self.dropout is not None:
            x = self.dropout(x)

        x = self.layer2(x)
        return x
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Instead of forward-pass code with tons of branches, aim to push the complexity inherent in rapidly evolving models into the initialization of the model. The builder, composite, and strategy patterns are classic object-oriented patterns for addressing this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class SimplifiedModel(nn.Module):
    def __init__(self, activation="relu"):
        super().__init__()
        self.layer1 = nn.Linear(10, 20)
        self.layer2 = nn.Linear(20, 10)

        # Set activation function based on input; done at initialization
        if activation == "relu":
            self.activation = torch.relu
        elif activation == "sigmoid":
            self.activation = torch.sigmoid
        elif activation == "tanh":
            self.activation = torch.tanh
        else:
            raise ValueError("Unsupported activation")

    def forward(self, x):
        # Forward pass without branching
        x = self.layer1(x)
        x = self.activation(x)
        x = self.layer2(x)
        return x
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, it may be better to maintain experimental classes for networks that are actively evolving, and then, once an architecture configuration is settled on, create a class that represents a stripped-down version of the model. This makes debugging issues with compiled models simpler.&lt;/p&gt;

&lt;h4&gt;
  
  
  6) Use a package distribution service to distribute compiled artifacts
&lt;/h4&gt;

&lt;p&gt;Depending on your inference-time environment, the choice may differ. I opted to wrap my &lt;code&gt;.so&lt;/code&gt; and &lt;code&gt;.cubin&lt;/code&gt; files in a Python wheel and publish the wheel to AWS CodeArtifact. Because some of these artifacts can be several gigabytes, a limit increase with AWS CodeArtifact may be necessary.&lt;/p&gt;

&lt;p&gt;This made the model extremely easy to distribute because the entire package was versioned. We produced new versions nearly weekly as changes to weights and architecture rolled in, and a package that could be pulled in via a bump to someone's requirements.txt made version upgrades painless.&lt;/p&gt;

&lt;p&gt;Finally, these artifacts have strict dependency-matching requirements. Python wheels gave us an easy way to declare that a specific version of Torch had to be present in the inference runtime environment.&lt;/p&gt;

&lt;h4&gt;
  
  
  7) Use an absolute path when writing out compiled artifacts
&lt;/h4&gt;

&lt;p&gt;If you use a relative path, Inductor will dump &lt;code&gt;.cubin&lt;/code&gt; files into a directory under &lt;code&gt;/tmp&lt;/code&gt;. The generated C++ refers to the kernels in those &lt;code&gt;.cubin&lt;/code&gt; files by absolute path, and &lt;code&gt;/tmp&lt;/code&gt; is not a great place to keep executables that need to stick around.&lt;/p&gt;

&lt;p&gt;I recommend choosing a place under &lt;code&gt;/opt/&lt;/code&gt; to write these files. This does mean that at run time the files need to be at the same path they were written to at compilation. This is another reason a wheel is a good distribution choice: it allowed us to ship logic for setting up symlinks along with the compiled artifacts.&lt;/p&gt;

&lt;h4&gt;
  
  
  8) Test for numerical differences
&lt;/h4&gt;

&lt;p&gt;The compiled version of a model is not guaranteed to produce bit-exact results. And even when the compiler can produce a bit-exact version, it's likely that some of the PyTorch model had to change to become compilable. If that model has already been trained, the risk of mangling some part of the architecture is high, which will translate into bad output from the compiled model.&lt;/p&gt;

&lt;p&gt;For example, I had to compile a model with a NormLayer in it, whose weights were being cast to bfloat16 before inference. NormLayer is not stable at this precision, which became more obvious when I compiled it: the compiled version produced outputs that differed from the eager version by as much as 1e-1.&lt;/p&gt;

&lt;p&gt;So test, test, and test again. The tolerable difference varies by model, but starting with the defaults for &lt;code&gt;torch.allclose(...)&lt;/code&gt; is a good idea. I've seen some models do OK with as much as 1e-3 of difference, but this is a matter to be resolved via experiments on your specific model!&lt;/p&gt;
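&lt;p&gt;A tiny illustration of the kind of check I mean, using a LayerNorm run in bfloat16 against an fp32 reference (exact numbers will vary by model and hardware):&lt;/p&gt;

```python
import torch

torch.manual_seed(0)
x = torch.randn(4, 256)
norm = torch.nn.LayerNorm(256)

ref = norm(x)  # fp32 reference, standing in for the eager model
low = norm.to(torch.bfloat16)(x.to(torch.bfloat16)).float()

max_diff = (ref - low).abs().max().item()
print(f"max abs difference: {max_diff:.2e}")

# Default allclose tolerances flag the precision loss...
assert not torch.allclose(ref, low)
# ...while a per-model tolerance chosen by experiment can still pass.
assert torch.allclose(ref, low, atol=1e-1, rtol=1e-1)
```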

&lt;h4&gt;
  
  
  9) Turn on trace logs when compiling
&lt;/h4&gt;

&lt;p&gt;Use an environment variable to enable trace logging during compilation: &lt;code&gt;TORCH_TRACE="/tmp/tracedir"&lt;/code&gt;. The folks at Meta have this option on by default when they compile their models, and it's the best way to understand exactly what decisions the compiler made. The trace logs include copies of the generated Triton kernels.&lt;/p&gt;

&lt;h4&gt;
  
  
  10) Use an IDE with a Debugger when compiling
&lt;/h4&gt;

&lt;p&gt;Compilation, especially for more custom models, is bound to fail at some point. The errors printed can be extremely cryptic, making it hard to trace a problem back to the Python it originated from. IDE debuggers (like VS Code's) make it easy to walk backwards through the stack trace and piece together which part of the model the issue comes from.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>What's the Fastest Way to Convert a Tensor to an Image File?</title>
      <dc:creator>Aaron Langford</dc:creator>
      <pubDate>Wed, 13 Nov 2024 17:45:00 +0000</pubDate>
      <link>https://dev.to/aaronlangford31/whats-the-fastest-way-to-convert-a-tensor-to-an-image-file-138j</link>
      <guid>https://dev.to/aaronlangford31/whats-the-fastest-way-to-convert-a-tensor-to-an-image-file-138j</guid>
      <description>&lt;p&gt;When serving generative image models in a production environment, a tensor representation of an image needs to be converted to a common image format like PNG, JPEG, or WEBP. However this conversion can be costly and those interested in super speedy inference need to know what the fastest way to get a file back to their users.&lt;/p&gt;

&lt;p&gt;When making choices about how to deliver these image files, there will be a trade off between inference speed, image quality, and the number of bytes in the file.&lt;/p&gt;

&lt;p&gt;In this post, I'll lay out some options to choose from and share some benchmarks. All of my benchmarks are taken on a c6a.4xlarge EC2 instance, using a single 702x1248 image:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F15dgl6zj8qh9eeou9bvv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F15dgl6zj8qh9eeou9bvv.png" alt="An AI generated image of the next big thing in EDM" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;See the appendix for a list of the package versions used in my benchmarks.&lt;/p&gt;

&lt;p&gt;Library Options:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Python Image Library&lt;/li&gt;
&lt;li&gt;Torch Vision&lt;/li&gt;
&lt;li&gt;OpenCV&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Formats:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;PNG&lt;/li&gt;
&lt;li&gt;WEBP&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I won't consider lossy formats like JPEG because I think diffusion model services shouldn't risk any quality degradation from lossy image compression.&lt;/p&gt;

&lt;h2&gt;
  
  
  PNG Benchmarks
&lt;/h2&gt;

&lt;p&gt;Here's a sample of the code I used for the Python Imaging Library (PIL). I assume the reader can figure out how to modify it to produce all of the results in the table that follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import io
import time
from PIL import Image
import torchvision.transforms.functional as F

path = "/home/user/00000.png"
pil_image = Image.open(path)
pil_image = pil_image.convert("RGB")

image_tensor = F.to_tensor(pil_image)

def pil_png(out):
    pil_image: Image = F.to_pil_image(image_tensor)
    pil_image.save(out, format="PNG", compress_level=4)

t0 = time.time()
for i in range(100):
    out = io.BytesIO()
    pil_png(out)

print(f"Bytes in file:{len(out.getvalue())}")
print(f"Average time: {(time.time() - t0) / 100}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Results for Python Image Library:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Options&lt;/th&gt;
&lt;th&gt;Time (s)&lt;/th&gt;
&lt;th&gt;File Size (bytes)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;optimize=True&lt;/td&gt;
&lt;td&gt;2.153&lt;/td&gt;
&lt;td&gt;1066223&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;optimize=False&lt;/td&gt;
&lt;td&gt;0.368&lt;/td&gt;
&lt;td&gt;1098439&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;compress_level=0&lt;/td&gt;
&lt;td&gt;0.057&lt;/td&gt;
&lt;td&gt;2766507&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;compress_level=1&lt;/td&gt;
&lt;td&gt;0.085&lt;/td&gt;
&lt;td&gt;1273428&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;compress_level=4&lt;/td&gt;
&lt;td&gt;0.15&lt;/td&gt;
&lt;td&gt;1114614&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;compress_level=9&lt;/td&gt;
&lt;td&gt;2.13&lt;/td&gt;
&lt;td&gt;1114614&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;I also tested Torch Vision, which has a PNG encoder. Code for Torch Vision:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import io
import time
from PIL import Image
import torch
import torchvision.transforms.functional as F
import torchvision.io

path = "/home/user/00000.png"
pil_image = Image.open(path)
pil_image = pil_image.convert("RGB")

image_tensor = F.to_tensor(pil_image) * 255.0
image_tensor = image_tensor.to(torch.uint8)

def tv_png():
    return torchvision.io.encode_png(image_tensor, compression_level = 1)

t0 = time.time()
for i in range(100):
    val = tv_png()

print(f"Bytes in file:{len(val)}")
print(f"Average time: {(time.time() - t0) / 100}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Results for Torch Vision:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Options&lt;/th&gt;
&lt;th&gt;Time (s)&lt;/th&gt;
&lt;th&gt;File Size (bytes)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;compression_level=0&lt;/td&gt;
&lt;td&gt;0.036&lt;/td&gt;
&lt;td&gt;2770032&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;compression_level=1&lt;/td&gt;
&lt;td&gt;0.071&lt;/td&gt;
&lt;td&gt;1272572&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;compression_level=4&lt;/td&gt;
&lt;td&gt;0.117&lt;/td&gt;
&lt;td&gt;1116922&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;compression_level=9&lt;/td&gt;
&lt;td&gt;1.535&lt;/td&gt;
&lt;td&gt;1069100&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;My 3rd library that I considered was OpenCV. Here's the code I used to test that option:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;path = "/home/aalangfo/00000.png"
pil_image = Image.open(path)
pil_image = pil_image.convert("RGB")

image_tensor = F.to_tensor(pil_image) * 255.0
image_tensor = image_tensor.to(torch.uint8)
image_tensor = image_tensor.numpy()
image_tensor = image_tensor.transpose((1, 2, 0))

def cv_png():
    result, buffer = cv2.imencode('.png', image_tensor, [cv2.IMWRITE_PNG_COMPRESSION, 0])
    return buffer

t0 = time.time()
for i in range(100):
    out = cv_png()

print(f"Bytes in file:{len(out.tobytes())}")
print(f"Average time: {(time.time() - t0) / 100}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Results for OpenCV:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Options&lt;/th&gt;
&lt;th&gt;Time (s)&lt;/th&gt;
&lt;th&gt;File Size (bytes)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;cv2.IMWRITE_PNG_COMPRESSION, 0&lt;/td&gt;
&lt;td&gt;0.035&lt;/td&gt;
&lt;td&gt;2770047&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;cv2.IMWRITE_PNG_COMPRESSION, 1&lt;/td&gt;
&lt;td&gt;0.063&lt;/td&gt;
&lt;td&gt;1272600&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;cv2.IMWRITE_PNG_COMPRESSION, 4&lt;/td&gt;
&lt;td&gt;0.093&lt;/td&gt;
&lt;td&gt;1186740&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;cv2.IMWRITE_PNG_COMPRESSION, 9&lt;/td&gt;
&lt;td&gt;2.088&lt;/td&gt;
&lt;td&gt;1107430&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  WebP Benchmarks
&lt;/h2&gt;

&lt;p&gt;WebP is an image format developed by Google that supports both lossless and lossy compression. It was designed to offer more efficient compression than JPEG and PNG. It also supports animation like GIF does, but improves on GIF's compression.&lt;/p&gt;

&lt;p&gt;The downside of WebP is that it may not be supported by older browsers and devices. There are also a few more levers in the WebP encoding spec, so it may take a bit more time to find the right settings for you. Explore the list of those options &lt;a href="https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html#webp" rel="noopener noreferrer"&gt;here&lt;/a&gt; before reviewing the results.&lt;/p&gt;

&lt;p&gt;I will stick with only lossless encoding for WebP for the same reasons mentioned above.&lt;/p&gt;

&lt;p&gt;Here's my code for benchmarking WebP with the Python Imaging Library (PIL):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import io
import time
from PIL import Image
import torchvision.transforms.functional as F

path = "/home/user/00000.png"
pil_image = Image.open(path)
pil_image = pil_image.convert("RGB")

image_tensor = F.to_tensor(pil_image) * 255.0
image_tensor = image_tensor.to(torch.uint8)

def pil_webp(out):
    pil_image: Image = F.to_pil_image(image_tensor)
    pil_image.save(out, format="WebP", lossless=True, quality=0, method=0)

t0 = time.time()
for i in range(100):
    out = io.BytesIO()
    pil_webp(out)

print(f"Bytes in file:{len(out.getvalue())}")
print(f"Average time: {(time.time() - t0) / 100}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Results for PIL:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Options&lt;/th&gt;
&lt;th&gt;Time (s)&lt;/th&gt;
&lt;th&gt;File Size (bytes)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;quality=0, method=0&lt;/td&gt;
&lt;td&gt;0.047&lt;/td&gt;
&lt;td&gt;1046120&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;quality=0, method=3&lt;/td&gt;
&lt;td&gt;0.218&lt;/td&gt;
&lt;td&gt;814500&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;quality=0, method=6&lt;/td&gt;
&lt;td&gt;0.270&lt;/td&gt;
&lt;td&gt;808084&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;quality=50, method=0&lt;/td&gt;
&lt;td&gt;0.080&lt;/td&gt;
&lt;td&gt;1046734&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;quality=50, method=3&lt;/td&gt;
&lt;td&gt;0.324&lt;/td&gt;
&lt;td&gt;811578&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;quality=50, method=6&lt;/td&gt;
&lt;td&gt;0.397&lt;/td&gt;
&lt;td&gt;804762&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;quality=100, method=0&lt;/td&gt;
&lt;td&gt;0.304&lt;/td&gt;
&lt;td&gt;1033038&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;quality=100, method=3&lt;/td&gt;
&lt;td&gt;0.745&lt;/td&gt;
&lt;td&gt;809758&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;quality=100, method=6&lt;/td&gt;
&lt;td&gt;7.203&lt;/td&gt;
&lt;td&gt;791030&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;As of this post, Torch Vision does not support writing WebP images.&lt;/p&gt;

&lt;p&gt;While OpenCV does support WebP, the only way to do lossless encoding is to set the quality level higher than 100. For the curious, here's what I got at quality level 101:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Average time: 0.462&lt;/li&gt;
&lt;li&gt;Bytes in file: 811340&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Generally, OpenCV is the fastest way to encode images. Given OpenCV's limited support for WebP encoding flags, OpenCV with PNG at compression level 0 or 1 seems like a great way to go.&lt;/p&gt;

&lt;p&gt;If file size really matters for the use case, WebP provides superior lossless compression to PNG, saving hundreds of kilobytes to a couple of megabytes depending on the configuration.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
