Reducing JPEG UIImage RAM Usage by 50%

Peter Iakovlev

In 2013, Apple shifted from the detailed, texture-rich designs of the past to a simpler style filled with basic shapes and vector icons. But even in this modern era of straight lines and plain gradients, raster graphics are still not obsolete. This is particularly true on iOS, where there's limited support for fast vector shape rendering. The most effective way to draw complex user interfaces is by turning static parts into simple images, an area where the iOS compositor excels. In addition to UI elements, raster graphics also play an indispensable role in non-UI elements, such as photos, which can't be adequately represented using vector graphics alone.

Rendering raster images can greatly impact the device's memory in non-obvious ways. This is because the memory used by an image is determined by the number of its pixels, not its file size. In this article, we'll explore one of the more obscure techniques for rendering such images on Apple platforms.

RGB vs YUV

In iOS, we typically work with raster images represented as an array of ARGB values (where A stands for alpha) for each pixel:

let pixels: [UInt8] = [
//  a     r     g     b
    0xff, 0xff, 0x00, 0x00,
    ...
]

In this representation, one pixel requires exactly 4 bytes of storage. To put this in perspective, an iPhone 14 Pro Max, with a resolution of 2796 x 1290, would need to allocate 2796 * 1290 * 4 = 14427360 bytes (roughly 14.4 megabytes) of RAM to fill the entire screen with pixels. For comparison, an average JPEG photo of the same size occupies around 1 megabyte of disk space.
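
As a quick sanity check, this arithmetic can be reproduced in a few lines (using the screen dimensions quoted above):

// Bytes needed to cover the screen with 4-byte ARGB pixels
let screenWidth = 2796
let screenHeight = 1290
let bytesPerPixel = 4
let totalBytes = screenWidth * screenHeight * bytesPerPixel
print(totalBytes)                      // 14427360
print(Double(totalBytes) / 1_000_000)  // ~14.4 MB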

JPEGs use a number of tricks to reduce the size of image data, with one of the first being a clever re-organization of colors known as YUV. RGB data can be losslessly converted into the YUV representation:

func rgbToYuv(r: Double, g: Double, b: Double) -> (y: Double, u: Double, v: Double) {
    let luminance = (0.299 * r) + (0.587 * g) + (0.114 * b)
    let u = (1.0 / 1.772) * (b - luminance)
    let v = (1.0 / 1.402) * (r - luminance)
    return (luminance, u, v)
}

And back:

func yuvToRgb(y: Double, u: Double, v: Double) -> (r: Double, g: Double, b: Double) {
    let r = 1.402 * v + y
    let g = (y - (0.299 * 1.402 / 0.587) * v - (0.114 * 1.772 / 0.587) * u)
    let b = 1.772 * u + y
    return (r, g, b)
}
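
Since yuvToRgb is the exact algebraic inverse of rgbToYuv, a round trip reproduces the input up to floating-point rounding:

// Round-trip check: RGB -> YUV -> RGB should return the original values
let (y, u, v) = rgbToYuv(r: 0.25, g: 0.5, b: 0.75)
let (r, g, b) = yuvToRgb(y: y, u: u, v: v)
print(r, g, b) // ~0.25 0.5 0.75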

In the YUV representation, the first component (Y) represents the brightness of a pixel, while the other two encode color.

Simply converting our RGB data into the YUV format will not result in any data storage savings, however. To actually compress the data, we'll need to discard some information, making the color transformation lossy.

Chroma Subsampling

The human eye is quite sensitive to variations in brightness but far less perceptive of variations in color. It turns out we can store the color data at half the resolution in each dimension (a quarter of the chroma samples) without losing much visual clarity, especially at higher resolutions and with natural, photographic images. This is known as 4:2:0 chroma subsampling.

The image above illustrates the extreme scenario in which chroma subsampling becomes visually apparent: two bright, contrasting colored objects overlapping. In naturally occurring images, the effect is much less pronounced.
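
To make the mechanics concrete, here is a minimal sketch of the chroma downsampling step. It is a standalone illustration, not part of the actual pipeline below (where vImage does this work), and it assumes a plane with even dimensions:

// Downsample one chroma plane (U or V) by averaging each 2x2 block.
// Assumes the plane has even width and height.
func subsampleChroma420(_ plane: [[Double]]) -> [[Double]] {
    var result: [[Double]] = []
    for y in stride(from: 0, to: plane.count, by: 2) {
        var row: [Double] = []
        for x in stride(from: 0, to: plane[y].count, by: 2) {
            let average = (plane[y][x] + plane[y][x + 1]
                + plane[y + 1][x] + plane[y + 1][x + 1]) / 4.0
            row.append(average)
        }
        result.append(row)
    }
    return result
}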

The 4:2:0-sampled image requires 50% fewer bytes of storage, as each 2x2 block of four pixels shares a single pair of chroma values:

4 RGB pixels = 4 * 3 = 12 bytes
4 YUV pixels = 4 * 1 + 1 + 1 = 6 bytes

UIImage and YUV

Unfortunately, iOS does not provide any APIs that would allow us to store uncompressed YUV 4:2:0 data in a UIImage. However, there are a couple of ways to work around this.

First, it is possible to use an AVSampleBufferDisplayLayer to display static image content. AVSampleBufferDisplayLayer can also display ARGB content, but to achieve the RAM savings we are after, we will convert the image to the YUV format using vImage:

func imageToYUVCVPixelBuffer(image: UIImage) -> CVPixelBuffer? {
    // 1
    guard let image = image.preparingForDisplay(),
          let cgImage = image.cgImage,
          let data = cgImage.dataProvider?.data,
          let bytes = CFDataGetBytePtr(data),
          let colorSpace = cgImage.colorSpace,
          case .rgb = colorSpace.model,
          cgImage.bitsPerPixel / cgImage.bitsPerComponent == 4 else {
        return nil
    }

    let width = cgImage.width
    let height = cgImage.height

    // 2
    var pixelBuffer: CVPixelBuffer? = nil
    let _ = CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, [
        kCVPixelBufferIOSurfacePropertiesKey: NSDictionary()
    ] as CFDictionary, &pixelBuffer)
    guard let pixelBuffer else {
        return nil
    }

    CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
    defer {
        CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
    }
    guard let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer)?.assumingMemoryBound(to: CVPlanarPixelBufferInfo_YCbCrBiPlanar.self) else {
        return nil
    }

    // 3
    var pixelRange = vImage_YpCbCrPixelRange(Yp_bias: 0, CbCr_bias: 128, YpRangeMax: 255, CbCrRangeMax: 255, YpMax: 255, YpMin: 0, CbCrMax: 255, CbCrMin: 0)
    var info = vImage_ARGBToYpCbCr()
    if vImageConvert_ARGBToYpCbCr_GenerateConversion(kvImage_ARGBToYpCbCrMatrix_ITU_R_709_2, &pixelRange, &info, kvImageARGB8888, kvImage420Yp8_CbCr8, vImage_Flags(kvImageDoNotTile)) != kvImageNoError {
        return nil
    }

    // 4
    var srcBuffer = vImage_Buffer(
        data: UnsafeMutableRawPointer(mutating: bytes),
        height: vImagePixelCount(height),
        width: vImagePixelCount(width),
        rowBytes: cgImage.bytesPerRow
    )
    // The CVPlanarPixelBufferInfo header stores plane offsets and row strides
    // in big-endian byte order, hence the CFSwapInt32BigToHost calls.
    var dstBufferY = vImage_Buffer(
        data: UnsafeMutableRawPointer(mutating: baseAddress)
            .advanced(by: Int(CFSwapInt32BigToHost(UInt32(baseAddress.pointee.componentInfoY.offset)))),
        height: vImagePixelCount(height),
        width: vImagePixelCount(width),
        rowBytes: Int(CFSwapInt32BigToHost(baseAddress.pointee.componentInfoY.rowBytes))
    )
    var dstBufferCbCr = vImage_Buffer(
        data: UnsafeMutableRawPointer(mutating: baseAddress)
            .advanced(by: Int(CFSwapInt32BigToHost(UInt32(baseAddress.pointee.componentInfoCbCr.offset)))),
        height: vImagePixelCount(height / 2),
        width: vImagePixelCount(width / 2),
        rowBytes: Int(CFSwapInt32BigToHost(baseAddress.pointee.componentInfoCbCr.rowBytes))
    )

    // 5
    let permuteMap: [UInt8] = [3, 0, 1, 2]
    if vImageConvert_ARGB8888To420Yp8_CbCr8(&srcBuffer, &dstBufferY, &dstBufferCbCr, &info, permuteMap, vImage_Flags(kvImageDoNotTile)) != kvImageNoError {
        return nil
    }

    return pixelBuffer
}

1 - Ensure the source image is fully loaded into memory and in the appropriate format.
2 - Create a CVPixelBuffer that will store data in the kCVPixelFormatType_420YpCbCr8BiPlanarFullRange format.
3 - Prepare vImage for the RGB to YUV conversion.
4 - Populate the source and destination vImage buffer structures.
5 - Execute the actual conversion.

To display a CVPixelBuffer within an AVSampleBufferDisplayLayer, it needs to be encapsulated in a CMSampleBuffer:

public func makeCMSampleBuffer(pixelBuffer: CVPixelBuffer) -> CMSampleBuffer? {
    var sampleBuffer: CMSampleBuffer?

    var videoInfo: CMVideoFormatDescription? = nil
    CMVideoFormatDescriptionCreateForImageBuffer(allocator: nil, imageBuffer: pixelBuffer, formatDescriptionOut: &videoInfo)
    guard let videoInfo else {
        return nil
    }

    var timingInfo = CMSampleTimingInfo(
        duration: CMTimeMake(value: 1, timescale: 30),
        presentationTimeStamp: CMTimeMake(value: 0, timescale: 30),
        decodeTimeStamp: CMTimeMake(value: 0, timescale: 30)
    )
    CMSampleBufferCreateForImageBuffer(allocator: kCFAllocatorDefault, imageBuffer: pixelBuffer, dataReady: true, makeDataReadyCallback: nil, refcon: nil, formatDescription: videoInfo, sampleTiming: &timingInfo, sampleBufferOut: &sampleBuffer)

    guard let sampleBuffer else {
        return nil
    }

    let attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, createIfNecessary: true)! as NSArray
    let dict = attachments[0] as! NSMutableDictionary
    dict.setValue(kCFBooleanTrue as AnyObject, forKey: kCMSampleAttachmentKey_DisplayImmediately as NSString as String)

    return sampleBuffer
}

Now we can finally display the YUV image on screen:

let videoLayer = AVSampleBufferDisplayLayer()
videoLayer.frame = CGRect(origin: CGPoint(x: 10.0, y: 70.0), size: CGSize(width: 200.0, height: 200.0))
videoLayer.enqueue(makeCMSampleBuffer(pixelBuffer: imageToYUVCVPixelBuffer(image: image)!)!)
view.layer.addSublayer(videoLayer)

A More Efficient Alternative

AVSampleBufferDisplayLayer is a comparatively heavy-duty UI element that performs extensive hidden state management in the background. The time it takes to initialize a layer of this type is generally unpredictable, and it can sporadically cause animation delays. While AVSampleBufferDisplayLayer excels at displaying YUV video content, its advanced functionality isn't necessary for presenting static images. Instead, we can use an obscure feature of Core Animation that allows us to assign an IOSurface-backed CVPixelBuffer directly to CALayer.contents:

let directLayer = CALayer()
directLayer.frame = CGRect(origin: CGPoint(x: 10.0, y: 70.0), size: CGSize(width: 200.0, height: 200.0))
directLayer.contents = imageToYUVCVPixelBuffer(image: image)!
view.layer.addSublayer(directLayer)

If you run this code in the Simulator, you will not see any image; it only works on physical iOS devices and in native macOS apps.
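
If the code also needs to run in the Simulator, one option is a hypothetical compile-time fallback that assigns regular CGImage contents there, adapting the snippet above:

// Hypothetical fallback: use plain CGImage contents in the Simulator,
// where CVPixelBuffer layer contents are not rendered.
#if targetEnvironment(simulator)
directLayer.contents = image.cgImage
#else
directLayer.contents = imageToYUVCVPixelBuffer(image: image)!
#endif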

Conclusion

This technique can be trivially extended to handle YUV+Alpha images: use a pixel format that carries an additional alpha plane, such as kCVPixelFormatType_420YpCbCr8VideoRange_8A_TriPlanar, and adjust the conversion routine accordingly.
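
As a sketch of what would change, the buffer allocation step from imageToYUVCVPixelBuffer would request the alpha-capable format instead (width and height as defined there; the vImage conversion would also need to fill the extra alpha plane):

// Sketch: allocate a tri-planar Y'CbCr + Alpha buffer instead of the bi-planar one
var alphaPixelBuffer: CVPixelBuffer? = nil
let _ = CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_420YpCbCr8VideoRange_8A_TriPlanar, [
    kCVPixelBufferIOSurfacePropertiesKey: NSDictionary()
] as CFDictionary, &alphaPixelBuffer)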

A 4:2:0-coded YUV image requires 50% less RAM than an equivalent RGB image. For illustrative purposes, the code in this article converts an existing RGB image to the YUV format, which causes the application's memory usage to briefly peak at 150% of the RGB equivalent during the conversion. To avoid this peak, it is best to obtain the image in YUV format directly, which can be achieved with a libjpeg-derived library capable of outputting YUV data.
