DEV Community

beefed.ai

Posted on • Originally published at beefed.ai

Memory-Safe Mobile Video Editing Engine: Timeline Design & Optimizations

The symptoms you see in the field are consistent: the editor plays fine in short demos but users report OOM kills during heavy scrubbing, preview stalls when multiple filters are applied, exports that crash mid‑way, and background uploads that never finish. Those failures come from a single design anti-pattern — eagerly materializing full‑resolution frames for many layers and operations instead of evaluating the timeline as a stream and bounding the working set.

Contents

  • [Why a non-destructive timeline beats in-place edits on mobile]
  • [Designing a memory-safe pixel pipeline for constrained devices]
  • [Delivering smooth, low-memory scrubbing and real-time preview]
  • [Building a pragmatic, low-memory transcoding pipeline for export]
  • [Crash-proofing: profiling, fail-safes, and UX signals]
  • [Implementation checklist: ship a memory-safe timeline editor]

Why a non-destructive timeline beats in-place edits on mobile

A non-destructive timeline stores edits as metadata — ranges, trims, transforms, effect descriptors, keyframes — and evaluates those descriptors only when you need a frame or an export. That model avoids copying or rewriting source media and lets the engine choose when and at what fidelity to materialize pixels. On iOS, this is the mental model behind AVMutableComposition and AVMutableVideoComposition, which let you assemble tracks and apply video composition instructions without mutating originals. (developer.apple.com)

Concrete design rules that matter on mobile

  • Treat the timeline as a mapping from composition time → (source asset, source time, effect chain). Do not pre-render layers unless you absolutely must.
  • Represent effects as descriptors (small JSON/binary blobs) that can be evaluated on GPU/CPU when needed; avoid serializing full pixel results into the project file.
  • Favor lazy evaluation and incremental render: only render frames visible to the user or those explicitly requested for export.
  • Use immutable source assets and keep edits as diffs. This makes undo/redo cheap and avoids duplicating data.

Contrarian insight: non‑destructive doesn't automatically equal low‑memory. The common trap is a non‑destructive editor that still pre-renders every effect output into full-resolution RGBA buffers "just in case" — that defeats the point and multiplies memory by tracks × layers × frames.

Example data model (pseudocode)

struct Clip {
  let sourceURL: URL
  let srcRange: CMTimeRange
  let transform: TransformDescriptor
  let filters: [FilterDescriptor] // lightweight descriptors only
}

struct Timeline {
  var tracks: [Track]
  func mapping(at compositionTime: CMTime) -> [(Clip, CMTime)] { ... } // returns which source+time to fetch
}

When you evaluate a frame, walk the mapping, fetch only the required sample(s), composite with GPU shaders, present, then release or return the buffers to a pool.
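To make that walk concrete, here is a self-contained sketch of the mapping step; times are plain Doubles rather than CMTime, and the types are simplified stand-ins for the model above:

```swift
// Sketch: composition time → (clip, source time), one hit per track.
// Doubles stand in for CMTime so the example runs anywhere.
struct Clip {
    let sourceID: String
    let timelineStart: Double   // where the clip begins on the timeline
    let sourceStart: Double     // trim offset into the source asset
    let duration: Double
}

struct Track {
    let clips: [Clip]           // assumed sorted and non-overlapping
}

struct Timeline {
    let tracks: [Track]

    /// For each track, which source sample to fetch at composition time `t` (if any).
    func mapping(at t: Double) -> [(clip: Clip, sourceTime: Double)] {
        tracks.compactMap { track in
            track.clips
                .first { t >= $0.timelineStart && t < $0.timelineStart + $0.duration }
                .map { ($0, $0.sourceStart + (t - $0.timelineStart)) }
        }
    }
}
```

Because the result names only sources and times, nothing here forces a decode — the renderer decides how many of those hits it can afford to materialize.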

Designing a memory-safe pixel pipeline for constrained devices

The pixel pipeline is where memory blows up fastest. A single full-resolution RGBA frame is expensive — treat that as the top-level metric when you architect buffers.

Frame-size math (approximate, bytes per frame)
| Resolution | Pixels | RGBA (4 B/pixel) | YUV420 (1.5 B/pixel) |
|---:|---:|---:|---:|
| 1280×720 (720p) | 921,600 | 3.52 MiB | 1.32 MiB |
| 1920×1080 (1080p) | 2,073,600 | 7.91 MiB | 2.97 MiB |
| 3840×2160 (4K) | 8,294,400 | 31.64 MiB | 11.86 MiB |

Important: Holding many full‑res RGBA frames multiplies memory linearly — 4K is unforgiving.
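It helps to compute these budgets rather than hard-code them. A small sketch reproducing the table's arithmetic:

```swift
// Sketch: per-frame memory cost, matching the table above.
func frameBytes(width: Int, height: Int, bytesPerPixel: Double) -> Double {
    Double(width * height) * bytesPerPixel
}

let mib = 1024.0 * 1024.0
// 4K RGBA: 3840 × 2160 × 4 B ≈ 31.64 MiB per frame
let fourKRGBA = frameBytes(width: 3840, height: 2160, bytesPerPixel: 4.0) / mib
// The same frame in YUV420 (1.5 B/pixel) is roughly a third of that.
let fourKYUV = frameBytes(width: 3840, height: 2160, bytesPerPixel: 1.5) / mib
```

Multiply the per-frame figure by your buffer-pool size to get the working-set cost before you commit to a pipeline design.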

Key tactics

  1. Pixel‑buffer reuse and pools

    Use an OS-provided pixel buffer pool rather than allocating buffers per-frame. On iOS, CVPixelBufferPool is designed for this; create one sized for your pipeline concurrency and reuse buffers via CVPixelBufferPoolCreatePixelBuffer. That pattern avoids frequent heap allocations and fragmentation. (developer.apple.com)

  2. Process in YUV where possible

    Decoders output YUV (often YUV420); keep processing in YUV and only convert to RGBA for the GPU shader or final compositor if necessary. Each conversion costs memory and CPU.

  3. Zero-copy surfaces and hardware surfaces

    Feed decoders/encoders and renderers via native surfaces whenever available. On Android, using MediaCodec.createInputSurface() lets you avoid CPU copies between codec and EGL/Surface; on iOS, use kCVPixelBufferIOSurfacePropertiesKey with CVPixelBuffer to enable efficient handoff to Metal/CoreAnimation. (developer.android.com)

  4. Pool sizing heuristic

    Derive pool size from pipeline concurrency, not total frames. Example: poolSize = rendererBuffers + encoderBuffers + decoderBuffers + safetyMargin. For a typical pipeline: renderer(2) + encoder(2) + decoder(1) + safety(1) => 6 buffers.
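A sketch of that heuristic, with an added clamp against a per-app byte budget (the clamp is an extra assumption, not part of the formula above):

```swift
// Sketch: pool size from pipeline concurrency, optionally clamped to a byte budget.
struct PoolPlan {
    var rendererBuffers = 2, encoderBuffers = 2, decoderBuffers = 1, safetyMargin = 1

    var poolSize: Int {
        rendererBuffers + encoderBuffers + decoderBuffers + safetyMargin
    }

    /// Largest pool that still fits `budgetBytes`, never below one buffer.
    func clamped(frameBytes: Int, budgetBytes: Int) -> Int {
        min(poolSize, max(1, budgetBytes / frameBytes))
    }
}

let plan = PoolPlan()
// plan.poolSize == 2 + 2 + 1 + 1 == 6
```

If the clamp ever fires, that is the signal to drop to a lower working resolution rather than to allocate past the budget.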

Swift example: create and use a CVPixelBufferPool and an AVAssetWriterInputPixelBufferAdaptor safely.

```swift
import AVFoundation
import CoreVideo

// `width`, `height`, `adaptor` (an AVAssetWriterInputPixelBufferAdaptor) and `pts`
// are assumed to be in scope.
let attrs: [String: Any] = [
  kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
  kCVPixelBufferWidthKey as String: width,
  kCVPixelBufferHeightKey as String: height,
  kCVPixelBufferIOSurfacePropertiesKey as String: [:] // enable IOSurface backing
]
var pool: CVPixelBufferPool?
CVPixelBufferPoolCreate(kCFAllocatorDefault, nil, attrs as CFDictionary, &pool)

// later, when writing frames:
var pb: CVPixelBuffer?
if let pool, CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pb) == kCVReturnSuccess,
   let pixelBuffer = pb {
  // fill pixelBuffer via Metal/OpenGL or a pixel copy, then append through the adaptor
  adaptor.append(pixelBuffer, withPresentationTime: pts)
} else {
  // pool exhausted or creation failed: apply back-pressure (wait or drop a frame)
  // instead of falling back to ad hoc allocation
}
```

Android note: the maxImages argument to ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, maxImages) controls how many images the system will buffer — smaller means lower memory, but it must be large enough to cover your concurrent stages. (developer.android.com)


Never keep more decoded full‑resolution frames in memory than your pool budget allows. A single 4K RGBA frame (~31 MiB) times a dozen buffers kills mid‑range phones.

Delivering smooth, low-memory scrubbing and real-time preview

Scrubbing is an I/O + decode problem that becomes a memory problem if you eagerly decode many frames. The solution mixes lower‑fidelity proxies, smart seeking, and a tiny decode cache.

Patterns that work

  • Lightweight proxies at import

    Generate low-res, low-bitrate proxy assets (e.g., quarter resolution or lower bitrate H.264/HEVC) during import. Use proxies for fast scrubbing, then swap to original media for final export. Proxy generation can be backgrounded and resumed; it's far cheaper than trying to keep many decoded full‑res frames.

  • Keyframe-aware seeking + progressive refinement

    Seek to nearest keyframe (fast) then decode forward to the exact frame if needed. For fast scrubs, stick with the keyframe result or a downscaled version; only decode exact frames when the user pauses. Many media stacks (including AVAssetImageGenerator) expose tolerance settings to make seeks cheaper; use those to let the engine return a near‑frame quickly. (developer.apple.com)

  • Small LRU decode cache + velocity heuristics

    Keep a tiny LRU cache of decoded frames (e.g., 3–6 frames at the resolution you need). When scrubbing, adapt the cache window size to scrubbing velocity: large window when user moves slowly, tiny window when fast. Cancel outstanding decodes when velocity increases.
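On iOS, the tolerance settings mentioned above might be configured like this sketch — `asset` is assumed to be an AVAsset you already hold, and the specific tolerances and sizes are illustrative:

```swift
import AVFoundation

// Sketch: keyframe-friendly scrub seeks via AVAssetImageGenerator tolerances.
let generator = AVAssetImageGenerator(asset: asset)
generator.appliesPreferredTrackTransform = true
// Wide tolerances let the generator hand back a nearby (often keyframe) image fast.
generator.requestedTimeToleranceBefore = CMTime(seconds: 1.0, preferredTimescale: 600)
generator.requestedTimeToleranceAfter  = CMTime(seconds: 1.0, preferredTimescale: 600)
// Cap decoded size so scrub images never materialize at full resolution.
generator.maximumSize = CGSize(width: 480, height: 270)

var actualTime = CMTime.zero
let scrubImage = try? generator.copyCGImage(
  at: CMTime(seconds: 12.3, preferredTimescale: 600), actualTime: &actualTime)
// When the user pauses, set both tolerances to .zero and re-request the exact frame.
```

The returned `actualTime` tells you how far the cheap result landed from the requested time, which is useful for deciding whether a refinement pass is worth scheduling.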

Scrub prefetch pseudocode

onScrub(position, velocity):
  if velocity > HIGH_THRESHOLD:
    displayProxyFrame(position) // cheap
    cancel(allHeavyDecodes)
  else:
    targets = pickFramesAround(position, prefetchCountForVelocity(velocity))
    for t in targets: scheduleDecode(t) // bounded concurrency
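The adaptive LRU idea can be sketched as plain logic — frame keys and decoded payloads are simplified to Int/String here, and the velocity thresholds are illustrative:

```swift
// Sketch: tiny LRU decode cache whose capacity tracks scrubbing velocity.
final class FrameLRUCache {
    private var order: [Int] = []          // most-recently-used key at the end
    private var store: [Int: String] = [:]
    private(set) var capacity: Int

    init(capacity: Int) { self.capacity = capacity }

    /// Shrink the window while scrubbing fast, grow it when the user slows down.
    func adapt(toVelocity v: Double) {
        capacity = v > 2.0 ? 2 : 6         // illustrative thresholds
        evictIfNeeded()
    }

    func insert(_ frame: String, forKey key: Int) {
        if store[key] != nil { order.removeAll { $0 == key } }
        store[key] = frame
        order.append(key)
        evictIfNeeded()
    }

    func frame(forKey key: Int) -> String? {
        guard let f = store[key] else { return nil }
        order.removeAll { $0 == key }      // refresh recency
        order.append(key)
        return f
    }

    private func evictIfNeeded() {
        while order.count > capacity {
            store[order.removeFirst()] = nil   // evict least-recently-used
        }
    }
}
```

In a real engine the payload would be a pooled pixel buffer and eviction would return it to the pool rather than free it.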
  • Use GPU compositing for overlays and effects

    Composite multiple layers in GPU (Metal/OpenGL) into a single surface and reuse it. Avoid CPU copyback; render to a CVPixelBuffer or a Surface that your encoder can consume directly.

  • Thumbnails & sprite sheets

    Pre-generate a timeline thumbnail sprite sheet (e.g., every Nth frame at import) and use it as the immediate visual during scrubbing; decode high‑quality frames asynchronously.
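Sprite-sheet lookup is pure index arithmetic. A sketch, with illustrative layout parameters:

```swift
// Sketch: map a timeline position to a tile rect in a pre-generated sprite sheet.
// Assumes one thumbnail every `frameInterval` seconds, laid out row-major.
struct SpriteSheet {
    let tileWidth: Int
    let tileHeight: Int
    let columns: Int
    let frameInterval: Double   // seconds between thumbnails

    /// (x, y, w, h) of the tile covering `time`, in sheet pixel coordinates.
    func tileRect(at time: Double) -> (x: Int, y: Int, w: Int, h: Int) {
        let index = max(0, Int(time / frameInterval))
        let col = index % columns
        let row = index / columns
        return (col * tileWidth, row * tileHeight, tileWidth, tileHeight)
    }
}
```

Because the sheet is a single compressed image, the whole timeline's scrub preview costs one texture upload instead of one decode per tick.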

Real-world tradeoff: proxies + keyframe approximation reduce memory and decoding load massively, and they are what separates a janky demo from a production‑grade mobile video editor.

Building a pragmatic, low-memory transcoding pipeline for export

Export must be reliable and bounded in peak memory. Design the pipeline as a streaming set of stages with disk-backed spooling when needed.

Pipeline pattern (streaming, chunked)

  1. Build composition graph (metadata) and create a read plan: sequence of source ranges to read.
  2. Create a streaming decode stage: read packets/frames for a small time window, decode to CVPixelBuffer / Image pooled buffers.
  3. Apply GPU/CPU effects per frame, render to encoder input surface if possible.
  4. Feed frames to hardware encoder incrementally and write muxed output using the platform muxer.
  5. Use disk for temporary files or segments; do not accumulate final frames in memory.

Why streaming matters: FFmpeg and other media systems explicitly model transcoding as a pipeline of demuxer → decoder → filters → encoder → muxer; buffering between stages must be bounded or you'll allocate unbounded memory. (ffmpeg.org)

Use hardware encoders

  • iOS: VTCompressionSession or AVAssetWriter backed by hardware via VideoToolbox — hardware encoding reduces CPU and can accept zero‑copy pixel buffers in many cases. (developer.apple.com)
  • Android: MediaCodec with createInputSurface() to accept frames without extra copies; use MediaMuxer to write MP4/WebM. (developer.android.com)

Export resilience: chunk, checkpoint, resume

  • Export in segments (e.g., 30s chunks). After each chunk is encoded and muxed, write to disk and optionally upload. If the process crashes, you only need to re-encode the last incomplete chunk.
  • Keep a small JSON checkpoint file with current position and active parameters so the export can resume.
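One way to sketch that checkpoint file with Codable — field names here are illustrative, not a fixed format:

```swift
import Foundation

// Sketch: the small JSON checkpoint written after each encoded chunk.
struct ExportCheckpoint: Codable, Equatable {
    var projectID: String
    var lastCompletedChunk: Int
    var chunkDurationSeconds: Double
    var outputSegmentPaths: [String]
}

func saveCheckpoint(_ cp: ExportCheckpoint, to url: URL) throws {
    let data = try JSONEncoder().encode(cp)
    try data.write(to: url, options: .atomic)  // atomic write: a crash never leaves a torn file
}

func loadCheckpoint(from url: URL) throws -> ExportCheckpoint {
    try JSONDecoder().decode(ExportCheckpoint.self, from: try Data(contentsOf: url))
}
```

On resume, re-encode from `lastCompletedChunk + 1` and reuse the segment files already on disk.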

Example (high-level) Swift pattern using AVAssetReader + AVAssetWriter:

```swift
let reader = try AVAssetReader(asset: composition)
let readerOutput = AVAssetReaderVideoCompositionOutput(
  videoTracks: composition.tracks(withMediaType: .video), videoSettings: nil)
readerOutput.videoComposition = videoComposition // the timeline's AVMutableVideoComposition
reader.add(readerOutput)

let writer = try AVAssetWriter(outputURL: outURL, fileType: .mp4)
let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
let adaptor = AVAssetWriterInputPixelBufferAdaptor(
  assetWriterInput: writerInput, sourcePixelBufferAttributes: attrs)
writer.add(writerInput)

writer.startWriting(); reader.startReading()
writer.startSession(atSourceTime: .zero)

while let sample = readerOutput.copyNextSampleBuffer() {
  let pts = CMSampleBufferGetPresentationTimeStamp(sample)
  // render effects for this frame into a pooled pixelBuffer (see the pool example above)
  while !writerInput.isReadyForMoreMediaData { /* brief back-pressure wait */ }
  adaptor.append(pixelBuffer, withPresentationTime: pts)
}
writerInput.markAsFinished()
writer.finishWriting { /* chunk complete: write checkpoint, release buffers */ }
```

Edge notes: do not hold the whole encoded output in memory; write to disk, and stream uploads with background-capable transfer APIs (URLSession background sessions on iOS, WorkManager on Android) to avoid tying up the UI process. (developer.apple.com)

Crash-proofing: profiling, fail-safes, and UX signals

Profiling and graceful degradation are the difference between an editor that crashes for 1% of users and one that runs reliably across millions.

Profiling checklist

  • Capture representative workloads: long timelines with filters, multi‑track mixes, 1080p/4K assets.
  • Use Instruments (Allocations, VM Tracker, Leaks) and follow Apple’s guide to minimize memory footprint and interpret Persistent Bytes. (developer.apple.com)
  • On Android use Android Studio Memory Profiler and heap dumps to inspect retained objects and buffer allocations.

Fail‑safes and guard rails

  • Watch for memory warnings and trim caches: observe UIApplication.didReceiveMemoryWarningNotification (iOS) and implement onTrimMemory/ComponentCallbacks2 (Android) to free caches and shrink buffer pool sizes. (learn.microsoft.com)
  • Catch and handle catastrophic allocation failures: on Android handle OutOfMemoryError at boundary points (decode/encode loops) and fall back to proxies or cancel a heavy operation; on iOS rely on memory warnings and design to avoid hitting malloc failure.
  • Timeouts and watchdogs: set per-stage timeouts and a supervising controller that can cleanly abort the export and write a checkpoint if a stage stalls.

UX polish that prevents crashes

  • Communicate when the app switches to proxy mode or reduces preview quality to maintain responsiveness.
  • Allow users to choose an export profile (e.g., Max Quality vs. Fast/Low‑Memory Export) and persist that as a project preference.
  • Provide a progress UI that also reports memory‑based degradations (e.g., “Switched to low‑res preview to conserve memory”).

Telemetry: capture memory high‑water marks around crashes (never send raw frames, only metrics and stack traces). These traces show whether spikes happen during decode, composite, or encode.
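A minimal sketch of per-stage high-water tracking — the actual footprint sampling (e.g. via task_vm_info on iOS) is assumed and left out:

```swift
// Sketch: record the peak bytes observed per pipeline stage so telemetry can
// report whether spikes happen during decode, composite, or encode.
final class HighWaterTracker {
    private(set) var peaks: [String: Int] = [:]   // stage name → peak bytes seen

    func sample(stage: String, bytesInUse: Int) {
        peaks[stage] = max(peaks[stage] ?? 0, bytesInUse)
    }

    /// The stage that hit the highest footprint — attach this to crash metrics.
    var worstStage: (name: String, peakBytes: Int)? {
        peaks.max { $0.value < $1.value }.map { ($0.key, $0.value) }
    }
}
```

Only these scalar peaks leave the device, never pixel data, keeping the telemetry consistent with the rule above.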

Implementation checklist: ship a memory-safe timeline editor

Use the checklist below as a release gate. Each item is actionable and measurable.

  1. Data model & edit storage

    • [ ] Timeline stores edits as descriptors, not materialized frames.
    • [ ] Composition graph correctly maps composition time → source/time + descriptor.
  2. Pixel buffer & pool strategy

    • [ ] Implement CVPixelBufferPool (iOS) or controlled ImageReader buffer counts (Android). (developer.apple.com)
    • [ ] Keep poolSize derived from measured concurrency; test under load.
  3. Proxy assets & thumbnails

    • [ ] Generate proxy assets on import (background, resumable).
    • [ ] Precompute thumbnail sprite sheets for timeline scrubbing.
  4. Scrub UX & prefetching

    • [ ] Implement keyframe seeking + progressive refinement. (developer.apple.com)
    • [ ] LRU decode cache with adaptive window based on velocity.
  5. Export & transcoding pipeline

    • [ ] Streaming pipeline: decode → effect → encode → mux (no all‑in‑memory stage). (ffmpeg.org)
    • [ ] Use hardware encoders (VTCompressionSession/MediaCodec) where possible. (developer.apple.com)
  6. Background uploads & resume

    • [ ] Chunked exports + checkpoint files; schedule uploads using background-capable APIs (iOS URLSession background sessions, Android WorkManager). (developer.apple.com)
  7. Observability & hardening

    • [ ] Instruments and memory traces collected from representative devices. (developer.apple.com)
    • [ ] Implement didReceiveMemoryWarning / onTrimMemory to purge caches and shrink pools.
  8. QA: stress tests

    • [ ] Run scripted scenarios: multi-track scrubbing, long export while background uploading, import of large 4K assets; assert no OOMs and controlled tail latency.

A small checklist for a first release (minimal viable safety)

  • Use proxies for scrubbing by default.
  • Limit in‑memory decoded frames to <= 4 at 1080p (adjust via profiling).
  • Export in streaming chunks with a checkpoint file.

Sources

CVPixelBufferPoolRelease (CoreVideo) - Reference for CVPixelBufferPool APIs and the recommended reuse pattern for pixel buffers. (developer.apple.com)

Editing — AVFoundation Programming Guide - How AVMutableComposition/AVMutableVideoComposition model non‑destructive edits and instructions. (developer.apple.com)

AVAssetWriterInputPixelBufferAdaptor.Create Method - Documentation on creating an adaptor for feeding CVPixelBuffer instances into AVAssetWriter. (learn.microsoft.com)

MediaCodec (Android Developers) - Low‑level Android codec API and guidance for createInputSurface() and buffer handling. (developer.android.com)

ImageReader (Android Developers) - Notes on newInstance(..., maxImages) and how maxImages affects memory usage. (developer.android.com)

FFmpeg Documentation - Overview of how a transcoding pipeline (demuxer → decoder → filters → encoder → muxer) should be structured to avoid unbounded buffering. (ffmpeg.org)

Technical Note TN2434: Minimizing your app's Memory Footprint - Apple guidance on profiling memory and interpreting persistent allocations with Instruments. (developer.apple.com)

Energy Efficiency Guide for iOS Apps — Defer Networking - Guidance on NSURLSession background sessions and discretionary transfers. (developer.apple.com)

WorkManager (Android Developers) - Recommended API for reliable background work and uploads on Android. (developer.android.com)

VTCompressionSession EncodeFrame (VideoToolbox) - VideoToolbox API for hardware-accelerated encoding on Apple platforms. (developer.apple.com)

UIApplication.DidReceiveMemoryWarningNotification (UIKit) - Memory warning notification reference for purging caches on iOS. (learn.microsoft.com)

Build the timeline around bounded memory: design metadata-first, reuse pixel buffers, prefer proxies for interactivity, stream exports, and harden against memory warnings — the result is an editor that stays usable on real phones, not just in the lab.
