<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Manikandan T</title>
    <description>The latest articles on DEV Community by Manikandan T (@manikandan_t_6d72e32ac4e8).</description>
    <link>https://dev.to/manikandan_t_6d72e32ac4e8</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3922966%2F9f69e4ac-5138-476e-ac55-023629425266.png</url>
      <title>DEV Community: Manikandan T</title>
      <link>https://dev.to/manikandan_t_6d72e32ac4e8</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/manikandan_t_6d72e32ac4e8"/>
    <language>en</language>
    <item>
      <title>72B Parameters, Zero Quantization, One GPU: Benchmarking Qwen2-VL on AMD MI300X</title>
      <dc:creator>Manikandan T</dc:creator>
      <pubDate>Wed, 13 May 2026 08:02:13 +0000</pubDate>
      <link>https://dev.to/manikandan_t_6d72e32ac4e8/72b-parameters-zero-quantization-one-gpu-benchmarking-qwen2-vl-on-amd-mi300x-15mh</link>
      <guid>https://dev.to/manikandan_t_6d72e32ac4e8/72b-parameters-zero-quantization-one-gpu-benchmarking-qwen2-vl-on-amd-mi300x-15mh</guid>
      <description>&lt;p&gt;I loaded Qwen2-VL-72B-Instruct at full BF16 precision on a single GPU, served 64 concurrent DocVQA streams, and kept the system stable at 99.5% KV cache utilization - all for $1.99/hour on the AMD Developer Cloud.&lt;/p&gt;

&lt;p&gt;This post walks through exactly how I did it: the hardware economics that make it possible, the deployment configuration that makes it stable, and the benchmark results that prove it works.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Matters
&lt;/h2&gt;

&lt;p&gt;Building enterprise-grade visual RAG architectures - invoice extraction, contract intelligence, automated RFP processing, document QA, OCR-heavy PDF understanding, and long-context retrieval pipelines - requires vision-language models that don't hallucinate structural details. Qwen2-VL-72B is still one of the most capable open-weights models for these tasks.&lt;/p&gt;

&lt;p&gt;The problem is running it. A 72-billion parameter model in BF16 precision consumes roughly 144GB of VRAM just to load the weights. Traditional 80GB GPUs force you into aggressive 4-bit quantization, which severely degrades OCR accuracy and multimodal reasoning.&lt;/p&gt;

&lt;p&gt;The AMD Instinct MI300X changes the deployment calculus entirely. With 192GB of HBM3 memory, it fits the full unquantized model on a single GPU and leaves 48GB of headroom for KV caches and concurrent workloads.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Economics: MI300X vs. A100 and H100
&lt;/h2&gt;

&lt;p&gt;Before diving into deployment details, let's address the cost question — because hardware costs cannot be evaluated in a vacuum. What matters is the cost per usable gigabyte of VRAM required to serve your specific model.&lt;/p&gt;

&lt;h3&gt;
  
  
  The NVIDIA 80GB Constraint
&lt;/h3&gt;

&lt;p&gt;If you deploy on NVIDIA infrastructure using A100 (80GB) or H100 (80GB) GPUs, a single GPU is physically incapable of loading Qwen2-VL-72B unquantized. You are forced into one of two compromises.&lt;/p&gt;

&lt;p&gt;The first option is aggressive quantization: crush the model down to 4-bit (AWQ/GPTQ) to fit it on a single 80GB card. This severely degrades OCR and multimodal reasoning capabilities — exactly the capabilities you need for enterprise document processing.&lt;/p&gt;

&lt;p&gt;The second option is tensor parallelism (TP=2): provision a multi-GPU node and shard the model across two cards using &lt;code&gt;--tensor-parallel-size 2&lt;/code&gt;. This works, but it introduces cross-device NCCL communication overhead on every forward pass, inflating inter-token latency beyond what the raw memory bandwidth would suggest.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Cost Breakdown
&lt;/h3&gt;

&lt;p&gt;Using standard tier-2 cloud pricing (Lambda Cloud, CoreWeave - generally cheaper than AWS/GCP on-demand):&lt;/p&gt;

&lt;p&gt;A 2x A100 (80GB) node runs approximately $3.00 to $4.00 per hour per card. You get the 160GB of pooled VRAM you need, but on older Ampere architecture with slower memory bandwidth, plus the NCCL overhead between cards.&lt;/p&gt;

&lt;p&gt;A 2x H100 (80GB) node runs approximately $6.00 to $8.00+ per hour per card. Hopper is blazing fast, but you are paying for two cards' worth of compute just to get 160GB of pooled VRAM - and you still carry the TP=2 communication overhead.&lt;/p&gt;

&lt;p&gt;A single AMD MI300X (192GB) node on the AMD Developer Cloud costs $1.99 per hour (production pricing may vary).&lt;/p&gt;
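&lt;p&gt;One way to make that comparison concrete is to normalize to dollars per hour per gigabyte of pooled VRAM. A quick sketch using the midpoints of the price ranges above (illustrative figures, not vendor quotes):&lt;/p&gt;

```shell
# $/hr per GB of pooled VRAM, using midpoint prices from this post
awk 'BEGIN {
  printf "2x A100 (160GB pooled): $%.4f per GB-hour\n", (2 * 3.50) / 160
  printf "2x H100 (160GB pooled): $%.4f per GB-hour\n", (2 * 7.00) / 160
  printf "1x MI300X (192GB):      $%.4f per GB-hour\n", 1.99 / 192
}'
```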

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft5ujfry4uhdtikbemc2i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft5ujfry4uhdtikbemc2i.png" alt="AMD Developer Cloud - MI300X" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtvjp6ie67t3qy0jabiu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtvjp6ie67t3qy0jabiu.png" alt="1 GPU vs 8 GPU Comparison" width="800" height="305"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Architectural Advantage
&lt;/h3&gt;

&lt;p&gt;The MI300X doesn't just cut the hourly cost by 50-75%. It completely eliminates the complexity of multi-GPU tensor parallelism. There is no cross-device communication overhead. The inter-token latency is bounded strictly by the 5.3 TB/s memory bandwidth of a single HBM3 pool — and my stress test benchmarks measured ITL from 39.6ms at the synchronous baseline up to 66.8ms under peak load, which validates that the memory subsystem delivers on its theoretical bandwidth promises.&lt;/p&gt;

&lt;p&gt;For enterprise teams scaling visual RAG pipelines, this shifts the unit economics of multimodal inference from prohibitive to profitable.&lt;/p&gt;




&lt;h2&gt;
  
  
  Hardware and Environment
&lt;/h2&gt;

&lt;p&gt;I provisioned the environment on the AMD Developer Cloud. Here are the system specifications:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GPU:&lt;/strong&gt; 1x AMD Instinct MI300X (192GB HBM3 VRAM)&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Compute:&lt;/strong&gt; 20 vCPUs, 240GB RAM&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Boot Storage:&lt;/strong&gt; 720GB NVMe SSD&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Scratch Storage:&lt;/strong&gt; 5TB NVMe SSD&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Software Stack:&lt;/strong&gt; Ubuntu 22.04, ROCm 7.2.0, Docker  &lt;/p&gt;

&lt;p&gt;The 192GB VRAM is the critical specification. With ~144GB consumed by the model weights, that leaves approximately 48GB of headroom. That 48GB is what allows processing massive base64-encoded images, maintaining large context windows, and handling concurrent batch requests without triggering OOM errors.&lt;/p&gt;
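&lt;p&gt;The arithmetic behind those numbers is simple enough to check in the shell (a sketch that assumes 2 bytes per parameter for BF16 and ignores activation overhead):&lt;/p&gt;

```shell
# 72B parameters x 2 bytes (BF16) = weight footprint; the rest is headroom
weights_gb=$(( 72 * 2 ))
headroom_gb=$(( 192 - weights_gb ))
echo "weights: ${weights_gb}GB, headroom: ${headroom_gb}GB"
# -> weights: 144GB, headroom: 48GB
```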
&lt;h3&gt;
  
  
  Preparing NVMe Storage
&lt;/h3&gt;

&lt;p&gt;The 5TB NVMe scratch disk needs to be mounted for the HuggingFace cache. Downloading 144GB of weights to the boot disk will exhaust space and throttle loading times.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Format the scratch disk with XFS (excellent large-file and parallel I/O handling)&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;wipefs &lt;span class="nt"&gt;-af&lt;/span&gt; /dev/vdc1
&lt;span class="nb"&gt;sudo &lt;/span&gt;mkfs.xfs &lt;span class="nt"&gt;-f&lt;/span&gt; /dev/vdc1
&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /mnt/models
&lt;span class="nb"&gt;sudo &lt;/span&gt;mount /dev/vdc1 /mnt/models
&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; &lt;span class="nv"&gt;$USER&lt;/span&gt;:&lt;span class="nv"&gt;$USER&lt;/span&gt; /mnt/models

&lt;span class="c"&gt;# Point HuggingFace cache to the NVMe drive&lt;/span&gt;
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /mnt/models/huggingface
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;HF_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/mnt/models/huggingface
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"export HF_HOME=/mnt/models/huggingface"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; ~/.bashrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the cache on NVMe, subsequent container restarts load the full 144GB of weights into VRAM in seconds rather than minutes.&lt;/p&gt;
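&lt;p&gt;With &lt;code&gt;HF_HOME&lt;/code&gt; pointed at the NVMe mount, you can optionally pre-fetch the weights before launching the container so the first start doesn't block on the download. A minimal sketch using &lt;code&gt;huggingface-cli&lt;/code&gt;, which ships with the &lt;code&gt;huggingface_hub&lt;/code&gt; package:&lt;/p&gt;

```shell
# Pre-download Qwen2-VL-72B-Instruct into the NVMe-backed HF cache
pip install -U "huggingface_hub[cli]"
huggingface-cli download Qwen/Qwen2-VL-72B-Instruct
```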




&lt;h2&gt;
  
  
  Deploying vLLM on MI300X
&lt;/h2&gt;

&lt;p&gt;Deploying vLLM on AMD hardware requires passing the correct kernel drivers into the Docker container. Unlike NVIDIA's &lt;code&gt;--gpus all&lt;/code&gt; flag, the ROCm ecosystem requires direct device passthrough of the KFD (Kernel Fusion Driver) and DRI (Direct Rendering Infrastructure) interfaces.&lt;/p&gt;

&lt;p&gt;Here is the production deployment command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; vllm-qwen2-vl-72b &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--network&lt;/span&gt; host &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--ipc&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;host &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--device&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/dev/kfd &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--device&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/dev/dri &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--group-add&lt;/span&gt; video &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--group-add&lt;/span&gt; render &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; /mnt/models:/mnt/models:rw &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;HF_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/mnt/models/huggingface &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;VLLM_USE_TRITON_FLASH_ATTN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--model&lt;/span&gt; Qwen/Qwen2-VL-72B-Instruct &lt;span class="se"&gt;\&lt;/span&gt;
  vllm/vllm-openai-rocm:v0.20.1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dtype&lt;/span&gt; bfloat16 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--gpu-memory-utilization&lt;/span&gt; 0.92 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--max-model-len&lt;/span&gt; 16384 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--max-num-seqs&lt;/span&gt; 64 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--max-num-batched-tokens&lt;/span&gt; 8192 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--enable-chunked-prefill&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--port&lt;/span&gt; 8000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Why Each Flag Matters
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Host integration&lt;/strong&gt; (&lt;code&gt;--network host --ipc=host&lt;/code&gt;): Bypassing Docker's bridge network eliminates overhead, which is critical for benchmarking true API latency. Host IPC is required for efficient shared memory operations between vLLM's internal processes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ROCm passthrough&lt;/strong&gt; (&lt;code&gt;--device=/dev/kfd --device=/dev/dri --group-add video --group-add render&lt;/code&gt;): This is how the container communicates with the CDNA architecture of the MI300X. If your container fails to start with mysterious ROCm errors, the cause is almost always a permissions issue with these device paths or missing group additions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Precision&lt;/strong&gt; (&lt;code&gt;--dtype bfloat16&lt;/code&gt;): BF16 is the optimal datatype for MI300X. It provides the same dynamic range as FP32, preventing the numerical overflow issues that occur with standard FP16 during the massive attention matrix multiplications in 72B+ models. The MI300X Matrix Core technology natively supports BF16 — do not force FP16 on this architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memory management&lt;/strong&gt; (&lt;code&gt;--gpu-memory-utilization 0.92&lt;/code&gt;): This tells vLLM to reserve 92% of the 192GB VRAM. After loading the model weights, the remaining allocation is dedicated entirely to the KV cache block pool, managed by vLLM's PagedAttention system. The engine carved out 32.18 GiB specifically for the KV cache, providing 105,440 tokens of cache capacity.&lt;/p&gt;
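&lt;p&gt;Those two figures are mutually consistent. Using the Qwen2-72B backbone configuration (80 layers, 8 KV heads under GQA, head dimension 128; these values come from the model card, not from the vLLM logs), each cached token costs 320 KiB in BF16, and dividing the 32.18 GiB pool by that per-token cost lands almost exactly on the reported capacity:&lt;/p&gt;

```shell
# KV bytes per token = layers x kv_heads x head_dim x 2 (K and V) x 2 (BF16 bytes)
awk 'BEGIN {
  per_token_kib = 80 * 8 * 128 * 2 * 2 / 1024
  capacity = 32.18 * 1024 * 1024 / per_token_kib
  printf "per-token KV: %dKiB, pool capacity: ~%d tokens\n", per_token_kib, capacity
}'
```

&lt;p&gt;The small gap versus the logged 105,440 likely comes from vLLM allocating the pool in fixed-size blocks rather than individual token slots.&lt;/p&gt;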

&lt;p&gt;&lt;strong&gt;Concurrency limits&lt;/strong&gt; (&lt;code&gt;--max-num-seqs 64&lt;/code&gt;, &lt;code&gt;--max-num-batched-tokens 8192&lt;/code&gt;): These define the batching boundaries to prevent OOM under heavy load. With 64 maximum concurrent sequences and 8192 tokens per batch, the scheduler has enough room to interleave requests without exhausting the KV cache blocks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Chunked prefill&lt;/strong&gt; (&lt;code&gt;--enable-chunked-prefill&lt;/code&gt;): This is non-negotiable for multimodal models. Vision inputs generate massive prompt token counts — a single high-resolution document image can tokenize into thousands of visual tokens. Without chunked prefill, a single massive document would monopolize the entire prefill pipeline, stalling all other requests in the batch. Chunked prefill breaks the initial prompt processing into smaller chunks and interleaves them with decode steps from other in-flight requests.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4yzmrt8eig0ryhz2zkj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4yzmrt8eig0ryhz2zkj.png" alt="VLLM version-0.20.1" width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyfr11onx21bh90qqkbf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyfr11onx21bh90qqkbf.png" alt="ASGI server running on port 8000" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because vLLM exposes an OpenAI-compatible API, this endpoint is a drop-in replacement for existing application logic. LangChain, LlamaIndex, or custom agentic workflows can point directly to &lt;code&gt;localhost:8000/v1&lt;/code&gt; without modifying the integration layer.&lt;/p&gt;
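&lt;p&gt;A quick smoke test from the host (the image file and question are placeholders; the request shape is the standard OpenAI vision-chat format that vLLM accepts):&lt;/p&gt;

```shell
# Base64-encode a sample document and ask the model a question about it
IMG=$(base64 -w0 sample-invoice.png)
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen/Qwen2-VL-72B-Instruct",
    "messages": [{"role": "user", "content": [
      {"type": "image_url", "image_url": {"url": "data:image/png;base64,'"$IMG"'"}},
      {"type": "text", "text": "What is the invoice total?"}
    ]}],
    "max_tokens": 128
  }'
```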




&lt;h2&gt;
  
  
  Monitoring AMD GPUs During Inference
&lt;/h2&gt;

&lt;p&gt;If you come from the NVIDIA ecosystem, your muscle memory will reach for &lt;code&gt;nvidia-smi&lt;/code&gt;. On consumer AMD cards, you might try &lt;code&gt;radeontop&lt;/code&gt;. Neither works for data center CDNA architectures like the MI300X.&lt;/p&gt;

&lt;p&gt;The correct tool is &lt;code&gt;amd-smi&lt;/code&gt; (or &lt;code&gt;rocm-smi&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;watch &lt;span class="nt"&gt;-n&lt;/span&gt; 2 amd-smi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftr3wkfjpw01auetpsvgm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftr3wkfjpw01auetpsvgm.png" alt="Memory Info of MI300X" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjcvyqbvm2w9wymjnrvqv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjcvyqbvm2w9wymjnrvqv.png" alt="Memory Info of MI300X with utilisation bar" width="800" height="620"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Two things to note from this output. First, the VRAM usage stays relatively static during inference because vLLM pre-allocates the entire KV cache block pool at startup based on the &lt;code&gt;0.92&lt;/code&gt; utilization flag. What fluctuates is power draw and GPU utilization, which spike during the compute-heavy prefill phases of multimodal requests. Second, &lt;code&gt;amd-smi&lt;/code&gt; sometimes aggregates memory differently than &lt;code&gt;nvidia-smi&lt;/code&gt;. Trust the vLLM engine logs — specifically the "GPU KV cache usage" percentage reported every 10 seconds — for the most accurate view of your KV cache block utilization.&lt;/p&gt;
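&lt;p&gt;To pull those engine-side numbers without attaching to the container, tail the Docker logs and look for the periodic throughput lines (the container name matches the deployment command above):&lt;/p&gt;

```shell
# The vLLM engine prints "GPU KV cache usage" roughly every 10 seconds
docker logs --tail 100 vllm-qwen2-vl-72b
```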




&lt;h2&gt;
  
  
  Benchmarking the Deployment
&lt;/h2&gt;

&lt;p&gt;To validate this infrastructure for production document processing, I used GuideLLM to run two distinct benchmarking phases against the live endpoint.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase A: Synthetic Stress Test — VRAM Saturation Sweep
&lt;/h3&gt;

&lt;p&gt;This test was designed to push the KV cache to its absolute breaking point using maximum-context synthetic prompts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;guidellm benchmark &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--target&lt;/span&gt; &lt;span class="s2"&gt;"http://localhost:8000/v1"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--model&lt;/span&gt; &lt;span class="s2"&gt;"Qwen/Qwen2-VL-72B-Instruct"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--profile&lt;/span&gt; sweep &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--data&lt;/span&gt; &lt;span class="s2"&gt;"prompt_tokens=8192,output_tokens=1024"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--max-seconds&lt;/span&gt; 300 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--warmup&lt;/span&gt; 10 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--output-dir&lt;/span&gt; ./results-stress &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--outputs&lt;/span&gt; json,html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The sweep profile automatically escalates from synchronous (1 request at a time) through throughput-maximizing batches and then across increasing constant-rate loads. This produces a full performance curve from idle to saturated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flov2g9lwqdguiusgbogp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flov2g9lwqdguiusgbogp.png" alt="Cache Hit Rate Log" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkm7bd9wdxv76zmfxfbmr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkm7bd9wdxv76zmfxfbmr.png" alt="constant-rate strategies, with input tokens fixed at 8,211 and output at 1,024 per request" width="800" height="729"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawjtght8j8txc5a30fnw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawjtght8j8txc5a30fnw.png" alt="latency and throughput statistics" width="800" height="801"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The critical result from Phase A: the synchronous baseline ITL was 39.6ms (median), climbing to 66.8ms at higher concurrency. This proves the MI300X HBM3 memory bandwidth is delivering. Anything under 100ms ITL feels instantaneous to a human reader in a streaming interface.&lt;/p&gt;

&lt;p&gt;At peak load, the KV cache hit 99.5% utilization — and the system survived. This is where chunked prefill earns its keep. Without it, sending a massive batch of new prompts to a system at 99% KV cache capacity would cause an immediate OOM crash. Chunked prefill allows the scheduler to break incoming prefill work into small blocks, filling the remaining gaps without exceeding physical limits.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase B: Enterprise DocVQA Workload
&lt;/h3&gt;

&lt;p&gt;Synthetic data validates the infrastructure. Real data validates the architecture. I used the &lt;code&gt;lmms-lab/DocVQA&lt;/code&gt; dataset, throwing 64 concurrent streams at the GPU to simulate a heavily loaded internal document analysis tool.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;guidellm benchmark &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--target&lt;/span&gt; &lt;span class="s2"&gt;"http://localhost:8000/v1"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--model&lt;/span&gt; &lt;span class="s2"&gt;"Qwen/Qwen2-VL-72B-Instruct"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--profile&lt;/span&gt; concurrent &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--rate&lt;/span&gt; 64 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--data&lt;/span&gt; &lt;span class="s2"&gt;"lmms-lab/DocVQA"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--data-args&lt;/span&gt; &lt;span class="s1"&gt;'{"name": "DocVQA"}'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--max-seconds&lt;/span&gt; 120 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--warmup&lt;/span&gt; 10 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--output-dir&lt;/span&gt; ./results-doc &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--outputs&lt;/span&gt; json,html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh3iimu0wbx4af4v43n6o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh3iimu0wbx4af4v43n6o.png" alt="DocVQA dataset - Parquet files downloading from HuggingFace, ConcurrentProfile resolved at 64 streams" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq157bq9skf91ey5359cr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq157bq9skf91ey5359cr.png" alt="DocVQA concurrent load - KV cache fluctuating between 21% and 94.8% as real document images cycle" width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsp7wgdjyb36dbb13yxgj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsp7wgdjyb36dbb13yxgj.png" alt="median TTFT of 38.5 seconds, median ITL of 1,879ms, total throughput of 2,621 tokens/sec" width="800" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmv355gkps5jyoccr04jm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmv355gkps5jyoccr04jm.png" alt="median input of 4,996 tokens per request, median output of 19 tokens, median image input of 3.86M pixels per request" width="800" height="575"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The DocVQA results tell a different story than the synthetic test — and that's the point. Real multimodal workloads are fundamentally harder than synthetic text. Each document image tokenizes into thousands of visual tokens (median 4,996 input tokens per request, with 3.86 million pixels of image data), which means the prefill phase dominates. The median TTFT of 38.5 seconds at 64 concurrent streams reflects the GPU working through massive vision encoder computations for dozens of simultaneous documents.&lt;/p&gt;

&lt;p&gt;The system completed 46 requests in 110 seconds with 64 concurrent streams — no errors, no OOMs, no crashes. The server throughput of 2,621 total tokens per second demonstrates that even under extreme multimodal concurrency, the architecture remains stable.&lt;/p&gt;
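&lt;p&gt;Folding the hourly price into that throughput figure gives a rough unit cost. This is a back-of-envelope number that assumes sustained saturation, which real traffic rarely achieves:&lt;/p&gt;

```shell
# $1.99/hour at a sustained 2,621 tokens/sec
awk 'BEGIN {
  tokens_per_hour = 2621 * 3600
  printf "~$%.2f per million tokens\n", 1.99 / (tokens_per_hour / 1000000)
}'
# -> ~$0.21 per million tokens
```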




&lt;h2&gt;
  
  
  Understanding the Latency Pipeline
&lt;/h2&gt;

&lt;p&gt;To build reliable systems on top of these numbers, you need to understand what happens between the moment a user submits a request and the moment they see the complete response. The inference pipeline has two fundamentally different computational phases, and each one is bottlenecked by a different hardware resource.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2fnucf2q21eo3vyl908.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2fnucf2q21eo3vyl908.png" alt="prefill-decode latency pipeline" width="800" height="578"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  TTFT: Time To First Token (Compute-Bound)
&lt;/h3&gt;

&lt;p&gt;TTFT measures the time between request submission and the first generated token appearing. For multimodal models, TTFT is dominated by the prefill phase — the GPU must process the base64 image through the vision encoder (ViT), project the visual embeddings into the LLM's token space, concatenate them with the text prompt tokens, and then perform the full self-attention computation over the entire combined sequence to populate the KV cache.&lt;/p&gt;

&lt;p&gt;This is a compute-bound operation. The GPU cores are doing dense matrix multiplications across thousands of visual tokens. Under the 64-stream DocVQA load, TTFT was 38.5 seconds (median) — each request is competing for compute time with 63 other in-flight prefill and decode operations.&lt;/p&gt;

&lt;p&gt;In production, if TTFT is too high for your SLA, the levers are: reduce &lt;code&gt;--max-num-seqs&lt;/code&gt; to limit concurrency (trading throughput for latency), tune &lt;code&gt;--max-num-batched-tokens&lt;/code&gt; to prioritize individual request latency, or scale horizontally by adding more MI300X nodes behind a load balancer.&lt;/p&gt;

&lt;h3&gt;
  
  
  ITL: Inter-Token Latency (Memory-Bandwidth-Bound)
&lt;/h3&gt;

&lt;p&gt;Once prefill completes and the KV cache is populated, the model enters the decode phase. It generates one token at a time in an autoregressive loop. Each token generation requires reading the entire 144GB of model weights from HBM3 VRAM to the compute units.&lt;/p&gt;

&lt;p&gt;This is a memory-bandwidth-bound operation. The GPU cores are fast enough — they are waiting on data delivery from memory. This is why the MI300X's 5.3 TB/s HBM3 bandwidth matters so much. My synthetic stress test showed a synchronous ITL baseline of 39.6ms, which aligns closely with the theoretical minimum: 144GB of weights divided by 5.3 TB/s bandwidth equals roughly 27ms per token, with the remainder accounted for by attention computation over the KV cache, kernel launch overhead, and scheduling latency.&lt;/p&gt;
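&lt;p&gt;That floor is just the weight-streaming time per decode step, which you can reproduce directly:&lt;/p&gt;

```shell
# 144GB of weights read per token / 5.3TB/s of HBM3 bandwidth
awk 'BEGIN { printf "theoretical ITL floor: %.1fms per token\n", 144 / 5300 * 1000 }'
# -> theoretical ITL floor: 27.2ms per token
```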

&lt;p&gt;At higher concurrency, ITL rises because the memory bus is shared across all in-flight decode operations. Under the synthetic sweep, ITL scaled gracefully from 39.6ms (synchronous) to 66.8ms (highest constant rate) — a 1.7x increase despite a 6x increase in concurrency. Under the DocVQA workload at 64 concurrent streams, the median ITL was 1,879ms, reflecting the extreme memory pressure of simultaneously maintaining KV caches for 64 high-resolution document contexts.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Chunked Prefill Prevents Catastrophic Failure
&lt;/h3&gt;

&lt;p&gt;During Phase A, the KV cache hit 99.5% utilization. Without chunked prefill, a new incoming request at this point would attempt to allocate its full prefill budget in one shot — and fail with an OOM crash, potentially taking down the entire serving process.&lt;/p&gt;

&lt;p&gt;Chunked prefill changes this behavior. Instead of processing the entire prompt in a single monolithic computation, the scheduler breaks the prefill into smaller chunks (bounded by &lt;code&gt;--max-num-batched-tokens&lt;/code&gt;). Between chunks, it interleaves decode steps from other in-flight requests. This means the system can gradually allocate KV cache blocks as they become available from completed requests, rather than demanding the full allocation upfront. The result is graceful degradation under pressure rather than catastrophic failure.&lt;/p&gt;




&lt;h2&gt;
  
  
  Practical Lessons Learned
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;VRAM reporting nuances.&lt;/strong&gt; The &lt;code&gt;amd-smi&lt;/code&gt; tool and the VRAM bar visualization sometimes report different figures than what vLLM's internal engine logs show. This is because &lt;code&gt;amd-smi&lt;/code&gt; reports total GPU memory allocation (including driver overhead, CUDA graphs, and pre-allocated buffers), while vLLM reports specifically on KV cache block utilization. For production monitoring, instrument against the vLLM &lt;code&gt;/metrics&lt;/code&gt; Prometheus endpoint, which exposes &lt;code&gt;vllm:gpu_cache_usage_perc&lt;/code&gt; directly.&lt;/p&gt;
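&lt;p&gt;A minimal way to watch the figure vLLM itself reports, assuming the server from this post is listening on port 8000:&lt;/p&gt;

```shell
# Poll vLLM's Prometheus endpoint and filter for KV cache utilization.
# vllm:gpu_cache_usage_perc is a gauge in [0, 1]; 1.0 means a full cache.
curl -s http://localhost:8000/metrics | grep '^vllm:gpu_cache_usage_perc'
```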

&lt;p&gt;&lt;strong&gt;The BF16 imperative.&lt;/strong&gt; Do not attempt FP16 on MI300X for models of this size. BF16 is natively supported by the Matrix Core technology, maintains FP32-equivalent dynamic range, and avoids the precision loss that causes output degradation in 72B+ parameter models. This is not a preference — it is a correctness requirement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ROCm is production-ready.&lt;/strong&gt; The ROCm 7.2 + vLLM v0.20.1 stack remained stable through sustained stress testing, with zero crashes. For teams evaluating AMD as an alternative to NVIDIA for inference workloads, the ecosystem has matured significantly. The primary friction point is the initial Docker configuration (device passthrough and group permissions), not runtime stability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SHM sizing.&lt;/strong&gt; If you encounter cross-process communication errors in vLLM, pass &lt;code&gt;--shm-size 8g&lt;/code&gt; to your Docker run command. This is not always required but resolves intermittent failures in certain multi-worker configurations.&lt;/p&gt;




&lt;h2&gt;
  
  
  Reproduce This
&lt;/h2&gt;

&lt;p&gt;The exact commands used in this post:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 1. Mount NVMe and set HF cache&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;wipefs &lt;span class="nt"&gt;-af&lt;/span&gt; /dev/vdc1 &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;mkfs.xfs &lt;span class="nt"&gt;-f&lt;/span&gt; /dev/vdc1
&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /mnt/models &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;mount /dev/vdc1 /mnt/models
&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; &lt;span class="nv"&gt;$USER&lt;/span&gt;:&lt;span class="nv"&gt;$USER&lt;/span&gt; /mnt/models
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /mnt/models/huggingface
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;HF_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/mnt/models/huggingface
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 2. Launch vLLM (ROCm, v0.20.1)&lt;/span&gt;
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; vllm-qwen2-vl-72b &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--network&lt;/span&gt; host &lt;span class="nt"&gt;--ipc&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;host &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--device&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/dev/kfd &lt;span class="nt"&gt;--device&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/dev/dri &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--group-add&lt;/span&gt; video &lt;span class="nt"&gt;--group-add&lt;/span&gt; render &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; /mnt/models:/mnt/models:rw &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;HF_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/mnt/models/huggingface &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;VLLM_USE_TRITON_FLASH_ATTN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="se"&gt;\&lt;/span&gt;
  vllm/vllm-openai-rocm:v0.20.1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--model&lt;/span&gt; Qwen/Qwen2-VL-72B-Instruct &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dtype&lt;/span&gt; bfloat16 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--gpu-memory-utilization&lt;/span&gt; 0.92 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--max-model-len&lt;/span&gt; 16384 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--max-num-seqs&lt;/span&gt; 64 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--max-num-batched-tokens&lt;/span&gt; 8192 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--enable-chunked-prefill&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--port&lt;/span&gt; 8000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
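&lt;p&gt;Before running the benchmarks, it is worth a quick smoke test against the OpenAI-compatible endpoint. This request is my own addition rather than part of the benchmark harness; the model name must match &lt;code&gt;--model&lt;/code&gt; exactly:&lt;/p&gt;

```shell
# One chat completion through the server launched above (localhost:8000).
payload='{
  "model": "Qwen/Qwen2-VL-72B-Instruct",
  "messages": [{"role": "user", "content": "Reply with the single word: ready"}],
  "max_tokens": 8
}'
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$payload"
```

&lt;p&gt;A JSON response with a &lt;code&gt;choices&lt;/code&gt; array confirms the model loaded and the API is serving.&lt;/p&gt;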





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 3. Stress test (synthetic sweep)&lt;/span&gt;
guidellm benchmark &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--target&lt;/span&gt; &lt;span class="s2"&gt;"http://localhost:8000/v1"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--model&lt;/span&gt; &lt;span class="s2"&gt;"Qwen/Qwen2-VL-72B-Instruct"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--profile&lt;/span&gt; sweep &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--data&lt;/span&gt; &lt;span class="s2"&gt;"prompt_tokens=8192,output_tokens=1024"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--max-seconds&lt;/span&gt; 300 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--warmup&lt;/span&gt; 10 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--output-dir&lt;/span&gt; ./results-stress &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--outputs&lt;/span&gt; json,html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 4. DocVQA benchmark (64 concurrent streams)&lt;/span&gt;
guidellm benchmark &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--target&lt;/span&gt; &lt;span class="s2"&gt;"http://localhost:8000/v1"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--model&lt;/span&gt; &lt;span class="s2"&gt;"Qwen/Qwen2-VL-72B-Instruct"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--profile&lt;/span&gt; concurrent &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--rate&lt;/span&gt; 64 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--data&lt;/span&gt; &lt;span class="s2"&gt;"lmms-lab/DocVQA"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--data-args&lt;/span&gt; &lt;span class="s1"&gt;'{"name": "DocVQA"}'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--max-seconds&lt;/span&gt; 120 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--warmup&lt;/span&gt; 10 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--output-dir&lt;/span&gt; ./results-doc &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--outputs&lt;/span&gt; json,html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;The AMD Instinct MI300X fundamentally alters how we architect enterprise AI infrastructure. Loading a 72-billion parameter multimodal model with zero quantization, dedicating 32GB to the KV cache, and serving 64 concurrent document analysis streams on a single node at $1.99/hour - this is a capability that did not exist at this price point 12 months ago.&lt;/p&gt;

&lt;p&gt;For ML engineering teams building automated document processing, visual data extraction, or complex agentic systems, the VRAM constraints of 80GB hardware have forced painful compromises between model quality and deployment feasibility. The MI300X, paired with ROCm 7.2 and vLLM's advanced scheduling (chunked prefill, PagedAttention), provides a stable, powerful foundation for production-grade unquantized inference - at a fraction of the cost of equivalent NVIDIA configurations.&lt;/p&gt;

&lt;p&gt;And AMD is continuing to push the memory boundary further. The Instinct MI325X extends capacity to 256GB HBM3E, targeting massive MoE and ultra-long-context inference workloads. Beyond that, the Instinct MI350X and MI355X move into next-generation CDNA4 territory with 288GB HBM3E, positioning AMD aggressively for frontier-scale enterprise AI.&lt;/p&gt;

&lt;p&gt;What makes this trajectory especially significant is not just raw capacity - it is architectural simplification. Today, deploying a 72B model unquantized on NVIDIA means splitting weights across multiple GPUs, engineering around KV cache exhaustion, and accepting the latency overhead of cross-device communication. With 192GB-class accelerators, those constraints disappear for this model class. With 256–288GB, they disappear for even larger architectures - MoE models, ultra-long-context workloads, and multi-modal pipelines that would currently require four or more 80GB cards.&lt;/p&gt;

&lt;p&gt;For enterprise AI engineering, the shift from 80GB-class to 192–288GB-class accelerators is not incremental. It fundamentally changes what becomes practical in production: fewer nodes, simpler serving topologies, lower operational complexity, and - critically - no quantization tax on model quality.&lt;/p&gt;

</description>
      <category>vllm</category>
      <category>rocm</category>
      <category>mi300x</category>
      <category>genai</category>
    </item>
  </channel>
</rss>
