Diffusion Large Language Models (dLLMs) have long held the promise of ultra-fast text generation through parallel decoding. Yet, until now, this promise has remained unfulfilled. In practice, open-source dLLMs have consistently lagged behind their autoregressive (AR) counterparts like LLaMA3 in inference speed, hindering their real-world adoption.
Today, we are excited to introduce Discrete Diffusion Forcing (D2F), a simple yet powerful strategy that shatters this speed limit. D2F creates a novel AR-diffusion hybrid model that achieves up to a 2.5x speedup over leading AR models and a staggering 50x acceleration over vanilla dLLMs, all while maintaining comparable output quality.

The Bottleneck: Why Were Diffusion LLMs Slow?
The potential of dLLMs lies in parallel decoding, but two fundamental issues have historically crippled their speed:
- KV Cache Incompatibility: The problem starts with bidirectional attention, where every token can see every other token. This holistic view is powerful for quality but prevents the use of the Key-Value (KV) cache, a critical optimization in AR models that saves massive amounts of redundant computation (the sketch after this list makes the issue concrete).
- Strict Inter-Block Dependency: Previous attempts to enable caching by generating text in blocks still fell short. They required each block to be fully denoised before the next could start, ruling out true parallel processing across blocks.
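To see why bidirectional attention breaks caching, consider the minimal PyTorch sketch below (our illustration, not the authors' code): with a causal mask, appending a token leaves earlier positions' outputs untouched, so their keys and values can be safely cached; with full bidirectional attention, every earlier output changes and any cache would be stale at each denoising step.

```python
# Minimal sketch (illustrative only): why causal attention permits KV caching
# while bidirectional attention does not.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d = 16
x_prefix = torch.randn(1, 5, d)                              # 5 already-processed tokens
x_full = torch.cat([x_prefix, torch.randn(1, 1, d)], dim=1)  # ... plus 1 new token

def self_attn(x, causal):
    # single-head attention with identity projections to keep the sketch short
    q, k, v = x, x, x
    return F.scaled_dot_product_attention(q, k, v, is_causal=causal)

for causal in (True, False):
    out_prefix = self_attn(x_prefix, causal)
    out_full = self_attn(x_full, causal)
    reusable = torch.allclose(out_prefix, out_full[:, :5], atol=1e-5)
    print(f"causal={causal}: earlier outputs unchanged (cache valid) -> {reusable}")
# causal=True  -> True  (KV cache works)
# causal=False -> False (bidirectional attention invalidates the cache)
```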
The D2F Solution: A Hybrid of Speed and Power
D2F overcomes these bottlenecks by rethinking how dLLMs are trained and how they generate text. It's a two-part solution.
Architecture: Block-wise AR meets Parallel Diffusion
At its core, D2F reframes generation as a block-autoregressive process. Text is produced in sequential blocks, but with a crucial architectural twist in the attention mechanism:
- Attention within a block remains bidirectional to capture rich local context.
- Attention between blocks is made causal, meaning a block only attends to previously completed blocks.
This smart hybrid design makes D2F fully compatible with the standard KV cache, slashing a huge source of computational waste.
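As a rough illustration of this attention pattern (a sketch under our own assumptions about block size, not the released implementation), the mask below grants full visibility inside each block and block-level causal visibility across blocks:

```python
# Illustrative D2F-style attention mask: bidirectional within a block,
# causal across blocks (True = attention allowed).
import torch

def block_hybrid_mask(seq_len: int, block_size: int) -> torch.Tensor:
    block_id = torch.arange(seq_len) // block_size
    # position i may attend to position j iff j's block does not come after i's block
    return block_id.unsqueeze(1) >= block_id.unsqueeze(0)

print(block_hybrid_mask(seq_len=8, block_size=4).int())
# Rows 0-3 (block 0) see only block 0; rows 4-7 (block 1) see blocks 0 and 1,
# including every position of their own block bidirectionally.
```

Because a finished block never attends to anything after itself, its keys and values can be cached exactly as in an AR model.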
Method: Asymmetric Distillation and Pipelined Inference
With this new architecture, D2F uses an elegant strategy for training and inference:
Training with Asymmetric Distillation: Instead of costly training from scratch, we use an efficient distillation process. A powerful, pre-trained dLLM (the "teacher") distills its knowledge into our D2F model (the "student"). The teacher uses its full bidirectional view to make predictions, while the student learns to match them using only its limited, block-wise causal view. This process transfers the teacher's capabilities into our faster, cache-friendly architecture in just 12 hours on 8 A100 GPUs.
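Schematically, the objective can be read as a masked-token KL divergence between teacher and student predictions; the snippet below is our own simplified sketch of that reading, where `teacher`, `student`, `noisy_ids`, and `mask_positions` are hypothetical placeholders rather than the paper's actual code.

```python
# Simplified sketch of an asymmetric distillation objective: the teacher scores
# masked positions with full bidirectional context, the student with only its
# block-wise causal context, and the student is trained to match the teacher.
import torch
import torch.nn.functional as F

def distill_loss(teacher_logits: torch.Tensor,
                 student_logits: torch.Tensor,
                 mask_positions: torch.Tensor) -> torch.Tensor:
    """KL(teacher || student) averaged over the masked (still-noised) positions."""
    t_logp = F.log_softmax(teacher_logits, dim=-1)   # [B, L, V], bidirectional view
    s_logp = F.log_softmax(student_logits, dim=-1)   # [B, L, V], block-causal view
    kl = F.kl_div(s_logp, t_logp, log_target=True, reduction="none").sum(-1)  # [B, L]
    return (kl * mask_positions).sum() / mask_positions.sum().clamp(min=1)

# Usage sketch (hypothetical modules; only the student receives gradients):
# with torch.no_grad():
#     teacher_logits = teacher(noisy_ids, attn_mask=bidirectional_mask)
# student_logits = student(noisy_ids, attn_mask=block_hybrid_mask(...))
# loss = distill_loss(teacher_logits, student_logits, mask_positions.float())
```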

Inference with a Pipelined Parallel Decoder: This is where D2F's speed truly shines. Because the model is trained to predict future blocks from partially completed predecessors, it doesn't need to wait. During inference, it runs a dynamic pipeline:
- A new block is added to the pipeline as soon as its predecessor is partially complete, rather than waiting for it to finish.
- Blocks begin in a conservative "semi-activated" state and transition to an aggressive "fully-activated" state once they have enough context from the block before them.
This creates a highly efficient, asynchronous workflow where multiple blocks are refined in parallel, maximizing GPU utilization and dramatically boosting throughput.
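The toy simulation below (model-free, with made-up block sizes, thresholds, and per-step decoding rates rather than the paper's exact scheduling rules) illustrates the idea: blocks enter the pipeline before their predecessor finishes, start semi-activated, and switch to fully-activated once enough preceding context is available.

```python
# Toy, model-free simulation of a pipelined block decoder with
# semi-activated ("semi") and fully-activated ("full") states.
from dataclasses import dataclass

@dataclass
class Block:
    size: int = 32
    decoded: int = 0              # tokens already denoised in this block
    state: str = "semi"           # "semi" = conservative, "full" = aggressive

    @property
    def completion(self) -> float:
        return self.decoded / self.size

def pipeline_decode(num_blocks: int, add_thresh: float = 0.5, act_thresh: float = 0.75):
    blocks = [Block()]
    step = 0
    while any(b.completion < 1.0 for b in blocks) or len(blocks) < num_blocks:
        step += 1
        for i, b in enumerate(blocks):
            if b.completion >= 1.0:
                continue
            rate = 8 if b.state == "full" else 2   # aggressive blocks decode faster
            b.decoded = min(b.size, b.decoded + rate)
            # upgrade once the preceding block provides enough context
            if b.state == "semi" and (i == 0 or blocks[i - 1].completion >= act_thresh):
                b.state = "full"
        # admit the next block as soon as the latest one is partially complete
        if len(blocks) < num_blocks and blocks[-1].completion >= add_thresh:
            blocks.append(Block())
        print(f"step {step}: " + " ".join(f"[{b.state} {b.completion:.0%}]" for b in blocks))

pipeline_decode(num_blocks=4)
```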

Putting D2F to the Test: Performance Highlights
D2F sets a new standard for dLLM efficiency across multiple benchmarks. We applied our D2F method to two popular open-source dLLMs, LLaDA and Dream, and the results speak for themselves.
Detailed Benchmark Results: LLaDA-Instruct-8B
Our D2F method dramatically boosts speed for LLaDA, achieving up to a 52.9x increase in throughput on MBPP and a 29.1x increase on HumanEval compared to the baseline, while significantly outperforming previous SOTA acceleration methods.
GSM8K-4-shot
Method | TPS (tokens/s) ↑ | Latency (s) ↓ | Gen. Length | Score ↑ |
---|---|---|---|---|
LLaDA-Instruct | 7.2 (1.0x) | 32.3 (1.0x) | 231 | 77.4 |
dLLM-Cache | 20.1 (2.8x) | 11.5 (2.8x) | 231 | 77.5 |
Fast-dLLM (Prefix-Cache) | 33.3 (4.6x) | 7.0 (4.6x) | 232 | 77.8 |
Fast-dLLM (Dual-Cache) | 35.2 (4.9x) | 6.6 (4.9x) | 232 | 78.9 |
D2F-LLaDA | 52.5 (7.3x) | 2.8 (11.5x) | 144 | 77.3 |
MBPP-3-shot
Method | TPS (tokens/s) ↑ | Latency (s) ↓ | Gen. Length | Score ↑ |
---|---|---|---|---|
LLaDA-Instruct | 0.9 (1.0x) | 71.4 (1.0x) | 65 | 39.0 |
dLLM-Cache | 2.3 (2.6x) | 28.3 (2.5x) | 66 | 37.0 |
Fast-dLLM (Prefix-Cache) | 13.0 (14.4x) | 4.9 (14.6x) | 64 | 37.6 |
Fast-dLLM (Dual-Cache) | 15.3 (17.0x) | 3.8 (18.8x) | 58 | 36.4 |
D2F-LLaDA | 47.6 (52.9x) | 1.4 (51.0x) | 68 | 38.0 |
Detailed Benchmark Results: Dream-Base-7B
Applying D2F to Dream-Base-7B results in substantial gains, including a 9.6x speedup on GSM8K-CoT and a 10.1x speedup on MBPP. Notably, the performance score often improves alongside the speed.
GSM8K-CoT-8-shot
Method | TPS (tokens/s) ↑ | Latency (s) ↓ | Gen. Length | Score ↑ |
---|---|---|---|---|
Dream-Base | 9.5 (1.0x) | 26.8 (1.0x) | 255 | 75.0 |
dLLM-Cache | 26.0 (2.7x) | 9.8 (2.7x) | 255 | 72.0 |
Fast-dLLM (Prefix-Cache) | 50.3 (5.3x) | 5.1 (5.3x) | 255 | 76.6 |
Fast-dLLM (Dual-Cache) | 49.8 (5.2x) | 5.1 (5.3x) | 255 | 75.0 |
D2F-Dream | 91.2 (9.6x) | 2.8 (9.6x) | 256 | 77.6 |
MBPP-3-shot
Method | TPS (tokens/s) ↑ | Latency (s) ↓ | Gen. Length | Score ↑ |
---|---|---|---|---|
Dream-Base | 10.4 (1.0x) | 24.6 (1.0x) | 256 | 56.2 |
dLLM-Cache | 25.5 (2.5x) | 10.0 (2.5x) | 256 | 52.6 |
Fast-dLLM (Prefix-Cache) | 71.6 (6.9x) | 3.6 (6.8x) | 256 | 56.4 |
Fast-dLLM (Dual-Cache) | 73.2 (7.0x) | 3.5 (7.0x) | 256 | 51.0 |
D2F-Dream | 105 (10.1x) | 2.3 (10.7x) | 240 | 55.2 |
Demo Results of the vLLM Implementation on HumanEval
We tested an initial version of our vLLM-based implementation. While it delivers a significant speed improvement, we observed an unexpected drop in inference scores.
Model & Thresholds (Add, Act, Conf) | TPS (tokens/s) ↑ | Latency (s) ↓ | Gen. Length | Score ↑ |
---|---|---|---|---|
Dream-Base (0.1, 0.95, 0.90) | 20.2 (1.0x) | 12.6 (1.0x) | 255 | 54.3 |
D2F-Dream (0.1, 0.95, 0.90) | 73.2 (3.6x) | 3.1 (4.1x) | 227 | 54.3 |
D2F-Dream-vLLM (0.1, 0.95, 0.90) | 127.2 (6.3x) | 1.87 (6.74x) | 238 | 34.1 |
D2F-Dream-vLLM (0.05, 0.90, 0.95) | 131.7 (6.5x) | 1.78 (7.08x) | 238 | 40.2 |
Implementation Notes
The current vLLM-based prototype does not yet include:
- Specialized fused or hand-optimized operators of the kind available for established decoding paradigms
- CUDA Graph capture
- Multi-GPU distributed inference
For convenience, the current prototype relies on FlexAttention to obtain efficient inference. Although GPU memory is saturated, overall SM occupancy remains below optimal, indicating ample headroom for future engine-level optimization (e.g., kernel fusion, graph capture, improved scheduling, and compute/communication overlap). Even in this preliminary state, the implementation already delivers substantial performance gains.
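As a point of reference, the block-causal pattern described earlier maps naturally onto PyTorch's FlexAttention API (torch >= 2.5); the snippet below is a minimal sketch of that mapping, with placeholder block size and tensor shapes, and the kernels in the actual D2F engine may differ.

```python
# Minimal sketch: expressing the block-causal mask with PyTorch FlexAttention.
# Requires a CUDA device; block size and tensor shapes are placeholders.
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

BLOCK = 32

def block_causal(b, h, q_idx, kv_idx):
    # bidirectional inside a block, causal across blocks
    return (q_idx // BLOCK) >= (kv_idx // BLOCK)

B, H, L, D = 1, 8, 256, 64
q = torch.randn(B, H, L, D, device="cuda")
k = torch.randn(B, H, L, D, device="cuda")
v = torch.randn(B, H, L, D, device="cuda")

mask = create_block_mask(block_causal, B=None, H=None, Q_LEN=L, KV_LEN=L)
out = flex_attention(q, k, v, block_mask=mask)
print(out.shape)  # torch.Size([1, 8, 256, 64])
```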
Superior Efficiency Frontier
Beyond raw numbers, D2F operates on a significantly better throughput-performance curve. The graph below shows that for any given performance level, D2F is drastically faster. For instance, on GSM8K, our model achieves 3.1x higher throughput than LLaMA3 while also attaining a better score.

Conclusion
Discrete Diffusion Forcing (D2F) represents a fundamental leap forward for diffusion-based language models. By creating a smart hybrid of autoregressive and diffusion principles, it overcomes the critical inference bottlenecks that have long held dLLMs back. For the first time, an open-source dLLM can not only match but significantly exceed the speed of highly optimized autoregressive models.
This breakthrough establishes dLLMs as a practical, high-performance alternative for a wide range of text generation tasks, paving the way for new applications and further research into efficient, parallel-first language modeling.
Citation
If you find our work useful, please consider citing our paper:
@misc{wang2025diffusionLLMsD2F,
  title         = {Diffusion LLMs Can Do Faster-Than-AR Inference via Discrete Diffusion Forcing},
  author        = {Wang, Xu and Xu, Chenkai and Jin, Yijie and Jin, Jiachun and Zhang, Hao and Deng, Zhijie},
  year          = {2025},
  month         = {aug},
  eprint        = {2508.09192},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG},
  doi           = {10.48550/arXiv.2508.09192},
  url           = {https://arxiv.org/abs/2508.09192},
  note          = {arXiv:2508.09192}
}