D2F: Diffusion LLMs Can Do Faster-Than-AR Inference via Discrete Diffusion Forcing

Xu Wang*, Chenkai Xu*, Yijie Jin, Jiachun Jin, Hao Zhang, Zhijie Deng†
Shanghai Jiao Tong University, University of California San Diego
* Equal Contribution    † Corresponding Author
Real-time demo comparing our D2F-dLLM (left) and a baseline AR model (right). D2F generates text in parallel blocks, achieving significantly higher throughput.

Diffusion Large Language Models (dLLMs) have long held the promise of ultra-fast text generation through parallel decoding. Yet, until now, that promise has gone unfulfilled: in practice, open-source dLLMs have consistently lagged behind autoregressive (AR) counterparts such as LLaMA3 in inference speed, hindering their real-world adoption.

Today, we are excited to introduce Discrete Diffusion Forcing (D2F), a simple yet powerful strategy that shatters this speed limit. D2F creates a novel AR-diffusion hybrid model that achieves up to a 2.5x speedup over leading AR models and a staggering 50x acceleration over vanilla dLLMs, all while maintaining comparable output quality.

Bar chart comparing inference throughput of D2F with other DLLMs and AR models.
Figure 1. D2F dLLMs surpass similarly sized autoregressive (AR) models in inference speed for the first time, achieving up to 2.5x higher throughput than LLaMA3 and Qwen2.5. The comparison includes vanilla dLLMs, previous acceleration methods, and AR baselines; speed tests were conducted on NVIDIA A100-PCIe-40GB GPUs.

The Bottleneck: Why Were Diffusion LLMs Slow?

The potential of dLLMs lies in parallel decoding, but two fundamental issues have historically crippled their speed:

  • KV-cache incompatibility: vanilla dLLMs rely on full bidirectional attention, so the key-value states of earlier tokens cannot simply be cached and reused, and computation is repeated at every denoising step.
  • Strict inter-block dependencies: each block of text can only be decoded after the preceding one is fully generated, leaving no room for cross-block parallelism.

The D2F Solution: A Hybrid of Speed and Power

D2F overcomes these bottlenecks by rethinking how dLLMs are trained and how they generate text. It's a two-part solution.

Architecture: Block-wise AR meets Parallel Diffusion

At its core, D2F reframes generation as a block-autoregressive process. Text is produced in sequential blocks, but with a crucial architectural twist in the attention mechanism:

  • Within each block, tokens attend to one another bidirectionally, preserving the parallel, iterative refinement that makes diffusion decoding attractive.
  • Across blocks, attention is causal: a block attends only to the prompt and to the blocks before it, never to future ones.

This smart hybrid design makes D2F fully compatible with the standard KV cache, slashing a huge source of computational waste.
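To make this concrete, here is a minimal PyTorch sketch of the kind of attention mask such a design implies; the function name and block size are illustrative and not taken from the released D2F code:

```python
import torch

def block_wise_causal_mask(seq_len: int, block_size: int) -> torch.Tensor:
    """Boolean attention mask: True where attention is allowed.

    Tokens attend bidirectionally to every position inside their own block
    and causally (one-way) to all positions in earlier blocks. Because a
    block never looks at later blocks, its key/value states can be cached
    and reused exactly as in an AR model.
    """
    block_id = torch.arange(seq_len) // block_size   # block index of each position
    q_block = block_id.unsqueeze(1)                  # (seq_len, 1): query's block
    kv_block = block_id.unsqueeze(0)                 # (1, seq_len): key/value's block
    return kv_block <= q_block                       # own block + all earlier blocks

# Example: 8 positions, blocks of 4 -> positions 0-3 see 0-3; positions 4-7 see 0-7.
print(block_wise_causal_mask(seq_len=8, block_size=4).int())
```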

Method: Asymmetric Distillation and Pipelined Inference

With this new architecture, D2F uses an elegant strategy for training and inference:

Training with Asymmetric Distillation: Instead of costly training from scratch, we use an efficient distillation process. A powerful, pre-trained dLLM (the "teacher") distills its knowledge into our D2F model (the "student"). The teacher uses its full bidirectional view to make predictions, while the student learns to match them using only its limited, block-wise causal view. This process transfers the teacher's capabilities into our faster, cache-friendly architecture in just 12 hours on 8 A100 GPUs.
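For intuition, the snippet below sketches what one distillation step might look like, assuming both models expose per-token logits: the teacher scores the noised sequence with its full bidirectional attention, while the student sees the same inputs only through the block-wise causal mask and is trained to match the teacher's distributions on the masked positions. All function and argument names are illustrative, not the actual D2F training code:

```python
import torch
import torch.nn.functional as F

def asymmetric_distillation_loss(student, teacher, noisy_ids, masked, attn_mask):
    """One distillation step, sketched under simplifying assumptions.

    - `teacher(noisy_ids)` scores the partially-masked sequence with its
      native full bidirectional attention.
    - `student(noisy_ids, attn_mask=attn_mask)` sees the same inputs but only
      through the block-wise causal mask (so its KV cache stays valid).
    - The student matches the teacher's predictive distribution on the
      masked positions via a KL divergence.
    """
    with torch.no_grad():
        teacher_logits = teacher(noisy_ids)                        # (B, L, V)
    student_logits = student(noisy_ids, attn_mask=attn_mask)       # (B, L, V)

    t_logp = F.log_softmax(teacher_logits[masked], dim=-1)         # teacher log-probs at masked slots
    s_logp = F.log_softmax(student_logits[masked], dim=-1)         # student log-probs at masked slots
    return F.kl_div(s_logp, t_logp, log_target=True, reduction="batchmean")

# Tiny smoke test with stand-in "models" (random logits), just to show the shapes.
V = 16
toy_teacher = lambda ids: torch.randn(*ids.shape, V)
toy_student = lambda ids, attn_mask=None: torch.randn(*ids.shape, V)
ids = torch.randint(0, V, (2, 32))
masked = torch.rand(2, 32) < 0.5
print(asymmetric_distillation_loss(toy_student, toy_teacher, ids, masked, attn_mask=None))
```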

Diagram showing the asymmetric distillation process for D2F.
Figure 2. Overview of our training method, Discrete Diffusion Forcing (D2F). A D2F model with a block-wise causal attention mask (the student) is trained to mimic the predictions of a pre-trained bidirectional dLLM (the teacher). This enables KV caching while learning to perform parallel decoding across blocks.

Inference with a Pipelined Parallel Decoder: This is where D2F's speed truly shines. Because the model is trained to predict future blocks from partially incomplete ones, it doesn't need to wait. During inference, it runs a dynamic pipeline:

  • A new block is appended to the pipeline as soon as its predecessor is partially complete.
  • Each block starts in a conservative 'semi-activated' state and is promoted to an aggressive 'fully-activated' state as its predecessor fills in.
  • All active blocks are refined in parallel, reusing the KV cache of already completed blocks.

Figure 3. A slow-motion demonstration of the parallel decoding process within a single block of D2F. Watch as multiple tokens within the block are refined simultaneously, showcasing the efficiency of our approach.

This creates a highly efficient, asynchronous workflow where multiple blocks are refined in parallel, maximizing GPU utilization and dramatically boosting throughput.

Visualization of the pipelined parallel decoding workflow.
Figure 4. Visualization of our pipelined parallel decoding algorithm. A dynamic pipeline of blocks is decoded in parallel. New blocks are added once their predecessor is partially complete, and transition from a conservative 'semi-activated' state to an aggressive 'fully-activated' state, maximizing throughput.
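The toy simulation below illustrates this scheduling logic, using the three thresholds reported in the vLLM table further down (Add, Act, Conf) under our own interpretation: a new block is appended once its predecessor has decoded a small fraction of its tokens, a block is fully activated once its predecessor is nearly complete, and token commits are gated by a confidence threshold. It is a sketch of the idea with a stand-in model, not the released implementation:

```python
import random
from dataclasses import dataclass, field

MASK = "<mask>"

@dataclass
class Block:
    tokens: list = field(default_factory=lambda: [MASK] * 8)   # toy block of 8 positions
    fully_activated: bool = False
    def fraction_decoded(self) -> float:
        return 1.0 - self.tokens.count(MASK) / len(self.tokens)
    def finished(self) -> bool:
        return MASK not in self.tokens

def fake_model_step(block):
    """Stand-in for one model forward pass over a block: returns a
    (confidence, token) guess for every still-masked position."""
    return {i: (random.random(), f"tok{i}")
            for i, t in enumerate(block.tokens) if t == MASK}

def pipelined_decode(num_blocks=4, add_thresh=0.1, act_thresh=0.95, conf_thresh=0.90):
    blocks = [Block(fully_activated=True)]            # first block depends only on the prompt
    while len(blocks) < num_blocks or not all(b.finished() for b in blocks):
        # One "forward pass" refines every unfinished block in parallel.
        for blk in blocks:
            if blk.finished():
                continue                              # finished blocks only supply cached KV
            preds = fake_model_step(blk)
            if blk.fully_activated:
                # Aggressive: commit every token whose confidence clears the threshold.
                chosen = [i for i, (c, _) in preds.items() if c >= conf_thresh]
            else:
                # Conservative (semi-activated): commit only the single most confident token.
                chosen = [max(preds, key=lambda i: preds[i][0])]
            for i in chosen:
                blk.tokens[i] = preds[i][1]
        # Pipeline control: append a new block once the last one has made enough
        # progress, and fully activate a block once its predecessor is nearly done.
        if len(blocks) < num_blocks and blocks[-1].fraction_decoded() >= add_thresh:
            blocks.append(Block())
        for prev, nxt in zip(blocks, blocks[1:]):
            if prev.fraction_decoded() >= act_thresh:
                nxt.fully_activated = True
    return [tok for b in blocks for tok in b.tokens]

print(pipelined_decode())
```

In the real system, the "model step" over all active blocks happens in one batched forward pass on top of the cached prefix, which is why a deeper pipeline translates directly into higher GPU utilization.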

Putting D2F to the Test: Performance Highlights

D2F sets a new standard for dLLM efficiency across multiple benchmarks. We applied our D2F method to two popular open-source dLLMs, LLaDA and Dream, and the results speak for themselves.

Detailed Benchmark Results: LLaDA-Instruct-8B

Our D2F method dramatically boosts speed for LLaDA, achieving up to a 52.9x increase in throughput on MBPP and a 29.1x increase on HumanEval compared to the baseline, while significantly outperforming previous SOTA acceleration methods.

GSM8K-4-shot

| Method | TPS ↑ | Latency (s) ↓ | Gen. Length | Score ↑ |
|---|---|---|---|---|
| LLaDA-Instruct | 7.2 (1.0x) | 32.3 (1.0x) | 231 | 77.4 |
| dLLM-Cache | 20.1 (2.8x) | 11.5 (2.8x) | 231 | 77.5 |
| Fast-dLLM (Prefix-Cache) | 33.3 (4.6x) | 7.0 (4.6x) | 232 | 77.8 |
| Fast-dLLM (Dual-Cache) | 35.2 (4.9x) | 6.6 (4.9x) | 232 | 78.9 |
| D2F-LLaDA | 52.5 (7.3x) | 2.8 (11.5x) | 144 | 77.3 |

MBPP-3-shot

| Method | TPS ↑ | Latency (s) ↓ | Gen. Length | Score ↑ |
|---|---|---|---|---|
| LLaDA-Instruct | 0.9 (1.0x) | 71.4 (1.0x) | 65 | 39.0 |
| dLLM-Cache | 2.3 (2.6x) | 28.3 (2.5x) | 66 | 37.0 |
| Fast-dLLM (Prefix-Cache) | 13.0 (14.4x) | 4.9 (14.6x) | 64 | 37.6 |
| Fast-dLLM (Dual-Cache) | 15.3 (17.0x) | 3.8 (18.8x) | 58 | 36.4 |
| D2F-LLaDA | 47.6 (52.9x) | 1.4 (51.0x) | 68 | 38.0 |

Detailed Benchmark Results: Dream-Base-7B

Applying D2F to Dream-Base-7B results in substantial gains, including a 9.6x speedup on GSM8K-CoT and a 10.1x speedup on MBPP. Notably, the performance score often improves alongside the speed.

GSM8K-CoT-8-shot

| Method | TPS ↑ | Latency (s) ↓ | Gen. Length | Score ↑ |
|---|---|---|---|---|
| Dream-Base | 9.5 (1.0x) | 26.8 (1.0x) | 255 | 75.0 |
| dLLM-Cache | 26.0 (2.7x) | 9.8 (2.7x) | 255 | 72.0 |
| Fast-dLLM (Prefix-Cache) | 50.3 (5.3x) | 5.1 (5.3x) | 255 | 76.6 |
| Fast-dLLM (Dual-Cache) | 49.8 (5.2x) | 5.1 (5.3x) | 255 | 75.0 |
| D2F-Dream | 91.2 (9.6x) | 2.8 (9.6x) | 256 | 77.6 |

MBPP-3-shot

| Method | TPS ↑ | Latency (s) ↓ | Gen. Length | Score ↑ |
|---|---|---|---|---|
| Dream-Base | 10.4 (1.0x) | 24.6 (1.0x) | 256 | 56.2 |
| dLLM-Cache | 25.5 (2.5x) | 10.0 (2.5x) | 256 | 52.6 |
| Fast-dLLM (Prefix-Cache) | 71.6 (6.9x) | 3.6 (6.8x) | 256 | 56.4 |
| Fast-dLLM (Dual-Cache) | 73.2 (7.0x) | 3.5 (7.0x) | 256 | 51.0 |
| D2F-Dream | 105 (10.1x) | 2.3 (10.7x) | 240 | 55.2 |

Demo Results of vLLM on HumanEval

We also benchmarked an initial vLLM-based implementation of D2F on HumanEval. It delivers a further substantial speedup, but we observed an unexpected drop in scores.

| Model & Thresholds (Add, Act, Conf) | TPS ↑ | Latency (s) ↓ | Gen. Length | Score ↑ |
|---|---|---|---|---|
| Dream-Base (0.1, 0.95, 0.90) | 20.2 (1.0x) | 12.6 (1.0x) | 255 | 54.3 |
| D2F-Dream (0.1, 0.95, 0.90) | 73.2 (3.6x) | 3.1 (4.1x) | 227 | 54.3 |
| D2F-Dream-vLLM (0.1, 0.95, 0.90) | 127.2 (6.3x) | 1.87 (6.74x) | 238 | 34.1 |
| D2F-Dream-vLLM (0.05, 0.90, 0.95) | 131.7 (6.5x) | 1.78 (7.08x) | 238 | 40.2 |
Implementation Notes

The current vLLM-based prototype does not yet include:

  • Specialized fused / hand-optimized operators for existing decoding paradigms
  • CUDA Graph capture
  • Multi-GPU distributed inference

We rely on Flex Attention for convenience to obtain efficient inference. Although GPU memory is saturated, overall SM occupancy remains below optimal, indicating ample headroom for future engine-level optimization (e.g., kernel fusion, graph capture, improved scheduling, and compute/communication overlap). Even in this preliminary state, the implementation already delivers substantial performance gains.
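For reference, this is roughly how the block-wise causal pattern can be expressed with PyTorch's FlexAttention (torch >= 2.5); the shapes and block size below are illustrative, and the real engine adds KV caching and block scheduling on top:

```python
import torch
from torch.nn.attention.flex_attention import create_block_mask, flex_attention

B, H, S, D = 1, 8, 256, 64   # batch, heads, sequence length, head dim (illustrative)
BLOCK = 32                   # diffusion block size (illustrative)

def block_causal(b, h, q_idx, kv_idx):
    # Allow attention within the query's own block and to all earlier blocks.
    return (kv_idx // BLOCK) <= (q_idx // BLOCK)

q = torch.randn(B, H, S, D, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)

# FlexAttention turns the predicate into a sparse block mask and skips
# fully-masked tiles, so no dense S x S mask is ever materialized.
mask = create_block_mask(block_causal, B=None, H=None, Q_LEN=S, KV_LEN=S, device="cuda")
flex_attention = torch.compile(flex_attention)   # the usual way to get the fused kernel
out = flex_attention(q, k, v, block_mask=mask)
```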

Superior Efficiency Frontier

Beyond raw numbers, D2F operates on a significantly better throughput-performance curve. The graph below shows that for any given performance level, D2F is drastically faster. For instance, on GSM8K, our model achieves 3.1x higher throughput than LLaMA3 while also attaining a better score.

Graph showing the throughput vs. performance trade-off for D2F and other models.
Figure 5. The throughput-performance trade-off. D2F models (blue data points) establish a far more favorable efficiency curve. Unlike vanilla dLLMs, which see performance plummet at higher speeds, D2F maintains high scores while dramatically increasing throughput, outperforming AR baselines (stars) on both axes.

Conclusion

Discrete Diffusion Forcing (D2F) represents a fundamental leap forward for diffusion-based language models. By creating a smart hybrid of autoregressive and diffusion principles, it overcomes the critical inference bottlenecks that have long held dLLMs back. For the first time, an open-source dLLM can not only match but significantly exceed the speed of highly optimized autoregressive models.

This breakthrough establishes dLLMs as a practical, high-performance alternative for a wide range of text generation tasks, paving the way for new applications and further research into efficient, parallel-first language modeling.

Citation

If you find our work useful, please consider citing our paper:

@misc{wang2025diffusionLLMsD2F,
  title         = {Diffusion LLMs Can Do Faster-Than-AR Inference via Discrete Diffusion Forcing},
  author        = {Wang, Xu and Xu, Chenkai and Jin, Yijie and Jin, Jiachun and Zhang, Hao and Deng, Zhijie},
  year          = {2025},
  month         = {aug},
  eprint        = {2508.09192},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG},
  doi           = {10.48550/arXiv.2508.09192},
  url           = {https://arxiv.org/abs/2508.09192},
  note          = {arXiv:2508.09192}
}