Think Before You Diffuse: LLMs-guided Physics-Aware Video Generation

Ke Zhang, Cihan Xiao, Yiqun Mei, Jiacong Xu, Vishal M. Patel
Johns Hopkins University
Demo GIFs

Demo Video

Abstract

Recent video diffusion models have demonstrated strong capability in generating visually pleasing results, yet synthesizing correct physical effects in generated videos remains challenging. The complexity of real-world motions, interactions, and dynamics introduces great difficulty when learning physics from data. In this work, we propose DiffPhy, a generic framework that enables physically correct and photo-realistic video generation by fine-tuning a pre-trained video diffusion model. Our method leverages large language models (LLMs) to explicitly reason a comprehensive physical context from the text prompt and uses it to guide the generation. To incorporate physical context into the diffusion model, we leverage a multimodal large language model (MLLM) as a supervisory signal and introduce a set of novel training objectives that jointly enforce physical correctness and semantic consistency with the input text. We also establish a high-quality physical video dataset containing diverse physics-related actions and events to facilitate effective fine-tuning. Extensive experiments on public benchmarks demonstrate that DiffPhy produces state-of-the-art results across diverse physical scenarios. Our model and data will be released after the review process.

Method Pipeline

Method Pipeline Diagram

An overview of DiffPhy. Our method consists of four steps:

1. Given a user prompt, we leverage a pretrained LLM to reason physical properties from the text input.
2. We then (a) enhance the user prompt with physical context and (b) produce a list of relevant physical phenomena associated with the described event.
3. We use the enhanced prompt to guide video generation; the phenomena list is used to penalize outputs with implausible physics.
4. A set of novel training objectives jointly enforces physical correctness and semantic consistency.
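The steps above can be sketched in code. This is a minimal, self-contained illustration of the control flow only; the function names (`reason_physics`, `enhance_prompt`, `physics_penalty`) and the canned physical context are hypothetical stand-ins, not the actual DiffPhy implementation, which uses a real LLM reasoner and an MLLM judge during fine-tuning.

```python
# Hypothetical sketch of the DiffPhy-style pipeline described above.
# All names here are illustrative; stubs stand in for the LLM and MLLM.

def reason_physics(prompt: str) -> dict:
    """Step 1 (stub): an LLM would infer a physical context from the prompt.
    We return a canned example for illustration."""
    return {
        "objects": ["ball"],
        "phenomena": ["gravity", "elastic bounce", "energy loss per bounce"],
    }

def enhance_prompt(prompt: str, context: dict) -> str:
    """Step 2a: fold the reasoned physical context into the text prompt."""
    phenomena = ", ".join(context["phenomena"])
    return f"{prompt} (physically plausible: {phenomena})"

def physics_penalty(mllm_scores: dict, phenomena: list) -> float:
    """Steps 3-4 (stub): penalize phenomena an MLLM judge scores as
    implausible. mllm_scores maps phenomenon -> plausibility in [0, 1]."""
    return sum(1.0 - mllm_scores.get(p, 0.0) for p in phenomena) / len(phenomena)

if __name__ == "__main__":
    prompt = "a ball bouncing down a staircase"
    ctx = reason_physics(prompt)                 # step 1
    enhanced = enhance_prompt(prompt, ctx)       # step 2
    # step 3: enhanced prompt would condition the video diffusion model here;
    # the (stubbed) MLLM then scores each listed phenomenon in the output.
    scores = {"gravity": 1.0, "elastic bounce": 0.8, "energy loss per bounce": 0.4}
    loss = physics_penalty(scores, ctx["phenomena"])  # step 4: training signal
    print(enhanced)
    print(round(loss, 2))
```

In this toy run, the penalty averages the implausibility of each listed phenomenon, so a video judged to violate energy loss is pushed toward a higher loss than one that only slightly mishandles the bounce.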

BibTeX Citation

@misc{zhang2025think,
  title={Think Before You Diffuse: LLMs-Guided Physics-Aware Video Generation},
  author={Ke Zhang and Cihan Xiao and Yiqun Mei and Jiacong Xu and Vishal M. Patel},
  year={2025},
  eprint={2505.21653},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}