Training-free Guidance in Text-to-Video Generation
via Multimodal Planning and Structured Noise Initialization

UNC Chapel Hill
*: equal contribution

Abstract

Recent advances in text-to-video (T2V) diffusion models have significantly improved the visual quality of generated videos. However, even recent T2V models find it challenging to follow text descriptions accurately, especially when the prompt requires precise control of spatial layouts or object trajectories. A recent line of research applies layout guidance to T2V models, but these methods require fine-tuning or iterative manipulation of attention maps at inference time. This significantly increases the memory requirement, making it difficult to adopt a large T2V model as a backbone. To address this, we introduce Video-MSG, a training-free Guidance method for T2V generation based on Multimodal planning and Structured noise initialization. Video-MSG consists of three steps. In the first two steps, Video-MSG creates a Video Sketch, a fine-grained spatio-temporal plan for the final video that specifies the background, foreground, and object trajectories in the form of draft video frames. In the last step, Video-MSG guides a downstream T2V diffusion model with the Video Sketch through noise inversion and denoising. Notably, Video-MSG requires neither fine-tuning nor memory-intensive attention manipulation at inference time, making it easier to adopt large T2V models. Video-MSG demonstrates its effectiveness in enhancing text alignment with multiple T2V backbones (VideoCrafter2 and CogVideoX-5B) on popular T2V generation benchmarks (T2VCompBench and VBench). We also provide comprehensive ablation studies on the noise inversion ratio, different background generators, background object detection, and foreground object segmentation.

Method

[Figure 2: Overview of the Video-MSG pipeline]
We introduce Video-MSG, Multimodal Sketch Guidance for video generation, a training-free guidance method for T2V generation based on multimodal planning and structured noise initialization. Video-MSG consists of three stages (illustrated in Fig. 2): 1. Background Planning, where we use T2I and I2V models to generate background image priors with natural animation. 2. Foreground Object Layout and Trajectory Planning, where we apply an MLLM and object detectors to plan foreground object layouts and trajectories and place the objects harmoniously into the background. 3. Video Generation with Structured Noise Initialization, where the images synthesized in the first two stages serve as a Video Sketch that guides the final video generation via noise inversion.
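To make the third stage concrete, the snippet below sketches one common way to realize structured noise initialization: the Video Sketch frames (or their latents) are partially noised with the diffusion forward process up to an intermediate timestep, and the T2V backbone then runs its standard denoising loop from that timestep. This is a minimal sketch under stated assumptions; the function and variable names (structured_noise_init, t_star) are illustrative, not the authors' code, and the exact inversion procedure used in Video-MSG may differ.

import torch

def structured_noise_init(video_sketch: torch.Tensor,
                          alphas_cumprod: torch.Tensor,
                          t_star: int) -> torch.Tensor:
    """Partially noise the Video Sketch so a T2V backbone can denoise from it.

    video_sketch:   (T, C, H, W) draft frames (or latents) from the planning
                    stages, assumed scaled to the model's input range.
    alphas_cumprod: (num_train_timesteps,) cumulative products of the
                    backbone's diffusion noise schedule.
    t_star:         intermediate timestep; larger values inject more noise,
                    giving the backbone more freedom to deviate from the sketch.
    """
    noise = torch.randn_like(video_sketch)
    a_t = alphas_cumprod[t_star]
    # Forward diffusion q(x_t | x_0): mix the sketch with Gaussian noise.
    return a_t.sqrt() * video_sketch + (1.0 - a_t).sqrt() * noise

The resulting tensor is used as the starting latent for the backbone's usual denoising loop beginning at t_star, so no fine-tuning or attention-map manipulation (and hence no extra memory) is needed; sweeping t_star corresponds to the noise inversion ratio studied in our ablations.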

Qualitative Examples

Motion: An egg rolling from the right to left on the table.

[Videos: CogVideoX-5B | Ours (Video Sketch) | Ours (Final Video)]

Motion: A helicopter gracefully descending to land.

[Videos: CogVideoX-5B | Ours (Video Sketch) | Ours (Final Video)]

Numeracy: Three bears fish in a river surrounded by mountains.

[Videos: CogVideoX-5B | Ours (Video Sketch) | Ours (Final Video)]

Numeracy: Six penguins waddle together across an icy landscape.

[Videos: CogVideoX-5B | Ours (Video Sketch) | Ours (Final Video)]

Spatial: A gorilla sitting on the left side of a vending machine in a forest.

[Videos: CogVideoX-5B | Ours (Video Sketch) | Ours (Final Video)]

Spatial: A child building a sandcastle on the right of a beach umbrella.

[Videos: CogVideoX-5B | Ours (Video Sketch) | Ours (Final Video)]

Quantitative Results

BibTeX


@article{li2025video-msg,
        author  = {Jialu Li and Shoubin Yu and Han Lin and Jaemin Cho and Jaehong Yoon and Mohit Bansal},
        title   = {Training-free Guidance in Text-to-Video Generation via Multimodal Planning and Structured Noise Initialization},
        year    = {2025},
        journal = {arXiv preprint arXiv:2504.08641},
}