LiON-LoRA: Rethinking LoRA Fusion to Unify Controllable Spatial and Temporal Generation for Video Diffusion

1Zhejiang University 2Alibaba DAMO Academy 3Hupan Lab
ICCV 2025
*Equal Contribution
Corresponding Author
Motivation

Camera and motion control results of our LiON-LoRA. LiON-LoRA linearly controls both camera trajectory and object motion in videos generated by the video diffusion model. Furthermore, built on LoRA fine-tuning, LiON-LoRA achieves excellent generalization with minimal training data.

Abstract

Video Diffusion Models (VDMs) have demonstrated remarkable capabilities in synthesizing realistic videos by learning from large-scale data. Although vanilla Low-Rank Adaptation (LoRA) can learn specific spatial or temporal movement to drive VDMs with constrained data, achieving precise control over both camera trajectories and object motion remains challenging due to unstable fusion and non-linear scalability. To address these issues, we propose LiON-LoRA, a novel framework that rethinks LoRA fusion through three core principles: Linear scalability, Orthogonality, and Norm consistency. First, we analyze the orthogonality of LoRA features in shallow VDM layers, enabling decoupled low-level controllability. Second, norm consistency is enforced across layers to stabilize fusion during complex camera motion combinations. Third, a controllable token is integrated into the diffusion transformer (DiT) to linearly adjust motion amplitudes for both cameras and objects, with a modified self-attention mechanism to ensure decoupled control. Additionally, we extend LiON-LoRA to temporal generation by leveraging static-camera videos, unifying spatial and temporal controllability. Experiments demonstrate that LiON-LoRA outperforms state-of-the-art methods in trajectory control accuracy and motion strength adjustment, achieving superior generalization with minimal training data.
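To make the three principles concrete, the sketch below shows one way norm-consistent fusion of two LoRA updates could look. All names, shapes, and the mean-norm target are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one linear layer (d_out x d_in) and two rank-r
# LoRA adapters, e.g. one for camera motion and one for object motion.
d_out, d_in, r = 64, 64, 4
B1, A1 = rng.standard_normal((d_out, r)), rng.standard_normal((r, d_in))
B2, A2 = rng.standard_normal((d_out, r)), rng.standard_normal((r, d_in))

dW1, dW2 = B1 @ A1, B2 @ A2  # low-rank weight updates delta_W = B @ A

# Orthogonality check: cosine similarity between the flattened updates.
# Values near zero suggest the two controls interfere little when fused.
cos = np.vdot(dW1, dW2) / (np.linalg.norm(dW1) * np.linalg.norm(dW2))

def fuse(dWs, scales):
    """Sum LoRA updates with per-adapter linear scales, then rescale the
    result so its Frobenius norm matches the mean norm of the individual
    updates (a simple norm-consistency constraint)."""
    fused = sum(s * dW for s, dW in zip(scales, dWs))
    target = np.mean([np.linalg.norm(dW) for dW in dWs])
    return fused * (target / np.linalg.norm(fused))

# Scales play the role of linear motion-amplitude knobs.
fused = fuse([dW1, dW2], scales=[1.0, 0.5])
```

Here the per-adapter scales act as the linear amplitude controls, while the final rescaling keeps the fused update's magnitude stable regardless of how many adapters are combined.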

Different Camera Motion

Linear Scalability of LiON-LoRA

Controllable Object Motion Strength

Fusion of Different Motion

Framework


Orthogonality and Norm Consistency.


Pipeline of multiple LiON-LoRA.

Vanilla LoRA Fusion vs. Ours

Scale Token vs. Adapter Scale

BibTeX

@inproceedings{lionlora,
        title={LiON-LoRA: Rethinking LoRA Fusion to Unify Controllable Spatial and Temporal Generation for Video Diffusion},
        author={Zhang, Yisu and Cao, Chenjie and Yu, Chaohui and Zhu, Jianke},
        booktitle={International Conference on Computer Vision (ICCV)},
        year={2025}
      }