Zequn Yang (杨泽群)

I am a fourth-year Ph.D. student at the Gaoling School of Artificial Intelligence, Renmin University of China, advised by Prof. Di Hu and co-advised by Prof. Feiping Nie. My research centers on the mechanisms of multimodal learning and multimodal interaction, especially understanding and quantifying complex interactions (synergy, redundancy, and uniqueness) from an information-theoretic perspective. Recently, I have been focusing on vision-language collaboration mechanisms and pretraining strategies in multimodal large language models (MLLMs).

I received my Bachelor's degree in Automation from Beihang University in 2022.

Email  /  Google Scholar  /  Github

News

[2026-03] One paper accepted by CVPR 2026, thanks to all co-authors!

[2025-05] One paper accepted by ICML 2025, thanks to all co-authors!

[2025-03] One paper accepted by CVPR 2025, thanks to all co-authors!

[2024-01] One paper accepted by ICLR 2024, thanks to all co-authors!

[2023-11] One paper accepted by Pattern Recognition, thanks to all co-authors!

Services

Reviewer: ICLR 2024-2026, ICML 2024-2026, CVPR 2024-2026, AAAI 2024-2025, NeurIPS 2025, IJCAI 2025

Publications
Information-Theoretic Decomposition for Multimodal Interaction Learning

Zequn Yang, Yake Wei, Haotian Ni, Zhihao Xu, Di Hu

CVPR 2026

Information-theoretic multimodal interaction decomposition

Efficient Quantification of Multimodal Interaction at Sample Level

Zequn Yang, Hongfa Wang, Di Hu

ICML 2025

arXiv / code

Multimodal interaction quantification

Quantifying and Enhancing Multi-modal Robustness with Modality Preference

Zequn Yang, Yake Wei, Ce Liang, Di Hu

ICLR 2024

arXiv / code

Multi-modal robustness

Geometric-Inspired Graph-based Incomplete Multi-view Clustering

Zequn Yang, Han Zhang, Yake Wei, Zheng Wang, Feiping Nie, Di Hu

Pattern Recognition

paper / code

Incomplete multi-view clustering

MIBench: Evaluating LMMs on Multimodal Interaction

Yu Miao*, Zequn Yang*, Yake Wei, Ziheng Chen, Haotian Ni, Haodong Duan, Kai Chen, Di Hu
(* equal contribution)

arXiv preprint, 2026

arXiv

Multimodal interaction benchmark for large multimodal models

Adaptive Unimodal Regulation for Balanced Multimodal Information Acquisition

Chengxiang Huang, Yake Wei, Zequn Yang, Di Hu

CVPR 2025

arXiv / code

Adaptive multimodal learning for balanced information acquisition



Updated in Mar. 2026
Thanks to Jon Barron and Yake Wei for this elaborate template.