Li Yang

Assistant Professor, Department of Computer Science, UNC Charlotte

I lead the Efficient AI Computing Lab. Before joining UNC Charlotte, I received my Ph.D. from Arizona State University (advised by Prof. Deliang Fan).


Our research aims to build efficient and adaptive AI systems that can learn, evolve, and operate under real-world resource constraints.

Research Directions

Modern foundation models such as large language models (LLMs) and multimodal generative models provide powerful capabilities but require substantial computational resources. Our research focuses on developing efficient techniques that reduce the training and inference cost of large-scale models while maintaining strong performance.

Key topics

  • Model compression (pruning, quantization, low-rank adaptation); a minimal sketch of low-rank adaptation follows this list
  • Efficient fine-tuning and adaptation
  • Efficient reasoning
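
To make this concrete, below is a minimal, generic sketch of low-rank adaptation (LoRA-style fine-tuning) in PyTorch: the pretrained weights are frozen and only two small low-rank factors are trained. The shapes, rank, and scaling are illustrative assumptions, not code from a specific lab project.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update (alpha/r) * B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                              # freeze pretrained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r)) # zero init: no change at step 0
        self.scale = alpha / r

    def forward(self, x):
        # base output plus the scaled low-rank correction
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrap one layer and train only the low-rank factors A and B.
layer = LoRALinear(nn.Linear(768, 768))
x = torch.randn(4, 768)
print(layer(x).shape)   # torch.Size([4, 768])
```
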
Real-world AI systems must operate in dynamic environments where tasks and data distributions evolve over time. Our research develops adaptive learning algorithms that enable models to continuously learn from new data while retaining previously acquired knowledge.

Key topics

  • Efficient continual learning algorithms for large-scale models (a rehearsal-based sketch follows this list)
  • Lifelong learning agents
  • Online and streaming learning systems
  • Adaptive model architectures
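
As one concrete illustration of continual learning, the sketch below shows simple rehearsal with a reservoir-style replay buffer in PyTorch: past examples are stored and mixed into each update to reduce forgetting. The buffer size, sampling policy, and helper names are assumptions for illustration, not the lab's specific algorithms.

```python
import random
import torch

class ReplayBuffer:
    """Reservoir-style memory of past (x, y) examples."""
    def __init__(self, capacity: int = 1000):
        self.capacity, self.seen, self.data = capacity, 0, []

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.seen)        # keep each seen example with equal probability
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

def train_step(model, opt, loss_fn, x, y, buffer, replay_k=32):
    """One update on the current batch mixed with replayed examples."""
    replay = buffer.sample(replay_k)
    if replay:
        rx, ry = zip(*replay)
        x = torch.cat([x, torch.stack(rx)])        # append replayed inputs
        y = torch.cat([y, torch.stack(ry)])        # append replayed labels
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    for xi, yi in zip(x[: len(y) - len(replay)], y):   # store current examples for future rehearsal
        buffer.add(xi.detach(), yi.detach())
    return loss.item()
```
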
Many emerging applications require AI models to run directly on resource-constrained platforms such as mobile devices, sensors, and embedded systems. Our work explores efficient AI systems that enable real-time inference and learning at the edge.

Key topics

  • Software–hardware co-design for efficient AI systems
  • On-device learning and model adaptation
  • Real-time AI for streaming and sensor data
  • Robust and privacy-preserving edge AI
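
As one small example of what edge deployment involves, the sketch below shows symmetric post-training int8 weight quantization, a common way to shrink a model for resource-constrained inference. It is a generic, illustrative snippet (the function names and the 256x256 weight are assumptions), not our deployment pipeline.

```python
import torch

def quantize_int8(w: torch.Tensor):
    """Symmetric per-tensor quantization: w is approximated by scale * q, with q stored as int8."""
    scale = w.abs().max().clamp(min=1e-8) / 127.0
    q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor):
    return q.float() * scale

w = torch.randn(256, 256)                       # stand-in for a layer's weight matrix
q, scale = quantize_int8(w)
err = (dequantize(q, scale) - w).abs().max().item()
print(f"int8 storage, max abs reconstruction error: {err:.4f}")
```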

Open Positions

I am looking for self-motivated Ph.D. students (fully funded) and research interns. If you are interested in efficient ML, continual learning, or edge AI, please email me your CV. Admission details here.

News

  • Feb 2026: One paper accepted to DAC 2026 and one to CVPR 2026.
  • Jan 2026: One paper accepted to ICLR 2026.
  • Jan 2026: Serving as a TPC member for DAC 2026.
  • Nov 2025: One paper accepted to WACV 2026.
  • Jun 2025: One paper accepted to ECML-PKDD 2025.
  • Jun 2025: Served as a committee member for ICCAD 2025.
  • May 2025: One paper accepted to ISQED 2025 (Invited).
  • May 2025: One paper accepted to GLSVLSI 2025.
  • Apr 2025: Served on an NSF panel.
  • Feb 2025: One paper accepted to CVPR 2025.
  • Jan 2025: One paper accepted to ICLR 2025.
  • Oct 2024: One paper accepted to WACV 2025.
  • Oct 2024: One paper accepted to BigData 2024.
  • Sep 2024: One paper accepted to ASP-DAC 2025 and one to NeurIPS 2024.
  • Jun 2024: Served as a committee member for ICCAD 2024.
  • Feb 2024: One paper accepted to DAC 2024.
  • Jan 2024: Received the NSF CRII award.
  • Aug 2023: Joined UNC Charlotte as an Assistant Professor.
  • Jul 2023: Served as a committee member for ICCAD 2023.

Recent Publications

Full list →
Think Before You Prune: Self-Reflective Structured Pruning for Reasoning Language Models
Ziyan Wang, Enmao Diao, Qi Le, Ping Wang, Guanghui Wang, Mingyi Lee, Shih-Yang Yeh, and Li Yang
DAC 2026

Rethinking Continual Learning with Progressive Neural Collapse
Zheng Wang, Wanhao Yu, Li Yang, and Sen Lin
ICLR 2026

More Than Memory Savings: Zeroth-Order Optimization Mitigates Forgetting in Continual Learning
Wanhao Yu, Ziyan Wang, Shuning Niu, Sen Lin, and Li Yang
WACV 2026

Closest Neighbors are Harmful for Lightweight Masked Auto-encoders
Jian Meng, Ahmed Hassan, Li Yang, Deliang Fan, Jinwoo Shin, and Jae-sun Seo
CVPR 2025

Probe Pruning: Accelerating LLMs through Dynamic Pruning via Model-Probing
Qi Le, Enmao Diao, Ziyan Wang, Xinran Wang, Jie Ding, Li Yang, and Ali Anwar
ICLR 2025

Efficient Self-Supervised Continual Learning with Progressive Task-Correlated Layer Freezing
Li Yang, Sen Lin, Fan Zhang, Junshan Zhang, and Deliang Fan
ISQED 2025 (Invited)

LoRAFusion: A Crossbar-aware Multi-task Adaption Framework via Efficient Fusion of Pretrained LoRA Modules
Jingkai Guo, Ammar Ali, Li Yang, and Deliang Fan
GLSVLSI 2025

Dyna-Optics: Architecting a Channel-Adaptive DNN Near-Sensor Optical Accelerator for Dynamic Inference
Deniz Najafi, Wanhao Yu, Mehrdad Morsali, Paolo Mercati, Mohsen Imani, Mahdi Nikdast, Li Yang, and Shaahin Angizi
COINS 2025

MFNeRF: Memory Efficient NeRF with Mixed-Feature Hash Table
Yongjae Lee, Li Yang, and Deliang Fan
WACV 2025

LP-3DGS: Learning to Prune 3D Gaussian Splatting
Zhaoliang Zhang, Tianchen Song, Yongjae Lee, Li Yang, Cheng Peng, Rama Chellappa, and Deliang Fan
NeurIPS 2024

Hyb-Learn: A Framework for On-Device Self-Supervised Continual Learning with Hybrid RRAM/SRAM Memory
Fan Zhang, Li Yang, and Deliang Fan
DAC 2024

EMGAN: Early-Mix-GAN on Extracting Server-Side Model in Split Federated Learning
Jingtao Li, Xing Chen, Li Yang, Adnan Siraj Rakin, Deliang Fan, and Chaitali Chakrabarti
AAAI 2024

Slimmed Asymmetrical Contrastive Learning and Cross Distillation for Lightweight Model Training
Jian Meng, Li Yang, Kyungmin Lee, Jinwoo Shin, Deliang Fan, and Jae-sun Seo
NeurIPS 2023

Awards

  • Best Interactive Presentation Award, DATE 2022
  • Best Paper Candidate, DATE 2022
  • Best Paper Candidate Nomination, DAC 2022
  • Young Fellow Award, DAC 2020

Lab Members

PhD Students

  • Ziyan Wang (Jan 2024 – present)
  • Wanhao Yu (Aug 2024 – present)

Master's Students

  • Zhongxing Qin (May 2024 – present)
  • Alexander Mesa (May 2025 – present)

Undergraduates

  • Bruh Lemma Yadecha (Apr 2025 – present)

Research Interns

  • Liwen Zhang (May 2025 – present)

Academic Services

TPC member: ICCAD 2023, 2024, 2025, DAC 2026
Reviewer: ICML, NeurIPS, ICLR, ECCV, CVPR, ICCV, ACL, TNNLS, etc.