Han Cai
hcai.hm [at] gmail (dot) com
I am a final-year Ph.D. student at MIT, advised by Prof. Song Han. Before coming to MIT, I received my Master's and Bachelor's degrees from Shanghai Jiao Tong University (SJTU), advised by Prof. Yong Yu. At SJTU, I also worked closely with Prof. Weinan Zhang and Prof. Jun Wang.
My research interests lie in machine learning, particularly efficient foundation models (diffusion models, LLMs, etc.), edge AI, and AutoML.
Email / Google Scholar / GitHub / Twitter / LinkedIn
Awards
Competition Awards
- Third Place in the 2022 ACM/IEEE TinyML Design Contest at ICCAD
- First Place in the 2020 IEEE Low-Power Computer Vision Challenge, CPU detection & FPGA tracks
- First Place in the 2019 IEEE Low-Power Image Recognition Challenge, classification & detection tracks
- First Place in the Low-Power Computer Vision Workshop at ICCV 2019, DSP track
- Third Place in the CVPR 2019 Low-Power Image Recognition Challenge, classification track
News
- Feb 29 2024: CAN and DistriFusion are accepted by CVPR 2024.
- Sep 12 2023: EfficientViT is highlighted on the MIT home page and MIT News.
- July 18 2023: EfficientViT: Multi-Scale Linear Attention for High-Resolution Dense Prediction is accepted by ICCV 2023.
- Mar 2022: Lite Pose: Efficient Architecture Design for 2D Human Pose Estimation is accepted by CVPR 2022.
- Jan 2022: Network Augmentation for Tiny Deep Learning is accepted by ICLR 2022.
- Sep 2020: TinyTL: Reduce Activations, Not Trainable Parameters for Efficient On-Device Learning is accepted by NeurIPS 2020.
- Feb 2020: APQ: Joint Search for Network Architecture, Pruning and Quantization Policy is accepted by CVPR 2020.
- Dec 2019: Once-For-All Network (OFA) is accepted by ICLR 2020.
- Dec 2018: ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware is accepted by ICLR 2019.
Selected Projects
EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss
EfficientViT-SAM is a new family of accelerated segment anything models. We replace SAM's heavy image encoder with EfficientViT. Benefiting from EfficientViT's efficiency and capacity, EfficientViT-SAM delivers a 48.9x measured TensorRT speedup over SAM-ViT-H on an A100 GPU without sacrificing performance.
EfficientViT: Multi-Scale Linear Attention for High-Resolution Dense Prediction
EfficientViT is a new family of vision models for high-resolution dense prediction. It achieves global receptive field and multi-scale learning with only hardware-efficient operations. EfficientViT delivers remarkable performance gains over previous models with speedup on diverse hardware platforms.
[Media: MIT home page, MIT News, Imaging and Machine Vision Europe]
[Industry Integration: NVIDIA, HuggingFace]
[Code: GitHub (0.9k stars)]
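The core idea behind EfficientViT's efficiency is replacing softmax attention with a linear-attention variant built from hardware-friendly operations. Below is a minimal illustrative NumPy sketch of ReLU linear attention under my own naming and shape assumptions; it is not the official implementation (see the GitHub repo for that):

```python
import numpy as np

def relu_linear_attention(q, k, v, eps=1e-6):
    """Illustrative linear attention with a ReLU feature map.

    q, k, v: (N, d) token features. Instead of softmax(q @ k.T) @ v,
    which costs O(N^2 * d), associativity lets us compute
    phi(q) @ (phi(k).T @ v) with phi = ReLU in O(N * d^2),
    so cost grows linearly with the number of tokens N.
    """
    q, k = np.maximum(q, 0), np.maximum(k, 0)       # ReLU feature map
    kv = k.T @ v                                    # (d, d), computed once
    num = q @ kv                                    # (N, d)
    den = q @ k.sum(axis=0, keepdims=True).T + eps  # (N, 1) normalizer
    return num / den

x = np.random.randn(196, 32)              # e.g. a 14x14 grid of 32-d tokens
out = relu_linear_attention(x, x, x)
print(out.shape)                          # (196, 32)
```

Because the (d, d) matrix `kv` is independent of the query, the quadratic token-token interaction matrix is never materialized, which is what makes high-resolution dense prediction tractable.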
Once for All: Train One Network and Specialize it for Efficient Deployment
OFA is an efficient AutoML technique that decouples model training from architecture search: train the network only once, then specialize it for many hardware platforms, from CPUs/GPUs to hardware accelerators. OFA consistently outperforms SOTA NAS methods while reducing GPU hours and CO2 emissions by orders of magnitude. In particular, OFA achieves a new SOTA 80.0% ImageNet top-1 accuracy under the mobile setting (<600M FLOPs). OFA is the winning solution of the Low-Power Computer Vision Challenge at CVPR 2020 (FPGA track), the 2019 IEEE Low-Power Image Recognition Challenge (classification and detection tracks), and the Low-Power Computer Vision Workshop at ICCV 2019 (DSP track).
[Media: MIT News, Qualcomm, VentureBeat, SingularityHub]
[Industry Integration: Meta, Sony, AMD]
[Code: GitHub (1.8k stars), Colab]
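The "train once, specialize many times" idea can be sketched as slicing subnetworks out of a shared, once-trained supernet. This toy NumPy sketch uses made-up layer sizes and helper names (it is not the OFA API) purely to illustrate how one set of trained weights can serve many deployment targets:

```python
import numpy as np

# Toy "supernet": 4 fully trained max-size (64 x 64) weight matrices.
rng = np.random.default_rng(0)
supernet = [rng.standard_normal((64, 64)) for _ in range(4)]

def sample_subnet(depth, widths):
    """Extract a smaller network by slicing shared supernet weights.

    depth:  how many of the trained layers to keep (elastic depth).
    widths: output width of each kept layer, <= 64 (elastic width).
    The input width of layer i must match the output width of layer i-1.
    """
    dims = [64] + list(widths)
    return [supernet[i][:dims[i], :dims[i + 1]] for i in range(depth)]

# Specialize without retraining: a small subnet for a phone,
# a full-size one for a GPU, both sharing the same trained weights.
phone_net = sample_subnet(depth=2, widths=[32, 16])
gpu_net = sample_subnet(depth=4, widths=[64, 64, 64, 64])

x = rng.standard_normal((1, 64))
for w in phone_net:                 # forward pass through the small subnet
    x = np.maximum(x @ w, 0)
print(x.shape)                      # (1, 16)
```

The search step then only has to evaluate sliced subnets against each platform's latency budget; no per-platform training is repeated, which is where the GPU-hour and CO2 savings come from.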
ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware
ProxylessNAS is an efficient hardware-aware neural architecture search method that searches directly on large-scale datasets and designs specialized neural network architectures for different hardware platforms. With >74.5% top-1 accuracy, ProxylessNAS is 1.8x faster than MobileNetV2.
[Media: MIT News, IEEE Spectrum]
[Industry Integration: Meta, Amazon, Microsoft]
[Code: GitHub (1.4k stars)]
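The "hardware-aware" part means folding measured latency into the search objective so that it can be optimized jointly with accuracy. Here is a minimal NumPy sketch of that idea; the op names, latency numbers, and function names are invented for illustration, not taken from the ProxylessNAS codebase:

```python
import numpy as np

# Candidate ops for one searchable layer and their latencies from a
# (hypothetical) per-device lookup table, in milliseconds.
ops = ["mbconv3_k3", "mbconv6_k5", "skip"]
latency_ms = np.array([1.2, 2.7, 0.1])

def expected_latency(arch_logits):
    """Expected latency of one layer: the architecture parameters
    define a softmax distribution over candidate ops, and the layer's
    expected latency is the probability-weighted sum of op latencies."""
    p = np.exp(arch_logits - arch_logits.max())
    p /= p.sum()
    return float(p @ latency_ms)

def search_loss(task_loss, arch_logits, lam=0.1):
    # Latency is a smooth function of the architecture parameters,
    # so accuracy and speed can be optimized jointly by gradient descent.
    return task_loss + lam * expected_latency(arch_logits)

lat = expected_latency(np.array([2.0, 0.0, 0.0]))  # favors the first op
```

Because the expected latency is differentiable with respect to the architecture parameters, the search can trade accuracy against speed per target device instead of using FLOPs as a proxy.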
Academic Services
- Reviewer for ICML, NeurIPS, ICLR, CVPR, ICCV, ECCV, TPAMI, IJCV, ACL, and EMNLP