Zhenzhong LAN, Ph.D.
Deep Learning Laboratory (DLL)
"Westlake University has one of the best faculty team in the world. Located in one of the most vibrant cities in China, I have no doubt that it will change the world. Come and join us if you see things differently and want to make a better world."
Biography
Dr. Zhenzhong Lan received his bachelor's degree from Sun Yat-sen University in 2010. He then spent two years as a visiting scholar at the Human Sensing Lab at Carnegie Mellon University (CMU). Between 2012 and 2017, he was a Ph.D. student at the School of Computer Science, CMU. He then joined an LA-based smart surveillance startup as director of R&D before moving to Google AI in February 2018. At Google, he published ALBERT, one of the best-performing natural language understanding models of its time. He joined Westlake University as an assistant professor in 2020, focusing on fundamental problems in natural language processing and computer vision.
Research
My research focuses on data-driven video and natural language understanding. I mostly do applied research with industry impact: my work on video understanding has been used to detect violent videos on the Internet, and my recent findings in natural language understanding have been used by Google News, Google Assistant, and other Google products. As a key member of the CMU team, I participated in the Multimedia Event Detection (MED) task organized by the National Institute of Standards and Technology from 2011 to 2015, whose goal is accurate and fast video analysis and retrieval. Past competitors include IBM, BBN, SRI, Kitware, Stanford, and many other companies and academic groups around the world. In MED 2014, our team ranked 1st in 6 of the 8 conditions. I also serve on the program committees of top computer vision and multimedia conferences such as CVPR, ICCV, ECCV, and ACM MM.
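Readers curious about the ALBERT model mentioned above can try the publicly released checkpoints themselves. The following is a minimal sketch, not part of the original profile, assuming the Hugging Face transformers library and the public albert-base-v2 checkpoint:

from transformers import AlbertTokenizer, AlbertModel

# Load the pretrained ALBERT-base tokenizer and encoder
# (albert-base-v2 is a publicly released checkpoint; this is an illustrative choice).
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertModel.from_pretrained("albert-base-v2")

# Encode an example sentence and obtain contextual token representations.
inputs = tokenizer("Westlake University is located in Hangzhou.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch size, sequence length, hidden size)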
Representative Publications
1. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv, 2019.
2. Sebastian Goodman, Zhenzhong Lan, Radu Soricut. Multi-stage Pretraining for Abstractive Summarization. arXiv, 2019.
3. Zhenzhong Lan, Yi Zhu, Alexander G. Hauptmann, Shawn Newsam. Deep Local Video Features for Action Recognition. CVPR Workshops, 2017.
4. Zhenzhong Lan, Shoou-I Yu, Dezhong Yao, Ming Lin, Bhiksha Raj, Alexander G. Hauptmann. The Best of Both Worlds: Combining Data-Independent and Data-Driven Approaches for Action Recognition. CVPR Workshops, 2016.
5. Ming Lin, Zhenzhong Lan, Alexander G. Hauptmann. Density Corrected Sparse Recovery When RIP Condition Is Broken. IJCAI, 2015.
6. Zhenzhong Lan, Ming Lin, Xuanchong Li, Alexander G. Hauptmann, Bhiksha Raj. Beyond Gaussian Pyramid: Multi-skip Feature Stacking for Action Recognition. CVPR, 2015.
7. Wei Tong, Yi Yang, Lu Jiang, Shoou-I Yu, Zhenzhong Lan, et al. E-LAMP: Integration of Innovative Ideas for Multimedia Event Detection. Machine Vision and Applications, 2014.
8. Zhenzhong Lan, Lei Bao, Shoou-I Yu, Wei Liu, Alexander G. Hauptmann. Multimedia Classification and Event Detection Using Double Fusion. Multimedia Tools and Applications, 2014.
9. Lu Jiang, Deyu Meng, Shoou-I Yu, Zhenzhong Lan, Shiguang Shan, Alexander Hauptmann. Self-Paced Learning with Diversity. NIPS, 2014.
10. Nicolas Ballas, Yi Yang, Zhenzhong Lan, Bertrand Delezoide, Francoise Preteux, Alexander Hauptmann. Space-Time Robust Representation for Action Recognition. ICCV, 2013.
11. Zhenzhong Lan, Lu Jiang, Shoou-I Yu, Shourabh Rawat, Yang Cai, et al. CMU-Informedia at TRECVID 2013 Multimedia Event Detection. TRECVID Workshop, 2013.
12. Zhenzhong Lan, Lei Bao, Shoou-I Yu, Wei Liu, Alexander G. Hauptmann. Double Fusion for Multimedia Event Detection. MMM, 2012.
13. Minh Hoai, Zhenzhong Lan, Fernando De la Torre. Joint Segmentation and Classification of Human Actions in Video. CVPR, 2011.
Our lab has multiple openings for senior researchers, postdocs, technicians, and graduate students. Undergraduates who qualify for direct admission to graduate programs are welcome to email me at any time to learn more about our research and internship opportunities.