In Summer 2020, I will join Westlake University as an assistant professor, where I will focus on fundamental problems in natural language processing and computer vision. Currently, I am an NLP researcher at Google AI. Before joining Google in Feb. 2018, I was the director of R&D at an LA-based smart surveillance startup. Prior to that, I was a PhD student at Carnegie Mellon University (CMU), advised by Prof. Alexander Hauptmann; I graduated in May 2017. Before that, I spent two years as a visiting scholar at the Human Sensing Lab at CMU. I received my bachelor's degree in 2010 from the lovely Sun Yat-sen University.
My research has mainly been in data-driven video and natural language understanding. I mostly do applied research with industry impact. For example, my research on video understanding has been used to find violent videos on the Internet, and my recent findings on natural language understanding have been used by Google News, Google Assistant, and other Google products. As a key member, I participated in the Multimedia Event Detection (MED) task organized by the National Institute of Standards and Technology from 2011 to 2015. The goal of this task is to perform accurate and fast video analysis and retrieval. Past competitors include IBM, BBN, SRI, Kitware, Stanford, and many other companies and academic groups around the world. In MED 2014, our team ranked 1st in 6 out of the 8 conditions. I also serve as a committee member for most of the top computer vision and multimedia conferences, such as CVPR, ICCV, ECCV, and MM.
1. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv, 2019.
2. Sebastian Goodman, Zhenzhong Lan, Radu Soricut. Multi-stage Pretraining for Abstractive Summarization. arXiv, 2019.
3. Zhenzhong Lan, Yi Zhu, Alexander G Hauptmann, Shawn Newsam. Deep local video features for action recognition. CVPR workshops 2017.
4. Zhenzhong Lan, Shoou-I Yu, Dezhong Yao, Ming Lin, Bhiksha Raj, Alexander G. Hauptmann. The Best of Both Worlds: Combining Data-independent and Data-driven Approaches for Action Recognition. CVPR Workshops 2016.
5. Ming Lin, Zhenzhong Lan, Alexander G. Hauptmann. Density Corrected Sparse Recovery When RIP Condition Is Broken. IJCAI’15.
6. Zhenzhong Lan, Ming Lin, Xuanchong Li, Alexander G Hauptmann, Bhiksha Raj. Beyond gaussian pyramid: Multi-skip feature stacking for action recognition. CVPR’15.
7. Wei Tong, Yi Yang, Lu Jiang, Shoou-I Yu, Zhenzhong Lan, et al. E-LAMP: Integration of Innovative Ideas for Multimedia Event Detection. Machine Vision and Applications, 2014.
8. Zhenzhong Lan, Lei Bao, Shoou-I Yu, Wei Liu, Alexander G Hauptmann. Multimedia Classification and Event Detection Using Double Fusion. Multimedia Tools and Applications, 2014.
9. Lu Jiang, Deyu Meng, Shoou-I Yu, Zhenzhong Lan, Shiguang Shan, Alexander Hauptmann. Self-paced Learning with Diversity. NIPS’14.
10. Nicolas Ballas, Yi Yang, Zhenzhong Lan, Bertrand Delezoide, Francoise Preteux, Alexander Hauptmann. Space-time Robust Representation for Action Recognition. ICCV’13.
11. Zhenzhong Lan, Lu Jiang, Shoou-I Yu, Shourabh Rawat, Yang Cai, et al. CMU-Informedia at TRECVID 2013 Multimedia Event Detection. TRECVID Workshop, 2013.
12. Zhenzhong Lan, Lei Bao, Shoou-I Yu, Wei Liu, Alexander G Hauptmann. Double Fusion for Multimedia Event Detection. MMM’12.
13. Minh Hoai, Zhenzhong Lan, Fernando De la Torre. Joint Segmentation and Classification of Human Actions in Video. CVPR’11.
Our lab has multiple openings for senior researchers, postdocs, technicians, and graduate students. Undergraduates who are qualified for direct acceptance into graduate programs are welcome to email me at any time to learn more about our research and internship opportunities.