The 1st Efficient Natural Language and Multimodal Models Workshop
The landscape of AI has been significantly altered by advances in large-scale pre-trained models, which lay the groundwork for more general AI and have enabled previously unattainable performance in natural language processing and multimodal learning. Despite this empirical success, these large-scale models require an enormous amount of computation to achieve high performance, hindering their deployment on devices with limited memory and strict latency requirements. Additionally, these models often rely on large amounts of labeled training data that are difficult to acquire or annotate for many tasks, including those involving sensitive user data. Challenges also arise in how models can continuously improve themselves using feedback signals from users. We propose to organize a one-day track focusing on model and data efficiency for building large-scale models.
This event aims to bring together experts in machine learning, natural language processing, optimization, and systems to stimulate vibrant discussion, deepen our understanding of the connections among these areas, and foster new research directions toward advanced approaches for improving the efficiency of NLP and multimodal models.
Call for Papers
We encourage the community to submit their solutions, ideas, and ongoing work on efficiency for NLP and multimodal models. The scope of this workshop includes, but is not limited to, the following topics:
- Efficient model architecture design for large-scale NLP and multimodal models.
- Model compression and acceleration for large transformer-based models, such as quantization, pruning, and knowledge distillation for NLP and multimodal models.
- Efficient training/optimization/fine-tuning methods for NLP and multimodal models.
- Efficient frameworks (e.g., DeepSpeed) to support training and inference of NLP and multimodal models.
- Novel methods such as sample efficient training, data augmentation, and data distillation to improve the data efficiency of NLP and multimodal models.
Submission Instructions
The submission website will be up shortly. All submitted papers must be anonymized for double-blind review. We expect each paper to be reviewed by at least three reviewers. The main content of each paper (excluding references and supplementary materials) should be no longer than 8 pages.
Authors can submit up to 100 MB of supplementary materials separately. Authors are strongly encouraged to submit their code for reproducibility. Although original submissions are preferred, we also accept papers that have already been published, posted on arXiv, or are currently under submission elsewhere. Please make sure to indicate the complete list of conflicts of interest for all authors of your paper. To encourage high-quality submissions, our sponsors are offering a Best Paper Award to outstanding original oral and poster presentations (upon nomination by the reviewers). Note that the workshop is non-archival, but accepted papers will be hosted on the workshop website.
Confirmed Speakers
- Prof. Yejin Choi, University of Washington (Allen Institute for AI)
- Prof. Yoon Kim, Massachusetts Institute of Technology
- Dr. Chunting Zhou, Meta AI Research
- Prof. Sameer Singh, University of California, Irvine
- Prof. Bang Liu, University of Montreal
- Dr. Jianfeng Gao, Microsoft Research
Schedule (Tentative)
Our workshop will include 9 confirmed invited talks (30 minutes each). Four contributed talks (15 minutes each), selected from paper submissions, will encourage novel contributed work and highlight junior researchers. A panel discussion and two in-person poster sessions will encourage broad, interactive, and interdisciplinary discussion of open problems. Coffee breaks are scheduled immediately after the poster sessions to give interested participants ample time to discuss and socialize. The workshop schedule, including talk and paper titles, will be published early so that attendees can choose sessions based on content.
Organizers
- Yuntian Deng, Harvard University
- Mengzhou Xia, Princeton University
- Mehdi Rezagholizadeh, Huawei Noah's Ark Lab
- Yue Dong, University of California, Riverside
- Shiyu Chang, UC Santa Barbara
- Yu Cheng, Microsoft Research
Advisory Committee
- Ahmed H. Awadallah, Microsoft
- Danqi Chen, Princeton University
- Alexander Rush, Cornell University
Program Committee