NS4NLP: Neuro-Symbolic Modeling for NLP

Tutorial @ COLING 2022

Monday, October 17: 8 am - 11 am Korea Standard Time.

Sunday, October 16: 7 pm - 10 pm US Eastern Time.

The goal of neuro-symbolic methods is to combine symbolic representations and neural networks to benefit from the complementary strengths of the two paradigms. These ideas have a long history in AI and are currently experiencing a resurgence of interest in several AI communities, including NLP. This tutorial targets researchers and practitioners interested in applying and advancing the use of these methods for natural language processing problems. A major goal of this tutorial is to provide a framework for analyzing the different modeling and algorithmic choices when combining symbolic and neural models for knowledge representation, learning, and reasoning.

Tutorial Outline

Introduction (15 mins) [Slides]
Algorithmic Frameworks and Applications (35 mins)
Framework 1: Integrating Rules and Symbolic Knowledge into Neural Language Models. Applications: Math Word Problems, Common Sense Reasoning, Grounding and Generation [Slides Part 1] [Slides Part 2]
Break (15 mins)
Algorithmic Frameworks and Applications (cont.) (70 mins)
Framework 2: Augmenting Network Architectures and Loss Functions Using Logic Rules. Applications: Textual Inference and Sentence-level Semantics [Slides]
Framework 3: Augmenting Statistical Relational Learning with Neural Potentials and Distributed Representations. Applications: Discourse, Argumentation and Computational Social Science [Slides]
Break (15 mins)
Conclusion (10 mins)
Challenges and Opportunities: What is Next for NS4NLP? [Slides]
Demo (20 mins)
Modeling an NLP Application with DRaiL: a Declarative Language for Deep Relational Learning [Colab Notebook]
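As a loose illustration of the kind of technique Framework 2 in the outline above covers (this sketch is not taken from the tutorial materials, and all names in it are hypothetical), a common way to inject a logic rule into training is to relax the rule into a differentiable penalty and add it to the task loss. For example, transitivity of textual entailment ("A entails B" and "B entails C" imply "A entails C") can be softened with a t-norm and enforced as a hinge penalty:

```python
def soft_implication_loss(p_antecedent: float, p_consequent: float) -> float:
    """Hinge-style relaxation of the logic rule A -> B:
    penalize the model whenever it believes A more strongly than B."""
    return max(0.0, p_antecedent - p_consequent)


def augmented_loss(task_loss: float, rule_losses: list, weight: float = 0.5) -> float:
    """Training objective: original task loss plus weighted rule penalties."""
    return task_loss + weight * sum(rule_losses)


# Transitivity rule: entail(A, B) AND entail(B, C) -> entail(A, C).
# The conjunction is relaxed with the Goedel t-norm (min of the two truths).
p_ab, p_bc, p_ac = 0.9, 0.8, 0.3          # model's current probabilities
rule = soft_implication_loss(min(p_ab, p_bc), p_ac)
total = augmented_loss(task_loss=1.2, rule_losses=[rule])
```

Here the model believes both premises (0.9 and 0.8) but not the conclusion (0.3), so the rule term is large and pushes training toward consistency; if the conclusion probability exceeded the premises, the penalty would vanish. Different t-norms (product, Lukasiewicz, Goedel) give different relaxations, which is one of the modeling choices the tutorial's framework is meant to expose.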


Dan Roth
Eduardo D. Glandt Distinguished Professor, University of Pennsylvania | VP/Distinguished Scientist, AWS AI


Dan Roth is the Eduardo D. Glandt Distinguished Professor at the Department of Computer and Information Science, UPenn, VP/Distinguished Scientist at AWS AI Labs, and a Fellow of the AAAS, ACM, AAAI, and ACL. In 2017, he was awarded the John McCarthy Award, the highest award the AI community gives to mid-career AI researchers. Roth was recognized "for major conceptual and theoretical advances in the modeling of natural language understanding, machine learning, and reasoning." Roth has published broadly in machine learning, NLP, KRR, and learning theory, and has given keynote talks and tutorials in all ACL and AAAI major conferences. Roth was the Editor-in-Chief of JAIR until 2017, and was the program chair of AAAI'11, ACL'03 and CoNLL'02; he serves regularly as an area chair and senior program committee member in the major conferences in his research areas.

Yejin Choi
Brett Helsel Professor, University of Washington | Senior Research Manager, AI2


Yejin Choi is the Brett Helsel Professor at the Paul G. Allen School of CSE at the University of Washington with a dual appointment at AI2. Her research investigates commonsense knowledge and reasoning, neuro-symbolic integration, multimodal representation learning, and AI for social good. She is a co-recipient of the ACL Test of Time award in 2021, the CVPR Longuet-Higgins Prize in 2021, the AAAI Outstanding Paper Award in 2020, the Borg Early Career Award in 2018, the Alexa Prize in 2017, IEEE AI's 10 to Watch in 2016, and the ICCV Marr Prize in 2013. Yejin (together with Dan Roth) gave a tutorial at ACL 2020 on Commonsense Reasoning for NLP, which was the second most popular tutorial at ACL 2020.

Vivek Srikumar
Associate Professor, The University of Utah


Vivek Srikumar is an Associate Professor in the School of Computing at the University of Utah. His research lies in the areas of natural language processing and machine learning, and has primarily been driven by the question of efficiently reasoning about text with only limited supervision. His work has been published in various AI, NLP and machine learning venues, and has been recognized by a best paper award at EMNLP 2014 and honorable mentions from CoNLL and IEEE Micro magazine. He serves on the editorial boards of JAIR and the CL journal, and is an associate program chair for AAAI 2022. He has presented tutorials on structured prediction and debiasing text representations at various NLP and AI venues in the past.

Dan Goldwasser
Associate Professor, Purdue University


Dan Goldwasser is an Associate Professor at the Department of Computer Science at Purdue University. His current interests focus on developing representation and reasoning frameworks for computational social science applications, used for grounding political discourse and understanding real-world scenarios. His work is published at the main AI and NLP venues, such as ACL, NAACL, EMNLP and AAAI. He has received research support from the NSF, including a recent CAREER award, DARPA and Google. Dan is on the editorial board of JAIR and regularly serves as an area chair and senior program committee member in major AI and NLP conferences. He has given several tutorials, including a shorter version of this tutorial at IJCAI 2021, together with Maria Pacheco.

Maria L. Pacheco
Postdoctoral Researcher, Microsoft Research | Visiting Assistant Professor, University of Colorado Boulder


Maria L. Pacheco is a Postdoctoral Researcher at Microsoft Research NYC, and an Incoming Assistant Professor in the Department of Computer Science at the University of Colorado Boulder. Her research focuses broadly on neuro-symbolic representations for language applications, and the role they play in human-AI communication. Maria has published in and served on the program committees of top conferences and journals in AI and NLP, such as AAAI, TACL, NAACL, EMNLP, EACL, CoNLL, and SIGDIAL. Maria has delivered tutorials and talks on neuro-symbolic modeling for NLP to diverse audiences, including a shorter version of this tutorial at IJCAI 2021, together with Dan Goldwasser.

Sean Welleck
Postdoctoral Scholar, University of Washington | Young Investigator, AI2


Sean Welleck is a Postdoctoral Scholar at the University of Washington and AI2. He earned his PhD at New York University. His research focuses on generating and reasoning with natural language, including neural theorem proving and integrating symbolic representations into neural language generation. He has published at and served as a program committee member for NeurIPS, ICLR, ICML, AAAI, EMNLP, and ACL, hosts the Thesis Review podcast, and organized the NeurIPS 2021 workshop on Math-AI for Education.


Disclaimer: We experienced minor connectivity issues during the introduction, so the audio is a little choppy in some parts. The problem was resolved after a few minutes.