I am currently on the Llama team at Meta Gen AI. I completed my PhD in the NLP Group at the University of Washington, where I was advised by Hannaneh Hajishirzi and Luke Zettlemoyer. I also have an MS in Language Technologies from Carnegie Mellon University and a Bachelor's in Computer Science and Engineering from IIT Kharagpur.
My primary research lies at the intersection of natural language processing and machine learning. I am broadly interested in interpretability, complex interactions, and robust inference for large language models (LLMs) and other foundation models. My research has focused on the following key areas:
- Explainable AI and model interpretability: Evaluation and training methods that enable pretrained models to provide faithful yet readable explanations for their decisions [1][2].
- Model robustness: Robustness to counterfactual inputs [3], identification of error-prone data sub-populations for distributional robustness [4], and mitigation of factual inconsistencies in LLM generations.
- Complex interaction and reasoning with LLMs: Interacting with tools (APIs, search engines, code) and with humans to plan and carry out multi-step reasoning [5].
Research Experience
- [May 23-Nov 23] Meta AI Intern, supervised by Luke Zettlemoyer and Koustuv Sinha.
- [Sep 22-Jan 23] Microsoft Research Intern, supervised by Marco Ribeiro and Scott Lundberg.
- [Mar 22-Sep 22] Allen Institute for Artificial Intelligence Visiting Researcher, supervised by Hannaneh Hajishirzi and Pradeep Dasigi.
- [June 21-Dec 21] Google DeepMind (formerly Google AI Research) Intern, supervised by Ian Tenney and Matthew Lamm.
- [Mar 20-Mar 21] Meta AI (formerly Facebook AI Research) Visiting Researcher, supervised by Luke Zettlemoyer and Marjan Ghazvininejad.
Relevant Publications
Please see my Google Scholar page for an up-to-date list.
- ART: Automatic multi-step reasoning and tool-use for large language models
  Bhargavi Paranjape, Scott Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, Marco Tulio Ribeiro
  arXiv preprint
- PuMer: Pruning and Merging Tokens for Efficient Vision Language Models
  Qingqing Cao, Bhargavi Paranjape, Hannaneh Hajishirzi
  ACL 2023
- AGRO: Adversarial Discovery of Error-prone groups for Robust Optimization
  Bhargavi Paranjape, Pradeep Dasigi, Vivek Srikumar, Luke Zettlemoyer, Hannaneh Hajishirzi
  ICLR 2023
- CORE: A Retrieve-then-Edit Framework for Counterfactual Data Generation
  Tanay Dixit, Bhargavi Paranjape, Hannaneh Hajishirzi, Luke Zettlemoyer
  EMNLP 2022 (Findings Track)
- Retrieval-guided Counterfactual Generation for QA
  Bhargavi Paranjape, Matthew Lamm, Ian Tenney
  ACL 2022
- EASE: Extractive-Abstractive Summarization with Explanations
  Haoran Li, Arash Einolghozati, Srinivasan Iyer, Bhargavi Paranjape, Yashar Mehdad, Sonal Gupta, Marjan Ghazvininejad
  Proceedings of the Third Workshop on New Frontiers in Summarization (EMNLP 2021)
- FiD-Ex: Improving Sequence-to-Sequence Models for Extractive Rationale Generation
  Kushal Lakhotia, Bhargavi Paranjape, Asish Ghoshal, Wen-tau Yih, Yashar Mehdad, Srinivasan Iyer
  EMNLP 2021 [Code]
- Prompting Contrastive Explanations for Commonsense Reasoning Tasks
  Bhargavi Paranjape, Julian Michael, Marjan Ghazvininejad, Luke Zettlemoyer, and Hannaneh Hajishirzi
  ACL 2021 (Findings Track) [Code] [Slides]
- An Information Bottleneck Approach for Controlling Conciseness in Rationale Extraction
  Bhargavi Paranjape, Mandar Joshi, John Thickstun, Hannaneh Hajishirzi, and Luke Zettlemoyer
  EMNLP 2020 [Code] [Slides]
- Entity Projection via Machine-Translation for Cross-Lingual NER
  Alankar Jain, Bhargavi Paranjape, Zachary C. Lipton
  EMNLP 2019 [Code]
- Contextualized Representations for Low-resource Utterance Tagging
  Bhargavi Paranjape, Graham Neubig
  SIGDIAL 2019
- ProtoNN: Compressed and accurate kNN for resource-scarce devices
  Chirag Gupta, Arun Sai Suggala, Ankit Goyal, Harsha Vardhan Simhadri, Bhargavi Paranjape, Ashish Kumar, Saurabh Goyal, Raghavendra Udupa, Manik Varma, Prateek Jain
  ICML 2017
- SCDV: Sparse Composite Document Vectors using soft clustering over distributional representations
  Dheeraj Mekala, Vivek Gupta, Bhargavi Paranjape, and Harish Karnick
  EMNLP 2017 [Code]
- Stop Clickbait: Detecting and Preventing Clickbaits in Online News Media
  Abhijnan Chakraborty, Bhargavi Paranjape, Sourya Kakarla, and Niloy Ganguly
  2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM) [Code]
  Best Student Paper Award
Professional Contributions
- Program Committee member of the BlackboxNLP 2021 workshop at EMNLP 2021.
- Organizer of the HAMLETS (Human And Machine in-the-Loop Evaluation and Learning Strategies) workshop at NeurIPS 2020.
- Reviewer for EMNLP and AKBC (2020), and for NAACL, ACL, EMNLP, ICLR, and NeurIPS (2021-2023).