About

Hello! I'm a Ph.D. candidate in the Faculty of Information, a graduate fellow of the Data Sciences Institute (DSI), and a graduate affiliate of the Schwartz Reisman Institute for Technology and Society (SRI) at the University of Toronto. I'm co-advised by Prof. Ishtiaque Ahmed and Prof. Shion Guha. I am also a member of the Department of Computer Science's Third Space research group, the Dynamic Graphics Project lab, and the iSchool's Human-Centred Data Science Lab.

Before starting my Ph.D., I worked as a Data Scientist at Everwell Health Solutions and as an AI Center Fellow at Microsoft Research India. I hold an M.Sc. in Artificial Intelligence from KU Leuven, Belgium, and a B.Tech. in Instrumentation and Control Engineering from NIT Trichy, India.


Interests

I'm broadly interested in supporting the safe and responsible use of AI by (a) analyzing the tensions, expectations, and subtle technical assumptions that underlie AI system development, and (b) developing software and practical tools that enable the responsible deployment of AI in real-world settings.

My current research focuses on the reasoning-like behavior of LLMs that underpins how people build trust in and make decisions with these systems, and centers on two interrelated inquiries:

  • Interpreting and framing LLM reasoning for AI safety. The reasoning behavior of LLMs is often viewed as a structured process with discrete steps leading to a conclusion. My research unpacks this conception to develop theoretically grounded frameworks for reinterpreting reasoning in ways that better support AI safety.
  • Evaluating and improving LLM reasoning beyond logical correctness. Most benchmarks assess the logical correctness of reasoning but overlook the ambiguities in validating the components of natural language reasoning, especially in socially critical contexts such as toxicity understanding. My research addresses this gap by developing methods and systems for analyzing non-formal reasoning in LLMs, focusing on unexpressed and ambiguous premises and assumptions.

My research interests are shaped in part by my experience working with AI and data in industry before my Ph.D. During that time, I built analytical dashboards for public-sector organizations, designed algorithmic incentive schemes to promote public awareness initiatives, and developed an open-source library called DiCE for explaining the decisions of AI models, which is now an integral part of Microsoft's Responsible AI Toolbox.


Approach

My research sits at the intersection of HCI, NLP, and FAccT. I ground my methods in sociopolitical and philosophical theories and take a human-centered approach, drawing on user studies with AI practitioners and other stakeholders across AI workflows. My work produces both (a) theoretical frameworks that account for real-world constraints and sociotechnical complexities, and (b) software and systems for the informed and reflective use of LLM reasoning in practice.

