About

Hello! I'm a Ph.D. candidate in the Faculty of Information, a graduate fellow of the Data Sciences Institute (DSI), and a graduate affiliate of the Schwartz Reisman Institute for Technology and Society (SRI) at the University of Toronto. I'm co-advised by Prof. Ishtiaque Ahmed and Prof. Shion Guha. I am also a member of the Department of Computer Science's Third Space research group, the Dynamic Graphics Project lab, and the iSchool's Human-Centred Data Science Lab.

Before starting my Ph.D., I worked as a Data Scientist at Everwell Health Solutions and as an AI Center Fellow at Microsoft Research India. I completed my M.Sc. in Artificial Intelligence at KU Leuven, Belgium, and my B.Tech. in Instrumentation and Control Engineering at NIT Trichy, India.


Interests

I'm broadly interested in supporting the safe and responsible use of AI by (a) analyzing the norms, tensions, and subtle technical assumptions that underlie AI system development, and (b) developing software and frameworks that translate theoretical and qualitative analyses into practical mechanisms for the responsible deployment of AI in real-world settings.

My current research focuses on how AI practitioners reason, and how LLMs appear to reason, in ambiguous contexts related to AI safety. It revolves around the following lines of inquiry:

  • Interpreting how AI practitioners reason through ambiguities. Ambiguity—situations where language carries multiple possible meanings—is unavoidable in real-world communication. These ambiguities can shape how practitioners interpret problems, reason about risks, and make safety-related decisions. My research examines how such ambiguities arise in AI development and how they influence practitioners' reasoning in practice.
  • Reframing LLM reasoning for AI safety. The reasoning behavior of LLMs is often viewed as a structured process with discrete steps leading to a conclusion. My research unpacks this conception to develop theoretically grounded frameworks for reframing reasoning in ways that better support AI safety.
  • Evaluating and improving LLM reasoning beyond logical correctness. Most benchmarks assess the logical correctness of reasoning but overlook the ambiguities involved in validating the various components of natural language reasoning, especially in socially critical contexts such as toxicity understanding. My research addresses this by developing methods and systems for analyzing non-formal reasoning in LLMs, focusing on unexpressed and ambiguous premises and assumptions.

My research interests are shaped in part by my experience working with AI and data in industry before my Ph.D. During that time, I designed analytical dashboards for public-sector organizations, developed algorithmic incentive schemes to promote public awareness initiatives, and built an open-source library called DiCE for explaining the decisions of AI models, which is now an integral part of Microsoft's Responsible AI Toolbox.


Approach

My research sits at the intersection of HCI, NLP, and FAccT. I ground my methods in sociopolitical and philosophical theories and take a human-centered approach, drawing on user studies with AI practitioners and other stakeholders across AI workflows. My work produces both (a) theoretical frameworks that account for real-world constraints and sociotechnical complexities, and (b) software and systems that support the informed and reflective use of LLM reasoning in practice.

