
Hello! I'm a Ph.D. candidate in the Faculty of Information, a graduate fellow of the Data Sciences Institute (DSI), and a graduate affiliate of the Schwartz Reisman Institute for Technology and Society (SRI) at the University of Toronto. I'm co-advised by Prof. Ishtiaque Ahmed and Prof. Shion Guha. I am also a member of the Department of Computer Science's Third Space research group, the Dynamic Graphics Project lab, and the iSchool's Human-Centred Data Science Lab.
Before starting my Ph.D., I worked as a Data Scientist at Everwell Health Solutions and an AI Center Fellow at Microsoft Research India. I completed my M.Sc. in Artificial Intelligence at KU Leuven, Belgium, and my B.Tech in Instrumentation and Control Engineering at NIT Trichy, India.
AI safety in practice is inherently ambiguous. Navigating these ambiguities meaningfully requires reasoning, both about AI practitioners' own assumptions and decisions and about LLMs' reasoning-like behavior in safety-critical contexts. This reasoning challenge in AI safety is the central focus of my research, which I pursue through two complementary research directions: building reflective frameworks that help AI practitioners examine their own assumptions and decisions, and developing methods and metrics for evaluating LLMs' reasoning-like behavior in safety-critical contexts.
These two efforts are mutually reinforcing: reflective frameworks prompt practitioners to critically examine their own reasoning, while the methods and metrics I develop give them more rigorous and principled ways to evaluate LLM reasoning than existing benchmarks. Together, they help practitioners make AI safety decisions reflectively and with appropriate nuance, rather than defaulting to existing benchmarks or intuition alone.
My research interests are shaped in part by my experience working with AI and data in industrial settings prior to my Ph.D. During that time, I built analytical dashboards for public-sector organizations, designed algorithmic incentive schemes to promote public awareness initiatives, and developed DiCE, an open-source library for explaining the decisions of AI models, which is now an integral part of Microsoft's Responsible AI Toolbox.
My research sits at the intersection of HCI and NLP, taking a human-centered approach that draws on user studies with AI practitioners and other stakeholders across AI workflows. I ground my methods in sociopolitical and philosophical literature, particularly non-ideal justice theories and informal logic, which gives me the conceptual vocabulary to rigorously examine the normative and ambiguous dimensions of AI safety questions.