Hello! I'm a Ph.D. candidate in the Faculty of Information, a graduate fellow of the Data Sciences Institute (DSI), and a graduate affiliate of the Schwartz Reisman Institute for Technology and Society (SRI) at the University of Toronto. I'm co-advised by Prof. Ishtiaque Ahmed and Prof. Shion Guha. I am also a member of the Department of Computer Science's Third Space research group, the Dynamic Graphics Project lab, and the iSchool's Human-Centred Data Science Lab.
Before starting my Ph.D., I worked as a Data Scientist at Everwell Health Solutions and an AI Center Fellow at Microsoft Research India. I completed my M.Sc. in Artificial Intelligence at KU Leuven, Belgium, and my B.Tech in Instrumentation and Control Engineering at NIT Trichy, India.
I'm broadly interested in supporting the safe and responsible use of AI by (a) analyzing the tensions, expectations, and subtle technical assumptions that underlie AI system development, and (b) developing software and practical tools that enable the responsible deployment of AI in real-world settings.
My current research focuses on the reasoning-like behavior of LLMs that underpins how people build trust in and make decisions with these systems, and centers on two interrelated inquiries:
My research interests are shaped in part by my experience working with AI and data in industrial settings prior to my Ph.D. During that time, I built analytical dashboards for public-sector organizations, designed algorithmic incentive schemes to promote public awareness initiatives, and developed DiCE, an open-source library for explaining the decisions of AI models, which now forms an integral part of Microsoft's Responsible AI Toolbox.
My research sits at the intersection of HCI, NLP, and FAccT. I take a human-centered approach, grounding my methods in sociopolitical and philosophical theories and drawing on user studies with AI practitioners and other stakeholders across AI workflows. My work produces both (a) theoretical frameworks that account for real-world constraints and sociotechnical complexities, and (b) software and systems that support informed and reflective use of LLM reasoning in practice.