Ramaravind K M



Email
Google Scholar
GitHub
HuggingFace
LinkedIn

Recent News
  • Leading a CRAFT workshop at FAccT 2026 on Assumptions and Ambiguities in Canada's Algorithmic Impact Assessment. More updates soon!
  • Selected as UofT's nominee to the Human-Computer Interaction Consortium (HCIC) 2026 in Colorado.
  • Presented my paper on Argument-based Consistency in LLM explanations at EACL 2026.

About

Hello! I'm a Ph.D. candidate in the Faculty of Information, a graduate fellow of the Data Sciences Institute (DSI), and a graduate affiliate of the Schwartz Reisman Institute for Technology and Society (SRI) at the University of Toronto. I'm co-advised by Prof. Ishtiaque Ahmed and Prof. Shion Guha. I am also a member of the Department of Computer Science's Third Space research group, the Dynamic Graphics Project (DGP) lab, and the iSchool's Human-Centred Data Science Lab.

Before starting my Ph.D., I worked as a Data Scientist at Everwell Health Solutions and as an AI Center Fellow at Microsoft Research India. I completed my M.Sc. in Artificial Intelligence at KU Leuven, Belgium, and my B.Tech. in Instrumentation and Control Engineering at NIT Trichy, India.


Interests

AI safety in practice is inherently ambiguous. Navigating these ambiguities meaningfully requires reasoning—about AI practitioners' own assumptions and decisions, and about LLMs' reasoning-like behavior in safety-critical contexts. This reasoning challenge in AI safety is the central focus of my research, which I pursue through two complementary research directions:

  • Reflective Frameworks: How can practitioners critically examine their own assumptions and decisions in AI safety contexts? I develop frameworks and reflective materials that surface what they take for granted and make implicit choices explicit.
  • Methods and Metrics: How can practitioners evaluate LLMs' reasoning-like behavior in safety-critical contexts with greater depth and nuance than existing benchmarks allow? I develop theoretically grounded methods and metrics that reveal where LLM reasoning is inconsistent, insufficient, or unsupported.

These two efforts are mutually reinforcing: reflective frameworks prompt practitioners to critically examine their own reasoning, while my methods and metrics give them more rigorous and principled ways to evaluate LLM reasoning. Together, they help practitioners make AI safety decisions reflectively and with appropriate nuance, rather than defaulting to existing benchmarks or intuition alone.

My research interests are shaped in part by my experience working with AI and data in industry before my Ph.D. During that time, I designed analytical dashboards for public-sector organizations, built algorithmic incentive schemes to promote public-awareness initiatives, and developed DiCE, an open-source library for explaining the decisions of AI models through counterfactual examples, which is now an integral part of Microsoft's Responsible AI Toolbox.
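
To give a concrete sense of what DiCE does: given a trained model and an input, it searches for counterfactual examples, i.e., minimally changed inputs that would flip the model's prediction. Below is a minimal sketch of typical usage; the dataset and model are illustrative stand-ins, not drawn from any project described above.

    # Minimal DiCE sketch (assumes dice-ml and scikit-learn are installed;
    # the dataset and model here are illustrative choices only).
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    import dice_ml

    # A small, all-numeric tabular classification task
    df = load_breast_cancer(as_frame=True).frame.rename(columns={"target": "label"})
    features = [c for c in df.columns if c != "label"]
    model = RandomForestClassifier(random_state=0).fit(df[features], df["label"])

    # Wrap the data and model in DiCE's interfaces
    d = dice_ml.Data(dataframe=df, continuous_features=features, outcome_name="label")
    m = dice_ml.Model(model=model, backend="sklearn")
    exp = dice_ml.Dice(d, m, method="random")

    # Ask: what minimal feature changes would flip the prediction for this instance?
    query = df[features].iloc[[0]]
    cfs = exp.generate_counterfactuals(query, total_CFs=3, desired_class="opposite")
    cfs.visualize_as_dataframe(show_only_changes=True)

The output lists counterfactual rows showing only the features that changed, which is what makes such explanations actionable for end users.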


Approach

My research sits at the intersection of HCI and NLP, drawing on user studies with AI practitioners and other stakeholders across AI workflows for a human-centered approach. I ground my methods in sociopolitical and philosophical literature, particularly non-ideal justice theories and informal logic, which gives me the conceptual vocabulary to rigorously examine the normative and ambiguous dimensions of AI safety questions.

