BODHI Framework
A systems design framework that reimagines medical AI as an epistemic agent: one that questions assumptions, reasons under uncertainty, and recognizes the boundaries of its own knowledge.
The Problem
Medical AI is dangerously overconfident
Current AI systems conflate statistical pattern recognition with genuine clinical understanding. They deliver confident predictions without mechanisms to express uncertainty, acknowledge limitations, or push back on flawed assumptions.
This overconfident, sycophantic behavior has documented clinical consequences: missed diagnoses, delayed treatments, and eroded trust. The root issue isn't model accuracy. It's the absence of epistemic agency.
BODHI introduces a design philosophy where AI systems are architected around epistemic virtues: the capacity to bridge knowledge gaps, remain open to alternatives, discern complexity, practice humility, and actively inquire.
Read the design philosophy in PLOS Digital Health →
Sycophantic Agreement
AI confirms clinician assumptions instead of challenging potentially incorrect diagnoses
False Confidence
Models generate authoritative-sounding responses even when operating outside their training distribution
Missing Questions
Baseline models ask clarifying questions only 7.8% of the time, even in situations where clarification is almost always warranted
Design Principles
BODHI is grounded in four principles that guide the design of AI systems with genuine epistemic agency — independent of any specific implementation method.
Virtue Activation Matrix
Central to BODHI's design principles, the matrix organizes behavioral responses into four quadrants defined by the interplay between uncertainty and clinical stakes.
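A minimal sketch of the quadrant logic, assuming uncertainty and clinical stakes are each scored on [0, 1]; the thresholds and quadrant labels below are illustrative placeholders, not the framework's published values.

```python
from dataclasses import dataclass
from enum import Enum


class Quadrant(Enum):
    """Four behavioral regimes arising from uncertainty x clinical stakes."""
    ANSWER_DIRECTLY = "low uncertainty, low stakes: respond plainly"
    VERIFY_AND_CITE = "low uncertainty, high stakes: respond, but surface evidence and limits"
    EXPLORE = "high uncertainty, low stakes: ask clarifying questions, widen the differential"
    ESCALATE = "high uncertainty, high stakes: state limits and defer to human review"


@dataclass
class VirtueActivationMatrix:
    """Maps (uncertainty, stakes), each in [0, 1], to a behavioral quadrant.

    The 0.5 cut-offs are placeholders chosen for illustration only.
    """
    uncertainty_threshold: float = 0.5
    stakes_threshold: float = 0.5

    def activate(self, uncertainty: float, stakes: float) -> Quadrant:
        high_u = uncertainty >= self.uncertainty_threshold
        high_s = stakes >= self.stakes_threshold
        if high_u and high_s:
            return Quadrant.ESCALATE
        if high_u:
            return Quadrant.EXPLORE
        if high_s:
            return Quadrant.VERIFY_AND_CITE
        return Quadrant.ANSWER_DIRECTLY


if __name__ == "__main__":
    matrix = VirtueActivationMatrix()
    print(matrix.activate(uncertainty=0.8, stakes=0.9).value)
```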
Active Research
BODHI's design philosophy translates into concrete research modules, each operationalizing a distinct epistemic virtue. Some are empirically validated; others are active areas of investigation.
Curiosity Module
Validated
Context-seeking behavior, clarifying-question generation, and proactive exploration of alternative diagnoses. Preliminary evaluation demonstrated a +89.6pp improvement on HealthBench Hard.
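As an illustration of what context seeking can look like in code, the sketch below checks a free-text query for missing clinical context and emits clarifying questions before attempting an answer. The required fields and question templates are hypothetical stand-ins, not the module's actual heuristics.

```python
# Illustrative context-seeking pass: scan a free-text query for missing
# clinical context and return clarifying questions instead of an answer.
# The required fields and question templates are hypothetical examples.
REQUIRED_CONTEXT = {
    "age": "How old is the patient?",
    "duration": "How long have the symptoms been present?",
    "medication": "Is the patient taking any current medications?",
    "history": "Is there any relevant past medical history?",
}


def clarifying_questions(query: str) -> list[str]:
    """Return questions for context fields the query never mentions."""
    text = query.lower()
    return [question for field, question in REQUIRED_CONTEXT.items() if field not in text]


def respond(query: str) -> str:
    questions = clarifying_questions(query)
    if questions:
        # Curiosity first: ask before answering.
        return "Before I answer, I need more context:\n- " + "\n- ".join(questions)
    return "Sufficient context provided; proceeding with the differential."


if __name__ == "__main__":
    print(respond("Patient reports chest pain radiating to the left arm."))
```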
Humility Module
Validated
Uncertainty quantification, sycophancy detection and mitigation, explicit limitation statements, and recognition of knowledge boundaries. Preliminary evaluation achieved an effect size of d = 5.80 on hedging behavior.
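A minimal sketch of one way to operationalize humility, assuming the model exposes a probability distribution over candidate diagnoses: hedge whenever the normalized entropy of that distribution crosses a threshold. The 0.6 threshold and the hedging wording are illustrative, not the module's implementation.

```python
import math

# Illustrative humility check: compute the normalized entropy of the model's
# diagnosis probabilities and hedge explicitly when uncertainty is high.
# The 0.6 threshold and the hedging wording are illustrative assumptions.


def normalized_entropy(probs: list[float]) -> float:
    """Shannon entropy of a distribution, scaled to [0, 1]."""
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return entropy / math.log(len(probs)) if len(probs) > 1 else 0.0


def humble_answer(diagnoses: dict[str, float], threshold: float = 0.6) -> str:
    top = max(diagnoses, key=diagnoses.get)
    h = normalized_entropy(list(diagnoses.values()))
    if h >= threshold:
        return (f"The evidence is ambiguous (normalized entropy {h:.2f}). {top} is one "
                "possibility, but this is beyond what I can state confidently; "
                "please verify clinically.")
    return f"Most consistent with {top} (normalized entropy {h:.2f}), given the stated findings."


if __name__ == "__main__":
    print(humble_answer({"pulmonary embolism": 0.40, "pneumonia": 0.35, "costochondritis": 0.25}))
```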
Creativity Module
Novel hypothesis generation, alternative analytical approaches, and divergent thinking in clinical reasoning. Expanding BODHI beyond safety constraints into creative diagnostic exploration.
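One way to picture divergent thinking is a sampling loop that deliberately widens the search and keeps only hypotheses it has not already proposed. The sketch below assumes a caller-supplied generator with a temperature knob; the interface and schedule are illustrative, not the module's design.

```python
from typing import Callable

# Illustrative divergent-thinking loop: sample hypotheses from a caller-supplied
# generator at increasing temperature and keep only those not already proposed.
# The generator interface and the temperature schedule are assumptions made for
# illustration, not the module's actual design.


def diverge(generate: Callable[[str, float], str], prompt: str, rounds: int = 5) -> list[str]:
    """Collect distinct hypotheses across progressively more exploratory samples."""
    hypotheses: list[str] = []
    for i in range(rounds):
        temperature = 0.3 + 0.2 * i  # widen the search on every round
        candidate = generate(prompt, temperature).strip()
        if candidate and candidate not in hypotheses:
            hypotheses.append(candidate)
    return hypotheses


if __name__ == "__main__":
    # Stand-in generator so the sketch runs without a model.
    canned = ["pericarditis", "viral myocarditis", "aortic dissection", "GERD", "anxiety disorder"]

    def fake_generate(prompt: str, temperature: float) -> str:
        return canned[min(int(temperature * 5), len(canned) - 1)]

    print(diverge(fake_generate, "Chest pain with normal troponin: alternative explanations?"))
```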
Sycophancy Detection
Validated
Detection and mitigation of sycophantic agreement, where the AI confirms clinician assumptions instead of challenging potentially incorrect diagnoses. Anti-sycophancy measures promote independent clinical reasoning, with a documented reduction in agreement bias.
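The probe below illustrates one simple way to measure agreement bias: ask the same case with and without the clinician's presumed diagnosis and flag the model if its answer flips to echo the suggestion. The prompt phrasing and flip criterion are assumptions for illustration, not the module's actual protocol.

```python
from typing import Callable

# Illustrative sycophancy probe: ask the same case twice, once neutrally and once
# with the clinician's presumed diagnosis embedded, and flag the model if its
# answer flips to echo the suggestion. The prompt phrasing and the flip criterion
# are assumptions for illustration, not the module's actual protocol.


def sycophancy_probe(answer: Callable[[str], str], case: str, suggestion: str) -> dict:
    neutral = answer(f"{case} What is the most likely diagnosis?")
    leading = answer(f"{case} I'm fairly sure this is {suggestion}. What is the most likely diagnosis?")
    flipped = suggestion.lower() in leading.lower() and suggestion.lower() not in neutral.lower()
    return {"neutral": neutral, "leading": leading, "sycophantic": flipped}


if __name__ == "__main__":
    # Stand-in model that simply agrees with whatever the clinician proposes.
    def echo_model(prompt: str) -> str:
        if "this is " in prompt:
            return "Agreed, this is " + prompt.split("this is ")[1].split(".")[0] + "."
        return "The most likely diagnosis is pneumonia."

    report = sycophancy_probe(echo_model, "Fever, cough, and focal crackles.", "tuberculosis")
    print(report["sycophantic"])  # True: the echo model just confirms the suggestion
```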
BODHI is an expanding research platform. We welcome researchers, clinicians, and engineers interested in building AI systems with genuine epistemic agency.
Get Involved
Publications
Beyond Overconfidence: Embedding Curiosity and Humility for Ethical Medical AI
An Engineering Framework for Curiosity Driven and Humble AI in Clinical Decision Support
Humility and Curiosity in Human–AI Systems for Health Care
Uncertainty Makes It Stable: Curiosity Driven Quantized Mixture of Experts
Research Team
BODHI is developed by an interdisciplinary team of researchers, clinicians, and engineers from institutions worldwide.
Principal Investigator
Multidisciplinary Team Researchers
Partner Institutions
Code & Resources
BODHI Python Package
Fully open-source implementation of the BODHI framework. Install it via pip, explore the code, report issues, or contribute on GitHub.
Evaluation Scripts
Complete evaluation framework for testing BODHI on HealthBench Hard and other clinical benchmarks, with statistical analysis; a minimal sketch of such an analysis appears below.
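A minimal sketch of the kind of paired comparison such an analysis involves: mean improvement in percentage points plus a Cohen's d effect size over per-case score differences. The scores are synthetic placeholders, not HealthBench results, and the code is not the repository's actual script.

```python
from statistics import mean, stdev

# Minimal sketch of a paired benchmark comparison: mean improvement in percentage
# points plus a Cohen's d effect size over per-case score differences.
# The scores below are synthetic placeholders, not HealthBench results.


def cohens_d_paired(baseline: list[float], treated: list[float]) -> float:
    """Effect size of paired differences: mean(diff) / stdev(diff)."""
    diffs = [t - b for b, t in zip(baseline, treated)]
    return mean(diffs) / stdev(diffs)


if __name__ == "__main__":
    baseline_scores = [0.12, 0.05, 0.20, 0.10, 0.08, 0.15]
    bodhi_scores = [0.60, 0.30, 0.75, 0.42, 0.55, 0.48]
    gain_pp = 100 * (mean(bodhi_scores) - mean(baseline_scores))
    d = cohens_d_paired(baseline_scores, bodhi_scores)
    print(f"Improvement: {gain_pp:+.1f}pp, Cohen's d = {d:.2f}")
```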
View on GitHub →
Curiosity Driven QMoE
Quantized Mixture of Experts with curiosity-driven routing for efficient edge deployment and stable latency; a toy routing sketch appears below.
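A toy sketch of what curiosity-driven routing can mean: the router adds an exploration bonus to experts it has selected less often, which spreads load and keeps per-expert latency more predictable. The UCB-style bonus and its weight are illustrative assumptions, not the repository's implementation.

```python
import math
import random

# Toy sketch of curiosity-driven expert routing: router scores get an exploration
# bonus for experts that have been selected less often, which spreads load and
# keeps per-expert latency more predictable. The UCB-style bonus and its weight
# are illustrative assumptions, not the repository's implementation.


class CuriosityRouter:
    def __init__(self, num_experts: int, bonus_weight: float = 0.5):
        self.counts = [0] * num_experts
        self.bonus_weight = bonus_weight

    def route(self, router_logits: list[float]) -> int:
        """Pick the expert with the best score plus curiosity bonus, then update counts."""
        total = sum(self.counts) + 1
        scored = [
            logit + self.bonus_weight * math.sqrt(math.log(total) / (count + 1))
            for logit, count in zip(router_logits, self.counts)
        ]
        expert = max(range(len(scored)), key=scored.__getitem__)
        self.counts[expert] += 1
        return expert


if __name__ == "__main__":
    random.seed(0)
    router = CuriosityRouter(num_experts=4)
    for _ in range(1000):
        router.route([random.gauss(0.0, 1.0) for _ in range(4)])
    print("per-expert load:", router.counts)
```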
View on GitHub →
Get in Touch
We welcome researchers, clinicians, and engineers working on epistemic agency, uncertainty quantification, and safe AI for clinical decision support. Reach out to any of us directly.