An interview with the co-founder of an AI diagnostics platform on evidence, humility, and clinical trust
Key Takeaways
- Early detection only matters if it changes outcomes
- AI must explain uncertainty, not hide it
- Clinical trust is built through evidence, not demos
- Integration beats standalone tools in healthcare
- Responsible scaling matters more than rapid adoption
Early disease detection has long been a promise of modern medicine, but translating that promise into reliable clinical practice has proven difficult. Data is abundant, yet signals are often subtle, noisy, and unevenly distributed across populations. For Dr. Anika Rao, co-founder and Chief Scientific Officer of the AI diagnostics startup Virexa, the challenge is not building powerful models, but building ones clinicians can trust. A physician-scientist by training, Rao founded Virexa after years working at the intersection of clinical research and applied machine learning. In this interview, she discusses why early detection demands restraint, how AI should communicate uncertainty, and what it takes to move responsibly from research to real-world care.
Interview
Q1: What problem convinced you that AI could meaningfully improve early disease detection?
It wasn’t a single moment so much as a pattern I kept seeing. In clinical research, we often identify biomarkers or risk signals years before symptoms appear, but those insights rarely make it into routine care. The gap isn’t knowledge—it’s translation.
AI offered a way to bridge that gap by synthesizing weak signals across imaging, labs, and longitudinal records. But I was also very aware of how easy it is to overstate what models can do. Virexa was founded around a narrow question: can we surface risk earlier without creating false confidence or unnecessary intervention? If the answer wasn’t yes, we weren’t interested.
Q2: Early detection can raise concerns about overdiagnosis. How do you address that?
That concern is absolutely valid, and it’s one of the reasons we’re cautious about language. Detection alone isn’t the goal—actionability is. If identifying a risk earlier doesn’t lead to better outcomes, it may cause more harm than good.
Our models are designed to support stratification, not diagnosis. They help clinicians understand who may benefit from closer monitoring or additional testing, not what definitive condition a patient has. We’re explicit about confidence intervals and limitations, because responsible early detection requires acknowledging uncertainty rather than smoothing it away.
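The idea of surfacing uncertainty rather than a bare risk score can be illustrated with a small sketch. This is not Virexa's actual system (its internals are not public); it is a hypothetical example, using the standard-library `random` and `statistics` modules, of reporting a bootstrap interval over an ensemble of model scores and routing threshold-straddling cases to clinician review:

```python
# Hypothetical sketch (not Virexa's actual system): report a patient's
# predicted risk as an interval rather than a point estimate, using a
# bootstrap over an ensemble of model scores.
import random
import statistics

def risk_with_interval(ensemble_scores, n_boot=1000, alpha=0.05, seed=0):
    """Return (mean risk, lower, upper) from an ensemble of model scores.

    ensemble_scores: per-model predicted probabilities for one patient.
    The interval width makes model disagreement visible to the clinician.
    """
    rng = random.Random(seed)
    boot_means = []
    for _ in range(n_boot):
        sample = [rng.choice(ensemble_scores) for _ in ensemble_scores]
        boot_means.append(statistics.mean(sample))
    boot_means.sort()
    lower = boot_means[int((alpha / 2) * n_boot)]
    upper = boot_means[int((1 - alpha / 2) * n_boot) - 1]
    return statistics.mean(ensemble_scores), lower, upper

def triage(lower, upper, threshold=0.2):
    """Stratify, don't diagnose: an interval that straddles the threshold
    is flagged for clinician review instead of given an automatic label."""
    if lower > threshold:
        return "closer monitoring suggested"
    if upper < threshold:
        return "routine care"
    return "uncertain - clinician review"

scores = [0.31, 0.28, 0.35, 0.22, 0.30]  # toy ensemble outputs
mean, lower, upper = risk_with_interval(scores)
print(f"risk {mean:.2f} (95% CI {lower:.2f}-{upper:.2f}): "
      f"{triage(lower, upper)}")
```

The threshold and the triage labels are invented for illustration; the point is structural, that the output carries its own uncertainty rather than smoothing it away.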
Q3: How do you build trust with clinicians who are skeptical of AI diagnostics?
Trust starts with respecting clinical judgment. We don’t position Virexa as an authority; we position it as an assistant. That means showing our work—what data was used, which features contributed most, and where the model is less reliable.
We also spend a lot of time in clinical pilots, collecting feedback from physicians and adjusting accordingly. If a tool disrupts workflow or creates cognitive burden, it won’t be used, no matter how accurate it is. Adoption happens when clinicians feel the system was built with them, not for them.
Q4: What role does validation play in moving from research to real-world use?
Validation is everything. A model that performs well in a controlled dataset can fail quickly in diverse, real-world settings. We run prospective studies and continuously monitor performance across populations, sites, and data quality conditions.
Equally important is being willing to pause. If results don’t generalize, we stop and investigate rather than pushing forward. In diagnostics, credibility compounds slowly but can evaporate instantly. We’d rather delay expansion than compromise evidence.
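The monitoring-and-pause discipline Rao describes can be sketched in a few lines. This is an assumed workflow, not Virexa's published pipeline: compute a simple metric (here, sensitivity) per site and escalate any site that falls below an agreed floor, as a trigger to pause rollout and investigate rather than push forward.

```python
# Hypothetical sketch (assumed workflow, not Virexa's published pipeline):
# track sensitivity per deployment site and flag sites where the model
# no longer generalizes, as a trigger to pause and investigate.
from collections import defaultdict

def sensitivity_by_site(records):
    """records: (site, predicted_positive, truly_positive) tuples.
    Returns per-site sensitivity (true positive rate) among true cases."""
    true_pos = defaultdict(int)
    cases = defaultdict(int)
    for site, predicted, truth in records:
        if truth:
            cases[site] += 1
            if predicted:
                true_pos[site] += 1
    return {site: true_pos[site] / cases[site] for site in cases}

def sites_to_review(records, floor=0.8):
    """Sites whose sensitivity falls below the floor are escalated for
    investigation instead of continued expansion."""
    return sorted(site for site, sens in sensitivity_by_site(records).items()
                  if sens < floor)

records = [
    ("site_a", True, True), ("site_a", True, True), ("site_a", False, True),
    ("site_b", True, True), ("site_b", True, True), ("site_b", True, True),
]
print(sites_to_review(records))  # site_a catches 2 of 3 true cases, so it is flagged
```

The 0.8 floor and the choice of sensitivity as the metric are illustrative placeholders; a real program would monitor several metrics across populations and data-quality strata, as Rao describes.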
Q5: As a co-founder, how do you balance scientific rigor with startup pressures?
It’s a constant tension. Startups are incentivized to move fast, but science—and healthcare—move deliberately for a reason. My role is often to slow things down just enough to ask hard questions.
We’ve built a culture where skepticism is encouraged internally. If someone can’t explain why a result holds up under scrutiny, it doesn’t ship. Growth matters, but not at the expense of trust. In the long run, scientific discipline is a competitive advantage, even if it doesn’t always feel that way day to day.
Looking Forward
Virexa’s approach reflects a broader maturation in AI-driven healthcare, where enthusiasm is increasingly tempered by responsibility. Rao’s insistence on evidence, transparency, and clinical partnership challenges the notion that faster is always better in medical innovation. As AI diagnostics move closer to the front lines of care, the companies that endure may be those that embrace uncertainty rather than obscure it. In the high-stakes world of early disease detection, seeing clearly—and cautiously—may be the most meaningful breakthrough of all.