Interpretable Virtual Cell Simulation

We transform black-box foundation models into transparent tools for drug discovery. Our interpretability platform reveals the biological mechanisms behind AI predictions, helping you validate drug candidates and de-risk your pipeline.

Our Techniques

We use cutting-edge AI methods to decode how foundation models understand biology.

Sparse Autoencoders

We decode the AI's internal representations to identify specific biological concepts it has learned, like cell types and disease pathways.
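For readers who want a concrete picture, the core idea can be sketched in a few lines of PyTorch. Everything here is illustrative: the layer sizes, variable names, and the random placeholder batch are assumptions standing in for real foundation-model activations, not our production pipeline.

```python
# A minimal sparse-autoencoder sketch over model activations, assuming hidden
# states have been exported as a tensor of shape (n_cells, d_model).
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, x):
        # ReLU keeps feature activations non-negative; an L1 penalty on them
        # during training encourages each input to use only a few features.
        features = torch.relu(self.encoder(x))
        reconstruction = self.decoder(features)
        return reconstruction, features

def train_step(sae, activations, optimizer, l1_coeff=1e-3):
    reconstruction, features = sae(activations)
    recon_loss = ((reconstruction - activations) ** 2).mean()
    sparsity_loss = features.abs().mean()
    loss = recon_loss + l1_coeff * sparsity_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Placeholder usage: in practice the batch would be hidden states from the
# foundation model, and sparse features are inspected for biological meaning.
sae = SparseAutoencoder(d_model=512, d_features=4096)
optimizer = torch.optim.Adam(sae.parameters(), lr=1e-4)
activations = torch.randn(256, 512)
print(train_step(sae, activations, optimizer))
```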

Circuit Tracing

We map how the model connects these concepts to make predictions, revealing the biological mechanism behind each drug's effects.
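One simple building block of this kind of analysis is feature ablation: zero out a learned feature and measure how the model's prediction shifts. The sketch below is a toy illustration under that assumption; the decoder, prediction head, and feature activations are random stand-ins rather than components of our actual system.

```python
# A toy ablation sketch: score each learned feature by how much removing it
# changes a downstream prediction. All modules and shapes are placeholders.
import torch
import torch.nn as nn

d_model, d_features = 512, 4096
decoder = nn.Linear(d_features, d_model)          # stand-in for an SAE decoder
head = nn.Linear(d_model, 1)                      # stand-in for a prediction head
features = torch.relu(torch.randn(256, d_features))  # placeholder feature activations

def feature_effect(feature_idx: int) -> float:
    """Change in the head's output when one learned feature is zeroed out."""
    with torch.no_grad():
        baseline = head(decoder(features))
        ablated = features.clone()
        ablated[:, feature_idx] = 0.0
        perturbed = head(decoder(ablated))
    return (baseline - perturbed).abs().mean().item()

# Features with large effects are candidate nodes in the circuit behind a prediction.
ranked = sorted(range(10), key=feature_effect, reverse=True)
print(ranked)
```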

Latest Research

Sparse Autoencoders Reveal Interpretable Features in the Tahoe Single-Cell Foundation Model

We trained sparse autoencoders on the Tahoe-x1 model to decompose its learned representations into interpretable biological features, revealing how the model encodes cell types, pathways, and disease states.

November 2025

Our Team

Stephen Lu

CEO

Stephen leads computational biology and genomics development as well as business relations. He is a PhD candidate at UC Berkeley focused on AI applied to biology.

Website →
Thomas Jiralerspong

CTO & CSO

Thomas leads mechanistic interpretability, causal inference, and core AI research. He is a PhD student with Yoshua Bengio and Guillaume Lajoie at Mila, and a former research fellow at Anthropic.

Website →