Building safer AI for mental health
Multiphasic Labs builds tooling to evaluate and improve the safety of AI systems in sensitive domains—starting with mental health chatbots.
View our MVP on GitHub →What we're building
Our first product is the Mental Health Safety Tester—a Python CLI tool for scripted pre-deployment testing of mental-health-oriented chatbots. It runs synthetic vulnerable-user personas against a target system, then uses an LLM-as-judge to score responses against clinical safety criteria.
Clinical Testing Tool (MVP)
Scripted personas, configurable judge model, and structured JSON results. Designed as a building block for offline safety evaluation—not for evaluating real users. Open on GitHub →
Get in touch
We're early-stage and focused on making AI systems safer in high-stakes contexts. For collaboration or feedback, reach out via GitHub.