BlindLlama

Speed up LLM adoption and cut costs without compromising on privacy

BlindLlama is a Zero-Trust AI API that lets you quickly evaluate open-source LLMs on confidential data.

BlindLlama's architecture

BlindLlama is composed of two main parts:
- An open-source client-side Python SDK that verifies, before any data is sent, that the hardened environment really does guarantee your data cannot be exposed to malicious servers that could otherwise intercept and forward it (see the usage sketch after this list).

- An open-source server, which we call an enclave, made up of three key components that work together to serve models without exposing any data to the AI provider. We remove all potential server-side leakage channels, from network access to logs, and use TPMs to provide cryptographic proof that those privacy controls are in place.
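
To give a feel for how this looks in practice, here is a minimal usage sketch. The package name `blind_llama`, the `completion()` call, and the model identifier are illustrative assumptions rather than the confirmed SDK API; the point is that attestation is verified client-side before any prompt leaves your machine.

```python
# Hypothetical sketch of the client-side flow; the package name `blind_llama`,
# the `completion()` function, and the model name are assumptions used for
# illustration only.
import blind_llama

# Before sending anything, the SDK checks the enclave's TPM-backed attestation.
# If the hardened environment cannot be verified, the call fails and the prompt
# never leaves the client.
response = blind_llama.completion(
    model="llama-2-70b-chat",
    prompt="Summarize this confidential report: ...",
)

print(response)
```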
Read our docs

Deploy large language models with BlindLlama

Effortless Open-Source LLM Integration with Secure, Transparent APIs and End-to-End Data Protection
Confidentiality

We serve AI models in a hardened environment that ensures data is never exposed, as all external access is removed.

Verifiability

We use secure hardware to provide cryptographic proof that your data remains confidential, so you don't have to take our word for it.
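
To make the idea concrete, the sketch below shows the kind of check a client could perform: comparing a measurement reported by the secure hardware against an expected value published for the audited environment. The names and values are hypothetical assumptions used purely to illustrate the principle, not the actual BlindLlama verification code.

```python
# Conceptual sketch only: illustrates measurement-based verification, not the
# actual BlindLlama protocol. All names and values here are hypothetical.
import hashlib

# Expected measurement of the audited, hardened serving environment
# (in practice this would be published alongside the open-source code).
EXPECTED_MEASUREMENT = hashlib.sha256(b"audited-enclave-image-v1").hexdigest()

def verify_enclave(reported_measurement: str) -> None:
    """Refuse to send data unless the reported measurement matches."""
    if reported_measurement != EXPECTED_MEASUREMENT:
        raise RuntimeError("Enclave verification failed: data will not be sent")

# A matching measurement means the server is running the expected,
# leakage-free environment; only then does the client send its prompt.
verify_enclave(hashlib.sha256(b"audited-enclave-image-v1").hexdigest())
print("Enclave verified, safe to send confidential data")
```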

Learn more
Join the community
GitHub
Contribute to our project by opening issues and PRs.
Discord
Join the community, share your ideas, and talk with Mithril’s team.
Join the discussion
Contact us
We are happy to answer any questions you may have, and welcome suggestions.
Contact us