Mithril Security restores trust in AI by protecting data. Our confidential-computing-based solution lets ML engineers deploy secured AI models without ever accessing users' data.
The most secure confidential computing solution, with a specialised codebase restricted to AI-only operations.
Keep your AI fast while securing data with AI-optimised hardware and software.
Deploy our solution easily, on-premises or in the cloud, thanks to our Dockerized inference server.
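A minimal deployment sketch: the image name, registry, and port below are placeholders, not the actual published artifacts; check the official documentation for the real image.

```shell
# Pull and run the inference server container.
# NOTE: image name and port are hypothetical placeholders.
docker run -d -p 50051:50051 <your-registry>/secure-inference-server:latest
```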
Mithril Security has developed all the tools and infrastructure needed to provide seamless, secure AI inference:
Convert your trained PyTorch/TensorFlow model into ONNX or NNEF format for inference.
Drop your exported model inside our secure inference server. This can be done on our infrastructure or yours.
Connect to and consume the service from your client application using our SDK.
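The client flow can be sketched in pseudocode; every module and method name below is a hypothetical placeholder, not the SDK's real API, so consult the SDK reference for the actual calls.

```
# Hypothetical client flow -- names are placeholders, not the real SDK API.
client = sdk.connect(server_address)          # verifies the secure enclave
client.upload_model("model.onnx")             # sent over an encrypted channel
result = client.run_inference(input_tensor)   # data stays encrypted end to end
```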
Thanks to end-to-end encryption during inference, users' data and model weights are never exposed in the clear to anyone.
This means even if the infrastructure is compromised, users' data will not be leaked.
Keep your existing workflows largely unchanged. Because we leverage open standards such as ONNX, you can deploy AI models with secure inference with little effort.
Benefit from the highest level of security with end-to-end encryption, while maintaining high throughput.
When running ResNet50, our server proved to be more than 250x faster than traditional privacy-enhancing technologies such as homomorphic encryption.