Mithril Security provides a secure inference server for AI models. We help ML engineers deploy AI models with end-to-end encryption during inference, without requiring any cryptography skills, thanks to our SDK.
Securing model weights, especially on an untrusted public Cloud, is key to protecting model IP. Preventing leaks of users' data is also vital to avoid hefty fines.
Regulation has tightened with GDPR and CCPA. In April 2021, the European Union disclosed a new legal framework for AI in which privacy is central.
Gaining users' trust is key today, as the number of leaks, for instance of conversations recorded by voice assistants, has cast doubt on how data is handled.
Mithril Security has developed all the tools and infrastructure needed to provide seamless secure AI inference.
Convert your trained PyTorch/TensorFlow model into ONNX or NNEF format for inference.
Drop your exported model into our secure inference server. This can be done on our infrastructure or yours.
Connect to and consume the service from your client application using our SDK.
Thanks to end-to-end encryption during inference, users' data and model weights are never exposed in the clear to anyone.
This means that even if the infrastructure is compromised, users' data will not be leaked.
Keep your existing workflows largely unchanged. Because we leverage open-source standards such as ONNX, you can deploy AI models with secure inference with little effort.
Benefit from the highest level of security with end-to-end encryption, while keeping a high throughput.
Our server has been tested on ImageNet with MobileNets and ResNets and showed roughly a 1.3x slowdown, depending on model size.