Mithril Security

Mithril Security provides a secure inference server for AI models. Our SDK lets ML engineers deploy AI models with end-to-end encryption during inference, without requiring any cryptography expertise.

Why secure inference

Security

Securing model weights, especially on an untrusted public cloud, is key to protecting the model's intellectual property. Preventing leaks of users' data is also vital to avoiding hefty fines.

Compliance

Regulation has tightened with the GDPR and CCPA. In April 2021, the European Union unveiled a new legal framework for AI in which privacy is central.

Trust

Gaining users' trust is essential today, as repeated leaks, for instance of conversations recorded by voice assistants, have cast doubt on how data is handled.

How it works

Step one

Convert your trained PyTorch/TensorFlow model into ONNX or NNEF format for inference.
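
As an illustration, here is a minimal sketch of exporting a model with PyTorch's built-in ONNX exporter; the ResNet model, input shape, and file name are placeholders for your own network.

    import torch
    import torchvision

    # Load (or train) the model you want to serve; a pretrained ResNet-18
    # is used here purely as a placeholder for your own network.
    model = torchvision.models.resnet18(pretrained=True)
    model.eval()

    # A dummy input with the shape the model expects at inference time.
    dummy_input = torch.randn(1, 3, 224, 224)

    # Export to ONNX so the model can be served for inference.
    torch.onnx.export(model, dummy_input, "resnet18.onnx")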

Step two

Drop your exported model into our secure inference server. This can be done on our infrastructure or yours.
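
The exact call depends on our SDK; as a hedged sketch, assuming a hypothetical mithril_sdk package exposing connect() and upload_model() helpers, deployment could look like this.

    import mithril_sdk  # hypothetical package name, for illustration only

    # Connect to the secure inference server (the address is a placeholder).
    client = mithril_sdk.connect(addr="secure-server.example.com")

    # Upload the ONNX model exported in step one.
    client.upload_model("resnet18.onnx")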

Step three

Connect to the service and consume it from your client application using our SDK.
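
Again as a hedged sketch, reusing the hypothetical mithril_sdk names from the previous example plus an assumed run_model() call, the client side might look like this.

    import numpy as np
    import mithril_sdk  # hypothetical package name, for illustration only

    # Connect from the client application; traffic is encrypted end to end.
    client = mithril_sdk.connect(addr="secure-server.example.com")

    # Prepare an input tensor matching the shape used at export time.
    data = np.random.rand(1, 3, 224, 224).astype(np.float32)

    # Run secure inference on the uploaded model.
    prediction = client.run_model("resnet18.onnx", data)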

Zero-trust

Thanks to end-to-end encryption during inference, users' data and model weights are never exposed to anyone in the clear.


This means even if the infrastructure is compromised, users' data will not be leaked.

Drag and drop your model

Keep your existing workflows largely unchanged. Because we build on open-source standards such as ONNX, you can deploy AI models with secure inference with minimal effort.

Fast inference

Benefit from the highest level of security with end-to-end encryption, while maintaining high throughput.


Our server has been tested on ImageNet with MobileNet and ResNet models and showed a 1.3x slowdown, depending on model size.

Ready to get started?