BASTIONAI: Secure multi-party AI training with Confidential Computing.

About this event

The hybrid-format webinar "BASTIONAI: Secure multi-party AI training with Confidential Computing" was held at the Sorbonne Center for Artificial Intelligence (SCAI) on October 17 at 6:30 pm.

Multi-party learning is key to accessing more data and developing more efficient AI. In healthcare, for instance, data is often scarce and siloed in small datasets. Yet a high level of data protection is crucial to convincing more organizations to collaborate.

Techniques such as Federated Learning (FL) have emerged to reduce the risk of training models on multiple private datasets. Yet their deployment complexity and heavy overhead make them a poor fit for many secure multi-party training needs. That is why we, at Mithril Security, are building BastionAI, a frictionless, privacy-friendly deep learning framework.
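To give a concrete idea of Federated Learning, here is a minimal sketch of Federated Averaging (FedAvg), the aggregation step at the heart of most FL schemes: each party trains locally on its own data, and a coordinator averages the resulting weights, weighted by local dataset size. All names are illustrative; this is not BastionAI's API.

```python
def fed_avg(client_weights, client_sizes):
    """Average per-client weight vectors, weighted by local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two hypothetical hospitals contribute locally trained weights;
# the larger dataset pulls the global model closer to its weights.
global_weights = fed_avg(
    client_weights=[[1.0, 2.0], [3.0, 4.0]],
    client_sizes=[100, 300],
)
print(global_weights)  # → [2.5, 3.5]
```

Note that only weights leave each party, never raw data, which is the basic privacy argument for FL; the overhead mentioned above comes from orchestrating many such rounds across organizations.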

This webinar explains why we need secure AI training solutions and gives an overview of Federated Learning. We then present BastionAI, our new solution for secure training, and give a live demo of fine-tuning a DistilBERT model on a small private dataset with Differential Privacy.

Organization:

  • Introduction - Why do we need secure AI training?
  • Overview of Federated Learning
  • Presentation of BastionAI, our multi-party secure training framework project
  • Q&A session
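The Differential Privacy step used in the demo typically follows the DP-SGD recipe: clip each per-sample gradient to a maximum L2 norm, average, and add calibrated Gaussian noise. The pure-Python sketch below illustrates that mechanism only; a real fine-tuning run would rely on a DP library (such as Opacus), and all function names here are hypothetical.

```python
import math
import random

def clip_grad(grad, max_norm):
    """Scale a per-sample gradient so its L2 norm is at most max_norm."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, max_norm / norm) if norm > 0 else 1.0
    return [g * scale for g in grad]

def dp_sgd_step(per_sample_grads, max_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip, average, and noise the gradients of one batch (DP-SGD core)."""
    rng = rng or random.Random(0)
    clipped = [clip_grad(g, max_norm) for g in per_sample_grads]
    n = len(clipped)
    avg = [sum(col) / n for col in zip(*clipped)]
    sigma = noise_multiplier * max_norm / n
    return [a + rng.gauss(0.0, sigma) for a in avg]

# With noise_multiplier=0 the step reduces to averaging clipped gradients:
# the first gradient (norm 5.0) is clipped to [0.6, 0.8], the second kept.
grads = [[3.0, 4.0], [0.3, 0.4]]
print(dp_sgd_step(grads, max_norm=1.0, noise_multiplier=0.0))  # ≈ [0.45, 0.6]
```

Clipping bounds any single sample's influence on the update, and the Gaussian noise makes the released gradients differentially private, which is what lets a model be fine-tuned on private data with a quantifiable privacy guarantee.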

Organized by: Daniel HUYNH, CEO of Mithril Security
Hosted by: SCAI

Join the community
GitHub
Contribute to our project by opening issues and PRs.
Discord
Join the community, share your ideas, and talk with Mithril’s team.
Contact us
We are happy to answer any questions you may have, and welcome suggestions.