How to reconcile privacy and AI with Confidential Computing

About this event

The hybrid-format workshop "AI and privacy" was held at the Sorbonne Center for Artificial Intelligence (SCAI) on February 15 at 6 pm.

In recent years, AI has been revolutionizing many fields, from healthcare to biometrics. However, data remains siloed and under-shared because of security and privacy concerns, in particular the fear of data exposure and IP leakage.

Confidential Computing is a recent technology that keeps sensitive data protected end to end, even while it is being analyzed. By leveraging it, data owners can share their data with AI companies, for instance to train or query an AI model, without ever risking their data being stolen, leaked, or used for any other purpose. Data remains protected even when shared with third parties.
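The workflow above can be sketched in miniature. The following is a toy simulation, not a real Confidential Computing stack: the "enclave", its key, and the XOR encryption are all stand-ins for what hardware (e.g. a secure enclave with vendor-signed attestation and a TLS channel terminated inside it) would provide. All names here are hypothetical.

```python
import hashlib
import os

# Toy "enclave": in real Confidential Computing, the measurement below is
# computed and signed by the hardware; here we simulate it with a hash of
# the enclave's code.
ENCLAVE_CODE = b"def predict(x): return sum(x) / len(x)"
MEASUREMENT = hashlib.sha256(ENCLAVE_CODE).hexdigest()
ENCLAVE_KEY = os.urandom(32)  # stands in for a hardware-protected key

def attest():
    """Return the enclave's measurement so the client can verify it."""
    return MEASUREMENT

def enclave_predict(encrypted, key):
    """Decrypt inside the 'enclave', run the model, return the result.
    The host running this code never sees the plaintext in a real setup."""
    data = bytes(b ^ k for b, k in zip(encrypted, key))
    values = list(data)
    return sum(values) / len(values)

# --- Client (data owner) side ---
# 1. Verify the attestation: the enclave runs exactly the code we expect.
expected = hashlib.sha256(ENCLAVE_CODE).hexdigest()
assert attest() == expected, "enclave code was tampered with"

# 2. Encrypt the sensitive data before sending it. A one-time-pad XOR
#    stands in for the encrypted channel into the enclave.
secret_data = bytes([10, 20, 30, 40])
pad = ENCLAVE_KEY[: len(secret_data)]
encrypted = bytes(b ^ k for b, k in zip(secret_data, pad))

# 3. Only the enclave can decrypt and compute on the data.
result = enclave_predict(encrypted, pad)
print(result)  # 25.0
```

The key property being illustrated: the client only sends data after verifying *what code* will process it, and the data is encrypted in transit so the infrastructure operator never handles it in the clear.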

This workshop introduces the high-level principles of Confidential Computing and shows how it can be used to deploy privacy-friendly AI models, including Transformers for confidential document analysis and ResNets for medical imaging.

Organization:

  • Presentation of the current challenges
  • Introduction to Confidential Computing
  • Example of confidential AI deployment with Mithril Security
  • Guest speakers (Philippe Beraud, Chief Technology and Security Advisor @Microsoft and Flemming Backhaus, Sales Development Account Manager Azure @Intel)
  • Q&A

This webinar was held at HEC Records in Paris on June 16th at 6:30 PM.
Organized by Daniel HUYNH, CEO of Mithril Security
Hosted by HEC Records

Join the community
  • GitHub: contribute to our project by opening issues and PRs.
  • Discord: join the community, share your ideas, and talk with Mithril’s team.
  • Contact us: we are happy to answer any questions you may have, and welcome suggestions.