AICert

Open-source tool to trace the provenance of AI models

Provide technical guarantees that your model comes from trustworthy sources, for easier AI compliance

Sign up for AICert beta testing:

AI lacks transparency and auditability

As AI booms and regulation intensifies, tracing which datasets and algorithms were used to train an AI model becomes key. However, providing technical proof of weight provenance remains an unsolved problem.

Even when a model and its full training procedure (software, hardware, and data) are open-sourced, the randomness introduced by software and hardware makes training non-reproducible: retraining does not yield bit-identical weights, so there is no way to prove that a model comes from a specific training procedure.

There is therefore no way today to establish trustworthy provenance for AI models. This poses regulatory and security issues, as models can contain backdoors or be trained on PII, non-consented, or copyrighted data, which is non-compliant with the EU AI Act.

AICert

An open-source framework to ensure a trustworthy supply chain for AI using secure hardware

AICert is the first AI provenance solution to provide cryptographic proof that a model is the result of applying a specific training algorithm to a specific training set.

AICert uses secure hardware, such as TPMs, to create unforgeable ID cards for AI models that cryptographically bind a model's hash to the hash of its training procedure.

This ID card serves as irrefutable proof of a model's provenance, showing that it comes from a trustworthy and unbiased training procedure.

Supported by the Future of Life Institute

Future-proof your AI models

Prove the absence of copyrighted data in training

Prove the absence of biased data in training

Prove the use of safety procedures during training

Designed for AI Teams

AICert is designed to let data science teams easily create certificates that encompass the information needed to trace a model's provenance.
1. Provide your training code as a Docker image, along with the training set
2. Provision machines with the required secure hardware and software stack
3. Run the training procedure on the secure hardware to produce a certificate
4. Share the certificate with users, who can verify the provenance of your model (see the sketch below)
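
As an illustration of step 4, here is a minimal verification sketch in Python. The certificate layout (a JSON file with a `model_hash` field) and the file names are assumptions made for this example, not AICert's actual format:

```python
import hashlib
import json

def verify_model_hash(weights_path: str, certificate_path: str) -> bool:
    """Check that locally downloaded weights match the model hash recorded
    in the certificate. The 'model_hash' field is an assumed example
    layout, not AICert's published format."""
    with open(certificate_path) as f:
        certificate = json.load(f)
    with open(weights_path, "rb") as f:
        local_hash = hashlib.sha256(f.read()).hexdigest()
    return local_hash == certificate["model_hash"]

if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    ok = verify_model_hash("model.safetensors", "aicert_certificate.json")
    print("weights match the certificate" if ok else "MISMATCH: do not trust")
```

A real verifier would also check the certificate's signature against the secure hardware's attestation key; the hash comparison above is only the final step.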

Cryptographic proof with secure hardware

At the core of our traceability solution is secure hardware. Secure hardware such as TPMs or secure enclaves offers code integrity properties, i.e. it can prove that a specific software stack was loaded, from the BIOS through the OS all the way up to the application.
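
For TPMs, the mechanism behind this is measured boot with Platform Configuration Registers (PCRs): each boot stage hashes the next component into a register before handing over control. The sketch below models this hash chain in Python; the stage names are placeholders:

```python
import hashlib

def extend_pcr(pcr: bytes, component: bytes) -> bytes:
    """TPM-style PCR extension: new_pcr = SHA-256(old_pcr || SHA-256(component))."""
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

# PCRs start at zero on reset; each stage measures the next one in order.
pcr = bytes(32)
for stage in [b"BIOS", b"bootloader", b"OS kernel", b"training application"]:
    pcr = extend_pcr(pcr, stage)

# The final value commits to the whole chain: changing, removing, or
# reordering any component produces a different PCR value.
print("final PCR:", pcr.hex())
```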

As the code and data used for training can be attested inside the secure hardware, we can create a certificate that binds the weights to the training code and data. This certificate is unforgeable and can be stored on a public ledger to prove that a specific model was trained using a specific training set and code.
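
To make the binding concrete, here is a minimal sketch of such a certificate. The JSON layout is an assumption for illustration, and a software ECDSA key stands in for the secure hardware's attestation key, which would produce the signature in AICert:

```python
import hashlib
import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Stand-ins for the real artifacts (weights, training image, dataset).
weights = b"...model weights..."
training_image = b"...Docker image of the training code..."
dataset = b"...training set..."

# The certificate binds the model hash to the training procedure's hashes.
certificate = {
    "model_hash": sha256_hex(weights),
    "training_code_hash": sha256_hex(training_image),
    "dataset_hash": sha256_hex(dataset),
}
payload = json.dumps(certificate, sort_keys=True).encode()

# Illustration only: a software key signs the payload here. In AICert the
# signature would come from the secure hardware's attestation key.
attestation_key = ec.generate_private_key(ec.SECP256R1())
signature = attestation_key.sign(payload, ec.ECDSA(hashes.SHA256()))

# Verification raises InvalidSignature if the payload was tampered with.
attestation_key.public_key().verify(signature, payload, ec.ECDSA(hashes.SHA256()))
print("certificate verified:", json.dumps(certificate, indent=2))
```

Because the hashes and the signature are public, anyone can later recompute a model's hash and check it against such a record, e.g. on a public ledger.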

Registration for AICert

We are currently developing AICert to make AI traceable and transparent by enabling AI builders to create certificates with cryptographic proof binding the weights to the training data and code.
Sign up for beta testing:
Join the community
GitHub
Contribute to our project by opening issues and PRs.
Discord
Join the community, share your ideas, and talk with Mithril’s team.
Join the discussion
Contact us
We are happy to answer any questions you may have, and welcome suggestions.
Contact us