Get ready for the EU AI Act with SuperAlign
Discover how our solution helps you prepare for compliance with the EU AI Act. Schedule a demo today to see our advanced features in action and learn how we can support your AI regulatory needs.
Schedule Demo with SuperAlign
Comprehensive Strategies for Managing High-Risk AI Use Cases
With our GRC for AI platform, you can effectively prepare to implement an AI risk management system and fulfill transparency obligations under the EU AI Act by:
Identifying your high-risk AI use cases (see the inventory sketch after this list);
Adopting appropriate and targeted risk management measures to mitigate identified risks for your AI use cases;
Completing technical documentation requirements; and
Incorporating automated tools with human oversight to prevent or minimize risks upfront, enabling users to understand, interpret, and confidently use these tools.
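To make the identification step concrete, a compliance team might track each AI use case in a structured inventory alongside its risk tier and mitigation status. The sketch below is a minimal, hypothetical illustration: the `AIUseCase` class, the `RiskTier` values, and all field names are assumptions for this example, not part of the Act or of SuperAlign's platform.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Risk tiers loosely mirroring the EU AI Act's categories (illustrative)."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AIUseCase:
    """Hypothetical record for one AI use case in a compliance inventory."""
    name: str
    risk_tier: RiskTier
    risk_measures: list[str] = field(default_factory=list)  # mitigations adopted so far
    documentation_complete: bool = False                    # technical docs finished?
    human_oversight: bool = False                           # oversight tooling in place?

    def outstanding_obligations(self) -> list[str]:
        """List the high-risk preparation steps from this page that are still open."""
        gaps = []
        if self.risk_tier is RiskTier.HIGH:
            if not self.risk_measures:
                gaps.append("adopt targeted risk management measures")
            if not self.documentation_complete:
                gaps.append("complete technical documentation")
            if not self.human_oversight:
                gaps.append("incorporate human oversight tooling")
        return gaps


# Example: a hypothetical CV-screening system classified as high-risk with open gaps.
screening = AIUseCase("CV screening assistant", RiskTier.HIGH,
                      risk_measures=["bias testing"])
print(screening.outstanding_obligations())
# -> ['complete technical documentation', 'incorporate human oversight tooling']
```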
What is the EU AI Act?
The EU AI Act is an EU-wide legal framework (a Regulation) that sets out clear transparency and reporting obligations for any company placing an AI system on the EU market, or whose system outputs are used within the EU, regardless of where the systems are developed or deployed. Originally proposed by the European Commission on 21 April 2021, it was politically agreed upon by all three EU institutions on 8 December 2023. The European Parliament's plenary vote on the proposed Artificial Intelligence Act is expected to take place in mid-March 2024, according to Parliament's draft agenda.
Following the final vote, the EU AI Act would enter into force after publication in the Official Journal of the European Union (expected Spring 2024).
Fines are expected as follows (a worked example follows this list):
Non-compliance with prohibited AI practices: up to 7% of total worldwide annual turnover for the preceding financial year or €35M (whichever is higher)
Non-compliance with most other obligations: up to 3% of total worldwide annual turnover for the preceding financial year or €15M (whichever is higher)
Supplying incorrect, incomplete, or misleading information to notified bodies and national competent authorities in response to a request: up to 1.5% of total worldwide annual turnover or €7.5M (whichever is higher)
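Each tier is "whichever is higher" of a turnover percentage and a fixed amount, so exposure scales with company size. A minimal sketch of that arithmetic; the €2B turnover figure is an illustrative assumption, not from this page:

```python
def max_fine(turnover_eur: float, pct: float, floor_eur: float) -> float:
    """Fine exposure under a 'whichever is higher' penalty tier."""
    return max(turnover_eur * pct, floor_eur)


turnover = 2_000_000_000  # hypothetical €2B worldwide annual turnover
print(max_fine(turnover, 0.07, 35_000_000))   # prohibited AI practices  -> €140,000,000
print(max_fine(turnover, 0.03, 15_000_000))   # most other obligations   -> €60,000,000
print(max_fine(turnover, 0.015, 7_500_000))   # misleading information   -> €30,000,000
```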
What are businesses responsible for doing?
As an organization building or using AI systems that are placed on the EU market or whose system outputs are used within the EU, you will be responsible for ensuring compliance with the EU AI Act.
Enterprise obligations will depend on the level of risk an AI system poses to people’s safety, security, or fundamental rights along the AI value chain. The most significant transparency and reporting requirements will apply to AI systems classified as “high-risk,” as well as to providers of general-purpose AI systems determined to be high-impact or to pose “systemic risks.”
Depending on the risk classification of your systems, your responsibilities could include:
Registration: Register all use cases in the EU database before placing the AI system on the market or putting it into service.
Classification: Identify all high-risk AI use cases.
Risk Management: Adopt appropriate and targeted risk management measures to mitigate identified risks.
Data Governance: Confirm the use of high-quality training data, adhere to appropriate data governance practices, and ensure that datasets are relevant and unbiased.
Technical Documentation: Keep records containing the information necessary to assess the AI system’s compliance with the relevant requirements and to facilitate post-market monitoring (i.e., the general characteristics, capabilities, and limitations of the system; the algorithms, data, and training, testing, and validation processes used; and documentation of the relevant risk management system, drawn up in a clear and comprehensive form). Keep the technical documentation up to date throughout the lifetime of the AI system (note: high-risk AI systems should technically allow for automatic recording of events (logs) over the lifetime of the system; see the logging sketch after this list).
Human Oversight: Incorporate human-machine interface tools to prevent or minimize risks upfront, enabling users to understand, interpret, and confidently use the system.
Accuracy, Robustness, and Security: Ensure consistent accuracy, robustness, and cybersecurity measures throughout the AI system’s lifecycle.
Quality Management: Providers of high-risk AI systems must have a quality management system in place, documented in a systematic and orderly manner in the form of written policies, procedures, and instructions.
EU Declaration of Conformity: Draft a declaration of conformity for each high-risk AI system asserting compliance; keep it for 10 years, submit copies to national authorities, and update it as necessary.
CE Marking: Affix the CE marking in a visible, legible, and indelible manner (or make it digitally accessible for digital systems) to indicate compliance with the general principles and applicable European Union laws.
Incident Reporting: Providers of high-risk AI systems placed on the EU market must report any “serious incident” to the market surveillance authorities of the EU Member States where the incident occurred, immediately after the provider has established a causal link between the AI system and the serious incident (or the reasonable likelihood of such a link) and, in any event, no later than 15 days after the provider or, where applicable, the deployer becomes aware of the serious incident.
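The Technical Documentation item above notes that high-risk systems should support automatic recording of events over their lifetime. Below is a minimal sketch of what structured event logging around a model call could look like, using Python's standard logging module; the event fields and the example calls are illustrative assumptions, not a format prescribed by the Act:

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_system.events")
logging.basicConfig(level=logging.INFO)


def log_event(event_type: str, **details) -> None:
    """Record one system event as a structured, timestamped log line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        **details,
    }
    logger.info(json.dumps(record))


# Hypothetical usage around a high-risk model decision and a human override:
log_event("inference", model_version="1.4.2", input_id="app-1093",
          output_label="eligible", confidence=0.87)
log_event("human_override", input_id="app-1093",
          reviewer="compliance-officer", final_label="needs_review")
```

In practice, such logs would feed the post-market monitoring records described above; a production system would also need retention and access controls, which are out of scope for this sketch.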
SuperAlign
Microsoft for Startups
Google for Startups
INCEPTION PROGRAM
Network Builders