Certain AI systems are prohibited if they are regarded as presenting an unacceptable level of risk. Examples include AI systems that exploit vulnerabilities or use manipulative techniques to negatively affect a person’s behaviour and cause them, or others, harm.
Other practices prohibited by the EU AI Act include: social scoring based on personal data; systems which promote, or could lead to, discrimination; and facial recognition software used to infer an individual’s political views, sexual orientation, or other sensitive, private information.
Also included are autonomous AI-powered weapons that operate without human oversight and could target individuals. In short, any AI system that could endanger individuals or violate their fundamental rights would be classified as ‘unacceptable risk’.
The most detailed compliance rules cover AI systems considered ‘high risk’. These systems could have adverse consequences, or cause harm to individuals, if used incorrectly.
This applies especially to: AI systems used in medical assessments or hiring decisions; AI used in the administration of justice, law enforcement, or surveillance, such as facial recognition software or biometric identification systems in public spaces; AI used in credit scoring or in assessing insurance premiums; and AI systems that control self-driving vehicles.
Because of the inherent high risk of harm, discrimination, miscarriage of justice, or violation of an individual’s rights, developers of ‘high risk’ AI systems are required to undergo risk assessment, obtain certification, and implement appropriate strategies and safeguards to mitigate risk.
AI systems considered to carry ‘limited risk’ include chatbots and biometric or emotion recognition systems that interact with people. Also in this category are systems that create ‘deepfakes’, online content that seems genuine although it is AI-generated. Such systems are required to indicate that the content has been created or manipulated artificially.
It’s worth noting that some exceptions could include legally permitted usage of AI to prevent or detect criminal behaviour.
Also, creative or artistic works that are ‘evidently’ fictional or satirical may disclose the use of AI in a way that does not impede the presentation or enjoyment of the work.
AI systems not considered to be ‘unacceptable’, ‘high’, or ‘limited’ risk are classified as ‘low’ or ‘minimal’ risk.
As we can see, the EU AI Act’s risk assessment is intended to classify AI systems according to the potential level of harm they could cause. This includes manipulation of vulnerable individuals such as children, as well as discrimination or privacy concerns.
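To make the tiered structure easier to picture, here is a minimal sketch of the four risk levels expressed as a Python enumeration. The class name, descriptions, and the example use-case mapping are illustrative assumptions for this article, not wording taken from the Act itself.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative model of the EU AI Act's four risk tiers (not official wording)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict compliance: risk assessment, certification, safeguards"
    LIMITED = "transparency obligations, e.g. labelling AI-generated content"
    MINIMAL = "no specific obligations under the Act"


# Hypothetical mapping of example use cases to tiers, for illustration only.
EXAMPLE_USE_CASES = {
    "social scoring based on personal data": RiskTier.UNACCEPTABLE,
    "biometric identification in public spaces": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

if __name__ == "__main__":
    for use_case, tier in EXAMPLE_USE_CASES.items():
        print(f"{use_case}: {tier.name} -> {tier.value}")
```

The point of the sketch is simply that every AI system falls into exactly one tier, and the tier determines which obligations apply.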
Furthermore, the European Commission has the power to classify certain AI models as having systemic risk. The Act therefore distinguishes between AI models that carry systemic risk and those that do not.
If the cumulative amount of computation used to train a model exceeds a certain threshold, the model may be presumed to present systemic risk.
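The Act pegs this presumption to training compute: under Article 51, a general-purpose AI model trained with more than 10^25 floating-point operations is presumed to have the high-impact capabilities associated with systemic risk. The short sketch below illustrates that check; the constant and function names are our own, and the snippet is an illustration, not legal advice.

```python
# Threshold at which a general-purpose AI model is presumed to have
# "high impact capabilities", and therefore systemic risk (EU AI Act, Art. 51):
# cumulative training compute greater than 10^25 floating-point operations.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25


def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if cumulative training compute triggers the systemic-risk
    presumption. Illustrative helper only."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD


# Hypothetical examples: a model trained with ~3 x 10^25 FLOPs would be
# presumed to present systemic risk; one trained with 5 x 10^23 would not.
print(presumed_systemic_risk(3e25))  # True
print(presumed_systemic_risk(5e23))  # False
```

Crossing the threshold creates a presumption rather than a final verdict: the Commission can also designate models as systemic-risk on other grounds, and providers can contest the classification.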