The EU Artificial Intelligence (AI) Act represents a pivotal step in shaping the future of AI governance on a global scale. Positioned as the world’s first comprehensive framework for regulating AI, this landmark legislation aims to balance innovation with responsibility, establishing trust in AI systems while fostering their safe and ethical deployment.
Building on the EU’s tradition of setting international benchmarks with policies like the GDPR, the AI Act introduces a tiered risk-based classification of AI systems, transparency obligations and stringent compliance measures. As businesses and organizations prepare to adapt, the Act signals a transformative era for AI, ensuring that its growth aligns with societal values and human-centric principles.
The AI Act Aims to Achieve Four Key Objectives
- To ensure that AI systems placed on the EU market are safe and respect fundamental rights.
- To ensure legal certainty to facilitate investment and innovation in AI.
- To enhance governance and effective enforcement of EU law on fundamental rights and safety requirements applicable to AI systems.
- To facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.
How Does the EU Define an AI System?
The AI Act’s definition of an AI system is derived from the recently updated definition used by the Organization for Economic Co-operation and Development (OECD).
The AI Act defines an AI system as follows: “An AI system is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.”
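For organizations building an inventory of potentially in-scope systems, the elements of this definition can be captured in a simple record. The following Python sketch is purely illustrative; the field names are our own shorthand for the definition's elements, not terms defined in the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """Hypothetical inventory record mirroring the elements of the Act's definition."""
    name: str
    autonomy_level: str             # "varying levels of autonomy", e.g. "partial"
    adapts_after_deployment: bool   # "may exhibit adaptiveness after deployment"
    objectives: list[str]           # "explicit or implicit objectives"
    output_types: list[str]         # "predictions, content, recommendations or decisions"
    influences_environment: bool    # "physical or virtual environments"

# Example: screening a recommender system against the definition's elements
recommender = AISystemRecord(
    name="product-recommender",
    autonomy_level="partial",
    adapts_after_deployment=True,
    objectives=["maximize click-through"],
    output_types=["recommendations"],
    influences_environment=True,
)
```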

AI Risk Framework and Requirements
The AI Act defines a framework for understanding the risks associated with AI. It classifies AI systems based on their potential risks, dividing them into categories according to the data they capture and the decisions or actions taken with that data. EU obligations vary depending on the category of AI being used. While political agreement on the content has been reached, the final text of the regulation is not yet available. The following sections therefore summarize the obligations stipulated under the AI Act, based on publicly available information.
The AI Act is structured around a clear classification system that assigns AI systems to four risk tiers: unacceptable (prohibited), high, limited and minimal.
Prohibited Systems: Which Use Cases Pose an Unacceptable Risk?
Which AI systems are covered?
AI systems that enable manipulation, exploitation and social control practices are seen as posing an unacceptable risk. This category prohibits AI for the following purposes:
- Manipulation that harms or is likely to harm an AI user or another person (e.g. children, the elderly or people with disabilities).
- Social scoring: evaluating or classifying people based on their social behavior or their known, inferred or predicted personal characteristics, leading to detrimental or unfavorable treatment in social contexts.
- Emotion recognition software in the workplace (such as human resource functions) and in educational institutions. Exemptions apply for certain safety systems (e.g. detecting pilot drowsiness).
- Biometric categorization to infer sensitive data, such as race, sexual orientation or religious beliefs.
- Indiscriminate scraping of facial images from the internet or CCTV to populate facial recognition databases.
- Predictive policing of individuals (risk scoring for committing future crimes based on personal traits).
- Use of real-time remote biometric identification (RBI) systems by law enforcement in publicly accessible spaces.
What are the obligations related to this category?
Since the AI systems in this category pose an unacceptable risk, their use is prohibited.
High-Risk AI Systems: Use Cases and Rules Explained Simply
The AI Act lists certain uses of AI as “high-risk” because they can significantly affect safety, health, or basic rights. Here’s a breakdown of the current high-risk uses:
1. AI in Products Covered by EU Safety Laws
AI systems used in the safety features of certain products must meet EU safety standards. These products include:
- Medical devices (e.g. AI in diagnostics or monitoring systems).
- Marine equipment (e.g. AI controlling ship navigation).
- Motor vehicles (e.g. self-driving features or advanced safety tools).
- Agricultural vehicles (e.g. AI in automated farming machinery).
- Machinery (e.g. industrial robots or AI in factories).
- Railway systems (e.g. AI managing train operations).
- Aviation systems (e.g. air traffic AI tools).
- Toys (e.g. AI-enabled interactive or learning toys).

2. AI in Sensitive Use Cases
AI systems that could harm health, safety, or basic rights in specific areas are considered high-risk. These include:
- Biometric identification and categorization of individuals (e.g. facial recognition).
- Management and operation of critical infrastructure, such as the safety components of traffic, water, gas, heating and electricity systems.
- Education and vocational training, specifically systems that determine access to education or assess student performance (e.g. exam scoring).
- Employment and workforce management, including recruitment, performance monitoring, and access to self-employment opportunities.
- Access to essential services and benefits, covering both private and public sectors, such as determining eligibility for benefits (e.g. healthcare or social welfare), evaluating creditworthiness (e.g. loan approvals), and pricing life and health insurance (excluding systems used for detecting financial fraud).
- Law enforcement applications, such as data analytics systems used to evaluate evidence of criminal activity.
- Migration, asylum and border control management, including activities such as monitoring migration trends, border surveillance, travel document verification, and processing visa, asylum and residency applications.
- Administration of justice and democratic processes, including tasks like legal research and the interpretation of laws.
Exceptions to High-Risk Classification: An AI system will not be classified as high-risk if it:
- Performs a specific procedural task without direct implications for safety or security.
- Is designed to review or enhance the quality of human output.
- Detects decision-making patterns or flags deviations from established patterns, without influencing the decision-making process.
- Is utilized specifically for detecting financial fraud.
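Taken together, the high-risk categories and their exceptions suggest a first-pass triage flow. The sketch below is a deliberate simplification for illustration only; the flag names are our assumptions, and real classification decisions require legal review.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

def triage(use_case: dict) -> RiskTier:
    """Illustrative first-pass triage of a use case into the Act's four tiers."""
    # Unacceptable-risk practices (manipulation, social scoring, etc.) are prohibited
    if use_case.get("prohibited_practice"):
        return RiskTier.PROHIBITED
    # Listed high-risk areas, unless one of the exceptions above applies
    if use_case.get("high_risk_area"):
        exempt = (use_case.get("narrow_procedural_task")
                  or use_case.get("improves_human_output")
                  or use_case.get("pattern_detection_only")
                  or use_case.get("financial_fraud_detection"))
        if not exempt:
            return RiskTier.HIGH
    # Transparency-only tier: chatbots and AI-manipulated ("deepfake") content
    if use_case.get("interacts_with_people") or use_case.get("generates_deepfakes"):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# A fraud-detection system in a listed area falls back out of the high-risk tier
print(triage({"high_risk_area": True, "financial_fraud_detection": True}))  # RiskTier.MINIMAL
```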
Obligations for Providers of High-Risk AI Systems
General Obligations
Providers of high-risk AI systems must adhere to the following requirements:
- Implement and maintain robust AI risk and quality management systems.
- Ensure effective data governance practices are in place.
- Maintain comprehensive technical documentation and records.
- Provide clear and accessible information to users about the AI system.
- Enable and facilitate meaningful human oversight of the system.
- Meet established standards for accuracy, robustness and cybersecurity, ensuring the system is suitable for its intended purpose.
- Register high-risk AI systems in the EU database before market entry. Systems utilized for law enforcement, migration, asylum, border control and critical infrastructure must be registered in a non-public section of the database.
Pre-Market Conformity Assessment and Post-Market Obligations for High-Risk AI Systems
Pre-Market Conformity Assessment
Before placing a high-risk AI system on the market, providers must conduct a conformity assessment to verify compliance with the required standards. Key aspects include:
- Self-Assessment: Providers may self-assess if:
- They use procedures and methodologies that adhere to EU-approved technical standards (harmonized standards), which presume conformity.
- Third-Party Conformity Assessment: An accredited body (notified body) must perform the assessment if:
- The AI system is part of a safety component requiring third-party evaluation under Union harmonized regulations.
- The AI system includes a biometric identification feature.
- Harmonized standards are not employed.
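As a rough illustration, the routing between self-assessment and a notified body can be expressed as a small decision function. The parameter names below are our assumptions, not regulatory terms.

```python
def conformity_route(uses_harmonized_standards: bool,
                     safety_component_needs_third_party: bool,
                     includes_biometric_identification: bool) -> str:
    """Illustrative routing of a high-risk system to the right assessment path."""
    if (safety_component_needs_third_party
            or includes_biometric_identification
            or not uses_harmonized_standards):
        return "third-party assessment by a notified body"
    return "self-assessment (presumption of conformity via harmonized standards)"

print(conformity_route(uses_harmonized_standards=True,
                       safety_component_needs_third_party=False,
                       includes_biometric_identification=False))
# -> self-assessment (presumption of conformity via harmonized standards)
```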
Post-Market Obligations
Once a high-risk AI system is on the market, providers must ensure its safety and compliance throughout its lifecycle. Obligations include:
- Retain logs generated by high-risk systems, under their control, for at least six months.
- Take immediate action to address nonconforming systems already in the market and notify relevant operators in the value chain.
- Collaborate with national competent authorities or the AI Office by providing necessary information and documentation to demonstrate compliance upon reasonable request.
- Continuously monitor the system’s performance and safety, ensuring ongoing adherence to the AI Act.
- Report serious incidents or malfunctions leading to breaches of fundamental rights to the appropriate authorities.
- New Conformity Assessments for Modifications:
- Conduct new assessments for substantial changes, such as alterations to the system’s intended purpose or changes impacting regulatory compliance.
- This requirement applies to modifications made by the original provider or third parties.
- Reassess the risk classification of systems considered low or minimal risk to confirm if it remains valid after changes.
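The six-month log retention floor is easy to encode as an automated check. The sketch below approximates "at least six months" as 183 days, which is an assumption; organizations should confirm the exact interpretation with counsel.

```python
from datetime import datetime, timedelta, timezone

MIN_RETENTION = timedelta(days=183)  # "at least six months", approximated as 183 days

def retention_ok(created_at: datetime, scheduled_purge_at: datetime) -> bool:
    """Check that a log's scheduled purge date respects the retention floor."""
    return scheduled_purge_at - created_at >= MIN_RETENTION

created = datetime(2025, 1, 1, tzinfo=timezone.utc)
print(retention_ok(created, created + timedelta(days=200)))  # True
print(retention_ok(created, created + timedelta(days=90)))   # False: purge too early
```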
What are the obligations for deployers, importers, and distributors of high-risk AI systems?
Obligations of deployers of high-risk AI systems include:
- Completing a fundamental rights impact assessment (FRIA) before putting the AI system in use if the deployer:
- Is a public body or private entity providing public services.
- Provides essential private services, such as evaluating creditworthiness or assessing risks and pricing for life and health insurance.
- Implementing human oversight by individuals with appropriate training and competence.
- Ensuring input data relevance to the system’s intended use.
- Suspending the use of the system if it poses a risk at a national level.
- Informing the AI system provider of any serious incidents.
- Retaining automatically generated system logs.
- Complying with registration requirements when the deployer is a public authority.
- Adhering to GDPR obligations by performing a data protection impact assessment.
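The FRIA trigger in the first bullet above reduces to a simple test. This is a sketch under our own naming, not the Act's wording.

```python
def fria_required(is_public_body: bool,
                  provides_public_services: bool,
                  provides_essential_private_services: bool) -> bool:
    """Illustrative check: must this deployer complete a FRIA before deployment?"""
    return (is_public_body
            or provides_public_services
            or provides_essential_private_services)

# E.g. a private lender evaluating creditworthiness
print(fria_required(False, False, True))  # True
```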
Obligations of importers and distributors of high-risk AI systems include:
- Verifying compliance with the AI Act and ensuring all relevant documentation is available.
- Communicating with providers and market surveillance authorities as needed.
Limited- and Minimal-Risk Systems: What Obligations Apply?
Limited Risk
Permitted, subject to specific transparency and disclosure obligations. This tier covers certain AI systems that interact directly with people (e.g. chatbots) and visual or audio “deepfake” content that has been manipulated by an AI system.
Minimal Risk
Permitted, with no additional AI Act requirements. By default, this tier covers all other AI systems that do not fall into the categories above (e.g. photo editing software, product-recommender systems, spam filtering software, scheduling software).
Providers Must:
- Design and develop systems to clearly inform users from the beginning that they are interacting with an AI system (e.g. chatbots such as ChatGPT-based systems).
Deployers Must:
- Inform individuals and secure their consent when they are exposed to permitted emotion recognition or biometric categorization systems (e.g. systems monitoring driver attentiveness for safety).
- Clearly disclose and label any visual or audio content manipulated by AI, such as “deep fake” materials.
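In practice, these transparency duties often come down to labeling outputs at the point of delivery. A minimal sketch, with notice wording and placement as our own assumptions:

```python
def label_output(content: str, ai_interaction: bool, manipulated_media: bool) -> str:
    """Prepend illustrative transparency notices to AI-delivered content."""
    notices = []
    if ai_interaction:
        notices.append("[Notice: you are interacting with an AI system]")
    if manipulated_media:
        notices.append("[Label: this content was generated or manipulated by AI]")
    return "\n".join(notices + [content])

print(label_output("Your booking is confirmed.", ai_interaction=True, manipulated_media=False))
```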

Who Will be Affected
The AI Act covers a wide range of AI systems and imposes significant responsibilities throughout the AI value chain. It prioritizes the impact of AI systems on people, particularly their wellbeing and fundamental rights. The Act also includes extraterritorial provisions, meaning it applies to any organization offering AI systems that affect individuals within the EU, regardless of where the organization is based.
The AI Act may also apply to AI systems already on the market before it takes effect under specific conditions:
- If they are general-purpose AI (GPAI) models.
- If they fall into the “prohibited” category or are “high-risk” AI systems designed for use by public authorities.
- If an existing AI system undergoes major updates, it will be reclassified and treated according to the updated risk category, like new systems entering the market.
The AI Act Will Apply To
- Providers offering AI systems in the market within the EU, regardless of where they are based.
- Providers and deployers of AI systems outside the EU, if the AI’s output is used within the EU.
- Deployers of AI systems located in the EU.
- Importers and distributors placing AI systems on the EU market.
- Manufacturers placing products with AI systems on the EU market under their own brand or trademark.
The AI Act Will Not Apply To:
- Public authorities in non-EU countries or international organizations with EU law enforcement or judicial agreements, if safeguards are in place.
- AI systems used for purposes outside EU regulations, such as military or defence.
- AI systems created and used only for scientific research and discovery.
- Research, testing and development of AI systems before they are sold or used.
- Free and open-source software, unless it is classified as prohibited or high-risk, or falls under transparency rules.
Who in Your Organization Will be Affected?
The AI Act is likely to impact executives responsible for compliance, data governance, and the development, deployment, or use of AI technologies. Additionally, the Board of Directors and Governance Committees should be aware and informed about the Act’s requirements.
Given the broad definition of AI and its rapid growth, organizations need a comprehensive approach. Senior leaders should work together on innovation, risk management and AI system governance to ensure compliance with the AI Act.
How Will the AI Act Interact with Existing Legislation and Standards?
- AI providers must comply with all relevant EU laws while integrating the requirements of the AI Act.
- Providers can align AI Act compliance with existing procedures to avoid redundancy and simplify the compliance process.
- The AI Act will be embedded into relevant EU legislation (e.g. financial services regulations). Sector-specific regulators will act as competent authorities to oversee enforcement within their respective sectors.
How Will New Standards Be Developed and When Will They Be Ready?
To minimize compliance burdens and accelerate time-to-market, the AI Act permits compliance self-assessments, provided the obligations are met using European Commission-approved best practices formalized in harmonized standards.
- The European Commission has issued a “standardization request” to European standards bodies (CEN and CENELEC), outlining topics for which new harmonized standards are necessary to address the AI Act’s compliance obligations (e.g. pre-market obligations for high-risk AI systems).
- European standardization bodies aim to deliver these standards within the implementation timelines of the AI Act.
- Where feasible, European standardization bodies will adopt standards developed by international organizations (ISO and IEC) with minimal modifications.
What Are the Penalties for Noncompliance?
The AI Act establishes a stringent enforcement framework for noncompliance, with penalties scaled to the severity of the violation under a risk-based approach. There are three levels of noncompliance, each carrying substantial financial penalties:
- Violations of prohibited AI practices: fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher.
- Noncompliance with other obligations under the Act, including those for high-risk AI systems: up to €15 million or 3% of total worldwide annual turnover.
- Supplying incorrect, incomplete or misleading information to notified bodies or national authorities: up to €7.5 million or 1.5% of total worldwide annual turnover.
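“Whichever is higher” means the applicable cap scales with company size. A worked example in Python:

```python
def fine_cap(fixed_eur: float, pct_of_turnover: float, worldwide_turnover_eur: float) -> float:
    """The maximum fine is the higher of the fixed amount and the turnover share."""
    return max(fixed_eur, pct_of_turnover * worldwide_turnover_eur)

# Severe violation by a company with €2 billion worldwide annual turnover:
print(fine_cap(35_000_000, 0.07, 2_000_000_000))  # 140000000.0 -> a €140 million cap
```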

Next Steps
Begin preparing for the AI Act’s regulatory requirements by involving the right stakeholders, including legal, privacy, data science, risk management and procurement professionals. Establish a multidisciplinary task force to ensure comprehensive compliance. Identify and evaluate all AI systems developed or used within your organization, categorizing them based on the AI Act’s risk levels (minimal, high or unacceptable).
Establish Effective Governance Frameworks
- Create policies to categorize AI systems by risk level as defined in the AI Act, incorporating the legislative intent behind the prohibited and high-risk categories.
- Clearly communicate your approach to AI Act compliance to stakeholders, including customers and partners.
- Outline roles and expectations in maintaining ongoing compliance.
- Implement or improve governance frameworks aligned with the AI Act and emerging standards.
- Use automated solutions to streamline compliance mapping, obligation tracking and workflow management.
- Develop robust data governance systems to ensure data quality, security and privacy, adaptable to evolving regulatory and technological changes.
- Create a dedicated team or department to oversee AI ethics and governance, ensuring compliance with evolving regulations.
- Implement continuous training initiatives to improve AI knowledge across your organization and promote ethical AI practices.
Understand and Manage AI Risks
- Assess the risks posed by AI systems to individuals, your organization and the ecosystem. Perform fundamental rights and systemic risk assessments where relevant.
- Update data handling practices to align with applicable laws and industry best practices.
- Review existing AI systems and use cases to identify high-risk systems requiring compliance.
- Identify areas of noncompliance through a gap analysis and develop an action plan to address deficiencies. Automating the analysis can enhance speed and efficiency.
- Thoroughly test AI systems to ensure functionality and compliance.
- Leverage the regulatory sandbox for testing and utilize automated threat detection tools to reduce effort in meeting technical documentation requirements.
- Strengthen assessments to include AI-specific risks. For foundation model usage, monitor provider compliance with the AI Act and request necessary technical documentation. Stay informed about updated “acceptable use” policies from these providers.
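As a starting point, the gap analysis mentioned above can be as simple as a set difference between required obligations and implemented controls; the obligation names below are illustrative, not drawn from the Act.

```python
# Hypothetical obligation checklist for one high-risk system
required = {"risk_management", "data_governance", "technical_docs",
            "human_oversight", "logging", "eu_database_registration"}
implemented = {"risk_management", "technical_docs", "logging"}

gaps = sorted(required - implemented)
print("Action plan items:", gaps)
# Action plan items: ['data_governance', 'eu_database_registration', 'human_oversight']
```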
Implement Scalable Solutions
- Streamline AI system processes to ensure models are transparent, explainable and trustworthy. Use automation to map technical metrics and system metadata to governance frameworks, enabling compliance automation.
- Create a centralized repository to manage documentation for all AI systems, ensuring they meet the AI Act’s compliance requirements.
- Align business strategies with the evolving AI regulatory framework, preparing for potential amendments to the AI Act.
- Educate staff on the ethical and legal implications of AI systems, preparing them for new responsibilities related to compliance and governance.
- Review and update consumer-facing policies, including terms and conditions, privacy policies, and consent notices.
- Actively participate in industry forums and policy-making discussions to shape and stay ahead of emerging AI regulatory trends.
Adopt Trusted AI for Innovation, Design and Control
- Embed Trusted AI principles and security considerations into the AI system design phase.
- Drive innovation within ethical and regulatory boundaries, balancing technological growth with social responsibility.
- Perform routine assessments and updates of AI systems to maintain compliance and incorporate advancements in transparency and explainability.
Frequently Asked Questions
1. What is the EU AI Act, and why is it important?
The EU AI Act is a proposed regulation designed to establish a legal framework for the safe and trustworthy use of artificial intelligence in the European Union. It aims to ensure that AI systems respect fundamental rights, are safe and meet specific regulatory standards, especially for high-risk applications.
2. What are high-risk AI systems under the EU AI Act?
High-risk AI systems are those that pose significant risks to safety or fundamental rights. Examples include AI used in critical infrastructure, healthcare, law enforcement, education and biometric identification systems. Such systems must comply with stringent requirements, including risk management, data governance and human oversight.
3. What are the obligations for providers of high-risk AI systems?
Providers of high-risk AI systems are required to adhere to several critical obligations to ensure compliance and accountability. They must conduct pre-market conformity assessments, maintain detailed documentation and records, and implement measures for human oversight and system transparency.
4. Who is responsible for ensuring compliance with the EU AI Act?
Ensuring compliance with the EU AI Act is a shared responsibility among several key stakeholders. Providers are responsible for developing AI systems that meet regulatory requirements. Authorized representatives act on behalf of non-EU providers within the EU to ensure compliance. Importers must verify that AI systems meet EU standards before market entry, while distributors are tasked with ensuring that the products they make available adhere to the regulations.
5. What are the penalties for non-compliance with the EU AI Act?
Penalties for non-compliance with the EU AI Act are substantial and depend on the severity of the breach. The most severe violations, such as deploying prohibited AI practices, can result in fines of up to €35 million or 7% of the provider’s total worldwide annual turnover, whichever is higher. Lesser infractions, such as supplying incorrect or incomplete information to authorities, may draw fines of up to €7.5 million or 1.5% of total worldwide annual turnover.