By Jaromir Gruber

The Impact of the EU AI Act on Businesses: What You Need to Know

Artificial Intelligence (AI) is no longer a futuristic concept but an integral part of everyday life and business operations. Small and medium-sized enterprises (SMEs) benefit significantly from AI-driven solutions that enhance efficiency, automate processes, and enable innovative customer interactions. However, alongside these opportunities come regulatory challenges. To minimize risks and establish a clear legal framework, the European Union has introduced the AI Act — the world’s first comprehensive legislation governing AI.


This article outlines the key provisions of the AI Act relevant to SMEs, how businesses can prepare, and what strategic measures are necessary to not only comply with the new regulation but also leverage it for competitive advantage.


What Is the AI Act and Why Does It Matter?

The AI Act is an EU regulation that entered into force on August 1, 2024 and applies in phases. Its primary objective is to create a fair internal market for trustworthy and human-centric AI while ensuring safety, fundamental rights, and data protection. The AI Act is not only relevant for AI developers but also for companies utilizing AI systems, including many SMEs.

Definition of AI Systems

The AI Act defines an AI system broadly: a machine-based system designed to operate with varying levels of autonomy that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. This wide definition means that many everyday software tools with learning or inference components fall within the regulation's scope.

AI Act Implementation Timeline

The AI Act will be rolled out in phases, with full enforcement by August 2026:

  • August 2024 — Entry into force
  • February 2025 — Bans on prohibited AI practices and AI literacy obligations take effect
  • August 2025 — General-purpose AI (GPAI) obligations take effect
  • August 2026 — Full compliance required

Risk Classification

The AI Act categorizes AI systems into four risk levels (see the inventory sketch after this list):

  • Unacceptable Risk — AI applications posing a threat to safety and fundamental rights are banned. This includes biometric categorization based on sensitive data and AI-driven manipulation technologies.
  • High Risk — AI used in critical sectors such as healthcare, law enforcement, infrastructure, and recruitment.
  • Limited Risk — AI systems interacting with humans or generating media content, such as chatbots or AI-generated images.
  • Minimal Risk — Systems with negligible impact, such as spam filters or AI-powered search functions.
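As a practical starting point, many SMEs keep a machine-readable inventory that maps each AI system in use to its risk tier. The following Python sketch is a minimal, hypothetical example: the system names and their classifications are assumptions for illustration, and any real classification must be verified against the Act's annexes.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict compliance duties
    LIMITED = "limited"            # transparency duties
    MINIMAL = "minimal"            # no specific obligations

# Hypothetical inventory of AI systems at an SME; the classification
# of each real system must be checked against the AI Act itself.
ai_inventory = {
    "cv_screening_tool": RiskLevel.HIGH,    # recruitment use case
    "customer_chatbot": RiskLevel.LIMITED,  # interacts with humans
    "spam_filter": RiskLevel.MINIMAL,       # negligible impact
}

for system, risk in ai_inventory.items():
    print(f"{system}: {risk.value} risk")
```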

High-Risk AI Systems

Certain AI systems classified as high-risk are subject to stringent regulatory requirements, including:

  • Biometric Systems — Remote biometric identification and categorization, subject to strict regulations.
  • Critical Infrastructure — AI for transportation, energy, and digital security.
  • Education — AI-based admissions, assessment, and exam monitoring.
  • Employment & HR — AI in recruitment, performance evaluations, and promotions.
  • Public & Private Services — AI for social benefits assessments, credit scoring, and emergency response classification.
  • Law Enforcement — AI for crime risk assessment, lie detection, and forensic analysis.
  • Migration & Border Control — AI for asylum application processing and identity verification.
  • Justice & Democracy — AI systems influencing elections or interpreting legal cases.

Compliance Requirements for High-Risk AI Providers

To meet regulatory obligations, providers of high-risk AI systems must:

✔ Implement a risk management system — Continuous risk monitoring across the AI system’s lifecycle.
✔ Ensure data governance — Training, validation, and test datasets must be relevant, representative, and free from errors.
✔ Develop technical documentation — Compliance documentation must be available for regulatory assessment.
✔ Enable event logging and change documentation — AI systems must automatically record relevant events and modifications (see the logging sketch after this list).
✔ Provide user guidelines — Clear instructions to help downstream users comply with regulations.
✔ Ensure human oversight — AI must allow for human intervention when necessary.
✔ Guarantee accuracy, robustness, and cybersecurity — Systems must meet high technical standards.
✔ Establish a quality management system — A documented process that ensures ongoing monitoring and regulatory compliance.
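To make the logging requirement concrete, the sketch below shows one simple way an AI system could automatically record relevant events in an append-only audit trail. This is a simplified assumption of what such logging can look like: the JSON-lines format and the field names are illustrative choices, not prescribed by the Act.

```python
import json
from datetime import datetime, timezone

def log_event(log_path: str, event_type: str, details: dict) -> None:
    """Append one timestamped event to a JSON-lines audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,  # e.g. "prediction" or "model_update"
        "details": details,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a model decision and a subsequent model change.
log_event("audit.jsonl", "prediction", {"input_id": "application-42", "score": 0.81})
log_event("audit.jsonl", "model_update", {"version": "1.3.0", "reason": "retraining"})
```

An append-only log of this kind also supports the technical documentation and quality management duties, since every decision and modification leaves a reviewable trace.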


Limited-Risk AI Systems

AI systems classified as limited risk under the EU AI Act are those where risks can be mitigated primarily through transparency obligations. These systems include:

  • AI systems interacting with individuals: Examples include chatbots and virtual assistants.
  • AI systems generating or modifying media content: This includes AI-created images, audio, text, and videos.
  • Biometric categorization and emotion recognition systems: While some of these applications are outright prohibited — such as biometric systems categorizing individuals based on sensitive attributes like political or religious beliefs — others fall under limited risk and must adhere to transparency rules.
  • General-purpose AI (GPAI) systems: AI models capable of generating text, images, or other outputs, such as ChatGPT.
  • Retrieval-Augmented Generation (RAG) systems: AI models that fetch external knowledge to enhance responses, reducing inaccuracies (“hallucinations”) and improving relevance (a minimal sketch follows this list).
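To make the RAG pattern concrete, the sketch below shows its core loop: retrieve the most relevant documents for a query, then hand them to a generator as grounding context. Both functions are deliberate simplifications; a real system would use vector embeddings for retrieval and an actual language model for generation.

```python
def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; real systems use vector embeddings."""
    def overlap(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(documents, key=overlap, reverse=True)[:top_k]

def generate(query: str, context: list[str]) -> str:
    """Stand-in for a language-model call; it only echoes the grounding text."""
    return f"Answer to '{query}' based on: " + " | ".join(context)

docs = [
    "The AI Act applies in phases starting August 2024.",
    "High-risk systems require a risk management system.",
    "Spam filters are minimal-risk applications.",
]
query = "When does the AI Act apply?"
print(generate(query, retrieve(query, docs)))
```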

Compliance Obligations for Limited-Risk AI

Although these systems are not subject to the strict regulatory requirements imposed on high-risk AI, transparency obligations remain essential to ensure informed user interactions and regulatory compliance. Key requirements include:

✔ User Awareness & Transparency

  • Users must be clearly informed when interacting with an AI system. This is particularly relevant for chatbots, virtual assistants, and automated decision-making systems (see the disclosure sketch below).
  • Transparency helps users distinguish AI-generated responses from human interactions, preventing potential misinformation or deception.
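One minimal way to satisfy this duty is to make the AI notice part of the first reply of every conversation. In the sketch below, the answer function is a hypothetical stand-in for the actual chatbot backend, and the wording of the notice is only an example.

```python
AI_DISCLOSURE = "Note: You are chatting with an AI assistant, not a human."

def answer(message: str) -> str:
    """Hypothetical stand-in for the real chatbot backend."""
    return f"Echo: {message}"

def chatbot_reply(message: str, is_first_message: bool) -> str:
    """Prepend the AI disclosure to the first reply of a session."""
    reply = answer(message)
    return f"{AI_DISCLOSURE}\n{reply}" if is_first_message else reply

print(chatbot_reply("What are your opening hours?", is_first_message=True))
```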

✔ Labeling of AI-Generated Content

  • AI-generated or AI-modified images, videos, audio, and text must be labeled to indicate their synthetic nature (a labeling sketch follows this list).
  • Deepfake detection: AI-generated media that could be misinterpreted as real content must be explicitly marked as artificial.
  • Exemptions: Artistic, satirical, or fictional content is not subject to mandatory labeling. However, factual AI-generated text that is not reviewed or edited by humans and is of public interest must be identified.
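One simple labeling approach is to attach both a visible notice and machine-readable metadata to generated content. The sketch below writes the labeled text plus a JSON sidecar file; the field names are assumptions, since the Act mandates the labeling outcome rather than a specific format.

```python
import json
from pathlib import Path

def label_ai_content(text: str, output_path: str, model_name: str) -> None:
    """Save AI-generated text with a visible notice and a metadata sidecar."""
    Path(output_path).write_text(f"[AI-generated content]\n{text}", encoding="utf-8")
    sidecar = {
        "ai_generated": True,      # hypothetical, illustrative field names
        "generator": model_name,
        "human_reviewed": False,
    }
    Path(output_path + ".meta.json").write_text(
        json.dumps(sidecar, indent=2), encoding="utf-8"
    )

label_ai_content("Our new product launches in May.", "post.txt", "example-model-1")
```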

✔ Accessibility of Transparency Notices

  • Labels and disclosures must be clear, unambiguous, and accessible to all users, including individuals with disabilities.

✔ Copyright Compliance & Data Transparency

  • GPAI providers must ensure compliance with EU copyright regulations.
  • Providers must prepare technical documentation and make available a sufficiently detailed summary of the content used to train their models.
  • Data provenance and accountability are key to maintaining compliance, preventing intellectual property violations, and ensuring ethical AI deployment.

By adhering to these transparency and compliance requirements, businesses leveraging limited-risk AI can maintain regulatory alignment, build user trust, and mitigate potential reputational and legal risks.


Minimal-Risk AI Systems

AI applications classified as minimal risk are exempt from specific regulatory obligations. Examples include:

  • AI-driven video games
  • Spam filters

While voluntary adherence to ethical guidelines is encouraged, compliance is not mandatory. However, risk classification always depends on the specific AI application.


Non-Compliance Penalties

Companies failing to comply with the AI Act face significant financial penalties. Fines range from up to €7.5 million or 1% of global annual turnover (for supplying incorrect information to authorities), through a middle tier of up to €15 million or 3% for most other violations, to up to €35 million or 7% of global annual turnover. The most severe penalties apply to prohibited AI systems.
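An illustrative calculation: for an SME with €20 million in worldwide annual turnover that deploys a prohibited AI system, 7% of turnover comes to €1.4 million; because the Act caps fines for SMEs at the lower of the fixed amount and the percentage, the maximum fine would be €1.4 million rather than €35 million. For large companies, the higher of the two amounts applies instead.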


AI Literacy: A New Requirement for SMEs

Starting February 2025, businesses using AI must ensure their employees receive appropriate training, regardless of the AI system’s risk classification. Required competencies include:

  • Technical Knowledge — Basic understanding of machine learning and algorithms.
  • Legal Awareness — Familiarity with the AI Act and GDPR.
  • Ethical Considerations — Identifying and mitigating algorithmic bias.
  • Risk Management — Assessing AI risks and limitations.

Training can be conducted internally or externally through webinars, workshops, or e-learning. Documentation of training efforts is advisable for compliance verification.
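A lightweight way to document training efforts is a structured log that can be produced on request. The sketch below appends entries to a CSV file; the columns are assumptions chosen to make later verification straightforward.

```python
import csv
from datetime import date

def record_training(path: str, employee: str, topic: str, provider: str) -> None:
    """Append one AI-literacy training entry to a CSV log."""
    with open(path, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([date.today().isoformat(), employee, topic, provider])

record_training("ai_training_log.csv", "J. Doe", "AI Act basics", "internal workshop")
```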


Practical Steps for SMEs

To successfully implement the AI Act, SMEs should take the following actions:

  1. Assess AI Usage — Identify AI systems in use and their risk classification.
  2. Ensure Compliance for High-Risk AI — Meet all regulatory requirements.
  3. Enhance Transparency for Limited-Risk AI — Inform users when interacting with AI.
  4. Train Employees — Invest in workforce education to meet legal and technical standards.
  5. Review Data Management — Ensure AI applications comply with data protection regulations.
  6. Leverage External Expertise — Utilize advisory services from organizations such as WKO or RTR.

Conclusion: Compliance as a Competitive Advantage

The EU AI Act presents both challenges and opportunities for SMEs. Companies that proactively adopt compliant, privacy-conscious AI solutions will gain a long-term competitive edge.

Is your business ready for the AI Act? Understanding the regulatory landscape and strategically implementing AI is crucial for success.

Get Expert Guidance

Let experts assist you! Solon Labs offers tailored AI solutions that comply with legal requirements, as well as comprehensive consulting for the safe and efficient implementation of AI in your business.

Contact us today for a free consultation and take advantage of the opportunities AI offers in a responsible and profitable way!

Frequently Asked Questions

What is the EU AI Act?

The EU AI Act is a new regulation for the governance of Artificial Intelligence (AI) within the EU. Its goal is to promote safe and trustworthy AI systems, minimize risks, and create uniform standards for businesses.

When does the AI Act take effect?

Implementation happens in phases:

  • August 2024: Entry into force
  • February 2025: Shutdown of prohibited AI systems
  • August 2025: Rules for general-purpose AI systems (GPAI)
  • August 2026: Full application of all provisions

Who is affected?

The AI Act impacts not only AI developers but also companies that use AI systems. Businesses in critical sectors such as healthcare, law enforcement, and human resources are particularly affected.

What risk levels does the AI Act define?

The AI Act distinguishes four risk levels:

  • Unacceptable risk: Prohibited AI systems, e.g., biometric categorization of sensitive data.
  • High risk: AI applications with significant impact, e.g., in HR management or law enforcement.
  • Limited risk: AI systems requiring transparency, e.g., chatbots.
  • Minimal risk: Systems like spam filters or AI-powered search functions.

What must providers of high-risk AI systems do?

Providers must:

  • Establish a risk management system
  • Ensure data quality
  • Prepare technical documentation
  • Guarantee human oversight
  • Ensure cybersecurity and robustness

What transparency obligations apply?

  • Users must be informed when interacting with AI
  • AI-generated content must be labeled (e.g., deepfakes)
  • Copyright laws must be respected

What penalties apply for non-compliance?

Companies can face fines of up to 35 million euros or 7% of their global annual revenue.

How can SMEs prepare?

  • Analyze existing AI systems and assess risks
  • Adapt high-risk AI to legal requirements
  • Train employees on AI competencies
  • Ensure transparency and data protection
  • Seek expert consultation

Why act early?

Companies that adopt compliant and transparent AI early can position themselves as trustworthy providers and secure long-term competitive advantages.

How can Solon Labs help?

Solon Labs offers:

  • Consulting on AI Act compliance
  • Development and optimization of AI-compliant systems
  • Training and workshops for businesses

Get a free consultation: Schedule a call
