By Niklas Reisz

What Really Lies Behind Mandatory AI Competence According to the EU AI Act? Guidance for Businesses

AI competence mandatory from February 2025 – scaremongering or a strategic necessity for your company? Instead of falling for overpriced ad-hoc courses, find out here what the EU AI Act really demands. Discover how to view the requirements not as a burden, but as a genuine opportunity, and leverage AI competence intelligently for your success.

EU AI Act: What Companies Really Need to Know About AI Competence Now – And How They Can Benefit

Recently, there has been a surge in offers, particularly on platforms like LinkedIn, promoting employee training on AI competence. These often create a sense of urgency, suggesting that AI competence will be mandatory from February 2, 2025, and that expensive training must be booked quickly. An impression is created that an inspector could appear at any moment to impose significant penalties.

However, such quick fixes are not only costly but also fall short: the matter is not settled with a one-off course. What is needed instead is a fundamental understanding and a strategic approach. In this article, we explain what the EU AI Act actually requires and what matters from a business perspective in order to seize the opportunities it contains.

What does the EU AI Act specifically require?

On February 2, 2025, the first provisions of the EU AI Act (AIA) came into force. These include the much-discussed AI competence requirement under Article 4 AIA.

Article 4 AIA states:

Providers and deployers of AI systems shall take measures to ensure, to the best of their ability, that their staff and other persons dealing with the operation and use of AI systems on their behalf have a sufficient level of AI literacy, taking into account their technical knowledge, experience, education and training and the context in which the AI systems are to be used, as well as the persons or groups of persons for whom the AI systems are to be used.

Thus, it is about companies ensuring that all relevant individuals possess the necessary competence in dealing with AI.

What exactly is AI competence – and who is affected?

The EU AI Act defines “AI literacy” (AI competence; “KI-Kompetenz” in the German text) in Article 3, number 56 as:

the skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to be aware of the opportunities and risks of AI and of possible harm it may cause

In essence, this means: seizing opportunities through AI, minimizing risks, and preventing potential harm. This is an approach that is fundamentally sensible for any forward-thinking company.

Providers, Operators (Deployers), and Affected Persons:

The requirement for AI competence addresses all actors along the AI value chain, depending on their role (cf. Recital 20 AIA). The required competencies vary:

  • Providers (e.g., developers of AI software) need in-depth technical knowledge to develop safe and value-compliant AI systems, especially for high-risk AI.
  • Operators (companies deploying AI systems) must understand how these systems function in order to use them responsibly and manage risks within their own operations.

Article 4 AIA obliges providers and operators to take “measures.” The specific nature of these measures depends on the AI system used, its risk level, the prior knowledge of the employees, and the deployment context. AI competence is interdisciplinary, encompassing technical, legal, and ethical aspects. A company developing a chatbot will have different priorities than a company merely using that chatbot in customer service. Even the use of tools like ChatGPT or Microsoft Co-Pilot counts as AI deployment.

Do I have to expect penalties, and will I be audited?

The EU AI Act itself does not provide for direct administrative penalties for a violation of Article 4 AIA. Nevertheless, non-compliance can have consequences: Article 4 AIA gives concrete, AI-specific shape to existing corporate due diligence obligations, such as those anchored in Austrian law (e.g., § 1313a ABGB).

This means: If damages occur that can be traced back to a lack of AI competence within the company, Article 4 AIA clarifies that an obligation for corresponding training and competence development would have existed. This can be relevant for liability in the event of damage.

The underlying logic is clear: The aim is to prevent damage due to a lack of AI competence – a dictate of business prudence that serves the protection and success of every company.

Implementation Recommendation: From Obligation to Opportunity

Artificial intelligence is rapidly evolving into a fundamental component of business processes and value creation – much like computers or the internet once did. Not engaging with it is not a viable long-term option. The requirements of the EU AI Act should therefore not be seen as a burdensome obligation that can be checked off with a quick workshop. Rather, it is about a continuous process that should be firmly integrated into the company. This process not only helps to minimize risks but, above all, opens up significant opportunities.

We recommend a structured four-step approach:

Step 1: Anchor AI competence at a strategic level

The use of AI in your company is a directional, strategic decision. A clear AI strategy is recommended here, defining which goals you are pursuing with AI and how AI deployment fits into your overall corporate strategy, values, culture, and risk appetite. It’s about answering the overarching “What do we want to achieve?” and “Why?” questions.

To anchor this strategic orientation and the principles for responsible AI use in the company, and to answer the question “How do we implement it?”, creating an internal AI policy is advisable. It serves as the central document that guides all employees in the practical implementation of your AI strategy.

This AI policy should specifically:

  • Clearly present the core aspects of your AI strategy as well as the derived guiding principles and objectives for AI deployment in the company.
  • Define unambiguous rules for the development, procurement, and use of AI systems, including references to other relevant corporate policies (e.g., data protection, IT security, ESG).
  • Establish clear roles and responsibilities for AI-related decisions and processes (e.g., who approves the deployment of new AI tools?).
  • Be transparently communicated internally and made easily accessible to all employees.

A helpful template for an AI policy as well as general information on developing an AI strategy is provided by the Austrian Federal Economic Chamber (WKO): https://www.wko.at/digitalisierung/ki-guidelines-fuer-kmu (Note: Link leads to German content)

For a deeper insight into the topic of AI strategy, we recommend our current blog post (Link).

Step 2: Assess the status quo – Where does your company stand?

Artificial intelligence is often already an unnoticed part of many standard software products (e.g., Microsoft Co-Pilot in Office products) or is implemented through updates. It is therefore possible that AI systems are already being used in your company without the responsible parties being aware of it.

  • Conduct an inventory of the software currently in use. Existing lists (e.g., from IT security, data processing register) can serve as a basis.
  • Identify responsibilities for the respective software deployment areas.
  • Update this overview regularly, especially when introducing new AI components or systems.
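The inventory described above can be kept as a simple structured register. The following Python sketch is purely illustrative; the field names and review interval are our assumptions, not anything prescribed by the AI Act:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SoftwareEntry:
    """One row of the company-wide software inventory (illustrative fields)."""
    name: str               # product or system name
    owner: str              # responsible department or person
    has_ai_component: bool  # e.g. AI features added later via an update
    last_reviewed: date     # date of the last inventory review

def entries_due_for_review(inventory, today, max_age_days=180):
    """Flag entries whose last review is older than max_age_days."""
    return [e for e in inventory
            if (today - e.last_reviewed).days > max_age_days]

inventory = [
    SoftwareEntry("Office suite with Co-Pilot", "IT", True, date(2024, 3, 1)),
    SoftwareEntry("ERP system", "Finance", False, date(2025, 1, 15)),
]
overdue = entries_due_for_review(inventory, today=date(2025, 6, 1))
```

In practice, the same structure fits just as well in a spreadsheet or the existing data processing register; the point is that ownership and review dates are recorded and regularly updated.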

Step 3: Implement AI competence operationally – Tailored and practice-oriented

The acquisition of AI competence at the operational level depends on your corporate strategy and the type of AI systems used. Competence development should cover technical, legal, and ethical aspects, risk awareness, and practical application skills, taking into account the employees’ knowledge level and the AI system’s risk level.

  • Target group orientation: Executives, project teams, trainees, or even external service providers have different requirements.
  • Training formats: Workshops, lectures, e-learning – depending on needs and target group, voluntary or mandatory, ideally recurring.

Possible training content:

  • Knowledge of the corporate strategy and internal AI policies.
  • Basic digital literacy skills (e.g., according to DigComp 2.3 AT).
  • Fundamental understanding of AI: functionality, application examples, innovation potential.
  • Specifics of AI: bias, hallucinations, importance of training data.
  • User training for specifically deployed AI systems (e.g., prompting workshops for text AI).
  • Legal aspects: data protection, labor law, copyright law, basics of the AI Act.

The WKO offers an overview of practice-oriented AI solutions: https://www.wko.at/digitalisierung/ki-loesungen-fuer-die-praxis (Note: Link leads to German content)

Information on the AI Act can also be found at the AI Service Point of RTR: https://ki.rtr.at (Note: Link leads to German content)

Best practices for competence development:

  • Regular inventory of deployed AI systems and re-evaluation of use cases.
  • Interdisciplinary monitoring (technology, legal, compliance, IT security, HR, works council – depending on company size).
  • Practice-oriented learning through testing and evaluating new systems.

Step 4: Documentation, Evidence, and Continuous Improvement

To demonstrate the implementation of Article 4 AIA, meticulous documentation is essential.

  • Record your AI strategy and internal AI policy in writing and make them accessible. Template examples (see WKO link above) can help.
  • Develop a training and knowledge transfer concept.
  • Document conducted training sessions in the personnel file of each affected employee with details such as:
    • Type of training (physical, e-learning, etc.)
    • Provider (for external training)
    • Training content and objective
    • Date of training
    • Planned repetitions
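The training details listed above can likewise be captured in a structured record. A minimal sketch, with hypothetical field names (the AI Act does not prescribe a format):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class TrainingRecord:
    """Training entry for an employee's personnel file (illustrative fields)."""
    employee: str
    training_type: str               # e.g. "workshop", "e-learning"
    provider: Optional[str]          # only relevant for external trainings
    content: str
    objective: str
    held_on: date
    next_repetition: Optional[date]  # planned refresher, if any

def repetitions_due(records, today):
    """Return records whose planned refresher date has been reached."""
    return [r for r in records
            if r.next_repetition is not None and r.next_repetition <= today]

records = [
    TrainingRecord("A. Muster", "e-learning", None,
                   "Basics of the AI Act", "risk awareness",
                   held_on=date(2025, 2, 10),
                   next_repetition=date(2026, 2, 10)),
]
```

Tracking the planned repetition date alongside each record makes the "continuous improvement" part of the documentation actionable rather than a one-off filing exercise.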

Conclusion: AI Competence as a Strategic Advantage

The requirements of the EU AI Act regarding AI competence are no cause for panic but rather an invitation to engage with artificial intelligence strategically and thoroughly. Instead of booking overpriced ad-hoc courses, companies should view the development of AI competence as an integral part of their corporate development. A well-thought-out, step-by-step approach not only enables compliance with legal requirements but, above all, allows companies to recognize and sustainably leverage the enormous opportunities of AI for their own business. Consider it an investment in the future viability of your company.

Do you need assistance in developing your AI strategy, implementing the necessary steps, or leveraging the potential of AI in your company? As a specialized Data Science and AI consultancy, we are happy to support you with our expertise.

Frequently Asked Questions

Is AI competence mandatory as of February 2, 2025?

Yes, initial provisions of the EU AI Act, including Article 4 on AI competence, came into force on February 2, 2025. Providers and operators of AI systems must ensure their personnel possess sufficient AI knowledge, considering their technical expertise, experience, education, and the context in which AI systems will be deployed.

How does the EU AI Act define AI competence?

According to Article 3, number 56 of the EU AI Act, AI competence is defined as the skills, knowledge, and understanding that enable providers, operators, and affected persons to deploy AI systems in an informed manner and to be aware of the opportunities, risks, and potential harm they may cause, considering their respective rights and obligations under this regulation.

Who is affected by the AI competence requirement?

The requirement for AI competence applies to all actors along the AI value chain, depending on their role (cf. Recital 20 AIA). This includes providers (e.g., developers of AI software) and operators (companies using AI systems). Even the use of tools like ChatGPT or Microsoft Co-Pilot is considered AI use, necessitating relevant competence.

Are there penalties for violating Article 4 AIA?

The EU AI Act itself does not stipulate direct administrative penalties for a breach of Article 4 AIA. However, non-compliance can lead to consequences. If damages arise from a lack of AI competence within a company, Article 4 AIA clarifies that an obligation for appropriate training and competence development would have existed, potentially leading to liability.

Why is a lack of AI competence a liability risk?

A lack of AI competence can lead to damages for which companies may be held liable. Article 4 AIA specifies corporate due diligence obligations concerning AI. It is a matter of sound business practice to prevent damages due to insufficient AI competence and to leverage the opportunities AI offers, protecting and ensuring the success of any company.

What is the first step toward building AI competence?

The initial step is to anchor AI competence at a strategic level. This involves developing a clear AI strategy that defines company objectives with AI and how its deployment aligns with the overall corporate strategy, values, culture, and risk appetite. An internal AI policy is recommended to embed this strategic direction and principles for responsible AI use.

What should an internal AI policy contain?

An internal AI policy should clearly outline core aspects of the AI strategy, derived principles and goals for AI deployment, define rules for the development, procurement, and use of AI systems, including references to other relevant corporate policies (e.g., data protection, IT security, ESG), establish clear roles and responsibilities for AI-related decisions, and be transparently communicated and accessible to all employees.

How can companies determine which AI systems they already use?

Companies should conduct an inventory of currently used software, as AI is often embedded in standard products or implemented via updates without explicit awareness. Existing lists (e.g., from IT security, data processing registers) can serve as a basis. Responsibilities for software deployment areas should be identified, and this overview should be regularly updated.

What content should AI trainings cover?

Training content should encompass the company's AI strategy and internal AI policies, foundational digital literacy (e.g., according to DigComp 2.3 AT), a basic understanding of AI (how it works, application examples, innovation potential), specifics of AI (bias, hallucinations, importance of training data), user training for specific AI systems (e.g., prompting workshops for text AI), and legal aspects (data protection, labor law, copyright, AI Act basics).

What documentation is required?

Thorough documentation is essential to demonstrate compliance with Article 4 AIA. This includes written AI strategy and internal AI policy documents, a training and knowledge transfer concept, and records of completed training sessions in each affected employee's personnel file, detailing the type, provider, content, timing, and planned repetitions of the training.
