EU AI Act: Seminar to advance your AI expertise

The EU AI Act has been in force since August 1, 2024. Attend our training courses to develop the knowledge you need in the field of AI.

What you can expect

Risk classification of AI systems in accordance with the AI Act

Companies must assign their AI systems to one of several risk classes, ranging from minimal to high risk. Depending on the classification, specific requirements apply, especially when AI is used in areas such as health, education and security.

Transparency and documentation obligations

Transparency must be ensured even for low-risk systems. Users of high-risk AI systems must carefully monitor, document and control their use in order to minimize risks to safety and fundamental rights.
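What such ongoing documentation can look like in practice varies from company to company. As a rough, purely illustrative sketch – the file name ai_usage_log.csv and the field names are our own assumptions, not requirements of the AI Act – a deployer could keep a simple usage log for later audits, for example in Python:

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

# Illustrative log file and schema; the AI Act does not prescribe a format.
LOG_FILE = Path("ai_usage_log.csv")
FIELDS = ["timestamp", "system", "input_summary", "output_summary",
          "human_reviewer", "remarks"]


def log_ai_decision(system: str, input_summary: str, output_summary: str,
                    human_reviewer: str, remarks: str = "") -> None:
    """Append one usage record to a simple CSV log for later audits."""
    is_new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as fh:
        writer = csv.DictWriter(fh, fieldnames=FIELDS)
        if is_new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system": system,
            "input_summary": input_summary,
            "output_summary": output_summary,
            "human_reviewer": human_reviewer,
            "remarks": remarks,
        })


if __name__ == "__main__":
    # Hypothetical example: a decision supported by a high-risk system.
    log_ai_decision(
        system="cv-screening-model",
        input_summary="Application #4711",
        output_summary="Recommended for interview",
        human_reviewer="HR team lead",
    )
```
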
Security and compliance checks

Regular audits and checks are required to ensure conformity with the provisions of the AI Regulation. This includes security measures, risk analyses and compliance with ethical standards.

Online Training

AI competence for employees in the context of the EU AI Act – easily accessible as an online training

Enhance Your AI Expertise | €99

EU AI Act

In a nutshell

EU AI Act: The new AI regulation comes into force

The EU AI Act is a European Union law regulating the use of artificial intelligence (AI); it entered into force on August 1, 2024. The AI Act is aimed at all companies, authorities and organizations that develop or use AI in the EU. It is intended to promote the ethical and safe use of AI technologies, strengthen trust in innovation and minimize risks at the same time. The regulation becomes applicable in several stages.

Key data

New Standards for using Artificial Intelligence under the AI Act

The regulations of the EU AI Act provide for a classification of AI systems according to their risk. A distinction is made between four risk classes:

  • Prohibited AI practices
  • High-risk AI
  • AI with a medium and specific risk
  • AI with a low risk

In principle, the higher the potential risk a system poses to people, the more strictly it is regulated. Prohibited AI systems are considered to pose an unacceptable risk; examples include social scoring and mass surveillance technologies. High-risk AI systems are permitted, but they are subject to strict requirements because they can have a direct impact on people’s lives and rights. Such systems are used, for example, in critical infrastructure, in education, in personnel administration or for credit scoring.
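To make the risk-based approach tangible, here is a minimal, purely illustrative Python sketch. The keyword sets and the function classify_use_case are our own simplification for demonstration purposes; an actual classification must follow the legal definitions of the AI Act and will usually require legal review.

```python
from enum import Enum


class RiskClass(Enum):
    """The four risk classes described above."""
    PROHIBITED = "prohibited AI practice"
    HIGH = "high-risk AI"
    MEDIUM_SPECIFIC = "AI with a medium and specific risk"
    LOW = "AI with a low risk"


# Illustrative, non-exhaustive keyword sets; not a legal mapping.
PROHIBITED_USES = {"social scoring", "mass surveillance"}
HIGH_RISK_USES = {"critical infrastructure", "education",
                  "personnel administration", "credit scoring"}
TRANSPARENCY_USES = {"chatbot", "ai-generated content"}


def classify_use_case(use_case: str) -> RiskClass:
    """Very rough first-pass triage of an AI use case into a risk class."""
    normalized = use_case.strip().lower()
    if normalized in PROHIBITED_USES:
        return RiskClass.PROHIBITED
    if normalized in HIGH_RISK_USES:
        return RiskClass.HIGH
    if normalized in TRANSPARENCY_USES:
        return RiskClass.MEDIUM_SPECIFIC
    return RiskClass.LOW


if __name__ == "__main__":
    for case in ("credit scoring", "social scoring", "chatbot", "spam filter"):
        print(f"{case}: {classify_use_case(case).value}")
```

A quick triage like this only helps to prioritize use cases for a proper assessment; it does not replace one.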

What you can expect

Your new obligations as an AI provider and/or operator

Under the EU AI Act, providers and operators of large language models (LLMs) have clear obligations. Providers must ensure that AI-generated content is marked in a machine-readable format so that it can be reliably identified as artificially generated.

Operators, also known as users or deployers, must likewise disclose this labeling when the content is published and concerns matters of public interest. Labeling is not required if the content has undergone human review or editorial control.

Compulsory further training: why you need trained employees

The new legislation requires companies to ensure that employees working with AI systems have sufficient AI skills, knowledge and abilities. Providers and operators of AI technology must ensure that their staff are comprehensively trained by AI experts so that they can use AI systems safely and responsibly. This is not just about technical knowledge, but also about understanding the opportunities and risks of AI. Accordingly, training must be tailored to employees’ existing knowledge and their specific area of application.

Mandatory labeling: AI-generated content gets a label

The EU AI Act sets out clear labeling requirements for AI-generated content to ensure transparency and protect the public from misleading information. One important regulation concerns the labeling requirement for AI-generated texts. Articles that were created with the help of AI and deal with public affairs must be clearly marked as such. This measure is intended to ensure that readers know when they are dealing with machine-generated content, especially if it concerns topics of public interest.

In addition, there are strict transparency requirements for AI-generated images and videos. Visual content generated or manipulated by AI – known as deepfakes – must be clearly labeled. This is intended to prevent false or manipulated content from being perceived as genuine and thus maintain the public’s trust in digital media. The AI Act’s labeling requirements aim to minimize the misuse of AI technologies and at the same time promote the ethical use of these systems.
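The AI Act requires such markings to be machine-readable, but it does not prescribe one specific technical format. As a purely illustrative sketch – the schema and field names such as ai_generated and human_editorial_review are our own assumptions – a provider could attach a simple metadata envelope to generated text like this:

```python
import json
from datetime import datetime, timezone


def mark_ai_generated(text: str, model_name: str) -> dict:
    """Wrap AI-generated text in a simple, machine-readable envelope.

    Illustrative schema of our own; the AI Act requires machine-readable
    marking but does not mandate this particular format.
    """
    return {
        "content": text,
        "ai_generated": True,
        "generator": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        # Set to True once a human has taken editorial responsibility.
        "human_editorial_review": False,
    }


if __name__ == "__main__":
    article = mark_ai_generated(
        "Short AI-generated summary of a public-interest topic ...",
        model_name="example-llm",
    )
    # The envelope can be stored or transmitted alongside the visible text.
    print(json.dumps(article, indent=2, ensure_ascii=False))
```

The key point is that the marking can be evaluated automatically, regardless of whether it is implemented as metadata, embedded content credentials or a watermark.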

You should have this on your radar

Ready for AI compliance

Training and further education
Customized training for employees to build the necessary knowledge and skills for working with AI in accordance with the EU AI Act.
Documentation
We support you in preparing the documentation required as evidence under the AI Act.
Consulting
We support you in implementing data protection requirements (e.g. GDPR) and the requirements of the AI Act.
Audit of AI systems
Analysis and documentation of your AI applications in preparation for audits against legal requirements and ethical standards.
Risk assessment
We support you in the risk assessment of AI systems in order to identify potential threats and vulnerabilities and implement suitable measures to minimize risks.
Labeling
We support you in implementing the legally required labeling of AI-generated content such as texts, images and videos.

What can I do?

Build sufficient AI expertise with expert support

At Open Logic Systems, we provide expert advice to AI operators and users to ensure full AI compliance. We help you implement the requirements of the EU AI Act and design your AI systems responsibly and in compliance with the law – from risk analysis and labeling obligations to training your team.

Prices

Our services to ensure your AI compliance

Knowledge is the key to success: the better you understand the opportunities and risks of artificial intelligence, the more effectively you can leverage its capabilities. A well-trained team is a real competitive advantage for your company, and thorough training ensures that your employees have the AI literacy the EU AI Act calls for. Get in touch – we will be happy to find the right solution for you.

Online Seminar

AI expertise for employees in the context of the EU AI Act
€99 one-time payment
  • For employees from all departments
  • Duration: 1.5 hours
  • Prerequisite: None
  • Online seminar
  • Dates available on request

Seminar Content: AI Competence for Employees in the Context of the AI Act

  • Fundamentals of Artificial Intelligence – Key Terms and Concepts
  • Functions and Capabilities of AI-based Systems
  • Operation, Opportunities, and Risks of Generative AI (Language Models / LLMs)
  • Scoring and Classification
  • Recommendation Systems
  • Forecasting and Simulations
  • Use Cases from Everyday Business Scenarios
  • Open Discussion

All topics will be explained using practical examples.

The fundamentals of artificial intelligence are taught online through a combination of theory and practical examples.

Training

AI expertise for process and data managers as part of the AI Act
€199 one-time payment
  • Process and data managers
  • Management functions
  • Duration: 2.5 hours
  • Prerequisite: None
  • Training of Custom AI Models
  • Online seminar or in person event
  • Dates available on request

Seminar Content: AI Competence in the Context of the AI Act

  • Brief Recap: Fundamentals of Artificial Intelligence
  • Example-based Training and Use of Models for the Following Tasks:
    • Scoring of customers, suppliers, and prospects
    • Fraud detection
    • Technical service / customer support
    • Master data management
    • Customer lifecycle and recommendations
    • Demand forecasting
    • Information retrieval and extraction in the context of sales and HR
  • Framework for the Use of Artificial Intelligence: Opportunities and Risks
  • Transparent AI and Real-world Use Cases
  • Open Discussion

Participants will have the opportunity to train and deploy their own AI models live. No prior AI or IT knowledge is required to participate.

The fundamentals of artificial intelligence are taught through a combination of theory and practical examples, either online or in person.

Training

AI expertise for process and data managers as part of the AI Act
€1,499 one-time payment
  • For employees from all departments
  • For managers from all specialist areas
  • Duration: 6 hours
  • Prerequisite: None
  • Training of Custom AI Models
  • In person event
  • Dates available on request

Individual Seminar: AI Competence within the Framework of the AI Act – Example Content:

  • Fundamentals of Artificial Intelligence – Terms and Concepts
  • Functions and Capabilities of AI-based Systems
  • Function, Opportunities, and Risks of Generative AI (Language Models / LLMs)
  • Description and Evaluation of Own Use Cases in the Field of Artificial Intelligence:
    • Data Foundations
    • Legal Classification
    • Technical Feasibility
    • Business Value
  • Training and Use of AI Models in Areas Such As:
    • Procurement
    • Sales
    • Controlling and Finance
    • Master Data
    • Human Resources
    • Customer Service
  • Framework Conditions for the Use of Artificial Intelligence: Opportunities and Risks
  • Explainable Artificial Intelligence and Use Cases
  • Open Discussion

The fundamentals of artificial intelligence are taught through a combination of theory and practical examples, in person or, on request, online.

We design the content of the AI Competence seminar within the framework of the AI Act together with you, according to your needs. No artificial intelligence or IT skills are required to participate. Please note that we do not offer legal advice.

FAQ

Frequently asked questions

The EU AI Act is still relatively new. The specifications are constantly being expanded and updated. Here you will find an overview of frequently asked questions:

Which AI applications are prohibited or classified as high-risk?

  • Social scoring: AI systems that evaluate people based on their social status or personality traits.
  • Mass surveillance: Biometric identification systems used for blanket surveillance in public spaces, except in the case of serious security threats.
  • Harm to children: AI systems that encourage dangerous behavior in children or specifically exploit the weaknesses of vulnerable groups.
  • Deepfakes: AI technologies that generate realistic but fake image, audio or video content.
  • Emotion recognition: AI systems that monitor the emotional states of people at work.
  • Security-critical infrastructures: AI systems used to manage and operate security-critical infrastructures such as energy or transportation systems.
  • Educational institutions: AI systems that decide on access to education and training institutions or are used to assess pupils and students.
  • Personnel and employment: AI systems used in the work context for recruitment, promotion decisions, dismissals, task allocation and for monitoring and evaluating employee performance and behavior.
  • Credit checks: AI systems used to assess the creditworthiness of natural persons, with the exception of systems used to detect financial fraud.

Who counts as a provider or operator of an AI system?

By definition, an “AI system provider” is a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model, or has one developed, and places it on the market or puts it into service under its own name or trademark, whether in return for payment or free of charge. An “operator” of an AI system is a natural or legal person, authority, institution or other body that uses an AI system under its own responsibility, unless the AI system is used in the course of a personal, non-professional activity.

When do the individual provisions of the AI Act apply?

The EU AI Act provides for a gradual implementation of the law by 2030. A regularly updated timeline of the most important dates can be found here.

Does the AI Act only apply to companies based in the EU?

No. As with the GDPR, the AI Regulation applies to operators of AI systems if they are established or “physically present” in the EU. The AI Act is also aimed at providers and operators of AI that are established or located outside the EU, but whose systems generate results that are used in the Union.
CONTACT

How can we support you?

Would you like to receive further information or are you interested in an individual consultation? Simply send us a short message telling us how we can help you and we will get back to you as soon as possible.

Let’s get started.




    I agree to the collection, processing and storage of my data provided here in accordance with your privacy policy. I can revoke my consent at any time by informally notifying you.

    Free consultation

    I am interested in a free consultation and would like to discuss individual options. Please contact me using the following contact details:

    EU AI Act | Request an appointment

    I am interested in scheduling an appointment for an EU AI Act training (2.5h) and would like to discuss the available options. Please feel free to contact me at the following details:

    EU AI Act | Request an appointment

    I am interested in scheduling an appointment for an EU AI Act training (6h) and would like to discuss the available options. Please feel free to contact me at the following details: