Digitalization

EU AI Act: Why Artificial Intelligence is also an Issue for Management Systems

Manuel Klötzer / 12.03.2025

Hardly any other technology is currently being discussed as controversially as artificial intelligence (AI). Alongside great hopes, it inevitably raises questions about the risks of its use. And it brings with it requirements that need to be met - such as those of the AI Act at EU level, whose provisions have been coming into effect in stages since the beginning of February 2025. A responsible approach to AI therefore encompasses far more than just the question of its most profitable use.

Risks and Opportunities of Artificial Intelligence

Artificial intelligence can make it easier to deal with the challenges of our time in a variety of ways. For example, the analysis of large amounts of data can help to improve the quality of products and services, increase productivity and establish new business models. It can also benefit society, for example by detecting diseases more reliably. 

On the other hand, AI can also replace human jobs or change working conditions. The ability to analyze previously unimaginable amounts of data creates new opportunities for surveillance, discrimination and cybercrime. And questions inevitably arise: How secure is sensitive or copyrighted information? How reliable are AI-generated answers? And who is liable if artificial intelligence spreads false information, makes incorrect decisions or causes damage in other ways?

AI Act Follows a Risk-based Approach

In order to counter the risks and address issues such as data protection, transparency and liability in connection with AI, legislative bodies such as the European Union are also increasingly addressing the topic. The EU Artificial Intelligence Act (also known as the AI Act) came into force in August 2024. It imposes obligations primarily on providers, but also on (professional) users of AI systems. The regulation follows a risk-based approach: the greater the risk associated with an AI application, the more far-reaching the regulation. Certain practices whose risks are considered unacceptable have been banned under the AI Act since the beginning of February 2025.

The other provisions of the AI Act will apply in stages over several years. Among other things, the AI Act obliges providers of high-risk AI systems to set up a quality management system (described in more detail in the text of the regulation). This “shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions” and, alongside a number of other aspects, must also include a risk management system, whose requirements are likewise described in the text of the AI Act.

To address liability issues in connection with artificial intelligence, the revised Product Liability Directive came into force at EU level at the beginning of December 2024; unlike its predecessor, it explicitly extends the definition of a product to include software and AI systems. In contrast to the AI Act, whose provisions are directly binding in all EU member states, this is a directive, meaning the individual states have two years to transpose it into national law. The same previously applied to the EU Machinery Directive, which will be completely replaced by the new Machinery Regulation by 2027; individual provisions of the regulation have been coming into force in stages since July 2023. The Machinery Regulation is another example of a legal regulation that explicitly addresses artificial intelligence - and more are likely to follow.

Management System Ensures Greater Reliability

Under the AI Act, certain companies will therefore be required in future to address artificial intelligence as part of a management system. However, even without an explicit regulatory requirement, doing so is a good idea in many cases. This is because a management system enables risks and opportunities to be considered together with the requirements of all interested parties. In addition to legislators, these include customers and employees whose interests could be affected by AI applications. The management system can also be used to establish processes that serve to fulfill these requirements and that can be described in more detail in documents. In this way, it also helps to achieve greater reliability in dealing with AI applications.

However, AI is just one of many topics that companies have to deal with. An Integrated Management System (IMS) is a good way to address all of these topics together within a uniform structure, thereby recognizing interrelationships and exploiting synergies. There are various ways of integrating artificial intelligence into an IMS: many companies, for example, already have a quality management system in accordance with ISO 9001, which explicitly requires the consideration of risks and opportunities, among other things. The risks and opportunities of artificial intelligence can be taken into account in this context as well.

New ISO Standard for AI Management System

For some companies, certification in accordance with ISO 42001 may also be the method of choice: this standard, published at the end of 2023, formulates requirements for an artificial intelligence management system. Integration into the IMS is made easier by the fact that ISO 42001, like the other management system standards of the International Organization for Standardization (ISO), is based on the Harmonized Structure and therefore has a similar structure. However, each company must decide for itself which solution is the right one in view of its own framework conditions and requirements (including legal requirements).

When setting up and expanding (Integrated) Management Systems, it is helpful to link risks and opportunities, requirements, processes and documents in a meaningful way. This also serves to increase the transparency of the management system by enabling all those involved in the company to access these resources. In many cases, the use of appropriate software is recommended for this purpose.

Software Offers Support for Management Systems

Figure: Interaction of the modules that help your company to make your QMS or IMS effective

With modules such as Risk Management, Requirements Management or Process Management, the Babtec software supports the handling of all relevant requirements as well as the development and expansion of (Integrated) Management Systems - with intuitive usability and based on a common data master.
