The AI Act, published in the Official Journal of the European Union on July 12, 2024, marks a major milestone in the international regulation of artificial intelligence (AI). With this legislation, the EU is taking a pioneering role in setting binding standards for the use of AI. However, while this progress is welcome, it also poses significant challenges for businesses, especially small and medium-sized enterprises (SMEs) and start-ups.
In conversation with Manoj Kahdan, the key aspects and potential impacts of the AI Act on the economy are explored. The discussion covers how businesses will need to adapt to the new regulations, the risks and opportunities that arise, and how this regulation might affect Europe's competitiveness on the global stage.
About the interviewee: Manoj Kahdan is a research associate and part of the Digital Responsibility & Innovation Lab at RWTH Aachen University. His research focuses on AI governance and AI certification, and he leads the "Certified AI" project on behalf of RWTH Aachen.
The AI Act has come into force and is sparking controversial discussions. Where do you see the biggest challenges for companies affected by the new regulations?
Manoj Kahdan: The AI Act is an important step in AI regulation, but it also brings significant challenges, particularly for smaller companies. One of the biggest challenges is the classification of AI applications. The AI Act categorizes AI systems into different risk levels – from prohibited practices and high-risk systems down to those with only minimal or no risk. Each category comes with different compliance requirements.
The problem is that many companies are now realizing that software they have been marketing for years is suddenly classified as AI. This reclassification means that businesses must navigate new, often complex regulations for their existing processes and products. For SMEs, which may only have one or two people handling compliance matters, this is an enormous burden. Companies not only need to understand and meet the new requirements, but also allocate resources to implement them. In many cases, this may mean seeking external consultancy, which adds further costs. For consultants, of course, this opens up a new and lucrative market.
That sounds like a huge burden for SMEs. Do you think some companies will try to avoid using the term "AI" to sidestep the new regulations?
Manoj Kahdan: Yes, that’s certainly possible. In recent years, the term "AI" has been used so loosely that it became almost inflationary. Many products were marketed as AI even when they were just simple algorithmic processes. Now, companies might take the opposite approach and stop labeling their products as AI in the hope of avoiding the strict compliance requirements.
Can regulation kill innovation?
Manoj Kahdan: Put that bluntly, it might seem provocative at first, but it reflects a real risk. If regulations are too strict or create high barriers, they can indeed stifle innovation. Small businesses and start-ups are often the driving forces behind technological breakthroughs, but they don’t have the same resources as large companies to meet complex regulatory requirements.
A simple example: If a start-up develops an innovative idea that could be classified as a high-risk application, it would need to make enormous efforts to meet all pre-market and post-market compliance requirements. These requirements can be so discouraging that the company might decide not to pursue the idea further or to move it to a region with less restrictive regulations. In the long term, this could lead to a loss of Europe’s competitiveness.
Do you think the AI Act could put the European economy at a disadvantage compared to the global market?
Manoj Kahdan: In the short term, that risk exists. Europe is indeed a leader in AI regulation, but this also means we are imposing significant constraints on ourselves, while other regions of the world, like the USA or China, have less stringent rules. This could lead to companies preferring to implement their innovative ideas where regulations are less restrictive.
On the other hand, in the long term, Europe could also take on a leading role through this regulation. The GDPR, the General Data Protection Regulation, was initially met with criticism but has since become a global "gold standard." The same could happen with the AI Act. If Europe succeeds in establishing a value-based, transparent, and fair system for AI regulation, it could serve as a global model. However, this will only work if the regulation is practical and flexible enough not to stifle innovation.
What steps should companies take now to prepare for the requirements of the AI Act?
Manoj Kahdan: First, it’s crucial for companies to understand where their applications fall within the risk categories of the AI Act. This means they need to carefully analyze their products and services to determine if and how they are affected by the new regulations.
Second, they should consider seeking expert knowledge, either through internal training or by working with external consultants. The requirements for transparency, documentation, and reporting are not trivial, so it's important for companies to have a clear strategy from the outset on how they plan to meet these demands.
Another important point is collaboration within the company. Compliance with the AI Act doesn’t just concern the IT department; it requires coordination across various departments, including legal, data protection, IT, and management. It’s crucial that all parties understand the requirements they face and how they can work together to meet them.
What does the future of the AI Act look like? Will it evolve, and if so, in what direction?
Manoj Kahdan: The AI Act is a dynamic instrument, and it’s very likely that it will evolve based on companies' experiences and market reactions. In the coming years, it will be particularly interesting to see how businesses implement the requirements and how flexibly lawmakers respond to challenges.
One example is the so-called regulatory sandboxes: test environments in which companies can develop and test their AI applications without having to immediately meet all regulatory requirements. These sandboxes could play a key role in promoting innovation without applying too much regulatory pressure. It will be interesting to see whether this initiative truly leads to an innovation boost or whether further adjustments will be necessary.
In the long term, I hope the AI Act will be seen not only as a set of regulations but also as a tool to promote ethical and responsible AI development. If we succeed in setting a global standard, this could give Europe a unique position in the global AI landscape.
One last question: What advice would you give to small businesses and start-ups facing major challenges due to the AI Act?
Manoj Kahdan: My advice would be not to be discouraged by the challenges, but to view them as opportunities. Yes, the requirements are demanding, but they also offer the chance to position yourself as a responsible and transparent player in the market from the very beginning. Companies should take the time to prepare thoroughly and develop a clear strategy for how they will meet the requirements of the AI Act.
At the same time, it’s important to engage in dialogue with regulatory authorities. The AI Act is still in its early stages, and there is certainly room for adjustments. If companies actively share their experiences and challenges, they can help make the regulation more practical and better aligned with the real needs of the market.
Thank you for the insightful conversation. We look forward to seeing how the regulation and the market evolve in the coming years.
The interview was conducted by Franziska Peters, Marketing & Communications Manager at AI Grid.