In a world where artificial intelligence (AI) increasingly permeates everyday life, companies face pressing questions: How can AI technologies make work easier without creating risks such as data privacy breaches or loss of control? Franziska Peters, Communications Manager at AI Grid, interviewed Pauline M. Kuss, a member of AI Grid whose PhD research on sociotechnical AI governance at Freie Universität Berlin explores precisely these challenges. Pauline brings an interdisciplinary perspective to the complex questions surrounding AI in practice.
Insights from Research: What Is Organizational AI Governance?
“I work on organizational AI governance,” Pauline begins. “This means I look at how companies integrate AI into their processes and products, focusing on the risks that arise and how to mitigate them.” Her goal is to develop a framework that serves as a guide for companies to use AI safely and efficiently.
She offers an example to illustrate the challenges: “Imagine a recruitment company using freely available models like ChatGPT. Employees upload résumés into the system without realizing that this could violate data privacy rules. At the same time, there’s the risk of the system delivering inaccurate results. Companies need strategies that enable the use of such systems while keeping risks like these under control.”
AI in Medicine: Greater Efficiency, New Challenges
AI offers many benefits but also raises complex questions. “Especially in medicine, we see how AI systems could relieve doctors and nursing staff,” Pauline explains. “For example, these systems can analyze X-rays and highlight abnormalities, enabling medical staff to focus on specific areas. This saves time and improves diagnostics.”
But how can we ensure these systems are used responsibly? “Control must always remain with humans. It’s essential to understand how these systems work and to intervene when errors occur,” she emphasizes. “Ideally, AI takes over monotonous tasks, freeing people for creative and meaningful work.”
“In hospitals, for instance, AI systems could simplify documentation work,” Pauline says. “However, these systems must be intuitive and avoid creating additional burdens. Otherwise, frustration arises instead of relief.”
“Technology is Never Neutral”
Her research extends far beyond technical aspects, encompassing ethical and societal issues. “Technology is never neutral,” Pauline notes. “It shapes social structures and individual identities. A key question is how we can ensure that people retain control and feel valued in their work.”
A central theme of her work involves foundation models like GPT. These large AI systems are developed by a few companies and form the basis for many applications. “Training such models is resource-intensive and expensive, meaning only a few players dominate the market. At the same time, companies using these models often have little insight into how they work. This shifts responsibility and control,” she explains.
From Data Protection Law to Interdisciplinary Research
Pauline’s journey into research began with a degree focusing on privacy and data protection law. “During my undergraduate studies, I learned a lot about the legal aspects of data but wanted to understand the technical side as well,” she recalls. She pursued a master’s in data science, gaining insights into neural networks and deep learning, while simultaneously completing a second master’s in technology law to deepen her understanding of AI regulation.
“This interdisciplinary perspective allows me to connect legal, technical, and ethical aspects,” Pauline says. “This is particularly important when discussing AI because these technologies deeply influence our society.”
The Ideal Relationship Between Humans and AI
When asked what the future relationship between humans and AI should look like, Pauline has a clear vision: “Control must always remain with humans. AI should support us, not dictate to us. If we manage to make these systems transparent and secure, we can fully harness their potential.”
She also sees companies as having a responsibility: “They need to provide their employees with the right tools and training to critically assess and use these systems responsibly.”
Personal Exchange Through the AI Grid Network
Finally, Pauline talks about her experiences on a Science & Innovation Tour to Zurich, organized by AI Grid with a delegation of PhD researchers from the network. “It was incredibly enriching to exchange ideas with other researchers. You gain new perspectives and can see your own work in a larger context,” she says. She was particularly impressed by the visits to IBM Research and Disney Research. “The companies’ different approaches and cultures showed just how versatile the applications of AI can be.”