AI Grid on tour in Paris: How can AI become transparent and trustworthy?

Trustworthy & Explainable AI, promotion of young AI talent, exchange between the research locations Germany and France

Berlin, April 18, 2024 - What methods are there to make AI systems more transparent? How can AI be used in a socially responsible way and according to ethical principles? How can we ensure that AI systems make fair and unbiased decisions? These are the questions addressed by the Trustworthy and Explainable AI research focus area. At the end of April, 15 AI Grid members will travel to Paris to discuss these and other current AI topics with French experts.

From April 24-26, 2024, the next Science & Innovation Tour of AI Grid will take place. In Paris, 15 young researchers will have the opportunity to immerse themselves in the AI landscape of the French capital and meet experts from the most renowned French research centers as well as companies and start-ups. The trip will focus on the topic of Trustworthy & Explainable AI.

How can AI become transparent and trustworthy?

Trustworthy and explainable artificial intelligence is highly relevant: As AI systems become ever more deeply embedded in our everyday lives and make decisions that affect us directly, the need to understand and trust these systems is growing. The new generation of AI confronts us with fundamental questions: How can we create responsible technologies that meet our ethical standards? How can we ensure that algorithms make comprehensible decisions? And how can we create an environment in which society not only accepts AI, but also plays an active role in shaping it?

These and other questions are addressed by current research in the field of Trustworthy & Explainable AI - a topic that is not only important for experts in computer science, but for each and every one of us. Because in a world where AI makes decisions about creditworthiness, medical diagnoses and even legal judgments, we need to ensure that technology remains our ally and does not become an opaque actor.

During the trip to Paris, leading experts from France will exchange ideas with German AI researchers. One focus is the cooperation with Inria and with Kyutai, an open-science AI lab dedicated to research into large language models (LLMs).

Another highlight is the cooperation with the AXA Group as part of the joint laboratory TRAIL - Trustworthy And Responsible AI Lab. Research contributions are presented here, including the work of Lea Eileen Brauner. These sessions offer German AI researchers the opportunity to learn about current research topics and approaches and to exchange ideas about future-oriented AI applications.

Interview offer: How Lea Eileen Brauner is using AI to gain a better understanding of rare diseases

Lea Eileen Brauner is one of the 15 AI Grid members traveling to Paris. She is studying computer science with a focus on data science at Ostfalia in Wolfenbüttel near Braunschweig and is dedicated to machine learning across multiple description spaces. In conventional machine learning, a method normally "learns" from a single data basis, known as a description space, from which it draws all of its information. Lea Eileen Brauner is working on allowing methods to learn from several description spaces simultaneously in order to find globally similar groups.
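The basic idea can be illustrated with a toy sketch: the same patients are described in two separate feature spaces, and we look for groups whose members stay together in both. All data, feature names, and the simple nearest-centroid grouping below are illustrative assumptions, not Ms Brauner's actual method.

```python
# Toy sketch: find "globally similar" groups of patients, i.e. groups
# that agree across TWO description spaces of the same patients.
# All values and centroids are made up for illustration.

from math import dist  # Euclidean distance (Python >= 3.8)

# Two description spaces for the same five (hypothetical) patients.
symptoms = {"p1": (1.0, 0.2), "p2": (0.9, 0.3), "p3": (0.1, 0.9),
            "p4": (0.2, 1.0), "p5": (0.8, 0.1)}
labs     = {"p1": (5.0, 1.1), "p2": (4.8, 1.0), "p3": (1.2, 4.9),
            "p4": (1.0, 5.1), "p5": (4.9, 0.9)}

def assign(points, centroids):
    """Assign each patient to the index of the nearest centroid."""
    return {pid: min(range(len(centroids)),
                     key=lambda c: dist(vec, centroids[c]))
            for pid, vec in points.items()}

# Hand-picked centroids per space (a real method would learn these).
groups_a = assign(symptoms, [(0.9, 0.2), (0.15, 0.95)])
groups_b = assign(labs,     [(4.9, 1.0), (1.1, 5.0)])

# Globally similar patients: those grouped together in every space.
consensus = {}
for pid in symptoms:
    consensus.setdefault((groups_a[pid], groups_b[pid]), []).append(pid)

print(consensus)  # {(0, 0): ['p1', 'p2', 'p5'], (1, 1): ['p3', 'p4']}
```

Here the grouping from each space confirms the other: patients p1, p2, p5 and p3, p4 form consistent groups in both spaces, which is exactly the kind of cross-space agreement that makes a grouping more trustworthy than one derived from a single data basis.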

With her research, she hopes to gain a better understanding of rare diseases and enable faster diagnoses. Because medical data is highly sensitive, and because the aim is not to replace diagnoses with AI but to deepen the understanding of these diseases with its help, the traceability of the methods is of great importance.

Possible discussion topics with Lea Eileen Brauner:

  • Why is it necessary to improve the current diagnosis process for rare diseases and where are the deficits?
  • How could the identification of similar groups of patients using AI across different description spaces contribute to this?
  • To what extent should the system you have developed support the diagnostic process? Should it generate diagnoses itself?

We would be delighted if you would be interested in talking to Lea Eileen Brauner or other AI Grid scientists.

About AI Grid:

AI Grid is an initiative that connects young AI talents with established AI experts. The project is funded by the Federal Ministry of Education and Research and is part of the German AI Action Plan to promote innovation and research in the field of artificial intelligence.

AI Grid. We promote AI talents.

Technical terms:

Explainable AI (XAI): An approach in artificial intelligence that aims to present the processes and decisions of AI models in a clear and comprehensible way. The aim is to transform complex AI systems, which are often perceived as "black boxes", especially those based on deep neural networks, into comprehensible processes.
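One classic route to explainability is to use an inherently interpretable model, where each feature's contribution to a prediction can be read off directly. The sketch below illustrates this for a hypothetical linear credit-scoring model; the weights, features, and applicant data are invented for illustration.

```python
# Minimal XAI sketch: for a linear model, a feature's contribution to
# a prediction is simply weight * value, so every score comes with a
# built-in explanation. Weights and features are illustrative only.

weights = {"age": 0.02, "income": 0.5, "open_loans": -0.3}
bias = 0.1

def predict_with_explanation(applicant):
    """Return the score plus a per-feature breakdown of how it arose."""
    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = bias + sum(contributions.values())
    return score, contributions

score, why = predict_with_explanation(
    {"age": 35, "income": 3.2, "open_loans": 2})
print(round(score, 2))  # overall score
print(why)              # which feature pushed the score up or down
```

Instead of only outputting a score, the model reports that, for example, the two open loans lowered the score by 0.6, which is exactly the kind of comprehensible decision the glossary entry describes. For genuine black-box models, post-hoc techniques such as feature-attribution methods pursue the same goal.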

Trustworthy AI: A comprehensive framework in AI that integrates legal compliance, ethical alignment and technical reliability. Trustworthy AI addresses seven core requirements: human control, technical security, privacy, transparency, diversity and fairness, societal wellbeing, and accountability.
