The AIDA committee will discuss the impact of bias on the development of trustworthy AI, and ways to reduce it, in a public hearing on Tuesday.
When: Tuesday, 30 November 2021, 16:45–18:45
Where: Room József ANTALL 2Q2 and videoconference
You can watch the webstreaming of the debate here.
The hearing will comprise two panel discussions with experts from academia, civil society and industry, including computer scientist and AI specialist Timnit Gebru and EU Agency for Fundamental Rights Director Michael O'Flaherty.
The presentations will be followed by a Q&A with AIDA Members. The first panel will focus on the impact of bias on the development of trustworthy AI. The second panel will explore algorithmic accountability, data governance, and how to reduce bias in AI systems.
More information on the event, as well as the programme and meeting documents, is available on the hearing webpage.
FOLLOW US ON TWITTER! Highlights, all press releases, important dates, new documents and much more are published on @EP_Artifintel
A study prepared by the European Parliament's research service for the Panel for the Future of Science and Technology (STOA) highlights how AI can be susceptible to bias. Systematic bias may arise from the data used to train systems or from the values held by system developers and users. It most frequently occurs when machine learning applications are trained on data that reflect only certain demographic groups or that mirror societal biases. Several cases have drawn attention for unintentionally promoting social bias, which AI systems have then reproduced or automatically reinforced.