
Workshop: Exploring Gender Bias in Word Embedding

Event Information


Location

Maschinenraum

Zionskirchstraße 73a

10119 Berlin

Germany


Event description
In this 90-minute workshop, we will explore bias in word embeddings - a widespread building block of many machine learning models.

About this Event

There is a fundamental gap between ethics, which lies in the realm of human values, and artificial intelligence, which is, broadly speaking, a mathematical model. Nowadays, many intelligent systems are deployed daily, and data scientists make decisions in the mathematical domain that may have ethical and social implications.

How can practitioners embed ethics in an intelligent system? How can they audit the ethics of AI? What are the limitations of such methods?

In this 90-minute workshop, we will explore bias in word embeddings - a widespread building block of many machine learning models that work with natural language. Word embeddings have an easy-to-explain representation that allows an intuitive understanding of this building block and its potential biases, even without a technical background.
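To give a flavor of that easy-to-explain representation: a word embedding maps each word to a vector, and similar words get similar vectors. The snippet below is a minimal sketch with made-up 3-dimensional vectors (real embeddings have hundreds of dimensions and are learned from text); it shows the classic "king - man + woman ≈ queen" analogy that makes embeddings so intuitive.

```python
import numpy as np

# Toy "embeddings" - these vectors are invented purely for illustration;
# real word embeddings are learned from large text corpora.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The classic analogy: king - man + woman lands nearest to queen.
target = embeddings["king"] - embeddings["man"] + embeddings["woman"]
nearest = max(embeddings, key=lambda w: cosine(embeddings[w], target))
print(nearest)  # → queen
```

The same arithmetic that recovers analogies is what exposes bias: if "doctor - man + woman" lands near "nurse", the embedding has absorbed a gender stereotype from its training data.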

We will use an open-source toolkit called [Responsibly](https://docs.responsibly.ai/) to explore, visualize, measure, and remove bias in a word embedding - particularly gender bias.
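The core idea behind measuring and removing gender bias (following Bolukbasi et al., whose approach Responsibly builds on) can be sketched in a few lines. This is a simplified illustration with made-up vectors, not Responsibly's actual API: identify a "gender direction" (e.g. he - she), measure a word's bias as its projection onto that direction, and remove it by subtracting that component.

```python
import numpy as np

# Invented toy vectors for illustration only.
embeddings = {
    "he":       np.array([ 0.8, 0.2, 0.1]),
    "she":      np.array([-0.8, 0.2, 0.1]),
    "nurse":    np.array([-0.5, 0.6, 0.3]),
    "engineer": np.array([ 0.5, 0.6, 0.3]),
}

# Gender direction: the difference between a male- and a female-associated
# word, normalized to unit length.
gender_dir = embeddings["he"] - embeddings["she"]
gender_dir = gender_dir / np.linalg.norm(gender_dir)

def bias(word):
    """Projection of the word vector onto the gender direction."""
    return float(embeddings[word] @ gender_dir)

print(bias("nurse"), bias("engineer"))  # -0.5 0.5 with these toy vectors

def debias(vec):
    """Remove the gender component (the hard-debiasing idea in one line)."""
    return vec - (vec @ gender_dir) * gender_dir

print(float(debias(embeddings["nurse"]) @ gender_dir))  # → 0.0
```

One of the workshop's discussion points follows directly from this sketch: the projection is zeroed out, but is the stereotype really gone, or merely hidden from this particular measurement?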

Word embeddings will serve as a case study of bias in machine learning. Moreover, the exploration process will naturally raise practical, methodological, and philosophical questions about the ethics of AI and the limitations of technical measurement and mitigation approaches. On top of that, deploying the same model in different contexts might affect our ethical judgment of it. All of this will be discussed in the workshop.

The workshop is hands-on and interactive. Participants will run pre-written code alongside the instructor, using the same tools that data scientists use. Nevertheless, the workshop is designed to be adaptive to a diverse audience: from people with no background in machine learning or programming to data science practitioners. Participants should bring their own laptops, but no setup or installation is required.

----------------------------------------------------------------

Shlomi is a data scientist and educator. He works on developing knowledge, practices, and tools to audit and mitigate the biases of real-world AI systems. He is the developer of Responsibly, an open-source toolkit that brings techniques from AI bias and fairness research to practitioners. Last summer, Shlomi worked at the Center for Human-Compatible AI at UC Berkeley on increasing the interpretability of artificial neural networks. Besides that, Shlomi teaches various computing classes on Python programming and machine learning. He is a co-founder of the Israeli Cyber Education Center, where he led the design of national computing programs for kids and teenagers. He managed the development of the cybersecurity Bagrut (the Israeli matriculation exam, the equivalent of the Abitur) and co-authored a computer networking textbook, written with a tutorial approach, in Hebrew. Before that, Shlomi worked as an algorithmic researcher and a research team leader in cybersecurity.
