Unsurprisingly, the new wave of artificial intelligence known as generative AI has demonstrated that cutting-edge algorithms are not immune to gender bias. Whether it is content hypersexualizing women, replicating stereotypes or reinforcing gender-based discrimination, these supposedly revolutionary tools clearly have their limitations and flaws.
According to Estelle Pannatier, Policy and Advocacy Officer at AlgorithmWatch CH, some legal remedies already exist to counter these problems. But there is still a lack of transparency around how AI systems are trained and used, which makes it harder to expose the discrimination these technologies can cause.
Le Temps: What does gender bias in AI mean?
Estelle Pannatier: The term “algorithmic bias” is generally used to describe systematic errors in a computer system that create unfair outcomes by favoring one category over another in an unjustified manner. An example of gender bias along those lines would be an algorithm that assigns different risk scores based on a person’s gender in a context where that difference is not justified. The notion of bias is nevertheless somewhat reductive. It is usually applied only to the quality of a system’s data, with the underlying idea that biased data is what leads AI to make distinctions based on gender, for example. In reality, gender inequalities stemming from the use of AI have multiple causes.
So gender discrimination is not purely the result of biases present in the databases used by AI?
Data quality obviously plays a role (when the data was collected, how it was processed, and so on), but many human choices go into that work. Beyond the data, other factors include the models these algorithms rely on, i.e. the parameters that are selected, and how the systems are ultimately used. Even a system deemed technologically “perfect” can have discriminatory consequences if it is misused.
Specifically, how do these biases impact how AI works?
Take the example of a machine-learning algorithm designed for a company’s hiring process. If women have historically occupied fewer positions of responsibility than men at this firm, and the algorithm is trained on this data without correction, it will replicate that imbalance. As for the models themselves, consider an algorithm that sorts CVs: humans have to set the exclusion criteria, and those criteria can themselves be biased.
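To make that mechanism concrete, here is a minimal, purely illustrative sketch (not drawn from the interview): a toy model is trained on synthetic “historical” hiring data in which equally skilled women were hired less often, and, with no correction applied, it scores an identical candidate lower based on gender alone. All feature names and numbers are hypothetical.

```python
# Illustrative sketch only: synthetic data, hypothetical features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic historical records: gender (0 = man, 1 = woman) and a skill score.
gender = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)

# Historical hiring decisions: equally skilled women were hired less often.
# The past imbalance is encoded directly into the training labels.
p_hire = 1 / (1 + np.exp(-(skill - 1.0 * gender)))
hired = rng.random(n) < p_hire

# Train without any correction, with gender left in as a feature.
X = np.column_stack([gender, skill])
model = LogisticRegression().fit(X, hired)

# The model scores two identical candidates differently by gender alone.
candidates = np.array([[0, 1.0], [1, 1.0]])  # same skill, different gender
print(model.predict_proba(candidates)[:, 1])  # the woman gets a lower score
```

Note that simply dropping the gender column would not be a complete fix, since other features can act as proxies for it; this is one reason human choices about models and criteria matter beyond data quality.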
What effect can these biases have on women?
The consequences can be significant for those affected. They can lead to unequal access to employment. A study we conducted showed that job ads on Facebook were displayed to users based on their gender: an ad for truck driving jobs was shown mostly to men, while one for childcare services was shown mostly to women. There was also a well-publicized example in Austria, where the employment service used an algorithm to assess unemployed people’s chances of finding a job. The algorithm deducted points from women who had dependents but not from men in the same situation, which could unfairly deprive women of access to certain social benefits.
Does the notorious lack of women in tech also impact how AI is developed?
The people who design these technologies inevitably infuse them with their own ideas. There is a tendency to underestimate the human component of these tools. Having more diversity at the design stage of an artificial intelligence can only be positive, whatever form of discrimination is at issue. But processes such as impact assessments on fundamental rights must also be put in place to ensure that these systems do not lead to discrimination.
A year ago, we were just starting to talk about generative AI. When it comes to technology, legislation often seems to be lagging. Is the development of these tools outpacing the ability to regulate them effectively?
The field is evolving very quickly, but we must first remember that we are not in a lawless space: existing laws and standards also apply to AI. Various initiatives are under way to fill regulatory gaps, such as the European Union’s AI Act or the AI Convention currently being negotiated within the Council of Europe. In Switzerland, the Federal Council is assessing, through the end of 2024, where AI regulation is needed.
But how can we regulate these technologies to avoid sexist tendencies if they’re constantly evolving?
First and foremost, transparency must be ensured for those affected and for society in general, especially when these systems are used to make decisions about people. How these systems are designed and used should also be subject to impact assessments that identify and mitigate risks to fundamental rights. Lastly, protection against discrimination should be strengthened to cover algorithmic discrimination, in particular by reinforcing the means of recourse available to those affected.
Beyond discrimination, we have also seen these so-called “generative” tools used to produce pornographic deepfakes or content hypersexualizing women.
In some of these situations, existing legal remedies may apply, including provisions covering infringements of a person’s honor. But important work remains to be done on how the legal framework is enforced: it takes a lot of time and energy to get content removed. We also need to clarify who is responsible for what. For example, who can be held liable when illegal content is generated?
What are the main challenges today in preventing the possible abuses linked to AI?
It is really difficult to identify discrimination linked to the use of algorithmic systems, both for those affected and for those seeking to denounce it, because doing so often requires being able to “get inside the machine.” But the stakes are high, especially when it comes to algorithms that are used in decision-making and can have a major impact on a person’s life, for instance in the context of migration or criminal prosecution.