Embedding Gender Equality in the EU's Digital Future: From AI Bias to Actionable Policy Solutions
Are we building gender-neutral AI?
Gender equality and AI governance don't always go hand in hand, but closing that gap was exactly what the Connecting Women in Digital webinar set out to explore on March 12th, 2026.
Our speaker Weijie Huang, a researcher at the Inclusive AI Lab at Utrecht University, works at the intersection of feminist scholarship and digital policy. Her research spans gender bias in AI systems, platform governance, AI safety, and global perspectives on technology and representation. Her presentation made a case for embedding gender equality across the entire AI lifecycle.
Gender bias in AI: not a glitch
Huang opened by framing gender bias in AI not as a bug waiting to be patched but as a structural feature, built layer by layer across the entire AI system: from the data used to train models, to the design choices made by developers, to the institutional logics that govern deployment.
“Gender bias in AI is not a technical problem; it’s a governance issue. And if the EU AI Act is to protect fundamental rights in practice, gender equality must be embedded across the AI lifecycle, not treated as an afterthought.”
This carries particular urgency in the current moment. With the AI Act set to fully apply in August 2026, Europe is moving from policy design to implementation. High-risk AI systems are already being used in employment screening, healthcare, migration assessment, and education access, meaning these decisions are now mediated by algorithmic infrastructure. The governance stakes, Huang argued, are therefore threefold:
- fundamental rights exposure if bias is embedded upstream and difficult to detect;
- public trust erosion if citizens perceive that automated systems reproduce structural inequality;
- regulatory fragmentation risk if Member States interpret gender safeguards differently across high-risk implementations.
The gender data gap: when inequality is automated
Huang drew on a gender data report she co-authored to trace the anatomy of the problem. She and her colleagues posed the question: if data is the foundation of AI systems, what happens when that data does not adequately represent women?
The answer, she showed, unfolds across several interconnected layers. Historically, much of the world's data has been built around a white male default. In medical research, for example, diagnostic models trained primarily on male-centred data have been shown to produce higher misdiagnosis rates for women, particularly women from minority backgrounds. Huang was careful to note that this is inherited bias, not intentional discrimination.
The problem compounds with intersectionality: large-scale datasets used to train facial recognition systems have disproportionately included light-skinned male faces, leaving women of colour statistically marginalised and therefore systematically less accurately identified.
A third dimension concerns what Huang called structural absence. Women are not always misrepresented in the data, but are sometimes penalised because their life trajectories do not match the assumed norm. An AI recruitment tool trained on male careers may interpret caregiving gaps as lower productivity, automating an inequality it was never designed to question.
“When women are missing, misrepresented, or misunderstood in the data, inequality becomes automated. And once it becomes automated, it becomes harder to see and harder to contest.”
Finally, Huang pointed to a policy-level blind spot: according to UN Women, 80% of gender-related Sustainable Development Goal indicators globally lack complete data. Unpaid care work, digital access gaps, and gender-based violence often simply fall outside the policy radar. When AI systems are built on those incomplete datasets, they inherit those blind spots.
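To make concrete how disaggregation turns this kind of invisible bias into something measurable, here is a minimal sketch, not taken from Huang's talk, of gender-disaggregated error analysis on a classifier's evaluation set. The column names, subgroup labels, and numbers are purely illustrative assumptions.

```python
# Minimal sketch: gender-disaggregated error analysis of a classifier's
# predictions. The dataframe columns ("gender", "skin_tone", "y_true",
# "y_pred") are illustrative, not taken from any specific dataset.
import pandas as pd

def error_rates_by_group(df: pd.DataFrame, group_cols: list[str]) -> pd.DataFrame:
    """Compute the overall error rate and false-negative rate per subgroup."""
    def summarise(g: pd.DataFrame) -> pd.Series:
        error_rate = (g["y_true"] != g["y_pred"]).mean()
        positives = g[g["y_true"] == 1]
        fnr = (positives["y_pred"] == 0).mean() if len(positives) else float("nan")
        return pd.Series({"n": len(g), "error_rate": error_rate, "false_negative_rate": fnr})

    return df.groupby(group_cols).apply(summarise).reset_index()

# Toy evaluation set (numbers are for illustration only).
eval_df = pd.DataFrame({
    "gender":    ["f", "f", "f", "m", "m", "m", "f", "m"],
    "skin_tone": ["dark", "dark", "light", "light", "dark", "light", "light", "dark"],
    "y_true":    [1, 1, 0, 1, 0, 1, 1, 0],
    "y_pred":    [0, 1, 0, 1, 0, 1, 0, 0],
})

# Disaggregating by gender alone can hide the intersectional picture,
# so report both single-axis and intersectional breakdowns.
print(error_rates_by_group(eval_df, ["gender"]))
print(error_rates_by_group(eval_df, ["gender", "skin_tone"]))
```

Reporting the intersectional breakdown alongside the single-axis one matters: a system can look roughly balanced by gender overall while failing badly for women in a particular subgroup.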
Deepfakes and the gendered architecture of harm
Huang then turned to what she described as “one of the most underacknowledged governance failures” in the AI debate: the disproportionate impact of deepfake technology on women. Her team at the Inclusive AI Lab conducted research in collaboration with the Google Safety and Security team to examine how deepfake-related harms are experienced by women and girls in the Global South, from a lived experience and policy perspective.
Much of the regulatory debate around deepfakes focuses on political disinformation, election interference, and geopolitical risk. These are legitimate concerns, as Huang pointed out, but they are not where the majority of deepfake harm is actually happening. The largest category of harm today is gendered violence, particularly non-consensual synthetic intimate imagery. This gap between statistical reality and regulatory focus is itself a governance failure.
Three stages explain why women are particularly vulnerable: production, amplification, and commodification. At the production stage, the gender imbalance in the AI sector means that safety features protecting women from digital harm are not a priority. At the amplification stage, algorithms accelerate the spread of harmful content for profit, with liability remaining largely unclear. At the commodification stage, women’s digital identities, their faces, their bodies, are treated as data assets in a market that has found a way to extract value from women’s images.
“Gendered harm is not accidental. It is embedded in design choices, platform incentives, and economic models. If governance intervenes only at the content level, it’s already too late — lifecycle governance must intervene upstream.”
Global lessons for European regulators
A comparative review of regulatory approaches across the EU, United States, China, and parts of the Global South revealed that existing frameworks tend to prioritise quantifiable or geopolitically legible harms, leaving complex social harms, including technology-facilitated gender-based violence, under-addressed.
But Huang’s global scan also revealed powerful examples of communities building their own solutions:
- In Senegal, the iamtheCODE foundation, founded by Mariéme Jamme, trains girls and young women in coding and digital skills with the explicit goal of making them producers of data and designers of systems.
- In Indonesia, digital financial platforms have adopted alternative credit indicators drawing on community participation and everyday behaviour patterns.
- In Pakistan, the Digital Rights Foundation’s cyber harassment helpline has documented over 20,000 cases, demonstrating that making gendered harm legible to regulators is both possible and necessary.
AI systems trained in the EU and US carry embedded assumptions that do not travel neutrally across global contexts. If the EU is serious about embedding gender equality in AI governance, it must reckon with how its regulation interacts with data produced and used far beyond its borders.
From analysis to action: the Gender AI Safety Framework
Huang then introduced the Gender AI Safety Framework developed at the Inclusive AI Lab: a practical governance roadmap organised across three layers.
- The input layer concerns the foundations: strong civil society networks, feminist researchers, and governance structures that protect collective rights rather than only individual ones.
- The process layer introduces testing policies and models with real communities, especially those most affected, alongside design justice principles that ask who builds AI, who benefits, and who bears the risk.
- The purpose layer centres empowerment: providing tools, digital literacies, and survivor-centred reporting systems so that communities gain agency over the AI systems shaping their lives.

These layers are designed as a continuous, iterative cycle, not a linear checklist.
Practically, Huang proposed two mechanisms within existing EU governance structures. The first is a gender equality assessment embedded at the pre-deployment stage. The second is post-market monitoring that includes gender-disaggregated harm reporting, complemented by stress-testing of systems before deployment to anticipate differential impact patterns. Huang was direct about what this requires:
“If the Women in Digital Agenda is to move beyond declaration, these mechanisms cannot remain voluntary. They need to be operationalised within the existing conformity assessment and monitoring structures under the AI Act.”
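What such a pre-deployment check could look like in practice is sketched below. This is a hypothetical illustration, not a mechanism defined in the AI Act or in Huang's framework: it compares selection rates across gender groups against an arbitrary tolerance and blocks deployment if the gap is too wide.

```python
# Hypothetical sketch of a pre-deployment "gender equality gate": compare
# selection rates across gender groups and fail the check if the disparity
# exceeds a chosen tolerance. The metric, threshold, and group labels are
# illustrative assumptions, not requirements taken from the AI Act.
from collections import defaultdict

def selection_rates(records: list[dict]) -> dict[str, float]:
    """Share of positive decisions ("selected") per gender group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["gender"]] += 1
        selected[r["gender"]] += int(r["selected"])
    return {g: selected[g] / totals[g] for g in totals}

def gender_equality_gate(records: list[dict], max_gap: float = 0.10) -> bool:
    """Return True only if the largest gap in selection rates stays within tolerance."""
    rates = selection_rates(records)
    gap = max(rates.values()) - min(rates.values())
    print(f"selection rates: {rates}, gap: {gap:.2f}")
    return gap <= max_gap

# Toy screening decisions gathered during a stress test before deployment.
decisions = [
    {"gender": "f", "selected": 1}, {"gender": "f", "selected": 0},
    {"gender": "f", "selected": 0}, {"gender": "m", "selected": 1},
    {"gender": "m", "selected": 1}, {"gender": "m", "selected": 0},
]

if not gender_equality_gate(decisions):
    print("Deployment blocked: differential impact exceeds tolerance.")
```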
Girls in STEAM as upstream governance intervention
Huang concluded with a reflection that connected the broader governance argument to the work of the Women in Digital Thematic Working Group 1: encouraging girls to pursue STEAM subjects is a form of upstream governance.
“When technology and knowledge stay concentrated in male hands, bias keeps getting built into the systems we rely on. By expanding who learns to create technology, we actually change how technology is made.”
The classroom as evidence lab (tracking who participates, who gains confidence, and who withdraws) is a particularly important instrument here. Evidence generated at this micro level feeds into change at the macro level, cultivating what Huang called future data stewards: young people equipped to use technology, to understand how it makes decisions, and to challenge it when those decisions go wrong.
The message Huang left the room with was that Europe doesn't need more regulation; it needs the regulation it already has to actually work for women. Gender-responsive conformity assessments, disaggregated harm reporting, and mechanisms that go beyond declaration to deliver measurable impact are the means to do that.
“We need to give women more agency, the ability to design it themselves, to know how to use it, not just be forced to use it.”


