With the UK AI market now worth over £72 billion (Forbes), it’s clear artificial intelligence is here to stay. Yet its growth comes with unease: 59% of Brits say they’re concerned about AI, with over a third worried about the ethical implications of its misuse. Teenagers are its fastest adopters, using it for study, work, and play, while tools like ChatGPT and Google Gemini have become household names.

AI has already become a part of daily life, with some real benefits for businesses. It’s also incredibly lucrative for those who make it. But for Black communities and others who are marginalised, is it causing more harm than good?

In an exclusive article for Mix, sociologist Ebony Owusu-Nepaul examines what AI’s rapid rise means for Black communities, the problems behind the interface, and steps we can take to drive equity in AI.

The Problem Behind the Interface

Beneath the sleek design and promises of efficiency, the people building and selling AI remain overwhelmingly white, male, and middle-class. History shows what happens when powerful technologies are shaped without diverse voices: bias gets baked in and inequalities deepen. We see it time and again through inaccurate media representation of minority groups that seeps into society and shapes how people are treated. Amplified stereotypes, purposeful mischaracterisations, and stories told on behalf of minorities are just a few examples.

According to Electro IQ, just 25% of UK tech employees are from minority ethnic backgrounds; only 5% of those identify as Black, and only one in five are women. This is despite research showing that more diverse companies are statistically more likely to outperform their peers financially.

For Black communities – already navigating systemic bias in opportunity, quality of life, and representation – the stakes couldn’t be higher. Amnesty International reports that Black people in the UK are twice as likely to be in insecure work as white people, and that the physical health of Black and Asian women in the UK is up to 20 years behind that of white women.

So what happens when a mostly white cohort drives the future of AI? How does that shape data bias? And what will it mean for Black communities across social, economic, and environmental well-being?

Not Neutral, Never Was

There are reasons to celebrate AI. We’re more efficient, we can find the “right” words instantly, and our emails look more polished than ever. But if the people creating these systems are consistently from majority groups, we’re in trouble.

Bias – even when unintended – produces systems that serve the needs of those who built them. As tech-inclusion author Sara Wachter-Boettcher warned in 2017, AI systems are “not neutral at all” and will be just as fallible as their creators. Worse, bias embedded at the start compounds over time.

“There’s no shortage of research showing that women and people of colour get worse treatment than their white male peers in the job market… AI-enabled hiring software may be a booming market, but I won’t be trusting it to level the playing field or eliminate the wage gap anytime soon. Because for all their seemingly scientific methods, algorithms aren’t neutral at all.” – Sara Wachter-Boettcher, 2017

As Dr Sheard notes, recruitment algorithms have offered fresh opportunities for hiring managers to discriminate against those already at risk in the labour market, including women, people over 50, and people for whom English is a second language. This is especially troubling when such biases are built into machine learning with the capacity to compound with time and more data.

And it stretches far beyond recruitment. It can mean…

  • Invasive surveillance in minority communities
  • False arrests from flawed facial recognition
  • Healthcare discrimination, where symptoms in Black patients are missed or dismissed

All under the banner of algorithmic “efficiency”.

Following the Data

We can’t rely on assumptions; we have to look at the facts. Where do Black communities sit in this AI boom? Will we have to fight the same battles for equity in tech – to be seen, heard, and accurately represented – all over again? The risk lies in data bias. If inaccuracies or prejudiced inputs are baked into AI, they don’t stay hidden – they’re replicated and legitimised. Vulnerable users, or those without access to balanced information, may take those outputs as fact.

This becomes even more concerning when major public bodies fail to interrogate AI’s ethics. Ofcom’s 2025 report, for example, sidestepped the social and ethical implications altogether. AI is presented as a new development that society will simply have to function around. There’s no serious investigation into the negative impacts on society, particularly for those already marginalised and disadvantaged.

Media Representation, Rewritten

When we talk about representation in media, we think of TV, film, and journalism: who gets to speak, who directs the narrative, and who’s in front of the camera. But AI is now another form of media production. It shapes our language, imagery, and decision-making in the background, every single day.

What was once discrimination through opportunity-hoarding in the mediascape – selling marketable stereotypes like the “Angry Black Woman” or the “docile Asian woman”, limiting chances for Black and other ethnic minorities to have a seat at the table – now shows up in a different font: AI. Investigations into text-to-image models reveal that depictions of Black people are disproportionately violent and hypersexualised: lighter-skinned women appear more often in sexualised nudity, while darker-skinned people are shown in gang violence and aggression.

Similar disparities appear on the platforms that decide who gets seen. Black and other ethnic-minority creators – often the trendsetters and movement makers – are treated unfairly by social-media algorithms: when they post serious content about oppression and social movements, they are frequently shadow-banned or have their accounts deleted automatically.

Katherine Miller writes that as you make language models bigger, overt racism decreases, but covert racism increases. What changes that trajectory is who builds the systems.

If AI is set to influence how we think, work, and live for decades, the question is urgent: Who’s writing the code, and who’s being written out of it?

Working For Equity in AI

Timnit Gebru

The former co-lead of Google’s ethical AI team, Gebru has been one of the loudest voices exposing data bias and its harm to Black and marginalised communities. After being forced out of Google – reportedly over a paper on the risks and biases of large language models – she founded research initiatives to reshape tech inclusively, including projects using satellite imagery to study South African townships and the lasting impacts of apartheid.

Joy Buolamwini

An MIT graduate and Harvard tutor, Buolamwini discovered bias firsthand when facial-recognition code failed to detect her face unless she wore a white mask. Her research proved that commercial systems misidentify dark-skinned women far more than lighter-skinned men. Her work forced major tech companies to confront and correct datasets, and she continues to campaign for algorithmic accountability.

“Unaltered data collection methods that rely on public figures inherited power shadows that led to overrepresentation of men and lighter-skinned individuals. To overcome power shadows, we must be aware of them. We must also be intentional in our approach to developing technology that relies on data. The status quo fell far too short. I would need to show new ways of constructing benchmark datasets and more in-depth approaches to analysing the performance of facial recognition technologies. By showing these limitations, could I push for a new normal?”

Buolamwini, 2023

Beyond Data: Environmental Racism

On top of informational discrimination, AI’s environmental impact poses another threat. Data centres are drawing on water-cooling systems at alarming rates to sustain daily use of models like ChatGPT. In fact, 32% of U.S. data centres are sited in areas of high or extremely high water stress, compared with only 30% in low-stress areas – concentrating cooling demand exactly where water is scarcest. And what about the people who live near these facilities and the power plants that feed them?

Take Boxtown, Memphis – a largely Black community now surrounded by power infrastructure built to serve tech giants’ AI data centres. These facilities consume enormous amounts of energy while polluting the air, with devastating health effects. Residents report unprecedented levels of respiratory illness, and doctors are seeing alarming rates of cancer and breathing disorders. Despite protests and complaints, government intervention has been absent.

This is not new; the Black community is no stranger to environmental racism. As Greenpeace UK highlighted in a 2022 article by Mya-Rose Craig, Black people in the UK are disproportionately likely to breathe illegal levels of pollution every single day.

What You Can Do

It’s easy to feel powerless in the face of billion-pound tech industries, but the truth is that AI isn’t some distant future; it’s shaped every day by choices, data, and demand. Here’s how you can make a difference:

1. Question the Interface

Don’t take AI outputs at face value. If you’re using ChatGPT, Midjourney, or similar tools, interrogate the assumptions behind them. Ask: Whose voices are missing? Who benefits? Who is harmed?

2. Support Inclusive Builders

Seek out and back projects led by Black technologists, women, and other underrepresented groups. Whether it’s through funding, visibility, or simply using their platforms, every click and share shifts demand.

3. Push Institutions for Accountability

Universities, employers, and governments are integrating AI fast – often without scrutiny. Here at Mix, one of our values is Candid but Kind: we ask tough questions and expect accountability. Challenge your workplace, school, or local council on how they’re auditing AI for bias and environmental impact. Transparency must be demanded, not assumed.

4. Centre Human Voices

Remember that AI is not neutral. Amplify real stories from marginalised communities, so they don’t get erased by datasets that misrepresent or silence them.

5. Connect Tech to Justice

Environmental racism, labour rights, and data bias are not separate issues. AI intersects with all of them. Get involved with campaigns linking climate justice, racial justice, and tech accountability.

6. Learn and Educate

Read the work of Timnit Gebru, Joy Buolamwini, and others. Share their findings. Build literacy around AI in your networks so fewer people are caught off-guard by biased systems.

What Next?

This isn’t a call to slow AI down. It’s a call to build it differently. Representation isn’t a “nice-to-have” in tech; it’s the only way to stop history repeating itself in code.

The question isn’t whether AI will shape the future – it already is. The real question is: how do we stop history from repeating itself, and who gets to decide what “progress” looks like?

Ebony Owusu-Nepaul

Ebony Owusu-Nepaul is a sociologist and creative strategist whose work centres on media reform, inclusive storytelling and training. Her undergraduate and postgraduate research focused on reclaiming and reframing harmful narratives about minority groups, particularly within media contexts. Ebony has supported organisations across sectors to advance inclusivity through both structural reform and ground-level dialogue. She has contributed to academic and public discourse by speaking on university panels addressing the Minority Ethnic Attainment Gap. In addition, Ebony has advised authors on responsible representation in literature. Holding an MPhil in Sociology from the University of Cambridge, Ebony is currently examining how emerging technologies are shaping cultural narratives and the implications for marginalised communities.

Further reading

Psychological Safety Is Not ‘Soft’ – It’s Foundational

What shapes the moments when people speak up – or stay silent? Psychological safety drives that split-second judgement about whether it feels safe or risky to contribute. Angela Wren, Head of DEI Solutions, explains why it underpins performance and innovation, and how leaders build it through everyday signals and responses.


A Bold Blueprint for an Antiracist Society: Why the World Should Watch Wales

Hayley Barnard looks at how Wales is setting a global benchmark for antiracism, and what other nations (and organisations) can learn from its honest, legally grounded and action-focused approach to building equitable, inclusive systems.


International Women’s Day: Give to Gain

International Women’s Day is a moment to rethink what progress really requires. Here, Chartered Psychologist & Head of DEI Solutions Angela Wren explores how gender equality depends on psychological safety, why systemic barriers persist, and what 2026’s ‘Give to Gain’ theme should mean in practice.


Case Study: Brewers Decorator Centres

Brewers Decorator Centres partnered with Mix to strengthen inclusive leadership across the business. Through tailored training for managers, inclusion became part of everyday leadership conversations – building confidence, improving effectiveness, and supporting Brewers’ mission to be a great place to work.


Tenure Bias – How Fresh Perspectives Keep Teams Future-Ready

Tenure brings knowledge, context and continuity – valuable assets in any organisation. But when length of service becomes the loudest voice in the room, it can quietly limit fresh thinking. In this quick guide, Mix CEO Hayley Barnard explains how tenure bias shows up at work, and how leaders can balance experience with curiosity.



© 2026 Mix Diversity Limited. Registered Company No 09280349