TIME TO READ: 3 MINUTES

Privacy concerns, data quality, model monopolies, and biased gatekeepers all pose significant challenges to achieving fair, ethical, and equitable AI systems.
Issues of fairness in AI
1. Fair use of data
AI is becoming embedded in every product. Post online and use free products? Then it is more than likely that your data is being used to train the models. Is profiteering from our data, often surrendered unwittingly, fair?
- ChatGPT, OpenAI’s AI chatbot with natural language processing (NLP), has been subject to privacy concerns since it launched in November 2022. It collected user data to train its models without being transparent about the practice or asking for explicit consent. Users were only given the option to opt out in April 2023.
- Apple announced its AI system, Apple Intelligence, in June 2024. It promises to uphold a “new standard for privacy in AI”, according to CEO Tim Cook. However, Apple has partnered with OpenAI, using ChatGPT to support writing tasks such as email composition. Apple claims it is taking privacy seriously and aims to run most Apple Intelligence features directly on the device. Functions that require more processing will be sent to the cloud, where the user’s data will be protected with additional security measures and a promise not to store it indefinitely. A step forward in gaining users’ trust, and a nudge for others to follow in treating consumers fairly.
2. The quality of the data
Often the problem is that complete data sets simply don’t exist to train the algorithmic systems.
- This issue in scientific and medical data is highlighted in Caroline Criado Perez’s book Invisible Women, which presents some stark statistics on gender bias. She reveals that in research on diseases such as cardiovascular disease and HIV, women are systematically excluded and underrepresented, with tragic and preventable consequences. In the UK, women are 50 per cent more likely than men to be misdiagnosed following a heart attack.
- Data needs to be “labelled” before it can be processed by the algorithms. This is a highly subjective process, often carried out by undervalued teams working under pressure. Cultural context is frequently missed, which can lead to inconsistencies and biases in the data set. This was highlighted by Dr Joy Buolamwini in her book Unmasking AI.
3. The AI Monopoly
The expense of training a powerful AI model excludes smaller businesses. It’s a choice between making a deal with companies like Amazon, Apple, Google, Microsoft, or Meta to get your product off the ground, or gathering massive investment to challenge them. Big Tech holds all the cards and all the data.
- The UK’s Competition and Markets Authority (CMA) warned in April 2024 that the same tech giants dominated the AI field, effectively stifling smaller firms. It found an “interconnected web” of partnerships involving the same dominant firms: Google, Apple, Microsoft, Meta, Amazon, and chip-maker Nvidia. It warned this situation was “ultimately harming businesses and consumers, for example through reduced choice, lower quality, and higher prices, as well as stunting the flow of potentially unprecedented innovation and wider economic benefits from AI”.
4. Biased Designers
Who are the architects of AI?
Overwhelmingly white and male.
- In 2023, the World Economic Forum reported that women accounted for just 29 per cent of all science, technology, engineering and mathematics (STEM) workers. They tend to be employed in entry-level jobs and are less likely to hold positions of leadership. The Global Gender Gap Report 2023 revealed that women make up only 30 per cent of those currently working in AI.
- Data scientist Cathy O’Neil, author of ‘Weapons of Math Destruction’, states that the problem is that algorithmic models are most often invisible to all but their designers: the mathematicians and computer scientists. Without scrutiny from others, they run the risk of complacency. If the models do encode human prejudice, this would not be highlighted until it was too late and already incorporated into the systems, with the real consequence of negatively affecting lives.
If there is a lack of diversity and transparency at the fundamental creation level of AI, it will be reflected in the models. An algorithm trained on society’s biases will reproduce them.
How do you solve a problem like Fairness?
- People working in AI need to be representative of all diverse groups in society and trained to recognise biases.
- Active citizen oversight is needed to monitor the development of AI technologies, so that groups of people aren’t discriminated against, either directly or indirectly, and so that the processes are kept transparent.
- Ethical concerns should be taken into account well before the models are released and not as an afterthought.
- Companies should be held accountable for how they use our data, and citizens should always have the ability to query and challenge the data companies hold about them.
- Big Tech should not have a monopoly on new and potentially world-changing technologies.
- Stakeholders, meaning both the groups using and implementing AI and those who will be affected by it, should have a say in its future.
We need to move away from the idea that the computer is always right, and from blindly trusting the “black box”, into an era of renewed, confident citizen power and public scrutiny.

©Jennifer Martin 2025
