How do you solve a problem like Fairness?

TIME TO READ: 3 MINUTES

Generated with Adobe Firefly, Prompt: friendly robot running through field of flowers with arms horizontal with mountain in the background set in Switzerland, Golden hour, Landscape photography, image reference Maria from Sound of Music.

Privacy concerns, data quality, model monopolies, and biased gatekeepers all pose significant challenges to achieving fair, ethical and equitable AI systems.

Issues of fairness in AI

1. Fair use of data

AI is becoming embedded into every product. Post online and use free products? Then it is more than likely that your data is being used to train the models. Is profiteering from our data, often surrendered unwittingly, fair?

  • ChatGPT, OpenAI’s AI chatbot with natural language processing (NLP), has been subject to privacy concerns since it launched in November 2022. It collected user data to train its models without being transparent or asking for explicit consent. Users could only opt out from April 2023.
  • Apple announced its AI system Apple Intelligence in June this year. It promises to uphold a “new standard for privacy in AI”, according to its CEO Tim Cook. However, Apple has partnered with OpenAI, using ChatGPT to assist with writing tasks such as email composition. Apple claims it is taking privacy seriously and seeks to run most Apple Intelligence features directly on devices. Functions that require more processing will be outsourced to the cloud, with the user’s data protected by additional security measures and a promise not to store it indefinitely. A step forward in gaining users’ trust, and a nudge for others to follow in treating consumers fairly.

2. The quality of the data

Often the problem is that complete data sets just don’t exist to train the algorithmic systems.

  • This issue in scientific and medical data is highlighted in Caroline Criado Perez’s book Invisible Women, a fascinating read with some stark statistics on gender biases. She reveals that in research on diseases such as cardiovascular disease and HIV, women are systematically excluded and underrepresented, creating tragic consequences and preventable outcomes. In the UK, women are 50% more likely than men to be misdiagnosed following a heart attack.
  • Data needs to be “labelled” to be processed by the algorithms. This is a very subjective process, often carried out by undervalued teams working under pressure. Cultural contexts are frequently missed, which can lead to inconsistencies and biases in the data set. This was highlighted by Dr Joy Buolamwini in her book Unmasking AI.

3. The AI Monopoly

The expense of training a powerful AI model excludes smaller businesses. It’s a choice between making a deal with companies like Amazon, Apple, Google, Microsoft, or Meta to get your product off the ground, or gathering massive investment to challenge them. Big tech has all the cards and all the data.

  • The Competition and Markets Authority (CMA) in the UK warned in April 2024 that the same tech giants dominated the AI field, effectively stifling smaller firms. It found an “interconnected web” of partnerships involving the same dominant firms: Google, Apple, Microsoft, Meta, Amazon, and chip-maker Nvidia. It warned this situation was “ultimately harming businesses and consumers, for example through reduced choice, lower quality, and higher prices, as well as stunting the flow of potentially unprecedented innovation and wider economic benefits from AI”.

4. Biased Designers

Who are the architects of AI?

Overwhelmingly white and male.

  • In 2023, the World Economic Forum reported that women accounted for just 29 per cent of all science, technology, engineering and math (STEM) workers. They tend to be employed in entry-level jobs and are less likely to hold positions of leadership. The Global Gender Gap Report of 2023 revealed that women make up only 30 per cent of those currently working in AI.
  • Data scientist Cathy O’Neil, author of Weapons of Math Destruction, states that the problem is that algorithmic models are most often invisible to all but their designers, the mathematicians and computer scientists. Without scrutiny from others, they run the danger of complacency. If the models do happen to encode human prejudice, this would not be highlighted until it was too late and already incorporated into the systems, with the real consequence of negatively affecting lives.

If there is a lack of diversity and transparency at the fundamental creation level of AI, then this will be reflected in the models. An algorithm that reflects social and societal biases is always going to produce biased results.

How do you solve a problem like Fairness?

  • People working in AI need to be representative of all diverse groups in society and trained to recognise biases. 
  • Active citizen oversight is needed to monitor the development of AI technologies so that groups of people aren’t discriminated against either directly or indirectly and the processes are kept transparent. 
  • Ethical concerns should be taken into account well before the models are released and not as an afterthought. 
  • Companies should be required to be accountable for how they use our data, and citizens should always have the ability to query and challenge the data they hold.
  • Big Tech should not have a monopoly on new and potentially world-changing technologies.
  • Stakeholders, the groups using and implementing AI, and those who will be affected by it, should have a say in its future.

We need to move away from the idea that the computer is always right, and from blindly trusting the “black box”, into an era of renewed, confident citizen power and public scrutiny.

Generated with Luma Dream Machine

©Jennifer Martin 2025

Generative dining: You are what you eat

TIME TO READ: 2 MINUTES

Feed in junk data and you’ll get biased, junk information



©Jennifer Martin 2024

Opening the Magic box

TIME TO READ: 3 MINUTES

Wizard looking at a computer server - the magic box

Open the magic box of Generative AI so that people can trust the technology

Continuing my AI journey, I recently completed a very enlightening course from IBM on Generative AI: Impact, Consideration, and Ethical Issues.

One of the main themes was the importance of transparency around the new and rapidly developing advances that seem to appear every day in the Generative AI technology space. There has been a rush to claim to be first to the table with this exciting technology. It can seem overwhelming, and frankly scary, to people who don’t fully understand it.

By opening the “Magic Box” and explaining, in non-technical terms, the benefits, risks and limits, people will have more confidence that this is a technology to be embraced and not feared.

The responsibility for being more transparent lies first with the creators of the technologies but also with the businesses using generative AI. There is a need to be open about how people’s data is being used and for what purpose. People need to feel reassured that their personal data is secure and that copyright and privacy laws are not being breached.

In business, it is important to have a governance policy on the use of Generative AI that covers these issues. Currently, there is a lack of AI regulation, but that could and most likely will have to change in the future, so it pays to put guidance in place. Identity fraud, misinformation, copyright infringement and data privacy violations are all issues that need to be considered, and a good governance policy will help mitigate them.

Biases can occur when the data generative models are trained on has not been checked and vetted. Ideally, data should be “clean”, without discrimination and biases. However, the majority of LLMs (large language models) have been trained on the content of the internet, which we all know to contain the best and the worst of human ideas.

The constant checking of outputs by humans should be the norm, with methods in place so users can flag issues.
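As a minimal sketch of what such a flagging method might look like (purely illustrative: the function and file names below are my own assumptions, not taken from any particular product), a small Python helper could record flagged outputs for later human review:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical log file for flagged outputs; a real system would more likely
# use a database or a review/ticket queue.
FLAG_LOG = Path("flagged_outputs.jsonl")

def flag_output(prompt: str, output: str, reason: str) -> None:
    """Record a user-flagged model output so a human reviewer can check it later."""
    record = {
        "flagged_at": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "output": output,
        "reason": reason,  # e.g. "possible bias", "hallucination", "privacy concern"
    }
    with FLAG_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: a user spots a biased answer and flags it for review.
flag_output(
    prompt="Describe a typical software engineer.",
    output="(model response the user found problematic)",
    reason="possible gender bias",
)
```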

Trust will only be achieved if the “Magic Box” of Generative AI models is opened up, so that they become reliable and transparent tools for their users.


What responsible use of Generative AI centres around:

🔍 Transparency
Openness as to how it all works. People need non-technical explanations to trust the “Magic Box” and understand its potential benefits, limits, and risks.

⚖ Accountability
Holding individuals and organisations responsible for ethical and legal consequences relating to copyright and privacy laws.

⛑ Safety guardrails
Constantly checking for biases in data, and being aware that hallucinations can occur. Building in error handling where users can report issues and action can be taken.

⛔ Privacy
Taking steps to protect personal data, complying with data protection laws and ensuring generated content does not disclose confidential information.

👩‍🎓 Training
Upskilling your workforce. Jobs are going to change and new jobs will be created. By investing in your staff, you give them the best opportunity to embrace this technology and increase productivity.

#EthicalAI


©Jennifer Martin 2024

AI CHIT-Chat

TIME TO READ: 2 MINUTES

Toy Robot. Photo by Rock'n Roll Monkey on Unsplash

I recently completed a course on Promptly, which taught me how to build a Generative AI App. I was fascinated by its promise of simplifying the development process with no coding required. Promptly’s example was based on Harry Potter – and an entertaining chatbot that speaks the language of Hogwarts. So, I decided to take a shot at creating a Jenn-AI chatbot, which provides wisdom on all things art and design.

It was a fairly simple process, made easier by following the instructional videos by Promptly Co-founder and CEO Priyank Chodisetti. Promptly’s interface is good and intuitive, and it gives a non-coder an insight into what’s going on in the background without overwhelming them with technical jargon.

I picked from a template (a wide selection is available, from generative AI apps and agents to chatbots) and defined my inputs to create the form the users will fill out. I customised the form by selecting an image for my chatbot and even the colour of the chat window. Then I chose the processor I wanted to use, ChatGPT, and defined the input prompt: You are Jenn-AI, a digital designer with a print, digital, UI, UX, video and animation background. Your favourite painter is Frida Kahlo. Your favourite designer is Vivienne Westwood. You like to paint and draw. You love Anime, especially the work of Studio Ghibli. You answer all questions by comparing artists and designers and their work. You answer positively.
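For the curious, here is a minimal sketch of roughly what a no-code tool like this does behind the scenes: the “input prompt” is sent to the model as a system message ahead of every user question. This is purely illustrative (I’m assuming the OpenAI Python SDK and a gpt-4o-mini model here; it is not how Promptly itself is implemented):

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The "input prompt" from the no-code tool becomes the system message
# that shapes every reply the chatbot gives.
JENN_AI_PROMPT = (
    "You are Jenn-AI, a digital designer with a print, digital, UI, UX, video and "
    "animation background. Your favourite painter is Frida Kahlo. Your favourite "
    "designer is Vivienne Westwood. You like to paint and draw. You love Anime, "
    "especially the work of Studio Ghibli. You answer all questions by comparing "
    "artists and designers and their work. You answer positively."
)

def ask_jenn_ai(question: str) -> str:
    """Send one user question to the model with the Jenn-AI persona applied."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model would do
        messages=[
            {"role": "system", "content": JENN_AI_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_jenn_ai("What makes a good logo?"))
```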

After choosing the welcome message and three suggested messages, I was ready to preview. You can see the result here.

Thanks to Promptly’s AI wizardry, I’d created a chatbot that wasn’t just informative but also downright charming and full of useful (or less useful) arty fun facts. Try it out!


©Jennifer Martin 2024

AI vs Designers: Will Tools Replace Creativity?

TIME TO READ: 2 MINUTES

To illustrate the designer's tool box. Tool box with builders hard hat. Paint splattered background. Light hanging down from ceiling.

IMAGE PROMPT: designer’s tool box filled with lots of different tools on a bench with a lamp hanging down white space at top builders hard hat set square paint brushes background paint splattered wall with studio lighting

Generated in Photoshop 2024


Inspired by a course I completed recently on Advanced Creative Thinking and AI: Tools for Success, from Imperial College London, I decided to delve deeper into the AI vs Designers dilemma.

While some fear that AI will replace designers altogether, I think the reality is more nuanced. Rather than displacing human designers, AI is reshaping the nature of their work. Instead of being seen as a threat, AI is increasingly viewed as another essential tool in the designer’s toolbox. Designers are using AI to enhance their creativity, improve efficiency, and explore new exciting possibilities.

With AI technologies like machine learning and generative design, designers now have access to powerful tools that can enhance their capabilities and streamline workflows. Tasks that once demanded hours of manual labour, such as image editing and layout optimisation, can now be automated, allowing designers to focus more on ideas, concepts, innovation and collaboration. It’s revolutionising job roles and freeing up time for creativity.

However, the question remains: Will AI eventually replace designers? The answer lies in understanding that AI is not a substitute for human creativity; rather, it’s a catalyst for innovation. While AI can automate certain aspects of the design process, I believe it cannot replicate the depth of human emotion, intuition, and cultural understanding that designers bring to their work. Moreover, AI lacks the ability to form entirely original ideas or understand the broader context of a design project.

To succeed in this AI-driven landscape, though, I think designers must embrace this new technology and adapt to its possibilities. Those who resist or ignore AI risk being left behind in an increasingly competitive market. Instead of fearing displacement, designers should see AI as an opportunity to evolve their skill sets, explore new avenues of creativity, and deliver more value to their clients.

The impact of AI on the design community is profound and multifaceted. While it’s reshaping job roles and workflows, AI is not a threat to designers but rather a powerful ally in their quest for innovation. To thrive in this AI-driven era, designers must embrace technology, adapt to new methodologies, and continue to push the boundaries of creativity. By doing so, they can ensure their relevance and significance in an increasingly digital and interconnected world.

Blog outline written first with ChatGPT and Grammarly… and then enhanced and edited by a real human.


©Jennifer Martin 2024