Just recently, we were working on creating speculative brand materials for an autism charity, and I decided to test what AI would generate with the same task.
And the results…
They were bad.
Like, really bad.
The results it gave me used a jigsaw piece. At best, that’s a lazy concept. In reality, it’s much worse than that.
The use of a jigsaw piece suggests that autism is a puzzle to be solved, or even worse, that autistic individuals are missing essential pieces, or worst of all, that autism itself needs to be cured (see more background on this in this article by Paula Jessop).
The problem is quite simple: AI learns from data. But what if the data is biased, prejudiced, or harmful in some way?
And it struck me that anyone using AI tools has a huge responsibility to challenge stereotypes or messages that may cause harm.
I realise that I could have used the tool in a different way to get better results. But the more people rely on these tools, the lazier we will all become.
What can we do to help combat this?
Here are a few thoughts:
Educate the AI systems. We have the ability to provide feedback to the AI models so that future results are more considerate of issues like this one. So if you notice something wrong, hit that feedback button!
Test any ideas AI spits out with real people. As part of our proposal, we suggest working with people in the autistic community so we can create a brand that truly reflects them. Their insights are invaluable and help ensure a more inclusive representation. So if you do use AI, make sure you sense-check its output with the real end users.
Question everything you get back from AI. Everything! Don’t take anything AI tells you as the truth. It can be a great starting point, but it won’t provide complete answers. Always rely on human judgment and expertise to refine and contextualise the information.
Consider what you are prompting it with. Do you have the most up-to-date thinking and research? Make sure the AI is aware of it. You can feed this material in and ask the AI to use it when generating its answers (one way of doing this is sketched below).
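To make that last point concrete, here is a minimal sketch of what “feeding the AI your own research” can look like in practice. It uses the OpenAI Python SDK purely as an illustration; the model name, the guidance file and the wording of the instructions are all placeholders you would swap for your own, and nothing here is specific to any one tool.

```python
# Minimal sketch: give the model your own up-to-date guidance before asking for concepts.
# The model name, guidance file and prompt wording are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

# Your own current research and community guidance, kept in a plain text file.
with open("autism_brand_guidance.txt", encoding="utf-8") as f:
    guidance = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are helping with brand concepts for an autism charity. "
                "Follow the guidance below and avoid harmful tropes such as "
                "jigsaw-piece imagery or 'cure' framing.\n\n" + guidance
            ),
        },
        {"role": "user", "content": "Suggest three brand concepts for the charity."},
    ],
)

print(response.choices[0].message.content)
```

Even with guidance like this baked in, the output is only a starting point: it still needs to be questioned and sense-checked with the community it is meant to represent.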
I should point out that I love using AI tools, but I also appreciate the human mind. It’s crucial to strike a balance and leverage the best of both worlds, or else we will fall into some kind of horrible infinite feedback loop of doom.