What does AI mean for creativity?

This week, I went to the Barbican to check out what academics and artists have to say about the implications of AI at a very stimulating recording of Radio 3’s excellent Free Thinking series. The show will be broadcast in the first week of June, but in the meantime, here’s what I got from it:

1. The biases we have creep into the creative process – and since humans create AI, it absorbs bias too.

One of the things that really frustrates me is that people who position themselves as logicians always talk about machines as though they are impartial – but we make machines and we make decisions about data and tools, which means they carry our bias – it seeps in through the choices we make. A bit like how old white men (hello, Alabama) like to think the law is impartial when it is made by them, in their image, to serve their interests and give longevity to their bias.

For example, activist and academic Joy Buolamwini pointed out that cameras are biased by design. Photographic technology has been built by light-skinned people and optimised for light skin, but no-one cared until chocolate and furniture manufacturers complained that it did no favours for their product ranges. Before that, everyone just said darker skin was more difficult to light.

Buolamwini highlighted that we’re doing the same again with facial recognition technology – which is way better at dealing with light-skinned faces than dark-skinned ones. This all comes down to the quality of the data. We often use pale, male datasets because they are the most available and the most convenient. But this means that what we create reflects the bias and limitations of our society rather than its objective reality. AI can help here: an international standard for facial recognition is being developed which uses machines to classify faces on a scale of lightness and darkness rather than by subjective definitions like “white” or “black”.
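To make that concrete, here is a minimal sketch (mine, not anything presented at the event) of what auditing a face-recognition system against a lightness/darkness scale rather than subjective labels might look like. The bin numbers, field names and toy results are all hypothetical; the point is simply that poor performance on darker skin tones can’t hide inside a headline accuracy figure if you report per-bin numbers.

```python
# Hypothetical audit sketch: score a face matcher separately per skin-tone bin,
# where the bin comes from a measured lightness scale (e.g. a 1-6 scale) rather
# than a subjective label like "white" or "black".

from collections import defaultdict

def accuracy_by_skin_tone(samples):
    """samples: iterable of (skin_tone_bin, predicted_label, true_label)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for tone, predicted, actual in samples:
        total[tone] += 1
        correct[tone] += int(predicted == actual)
    return {tone: correct[tone] / total[tone] for tone in sorted(total)}

# Toy, made-up results: the overall average looks respectable,
# the per-bin breakdown shows exactly where the system falls down.
results = [
    (1, "match", "match"), (1, "match", "match"), (2, "match", "match"),
    (5, "no match", "match"), (6, "no match", "match"), (6, "match", "match"),
]
print(accuracy_by_skin_tone(results))  # {1: 1.0, 2: 1.0, 5: 0.0, 6: 0.5}
```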

What it boils down to is that AI is only as good as your data set. If you’re using data that comes off the shelf, you may not know the ingredients. But if you curate your own data, it will absorb your bias. So we need to take real care in the way we create and use AI – but also think about being more transparent in the way we approach the decision-making behind our data.
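One way to make that transparency concrete, in the spirit of the “datasheets for datasets” idea, is to publish a short, human-readable record of the decisions behind a dataset alongside the data itself. The sketch below is mine, not something proposed at the event, and every field name is illustrative:

```python
# A toy "datasheet" for a dataset: a plain record of who collected it, how,
# and what it is known to under-represent, shipped alongside the data so the
# decision-making behind it is visible to whoever trains on it later.

from dataclasses import dataclass, field

@dataclass
class Datasheet:
    name: str
    collected_by: str
    collection_method: str
    known_gaps: list = field(default_factory=list)  # groups or contexts under-represented
    intended_use: str = ""

faces_v1 = Datasheet(
    name="faces_v1",
    collected_by="in-house team, 2019",
    collection_method="scraped from publicly licensed photo archives",
    known_gaps=["darker skin tones", "older subjects", "non-studio lighting"],
    intended_use="prototype evaluation only, not production matching",
)
print(faces_v1)
```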

Given all this bias, what is AI good for?

Lots of things – provided we change our approach. Because of our tendency to humanise technology (of which more later), when we approach the concept of AI we often think in terms of it replicating or replacing existing human processes – along with all their inherent bias and failings. However, if we avoid doing this, AI can be a great tool for helping us identify and minimise bias (not eliminate – I’m a cynic) instead of reinforcing it.

 

2. Cobotics are better than robotics

The tendency when approaching AI is to assume it will replicate and then replace us. Part of this is natural human insecurity – as well as a desire for what is known as moral outsourcing, whereby we put moral arbitration onto the inanimate so we don’t have to take responsibility. But AI’s potential is far more powerful if we aim for “cobotics” – a state of cooperation between machines and humans in which we use AI to augment our skills – whether creative, scientific, judicial or otherwise – instead of aping them.

3. The future is interdisciplinary

As a French medievalist turned copywriter turned PR person turned web editor turned social media strategist turned marketing director turned content strategist with a splash of poetry, activism and HR thrown into the mix, I was not inclined to disagree with English Literature academic and roboticist Michael Szollosy when he said we all need to up our interdisciplinary game.

Interdisciplinary skills are the only way to truly understand the implications and applications of AI – and to help counteract its absorption of bias by putting ourselves in a stronger position to recognise our own. Does that mean we all need to go back to school? No – the cobotics principle can help us here. We need to change our approach to collaboration – less working separately and piecing it all together at the end, more sitting together in a room, creating things together and absorbing each other’s skills.

 

4. But… what happens to a work of art – or design – in the age of mechanical reproduction?

People build machines to solve problems, but they often create as many as they solve. At the very least, they raise questions we can’t always answer. Artist Anna Ridler uses algorithms to create some of her work, but admits this leads to queries such as: Who is the artist? Who do you credit? What is the legality here?

The fact that we are even asking such questions betrays our fragility and bias. Writer Michael Szollosy pointed out that in the Enlightenment – or Age of Reason if you prefer – thinkers made automatons – 18th-century machines, robots, AI – an ideal. In the Romantic period, artists made them an abomination – something that threatens our own humanity. Even today, robotic ethics are very conservative, and we are now redefining human sentience based on how we think about – and fear – AI.

Caught up in this is the fact that in a Western world dominated by Abrahamic philosophies, we assume that every creation is made in the creator’s image (and given the previous discourse on bias, often it is). As a result, even when using the phrase Artificial Intelligence, we forget that it is a simulation of intelligence: in our eagerness to believe it is like us, we confer actual intelligence upon it.

But it’s worth noting that drawing and design are simultaneously nouns and verbs. The noun is an object a machine can create. The verb – the act of creating a work of art – is a process that involves meaning, memory, context and provenance, all of which come from a human artist.

Reading list for the curious and/or intellectually inclined

All-female line-up too. BOOM.
