Learning to see, w/ Akten

Originally inspired by the neural networks of our own brain, Deep Learning Artificial Intelligence algorithms have been around for decades, but they are recently seeing a huge rise in popularity. This is often attributed to recent increases in computing power and the availability of extensive training data. However, progress is undeniably fuelled by the multi-billion dollar investments from the purveyors of mass surveillance: technology companies whose business models rely on targeted, psychographic advertising; and government organisations and their War on Terror. Their aim is the automation of *Understanding* Big Data, i.e. understanding text, images and sounds. But what does it mean to ‘understand’? What does it mean to ‘learn’ or to ‘see’? Can a machine truly understand what it is seeing? Moreover, can it creatively reinterpret what it thinks it understands?

“Learning To See” is an ongoing series of works that use state-of-the-art Machine Learning algorithms as a means of reflecting on ourselves and how we make sense of the world. The picture we see in our conscious minds is not a direct representation of the outside world, or of what our senses deliver, but a simulated world, reconstructed from our expectations and prior beliefs. Artificial neural networks, loosely inspired by our own visual cortex, look through surveillance cameras and try to make sense of what they are seeing. Of course, they can see only what they already know. Just like us.

The work is part of a broader line of inquiry about self-affirming cognitive biases, our inability to see the world from others’ points of view, and the resulting social polarisation.

The series consists of a number of studies, each motivated by related but distinct ideas.