It’s uncanny standing eye-to-eye with a robot. Folded sheets of metal and latex simulate the fragile impression of a cupid’s bow, a faintly furrowed brow, the crease of a smile forming between cheek and eye. Observing the almost-human form in nuts and bolts, you feel suspended in a funny kind of social limbo. Should a cyborg be greeted? Does a cyborg require personal space? Its facial expressions adjust with a soft buzz, fingers fluttering and arms flailing somewhere between those of a conductor and a rock climber. As Mark Fisher puts it in The Weird and the Eerie, the uncanny places the “strange within the familiar […] processing the outside through the gaps and impasses of the inside.” It is perfectly imperfect, hovering between real, unreal and, perhaps, divine.
The technology which governs and pervades all aspects of contemporary life rarely presents itself in human form so overtly. Rather, it’s an elegant extension of our own minds and desires. We obligingly follow the algorithms’ breadcrumbs: their nudges, matchings, ratings, comments, shares and live-tracks. We monitor our sleep patterns, stream entertainment, use the automatic checkout and make sure our friends have checked in. Hell, I’m even editing this article right now on my smartphone. The question, then, is not whether we are being directed by technology, but where we are being directed to. And will technology ever be capable of autonomously creating the cultural products we know and love?
This week, I paid a visit to the Barbican’s ‘AI: More Than Human’ exhibition. In spite of the centre’s famously disorienting layout, the exhibition was ambitiously spread over two floors, taking its attendees on a multi-storey, multi-sensory expedition through the history of robotic intelligence. It starts at the source, looking at early cultural and religious images in Shintoism, Judaism and Arabic alchemy before exploring Gothic sensibilities and technological innovations, from Babbage to beyond. As one might expect from an interactive exhibition of this kind, many of the newest, shiniest toys were beleaguered by throngs of visitors waiting their turn. Here, the diverse potentials of AI and future technologies were laid out like an ample, all-expenses-paid breakfast buffet. Less popular than the emotion-reading headsets and the virtual exhibits in which you can sneak a selfie was ‘Resurrecting the Sublime’, an immersive smell box holding the perfume of extinct flowers. Equally astonishing, and looking like a prop out of the new Spider-Man: Far From Home film, Neri Oxman and the Mediated Matter Group’s ‘Synthetic Apiary’ was a mind-bending triumph of futuristic design, aiming to save future bee populations. The pièce de résistance, and uncanniest of them all, was the robotic dog, programmed to replicate the behaviours and responses of our furry friends, yapping, wagging and cocking its head to one side in characteristic canine confusion.
Between weighty reproductions of the Difference Engine and DeepDream distortions, the exhibit which struck a personal chord was an unassuming plaque describing the burgeoning capabilities of AI programmes used for writing articles, albeit functional, data-centric articles and sports reports. It asked, sincerely, “Do we still need people to deliver emotion in the written word or could AI conceivably perform this role?” Whilst the reality of convincingly creative bots is, I hope at least, far off, that’s not to say AI hasn’t already got a seat at the table. Microsoft’s 2016 autonomous Twitter bot ‘Tay’ took a turn for the worse and was shut down shortly after spouting racist and homophobic language which it learned through user interaction on the website. Keaton Patti’s postmodern Olive Garden script-writing bot is far more socially appetising, producing comedically clunky, culturally questionable discourse which includes showstopping lines such as “Waitress: Pasta nachos for you. We see the pasta nachos. They are warm and defeated.” For now at least, according to Michael Osborne, associate professor in machine learning at Oxford University, almost 90% of creative jobs in both the UK and the US are at low or no risk of automation.
The essence of what separates the artistic products of humans and robots is the human capacity for creativity. Can a computer ever replicate creativity? And how, precisely, can we define creativity? For Kant, it was “exemplary originality”; for mathematician and author of The Creativity Code Marcus du Sautoy, creativity is far more than simply “neuronal and chemical activity”. It is, rather, “the human code”, exercising an innate understanding and reflection of what it means to be human through “new and surprising” inventions. If we continue feeding the bots datasets and manage to simulate this “human code”, perhaps they may come to appear more in our likeness, and cultural and artistic products of genuine creative merit may, one day, be produced autonomously.
Luckily, I’m not the only one freaking out at the prospect of the bots snatching my occupation. Du Sautoy is the first to admit he’s going “through a very existential crisis” at the thought of his work as a mathematician being potentially challenged by more efficient robots. However it may look, the future of AI, technology and culture as we know it remains unknown. Writing in her notes on Babbage’s Analytical Engine, Ada Lovelace observed with certainty that, “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.” Today’s generation of coders are looking to change that, and as Du Sautoy relays, “[They] believe they can finally prove Ada Lovelace wrong.”