You can’t have failed to notice the fuss and furore over the last week or two about the latest threat to the online safety of children. The so-called Momo game was everywhere, threatening to harm and even kill our most vulnerable social media users with its sinister calls to self-harm and death.
But once the hysteria died away and everyone from children’s charities to police forces declared the whole thing a hoax, many of us with an interest in tech were left scratching our heads about why it had all happened in the first place and whether there was even a grain of truth in it. More pressingly, we asked ourselves if it could happen again and how we would go about preventing not only the accompanying hysteria but the viral spread of such a subversive online interaction.
First, a little about Momo. The disturbing sculpture of a woman with a misshapen mouth, bulging eyes and the body of a bird was the original work of the Japanese artist Keisuke Aiso back in 2016. Instagrammers took a macabre liking to the sculpture and posted pictures of her, which attracted significant attention. Two years later the image hit Reddit and gained 900 comments in just 48 hours, bringing it back into the public’s consciousness. What follows from here on is the stuff of hearsay, legend and panic.
The tragic suicide of a 12-year-old girl in Buenos Aires was blamed on this apparent ‘suicide game’, while a mother in Edinburgh, Scotland claimed her son was found holding a knife to his own throat after receiving instructions through social media channels to do so. YouTube rebutted any claim of Momo creeping into kids’ programmes such as Peppa Pig, though the British media and members of the public claimed otherwise. And so it went on. Schools went on the offensive, urging parents to stay hyper-vigilant over what their children were doing online and demanding they report instances of Momo interactions to the police.
It very quickly became apparent that the Momo threat was just a hoax, and that the danger of children harming themselves came far more from the media’s promotion of it than from Momo itself. True or not, what it has done is raise some challenging questions for anyone with an interest in technology.
Since the birth of the internet, and in particular the rise of YouTube, posters have taken their offline interests and carefully placed them online in the full glare of the public. This includes, of course, tastes in the macabre and the gruesome. Horror movies watched at home translate into uploads of so-called ‘found footage’, where viewers are led to believe they’re watching real-life events unfold in real time as filmed on someone’s phone or camera – think The Blair Witch Project.
Young people, and teens in particular, with a taste for this genre pursue the latest, most gruesome channel and revel, like generations before them, in the telling and re-telling of scary tales and folklore. It’s no real surprise, then, that sometimes this folklore, these urban myths, spills beyond the fringe channels and starts causing hysteria and panic where it was never meant to go.
But the questions remain: could all this have been true, and how would we have been able to shut it down on YouTube and WhatsApp?
The truth is that yes, something like Momo is all too easy to take to viral levels. While finding your phone number for WhatsApp is harder, it’s not impossible, and once someone has it you’re only a message away. Of course, for most of us a simple block-and-delete is the easiest way to deal with that kind of intrusion, but that doesn’t make it any less disturbing, particularly for the young or more vulnerable.
YouTube says in its guidelines: “Content that aims to encourage dangerous or illegal activities that risk serious physical harm or death is not allowed on YouTube.”
It promises to take down anything that breaches those guidelines, but with 300 hours of video uploaded to the platform every minute and more than one billion users scattered across the globe, keeping tabs on content is easier said than done. Taking down hurtful or disturbing YouTube content came to the fore two years ago, when it became apparent that terror organisations were using YouTube to encourage and recruit jihadists, and that far-right movements across Europe were using the channel for similar purposes.
Like so much in the tech world, the platform leans on artificial intelligence to solve its problems and find problematic content, trusting the AI to learn and adapt with experience and become better at judging nuance as it works. The problem, of course, is that it can’t yet do this with any real degree of sophistication, and certainly not in the way a human brain can.
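To see why nuance is so hard to automate, consider a toy keyword-based flagger – a deliberately crude sketch, not how YouTube’s moderation actually works (the phrase list and function names here are purely illustrative). It catches the blatant cases but can’t tell a news report debunking a hoax from the hoax itself, and it misses anything worded even slightly differently:

```python
# Toy keyword-based content flagger -- an illustrative sketch only,
# not a real moderation system. The phrase list is hypothetical.
DANGEROUS_PHRASES = {"harm yourself", "kill yourself", "suicide challenge"}

def flag_content(text: str) -> bool:
    """Return True if the text contains any blacklisted phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in DANGEROUS_PHRASES)

# Catches the obvious case...
print(flag_content("Momo says: harm yourself"))                 # True
# ...but also wrongly flags a report debunking the hoax...
print(flag_content("Police debunk 'suicide challenge' hoax"))   # True
# ...and misses the same instruction phrased differently.
print(flag_content("Momo tells children to hurt themselves"))   # False
```

The false positive and the false negative above are exactly the gap a human reviewer closes instantly, and it is why platforms keep pushing their models toward context rather than keywords.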
As in life, so on YouTube: the answer will be self-regulation and passing on the message of how to stay safe online to the younger members of our society. What we can’t do is put a lid on the hysteria and mass panic that can be triggered at a moment’s notice. All we can do about that is ride it out until we’re able to see the wood for the trees, and trust that as our tech gets smarter it will start weeding out the nasty, the hateful and the downright dangerous.