‘THINK OF THE CHILDREN’: SOME DIRTY LITTLE SECRETS
Guest blog post by Paula Boddington, Associate Professor of Philosophy & Healthcare, University of West London.
We are told that tech can solve all our problems and ‘keep our kids safe from online harm’. But how can tech pick out ‘hate’ in a world of muddled thinking (eg what is a woman?) and even half-free speech?
Is ‘online safety’ just talk, generating ever more profits for big business while appearing to ‘do something’? If the government had kept the promise, first made under David Cameron and then legislated in the Digital Economy Act 2017, to bring in age verification for watching online porn, we might have answers by now.
We OBJECT to tech lies.
Much of what is happening now that is of concern for women and society in general is brought in
under the twisted imperatives of technology, which seduces policy makers (and the public) into seeing
machinery as an agent, the benevolent answer to our needs – even as people are turned
into machines that can be disassembled into component parts and controlled, eg sex robots.
A key element is a rhetoric of a future that brings a ‘greater good’, appealing to emotions deeply
held by any human, or inculcated in us by the repetitious urgings of that same ubiquitous
technology. This future promise lures, bewitches and blinds us to reality.
A conjuring trick assures us that the technology, despite being almost always still in nascent
form, is ready for the job, equal to or better than any human. Together with the bureaucrats
and technocrats in charge of it, it will help us all to live more human lives.
Look for this pattern; it’s everywhere. The trumpeting around issues concerning so-called ‘online
safety’ invariably starts with ‘think of the kiddies’. Who does not want to do that? The announcements
around the proposed Online Safety Bill in the UK cite concerns for children’s safety online and for
online measures to combat terrorism. Having reeled us in, the Bill cites sweeping ambitions to make
the UK the ‘safest place in the world to go online’, and to target online content that may be legal, but
is ‘harmful to adults’. Whatever ‘harmful’ means remains to be seen. Ofcom will be in charge of that.
Ironically, of course, ‘harmful to adults’ will likely include gender critical content from those
concerned with the harms to children being encouraged to ‘transition’ in the name of the widely
promoted ideology of transgenderism and the highly profitable medical industrial complex behind it
– yet again based upon the twin foundations of ‘saving the kiddies’ (notably by waving bogus
claims of suicidality) and false promises of an untested, dangerous technology.
The claim that the Online Safety Bill is the brainchild of a government genuinely on a mission
to protect children from sexual predators would be laughable were the situation not so serious.
Predators rush unhindered to the online world and pounce. Criminals who rape and prostitute children
walk free all over the country, and the few who are caught are out of prison, often offending again
in an eyeblink. Does the government really care about their victims? Is it pouring money into effective
policing and prosecution and into support for the victims?
Nope. It’s pouring money into tech – that’s business. Those lobbying the government to make our
internet so safe include firms producing safety technology that stand to gain financially. The concern
for children may be not entirely fictional, but it is mostly rhetorical.
Moreover, this Bill aims to constrain the legal behaviour of adults online, effectively censoring the
whole population, to protect us from ourselves, while ironically it is the internet itself which thrives on
and fosters sexual abuse of minors and adults. It is also built on principles of human
psychology that purposely exploit our weaknesses, encouraging the kind of impulsive, angry,
mean-tweet behaviour that generates ‘online hate’ content.
So is the technology held at fault? No. We are. Tech good, human bad.
The urge to ‘think of the children’ is so strong that most people won’t even notice this. The brain
switches to survival or emergency mode when thinking of endangered children. Nor will most people
notice that the technology on which this relies is not up to the job. Flagging, removing or
suppressing content will be done by algorithms, backed up by human moderators in order to
allow for a modicum of review so that this can be passed off as fair. What’s actually going on,
of course, is that perceived problems are generated by technology, so it’s assumed that technology
will be able to solve them. But are the algorithms that detect ‘online harm’ really up to the job?
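To see the shape of the system being sold to us, here is a minimal, entirely invented sketch (the keyword ‘classifier’, the thresholds and the names are my own placeholders, not any real platform’s pipeline) of automated flagging with a human-review backstop:

```python
# Illustrative sketch only: a toy moderation pipeline in which an algorithm
# scores every post and human moderators see just the borderline cases.
# The scoring rule, thresholds and field names are all invented.

from dataclasses import dataclass
from typing import List

@dataclass
class Decision:
    text: str
    score: float   # the algorithm's guess at how "harmful" the post is
    action: str    # "allow", "human_review" or "remove"

def score_harm(text: str) -> float:
    # Stand-in for a trained classifier: a crude keyword count.
    flag_words = {"hate", "kill", "scum"}
    hits = sum(word in flag_words for word in text.lower().split())
    return min(1.0, hits / 2)

def moderate(posts: List[str],
             remove_at: float = 0.9,
             review_at: float = 0.5) -> List[Decision]:
    decisions = []
    for text in posts:
        s = score_harm(text)
        if s >= remove_at:
            action = "remove"          # removed with no human in the loop
        elif s >= review_at:
            action = "human_review"    # queued for a (briefly trained) moderator
        else:
            action = "allow"
        decisions.append(Decision(text, s, action))
    return decisions

if __name__ == "__main__":
    for d in moderate(["I hate Mondays", "lovely weather today"]):
        print(d)
    # "I hate Mondays" scores 0.5 and is queued for review: the algorithm
    # cannot tell weary humour from abuse.
```

The point of the sketch is only the shape of the system: the algorithm decides first, and the humans exist at the margins to make the whole thing look fair.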
If you ask those advising bodies like Ofcom, the answer is clearly ‘Yes. With a few teething problems
perhaps, but give us a bit more money and we’ll sort it’. But if you read the academic papers (often
written by the very same people) you get a different answer.
Algorithms work poorly with natural language. They are very bad at detecting nuance and
context; in fact, some professionals in this area believe that it is so hard to determine a speaker’s
intention online that there is no point in trying, and that whether content counts as
harmful or as hate speech will simply depend on how it is perceived by others.
In other words, this weakness in the technology does not lead to a rejection of the technology, but to a
rejection of the human element of the speaker’s intent and meaning. See how it works?
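To make the intent problem concrete, here is a toy, invented example (the word list and scorer are placeholders, not any deployed system) showing why a word-level detector cannot tell abuse from a victim quoting it, or from someone objecting to it:

```python
# Toy demonstration of context-blindness. A word-level "hate" scorer sees
# only the presence of a term, not who is speaking or why.
# The word list, scorer and example sentences are invented for illustration.

SLUR_LIKE = {"vermin"}   # placeholder standing in for genuinely hateful terms

def naive_hate_score(text: str) -> float:
    tokens = text.lower().replace(",", " ").split()
    return 1.0 if any(t in SLUR_LIKE for t in tokens) else 0.0

examples = [
    "those people are vermin",                    # actual abuse
    "he called me vermin and it still hurts",     # victim reporting abuse
    "calling anyone vermin is never acceptable",  # counter-speech
]

for text in examples:
    print(f"{naive_hate_score(text):.1f}  {text}")

# All three lines print 1.0: the scorer detects the word, not the intent,
# so the report of abuse and the objection to it count as "hate" too.
```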
We also know that algorithms for detecting so-called online hate are particularly likely to label
content from members of minority groups as hate: their use of language tends to be idiosyncratic
and, like any group’s, subject to constant change. So minority groups are particularly in danger of
being ousted from social media. But one of the reasons for removing content online is to
reduce harms to these very same groups. Again, the technology harms those it was
intended to assist! Is the technology therefore dropped? No, because we all crave the glorious
future promised. The technocrats will sort this out, obviously, since they are so great at their jobs
(and also because they will lose those jobs unless they carry on tinkering and spouting false
promises). Meanwhile, those impacted must just suck it up.
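The pattern described above is exactly what researchers check for: compare how often benign posts from each group are wrongly flagged. The sketch below uses made-up posts, made-up group labels and a crude keyword filter purely to illustrate the mechanism:

```python
# Invented illustration of the bias check run on hate-speech classifiers:
# compare how often *benign* posts from each group are wrongly flagged.
# Posts, group labels and the keyword filter are all made up.

from collections import defaultdict

# (text, group, is_actually_hateful)
posts = [
    ("y'all ain't ready for this",   "dialect_A", False),
    ("we finna head out early",      "dialect_A", False),
    ("wasn't feeling it today",      "dialect_A", False),
    ("what a pleasant afternoon",    "dialect_B", False),
    ("I disagree with that view",    "dialect_B", False),
    ("those people deserve nothing", "dialect_B", True),
]

FLAG_TOKENS = {"ain't", "finna", "deserve"}   # crude filter trained on biased labels

def flagged(text: str) -> bool:
    return any(tok in text.lower().split() for tok in FLAG_TOKENS)

false_pos = defaultdict(int)   # benign posts wrongly flagged, per group
benign = defaultdict(int)      # benign posts seen, per group
for text, group, hateful in posts:
    if not hateful:
        benign[group] += 1
        if flagged(text):
            false_pos[group] += 1

for group in sorted(benign):
    print(f"{group}: false-positive rate {false_pos[group] / benign[group]:.0%}")

# dialect_A: 67%, dialect_B: 0%. The filter punishes one group's ordinary
# speech, which is the pattern reported for real classifiers.
```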
Another problem lurks behind the curtain of algorithms used to detect online hate. How are they
trained? Technology is made by humans, and algorithms learn how to label content as hate or
not hate by being trained by humans on sets of data before being set loose into the wilds of the
internet. A dirty little secret is that very often, not only do the human trainers disagree with each
other, they aren’t even trained themselves. It’s just their own opinion. So it’s pretty
random as to what the algorithms label as hate.
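To see why, here is a small invented illustration of how such ‘ground truth’ is typically assembled: a handful of annotators vote on each post, a bare majority wins, and the disagreement is simply thrown away:

```python
# Invented illustration of how training labels for "hate" detection are made:
# a few annotators vote, a bare majority wins, and the disagreement is lost.
# All annotations below are made up.

from collections import Counter

# Each row: the labels three annotators gave one post (1 = hate, 0 = not hate)
annotations = [
    [1, 1, 1],   # everyone agrees
    [1, 0, 0],
    [0, 1, 0],
    [1, 1, 0],   # becomes a "hate" example on a 2-1 vote
    [0, 0, 1],
]

def majority_label(votes):
    return Counter(votes).most_common(1)[0][0]

training_labels = [majority_label(v) for v in annotations]
unanimous = sum(len(set(v)) == 1 for v in annotations)

print("labels fed to the algorithm:", training_labels)
print(f"unanimous agreement on {unanimous} of {len(annotations)} posts")

# Only one post in five is uncontested, yet all five labels are treated as
# settled fact once the algorithm is trained on them.
```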
Another dirty little secret is that the data sets available to those developing algorithms to detect online
hate are pretty limited. Much of the data comes from Twitter, simply because it is available. But Twitter
has only around 300 million monthly active users, a small slice of the online population, and most
content comes from a sub-section of these. Thus the data is unrepresentative, and also sloppily
labelled. Those working in the field know full
well this is the problem – but many of them support the idea of using this ropey technology to judge
humanity – and to lock many out of the public square online.
If the public knew that their online discourse was effectively being judged for hateful content by a
troupe of untrained, unknown, unaccountable and badly paid employees hidden behind the algorithms
used to judge us, there would be an outcry. Even some of the politicians pushing this online
censorship might start to wonder. The illusion is that it’s being done by machinery, by technology,
by complicated and very clever formulae. This is a conjuring trick, performed while distracting us as
‘we think of the kiddies’.
Don’t fall for it.
Paula Boddington