Friday, 6 November 2009
Internet Security: a conspiracy against the customer?
In the first week of July, 1980, the world would have been destroyed if computer systems had been left to their own devices.
Here’s the novelist Christa Wolf, writing her diary in what was then East Germany, on the very brow of the face-to-snarling-face confrontation between capitalism and communism:
“Meteln, July 8. Twice in the past week, the US computer has sounded the alarm: Soviet rockets are flying towards the United States. In such a case, we are told, the President has twenty-five minutes to make a decision. The computer (we hear) has now been switched off. The delusion: to make security dependent on a machine, rather than an analysis of the situation possible only to human beings”.
From the fact that we’re still here one can deduce that human intervention – probably a red-phone call between White House and Kremlin – overrode the intentions of the machine and prevented our annihilation. But now fast-forward almost thirty years to the recent RSA Europe 2009 Conference in London, and listen to a senior figure from Internet Security:
“Whenever you can take the user out of the security equation without affecting his or her performance then you’re well on the way to a security solution”.
And another speaker:
“Our systems are over-reliant on the human element. We need to completely eliminate human involvement and mitigate its influence”.
Isn’t this the same delusion at large? Doesn’t it demonstrate, if we accept George Bernard Shaw’s theory that “all professions are conspiracies against the laity”, the way Internet Security is becoming a conspiracy against the very people it’s supposed to be protecting – the clients and the colleagues who are its ultimate customers? A conspiracy to exclude them, to baffle them, to talk over and round them in an unintelligible language?
The same speaker who urged us to “take the user out of the security equation” invited us to be astounded and outraged when one of the many surveys in his deck revealed that “98 per cent of UK office workers [yes, that’s almost every one of them] do not see the protection of corporate electronic data as their responsibility”.
His solution? Don’t involve them at all. Ignore them. Build a slicker system.
As if security had as its grail a kind of fully-automated Hadron Collider which could revisit the big bang of the virtual world’s creation and reinvent it with the elimination of risk and “the human element”.
My friend and colleague Peter Wood, who runs First Base Technologies, illustrates in his lectures and practice, chillingly and entertainingly, that yes, cyber-criminals are technologically adroit, but principally, they are social engineers: the first vulnerability they seek out is not in the machine, but in the mind: greed, vanity, lust, envy, fear or innocence and trust.
So how do you patch those?
At the 21st annual conference of FIRST, the Forum of Incident Response and Security Teams, held in Kyoto this June, one of the simplest and most provocative suggestions was made by Dr Suguru Yamaguchi, member and adviser on information security at the Japanese Cabinet Office National Information Security Centre.
“We need to find ways to help corporate executives actually to visualize what goes on when a computer network is under attack”, he said. “Just explaining in words isn’t enough – the words are too dense, too technical – what we should do is design special games and animations which will bring the severity of current threats vividly alive in the executives’ imaginations”.
His idea flashed around the world, and was picked up on nearly 150,000 news sites within days. He said: stop either ignoring the user or, when you deal with him or her, being technical, turgid, instructional; instead, talk to humans in ways that humans understand; start being dramatic, start playing, start investigating ways to communicate which may even be non-verbal.
It was a theme I developed in my own address to RSA Europe, telling an audience:
“The educational establishment in the UK was convulsed a few days ago by a report which recommended that the culture of targets and rigid curricula for little children should be swept aside and replaced by learning through games and play, at least until the child has reached six.
“Immediately, radio phone-ins were flooded with reports from parents about education systems abroad which applied this theory to astonishing effect. I recall one father ringing in to say that by three his daughter was speaking fluent Japanese and Chinese. She hadn’t been taught them. She’d learned them in a game.
“Of course, at some stage children have to knuckle down and address themselves to a syllabus.
“But why shouldn’t we, as adults, recapture the pleasure and the value of learning through play, and use that as a principal tool to bring the inexpert into the world of Internet Security?”
And I expect the same ideas to be percolating through FIRST’s 22nd conference next year in Miami, which has as its theme “Past the Faded Perimeter” – that is to say, how does security contend with criminals now the 20th century device of inclusive and exclusive technological ramparts has so often turned out to be flawed and permeable to the cunning of delinquent social engineers, playing on human nature?
How else but by involving and enlightening the users, the “human element”, and turning them into willing conscripts in a sort of home guard or civil defence association which becomes a human firewall?
In the UK, the three words “computer says no” have become a catchphrase. Delivered in an advertisement by a plump, bored operative to a supplicant for a loan or mortgage he’s about to disappoint, the line is an indication by a bank of the kind of financial institution it is not and will not become.
But “computer says no” also speaks to a deeper sentiment: to a public rage at and contempt for all those organizations which have replaced the discriminations of the human mind with closed and inflexible processes; which have, for example, in the justice system (one thinks of the case of Gary McKinnon) eliminated reasonable doubt – because a computer has no reason to doubt – and become all sword and no balance.
Partly out of nostalgia (or should I say, Ostalgia: I was in Berlin twenty years ago when, thanks entirely to the pressure of human sentiment, the wall came down) and partly with an eye on a future project, I have been re-reading and researching Christa Wolf, with whose words I began this blog.
In a relatively recent interview she said:
“With the wild growth of technology and global networks, it seems to me that the power of systems is on the rise. And these are becoming independent, it’s no longer possible to ascertain which people carry responsibility. Rational counterweights, like democracy for example, seem to have been hollowed out, and their influence is declining. This is not only regrettable, it also makes you fearful of what our grandchildren’s generation will have to cope with.”
In 1983 Christa Wolf published a novel called “Cassandra” in which she retold the tale of the unhappy Trojan priestess partly as an allegory – well, it’s my theory – for her predicament as an artist in what was then communist East Germany.
Let me finish by reminding you of Cassandra’s story (that’s her with the snakes at the top of this piece, by the way). It was most delicately set down, I think, by the great Dr Lempriere in his Classical Dictionary of 1834:
“CASSANDRA, daughter of Priam and Hecuba, was passionately loved by Apollo, who promised to grant her whatever she might require, if she would gratify his passion. She asked the power of knowing futurity; and as soon as she had received it, she refused to perform her promise, and slighted Apollo. The god, in his disappointment, wetted her lips with his tongue, and by this action effected that no credit or reliance should ever be put on her predictions, however true or faithful they might be… She was looked upon by the Trojans as insane, and she was even confined, and her predictions were disregarded.”