For almost 70 years, the touchstone of researchers in the field of artificial intelligence has been the Turing Test. Devised by Alan Turing in 1950, the test holds that if a human can’t tell whether a particular conversation is with a human or a machine, then the machine passes and can be considered intelligent. It strikes me, though, that there’s a problem with this approach – what happens if an entity which in most other respects appears to be human fails the test? Should we conclude that we are not dealing with a human at all, but with a machine?
This question came to the fore a few days ago, during an interview Theresa May gave on Radio Derby. When asked whether she knew what a mugwump was – trust Boris to have put her on the spot again – her response was “What I recognise is that what we need in this country is strong and stable leadership”. Now, had any competent AI researcher been holding this conversation with an unknown entity, that entity would have been immediately identified as a computer; the researcher would have had to record a ‘fail’ and note that no intelligence had been detected. Even the most basic of AI programmes would have come up with a better answer than that.
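To see just how low that bar is, consider the sort of keyword-matching trick used by chatbots since Weizenbaum’s ELIZA in the 1960s. The sketch below is purely illustrative – the `respond` function and its pattern are invented for this example – but even these few lines manage to stay on topic:

```python
import re

# A purely illustrative, ELIZA-style keyword responder.
# It doesn't understand anything; it just pattern-matches the question.
def respond(question: str) -> str:
    q = question.lower().rstrip("?")
    # Look for questions of the form "... what (is) a <word> ..."
    m = re.search(r"what (?:is )?a (\w+)", q)
    if m:
        # At minimum, acknowledge the thing being asked about.
        return f"I'm afraid I don't know what a {m.group(1)} is. Why do you ask?"
    return "Tell me more."

print(respond("Do you know what a mugwump is?"))
```

Note that even this trivial program at least acknowledges the mugwump – which is more than the interview managed.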
So is she human or a machine? And how can we ever be certain?