Saturday 29 April 2017

Failing the Turing Test

For almost 70 years, the touchstone for researchers in the field of artificial intelligence has been the Turing Test.  Devised by Alan Turing in 1950, the test rests on a simple idea: if a human can’t tell whether a particular conversation is with a human or a machine, then the machine passes the test and can be considered intelligent.  It strikes me, though, that there’s a problem with this approach – what happens if an entity which in most other respects appears to be human fails the test?  Should we conclude that we are not dealing with a human at all, but with a machine?
This question came to the fore a few days ago, during an interview Theresa May gave on Radio Derby.  Asked whether she knew what a mugwump was – trust Boris to have put her on the spot again – she responded, “What I recognise is that what we need in this country is strong and stable leadership”.  Now had any competent AI researcher been holding this conversation with an unknown entity, that entity would have been immediately identified as a computer; the researcher would have had to record a ‘fail’ and note that no intelligence had been detected.  Even the most basic of AI programmes would have come up with a better answer than that.
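To make the point concrete, here is a purely hypothetical sketch in Python (not any real chatbot, just a handful of canned answers behind a keyword lookup) of the sort of crude responder that would at least manage to answer the question it was asked:

# A toy, rule-based responder: a hypothetical illustration, not any real system.
# It matches a keyword in the question and returns a canned, on-topic answer.

RESPONSES = {
    "mugwump": "A mugwump is someone who stays aloof from party politics.",
    "turing": "The Turing Test asks whether a machine's conversation can pass for a human's.",
}

FALLBACK = "I don't know, but at least I'll admit it rather than change the subject."

def reply(question: str) -> str:
    """Return the first canned answer whose keyword appears in the question."""
    q = question.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in q:
            return answer
    return FALLBACK

if __name__ == "__main__":
    print(reply("Do you know what a mugwump is?"))
    # -> A mugwump is someone who stays aloof from party politics.

Ask it the mugwump question and it at least produces something on topic; no strong and stable leadership required.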
So is she human or a machine?  And how can we ever be certain?

2 comments:

Neilyn said...

Well, she's full of 'it', so surely she must be human!

Gav said...

Well, yes, it's the party machine that comes up with these banal slogans, which would shame a fortune cookie.

All Mrs May has to do is just read them off the script, again and again. Any human could do that.

[Slightly off topic, perhaps what we really need in this country isn't strong leadership, it's wise leadership, but I don't suppose a machine could ever fathom that. Or Mrs May for that matter.]