A little over half a century
ago, I sat in a seminar in which my then-Psychology lecturer invited us to
define ‘intelligence’. After we’d had a few tries, he put us out of our misery
by telling us that the working definition is that ‘intelligence is what
intelligence tests measure’. That is to say, people have developed tests to
measure their concept of intelligence but there simply is no satisfactory
all-encompassing definition of the word. Rationality – the ability to look at
facts and data and reason a way to a conclusion – is clearly part of it, but
there is also such a thing as emotional intelligence, and it’s far from clear
to me that that aspect is currently informing much of the thinking around Artificial
Intelligence. Whether the dangers posed to mankind by AI are as serious as some
are making them out to be is an open question. There are, however, plenty of
scientists and experts in the field lining up to warn us of the dangers, and I can
certainly empathise with the idea that any truly intelligent entity which looks
at the current state of the earth is likely to conclude that the planet’s
future might be better ensured if the plague of one particular species could be
eliminated.
Let’s assume that the experts
are right, and that AI poses a real and present danger to humanity. The
proposed solution – greater regulation of those working in the field – seems to
me highly unlikely to address the issue. If there’s one thing we know about the
currently most intelligent species on this planet, it is that there will always
be someone willing to break any rule that is made. There are, after all, laws
against murder, but they don’t prevent murder, merely set out the process and
punishment for handling the murderer after the event. Telling the world in a
deep, profound, and multinational voice that they must not do certain things
doesn’t really solve the AI problem, nor does having a process for punishing the
transgressors after their products have destroyed humanity. In theory, the capacity of any computing
processor to act should be limited by the parameters set by its programmers,
but most people’s conception of true intelligence would obviously include an
ability to consider the validity of those parameters and override them as
necessary. Even Asimov’s famous laws of robotics
don’t really seem to overcome the problem, because a truly intelligent machine
would also necessarily have the capacity to challenge those. Perhaps it’s
already too late: the attempt to put controls in place is an impossible quest.
Or maybe we just need some AI help to solve it.
In the meantime, the UK’s
Prime Minister, a man whose usual solution to all problems is to claim that
they don’t exist and repeat his five doomed priorities, is busily presenting
himself as the world leader on the matter. The basis for such a claim is dubious,
to say the least. And whilst claiming to be world-leading in every field may
play well to a home audience, I do rather wonder what impact it has on other
world leaders when Sunak turns up at their meetings claiming to be setting the
agenda and leading the rest of them. Probably not the impact he thinks he’s
having.
1 comment:
Sunak is sticking his nose into the AI debate simply because his father-in-law is a major investor in a Big Tech company with a serious global reach. "Follow the money" is good advice in this case: no big deal or big social motive, just another plot/conspiracy to steer the "regulatory framework" in a direction where it looks good but has no balls.