Artificial Intelligence (AI) is arguably the most hyped term of 2017, possibly surpassing even Blockchain technologies. Yet, like Blockchain, there is little public understanding once you look beyond the high-level marketing spin.
However, AI, and what it is leading to, presents challenges at many different levels for both technology and society.
At the most basic level, AI is glorified automated routing and decision-making, with or without voice recognition. It discerns key words and phrases and tests them against a set of pre-programmed rules. If there is a match, or a correlation, the communication is routed to a pre-determined destination or to additional code. A glorified answering machine; a Human-Computer Interface (HCI); no more intelligent than an amoeba.
As the rule-sets grow more complex, more sophisticated routing and decision-making can be executed. However sophisticated the interaction appears, though, it is still governed by a complex set of rules.
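The rule-driven routing described above can be sketched in a few lines. This is a minimal illustration only: the keywords, rules and destinations here are invented for the example, and real systems layer voice recognition and far larger rule-sets on top of this kind of matching.

```python
# A minimal sketch of rule-based routing: match key words against
# pre-programmed rules and route to a pre-determined destination.
# All rule names and destinations below are hypothetical.

def route(message, rules, default="human-operator"):
    """Return the destination of the first rule whose keywords all appear."""
    text = message.lower()
    for keywords, destination in rules:
        if all(k in text for k in keywords):
            return destination
    return default  # no rule matched: fall back to a default destination

RULES = [
    # (keywords that must all appear, destination)
    (("invoice", "overdue"), "billing-escalation"),
    (("invoice",), "billing-queue"),
    (("password", "reset"), "self-service-portal"),
]

print(route("I never received my invoice and it is overdue", RULES))
# -> billing-escalation
print(route("How do I reset my password?", RULES))
# -> self-service-portal
```

Note that rule order matters: the more specific rule ("invoice" and "overdue") is listed before the general one, exactly the kind of detail that makes large rule-sets hard to disentangle.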
In Isaac Asimov’s ‘Robot’ series of Science Fiction stories, US Robots’ Chief Robopsychologist Dr. Susan Calvin spends her time disentangling immensely complex rule-sets to explain particular robot behaviour. And robots – from the Czech word “robota” (“drudgery, servitude”) – are exactly that: automated servants obeying rules.
However, their evolution – from automatons with inherent machine learning to sentient cognitive machines, with the capacity to feel, perceive or experience subjectively, and then to take decisions based on that experience – is approaching over the horizon at an ever-increasing rate. At what point do we consider the Rubicon to be crossed?
Such machines are no longer merely the stuff of Science Fiction, like the HAL 9000 from the film 2001: A Space Odyssey. Yet, prophetically, these are exactly the sorts of issues we will need to address with Cognitive Machines, even though it is questionable whether HAL was truly cognitive.
Cognitive Machines do, however, force us to re-examine a number of fundamental societal issues, and challenge us to redefine and extend the meanings of words such as “Trust”, “Liability” and “Benefit”.
Whereas AI with machine learning can currently be governed by existing contract law, I would suggest that Cognitive Machines may require a rethink.
Isaac Asimov touched on this topic in “The Bicentennial Man”, in which an android – an AI in human form – bids to be recognised as a human life form.
In 2017, the Chain of Trust in any digital transaction may well involve an AI element in the path.
Today may be the time to start preparing for Cognitive Machines by considering their potential impact on our society, so that when – not if – they arrive, we will be ready.
Chair of EEMA, Jon Shamah
View from the EEMA Chair – April