Many researchers in expert systems have been keen that their systems should be able to justify the advice they give, and so built an explanation facility into them. For instance, Cardiff University's Dave Marshall has a set of lecture notes on expert systems at https://users.cs.cf.ac.uk/Dave.Marshall/AI1/mycin.html . In them, he writes:
EXPLANATION
This mode allows the system to explain its conclusions and its reasoning process. This ability comes from the AND/OR trees created during the production system reasoning process. As a result most expert systems can answer the following why and how questions:
Why was a given fact used?
Why was a given fact not used?
How was a given conclusion reached?
How was it that another conclusion was not reached?
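To make Marshall's point concrete, here is a minimal sketch, in Python, of how such an explanation facility can work. It is purely illustrative: the rule names, fact names, and functions below are my own inventions, not MYCIN's code and not the code of your system. The idea is simply that the reasoner records, for every conclusion, which rule fired and which facts satisfied its AND of conditions; the "how" and "why" questions are then answered by walking that recorded tree.

```python
# A minimal, illustrative sketch of an explanation facility for a
# forward-chaining production system. Not MYCIN; all names are made up.

from dataclasses import dataclass, field

@dataclass
class Rule:
    name: str
    antecedents: list      # facts that must all hold (the AND part)
    consequent: str        # fact concluded when the rule fires

@dataclass
class Trace:
    # Maps each derived fact to the rule that concluded it and the facts
    # that supported that rule -- in effect, the AND/OR tree of the run.
    support: dict = field(default_factory=dict)

def forward_chain(rules, facts, trace):
    """Fire rules until no new facts are derived, recording support."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule.consequent not in derived and all(a in derived for a in rule.antecedents):
                derived.add(rule.consequent)
                trace.support[rule.consequent] = (rule.name, list(rule.antecedents))
                changed = True
    return derived

def explain_how(fact, trace, indent=0):
    """Answer 'How was this conclusion reached?' by walking the trace."""
    pad = "  " * indent
    if fact not in trace.support:
        print(f"{pad}{fact}: given as input")
        return
    rule_name, antecedents = trace.support[fact]
    print(f"{pad}{fact}: concluded by {rule_name} because all of:")
    for a in antecedents:
        explain_how(a, trace, indent + 1)

def explain_why_used(fact, trace):
    """Answer 'Why was this fact used?' by finding conclusions it supports."""
    uses = [concl for concl, (_, ants) in trace.support.items() if fact in ants]
    if uses:
        print(f"{fact} was used to conclude: {', '.join(uses)}")
    else:
        print(f"{fact} was not used by any rule that fired")

# A hypothetical toy knowledge base, purely for illustration.
rules = [
    Rule("R1", ["fever", "stiff neck"], "suspect meningitis"),
    Rule("R2", ["suspect meningitis", "gram-negative stain"], "suggest broad coverage"),
]
trace = Trace()
forward_chain(rules, {"fever", "stiff neck", "gram-negative stain"}, trace)
explain_how("suggest broad coverage", trace)
explain_why_used("fever", trace)
```

Running the sketch prints a nested "because" trace for the final conclusion, bottoming out in the facts given as input, and reports which conclusions the input fact "fever" contributed to. That is the whole trick: keep the tree, and the explanations come almost for free.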
Researchers have stressed the importance of this ability. One, whom I knew personally, was Donald Michie ( https://en.wikipedia.org/wiki/Donald_Michie ), a codebreaker at Bletchley Park and a pioneer of British Artificial Intelligence. In their 1984 book The Creative Computer, Michie and his co-author Rory Johnston wrote:
Taking the opportunities [of computers] will not be easy. It will require a complete reversal of the approach traditionally followed by technology, from one intended to get the most economical use out of machinery, to one aimed at making the processes of the system clearly comprehensible to humans. For this, computers will need to think like people. Unless the computer systems of the next decade fit the 'human window' they will become so complex and opaque that they will be impossible to control. Loss of control leads merely to frustration as far as many applications now are concerned, but when society becomes more dependent on computers, and where such things as military warning systems, nuclear power stations and geopolitical and financial communications networks are operated by them, loss of control can lead to major crisis.
Notice these phrases: "making the processes of the system clearly comprehensible to humans"; "think like people"; "fit the 'human window'"; "will become so complex and opaque that they will be impossible to control". The authors clearly want systems not to be opaque.
Similarly, in "Experiments on the Mechanization of Game-Learning", Computer Journal Vol. 25, 1, (1982), Michie writes:
It will not be desirable for control rooms in nuclear power stations, air traffic control centres, and the like to become polluted with uncomprehended descriptions generated by their associated computing systems.
Notice too the applications that Michie and Johnston mention, which include military warning systems. There had already been many occasions when these systems almost started World War III ( https://en.wikipedia.org/wiki/List_of_nuclear_close_calls ), with causes as diverse as solar flares, circuit errors during power cuts, moonrise, swans, and a faulty satellite warning system. We were saved from that last one by one man, Stanislav Petrov ( https://www.bbc.co.uk/news/world-europe-24280831 ). I'm sure Michie and Johnston had these in mind.
It's now not 1984, but 2024, forty years later. Computer systems are vast, ubiquitous, and unintelligible. Google won't explain its search rankings; YouTube won't explain its video recommendations; Twitter won't explain why it promotes those tweets but not these tweets. There's no point in adding to this opacity, and I hope the explanations that I'm building into your system will be a selling point.