At the recent Big Innovation Centre Spring Party, I was honoured to give one of the keynote addresses on how fake news can be tackled by AI. BBC World News presenter Aaron Heslehurst compered the evening, which was opened by the director of the Centre, Professor Birgitte Andersen, and concluded by Lord Tim Clement-Jones. In my speech, I openly expressed a caveat concerning AI and judgment.
Fake news is not always called what it is: lies. And lies destroy our values and our society, not least our democracy. It is critical that we find ways to check them. As Global Ambassador for PUBLIQ, I explained that on the PUBLIQ platform readers will be able to flag any issues they perceive by registering a 'dislike'. Nothing new here. Where PUBLIQ differs is that if there is too much negative feedback, the native AI wakes up, rings the metaphorical bell, and selects a panel of three judges from among established authors on the platform, matching parameters extracted from the flagged article with the expertise and reputation (or score) of these authors. The panel members then individually review the article, and are empowered to contact the author for explanations. Upon a majority vote, the panel can recommend blocking the article. In extreme circumstances, the platform can suspend the author's wallet (their earned income from what they have published) and redistribute this income to the rest of the PUBLIQ community.
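For the technically curious, a minimal sketch in Python may help fix ideas. Everything in it (the dislike threshold, the matching rule, the names) is my own illustrative assumption rather than PUBLIQ's actual code; it simply mirrors the flow just described: flag, wake the AI, assemble a matched panel, take a majority vote.

```python
from dataclasses import dataclass

# Hypothetical sketch of the review flow described above. The threshold,
# scoring rule, and all names are illustrative assumptions, not PUBLIQ's
# actual implementation.

DISLIKE_THRESHOLD = 0.25  # assumed share of dislikes that 'rings the bell'
PANEL_SIZE = 3

@dataclass
class Author:
    name: str
    expertise: frozenset   # topics the author is established in
    reputation: float      # the author's platform score

@dataclass
class Article:
    author: Author
    topics: frozenset
    views: int
    dislikes: int

def needs_review(article: Article) -> bool:
    """Wake the AI when negative feedback crosses the assumed threshold."""
    return article.views > 0 and article.dislikes / article.views >= DISLIKE_THRESHOLD

def select_panel(article: Article, authors: list) -> list:
    """Pick three judges by matching article topics to expertise, then reputation."""
    candidates = [a for a in authors if a is not article.author]
    candidates.sort(key=lambda a: (len(a.expertise & article.topics), a.reputation),
                    reverse=True)
    return candidates[:PANEL_SIZE]

def panel_recommends_block(votes: list) -> bool:
    """The panel votes individually; a simple majority recommends blocking."""
    return sum(votes) > len(votes) / 2
```

Note that in this sketch the contentious step, the final vote, remains entirely human: the AI only detects the signal and assembles the panel, which is precisely the division of labour I go on to argue for.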
In other words, PUBLIQ alternates, as appropriate, between human judgement and AI to fight fake news. This seems wise, as the automation of judgment is a dangerous area where, to paraphrase Tolkien (and Gandalf), the wisest lose themselves. Humans can depend on AI for many things, but should this include judgment? To put it differently: are we to be judged by robots? These are rather serious questions which, as Tim Clement-Jones pointed out, have deep ethical dimensions. He was also kind enough to say that I had 'pressed the button' on the heart of the matter.
The Berkman Klein Center, which brings together Harvard University and the MIT Media Lab, has launched the 'Ethics and Governance of Artificial Intelligence Initiative', aimed at bolstering the use of AI for the public good. One of the three topics to which this programme will be applied is justice, namely 'Autonomy and the State', examining how 'Governments play an increasing role as both consumer and regulator of automated technologies' and how 'AI is straining our notions of human autonomy, due process, and justice'. The programme's main website asks: 'How might approaches such as causal modelling rethink the role that autonomy has to play in areas such as criminal justice?'. The online document provided ('Algorithms and Justice'), however, opens by attacking the judicial system and human fallibility, stating:
'With fallible judges, juries, and lawyers, that system has been rightly criticized for inconsistency and for perpetuating practices that disproportionately harm marginalized groups'.
Thus, instead of taking a balanced debating stance, the document throws the door wide open for AI to judge humans, while bolstering its approach with the politically sensitive topic of bias in human justice.
By contrast, Christina Blacklaws, President of the UK Law Society, pointed out on Radio 4's Today programme a few days ago (04.06.19) that the complex AI algorithms used by UK police, for example to pick a face out of a crowd, are based on previously gathered datasets which are as prejudiced as society can be, sometimes with clear racial or gender biases which could prove discriminatory. Since then, her remarks have made Page 1 of the Financial Times (10.06.19).
The Berkman Klein Center report further advances the argument that machines, i.e. the algorithms, can be less fallible than humans:
'Algorithmic technologies may minimize harms that are the products of human judgment'.
The key term here is 'minimize', raising the question of the extent to which judgment should be delegated to AI. Some may argue that the NHS, through its Babylon app, is after all delegating common diagnostics to AI, which is admittedly a form of judgment. Why not, then, use basic AI 'judgements' for common crimes? Speed cameras and the automation of penalties, for example, already amount in a way to a judicial sanction without the intervention of a (human) judge.
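To see how little 'judgment' such automation actually involves, consider a minimal sketch. The speed limit, tolerance, and fine bands below are invented for illustration, not any jurisdiction's actual tariff; the point is that the decision reduces to a fixed lookup, with no weighing of evidence, credibility, or circumstance.

```python
# Illustrative only: the tolerance and fine bands are invented, not any
# jurisdiction's actual tariff. The 'judgment' here is a fixed lookup.

def automated_penalty(measured_speed: float, speed_limit: float) -> int:
    """Return a fine in pounds; 0 means no offence recorded."""
    excess = measured_speed - speed_limit
    if excess <= 2:       # assumed measurement tolerance
        return 0          # no offence recorded
    if excess <= 10:
        return 100        # assumed fixed-penalty band
    return 300            # assumed higher band; a human normally steps in here

print(automated_penalty(38.0, 30.0))  # -> 100
```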
I asked my cousin, Judge Paul Gulbenkian, a retired Crown Court Recorder and Immigration Judge with 40 years' experience in criminal and immigration matters, for his advice. While he accepted that AI and robots will play an important role in the future judicial system of the UK, I reproduce here verbatim the answers he was kind enough to provide in his email:
'1. Judges have to make judgments on the credibility of witnesses from their experience of life and knowledge of the law. I cannot envisage robots being able to do this in any satisfactory way.
2. They also have to decide on the validity or otherwise of the evidence that is presented to them. They may require additional evidence and/or experts’ reports. How will robots deal with this?
3. They are required to make judgments on what can often be conflicting case-law and complicated legislation after careful analysis of the evidence and the facts and circumstances of an individual case. Is this something that robots will be capable of doing?'
Even if we set aside the long path to the Age of Enlightenment, which placed the human being at the centre of philosophical thinking, to put machines increasingly at the centre of our lives, and to make ourselves increasingly dependent on them, is not only to gradually degrade our dignity, but also to put our destinies in the hands of another 'species'.
Personally, I no more wish to see humans judged by non-organic entities than by a dominating alien species. Human wisdom has its frailties, but we should not, in my view, be treated as children whose lives are overruled by some more 'adult' (algorithmic or alien) form. To take that road is to open the door to the enslavement of all human beings, on the grounds that most are not wise (but who would dare claim to be?). This is precisely what dictatorships, believing they know better and act for the better, do.
In this year's majestic BBC Reith Lectures, former Supreme Court Justice Jonathan Sumption posed the question 'Who is to decide what is necessary in a democracy?'. Given the chance, AI, like a dictatorship, could gravitate towards exactly that: deciding to award itself the role.
Human beings could choose to give AI the same powers of consciousness as ourselves, which come with the defects we have. This amidst the growing common conception (or perhaps misconception) that all future AI shall be more perfect, more 'wise', than we humans. But will it? And if so, in relation to whom or what, since greater perfection does not mean being perfect? Perfection, from a theological point of view, has always been the attribute of God. But, to push the reasoning to its limits, will AI perhaps one day challenge God and take His place? And shall we then daily serve IT in its new temple, i.e. our planet, with perhaps Alexa as its divine consort?
In conclusion, if we are to remain free, with our own self-determination and a common human destiny, we should be responsible enough, perhaps 'wise' enough, not to let AI take over more than a certain authorised portion of our daily lives. A simple decision, but one which may bear weighty consequences.
Above: William Blake (1757-1827). God judging Adam (1795). Source: Wikimedia.