Sowing human values into the heart of AI
21 April 2020
A philosopher wants to rein in the worst possible consequences of Artificial Intelligence before they happen.
Professor Tim Dare says, “We now have the capacity to do things with data that look troubling, that make privacy very tricky, because we can now know things about people that they haven’t told anyone and, in some cases, may not know themselves.”
Tim is an ethicist at the University of Auckland who works with public agencies on how to deploy AI systems in ways that preserve privacy and avoid ethical and moral pitfalls.
Pundits say that since 2000 we have been in the fourth stage of the industrial revolution. The first stage began in 1784 with the advent of steam power, mechanical production and rail; the second, from about 1870, brought mass production and electricity; the third, from 1969, saw automated production, electronics and computers. We are now in the age of AI, Big Data and robotics, an interdependent set of technologies that are already changing the way we live, work and govern society.
By 2020, some estimate, AI systems will have increased productivity by US$47 billion. Looking a decade further ahead, consultants PwC foresee a major leap, with productivity gains and new markets created by AI contributing a further US$15.7 trillion, more than the present economic output of China and India.
When people bandy about terms like Machine Learning, Big Data, Neural Networks and Semantic Reasoning, it can be hard to get a fix on what is actually happening. The Royal Society’s definition of AI in a recent report is: “The term for computational methods and techniques that solve problems, make decisions or perform tasks that, if performed by humans, would require thought.”
The AI Forum, the voice of New Zealand’s AI community, prefers: “Advanced digital technologies that enable machines to reproduce or surpass abilities that would require intelligence if humans were to perform them.”
Tim says that, in a sense, AI is the same as human judgement, only far better. “It’s what we’d do if we had a bigger brain and a much better memory. It becomes artificial intelligence when we say to it, ‘OK, you have predicted these cases and now we’ll give you information on whether you were right’, and the AI adjusts its own algorithms in a constant iterative process to fine-tune the accuracy of its predictions.”
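Stripped to its essentials, that predict-then-correct loop can be sketched in a few lines of code. The example below is purely illustrative, a toy logistic model trained on made-up data rather than any agency’s system: the model makes a prediction, is told the true outcome, and adjusts its own weights to become more accurate.

```python
import math
import random

def predict(weights, features):
    """Probability estimate from a toy logistic model."""
    z = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def update(weights, features, outcome, lr=0.1):
    """Nudge each weight in the direction that reduces the prediction error."""
    error = predict(weights, features) - outcome
    return [w - lr * error * x for w, x in zip(weights, features)]

random.seed(0)
weights = [0.0, 0.0, 0.0]

# Synthetic 'cases': the hidden rule is that only the first two features matter.
for _ in range(10_000):
    features = [random.uniform(-1, 1) for _ in range(3)]
    outcome = 1 if features[0] + features[1] > 0 else 0
    weights = update(weights, features, outcome)  # told the answer, adjusts itself

print([round(w, 2) for w in weights])  # the two real signals end up dominating
```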
If that sounds like science fiction, the reality can be both prosaic and unsettling. Since 2016 the Ministry of Education has used the School Transport Route Optimiser (STRO) to work out the most efficient school bus routes for 72,000 children across the country. The Ministry says STRO has reduced bus travel times, and with them greenhouse gas emissions, while saving about $20 million a year from the school transport budget.
The Accident Compensation Corporation covers about 2 million claims a year at a cost of about $4 billion. ACC is developing a new AI system to improve how claims are assessed and approved by analysing 12 million claims submitted over a six-year period to identify the characteristics relevant to whether a claim was accepted. The goal is to fast track approval of simple claims by AI, leaving complex issues for human assessment. ACC is keen to add that the AI would not be granted the power to deny claims.
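The shape of that rule is simple to express in code. The sketch below is hypothetical, with an assumed confidence threshold and made-up scores rather than ACC’s actual system, but it captures the stated design: the model may fast-track approvals it is very confident about, while everything else, including any would-be decline, goes to a human assessor.

```python
def triage_claim(approval_probability: float, threshold: float = 0.95) -> str:
    """Route a claim using a model's confidence that it should be approved.

    The model may fast-track approvals it is very confident about, but it
    never declines: everything else goes to a human assessor.
    """
    if approval_probability >= threshold:
        return "auto-approve"
    return "human review"  # complex cases and any potential decline

# Hypothetical confidence scores from a model trained on historical claims:
for score in (0.99, 0.80, 0.10):
    print(f"{score:.2f} -> {triage_claim(score)}")
```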
Public discomfort
While it is hard to quibble with making school bus transport better, the use of AI to make more serious, life-changing decisions and to shape policy should give us pause.
This discomfort was very publicly expressed by Anne Tolley, then the Social Development Minister, in 2015, when she scotched an early attempt by the Ministry of Social Development to predict and identify, from birth, children facing a high risk of maltreatment. Tolley told Stuff: “Where it goes from there is another big ethical question. Because God knows, do we really want people with clipboards knocking on people’s doors and saying: ‘hello, I’m from the Government, I’m here to help because your children are going to end up in prison?’ I just can’t see that happening.” She underlined her position with a note on the briefing paper: “Not on my watch. Children are not lab rats.”
Tim was the author of the ethics review the Ministry commissioned on the project. The project, led by Dr Rhema Vaithianathan, then of the University of Auckland and now at Auckland University of Technology, had built a Predictive Risk Model to be tested retrospectively on children born between 2003 and 2006. The goal was to check the model’s predictions against what had actually happened to the children. It was purely an observational study: no policies changed, and no frontline worker would have accessed the prediction scores.
Though shelved here, the research has been applied in a Family Screening Tool for Allegheny County, Pennsylvania, since 2016. The county’s child endangerment hotline receives more than 15,000 calls a year, and staff used to manually check a range of sources to decide whether child services staff should intervene. Before the introduction of the AI, human call takers were screening out 27 percent of the highest-risk cases and screening in 48 percent of the lowest-risk cases.
Tim says, “It matters if you have limited resources then you don’t want to put them in the wrong place. But that’s not the main argument. Interventions are burdensome. If you have unnecessary intervention, then it creates a burden on families that don’t need it.
"It’s about putting your services where they will do the most good and the least harm. We can’t help every child, so we need to help the children who need it most.” Since introduction of the tool high risk cases are now being properly triaged up to 90 per cent of the time.
Precisely because of this experience, if Tim had to choose between a human doctor and an AI, he would go with the AI every time. “All the evidence is that a good algorithm is better. It avoids obvious flaws and biases, it can do it much quicker, and it doesn’t rely on personal experience.”
Outperforming humans
AI systems in diagnostic medicine, the law and finance already outperform humans. There’s an AI that can differentiate between lung cancers and provide more accurate prognoses than human pathologists. Another system can spot Alzheimer’s disease with 80 percent accuracy a decade before symptoms appear.
In the legal world, an AI programme has predicted the decisions of the European Court of Human Rights with 79 percent accuracy; a similar AI managed 75 percent accuracy in predicting the rulings of the US Supreme Court, demonstrably better than a panel of 83 legal experts who were right 60 percent of the time. About 60 percent of global daily share trading is done by computer, and artificial intelligences are expected to be managing US$2.2 trillion in funds in 2020.
We expect a high degree of transparency from the private sector, and an even higher one from the public sector. Yet the complexity of the data and the way an AI uncovers patterns mean it is often impossible to explain how an AI came to its decision.
Tim says, “The combinations of factors and the scale of data are far too complex. So the black box of the AI becomes inscrutable. The predictions cannot be explained, though they can be verified as accurate.”
Tim says the array of ethical fish hooks is significant, but we have to find ways to manage and mitigate them, because AI will be the end of privacy as we know it.
Tim has been working to safeguard how AI is used by the public sector. For the Ministry of Social Development, he and a team have been trialling the Privacy, Human Rights and Ethics (PHRAE) framework, a detailed online assessment tool that identifies privacy, human rights and ethics risks early in the design cycle of an AI proposal.
Tim describes PHRAE as “a fancy questionnaire with advice and warnings and sections of relevant statute built into it.” Following the trial, the goal is to use PHRAE across all public services considering AI applications.
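Taking that description at face value, each item in such a questionnaire might bundle the question itself with its advice, warnings and statutory references. The sketch below is entirely hypothetical, an illustration of the idea rather than PHRAE’s actual design:

```python
from dataclasses import dataclass, field

@dataclass
class AssessmentQuestion:
    """One item in a hypothetical privacy/ethics risk questionnaire."""
    prompt: str
    advice: str
    warning: str = ""
    statutes: list[str] = field(default_factory=list)

# Illustrative item only -- not drawn from the real PHRAE tool.
question = AssessmentQuestion(
    prompt="Will the system use personal information for a purpose other "
           "than the one it was collected for?",
    advice="Document the original purpose of collection and every new use.",
    warning="Re-purposing personal information is a high-risk design choice.",
    statutes=["Privacy Act 1993, principle 10 (limits on use of information)"],
)
print(question.prompt)
```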
“AI is inevitable. One reason is the great benefits it can bring. The other is that AI is coming, ready or not. They are such powerful tools, so we need to get a strong handle on what good use is and what it is not. We need the ethics of use front and centre.”
Tim believes the tool would benefit the private sector equally. Best practice use of AI is arguably a market advantage. “I doubt anyone has an interest in being the next Cambridge Analytica.”
Story: Gilbert Wong
Researcher portrait: Elise Manahan
The Challenge is a continuing series from the University of Auckland about
how our researchers are helping to tackle some of the world's biggest challenges.
To republish this article please contact: gilbert.wong@auckland.ac.nz