The architects of AI

How human minds shape machine intelligence

Professor Michael Witbrock

While the age of artificial intelligence is poised to change society as we know it, our future relies on human intelligence to guide us towards a society in which we want to live.

Human civilisation evolved from the minds of humans, enhanced by various computational systems. Artificial intelligence is set to disrupt this reality, with an explosion of many different kinds of strong intelligence imminent.

Michael Witbrock is a professor at the School of Computer Science and the founder of two AI-focused organisations, the NAOInstitute, and the Strong AI Lab (SAIL), hosted by Waipapa Taumata Rau, University of Auckland.

The NAOInstitute exists to understand the evolving natures of Natural, Artificial and Organisational Intelligences (NAOI), and the increasingly strong relationships amongst them. This includes the development of AI in the context of social responsibility and its impact on the Earth and our civilisation.

The Strong AI Lab (SAIL) functions within the NAOInstitute, focused on improving the capability of AI systems in pursuit of the long-term goal of seeing AI transcend its current limitations with social responsibility embedded in all of its objectives.

We are about to enter an age where human intelligence is not the only kind of strong intelligence around.

Michael Witbrock

Witbrock’s life’s work is in the field of artificial intelligence, where he is dedicated to improving the modelling and algorithms that influence the functionality of AI.

One aspect of his research aims to prevent AI from replicating the flaws human intelligence suffers from. To do this, he is working to help AI learn from data without preconceptions. The hope is to increase modelling accuracy and help address errors like confabulation – where an AI system generates incorrect information without a source.

“I’m most excited about progressing towards more reliable kinds of thinking to influence how these systems learn. For example, embedding reliable causal inference,” he says.

Witbrock’s opinion is that the rapid development of AI technology will see many automated systems become more powerful, disrupting and potentially improving our lives on a scale we have not seen before.

“We are about to enter an age where human intelligence is not the only kind of strong intelligence around.”

He says many examples of technological developments in the past have displaced workers by carrying out tasks more efficiently and accurately. However, he dismisses the argument that AI is just another example of humans needing to adapt to change as a grave underestimation of the situation.

“People often argue that humans have always adapted to technological advances replacing jobs in the past, and this is another evolution of that happening – well, I think exactly not.”

He predicts it is likely AI will replace many tasks previously carried out by people, and we need to plan to ensure society is prepared for this change. Or, at the very least, ensure that the decisions made at this critical moment don’t prevent us from benefiting in the future.

With the significance of this rapidly evolving technology in mind, he still maintains a cautiously optimistic outlook because he believes a more AI-integrated society could result in more equitable and inclusive communities. He suggests AI could enable us to expand our perspective and consider the interests of more organisms and entities affected by human civilisation. “I think we [New Zealand] can do a good job of working out how to handle this change, which we should expect to eventuate over the next five to ten years, so not a long time.”

He attributes this belief to New Zealand’s relatively small and well-resourced population, which he believes is also socially well-positioned to adapt.

While Witbrock acknowledges there are valid reasons to be concerned about safety, he pushes back on the argument that all applications of AI must be 100% safe before being deployed. He argues this demand is unrealistic and ignores the fact that our current systems, largely controlled and operated by humans, are not entirely safe and often cause unintentional harm. This reality is experienced to a greater extent by minorities, whom many of these systems are not designed to cater for. Further, he says almost all our concerns about AI are true of humans, too.

“Hallucination, a problem better described as confabulation, is something humans do all the time,” he laughs.

His rough bet is that AI systems will continue to become increasingly powerful until they are generally more capable than humans at many tasks. He believes the technology holds exciting, far-reaching potential if we implement it well – an opportunity he thinks we should embrace.

“Suppose we decide the end goal is for people to be free to pursue interests unrestrained by the obligations our current systems require – what would that look like, and how can we achieve it?”

There are economic benefits, too. Witbrock suggests that by embracing this transition, New Zealand could increase its collective wealth more rapidly than other countries. However, rapid growth brings risks that must be considered.

Possible solutions include gradually increasing income support, or taxing the productive output of AI systems.

“We still want the work of an AI to be significantly cheaper than the cost of a human to do the same thing, but the cost does not need to be reduced to zero.”

He elaborates, “At the moment, we’re taking up to 39% of the money a person receives, and there’s no reason we shouldn’t take 39 or 40% of the increased value produced by an AI system, which would otherwise have had to be paid to a person and taxed – but I’m not an economist.”

Witbrock assures us these conversations are happening here and around the world. He has just attended a Microsoft event, Exploring AI Adoption in Aotearoa New Zealand, and is speaking from Wellington following a meeting with the Ministry of Business, Innovation and Employment (MBIE).

He says it is an interesting process because the subject is so big, and the consequences are so disruptive.

“There are people who are highly, publicly concerned about the ethics and the cultural aspects of AI use and are very active in enumerating those risks. They are not wrong to do so. But I think it would be wrong if we were not to set that against the enormous potential benefits the technology offers.”

This story first appeared in InSCight 2024.