Using machine learning to predict dementia
10 December 2024
Scientists around the world have recognised the potential of AI to diagnose and predict a rising tide of dementia, and New Zealand is no exception: the number of New Zealanders living with the condition is forecast to double by 2050, with an associated healthcare cost of $6 billion.
Given the growing mountain of routinely collected health data, a transdisciplinary team from the Faculties of Science and Medical and Health Sciences is looking to develop machine learning models which Dr Daniel Wilson (Ngāpuhi, Ngāti Pikiao) says could cost-effectively analyse medical records and identify risk factors.
“The idea is that our team could use AI algorithms to look at that information and see whether there’s anything that might give a heads-up on whether somebody might have, or develop, dementia.”
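To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of risk-prediction model being described: a logistic regression trained on tabular record features. Everything in it is invented for illustration — the features (age, a hypertension flag), the synthetic data, and the weights bear no relation to the project's actual models or data.

```python
# Toy risk-prediction sketch: logistic regression by gradient descent,
# trained on SYNTHETIC "health record" features. Illustrative only.
import math
import random

random.seed(0)

def synth_record():
    """One synthetic record: [bias, scaled age, hypertension flag], label."""
    age = random.uniform(55, 90)
    hypertension = random.random() < 0.4
    # Invented generative rule, purely so the toy model has a pattern to find.
    logit = 0.15 * (age - 70) + (1.2 if hypertension else 0.0)
    label = 1 if random.random() < 1 / (1 + math.exp(-logit)) else 0
    return [1.0, (age - 70) / 10, 1.0 if hypertension else 0.0], label

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def train(records, lr=0.1, epochs=200):
    """Plain stochastic gradient descent for logistic regression."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in records:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            for i in range(len(w)):
                w[i] += lr * (y - p) * x[i]
    return w

data = [synth_record() for _ in range(500)]
w = train(data)

def predict(x):
    """Predicted risk probability for a feature vector."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

# An older hypertensive profile should score higher than a younger one.
high = predict([1.0, 1.5, 1.0])   # age ~85, hypertension
low = predict([1.0, -1.0, 0.0])   # age ~60, no hypertension
```

In a real setting the features would come from routinely collected records rather than a random generator, and model choice, validation, and the consent questions discussed below would all matter far more than the fitting step shown here.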
As a lecturer in the School of Computer Science, Daniel’s role in the three-year project is to ensure that different cultural perspectives are taken into account, especially given that dementia – or mate wareware – is increasing faster among Māori, Pacific and Asian populations.
“Māori are less likely to put their family member into assisted care, for instance. That also goes for Asian families,” says Daniel, “and it does result in more of a whānau-focused sort of understanding of mate wareware rather than an individual clinical focus.”
An anonymous online survey of people aged 55 years and over revealed that more than 80 percent felt comfortable or very comfortable with their data being used in various scenarios. However, it also highlighted the need to include different cultural perspectives – and particularly those of Māori.
“Cultural elements, spiritual elements, connectedness with whānau and the community is a much broader conception than the clinical,” says Daniel, “so that was a starting point for thinking about having more detailed interviews about perspectives on data use.”
A decision was made to approach a community-based support group to “let the Māori voice be heard in relation to health data”, and Daniel says a key consideration was how to overcome the tension between clinical research and Māori cultural norms.
“How might we engage in a way that wasn’t intrusive in the atmosphere that they had already created?”
The end result was a “wānanga-style” open forum where differing thoughts, opinions and experiences were discussed with those affected by dementia and their carers, and which generated a lot of discussion about consent – and trust – when it comes to the use of health data.
“There are obligations that go with the use of the data,” says Daniel, “because it’s not something that’s simply alienated from an individual by the person who’s doing the measuring. The conception is that there are still these obligations of appropriateness and tikanga and so forth.”
Understandings around the meaning of trust can also be very nuanced. While trust may be seen as fulfilling one’s obligations, Daniel says that in different cultural contexts there’s a question around which obligations count.
“So, is it individual virtues, or is it about relationships and making sure that there are flows and benefits going in both directions, for example, in terms of feeding back information?”
Another concern is around ensuring that there isn’t a massive power differential where people feel that they’re handing over everything to someone who has more power than them. “A lot of this was coming through, this sort of tension between having a relationship rather than a transaction, collective versus individual consent, these kinds of things.”
Nevertheless, he says there is a general recognition that there are good intentions around the use of health data that might help future generations.
“There’s very much this whānau focus of ‘we’re doing this for our mokopuna’. That’s a real motivator to make things better further down the line. But there is also this thought that health information is tapu, it ought not to be used willy-nilly for exploratory investigations.”
And Daniel says that raises a problem with current AI practices where data has been collected and stored without a specific purpose. “They’re trying to work out what it might be useful for because you don’t know until you look for patterns whether you’re going to find any patterns. So there’s a real tension there around the practicalities of consent.”
The movement of data between different organisations also carries different levels of comfort, particularly if it involves multinational companies that might profit from it.
“There’s all this wariness around how data is going to be used, and is it just going to be a form of exploitation, as opposed to maintaining a kind of a relationship. So, expectations of trustworthiness get weakened when you start talking more about government or international organisations.”
A case in point is the United States CLOUD Act (Clarifying Lawful Overseas Use of Data), which enables federal authorities to compel U.S.-based technology companies, like Microsoft for instance, to provide requested data regardless of where it is stored.
In the UK, concerns were also raised after Google obtained the confidential NHS health records of 1.6 million patients when it bought a business that was using the records for AI research. “Given the current drive for more data to create better AI tools, something like this is possible here, too,” says Daniel. “So we need to think carefully about how to proceed safely.”
Another key focus is AI ethics, and he’s keen to see data systems that work equitably – and which recognise that data is an extension of the individual and not just an abstract snapshot.
“There are lots of value judgements and assumptions that go into creating data. Like what is measured, how do you measure it? What trade-offs do you make, what other data do you link with, because data is the fuel for a lot of machine learning artificial intelligence.”
As a director of the recently launched Centre of Machine Learning for Social Good and part of a team involved in an MBIE funded Tikanga in Technology project, Daniel is a firm believer in community involvement to make AI systems safer and more culturally appropriate.
“If you’ve got something that’s going to impact on Māori, get Māori involved not just necessarily at an advisory or consultation level but right at the start when you’re working out what it is that you think you are doing so that you’re on the same page.”
As for the dementia study, which is being funded by the Health Research Council, he says there are a range of ethical considerations and discussions still to be had about the role of diagnostic models or support tools which might reduce costs and improve the quality of life for patients.
“It could give a heads-up to people that this is coming. But some people might not want to know that it’s coming. So, there’s a whole bunch of issues in terms of what this might look like in services because of these issues around the fact that there’s no cure at the moment for dementia.”
This story first appeared in InSCight 2024.