Are we letting computers make too many big decisions?

There are concerns New Zealand is letting computers make too many big decisions without checking if they're accurate.

Researchers at the University of Otago believe now is the time to act to protect against the dangers of artificial intelligence (AI).

"These tools, these algorithms are being used to make very important decisions - who gets to stay in the country, who gets to leave prison or stay in prison, children being taken from their parents," said Colin Gavaghan, associate professor of law.

Government Use of Artificial Intelligence in New Zealand, a new report led by the University of Otago and funded by the Law Foundation, found little transparency over the accuracy of Government algorithms, despite the use of AI increasing.

It recommends establishing an independent Government regulator.

"One of the things we are suggesting is there should be some kind of oversight body within the Government which can check Government algorithms for accuracy," said Gavaghan.

Systemic bias?

One thing the researchers want to prevent is algorithmic bias. Gavaghan said AI systems can discriminate, as the US has discovered: algorithms have been used in its justice system for years without proper testing of their accuracy.

"The COMPAS algorithm, for instance, has been widely criticised for overstating the risk of black prisoners reoffending, compared with their white counterparts - an outcome that can result in them being kept in prison for longer," the Law Foundation said in a statement.

"It's not so clear that everybody knows that about computer programmes," said Gavaghan. "In a sense a computer can't be prejudiced, but a computer is only as good as the information that's been fed into it."

Amy Fletcher, associate professor of political science at the University of Canterbury, has similar concerns.

"Effective Government use of AI could lead to more transparent, equitable, and efficient delivery of core services. However, without robust regulation and tech literacy across the public sector, we risk the reinforcement of bias, inequality, and systemic racism."

As does study co-author Ali Knott of the University of Otago's Department of Computer Science.

"There's a danger that other, innocent-looking factors - postcode for instance - can serve as proxies for things like race."

Pessimism

David Parry, head of computer science at AUT, holds a bleak view of the Government's ability to keep AI in check. He says the report understates the risk.

"Unfortunately most decision-makers have very little understanding of how these algorithms work or what the results actually mean. Bias is caused by data selection, the right to opt-out of data collection, existing bias in decision making and inappropriate choice of algorithm."

He says any potential regulator should operate "like a medicines agency", and require independent proof that any algorithms deployed by Government departments are free from bias.

"Such a regulator would benefit from responding to the very thoughtful and insightful views coming from Māori groups, for example... New Zealand has an exceptional opportunity to get this right and become a world-leader in the use, assessment and development of algorithmic approaches in Government if we are prepared to have a scientific and inclusive approach."

On the plus side, the report says New Zealand is better placed than the US because most of its AI tools have been built in-house rather than by third parties.

"That's a practice we strongly recommend our Government continues," said report co-author, University of Otago philosopher James Maclaurin.

More work is ahead - the researchers next plan to look at the impacts of AI on work and employment.

Newshub.