Robodebt was technology “beta tested” on the most vulnerable citizens
The robodebt program was an example of government ‘beta testing’ algorithms on its most vulnerable citizens, one that failed to properly account for the fundamental principles of accuracy, accountability and fairness, according to the former Australian Human Rights Commissioner.
Ed Santow, who spent years warning of the dangers of poorly implemented technology as Australia’s Human Rights Commissioner, said on Wednesday that Australia’s lack of strategic artificial intelligence (AI) skills was “incredibly dangerous” and could lead to more programs like the “robodebt disaster”.
The online compliance intervention system, better known as robodebt, was launched by the Coalition government in 2016. It used an algorithm to average a welfare recipient’s annual earnings from Australian Taxation Office (ATO) data and compare the result with the income the recipient had reported to Centrelink.
The scheme raised $1.7 billion in debts against 443,000 people, and regularly matched data incorrectly, resulting in debt notices that were inaccurate or for debts that did not exist.
In approving a $112 million settlement for victims of the program in June, a Federal Court judge called the program “illegal” and a “shameful chapter” in Australian social security history.
Mr Santow was a sharp critic of robodebt during his five-year term as Australia’s Human Rights Commissioner, which ended in July.
Now an industry professor of responsible technology at the University of Technology Sydney, Mr Santow said robodebt was an example of an AI-based technology or algorithm being deployed by government without proper ethics, testing or remedies for those affected.
“[AI] is often a fairly experimental technology,” Mr Santow said during a University of Technology Sydney (UTS) webinar on responsible AI on Wednesday.
“Historically, not just in Australia, but many countries have had a very poor record when it comes to beta testing new technology on the most vulnerable citizens in our community.
“And, frankly, that’s what seems to have happened with robodebt.”
Mr Santow said the automated technology shouldn’t have been tested on vulnerable citizens without strict safeguards, and potentially not at all.
Any deployment of AI or algorithm-based technology must meet the fundamental principles of accuracy, fairness and accountability in order to minimise risk, Mr Santow explained.
But robodebt had not adequately addressed any of them, he said.
Accuracy is essential when governments make decisions using technology, Mr Santow said, but the robodebt program was found to have very high error rates, including raising debts against people who owed the government nothing.
The program also lacked simple redress mechanisms for its mistakes, with victims forced to “untie the Gordian knot” with lawyers to obtain any accountability, the former human rights commissioner said.
Mr Santow also questioned the overall fairness of a program that pursued relatively small debts from many years earlier.
“Sometimes when you’re clawing back money that someone was perhaps overpaid, $100 five, six or seven years ago, it might not really be fair to claim that money back.
“So you have to have a big-picture view of the system you’re building and make sure it really works fairly for people.”
Mr Santow said Australia needs to learn the lessons of robodebt, but avoiding similar problems will require more people with ‘strategic’ AI skills.
CSIRO has predicted that, on current trends, Australia will fall 70,000 graduates short of what is needed to meet demand for technical AI skills such as data science by 2030. Mr Santow said this technical skills challenge is well understood and several steps are being taken to address it.
But another “incredibly dangerous” skills gap, in strategic AI expertise, is widening, he said.
Mr Santow pointed to research showing organisations are under pressure to deploy AI technologies, but most have little idea how to do so properly. That pressure risks leading to irresponsible and dangerous uses of technology like robodebt, he said.
The former human rights commissioner joined UTS this month to lead a responsible technology initiative that will educate businesses and government agencies, from the executive level down, about the risks and opportunities of AI.
“We really want to be at the forefront of building Australia’s AI capacity so that businesses and government agencies can use artificial intelligence intelligently, responsibly and in accordance with our liberal democratic values. And that means respecting people’s basic human rights.”
Earlier this week, Perth lawyer Lorraine Finlay was named Australia’s next Human Rights Commissioner by the Coalition government. Ms Finlay has rarely engaged with technology-related rights issues and has criticised the Australian Human Rights Commission in the past.
Crisis assistance is available from Lifeline on 13 11 14.
Do you know more? Contact James Riley by email.