Avoiding bias in automated decision-making

AI has the potential to streamline government decision-making, bring positive outcomes to citizens faster and more efficiently, and allow public servants to focus on delivering services that benefit the community. 

But as robodebt demonstrated, and as similar schemes such as the Netherlands’ automated childcare benefits program have shown, handing decision-making to machines with little human oversight, and little thought given to the design and assumptions underpinning the system, can produce very real harm.

Dr Simon Longstaff, executive director at The Ethics Centre, says technological progress almost always outstrips legislative or regulatory responses. As a result, he says, it’s critical the public service has a culture and a clear sense of purpose that can be applied even in the absence of specific rules or legislation. 

Essentially, he says, culture trumps technology every time. With the right culture, problems with the implementation and application of new technologies can be headed off by people empowered to stand up and say that something doesn’t seem right.  

Robodebt may have been a failure of technology, but it was an even greater failure of culture and process within the APS and the then-government. And as the robodebt royal commission found, those cultural failures had a very real impact on many everyday Australians.

Combating algorithmic bias and discrimination

Existing AI technologies have a well-documented problem of delivering responses that are biased and discriminatory. They also tend to “hallucinate”, which is industry jargon for providing plausible-sounding answers that are nothing more than fabrications. In other words, they make things up.

These issues stem largely from the datasets used to train the AI. If the training data contains bias or discrimination, the AI’s responses will reflect it.
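To make that concrete, consider a deliberately simplified, hypothetical example (the data, groups and threshold below are invented for illustration, not drawn from any real system): a decision rule “trained” on historically skewed approvals simply reproduces that skew for new applicants, even when their circumstances are identical.

    # Hypothetical illustration: a decision rule "learned" from biased
    # historical outcomes reproduces that bias for new applicants.
    from collections import defaultdict

    # Historical decisions: identical financial profiles, but group B was
    # approved far less often in the past.
    history = [
        {"group": "A", "income": 52_000, "approved": True},
        {"group": "A", "income": 48_000, "approved": True},
        {"group": "A", "income": 50_000, "approved": False},
        {"group": "B", "income": 52_000, "approved": False},
        {"group": "B", "income": 48_000, "approved": False},
        {"group": "B", "income": 50_000, "approved": True},
    ]

    # "Training": learn each group's historical approval rate.
    totals, approvals = defaultdict(int), defaultdict(int)
    for record in history:
        totals[record["group"]] += 1
        approvals[record["group"]] += record["approved"]

    def approve(applicant: dict) -> bool:
        """Approve if the applicant's group was mostly approved in the past --
        the historical pattern, bias included, becomes the rule."""
        rate = approvals[applicant["group"]] / totals[applicant["group"]]
        return rate > 0.5

    # Two applicants with identical incomes receive different outcomes.
    print(approve({"group": "A", "income": 50_000}))  # True
    print(approve({"group": "B", "income": 50_000}))  # False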

The other problem AI has is that large language models – the sort used to create generative AI services such as ChatGPT and Google Gemini – are essentially “black boxes”. How they come up with the responses they generate is poorly understood and some researchers concede it may never be possible for humans to fully understand how these systems work.

For governments looking to use AI in decision-making, bias, discrimination, hallucinations and the black box problem are very real issues. 

Decision-making must be fair and based on reality. The process by which the decision is made must also be explainable, justifiable and easily understood. As it stands, AI today simply can’t reliably deliver on those requirements.

Dr Marc Cheong, senior lecturer in information systems (digital ethics) at the University of Melbourne, observes that despite these problems, AI and automated decision-making will continue to make inroads in government because they allow decisions to be made at scale.

They also create efficiencies and let public servants focus on core tasks, rather than mundane and repetitive ones.

Because of this, Cheong says it’s important for governments to implement these systems with human oversight. He points to robodebt as an example of a program that proper human oversight could have stopped before it went ahead.

“There should have been someone saying ‘Should we do this? Should we implement an averaging formula, and does it make sense?’” 

More importantly, he says, someone should have asked if what was being proposed passed “the pub test”.
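The royal commission found that income averaging sat at the core of robodebt: annual tax office income was smeared evenly across 26 fortnights and compared with what people actually earned fortnight to fortnight. The sketch below, using invented payment rates and thresholds rather than the real Centrelink rules, shows how that shortcut can manufacture an apparent overpayment for someone whose income was lumpy across the year.

    # Simplified, hypothetical figures -- not the actual Centrelink calculation.
    FORTNIGHTS = 26
    BASE_PAYMENT = 600        # assumed maximum fortnightly payment
    INCOME_FREE_AREA = 300    # assumed earnings allowed before reductions
    TAPER_RATE = 0.5          # assumed reduction per dollar earned above that amount

    def payment_for_fortnight(earnings: float) -> float:
        """Entitlement for one fortnight, based on what was actually earned then."""
        reduction = max(0.0, earnings - INCOME_FREE_AREA) * TAPER_RATE
        return max(0.0, BASE_PAYMENT - reduction)

    # Someone who earned nothing for half the year (and correctly received
    # payments), then worked full-time for the other half.
    actual_earnings = [0.0] * 13 + [2_000.0] * 13
    annual_income = sum(actual_earnings)  # 26,000

    # What they were genuinely entitled to, fortnight by fortnight.
    true_entitlement = sum(payment_for_fortnight(e) for e in actual_earnings)

    # The averaging shortcut: smear annual income evenly across the year.
    averaged = annual_income / FORTNIGHTS  # 1,000 per fortnight
    averaged_entitlement = payment_for_fortnight(averaged) * FORTNIGHTS

    print(f"Entitlement from actual earnings:    ${true_entitlement:,.0f}")    # $7,800
    print(f"Entitlement from averaged earnings:  ${averaged_entitlement:,.0f}")  # $6,500
    print(f"Apparent 'overpayment' from averaging: ${true_entitlement - averaged_entitlement:,.0f}")  # $1,300

With these made-up numbers, the person was genuinely entitled to $7,800 across the year, but the averaged calculation says only $6,500, so a $1,300 “overpayment” appears out of thin air. It is exactly the kind of result that should prompt someone to ask whether the formula makes sense.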

Humans must be involved in the design of the algorithm, Cheong notes, along with being intimately involved in the decisions the machine is making. “There needs to be someone saying: ‘This doesn’t look quite right’.”
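One way that kind of involvement can be built into a system, sketched below with entirely hypothetical thresholds, outcomes and field names, is a simple human-in-the-loop gate: the automated pipeline only issues a result directly when the model is confident and the stakes are low, and everything else is queued for a person to review.

    # Hypothetical sketch of a human-in-the-loop gate; thresholds, outcomes and
    # field names are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Decision:
        case_id: str
        outcome: str        # e.g. "raise_debt" or "no_action"
        confidence: float   # model-reported confidence, 0.0 to 1.0
        amount: float       # financial impact of the decision, in dollars

    CONFIDENCE_FLOOR = 0.9
    AMOUNT_CEILING = 500.0

    def route(decision: Decision) -> str:
        """Decide whether a result is issued automatically or queued for a person."""
        if decision.outcome != "no_action" and decision.amount > AMOUNT_CEILING:
            return "human_review"   # high-impact decisions always go to a person
        if decision.confidence < CONFIDENCE_FLOOR:
            return "human_review"   # the system isn't sure, so a person decides
        return "auto_issue"

    print(route(Decision("c1", "raise_debt", 0.95, 2_400.0)))  # human_review
    print(route(Decision("c2", "no_action", 0.97, 0.0)))       # auto_issue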

It comes back to the cultural and process safeguards espoused by Longstaff. “As you establish these core values and principles, then they are applicable to new uses and edge cases,” he says. “You must build discipline in at the earliest stage so that it can be monitored over time.

“This doesn’t mean you’ll end up with something bland or beige,” he continues. “It’s not the enemy of innovation, but your new technology must be explainable, so it stays within the guardrails society expects to be applied.”

The explainability problem

As AI creeps into government decision-making, one of the key challenges arising from the ‘black box’ nature of these systems is explainability.

Public servants using these systems must be able to explain and justify why a decision was made but, given the opaque nature of the way these AI systems work, finding the factual basis for a decision is a formidable problem.

“There’s an ongoing conversation among researchers about how we can make these AI systems more explainable,” says Dr Ehsan Nabavi, senior lecturer in technology and society at ANU.  “But there is a real question about whether this is actually possible.”

One way to work towards this goal is to ensure humans are involved in designing the algorithms being used, he says. He goes a step further, arguing that beyond human oversight and involvement, there must be community involvement and discussion about the role of AI in government.

Nabavi also says the way these systems are designed, built and used is very much a political problem rather than a technical one, because governments can take several different approaches to designing these systems to minimise bias and discrimination, and to putting them to work.

“You can take a technical approach, where you design better algorithms, or you can take a policy or regulatory stance,” he says. It’s a matter of priorities, he adds, while noting a purely technical approach may not work because, as we have seen, we don’t fully understand how these systems work.

For governments wanting to use AI in decision-making, the path forward is clear. As Longstaff noted, culture trumps technology. People and communities must be involved in the design and use of these tools, and public servants must have the power and institutional support to put their hand up when something doesn’t seem right.

 
