AI assessment tool launched by human rights commissioner

By Dan Holmes

February 16, 2024

Human rights commissioner Lorraine Finlay. (AAP Image/Mick Tsikas)

Human rights commissioner Lorraine Finlay has launched a human rights impact assessment tool for artificial intelligence-informed decision-making systems in banking.

Tech experts have become increasingly alarmed about the societal effects of AI decision-making and mass data collection.

AI is also a technology with the potential to automate so many routine tasks that a Jetsons-esque one-hour work week might not be as absurd as it once sounded.

Speaking at a CSIRO event, Finlay said it was about getting the balance right. She said she wanted to see businesses remain aware of the risks of AI while taking advantage of the opportunities it offers.

“When you start to talk about doing human rights assessments in business, that can seem like a really big step to have to take — something that’s completely overwhelming,” she said.

“This tool is designed to make human rights tangible, and hopefully give businesses the confidence to take that first step.

“We hope this tool is a really valuable and practical resource, and we hope it contributes to the continuing and really important conversation in Australia about ensuring AI is being used in responsible ways.”

The Australian Responsible AI Index found last year that 82% of businesses believed they were practising AI responsibly, but less than 24% had actual measures in place to ensure they were aligned with responsible AI practices.

Part of the problem is that most people think AI is limited to generative AI tools like ChatGPT and DALL·E. In reality, everyday tools like email filters may use AI to determine what is important, what can wait, and what goes to spam.
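To make that concrete, here is a minimal sketch of the kind of model an email filter might use: a naive Bayes text classifier built with scikit-learn. The example emails, labels, and the `filter_model` name are invented for illustration, not drawn from any product mentioned in this article.

```python
# A minimal, hypothetical spam filter: a naive Bayes text classifier.
# All emails, labels, and names here are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Meeting moved to 3pm, agenda attached",
    "Your February invoice is ready",
    "WIN a FREE cruise!!! Click now",
    "Cheap meds, no prescription needed",
]
labels = ["inbox", "inbox", "spam", "spam"]

# Bag-of-words features feeding a multinomial naive Bayes classifier.
filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(emails, labels)

# Likely classified as spam, given the overlap with the spam examples.
print(filter_model.predict(["Click now to WIN a FREE prize"]))
```

Nothing about a model like this looks like “generative AI”, yet it is quietly making automated decisions about what a person sees.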

The human rights impact assessment (HRIA) tool seeks to make it easier for banks to create ethics frameworks around their use of AI to ensure it’s being used for good instead of ill.

While the HRIA tool does contain some specific recommendations, it has been written more as a conversation starter that helps businesses ask the right questions and set policies according to their own risk tolerances.

Non-negotiable elements include human oversight of algorithmic decision-making, so there are clear lines of accountability.

The tool has been put together based on the government’s AI ethics principles, established in June last year. Central to this is that AI should “benefit individuals, society and the environment”.

Since then, the government has worked with the AI Safety Centre to convene an expert working group tasked with developing a clear set of safety standards.

Finlay said this work was being guided by the government’s 2021 AI discussion paper and the hundreds of submissions received during the consultation period.

“The starting point in that report, which guides all of our work in that area, is that technology is essential but it has to be fair,” she said.

“We’re living in a time of unprecedented technological innovation and transformation, and we need to seize the opportunity that presents.

“We also need to recognise the significant risk these technologies pose to our human rights, and the serious harm that can be caused to individuals.”

The final report that followed the AI discussion paper made 38 recommendations, setting out a roadmap for the responsible use of AI in the current environment.

Recommendation 15 explicitly calls on businesses to apply a human rights lens to their use of automation and AI, examining proposed uses for safety and ethics before they are implemented.

Finlay said the findings of the report have only become more critical as the use of AI erupts across the country.

Achieving this lofty goal needs private sector buy-in for the development and implementation of guidelines. In the case of the banking tool, NAB worked with the Human Rights Commission and CSIRO to create the HRIA.

“These tools help measure the risk to human rights posed by business systems and activities. They ensure measures are put in place to address those risks,” Finlay said.

“Our hope is tools like this will help identify and address human rights issues at the earliest stages of the design and development of AI-informed decision-making systems.”

The introduction of equal-treatment principles into banking could affect the industry well beyond the obvious implications for privacy.

Discrimination is essential to the central business of consumer banks — deciding who does or doesn’t get a loan.

While this discrimination is explicitly on the basis of wealth, higher levels of disadvantage in culturally and linguistically diverse (CALD) and Indigenous communities mean the consequence has historically been that the more axes of disadvantage someone has, the less likely they are to be granted a loan.

As part of the impact assessment tool, banks are asked to consider the individual and group effects of algorithmic bias so everyone gets a fair go. This is backed by evidence suggesting that, if well designed, AI could effectively eliminate discrimination in banking.
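The tool itself is not quoted in detail, so as a hedged sketch of what a “group effect” check might look like, the snippet below computes the gap in approval rates between two groups, a standard demographic parity difference, on invented loan decisions. The group names and data are placeholders, not anything from the HRIA.

```python
# Hypothetical group-level bias check: the gap in loan approval rates
# between demographic groups (a demographic parity difference).
# The decisions below are invented; group names are placeholders.
from collections import defaultdict

decisions = [  # (group, approved) pairs from a hypothetical model
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, approved in decisions:
    counts[group][0] += int(approved)
    counts[group][1] += 1

rates = {g: approvals / total for g, (approvals, total) in counts.items()}
gap = max(rates.values()) - min(rates.values())
print(rates)
print(f"approval-rate gap: {gap:.2f}")  # a large gap is a prompt for review
```

A large gap here is a flag for human review rather than proof of unlawful discrimination, which is consistent with the tool’s conversation-starter framing.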

NAB’s head of privacy and data Jade Haar said data can be a breeding ground for inequality, and that it is important to weed out biases at the first opportunity.

“One of the things that comes to mind is historically, gender was very underrepresented. Females were just not represented in the credit data pools, and that’s something we’ve had to fix and be aware of.

“For testing, we’re asking if we have the right kinds of people involved … it’s important when you’re testing your AI that you’ve got a diverse set of views and experiences. That makes sure it isn’t just focused on accuracy, but other angles as well.

“It’s not just saying ‘mitigation’ — it’s saying, ‘Is there something else we can do here to just completely avoid it’?”
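The article does not specify NAB’s test suite, but one standard way to look beyond accuracy in the way Haar describes is to disaggregate results by group. The sketch below, on invented labels and predictions, shows a headline accuracy that hides a much worse error rate for one group.

```python
# Hypothetical disaggregated evaluation: headline accuracy can look
# acceptable while one group bears most of the errors.
# All labels and predictions below are invented.
records = [  # (group, true_label, predicted_label)
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 1),
]

overall = sum(t == p for _, t, p in records) / len(records)
print(f"overall accuracy: {overall:.2f}")  # 0.62

for g in sorted({grp for grp, _, _ in records}):
    subset = [(t, p) for grp, t, p in records if grp == g]
    acc = sum(t == p for t, p in subset) / len(subset)
    print(f"{g} accuracy: {acc:.2f}")  # group_a 1.00, group_b 0.25
```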


