Promoted

Use AI to power up – but ignore privacy risks at your peril



There is little doubt that commercially available artificial intelligence (AI) tools are transforming the way that people work. They can make tasks quicker and cheaper, and draw from more extensive information sources to give more accurate and detailed results and insights.

Many Australian public servants also recognise the benefits that can flow from use of these tools, including increased efficiency, enhanced decision-making, improved customer experience and more efficient resource management.

Research by the Department of the Prime Minister and Cabinet last year found that the Australian community expects a higher standard of care, more tailored and personalised services, and greater convenience and efficiency when accessing Australian Government services.

AI tools might be a way of achieving this, but agencies also need to consider the risks of using AI, including potential privacy risks and the need to comply with legal privacy obligations and meet the Australian public’s expectations.

Many people in the community are cautious about the use of AI technology. In the OAIC’s most recent Community Attitudes to Privacy Survey, 43% of respondents were very concerned about their personal information being used by AI technology, while 71% wanted to be told if AI was being used to handle their personal information. Almost everyone (96%) wanted conditions in place before AI was used to make a decision that might affect them: 70% did not consider it reasonable for AI to be used at all, and more than half (55%) wanted the accuracy of AI-generated results to be validated by a human.

Unlike some other jurisdictions, Australia does not currently have legislation that specifically regulates AI use by Australian Government agencies. However, Data and Digital Ministers across Australia are developing an initial framework for assuring government use of AI, which is expected to align with AI ethical principles and include common assurance processes. Any legislative outcomes from this work that may provide legal authority for the handling of personal information are still some time away.

Privacy risks for agencies

Agencies wanting to use AI tools – or to allow contractors to use those tools – need to make sure they have considered privacy risks across a range of scenarios. These risks could include a failure to be sufficiently transparent about the use of AI tools, or to ensure that people are aware of how their personal information will be handled by those tools.

Other risks might include a failure to ensure that personal information collected through AI tools – including any information collected “by creation” (which can occur when data is generated or combined by the tools) – is reasonably necessary for the agency’s functions or activities, and meets the additional requirements that apply to sensitive information.

Risks associated with disclosing personal information to AI tools, and the subsequent use of that information by the tools (including potentially making it available to other users), also need to be considered. Agencies will need to ensure that robust technical, risk and governance controls are in place to protect any personal information involved.

AI tools can inherit biases, factual errors and objectionable content from their training data, so the accuracy and objectivity of their outputs can be unreliable. Other data quality risks can come from AI tools producing ‘hallucinations’ (i.e. simply making up information) and other errors due to the way their algorithms operate.

Considerations for APS staff

APS staff who are considering using an AI tool should understand the tool’s data flows and ensure that planned uses are clearly documented and consistent with the ethical principles relevant to AI. Conducting a privacy impact assessment (PIA) will also be an important step. The Maddocks team is currently working with various agencies to undertake a ‘foundational PIA’ that considers the use of AI tools generally, helping them identify the right privacy guardrails to apply to specific AI uses going forward.

Approaches such as these will enable the APS to ‘power up’ its service delivery using AI, with the right legal safeguards in place.
