Digital transformation remains confusing and challenging for public servants

By Dan Holmes

January 18, 2024

Parliament House, Canberra. (frédéric/Adobe)

Public servants are enthusiastic about new technology in the workplace even though it has yet to make their own jobs noticeably easier, according to a recent survey.

A survey of 425 Australian government workers undertaken by tech company Appian found 63% of respondents believed digital transformation of government services had made them more accessible.

However, only 11% said they had access to all the information that was needed to perform their jobs.

Surveyed public servants were also sceptical that digitisation had made their jobs any easier. Nine in 10 reported their daily workloads had either increased (47%) or remained at the same level (43%) since the introduction of new digital processes.

Ganna Pogrebna, executive director of the Artificial Intelligence and Cyber Futures Institute (AICF) at Charles Sturt University, said digitisation and the increasing use of technology present unrealised opportunities for the public service.

“We’re looking at whether we can cut down unnecessary work, like summarising information,” Pogrebna said.

“We don’t have special machines we go to for advice … but there are a lot of data sources available to public sector workers.

“This could be extremely valuable for public sector decision making.”

Appian Asia Pacific & Japan Area vice president Luke Thomas said data silos were holding back the effectiveness of the public service as a whole.

“Without a unified view of data, government workers can struggle to make informed decisions, negatively impacting their efficiency and effectiveness,” he said.

“These silos create barriers to collaboration, preventing different departments and agencies from working together seamlessly.

“Breaking down these silos is essential to improve public service delivery, enhance collaboration, and ultimately serve the community better.”

While artificial intelligence (AI) has been the most discussed element of digital transformation over the past few years, only 14% of public servants said they were confident using AI tools for their work.

Seventy-four percent said they rarely or never used AI in their work.

Pogrebna said this likely reflects a misunderstanding within the public service of what AI actually is.

While generative AI tools like ChatGPT and DALL·E have received widespread media coverage over the past year, algorithms that could be described as AI are used in everything from social media feeds to email filters.

“When we mention AI, many people think of Terminator and Skynet. But every time you do a Google search, there are hundreds of algorithms at play,” said Pogrebna.

“AI is everywhere — it’s a mainstream technology. You can’t avoid it.

“When people say they don’t use AI, they mean they don’t use ChatGPT.”
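
As a rough illustration of the kind of everyday “algorithm at play” Pogrebna describes, the short Python sketch below builds a toy naive-Bayes-style classifier of the sort that sits behind an ordinary email spam filter. The messages, words and decision rule are purely illustrative and are not drawn from the survey or any government system.

from collections import Counter
import math

# Tiny, made-up training sets: a few "spam" and "ham" (legitimate) messages.
spam = ["win a free prize now", "free money claim your prize"]
ham = ["meeting agenda attached", "please review the project report"]

def word_counts(messages):
    counts = Counter()
    for message in messages:
        counts.update(message.split())
    return counts

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(message, counts):
    total = sum(counts.values())
    # Laplace smoothing so unseen words do not zero out the score.
    return sum(
        math.log((counts[word] + 1) / (total + len(vocab)))
        for word in message.split()
    )

def classify(message):
    # Class priors are equal here (two examples each), so they cancel out.
    if log_likelihood(message, spam_counts) > log_likelihood(message, ham_counts):
        return "spam"
    return "ham"

print(classify("claim your free prize"))   # spam
print(classify("project meeting report"))  # ham

Even at this tiny scale, the filter learns from examples rather than following hand-written rules, which is what places it under the broad umbrella of AI that Pogrebna refers to.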

These benefits come with risks that can be hard to quantify in a fast-moving field where new technologies are rarely rigorously tested before public release.

Algorithmic accountability has been a running issue for self-driving car manufacturers and even within the Australian Public Service (APS) itself.

The government set up a taskforce last year to develop regulations around the responsible use of AI.

In its interim response to the taskforce’s investigation, released on January 17, the Department of Industry, Science and Resources wrote that “mandatory guardrails” around AI were being considered.

These include greater testing and transparency from companies, and clear lines of accountability around the development and use of algorithms in the workplace.

Pogrebna said the ubiquity of algorithms in everyday life has rendered many common uses of AI in the workplace invisible to workers.

She said that despite the clear benefits, workers need to stay aware of the risks these technologies pose now and into the future.

The neural networks that underpin modern AI are built by humans and trained on human-generated data; rather than challenging preconceptions with new information, they may echo and reinforce the biases of their creators and users.
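
Pogrebna’s point about inherited bias can be made concrete with a small, hypothetical sketch. The toy “model” below is not a neural network, just a frequency table of invented historical decisions, but it shows the failure mode she describes: a system trained only on skewed past outcomes will reproduce that skew rather than correct it.

from collections import defaultdict

# Synthetic past decisions: group "A" was approved far more often than
# group "B", regardless of merit. Both the groups and the outcomes are
# invented for illustration.
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 20 + [("B", False)] * 80)

# "Training" here is just memorising the historical approval rate per group.
approvals = defaultdict(int)
seen = defaultdict(int)
for group, outcome in history:
    seen[group] += 1
    approvals[group] += outcome

def recommend(group):
    # The model recommends approval when the historical rate exceeds 50%,
    # so it echoes the skew in its training data rather than correcting it.
    return approvals[group] / seen[group] > 0.5

print(recommend("A"))  # True: repeats the historical favouritism
print(recommend("B"))  # False: repeats the historical disadvantage

A real neural network trained on similarly skewed data would typically learn the same pattern, only in a far less transparent form.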

“When we talk about the risks, obviously independent decision-making is a big one. If you think about leadership in public service, you’re no longer just managing human teams, you’re managing human and machine teams,” she said.

“We shouldn’t just blindly trust algorithms. Even the people who are making them don’t understand how they work. They’re black boxes.

“Education is extremely important, especially for risks, and understanding how they work — neural networks, the problems, which parts of the algorithm are not visible or explicable by humans.”

More information about the benefits and risks of digital transformation to the public service is available at AICF’s Trailblazer Academy.


READ MORE:

AI promises boost in COVID-19 detection via chest X-ray imaging
