Artificial intelligence is already affecting elections

By Dan Holmes

May 20, 2024

Australian Electoral Commissioner Tom Rogers. (AAP Image/Lukas Coch)

While AI has the power to be destructive to individuals, it could unravel whole societies too, according to electoral commissioner Tom Rogers.

Speaking to a senate inquiry on Monday, he said artificial intelligence was already affecting elections around the world.

“Countries as diverse as Pakistan, the United States, Indonesia and India have all demonstrated significant and widespread examples of deceptive AI content,” he said.

“The AEC does not possess the legislative tools or internal technical capabilities to deter, detect, or adequately deal with false AI-generated content concerning the election process.

“What we’re concerned about is AI that misleads citizens about the act of voting … the truth of political statements needs to be lodged somewhere else.”

Artificial intelligence has the potential to be as transformative as the Industrial Revolution, and Australia is not ready, a Senate inquiry has heard.

The speed of the development of AI — particularly generative AI — has caught governments around the world flat-footed, and regulators are struggling to keep up with a technological realm they barely understand.

The proprietary nature of most AI models has exacerbated this challenge. When policymakers can’t see inside the black box, it is all but impossible for them to know what controls might be needed until people are actually harmed by the technology.

Because misogyny is real, this didn’t take long. Concerns about the generation and sharing of abusive images exploded across the internet when AI-generated pornography featuring Taylor Swift was widely shared. About a month later, the Select Committee on Adopting Artificial Intelligence was formed.

In its first hearing on May 20, the committee heard that the safeguards around the technology were not sufficient to protect citizens.

ANU’s recently minted vice-chancellor and futurist Genevieve Bell said the lack of basic understanding of what AI is was slowing attempts at regulation.

She said the social component of the rise of AI makes up a large part of the public’s response and needs to be taken more seriously.

“There’s a piece of all of this which is how people manage AI that’s more of a cultural phenomenon. The ways we think about it are often driven by your age,” she said.

“It’s driven by the science fiction we grow up with, which is in itself shaped by multiple other points of view.

“So helping our citizens understand that AI is not about to kill John Connor, it does not emerge in a single human form. In fact, it is infinitely more complicated.

“It usually means explaining to people the largest base of robots are vacuum cleaners, and the place AI is most likely to turn up in your life is the algorithm inside Netflix. It’s a very different reality to the one we sometimes talk about.”

While witnesses all raised concerns about the destructive potential of artificial intelligence, many were quick to remind the committee there was immense productive potential in artificial intelligence too.

The public service is, by and large, not comfortable adopting the technology without a greater understanding of its practical and ethical implications. Since the public service is the government’s key source of information and advice, that reluctance makes developing informed regulations a non-starter.

But work is underway. The Human Rights Commissioner and CSIRO are working to develop safeguard frameworks for individuals and society and make sure the technology is used for public good.

Healthcare is expected to be one of the greatest beneficiaries, as AI reaches maturity as a diagnostic tool.

Australia is also a signatory to the Bletchley Declaration on AI safety. It calls on signatories to take a proactive approach to both the development and regulation of artificial intelligence.

Rogers said the tech companies have been relatively cooperative with the AEC on AI safety, but less cooperative in other areas of moderation.

He declined to single out any particular tech company because he “didn’t want the Eye of Sauron to fall upon him”.

“When we reach out to them, they ordinarily answer. We’re meeting again with Meta later this week, and we’ve asked them for an overview of tools they’ve put in place,” he said.

“They were part of the 20 companies that signed the Munich Accord earlier this year, where they’ve pledged to combat disinformation, particularly this year.”

Responding to concerns raised by Senator David Pocock, he said there were many instances in which people were able to spread misinformation and the electoral commission couldn’t do anything about it.

Pocock expressed concern about the use of deepfakes in election campaigns — something that has already taken place in the United States, India and South Korea.

South Korea provides a particularly interesting case study, having introduced legislation banning AI-generated campaign material with a penalty of seven years in prison. Its election was nevertheless rife with AI-generated mis- and disinformation.

Rogers said it was unlikely there would be any changes to legislation before the next election that would enhance protections.

“AI is improving the quality of disinformation, making it more undetectable and spreading it more quickly through multiple channels.

“There’s misinformation all the time about elections and the AEC … whether that content comes from AI or other sources, we take that seriously.

“Of course, I’d prefer there’s no misleading information, but currently if it’s authorised, it’s lawful … ultimately it’s a matter for the Parliament,” he said.

“Ultimately, anything that provides extra transparency has got to be a good thing.”
