What does it look like to use all tools of statecraft in the information environment?

By Anastasia Kapetas

April 30, 2024

Governments around the West are struggling with the issue of how to combat the multitude of disinformation flowing from digital platforms. (kinwun/Adobe)

Last week’s government anger over X’s refusal to take down violent imagery of the Sydney church stabbing from its global network, coming off the back of Meta’s decision to exit Australia’s news media bargaining code, again brings the vexed issue of online harms into national focus.

It is not surprising that these recent events in Australia also attracted international news attention. Governments around the West are struggling with the central issue of how to combat the multitude of harms flowing from digital platforms.

This struggle continues despite more than a decade-and-a-half of alarms about the increasing flood of online disinformation and propaganda, the promotion of online hate and its links to radicalisation and violent extremism, deep fakes, identity theft and fraud, and child exploitation. These and other harms have had disastrous consequences for global and national security, as well as for the security of individual citizens.

But as AP4D’s new report, What does it look like to use all tools of statecraft in the information environment? argues, Australia, like other Western nations, has been doing some innovative policy work in this area, but these efforts have been reactive rather than strategic.

What is now needed is a national plan that commits to sustained policy reckoning with the information environment’s most difficult issues, with social media regulation and the active support of credible information at its centre.

As the report explains, negative trendlines in the information environment are only getting worse. Disinformation is becoming more firmly embedded in political cultures around the globe; the business model of credible information, including journalism and academic research, continues to struggle; and new generative AI technologies coming online with few filters are making it much easier for bad actors in the digital space to do bad things. At the same time, platforms have content moderation fatigue.

For many years democratic governments have been reluctant to be seen as heavy-handed in the information environment. The three main narratives pushed by digital platform companies — that regulation will kill innovation, trample free speech and allow strategic competitors to overtake the West in the tech space — have proved persuasive.

And many countries, including Australia, have siloed ways of thinking about threats in the information environment, such as cybersecurity, disinformation, social cohesion, foreign interference, data, privacy and criminal exploitation. Agencies do not naturally share information and analysis in a truly integrated way and the best current practice is often to coordinate through interdepartmental taskforces or similar mechanisms. These are usually built around a single issue and are non-enduring. Legislation and separation of powers often present barriers to whole-of-government action.

But the tide is running differently in 2024. Democratic countries are increasingly seeing threats in the information environment as existential and in the past 18 months have increased their regulatory ambitions.

For example, the Russian invasion of Ukraine, accompanied as it has been by a hybrid campaign of propaganda and disinformation against the West, has caused the EU to introduce much more stringent legislation aimed at getting social media companies to take far greater responsibility for the disinformation on their networks. And while the Biden administration is struggling to do the same with domestic enablers of disinformation, the forced sale of TikTok by its Chinese owner ByteDance garnered rare bipartisan support and was signed into law last week.

In its own efforts to regulate social media, Australia does not need to take an anti-technology stance. Rather than being merely reactive, it can, as the AP4D report argues, have a positive dialogue with citizens about the kind of information space Australia wants: one that works to strengthen its democratic society, economic prosperity, security and rule of law. Australia can affirm that there have been huge innovations on the technical side, which now need to be matched by social and legal innovations — and aim to become a leader in designing the policy architecture that puts the democratic public interest at the centre of the information environment.

But the report also makes clear that dealing with online harms is not a solo play for Australia. Given the globally networked nature of the information environment, and the power and concentration of companies that control the major global information platforms, a multilateral approach to the building of new norms in the digital AI age is also becoming an urgent priority.

