What Australia can learn from Finland’s AI disaster
Late last year, the Finnish government quietly announced the shelving of its AuroraAI initiative. Aurora’s demise ended an ambitious vision that was in many ways emblematic of one of the OECD’s most innovative civil services.
While the reasons Aurora was scuttled are not entirely clear, the data challenges it faced early on – challenges with parallels in those confronting AI giants like OpenAI and Meta – suggest that Australia has some important insights to take away as its own AI agenda begins ramping up.
Under Aurora, Finnish citizens would establish an AI-powered ‘digital twin’ of themselves, curating personal data they felt was relevant to a life event (think ‘getting married’ or ‘recovering from an injury’) that they were navigating.
This digital twin would then be analysed by Aurora, which would construct a tailored service pathway that cut across government agencies, private sector services and community sector supports.
The Ministry of Finance’s slick 2019 public pitch video tells the story of what this would have looked like for ordinary Finnish citizens. In the video, a baker dressed in a traditional white apron uses his mobile phone to set up an online AuroraAI profile and explore his retirement options, including how he would pass his bakery on to his children. Meanwhile, his daughter – a newly graduated baker herself – affectionately texts her father on her bus ride home, before opening AuroraAI to explore the same topic of family business generational change, but from her very different perspective and set of circumstances.
The implication was that Aurora would give them two unique experiences on a very similar topic tailored to their individual needs and characteristics.
Evolving a vision of AI-enabled service delivery and digital identity
Aurora was conceived in pre-LLM times. In 2017, Finland’s Ministry of Economic Affairs and Employment published a comprehensive vision for “turning Finland into a leading country in the application of artificial intelligence”, which included Aurora as a priority project. At that stage, the vision for Aurora was simpler – a chatbot comparable to Siri that could connect people to the right agency service and potentially automate very basic processes.
By 2020, as concerns started to rise about a new respiratory virus circulating around Asia, the Aurora vision had evolved. Aurora was shaping up to be a key digital plank in Finland’s “human-centric society”. More than a helpful chatbot, AuroraAI was described by one Ministry of Finance official as a “systemic transformation at a societal level”. Its scope had widened to include non-government services, and its functional ambition had grown. The AI would engage with a user to co-create a unique, tailored pathway that smoothly connected public, private and community sector services, and that nudged people towards helpful life choices across all the dimensions of wellbeing – health, social connection, education, environmental engagement and civic participation.
Underlying the AI would be DigiMe, an innovative identity management concept born from ethical imperatives like maximising privacy and empowering personal control of one’s data.
DigiMe would allow users to curate a temporary, situation-specific digital twin constructed from the data that they felt was relevant to the life event they were navigating. This tailored digital version of themselves would then be the ‘me’ that the AI, trained on a range of service operation and customer data, would seek to assist.
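The curation idea at the heart of DigiMe can be sketched in a few lines. This is a purely illustrative mock-up, not AuroraAI’s actual design or code: the full personal record stays with the user, and only the attributes they mark as relevant to a life event are copied into a temporary, situation-specific twin.

```python
# Illustrative sketch of the DigiMe concept (all names and fields are
# hypothetical, not taken from the AuroraAI system).

FULL_RECORD = {
    "name": "Aino",
    "age": 62,
    "municipality": "Tampere",
    "occupation": "baker",
    "business_owner": True,
    "health_conditions": ["hypertension"],  # sensitive; user may withhold
}

def build_digital_twin(record: dict, relevant_keys: set[str]) -> dict:
    """Return a temporary twin containing only user-curated attributes."""
    return {k: v for k, v in record.items() if k in relevant_keys}

# For a 'retirement / business succession' life event, the user chooses
# to share work-related attributes but withholds their health data.
twin = build_digital_twin(
    FULL_RECORD,
    relevant_keys={"age", "municipality", "occupation", "business_owner"},
)
print(twin)
```

The key design point is that the selection happens on the user’s side: the AI only ever sees the curated twin, never the full record.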
Thinking was also evolving about Aurora’s ability to automate government processes. In a panel discussion hosted by Finland’s Centre for Artificial Intelligence shortly after Aurora’s demise, one Ministry of Finance official reflected that Aurora could have streamlined services beyond just connecting citizens to them. Imagine, as an example, Aurora proactively letting you know that your passport would shortly expire and allowing you to initiate a renewal, all without you having to deal directly with the Ministry of Foreign Affairs.
Challenges on many fronts
It’s not completely clear why AuroraAI was ultimately shuttered. We know that, in early trials, the program had identified a number of legislative challenges to data sharing and aggregation, with no clear pathway to address them. These issues would be familiar to Australia, which acted to at least partially overcome similar barriers by passing the Data Availability and Transparency Act 2022. However, some of Aurora’s challenges extended well beyond a ‘DAT Act’ remit.
For Aurora to work, it needed training data to shape its AI model, and it needed to compile personal information from a wide range of sources to construct the user’s DigiMe. Some of this data was held across layers of government from the national to the municipal level, and potentially by private companies or community sector services. Unlocking the data was always going to be a challenge.
Perhaps even more challenging was the sheer volume of data AIs need for training. Consider that OpenAI’s training approach for ChatGPT involved scraping what one may only somewhat flippantly call ‘the whole damn internet’. After running short of training data to scrape for its latest GPT-4 model, OpenAI created a speech recognition tool called Whisper, which it used to transcribe reportedly over a million hours of YouTube videos to feed the AI training process, placing it in hot water with Google, YouTube’s owner.
Meanwhile, Meta’s adventures into the AI space have seen it use “almost every available English-language book, essay, poem and news article on the internet”, a New York Times report suggests based on leaked recordings of internal Meta meetings.
The ability to bring together training data from many sources across agencies and sectors is one of the great challenges of the AI age – even without the legislative constraints and (thankfully) high ethical and transparency bars faced by government agencies in liberal democracies.
Finland would have needed to tackle this meaningfully, albeit on a smaller scale, to accumulate enough data to generate the almost magical insight now emerging from AIs like GPT-4 – even in Aurora’s more specialised context.
Another challenge went to Aurora’s ethical context and social licence. Aurora’s scope as a societal intervention was ambitious, and there is strong evidence that ethics and community wellbeing were authentically at the core of the vision. Nevertheless, the concept is intrinsically fraught with ethics and trust challenges.
How would Aurora – as a life event journey navigator – make determinations about what would be most helpful to allow a person to achieve their life goals? Especially when these goals were not explicitly articulated by the user, but rather were inferred by the AI?
Aurora’s approach of constructing service pathways and nudges to achieve a wellbeing result for its users strayed into questions of human autonomy – who should be allowed to determine what’s best for a person? The design of the DigiMe digital twin, the model for curating user data, and the worldviews behind the weightings from which a tailored service and life journey would be derived all seemed to risk overreach.
And what if AuroraAI simply got it wrong because of unidentified data biases, AI hallucinations or the drift of social norms over time making the AI less and less connected to society’s values?
This is perhaps why one of Aurora’s few pilot projects was seen by some as underwhelming. In 2022, AuroraAI was integrated into Zekki, an established digital tool for young people that connected them with local services promoting youth wellbeing. Instead of highly tailored, situational service pathways for young people seeking assistance, the results were useful but generic – something a clever-but-not-intelligent algorithm could have delivered equally well.
Lessons for Australia’s public service
If Australia is to take away anything from AuroraAI, perhaps it’s the game-changing possibilities that AI and the data universe represent for society. Now is a time to think big about novel models of public service that harness AI, innovative models for digital identity and lots of data and processing grunt.
But the entry cost of shepherding these big-picture visions into reality is high – worthwhile, certainly, but high absolutely. The data challenges – from securing the right volumes of meaningful training data to addressing the risks of bias, unfairness, hallucination and model drift hiding in the dataset – need work. They need strategy, and they need coalitions across the different sectors that make up our society.
Perhaps even more complex is the challenge of social licence – determining how the vision needs to work to be accepted, or even embraced, by society at large (or what needs to happen to shift public trust in a positive direction over time).
AI, data, digital identity – and how these come together to the benefit of society – is partly a technical challenge. But at its core, it’s a co-design challenge, one that needs government to be in tune with the needs, aspirations, concerns and capabilities of the public and of sectoral players, at a time when trust is brittle and everything is changing quickly and continuously.
If we accept that AI will transform the business of public services this decade – and it will – then this challenge is not just important; as AuroraAI demonstrates, it’s urgent as well.