Partner Content

How next-gen AI and analytics are accelerating citizen service outcomes

By Brad Howarth

May 23, 2023

Recent breakthroughs in artificial intelligence have seen concepts that were once the realm of science fiction catapulted into people’s everyday experiences.

Thanks to generative AI tools such as ChatGPT, DALL-E and others, the ability to write complex content or create art using simple plain English commands is now within most people’s reach. 

This same technology can be applied to a wide variety of professional use cases, from generating website graphics and blog posts to writing instruction manuals and even creating basic software code.

But these new capabilities have also brought with them a swag of ethical and legal considerations, many of which arise from questions regarding the quality and provenance of the datasets used to train the AI models.

For Australia’s public sector leaders, who face constant pressure to deliver better digital services with the same or fewer resources, the key questions regarding AI today are not only what these tools make possible, but whether the risks of pursuing them outweigh the potential upsides.

Doing more with less

According to IBM’s Senior Data Science Specialist, Nicholas Renotte, interest in these tools is being driven by the desire to achieve better digital outcomes faster.

“People are definitely seeing efficiencies by leveraging these technologies,” Renotte says. “But there is always this question of reputational risk and IT risk when trying to incorporate them. Everyone is really interested in this, but a lot of people are afraid to use it in a corporate setting.”

IBM’s response has been to work with customers to find safe and secure ways to bring AI and automation into their operations – something the company has come to refer to as intelligent automation.  

Renotte says these considerations have been paramount for IBM’s engagements with businesses in highly regulated industries or government departments, and have led to the creation of a series of tools that automate business processes ranging from data management and insight generation through to workflows and event systems performance management. Many of these are now being used to streamline citizen services, detect anomalies such as fraudulent claims, and improve processes for knowledge workers.

Importantly, Renotte says these tools have been developed from the ground up to consider the security and ethical needs of enterprise organisations and government agencies.

“Intelligent automation refers to a powerful and evolving set of technologies for automating knowledge work, and augmenting the work of human knowledge workers,” Renotte says. “Everything has been built with that in mind, so that this technology can be deployed in a secure way that still generates efficiencies.”

Spreading intelligence

Renotte says IBM’s goal is to release technology that makes people more productive and systems more resilient – helping organisations draw insights from data faster, improve straight-through processing, and optimise operations, all of which combines to accelerate business outcomes.

Importantly, he says much of the focus when developing these tools has been to make them accessible to everyone – not just data professionals.

“A big part of what we try to do is help the citizen analyst make the most of their data whilst closing the skills gap,” Renotte says. “This is powered using IBM Cloud Pak for Data and IBM Cloud Pak for Business Automation. Using these tools, teams can access their data, build machine learning models and streamline processes securely and with ease.”

IBM’s service offering spans a range of capabilities, from broad automation through to no code/low code development environments with visual design interfaces that allow non-technical workers to quickly create new processes and outcomes.

The company has also developed a secure framework for training Large Language Models (LLMs) similar to those made prominent by ChatGPT, but for individual datasets. 

In one instance, IBM has begun working with NASA to create a set of foundation models that will power a new AI capability for analysing geospatial satellite data in support of climate science. The project will train an IBM geospatial intelligence foundation model on NASA’s Harmonized Landsat Sentinel-2 (HLS) dataset, while another model is expected to be a large language model based on Earth science literature. 

Both models will be open and available to the entire science community, with the goal being to create a general-purpose system that can support multiple use cases.

Renotte says the NASA example holds a number of lessons for other agencies wanting to harness LLMs and other tools.

“Harnessing the power of data is not a one-time process,” he says. “This is why it is so important to have workflows in place that allow teams to collect, organise, analyse and infuse data into their existing work.”

Better data means better outcomes

Renotte says that while much media attention has been directed towards generative AI tools, IBM has also been working to advance the development of numerous other technologies for AI and automation.

These same concepts are also manifesting at the infrastructure level in the form of tools for observability. Renotte points to IBM Turbonomic, which can help IT systems managers quickly define relationships between hardware, virtualised environments, and complete application stacks, and then improve performance and reduce costs by automatically adjusting hardware utilisation for best performance and efficiency.

“We are finding with public sector clients in particular that they simply can’t continue to deliver greater value with their existing resources, but at the same time, that acquiring more people is also difficult,” Renotte says. “So we are seeing great interest in any secure and tested solutions that enable greater efficiency through automation.”

According to Renotte, the recent IRAP certification of IBM’s cloud service is also serving to bring comfort to government clients. 

“It always comes down to three things – safety, security, and trust – particularly in relation to AI,” Renotte says. “And all of our machine learning and AI capabilities have those attributes baked in.”

At its annual Think conference, IBM announced IBM watsonx – a new generative AI and data platform designed to help enterprises scale and accelerate the impact of advanced AI with trusted data. Businesses need AI that is accurate, scalable, and adaptable, and with IBM watsonx, clients can quickly train and deploy custom AI capabilities across their entire business while retaining full control of their data.

And while some media commentary has focused on the potential for these tools to replace human workers, Renotte says their best deployment is in scenarios where they are augmenting or assisting human workers, rather than replacing them.

“Data science is a team sport,” Renotte says. “But really, making the most of data throughout the organisation is something that everyone can do, when given the right tools.”
