Thanks to Hollywood, the first thing people tend to think of when they hear “artificial intelligence” is a dystopian future where humanity is at war with its electronic overlords – Terminator, Ex Machina and I, Robot come to mind.
“If you ask a random stranger what AI is, they’re likely to talk about robots that take over the world,” Peter van der Putten, Director of the AI Lab at Pegasystems, explained to The Mandarin.
In his role, van der Putten determines how AI can drive change and impact business, arts, science and more in a responsible, customer-centric and ethical manner. “Responsible and trustworthy AI can be used to have a meaningful impact, improve the lives of citizens, customers, and businesses,” he said.
With oversight, regulations, responsible data use and the right tools that enable business logic to be coded simply, AI can be fair and free from bias, making the future much brighter than Hollywood suggests.
And with low-code AI and workflow platforms like Pega, Australians in the public and private sectors can significantly improve their business outcomes.
Using low-code simplifies AI
Pega platforms are low-code, providing a visual approach to creating AI-driven experiences and applications. This simplifies the process of building AI models and logic, and it allows IT and business teams to work together so that when solutions are built, there’s a much greater degree of trust and understanding.
“Programming languages only speak to programmers,” van der Putten said. “A common low-code language, combined with education, can speak to anyone and allows for much better alignment between business and IT.” And this increases the chances of developing an outcome that is useful, ethical and will be adopted throughout the organisation.
In addition, low-code AI can be supported by a less technical workforce. In Australia’s public service, for example, digital skills and resources are limited relative to the volume of work expected.
While governments can continue to train and invest, low-code also unlocks a new stream of people – those yet to start digital careers – to be involved in the creation of new capability.
And in “unlocking the full brainpower of potential,” van der Putten said there are enormous opportunities for improving service delivery for the benefit of citizens.
But he also said that low-code does not mean low security, or low ethics.
“While you need to have a platform that is user friendly so that the business can create and constantly innovate applications, there still needs to be enterprise strength behind it so that you can safeguard security and privacy – things that may not be front of mind for business.”
This means the applications need to provide the “guardrails” that allow organisations to focus on achieving their business objectives – and organisations may need to overcome internal and external fears of AI to realise its potential.
Overcoming the fear of AI means tackling it head on
“The fears of AI, generated by Hollywood, play into our fear of losing control and not being in charge,” van der Putten explained. “It’s important to address these fears. Across AI globally, we see that regulation is really catching up in a range of areas including privacy, data sharing, transparency, fairness and those types of things. The privacy regulations introduced in the European Union’s General Data Protection Regulation are a good example. But very similar regulation is now being made for AI, to ensure it is in the interest of the citizen, is transparent, and is fair. In that sense, anything you technically could do doesn’t mean you should do it.”
But there is more that organisations using AI to support business decisions can do to make sure it benefits the customer or the citizen – not just the company or government. Addressing issues around privacy is critical, but the use of AI should also be supported with insights and explanations of how AI is used in decision-making, with the aim of ensuring decisions are fair and free from bias.
“We need to make sure it’s transparent and that it is easy to understand and communicate how automated decisions are taken,” van der Putten said.
“Sometimes AI industry people focus way too much on the machine learning and explaining black box models. But when there’s an automated decision, it will also be driven to a large extent by business rules, which should be included when automatically explaining the decisions.”
“With the Pega platform, customers can set boundaries defining which bias would no longer be acceptable, apply existing decision logic, and simulate decision outcomes. This allows them to test for bias in both predictive models and logic – and then adapt the logic to add ethical constraints and reduce bias to within acceptable limits.”
“Decisions need to be fair, whether they’re taken by humans or machines. It’s important that you measure and simulate how fair these decisions are. Fortunately, we can do that with machines – and, if properly captured, back-test the human decisions as well.”
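Pega’s actual bias-testing tooling is proprietary, but the general idea van der Putten describes – simulating decision outcomes from a mix of business rules and model scores, then measuring how approval rates differ between groups – can be sketched in a few lines. Everything below is a hypothetical illustration: the decision logic, field names, and threshold are assumptions, not Pega’s implementation.

```python
# Illustrative sketch only: simulate decisions, then measure fairness as the
# gap in approval rates between groups (a simple demographic-parity check).
# All names and rules here are hypothetical, not Pega's actual logic.

def approve(record):
    """Toy decision: a business rule combined with a model score threshold."""
    if record["age"] < 18:           # business rule: minors are ineligible
        return False
    return record["score"] >= 0.5    # model-driven threshold

def parity_gap(records, group_key):
    """Largest difference in approval rates across groups."""
    counts = {}  # group -> (approved, total)
    for r in records:
        approved, total = counts.get(r[group_key], (0, 0))
        counts[r[group_key]] = (approved + int(approve(r)), total + 1)
    rates = [a / t for a, t in counts.values()]
    return max(rates) - min(rates)

records = [
    {"age": 30, "score": 0.7, "group": "A"},
    {"age": 45, "score": 0.4, "group": "A"},
    {"age": 29, "score": 0.8, "group": "B"},
    {"age": 52, "score": 0.6, "group": "B"},
]

print(f"demographic parity gap: {parity_gap(records, 'group'):.2f}")
```

In a real setting the “acceptable limit” van der Putten mentions would be a chosen threshold on a metric like this: if the gap exceeds it, the logic or model is adjusted and the simulation re-run.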
Working alongside decision-makers in the public sector, AI can be used as part of a “learning loop” for continuous improvement.
Using Pega Customer Decision Hub™, CBA created the Customer Engagement Engine, which intelligently suggests and personalises the next best conversation to have with each customer, whether they are in the branch, on the phone, online, or on a mobile device.
Taking it one challenge at a time
While the tools and frameworks exist to help organisations, including Australia’s public sector, on their AI journey, delivering faster and better outcomes for citizens, getting started can be a challenge.
For van der Putten, the first steps should target specific challenges that automation can improve.
The private sector offers inspiration. At the Commonwealth Bank, for example, Pega’s AI was used specifically to identify the “next best conversation” to have with each customer, drawing on 200 machine learning models and 157 billion data points. The system delivered 250 million real-time personalised messages to customers during the first months of the COVID-19 pandemic, supporting their financial needs at the time. And over the course of one year, the customer engagement engine nudged customers towards close to $500 million worth of government benefits.
Once a challenge is solved, it is important to move on to the next one to maintain the momentum of improved customer or citizen engagement. For both businesses and customers, van der Putten said, this continued progress demystifies AI and keeps it under control – delivering better and trusted outcomes for citizens and business alike.