Partner Content

An end to rip-and-replace? How learning from patterns can take the risk out of legacy change

By Tom Burton

July 23, 2018


System modernisation is consistently cited as the number one concern of CIOs inside and outside of government. 

For Australian public agencies, it’s a formidable challenge. The sheer complexity of statutory and business rules, relentless machinery-of-government and program changes, and a reluctance to be seen spending on low-visibility but vital infrastructure have created an enormous legacy backlog that is growing every day.

These legacy reform projects typically come with large risks, with many examples of major project cost and time overruns, operational limitations and economic and reputational damage stemming from the unintended consequences of so-called ‘big bang’ technology modernisations.

Unfortunate as it may be, the truth is that in many cases mature information systems have grown old ‘disgracefully’. Repeated waves of change tend to petrify the system further, removing any flexibility and creating ‘software entropy’.

Successive waves of change then convolute ageing software as the artefacts in the code become co-mingled. They become difficult to separate and change independently of each other, driving up maintenance costs and making it increasingly difficult to modify a mature information system to reflect ongoing business process change.

Add to this mix the unrelenting pressure on chief information officers to bridge the experiential gap between commercial and government systems for end users, and it’s not hard to see why legacy system reform has become the iceberg issue that keeps leadership awake at night.

Fortunately, there is a middle ground.

Fear drives over-specification

Fear of the many unknowns surrounding upgrades has often produced a strongly conservative and highly intensive planning and requirements process that attempts to capture all risks and deliverables.

Once the accepted norm, this monolithic or ‘big bang’ approach is often contrasted with faster and more nimble practices like agile that ostensibly start again from the very beginning – fine for a start-up, but more difficult for established organisations bound by strong governance requirements, which need to retain corporate memory.

What’s less recognised is that the two approaches are not mutually exclusive: there is a way to leverage the best of both.

The embrace of agile practices promotes consistent delivery of software and, coupled with the emergence of DevOps and more modular procurement practices, opens up the opportunity to take a more iterative approach to ICT renewal.

But there are also vital lessons to learn from what is now long experience of systems modernisation – not least how to avoid reinventing the wheel.

Unlocking why legacy systems survive – and why they need to be continually modernised – is the key to a highly strategic approach now being advocated by IBM, the company that arguably has more heritage and experience in executing technological change than any other.

Learning from experience

Through its work with organisations worldwide, an IBM research group in California, led by Jan Gravensen, has identified a variety of actions and activities – dubbed ‘patterns’ – that are relatively consistent in any ICT modernisation project.

Software and systems are ultimately human constructs, so it’s vital to understand the behaviour behind how and why they are built, as much as their ultimate or original purpose. It also pays to know the how and why of changes, modification and customisation over time.

Gravensen contends that using a patterns-based approach enables CIOs to take a structured, evolutionary and iterative approach to legacy renewal, removing much of the risk typically associated with big bang, rip-and-replace approaches.

“We have 20 years of lessons learned. We have good roadmaps rooted in what works and what hasn’t worked. By and large, we know what hasn’t worked are the large-scale, ten-year-out transformation programs,” says IBM Global Government Industry lead, Dr Julia Glidden, who recently visited Australia to apprise government and business CIOs of evolving transformation cultures.

“We now have a footprint of where best to start, depending on your use case.  The understanding you start with [is] the problem you are trying to solve, instead of taking on the whole of the enterprise.”

Referring to the work by Gravensen’s team, Dr Glidden said: “I think this is a really important step change and it is now something that is packaged, it is a roadmap, it is consumable for CIOs everywhere based on the common problems.

“I have the privilege of flying around the world over 20 years working with various governments in their digital journey. And with rare exception, most CIOs will think there is a uniqueness to their challenge. But when you scratch below the surface, there tends to be common patterns of problems, which leads to common sets of solutions for mitigating those problems – depending on the business outcomes desired.

“I think the message to CIOs, when they are looking to stabilise their infrastructure or modernise, is they are not alone and their problems are not unique.”

Patterns everywhere

Gravensen and his team have identified what is common in most legacy considerations. These start with the patterns that describe an organisation’s behaviours, perceptions and biases, which typically heavily influence the strategy chosen to modernise its systems.

Importantly, there is also a cluster of economic patterns that influence the organisational choice of strategy. These lucid descriptions of economic patterns in some respects draw from behavioural economics (think nudges and motivations) to better understand why organisations in some instances choose dramatic and risky modernisation strategies.

The patterns Gravensen posits are identifiable, but they are also interrelated.

The behaviour, strategy and economic patterns developed so far identify the factors leading to the decision to modernise and the approaches selected. Understanding the behaviours – the perceptions and biases – that exist inside an organisation can help clarify the challenge, as well as expose misconceptions about the systems that exist.

Sometimes the strategies selected are influenced by internal capability as well as economic factors. The availability of sufficient capital, or the lack of it, influences the approaches selected.

Dark matter: technology anti-patterns

A third distinct group is a phenomenon called anti-patterns, best characterised as the ‘technical debt’ that has accrued in a particular system.

It’s tempting to interpret these as the accrued price of simply putting things off, especially if there’s a perceived imperative to find a quick ‘out of the box’ solution. The reality is that it’s as much about understanding what you actually have – and why you have it – in order to critically evaluate what the best next steps may be.

Understanding these anti-patterns – which lead to a system becoming a legacy – provides insights into the underlying causes of obsolescence. Understanding how the system was maintained over time, what information is lacking and the intricacies of the system helps to determine what can potentially be undone to improve its operation.

This deeper insight can derive deeper value. For many years mainframes – the heavy lifters of transactional processing ‒ were eschewed as legacy infrastructure and architecture. Yet their genesis in distributed architecture – and knowing what are solid architectural foundations – can give you the blueprint to build a better wheel, rather than designing one from scratch.

Mapping practicality

There are also patterns of practice. These can include declared dependencies, inversion of control, and externalisation, all of which can help to maintain important features and functions of legacy systems within a modern setting, reducing risk to agencies and organisations.
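
To make these practice patterns concrete, here is a minimal sketch – illustrative only, with hypothetical names, and not drawn from IBM’s catalogue – of how inversion of control and externalised configuration can preserve a trusted legacy rule while making it replaceable.

    // Illustrative sketch with hypothetical names: inversion of control plus
    // externalised configuration around a preserved legacy business rule.
    import java.util.Properties;

    // The statutory rule is expressed as an interface, so callers no longer
    // depend on one concrete implementation (inversion of control).
    interface EntitlementRule {
        double assess(double declaredIncome);
    }

    // The existing, trusted calculation is kept verbatim behind the interface,
    // preserving the corporate memory baked into the legacy code.
    class LegacyEntitlementRule implements EntitlementRule {
        private final double threshold;

        LegacyEntitlementRule(double threshold) {
            this.threshold = threshold;
        }

        @Override
        public double assess(double declaredIncome) {
            // Original legacy logic, unchanged.
            return declaredIncome > threshold ? 0.0 : 450.0;
        }
    }

    // The service receives the rule from outside rather than constructing it,
    // so a modernised rule can be substituted without touching this class.
    class AssessmentService {
        private final EntitlementRule rule;

        AssessmentService(EntitlementRule rule) {
            this.rule = rule;
        }

        double assess(double declaredIncome) {
            return rule.assess(declaredIncome);
        }
    }

    public class Example {
        public static void main(String[] args) {
            // The threshold is externalised to configuration instead of being
            // hard-coded, so a policy change no longer requires a code change.
            Properties config = new Properties();
            config.setProperty("entitlement.threshold", "52000");

            double threshold = Double.parseDouble(config.getProperty("entitlement.threshold"));
            AssessmentService service = new AssessmentService(new LegacyEntitlementRule(threshold));

            System.out.println(service.assess(48000)); // prints 450.0
        }
    }

Because callers depend only on the interface, a modernised rule can later be substituted without disturbing the functions that still rely on the legacy behaviour.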

Often, it’s about identifying what’s important – think corporate memory or systemic knowledge – that’s been baked into systems across successive waves of technology and reflects wider organisational systemics as much as specific functions that need upgrading.

Understanding the ins and outs of legacy systems – call them heritage systems if you will – helps the development of efficiently targeted system designs, or architecture patterns.

These patterns can be used by teams following an agile methodology to sketch out the architectural runway, using a modular approach. The transition can thus be gradual, effective and affordable, reducing risk and burden for the organisation.
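
One way such a gradual, modular transition is often realised – sketched below as an illustration only, with hypothetical names rather than anything prescribed here – is a facade that routes each capability to either the legacy system or its modernised replacement, so functions migrate one at a time rather than in a single cutover.

    // Illustrative sketch with hypothetical names: a facade that routes each
    // capability to whichever implementation currently owns it, allowing
    // incremental migration away from a legacy system.
    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Function;

    public class MigrationFacade {
        // Each capability maps to the implementation that currently serves it.
        private final Map<String, Function<String, String>> routes = new HashMap<>();

        MigrationFacade() {
            // Start with every capability served by the legacy system.
            routes.put("lookupCase", request -> "legacy: case record for " + request);
            routes.put("calculateFee", request -> "legacy: fee for " + request);
        }

        // As each modernised module passes testing, its route is flipped.
        void migrate(String capability, Function<String, String> modernImpl) {
            routes.put(capability, modernImpl);
        }

        String handle(String capability, String request) {
            return routes.get(capability).apply(request);
        }

        public static void main(String[] args) {
            MigrationFacade facade = new MigrationFacade();
            System.out.println(facade.handle("calculateFee", "application-42"));

            // One capability is migrated; the rest continue on the legacy path.
            facade.migrate("calculateFee", request -> "modern: fee for " + request);
            System.out.println(facade.handle("calculateFee", "application-42"));
            System.out.println(facade.handle("lookupCase", "application-42"));
        }
    }

Flipping one route at a time keeps the impact of any problem small and leaves a fallback to the legacy path if a new module misbehaves.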

It’s about knowing what to look for to guide informed evolution, as opposed to starting again from scratch.

Hedging for the future

When significant investment and operational uptime are at stake, knowing what to hold as much as what to fold is an acquired skill for public sector CIOs and leadership. Commodity and cutthroat pricing do not in themselves determine architectural quality.

Reduced instruction set computing (RISC) was once regarded as verging on obsolescence as more commoditised processors became cheaper and their power increased. Yet today, that same RISC architecture powers most smartphones.

In technology, like other human endeavours, there can be a persistent duality of change and continuity.

The key message for Australian government agencies is that they are not alone in their struggles in balancing the cost of modernising legacy systems with the need to improve services and deliverables.

Thanks to expertise developed over decades of working with public and private enterprises around the world, there are now research-informed, evidence-based approaches that can help any government agency understand and balance these needs and deliver cost-effective, acceptable-risk modernisation.

Both ministers and technology trends may come and go in short succession. Yet the machinery of government, systemic or informatic, will always need to know where its foundations lie to continuously serve.
