The readability debate: is there a formula for the write stuff?

By Dr Neil James

October 13, 2016


In a series of 2016 reports, an Irish software company reviewed the clarity of government websites in Australia, New Zealand, the United States, the United Kingdom and South Africa. The reports raised awareness of one of the most common measures of plain English: readability.

Public sector organisations are increasingly using readability tools to benchmark their communications. But before investing too extensively in this approach, it is worth pausing to review what readability can (and can’t) deliver.

What is readability?

While the general meaning of readability is obvious, in plain English it has a specific and more technical application. Over the last 80 years, around 1000 studies have quantified how two linguistic elements correlate closely with comprehension: word and sentence length.

This sounds like common sense. The longer the sentences and the bigger the words in a text, the more skill it demands of the reader. The seductive step is that readability metrics turn these elements into formulas and graphs that allow you to “score” your writing.


Some measures, such as the Flesch formula, grade text on a scale from 0 (very difficult) to 100 (very easy). Others, such as the Fry graph, estimate the number of years of education needed to understand a text at one pass. The Dale-Chall formula generates both a score and a grade level.

To take the Fry graph as an example, a reading grade of eight suggests a reader in early high school (eight years of education) could comfortably read a text in a single pass. A text scoring 12 would require a high school education, and 15 an undergraduate degree.
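To make the arithmetic concrete, here is a minimal Python sketch of the Flesch Reading Ease calculation: 206.835, minus 1.015 times the average sentence length, minus 84.6 times the average number of syllables per word. The function names and the vowel-group syllable counter are illustrative assumptions of mine, not part of the published formula, and real tools estimate syllables in more sophisticated (and still imperfect) ways.

import re

def count_syllables(word):
    # Rough heuristic: count runs of vowels, dropping a trailing silent 'e'.
    # Real syllabification is considerably harder than this.
    word = word.lower()
    if word.endswith("e") and not word.endswith("le"):
        word = word[:-1]
    return max(1, len(re.findall(r"[aeiouy]+", word)))

def flesch_reading_ease(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    average_sentence_length = len(words) / len(sentences)
    average_syllables_per_word = sum(count_syllables(w) for w in words) / len(words)
    # 0 is very difficult, 100 is very easy
    return 206.835 - 1.015 * average_sentence_length - 84.6 * average_syllables_per_word

# Short words and short sentences score as easy (a high number).
print(flesch_reading_ease("Plain words and short sentences help readers."))
# Dense jargon scores as very difficult (the result can even fall below zero).
print(flesch_reading_ease("Organisational remuneration methodologies necessitate comprehensive documentation."))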

What are the benefits of readability measures?

Readability measures are appealing because they objectively estimate how well your text aligns with the skills of your readers. And they do highlight when a text needs to improve.

For example, the Plain English Foundation has found the public sector in Australia writes on average at grade 15-16 on the Fry graph, bordering on postgraduate levels. Given that more than 80% of Australians do not hold an undergraduate degree, agencies are writing well above the reading skills of the general population.

This doesn’t mean the public can’t read government communications, but that doing so takes time and often several attempts. This leads to errors and costly follow-up through call centres and correspondence. Agencies can reduce these costs by writing at grade 8-10 for the public and at 10-12 in internal documents.

Readability measures are especially effective in showing technical and subject matter experts how to calibrate their text for their readers. The United States Army, for example, found that the average soldier had a reading level of grade 9, yet engineers wrote operating manuals at grade 16. This led to “operator errors” that put lives in danger.

Communication businesses such as newspapers and educational publishers have used readability for decades. American newspapers in the 1940s and 1950s boosted circulation and profits by bringing the reading grade of front-page stories down from 16 to 11. Australian newspapers today write in the grade 8-12 range. There’s no reason government organisations can’t do the same.

In recent years, there has been an explosion of free and proprietary systems that can calculate the readability of a text online. These have great appeal to managers and executives looking for a quick and cost-effective benchmark.

What are the limits of readability measures?

But before you mandate a readability measure or build a proprietary system into your performance management, consider some potential pitfalls.

First, you may be narrowing your focus too far. The fact is that readability can only measure two variables: word and sentence length. Together these cover just one part of a single aspect of clear communication: expression.

Second, a good score will never guarantee the content is worth reading. You can write a piece of grammatical nonsense that fares well against a formula.

Nor will readability tell you anything about the structure or design of a document. And it can’t predict whether readers can navigate your text to find what they need.

Even as a measure of your language, readability is far from comprehensive. A text can be easy to read, yet still be inefficient, imprecise and ambiguous.

Yet the biggest mistake organisations make is to rely on automated readability scoring. The software can be notoriously inaccurate in estimating syllables. Elements such as punctuation, tables, headings and lists also skew the sentence count. Depending on your text and the system you are using, an automated readability result may be inaccurate by 10% to 50%.
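As a small illustration of how automated counts drift, here is a toy example of the sentence-count problem, assuming the kind of naive punctuation rule a simple tool might apply; abbreviations, headings and list items without full stops have similar effects. The splitting rule shown is my assumption for illustration, not a description of any particular product.

import re

text = "Dr. James reviewed the report. See p. 3 for details."
naive_sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
print(len(naive_sentences))  # prints 4, although the passage contains only two sentences
# The abbreviations "Dr." and "p." are treated as sentence breaks, halving the
# average sentence length and making the text score as easier than it really is.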

How should agencies apply readability?

None of this means you should ignore readability. Most agencies can shorten their words and sentences to make their content easier to understand.

Consider a good readability score as a necessary but insufficient condition for clear communication. If your plain English strategy starts and ends with readability, you are setting yourself up to fail.

Properly deployed, readability is just one of at least a dozen text-based performance indicators you should explore. The Plain English Foundation, for example, applies a dozen criteria to scrutinise the content, structure and design of communications, along with expression elements such as readability. We’ve found that relying on readability alone can give agencies either a false sense of security or a false sense of urgency about the performance of their text.

So by all means use readability measures, but put them in the right context and do not expect them to deliver what they can’t – a guarantee of clear communication.
