Blog | 24 Jul 2024

Airport chaos, a software glitch, and the cost of downtime

Teri Robinson

Managing Editor, Thought Leadership

The tsunami of flight cancellations that followed July’s global tech outage made the costs of computer downtime painfully clear to thousands of stranded travellers. Multiply such failures across the world’s largest companies and the cost of downtime is some $400 billion, or nine per cent of profits at Global 2000 organisations, according to new research by Oxford Economics in partnership with Splunk.
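
A quick back-of-the-envelope check puts that figure in context. The implied profit pool below is an inference from the two numbers above, not a figure reported in the study:

\[
\text{implied Global 2000 profit pool} \approx \frac{\$400\ \text{billion}}{0.09} \approx \$4.4\ \text{trillion}
\]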

Measuring the overall cost of downtime is difficult—costs are often hidden in corporate operations and interactions with partners and customers—and the impact can take many forms. This time, the culprit was a glitch in an update to a widely used software platform. The next time might be the result of a cybersecurity incident like 2017’s WannaCry debacle, or human error, infrastructure failure, even a geopolitical event. Airlines, financial institutions and healthcare organisations are the public face of the current disruption, but companies of all stripes are vulnerable to the kind of financial, operational and reputational damage that ensued.

When we completed our work with Splunk, including a survey of 2,000 executives from leading firms around the world, we had no way of knowing that the research would be brought to life so dramatically just weeks later. We did know that the study, The Hidden Costs of Downtime, included some stark numbers, chief among them the $400 billion cost cited above.

As organisations cede more control to technology partners and digital interconnectedness expands faster than the infrastructure underpinning it, the likelihood of incidents with global impact grows. It is more crucial than ever, then, to safeguard whole ecosystems against problems caused by individual failures; as businesses and travellers around the world are painfully aware, a single glitch in a widely distributed platform can put multitudes at risk.

How best to minimise the impact of downtime varies by organisation, but some guidance can be found in the resilience leaders identified in our study. These companies recover faster from downtime incidents, are less likely to incur hidden costs and lose less revenue. Perhaps that is because they invest more in security tools, an average of $12 million each annually, and are ramping up their use of emerging technologies such as GenAI.

Downtime is a risk in any computing environment, but that risk does not have to lead to catastrophic failures, or catastrophic costs.
