One of the biggest technology outages in history, the 2024 CrowdStrike glitch, lasted for just over an hour, but the fallout has been far-reaching.
The lessons learned are crucial and serve as a powerful reminder of how tech failures can trigger a cascade of negative outcomes.
The parallels with market data quality and use are striking: the risks of poor-quality data flow through analysis, compliance and investment decisions.
CrowdStrike’s faulty software update brought banks, airlines, hospitals, supermarkets, emergency lines and media outlets in at least six countries to a halt. The update crashed more than eight million computers around the globe and caused losses that some estimate at around US$5 billion.
The cybersecurity firm watched its share price plummet 32 per cent over the next 12 days, wiping out $25 billion of market value. Insurers paid out billions of dollars in claims and a raft of legal action has followed.
It’s a master class in the perils of poor quality control and provides a cautionary tale for data services. After all, reliable, relevant and accurate data is the lifeblood of informed decision-making, risk management and strategic planning.
One 2023 survey of market participants from both buy- and sell-side firms across Europe, APAC and North America found data quality was more important than cost. Asked about their top priorities when selecting a market data vendor, 90 per cent agreed that data accuracy and data feed reliability topped their list of must-haves, ahead of the dollars.
It goes without saying that data errors or inconsistencies can lead to significant financial losses, regulatory issues and a loss of credibility. As the Nobel Prize-winning British economist Ronald Coase is reported to have said, “Torture the data, and it will confess to anything”.
Increasing complexity
The problem is that guaranteeing trust and accuracy, difficult at the best of times, is becoming harder as data complexity grows and the number of data sources multiplies. Systems also need to keep up with the continuing growth in market activity and higher peak volumes.
As companies scale their data services, they often rely on multiple vendors to provide various data feeds and services. Each vendor may have different formats, standards and protocols for delivering data, and each may be undergoing some level of change at any given time.
For example, Iress has more than 180 different vendor sources, many of which are constantly changing, requiring frequent rewrites and updates to keep pace and maintain the same level of service.
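As a rough illustration of what that normalisation work involves, the sketch below maps two hypothetical vendor quote formats onto a single internal schema. The vendor names, field names and schema are invented for the example and are not any vendor’s or Iress’s actual pipeline.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical internal schema; real pipelines carry many more fields.
@dataclass
class Quote:
    symbol: str
    bid: float
    ask: float
    ts: datetime  # normalised to UTC

def from_vendor_a(raw: dict) -> Quote:
    # Vendor A (hypothetical): prices as strings, timestamps in epoch milliseconds.
    return Quote(
        symbol=raw["ticker"],
        bid=float(raw["bidPrice"]),
        ask=float(raw["askPrice"]),
        ts=datetime.fromtimestamp(raw["epochMs"] / 1000, tz=timezone.utc),
    )

def from_vendor_b(raw: dict) -> Quote:
    # Vendor B (hypothetical): prices in minor units (cents), ISO-8601 timestamps.
    return Quote(
        symbol=raw["symbol"],
        bid=raw["bid_cents"] / 100,
        ask=raw["ask_cents"] / 100,
        ts=datetime.fromisoformat(raw["timestamp"]),
    )

ADAPTERS = {"vendor_a": from_vendor_a, "vendor_b": from_vendor_b}

def normalise(vendor: str, raw: dict) -> Quote:
    # A vendor format change only requires updating that vendor's adapter,
    # not every downstream consumer.
    return ADAPTERS[vendor](raw)
```

Keeping vendor-specific logic behind per-vendor adapters like this is one common way to contain the rewrites when a source changes its format.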
The more vendors a company works with, the more complex the data management process becomes. Practices need to scale efficiently as vendor services grow, and the teams responsible for onboarding and maintaining these data services need processes that are repeatable across many different services.
It can also be expensive to manage the relationships and support for a wide range of vendors: there are costs associated with vendor contracts, data integration and the ongoing support needed to ensure data quality and consistency.
Securing your data
Achieving high-quality data requires an active plan to verify sources and to evolve data management procedures as the services change. Teams responsible for these services must ensure the data is validated and normalised, meets update frequency requirements and complies with the organisation’s governance framework, and that the necessary tooling is in place to adequately support the services.

As data processing applications become increasingly sophisticated, with machine learning algorithms and more dynamic tooling to import data sets, real-time data observability becomes critical to managing data quality. It allows teams to identify faults as they occur, rather than well after the fact or, worse, when consumers spot them first.
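To make that concrete, here is a minimal sketch of the kind of checks a real-time observability layer might run on each incoming record, assuming the normalised quote fields from the earlier example. The thresholds and field names are illustrative assumptions, not any particular product’s implementation.

```python
import logging
from datetime import datetime, timedelta, timezone

logger = logging.getLogger("data_quality")

MAX_STALENESS = timedelta(seconds=30)  # illustrative update-frequency threshold

def check_record(record: dict) -> list[str]:
    """Return the data-quality issues found in a single normalised record."""
    issues = []

    # Completeness: required fields must be present.
    for field in ("symbol", "bid", "ask", "ts"):
        if record.get(field) is None:
            issues.append(f"missing field: {field}")

    # Validity: prices must be positive and the bid must not exceed the ask.
    bid, ask = record.get("bid"), record.get("ask")
    if bid is not None and ask is not None:
        if bid <= 0 or ask <= 0:
            issues.append("non-positive price")
        elif bid > ask:
            issues.append("crossed quote (bid > ask)")

    # Timeliness: flag records older than the expected update frequency.
    ts = record.get("ts")
    if ts is not None and datetime.now(timezone.utc) - ts > MAX_STALENESS:
        issues.append("stale record")

    return issues

def observe(record: dict) -> None:
    # Surface problems as they occur, rather than after consumers notice them.
    for issue in check_record(record):
        logger.warning("data quality issue for %s: %s", record.get("symbol"), issue)
```

The point is that completeness, validity and timeliness problems are logged the moment a record arrives, so the team hears about them before its consumers do.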
But data quality isn’t just about policies, tools and procedures; it’s also about people.
Ensuring that data operations teams have the right skills and support as your technology stack evolves is critical to scaling these operations. They need to be familiar with the logic and nuance of the data services they’re importing, and to really understand the formats and timing used by different vendors.
Training is crucial so that all employees share in protecting, simplifying and managing appropriate access to data, with teams empowered to continuously improve the systems powering the data services.
As MIT Sloan’s Miro Kazakoff says: “In a world of more data, the companies with more data-literate people are the ones that are going to win”.