Clearly, the scenario of a holiday-abbreviated session falling on the third Friday of the month, expiration Friday, had not been properly considered in all the hours of development at these particular exchanges. And, just as clearly, market data vendors had insufficient data-scrubbing routines in place to test the validity of certain raw market data.
Finally, it is clear that the OCC did not have its own procedures in place to vet certain market data, even when it came from the most venerable sources. In short, this was not a data problem. This was a processing problem in which a bunch of folks dropped the ball.
While all the yammering and chattering around this episode suggest that relatively little damage was done, the event speaks to the wild world of uncontrollable complexity risk.
So fasten your seat belts because it could get worse before it gets better. The good news: what doesn’t kill us makes us stronger.
Back in the day, when I was the CIO of a high-turnover statistical arbitrage fund, I used to feel a perverse sense of good fortune when my team experienced unforeseen “glitches.” There was simply no way to anticipate every possible permutation of challenges “in the laboratory.” The only way to come anywhere close to bullet-proofing our systems and processes was to keep our jalopy on the proverbial track for as long as possible and overcome whatever technical or market-oriented challenge came our way, thereby (hopefully) evolving our platform into a finely tuned performance machine.
The same logic can be applied to the aforementioned BATS event as well as the recent Flash Crash. (In the case of the Flash Crash, the regulators bear some responsibility for not “federating” the rule book for an increasingly fragmented marketplace.)
Confidence issues and other grumblings aside, these problems, now that they’re in the rear-view mirror, actually make markets better, stronger, healthier and more fault-tolerant.
Sure, on one level, any kind of glitch on financial exchanges spooks the bejesus out of an already-traumatized public and serves as catnip for a mainstream media machine conditioned to inflate any and all imperfections in the financial firmament. However, the fact that these events have expanded our collective library of possible scenarios is a net positive for everyone (assuming we learn from them).
Here’s the buried headline: while we should always expect and prepare for the unexpected, the capital markets’ growing dependence on technology to do more, faster and with fewer people is a recipe for increasingly frequent surprises.
The quality-control requirements for this unprecedented level of complexity defy comprehension, particularly within the largest financial intermediaries and the most fragmented markets. It’s a wonder that more glitches don’t find their way to the light of day.
To begin with, regulators around the world could play a major role in combating complexity risk by simplifying the rule book.
But the truth of the matter is that we simply cannot understand the full spectrum of what could go wrong.
One of our industry’s greatest ironies may be that the most challenging aspect of combating complexity risk is finding ways to simplify systems, processes and operational infrastructure.
An increasing reliance on technology and automation creates intangible costs (and risks) that are not properly appreciated in our business. Excessive cutting of people in favor of technology, or over-automation, incrementally exacerbates these risks, because people are the only defense against “what we don’t know.”
Exhibit A: State Street just announced that it will cut 5 percent of its workforce by the end of 2011 as part of an “information technology transformation.”
Mark another point in the win column for complexity risk.