“Then there is the man who drowned crossing a stream with an average depth of six inches.” W.I.E. Gates.
Every week, it seems, some organization or other is making headline claims about a new latency figure. From two-digit-microsecond matching to low-latency access to new trading facilities, the race is on to see who can report the lowest number.
Here’s the question: are the numbers we see in these headlines at all useful? I’ve previously noted that it’s crucial to measure latency holistically, across every component in your trading flow. Like many other latency headlines, however, numbers such as those above tend to focus on a single point in that flow: market data acquisition, or message distribution, or trade execution. (I also imagined I could find an Einstein quote suitable for this post, but I’ve failed miserably – let’s hope the remainder of the post makes up for it!)
There’s another reason to doubt the usefulness of these figures. Millions of transactions flow through your systems every hour. Market behavior is constantly changing, and volumes can vary by an order of magnitude from one minute to the next. In an environment like this, nothing stays constant for any significant length of time, so specifying latency with a single number is meaningless. You could send orders to market with a median latency of 1ms, but still have 10% or more of them taking over 100ms – presumably not a situation you want to be in.
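A quick sketch makes the point concrete. The numbers below are entirely made up for illustration: a synthetic latency distribution where most orders are fast but a minority sit in a long tail. The median looks excellent; the mean and the 99th percentile tell a very different story.

```python
import statistics

# Hypothetical order-to-market latencies in milliseconds (illustrative only):
# 900 "fast path" orders at 1 ms and 100 tail orders at 150 ms.
latencies = [1.0] * 900 + [150.0] * 100

def percentile(samples, p):
    """Nearest-rank percentile: the smallest value >= p% of the sample."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[rank]

print(f"median: {statistics.median(latencies):.1f} ms")  # 1.0 ms
print(f"mean:   {statistics.fmean(latencies):.1f} ms")   # 15.9 ms
print(f"p99:    {percentile(latencies, 99):.1f} ms")     # 150.0 ms
```

Quoting any one of those three numbers as “our latency” would be technically true and practically misleading – which is exactly why a single headline figure tells you so little.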