The Top 3 Reasons Quant Strategies Fail 

What are the main reasons quant managers fail? It's all about the data, writes Nick Abe, Chief Operating Officer of Boosted.ai. Here Mr. Abe points to the three biggest pitfalls of quant investing. Still, even with the right data, some of the biggest quant blowups might have been avoided with a little more human oversight, or, as he explains, the marriage of quantitative and fundamental techniques.

Investment management is a tough business. Every year, countless funds shutter, and 2019 marked the fifth consecutive year of contraction for the hedge fund industry.

The quantitative space, which uses computer-driven algorithms to create models to beat the market, is even trickier – especially since those algorithms are only as good as their underlying data. Making sense of data is the job of a quant and their algorithms and models, but there can be many human missteps along the way.

In my experience, the top three addressable data problems with quant models are data cleansing, data normalization, and data ranges.

Data Cleansing

Data can be very messy; even experienced data vendors get it wrong. For example, a calculation error from a data vendor can produce bad data where the VWAP (volume-weighted average price) falls outside the day's high-low range.

Another less obvious example is when a company has multiple securities (e.g. Berkshire Hathaway Class A and Class B shares) and per-share amounts are reported incorrectly. If per-share amounts are reported relative to BRK.A but the model is “training” on BRK.B, the values could be off by orders of magnitude and skew the results of the entire model.

Data should be cleansed both manually, by hunting down problems as they surface, and automatically, through system checks that ensure the data “makes sense” to both the human and the machine.
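
To make the idea concrete, here is a minimal sketch of one such automated check in Python, assuming daily pricing data sits in a pandas DataFrame; the tickers, column names, and values are hypothetical and used only for illustration.

```python
import pandas as pd

# Hypothetical daily pricing data; tickers, columns, and values are illustrative only.
prices = pd.DataFrame({
    "ticker": ["AAA", "BBB", "CCC"],
    "high":   [105.0, 52.0, 10.0],
    "low":    [101.0, 50.0,  9.5],
    "vwap":   [103.2, 49.1,  9.8],   # "BBB" has a VWAP below the day's low
})

# Automated "makes sense" check: the VWAP must sit inside the day's trading range.
bad_vwap = prices[(prices["vwap"] > prices["high"]) | (prices["vwap"] < prices["low"])]

if not bad_vwap.empty:
    # In practice these rows would be flagged for manual review rather than fed to a model.
    print("Rows failing the VWAP sanity check:")
    print(bad_vwap)
```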

Data Normalization

When comparing company data, what makes sense for one company might not make sense for another, and large outliers are everywhere. Take a ratio such as revenue to enterprise value: it can differ by orders of magnitude between a fast-growing company and an established one, yet neither is inherently better than the other.

For a quant strategy’s models to work, the features they incorporate must be normalized. Applying Z-scores and other normalizations helps with non-stationary data – data whose distribution is constantly changing – by making values comparable across an entire universe. For example, one-month percent returns may be positive on average across a universe, but by definition the average Z-score will be zero.
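
As a rough illustration, the snippet below computes a cross-sectional Z-score of a hypothetical one-month-return feature, date by date; the tickers, values, and column names are assumptions made up for the example.

```python
import pandas as pd

# Hypothetical panel of one-month returns; all values are made up for illustration.
features = pd.DataFrame({
    "date":   ["2020-01-31"] * 4 + ["2020-02-29"] * 4,
    "ticker": ["AAA", "BBB", "CCC", "DDD"] * 2,
    "ret_1m": [0.08, 0.03, 0.12, 0.05, -0.02, -0.10, 0.01, -0.06],
})

def zscore(values: pd.Series) -> pd.Series:
    # De-mean and scale within each date so values are comparable across the universe.
    return (values - values.mean()) / values.std(ddof=0)

features["ret_1m_z"] = features.groupby("date")["ret_1m"].transform(zscore)

# Raw returns can be positive on average, but the cross-sectional Z-score averages to zero.
print(features.groupby("date")[["ret_1m", "ret_1m_z"]].mean())
```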

Data Ranges

When it comes to data ranges, the longer the better.

Long-Term Capital Management was one of the most famous quant-based hedge funds, especially in the 1990s when there were far fewer players in the space. It infamously imploded in 1998 under a combination of highly leveraged trading, the 1997 Asian financial crisis, and the 1998 Russian financial crisis.

Financial crises are black swan events, which, by their nature, are hard to predict. Historian Niall Ferguson, in his 2008 book The Ascent of Money, theorized that another reason LTCM failed was that its models only went back five years.

This underscores the point that quant strategies do better with more data, since the machines are driven entirely by that data. It is important that the training data include a sampling of both good and bad periods. If a model is trained on only five years of data and all five happen to be smooth sailing, the machine will undoubtedly be unprepared for choppier waters.
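
One way to check this is to measure how much stress a candidate training window actually contains, for example via the benchmark's worst drawdown. The sketch below uses simulated data and an assumed 20% drawdown threshold purely for illustration.

```python
import numpy as np
import pandas as pd

# Simulated benchmark total-return index over a candidate training window (illustrative only).
rng = np.random.default_rng(0)
dates = pd.bdate_range("2015-01-01", "2019-12-31")
index = pd.Series(100 * np.cumprod(1 + rng.normal(0.0003, 0.01, len(dates))), index=dates)

# Worst drawdown over the window: a rough proxy for whether the data contains any "choppy waters".
running_peak = index.cummax()
max_drawdown = (index / running_peak - 1.0).min()

# The 20% threshold is an assumption; a drawdown of that size roughly marks a bear market.
if max_drawdown > -0.20:
    print(f"Worst drawdown in the training window is only {max_drawdown:.1%}; "
          "consider extending the data range to include a crisis period.")
```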

Crafting a More Foolproof Quant Strategy

Just as algorithms and models are only as good as their data, the real-life performance of a quant strategy is only as good as the human running it. In my experience, the best combination pairs the real-world fundamental experience of an investment manager with the speed, processing power, and ability to find unintuitive connections that machine learning offers. This approach is becoming known as quantamental investing.

There have been infamous quant fund implosions that might have been avoided had a human been involved in trade decisions. In those instances, a human might have recognized that it was time to stop selling in 2007, or in 2020, or that travel stocks would be hit by COVID-19. Essentially, funds that are “full quant” may lack the human intuition to intervene in certain trade situations.

Quantamental investing, on the other hand, combines fundamental and quantitative expertise, marrying the latest computing power with real human instinct for which data and features to train the machine on. In the modern world, it is neither the all-human portfolio nor the all-machine portfolio that wins; the union of the two produces the best results.

The future of investment management is set to become even more challenging. Fee compression and shifting investor sentiment toward passive funds mean that every bit of alpha an investment manager can capture helps secure their fund’s future. Quantamental funds may be one way forward for the industry, but not without careful analysis of their pitfalls.

•  •  •

Nick Abe, CFA, is the COO of Boosted.ai, an artificial intelligence platform that allows institutional investors to implement machine learning in their portfolios quickly and easily. 

TabbFORUM is an open community that provides a platform for capital markets professionals to share their ideas and thought leadership with their peers. The views and opinions expressed are solely those of the author(s). They do not necessarily reflect the opinions of TABB Group, its analysts, TabbFORUM and its editors, or their employees, affiliates and partners.
