The easiest way to spot a bad financial machine learning paper and save yourself some precious time
Hey guys, your time is precious, and mine is, too! So stick around to get some quick and dirty advice on how to save yourself a few minutes the next time you read a finML paper.
When it comes to reading scientific papers, we have all experienced that moment: you work through an exhausting ML paper full of maths and descriptions, spending at least half an hour of your time, only to find out that the reported results are total bs.
Let me give you a very easy and handy way to quickly spot, with high confidence, that a paper you are about to read is full of fake predictive results.
The favorite focus topic of most finML papers is, of course, the prediction of asset prices. In that context, you often see papers displaying plots like this one (drawn from some random paper on the web; it is not my intention to discredit the authors, so I omit the title of the paper):
In that particular study, the authors, as so often, build a complex deep neural net (many layers, many fancy names for different types of network components, super sophisticated) and train it to predict the next day’s close price given data from the previous N days. So far so good, it’s not necessarily a bad approach, but…
…the result simply sucks. As you can see, the predicted value is simply a one-step-lagged version of the true value. What does this mean? Simple. The model learns to minimize the loss criterion (e.g. mean absolute error, MSE, etc.) by taking the current value as the prediction for the next value.
Paradoxically, in my opinion, this can even make some sense: the model is simply learning the well-known martingale hypothesis (very fundamental in quantitative finance), which states that if a process is a random walk (which markets, on a daily scale, may be close to), the best prediction for the next price is the current price:
p*[t+1|t] = E[p[t+1]|t] = p[t]
p* : predicted price, p : true price
This reads: “the predicted value of the price at the next timestep t+1 given data up to now, i.e. time t, is the current price at time t”.
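To see how a model can “win” on the loss while learning nothing, here is a minimal sketch on simulated data (all numbers and variable names are mine, purely illustrative): on a random walk, the naive lag-1 forecast already achieves an MSE close to the noise variance, far better than, say, a constant mean forecast, even though it has zero predictive power.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a random-walk "price": p[t+1] = p[t] + noise
noise = rng.normal(0.0, 1.0, size=1000)
price = 100.0 + np.cumsum(noise)

# Naive forecast: predict tomorrow's price with today's price
naive_pred = price[:-1]
truth = price[1:]

mse_naive = np.mean((truth - naive_pred) ** 2)
# Compare against a constant forecast (the in-sample mean)
mse_mean = np.mean((truth - truth.mean()) ** 2)

print(mse_naive)  # ≈ 1, i.e. roughly the noise variance -- looks impressively small
print(mse_mean)   # much larger, since the walk drifts far away from its mean
```

This is exactly why a lagged prediction looks so good on paper: by construction, its error is just the one-step noise, which is the smallest error any model can achieve on a true random walk.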
Obviously, this is a “good” outcome in terms of minimizing the loss criterion, but it is also a very naive result (which is why this is typically even called the naive model), because such a prediction amounts to random-chance guessing. It obviously does not make much sense to claim that the price at the next instant will simply be this instant’s price, and you can imagine that one would fail miserably if they used this “static” prediction as a real forecast.
The reason why these kinds of plots are shown so often in research (and I see them all over Medium, as well) is of course that they look really good, especially when you zoom out and the lag is hardly visible. You might think: wow, this is really close. Yet these plots convey a dramatically misleading view. Imagine instead a plot that shows the difference between the predicted and true values, the so-called forecasting error. You would immediately see that the forecasting error is large most of the time. Strangely, error plots are almost never shown in the papers. Hmmm, why might that be???
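You can easily make that error plot yourself. A small sketch on simulated data (variable names are mine, not from any paper): for a lag-1 “prediction”, the forecasting error is exactly the one-step price change, i.e. exactly the quantity the model was supposed to predict in the first place.

```python
import numpy as np

rng = np.random.default_rng(1)
price = 100.0 + np.cumsum(rng.normal(0.0, 1.0, size=500))

# A "prediction" that just lags the truth by one step
pred = np.concatenate(([price[0]], price[:-1]))

# The forecasting error of a lag-1 prediction is exactly the
# one-step price change -- the very thing we wanted to predict
error = price - pred
daily_move = np.diff(price, prepend=price[0])

print(np.allclose(error, daily_move))  # True
```

Plotting `error` instead of overlaying prediction and truth immediately exposes that nothing was predicted: the “error” is as erratic as the market moves themselves.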
Another point which makes it very obvious that such a model will not perform well is this: financial market data is typically among the noisiest data you can get. We say it has a very low signal-to-noise ratio. This means the true trend, or signal, of a stock price is “hidden” behind a lot of noise. This noise may even vary in intensity (so-called heteroskedasticity), be autocorrelated, and so on. This means, in turn, that the true signal is probably a rather smooth trend hidden behind the erratic movements of the observed price. Therefore, a model like the one above, which reproduces exactly the same erratic movements as the observed market price, is very likely to be heavily overfit and to generalize poorly. A well-trained, well-fitted model could never replicate the noise of the price trajectory so accurately; instead, it would rather give a smooth prediction. In the study above, the model obviously just learns to repeat what it has just seen. It thus has a one-step memory, but no predictive power.

Imagine me saying A-A-A-A-A-…-B. If your policy is to repeat what I last said, I can say A for as long as I want, and you can grow more and more confident that you are doing a good job at predicting what I say. But when I say B, you will still fail and say A. :D
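To illustrate the smooth-signal point with a toy simulation (a smooth sine trend plus heavy noise; all assumptions here are mine): a simple trailing moving average, used as a crude stand-in for a well-regularized “smooth” forecaster, tracks the hidden trend far better than the naive noise-replicating forecast, even though it never reproduces the wiggles.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(500)
trend = np.sin(t / 50.0)                      # smooth hidden signal
price = trend + rng.normal(0.0, 0.5, 500)     # observed price = signal + heavy noise

# Naive forecast: replicate every noisy wiggle one step later
naive = price[:-1]

# "Smooth" forecast: a trailing 20-step moving average (a crude stand-in
# for the kind of output a well-regularized model tends to produce)
window = 20
smooth = np.convolve(price, np.ones(window) / window, mode="valid")[:-1]

# Score both forecasts against the *hidden* trend at the next step
err_naive = np.mean((trend[1:] - naive) ** 2)
err_smooth = np.mean((trend[window:] - smooth) ** 2)

print(err_smooth < err_naive)  # True: the smooth forecast is closer to the signal
```

The naive forecast carries the full noise variance into its error against the true signal, while averaging shrinks the noise; a forecast that looks “less impressive” on the price chart is actually closer to what matters.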
Another point: imagine you shift the prediction back one step. It would almost perfectly match the true value process; the error of the backshifted prediction with respect to the true value would be almost zero. In other words, your prediction fits the current timestep better than the next timestep, where it is actually supposed to belong. This is another strong indication that the results are simply bs. It is also a helpful way to quickly check the forecasting power of your own time series models: whenever you make a prediction, check its error against lagged versions of the target variable. If the error is much lower at some past lag, chances are your model has just memorized a past value and will not generalize well.
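This backshift check is easy to automate. A minimal sketch (the helper name `mse_at_shift` is hypothetical, data simulated): compute the prediction error against the target shifted back by k steps; if the minimum sits at some k > 0 instead of k = 0, the model is just echoing a past value.

```python
import numpy as np

rng = np.random.default_rng(2)
price = 100.0 + np.cumsum(rng.normal(0.0, 1.0, size=1000))

# A "fake" model whose forecast for t+1 is just the last observed price
pred = price[:-1]      # forecast made at time t for t+1
target = price[1:]     # true value at t+1

def mse_at_shift(pred, target, k):
    """MSE between the prediction and the target shifted back by k steps."""
    if k == 0:
        return np.mean((target - pred) ** 2)
    return np.mean((target[:-k] - pred[k:]) ** 2)

errors = {k: mse_at_shift(pred, target, k) for k in range(4)}
best_shift = min(errors, key=errors.get)
print(best_shift)  # 1 -- the "forecast" matches yesterday's truth best
```

A healthy forecast should have its lowest error at shift 0; here the minimum lands at shift 1 (where the error is exactly zero), which is precisely the lag-1 pathology from the plots above.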
To wrap up: most financial ML papers are still useful, as they explore lots of ideas for network architectures and so on. You can in fact learn from them how to construct networks, and you can get inspired to create new ones. So, for educational purposes, one should read them, or at least some fundamental ones.
But if you are on the hunt for good prediction models, please always check the results first. If you see anything like the plot above, there is a high chance the reported results carry no value. The good thing is that this is actually fairly easy to spot, so I hope this neat trick saves people some time and lets them focus on the valuable stuff!
Btw, one last comment: if you’re looking for predictive financial models, be prepared that 99.99% of the models presented out there won’t work anyway. There are just some that make more sense and some that make less. But in the end, you know: who would publish a truly profitable predictive model? Let yourself be inspired by research, and build your own models from it once you have gained some experience.