Friday, January 18, 2013

Reflections on "The Signal and the Noise"


Summary: Predictions fail because it is impossible to eliminate uncertainty. We have to read through the natural randomness ("the noise") to find what the information is really telling us ("the signal"). This is easier said than done, even for professionals. Predictions should not be categorical statements (such as "It will rain tomorrow"), but should instead carry a calculated probability (such as "There is a 40% chance of rain tomorrow"). A good prediction has a feedback mechanism, so the percentage can be corrected based on past events ("Of all the times we predicted a 40% chance of rain, it actually rained 40% of the time"). One-time events are therefore difficult to predict, since there is no feedback mechanism.
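That feedback loop is what statisticians call calibration: bucket your past forecasts by the probability you quoted, then check how often the event actually happened in each bucket. A minimal sketch (the function name and toy rain data are my own, not from the book):

```python
from collections import defaultdict

def calibration(forecasts, outcomes):
    """Group past forecasts by the quoted probability and return the
    observed frequency of the event within each group."""
    buckets = defaultdict(list)
    for prob, happened in zip(forecasts, outcomes):
        buckets[prob].append(happened)
    return {prob: sum(hits) / len(hits) for prob, hits in sorted(buckets.items())}

# Toy data: ten days on which we forecast a 40% chance of rain,
# and it actually rained on 4 of the 10.
forecasts = [0.4] * 10
outcomes = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]

print(calibration(forecasts, outcomes))  # {0.4: 0.4} -- well calibrated
```

A well-calibrated forecaster sees the observed frequency match the quoted probability in every bucket; a one-time event never fills a bucket, which is exactly why it resists this kind of correction.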

A good prediction will beat the conventional wisdom more often than not, but it will still be less than perfect. If your forecasting model predicts a 20 percent chance of something happening while conventional wisdom says only 10 percent, then it is a bet worth taking en masse. Given a large enough sample size, the long run will reveal whether your forecast succeeds or fails. Example - buying stock with a price-to-earnings ratio of 15 is a proven strategy to make money in the long run, but it might be a big loser in the short run because of uncertainty. Eventually, the better forecasting system will become the conventional wisdom.
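The arithmetic behind "a bet worth taking" is just expected value: if the market prices an event at the conventional-wisdom probability but your model's probability is higher, each bet is profitable on average even though any single bet usually loses. A sketch, assuming fair odds at the market's implied probability (my own framing, not the book's):

```python
def expected_value(true_prob, market_prob, stake=1.0):
    """Expected profit of a bet priced at the market's implied probability,
    assuming our forecast gives the true probability of the event."""
    payout = stake * (1 - market_prob) / market_prob  # fair-odds payout on a win
    return true_prob * payout - (1 - true_prob) * stake

# Our model says 20%; conventional wisdom prices the event at 10%.
# Positive expected value: roughly +$1 per $1 staked in the long run,
# even though we still lose the bet 80% of the time.
print(expected_value(0.20, 0.10))
```

Note the asymmetry: the bet loses four times out of five, so only a large sample size lets the edge show up as profit rather than noise.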

A good forecasting system will openly admit its own shortcomings: uncertainties that the model did not, or could not, account for. A strength in one model could be considered a flaw in another, and opinions vary on what to incorporate and what to leave out. Therefore no model is perfect; they all have flaws. An aggregate of independent models often cancels out these flaws and therefore carries more predictive power than any single model. The consensus of Intrade often has more predictive power than the Vegas odds. This is not to say that a single model cannot beat the aggregate, but the burden of proof is high to show that the model was not just lucky in the short run. Did Public Policy Polling really do a better job than all the other polling firms in the 2012 election, or was it just luck? The sample size is small, so it is difficult to know for sure.
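The cancellation works because independent errors partly offset when averaged: if each model is roughly unbiased but noisy in its own way, the average's noise shrinks with the number of models. A toy simulation of that idea (the numbers and seed are illustrative, not from the book):

```python
import random

random.seed(42)
truth = 0.62  # the quantity every model is trying to estimate

# Five independent models, each unbiased but with its own random error,
# forecasting the same 200 events.
models = [[truth + random.gauss(0, 0.1) for _ in range(200)] for _ in range(5)]

# Mean absolute error of one model on its own...
single_error = sum(abs(x - truth) for x in models[0]) / 200

# ...versus the simple average of all five models per event.
aggregate = [sum(col) / 5 for col in zip(*models)]
aggregate_error = sum(abs(x - truth) for x in aggregate) / 200

print(single_error > aggregate_error)  # True: the average beats the lone model
```

The same logic explains why a consensus like Intrade's is hard to beat: a lone model has to overcome the variance-reduction advantage that averaging gets for free.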


Both Nate Silver and Michael Lewis were part of the sabermetric revolution in baseball, now known as Moneyball after the latter's book. The lesson they note is an important one:

You can be doing the right thing (statistically speaking) and still have bad luck in the short run. It does not mean you should change strategies, but rather give time for results to show.

Billy Beane had great posture in the batter's box. When he started to strike out a lot, he was so embarrassed that he would crouch the next time up. Doing so made the strike zone smaller and therefore prevented more strikeouts, but it also took the power out of his swing. He went from a power hitter who both struck out and hit home runs (think Babe Ruth) to a guy who consistently grounded out to second base. His batting average and slugging percentage fell dramatically. Because he could not shrug off the strikeouts as simply bad luck, his career as a baseball player was eventually short-circuited.

Both are fantastic books that I would highly recommend.




