I recently read Nate Silver’s The Signal and the Noise, a book about statistical prediction. Silver is famous for building an innovative system for predicting baseball performance, and later forecasting the results of the 2012 American presidential election. In his book, he explores a vast array of areas where statistical models are used to predict the future — from baseball and politics to the weather, earthquakes and even terrorism — and investigates how these models have improved through time.

The central premise of the book is that the best predictive models apply the so-called Bayesian approach. In contrast to frequentist methods, which rely only on the information in a small data sample, Bayesian models bring together three types of information. For example, if you want to determine whether a patient has breast cancer after a positive mammogram, you need to know three things: the general prevalence of breast cancer (the so-called prior probability), the probability of a positive mammogram if the patient has cancer, and the probability of a positive mammogram if the patient does not have cancer. If you take a patient in her forties, the prior probability of breast cancer is relatively low: only 1.4%. Now assume our mammogram detects 75% of breast cancers and incorrectly claims the presence of cancer only 10% of the time. Although a positive mammogram sounds like bad news, it raises the probability of cancer in our patient to only about 10%. This counter-intuitive result surprises many people and showcases the importance of Bayesian reasoning.
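
To make the arithmetic explicit, here is a minimal sketch in Python of that calculation via Bayes' theorem, using the numbers above (the function and variable names are my own, not the book's):

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(cancer | positive mammogram) via Bayes' theorem."""
    true_positives = prior * sensitivity                 # has cancer, test positive
    false_positives = (1 - prior) * false_positive_rate  # no cancer, test positive
    return true_positives / (true_positives + false_positives)

# A patient in her forties: 1.4% prior, 75% sensitivity, 10% false-positive rate.
print(posterior(prior=0.014, sensitivity=0.75, false_positive_rate=0.10))  # ~0.096
```

The prior dominates the outcome: even a reasonably reliable test cannot lift a 1.4% prior much above 10%.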

Bayesian methods pervade many areas of prediction, and Silver explores quite a few of them: sports, politics, chess, finance, terrorism, weather, poker, earthquakes, and so on. This choice of breadth over depth makes sense for a popular science book, but it also makes the whole rather anecdotal. There are no answers here for readers who want more than the very basics of Bayes’ theorem, or who hope for deeper insight into how exactly Silver arrived at his own accurate predictions about sports and politics. Still, the book has many interesting and entertaining moments.

One part I particularly liked was Silver’s discussion of political pundits. In it, he reviews a study by Philip Tetlock, a professor of psychology and political science who was fascinated by the fact that so few political experts had predicted the collapse of the Soviet Union. Tetlock showed that, when asked to predict future political events, the political experts in his survey did barely better than random chance on average. How well individual experts fared correlated largely with their personality type: in general, people who believe in Big Ideas (law-like principles that govern the world) turned out to be considerably worse at forecasting than people who combine many little ideas and approaches. Unfortunately, the media tends to prefer the former type of personality, even to the extent that “the more interviews that an expert had done with the press, Tetlock found, the worse his predictions tended to be.”

Another fascinating study that Silver introduced me to is John Ioannidis’ paper Why Most Published Research Findings Are False. In it, Ioannidis cites “a variety of statistical and theoretical arguments to claim that (as his title implies) the majority of hypotheses deemed to be true in journals in medicine and most other academic and scientific professions are, in fact, false”. His claim is borne out by the experience of Bayer Laboratories, which found it was unable to replicate two-thirds of the positive findings in medical journals. Silver rightly notes that in this era of Big Data, our predictions may actually be more prone to failure than before. After all, as the amount of information grows, the number of hypotheses to investigate increases exponentially, and with it the number of false positives we will find.
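
As a toy illustration of that last point (my own sketch, not an example from the book): if every hypothesis we test happens to be false, the p-values we compute are uniformly distributed, so a 5% significance threshold still flags roughly one in twenty hypotheses as “findings”. The more hypotheses we test, the more false positives we collect:

```python
import random

random.seed(0)
alpha = 0.05  # conventional significance threshold

for n_hypotheses in (100, 1_000, 10_000):
    # Under a true null hypothesis, the p-value is uniform on [0, 1].
    p_values = [random.random() for _ in range(n_hypotheses)]
    false_positives = sum(p < alpha for p in p_values)
    print(f"{n_hypotheses:>6} null hypotheses -> {false_positives} false positives")
```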

Finally, the most entertaining chapter in the book may be the story of how Deep Blue, the chess computer, defeated Garry Kasparov in 1997. Although Kasparov won the first game of their six-game match, he was baffled by one seemingly pointless move by Deep Blue just before it resigned the game. Kasparov couldn’t understand how the computer could make such a tactical error in such a simple position. As he studied the position afterwards, he found that the conventional play would have led Deep Blue to lose anyway, but only more than twenty moves later. The implication that the computer could see more than twenty moves ahead frightened him so much, Silver writes, that Kasparov would never defeat Deep Blue again. In fact, in the next game Kasparov resigned, even though a draw was only seven moves away. “I was so impressed by the deep positional play of the computer”, he later said, “that I didn’t think there was any escape.”

It’s clear there’s a lot to like about Nate Silver’s book. It covers a wealth of topics, offers many interesting anecdotes, and squeezes an enjoyable introduction to Bayes’ theorem in between. Personally, I would have liked a little more depth and a little less breadth: some chapters could have been replaced by a more detailed discussion of the predictive models that Silver mentions. It’s a difficult balance, however, and I appreciate that The Signal and the Noise is mainly intended for a general audience. It certainly deserves a wide readership in these times when data and algorithms shape so much of our lives. This time, for sure, there isn’t any escape.