outcome-irrelevant learning


I took a year-long break from news, starting in the spring of 2015, on the advice of my doctor, to reduce stress. It helped a bit, and I needed the help. I was working on the last of my hours for licensure in a stressful environment. It was worth it to give up my standing as a good citizen who keeps up with current events.

Then, a year later, I decided to listen to the back episodes of my main news sources, Fareed Zakaria’s Global Public Square and KCRW’s Left, Right & Center,* figuring that old news should be less stressful and that my good-citizenship could use some updating.

I found that old news is almost infinitely less stressful than new news. It is also, of course, significantly less interesting, probably through the same mechanism. But the main lesson for me was about spin. Listening to pundits and guests talk a year ago about the news, I realized that they are constantly making, or at least implying, predictions. Maybe every third declarative sentence is a prediction. And from the vantage point of a year later, it is clear that these extremely intelligent, well-informed people are very, very bad at predicting the future. Predictions with no predictive value are just spin, an attempt to create the future by moving the narrative in the direction of your ideology.

That news is largely spin is not a major theoretical revelation, but it has been a big deal to me experientially. It reminds me of the first time a press release I’d written appeared, with only minor edits, in a newspaper under a reporter’s name. I’d known from my publicity classes that 80% of print media was rewritten press releases, but seeing my words there in print, looking so official, I felt my brain shift: Just about everything you read exists because someone else has a vested interest in your thinking what they want you to think. And the same goes for words spoken on news shows.

So after catching up on news and realizing this, I very nearly went off it again. How is it useful to listen to all this spin? It takes up a fair amount of time that could be spent reading or studying. Or, I thought, maybe I’d search for a news source that offered no “analysis,” just descriptions of events. Eventually I decided/rationalized that I’d be missing out on the most entertaining few months of news in my lifetime, so I stayed in it. To temper the stress, I’ve added some much nerdier sources, mostly FiveThirtyEight Elections, Vox’s The Weeds, and The Daily Evolver. It helps to have people talking about data, statistics, policy, and theory.

Maybe I’ll go back off news after the election. Maybe all media for a while. We’ll see.

—–

*I don’t mean to pick on GPS or LR&C, by any means. (Though I do consider LR&C a perfect example of outcome-irrelevant learning.) They are both really good shows, and intended to be analysis of current events, not just descriptions.


I get half of my political news and analysis from a great podcast called Left Right & Center. (The other half is from Fareed Zakaria’s Global Public Square.) LR&C is an ongoing conversation between three guys from different political perspectives on what’s happened this week, and has been very valuable for the development of my own political thinking.

The other day, I was listening to another great podcast, This Week in Microbiology, and it hit me that these two shows have the exact same format. TWiM is also an ongoing conversation between three guys about the news of the week. The superficial difference is that TWiM is about bacteria and LR&C is about US politics.

The more abstract difference between these two podcasts, though, is that Left, Right & Center is an exercise in outcome-irrelevant learning, while This Week in Microbiology is an exercise in outcome-relevant learning. That is to say, the empirical events of the week change the opinions of the TWiM guys but almost never change the opinions of the LR&C guys. This is a huge difference. On TWiM, when there is a disagreement, they look up what is known about the issue and almost immediately come to an agreement based on facts: either one person is right and the other wrong, or else we really don’t yet know the answer to that question.

On LR&C, when there is disagreement (which there is on every topic), each fact that comes into the conversation is either disputed or used to prove each person’s own point. In politics, the facts are basically irrelevant. Makes me wonder why it remains so interesting.

Jim Berkland seemed to predict a large earthquake in mid- to late March 2011 somewhere in North America. Watch the footage here. (The Fox commentator is pretty funny. At one point he says to pay attention because “he is a pretty good geologist.”)

There was no large earthquake during that time, but we can’t really know if Berkland was technically wrong, because what he actually predicted was a “high probability” of a large earthquake in North America. If you want to know how accurate a predictor who uses language like this is, you have to track the outcomes of a whole bunch of their predictions, not just one. This is what Philip Tetlock does in his research on prediction accuracy: tracking the outcomes of hundreds of predictions by political experts. He also had to force the experts to make predictions specific enough to come out either true or false, not ambiguous, which is not always an easy task. Berkland, while casting a wide net, was fairly precise with “large earthquake” and “North America,” though we must wonder whether he would have claimed success if there had been a large earthquake, say, in the northern Pacific.

I’m not sure how many earthquake predictions Berkland has made, but if there have been enough, we could judge his rough accuracy: When he predicts a high probability of an earthquake, does it happen most of the time? When he predicts a low probability of an earthquake does it usually not happen? How about a medium probability?
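That kind of record-keeping can be sketched in a few lines of Python. This is a toy illustration with invented numbers, not Berkland’s actual predictions: a Brier score (the mean squared error of the stated probabilities) plus a simple calibration table that answers exactly the high/medium/low questions above.

```python
# Toy sketch of scoring a probabilistic forecaster.
# The prediction/outcome data below are made up for illustration;
# they are not Berkland's actual record.

def brier_score(probs, outcomes):
    """Mean squared difference between stated probabilities and what
    happened (1 = event occurred, 0 = it didn't). Lower is better."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def calibration(probs, outcomes):
    """Bucket predictions into low / medium / high stated probability and
    report (count, observed frequency) for each bucket."""
    buckets = {"low": [], "medium": [], "high": []}
    for p, o in zip(probs, outcomes):
        if p < 1 / 3:
            buckets["low"].append(o)
        elif p <= 2 / 3:
            buckets["medium"].append(o)
        else:
            buckets["high"].append(o)
    return {name: (len(obs), sum(obs) / len(obs) if obs else None)
            for name, obs in buckets.items()}

# Made-up example: six "probability of a large earthquake" style calls.
probs = [0.9, 0.9, 0.8, 0.2, 0.1, 0.5]
outcomes = [1, 1, 0, 0, 0, 1]
print(brier_score(probs, outcomes))
print(calibration(probs, outcomes))
```

A well-calibrated forecaster’s “high probability” bucket should show events happening most of the time, and the “low” bucket rarely; a forecaster can be confidently wrong and still sound impressive on television, which is why the bookkeeping matters.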

The point is, if your prediction is of a probability, rather than a certainty, of an event, we need to do some statistics to figure out if you’re a good predictor. And this is the form in which careful people make their predictions. If, on the other hand, you tend to make predictions about certainties (100% or 0% probability events), it’s quite a bit easier to check your accuracy, as long as you make sufficiently specific, falsifiable predictions. Most predictions by ideologues, for example, set up what Tetlock calls an “outcome-irrelevant learning situation”: a situation in which the predictor can claim they were right no matter what actually happens. Every ideologue, therefore, is in a position to explain whatever happened using their own ideology.

An example of that may be the Mayan-calendar predictions. Here is Graham Hancock on Art Bell’s radio show, seeming to predict something happening on December 21, 2012. It is full of talk of cataclysms, the end of the world, tumult, a ball of fire hitting the earth, etc. (And lots of talk about how accurate the Mayan calendar was, as if having a really accurate way to measure time lends credence to your predictions. Better ask the guy who invented the atomic clock!) I bet these guys will be patting themselves on the back on 12/21/2012 if a ball of fire does hit the earth. But if nothing particularly tumultuous happens, will they be wrong about anything? No. They are not precise at all, and they attach no probability to their “prediction.” There are plenty of “just mights” and “maybes” and “a window of about 40 years.” They even say that if humanity gets its act together in some vague way, we might avert what may or may not have been coming. This is a perfect setup for an outcome-irrelevant learning situation.

Tetlock says that when predictors are wrong, they generally either claim to be right in some way, based on the fuzziness of their prediction, or they use one of several “belief system defenses.” The most common of these is “Just off on timing.” The other two major defenses are the upward counterfactual defense, or “you think this is bad?” and the downward counterfactual defense, or “you think this is good?”

If nothing particularly tumultuous happens on 12/21/2012, and we ask Bell and his guest about it, how will they respond? They might use “just off on timing,” and blame our modern, inaccurate calendars. More likely they would claim to have been right, something like, “All the war and bad stuff happening on the earth, this is what we were talking about. It’s just a lot more slow and drawn out than we thought.” There is some small chance that they might cop to being wrong. I haven’t listened to Bell in over a decade, and I can’t remember how he handles his predictors being wrong, or if he even addresses it.

Berkland could also claim to be right: “Well, there was a high probability of a large earthquake, but not everything with a high probability happens every time.” A “just off on timing” defense would be pretty weak for him, since timing is everything in earthquake prediction.

The third predictor I’ve been thinking about, though, has given himself very little wiggle room. It takes guts to make a prediction like this. According to Harold Camping, next Saturday, May 21, 2011:

“A great earthquake will occur; the Bible describes it as “such as was not since men were upon the earth, so mighty an earthquake, and so great.” This earthquake will be so powerful it will throw open all graves. The remains of all the believers who have ever lived will be instantly transformed into glorified spiritual bodies to be forever with God.

“On the other hand the bodies of all unsaved people will be thrown out upon the ground to be shamed.

“The inhabitants who survive this terrible earthquake will exist in a world of horror and chaos beyond description. Each day people will die until October 21, 2011 when God will completely destroy this earth and its surviving inhabitants.”

That’s from his website, which you can see here. I have also heard Camping say that millions of people are certain to die on May 21, 2011, and every day thereafter until the very end, October 21, 2011. I have heard him say “It is going to happen.” I have heard him say “It is absolutely certain.” I was disappointed when I heard him back down from that, recently, saying he can’t be absolutely certain, but he has stuck with “going to happen” and “there is no doubt.”

I wonder how Camping will react if his predictions are wrong. The counterfactual defenses won’t apply at all. It will be very difficult to argue that he was right in some way unless there is at least the largest earthquake ever recorded (that would be at least a 9.6), all buried bodies are somehow exposed (ideally as a result of the earthquake), millions of people die on May 21, and approximately 7 billion people die by October 21.

So my prediction is that he will use “just off on timing” and go back to calculating the real day of judgment. Based on social psychology research, I will also predict that, in general, this event will increase believers’ conviction rather than decrease it. And if I am wrong, I will do my best to just admit it.