Feral Jundi

Saturday, March 7, 2009

Kaizen: The Expert on Experts; Foxes and Hedgehogs

Filed under: Kaizen — Matt @ 2:07 AM

   This was a tough article to categorize, because I originally thought it would be great for the Building Snowmobiles category.  Col. Boyd would have liked this, because the author echoes many of Boyd's philosophies: from the importance of randomness in warfare (experts have a hard time with randomness, which is great for beating an expert on the battlefield), to Boyd's aversion to being called an expert or to committing to one doctrine, because of what that implies, namely that he knew everything or that the doctrine was the end-all.  And by this article's definition, I would definitely call Boyd a Fox, and not a Hedgehog:

 What makes some forecasters better than others?

The most important factor was not how much education or experience the experts had but how they thought. You know the famous line that [philosopher] Isaiah Berlin borrowed from a Greek poet, “The fox knows many things, but the hedgehog knows one big thing”? The better forecasters were like Berlin’s foxes: self-critical, eclectic thinkers who were willing to update their beliefs when faced with contrary evidence, were doubtful of grand schemes and were rather modest about their predictive ability. The less successful forecasters were like hedgehogs: They tended to have one big, beautiful idea that they loved to stretch, sometimes to the breaking point. They tended to be articulate and very persuasive as to why their idea explained everything. The media often love hedgehogs. 

    I also read this and started looking back at all of the examples of foxes and hedgehogs in the media and in this industry, from the anchorman on some cable news show, to some jackass you have come across out in the field or even online, all trying to convince us that they are the so-called 'expert'.  With the studies Phil Tetlock has presented, it is nice to see what really defines a forecaster or so-called expert.  I am sure that some of the same rules apply to Opinion Leaders or Mavens, and they further solidify the reasons why we even listen to these folks.  It also gives a person a set of rules to follow, if they want to be more respected as a teacher, forecaster, leader, or Maven.  Be the fox.  Be the self-critical, eclectic thinker who is willing to update their beliefs when faced with contrary evidence, and always question grand schemes and be modest about your ability to predict.  You can definitely apply that to whatever niche you claim as yours, and constantly improve your standing as a leader in your field. –Matt

 —————————————————————– 

Why the experts missed the crash

Which forecasters should you trust on the direction of the economy and the markets? Ask Philip Tetlock, who knows the kind of expert worth listening to – and what to listen for.

By Eric Schurenberg, Money Magazine

Last Updated: February 18, 2009: 4:10 PM ET

(Money Magazine) — You’ve probably never wanted expert insight more than today – and never trusted it less. After all, the intelligent, articulate, well-paid authorities voicing these opinions are the ones who created the crisis or failed to predict it or lost 30% of your 401(k) in it.

Yet we can’t tear ourselves away. The crisis has brought record ratings to CNBC and its parade of talking heads. You’re probably still entrusting your portfolio to the experts running mutual funds. Despite everything, we can’t shake the belief that elite forecasters know better than the rest of us what the future holds.

The record, unfortunately, proves no such thing. And no one knows that record better than Philip Tetlock, 54, a professor of organizational behavior at the Haas Business School at the University of California-Berkeley. Tetlock is the world’s top expert on, well, top experts. Some 25 years ago, he began an experiment to quantify the forecasting skill of political experts.

By the time he finished in 2003, Tetlock had signed up nearly 300 academics, economists, policymakers and journalists and mapped more than 82,000 forecasts against real-world outcomes, analyzing not just what the experts said but how they thought: how quickly they embraced contrary evidence, for example, or reacted when they were wrong. And wrong they usually were, barely beating out a random forecast generator.

But you shouldn’t simply write all gurus off. Tetlock’s research found that one kind of expert turns out consistently more accurate forecasts than others. Understanding what makes them better can help you make more reliable predictions in your own life. Tetlock explained it all to Money’s former managing editor, Eric Schurenberg, in a recent interview.

Why did so many experts miss the economic crash?

The people intimately involved in packaging [financial derivatives like] CDOs must have had some sense that they were unstable. But their superiors seem to have been lulled into complacency, partly because they were making a lot of money very fast and had no motivation to look closer. So greed played a role.

But hubris may have played a bigger one. Remember Greek tragedy? The gods don’t like mortals who get too uppity. In this case the biggest source of hubris was the mathematical models that claimed you could turn iffy loans into investment-grade securities. The models rested on a misplaced faith in the law of large numbers and on wildly miscalculated estimates of the likelihood of a national collapse in real estate. But mathematics has a certain mystique. People get intimidated by it, and no one challenged the models.

Americans were shocked at how wrong the experts were. You weren’t. Why not?

My research certainly prepared me for widespread forecasting failures. We found that our experts’ predictions barely beat random guesses – the statistical equivalent of a dart-throwing chimp – and proved no better than predictions of reasonably well-read nonexperts. Ironically, the more famous the expert, the less accurate his or her predictions tended to be.

Money has written about human mental quirks that lead ordinary folks to make investing mistakes. Do the same lapses affect experts’ judgment?

Of course. Like all of us, experts go wrong when they try to fit simple models to complex situations. (“It’s the Great Depression all over again!”) They go wrong when they leap to judgment or are too slow to change their minds in the face of contrary evidence.

And like all of us, experts have a hard time with randomness. I once witnessed an experiment that pitted a classroom of Yale undergrads against a lone Norwegian rat in a T-maze. Food was put in the maze in no particular pattern, except that it was designed to end up in the left side of the “T” 60% of the time. Eventually, the rat learned always to turn left and so was rewarded 60% of the time. The students, on the other hand, fell for a variant of the “gambler’s fallacy.” Picture a roulette player who sees a long sequence of red and puts all his money on black because it’s “due.” Or more subtly, he looks for complex, alternating patterns – the same kind of mental wild-goose chase that technical stock pickers go on. That’s what happened to the Yalies, who kept looking for some pattern that would predict where the food would be every time. They ended up being right just 52% of the time. Outsmarted by a rat.
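The arithmetic behind the rat's win is worth spelling out: always picking the majority side pays off 60% of the time, while "probability matching" (guessing left 60% of the time and right 40%) pays off only 0.6 × 0.6 + 0.4 × 0.4 = 52% of the time. A minimal simulation sketch (the variable names and trial count are my own, not from the article) shows the gap:

```python
import random

random.seed(42)
TRIALS = 100_000

# Food appears on the left side of the "T" 60% of the time, at random.
food_left = [random.random() < 0.6 for _ in range(TRIALS)]

# The rat's strategy: always turn toward the majority side (left).
rat_correct = sum(food_left)

# The students' strategy, roughly "probability matching": guess left
# 60% of the time and right 40%, chasing a pattern that isn't there.
student_correct = sum(
    (random.random() < 0.6) == side for side in food_left
)

print(f"rat:     {rat_correct / TRIALS:.1%}")   # ~60%
print(f"student: {student_correct / TRIALS:.1%}")  # ~52%
```

The simulation lands on roughly the same numbers Tetlock cites: about 60% for the rat and about 52% for the pattern-seekers.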

What makes some forecasters better than others?

The most important factor was not how much education or experience the experts had but how they thought. You know the famous line that [philosopher] Isaiah Berlin borrowed from a Greek poet, “The fox knows many things, but the hedgehog knows one big thing”? The better forecasters were like Berlin’s foxes: self-critical, eclectic thinkers who were willing to update their beliefs when faced with contrary evidence, were doubtful of grand schemes and were rather modest about their predictive ability. The less successful forecasters were like hedgehogs: They tended to have one big, beautiful idea that they loved to stretch, sometimes to the breaking point. They tended to be articulate and very persuasive as to why their idea explained everything. The media often love hedgehogs.

How do you know whether a talking head is a fox or a hedgehog?

Count how often they press the brakes on trains of thought. Foxes often qualify their arguments with “however” and “perhaps,” while hedgehogs build up momentum with “moreover” and “all the more so.” Foxes are not as entertaining as hedgehogs. But enduring a little tedium is worth it if you want realistic odds on possible futures.

So if you were looking for a money manager, you’d want a fox?

If you want good, stable long-term performance, you’re better off with the fox. If you’re up for a real roller-coaster ride, which might make you fabulously wealthy or leave you broke, go hedgehog.

But it was doomster hedgehogs like money managers Robert Rodriguez and Jeremy Grantham who first saw the crisis coming.

Hedgehogs are sometimes way, way out front. But they can also be way, way off.

Most of the experts who called the downturn are still bearish. Would you expect them to be able to call the rebound too?

No. In our research, the hedgehogs who get out front don’t tend to stay out front very long. They often overshoot. For example, among the few who correctly called the fall of the Soviet Union were what I call ethno-nationalist fundamentalists, who believed that multi-ethnic nations were likely to be torn apart. They were spectacularly right with Yugoslavia and the Soviet Union. But they also expected Nigeria, India and Canada to disintegrate. That’s how it is with hedgehogs: You get spectacular hits but lots of false alarms.

How can we nonexperts test our own hunches?

Listen to yourself talk to yourself. If you’re being swept away with enthusiasm for some particular course of action, take a deep breath and ask: Can I see anything wrong with this? And if you can’t, start worrying; you are about to go over a cliff.

Considering how wrong they are, why are the same old talking heads continuing to give advice?

Unless you force experts to be specific, as we did, they can make predictions that are difficult to falsify. You know the cynical cliché: "Never assign a date and a number to the same prediction." That lets you get away with saying things like "Yes, I did say the Dow will hit 36,000, and it will – just wait. I was merely a little early."

Experts are also very good at explaining errors away by concocting counterfactual history. “If only the world had heeded the warnings of, say, [libertarian-leaning Texas Congressman] Dick Armey about Fannie Mae and Freddie Mac, the financial crisis would have been far less severe.” This is a ridiculous line of reasoning. Nobody knows what would have happened in a hypothetical world.

Who are you listening to in this market?

I look for a combination of cognitive flexibility and high IQ. Moody’s Economy.com chief economist Mark Zandi is not a bad person to listen to. He was somewhat out in front in anticipating this crisis and has a capacity for seeing different points of view. Larry Summers, head of the National Economic Council, also has the kind of intelligence and cognitive style that makes him a good bet.

Could we live without experts?

No way. We need to believe we live in a predictable, controllable world, so we turn to authoritative-sounding people who promise to satisfy that need. That's why part of the responsibility for experts' poor record falls on us. We seek out experts who promise impossible levels of accuracy, then we do a poor job keeping score.

First Published: February 18, 2009: 8:01 AM ET

Story Here
