Fooled by common interest

Lew McCreary, writing on the Harvard Business Review Editors’ Blog, covers two of my favorite topics (prediction markets and nipping stupidity in the bud) with How to Kill Bad Projects:

Project owners creatively spun results for political reasons—mainly to prevent funding from being yanked. Consequently, there was a gaping disconnect between the project people down at ground level and the business leaders farther up the food chain when it came to understanding how projects were actually progressing. The leaders tended to think things were going much better than they actually were.

The problem of corrupted information flows stayed with Siegel and ultimately led him to found his current company, Inkling Markets, a software-as-service venture aimed at helping companies conduct successful prediction markets. What does a prediction market have to do with eliminating spin? Siegel sees an opportunity to produce higher quality decision support in businesses by tapping anonymous input “from people who aren’t normally asked their opinions, in samples large enough to filter out individual agendas.”

In the case of an internal prediction market, employees might be asked to weigh in anonymously (wagering a sum of token currency) on a statement like this: “The Voldemort Project will meet all of its defined performance targets by the end of 2008.”
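The mechanics here are simple enough to sketch. A common way to run such a market is Hanson's logarithmic market scoring rule (LMSR); the toy BinaryMarket below is a hypothetical illustration of that mechanism, not Inkling's actual implementation:

```python
import math

class BinaryMarket:
    """A yes/no prediction market priced with Hanson's LMSR."""

    def __init__(self, statement, b=100.0):
        self.statement = statement
        self.b = b        # liquidity: larger b means prices move more slowly
        self.q_yes = 0.0  # outstanding YES shares
        self.q_no = 0.0   # outstanding NO shares

    def _cost(self, q_yes, q_no):
        # LMSR cost function: C(q) = b * ln(e^(q_yes/b) + e^(q_no/b))
        return self.b * math.log(
            math.exp(q_yes / self.b) + math.exp(q_no / self.b))

    def price_yes(self):
        # Instantaneous YES price, i.e. the market's implied probability
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy(self, outcome, shares):
        """Buy shares of 'yes' or 'no'; returns the token-currency cost."""
        before = self._cost(self.q_yes, self.q_no)
        if outcome == "yes":
            self.q_yes += shares
        else:
            self.q_no += shares
        return self._cost(self.q_yes, self.q_no) - before

market = BinaryMarket("The Voldemort Project will meet all of its defined "
                      "performance targets by the end of 2008.")
print(round(market.price_yes(), 2))   # 0.5 before anyone trades
cost = market.buy("no", 50)           # an anonymous pessimist wagers on NO
print(round(cost, 1), round(market.price_yes(), 2))  # ~28.1 tokens, ~0.38
```

Because the wagers are anonymous and tokens are at stake, traders have an incentive to reveal what they actually believe rather than what their boss wants to hear, which is exactly the spin-filtering Siegel describes.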

Unfortunately, the post includes just a bit of its own stupidity (emphasis added):

While many are naturally captivated by the black-swan-finding potential of prediction markets, another sweet spot may be their use as a form of institutional lie detection—guaranteeing the integrity of internal reporting and keeping the progress of business initiatives transparent.

What the heck is he talking about? I have never heard of anyone claiming that a prediction market could find a black swan. To the contrary, a black swan is almost by definition something a prediction market will fail to signal: the knowledge does not exist to be aggregated. Chris Masse quoting Nassim Taleb:

If, as Niall Ferguson showed, war bonds did not forecast the great war, it was a Black Swan

Now prediction markets and black swans both have something to do with prediction and probability, but they’re otherwise ships passing not in the night, but on opposite sides of the globe — with one in the night.

DRM strikes me as another example of people fooled by common interest, in this case between cryptography and copyright enforcement. Both have something to do with preventing someone from getting access to information. That doesn't make one a tool for the other (in either direction). Of course the knowledge that cryptography can't enforce copyright was distributed, but apparently not visible in the right places, resulting in lots of bad projects.

Via Inkling.

4 Responses

  1. victor says:

    from the article: “what other methods have worked in combating grade inflation awarded to undeserving projects?”

    How about: the boss walks down the hall (?)

    I’m sure there’s more to the lie-detecting software service that the post mentions, but in the successful companies I’ve worked for, the executives in charge of allocating budgets and resources would never take any proposal/presentation by mid-management at face value, and always did their own research by, you know, talking to the engineers.

    meanwhile “black swan” has become a buzzword, that’s all. People buy that book, never read it (or, like me, read it and don’t get it), and assume it’s safe to throw the term around.

  2. Adam says:

    Mike,

    The context of that discussion was talking about allowing people to create their own markets vs. having them only be run by a central entity or only through recommendations by a consulting firm.
    We were also talking about the insights you may get from running prediction markets, insights that are not readily apparent in the market results themselves.

    The original point was that by allowing people to ask as many questions as possible, the questions may be a signal themselves, pointing to something that you didn’t previously know about. For example, someone might ask a question about the probability of a risk factor occurring that you never even considered before. That would never have been uncovered otherwise, because the “prediction market administrators” wouldn’t even have known to ask.

  3. gurdonark says:

    I think that DRM shows the dangers of focusing on solving the problem at hand without understanding the implications of the problem at hand.
