Archive for June, 2005

Autonomous Liberalization

Thursday, June 23rd, 2005

Tyler Cowen gives CAFTA a very qualified endorsement which I mostly agree with. The clincher:

Failure of the treaty would be a disaster, again for symbolic reasons. Trade negotiations would slow down significantly, and the age of trade agreements might be over.

What age of trade agreements? According to the World Bank’s Global Economic Prospects: Trade, Regionalism, and Development, unilateral trade liberalization accounted for two-thirds of tariff reductions over the past twenty years; regional agreements like CAFTA accounted for only ten percent.

Downgrade symbolism and upgrade strategy: unilateral free trade is the way forward, followed by worldwide agreements, the latter spurred by the former. And drop the non-trade stuff, like exporting intellectual protectionism.

Still, I find it hard not to root for CAFTA, if only because the economic neanderthals on the other side are so ugly.

(CAFTA is doubtless a very ugly treaty too, with payoffs and exceptions galore. Dare I say that those pursuing treaties rather than unilateral liberalization overestimate public good problems and underestimate rent seeking problems?)

Sort of open source economic models

Tuesday, June 14th, 2005

Mark Thoma is building an “open source” repository for economic models. Well, sort of open source. Unfortunately neither the four models included so far nor the initial post, which Thoma says is open source, says anything about copyright or licenses.

Unfortunately, under this default copyright regime, explicit licensing (or dedication to the public domain) is required for an open source project to scale. If five people contribute to a model posted to Thoma’s repository, neither the contributors (including the original author) nor anyone else has any right to distribute the resulting model or to allow others to modify it further.

That’s why open source projects use explicit open source licenses and open source repositories require each project in the repository to use an explicit license. That’s what an open source economic models repository, or indeed any repository that wants to emulate the open source model, should also do.

NB creators of open source economic models may wish to consider an open source-like license intended for “content” rather than code, e.g., the Free Documentation License (that’s what Wikipedia uses) or a liberal Creative Commons license (e.g., Attribution or Attribution-ShareAlike).

Also see the open access movement, commons-based peer production and Science Commons. I don’t know how familiar the mainstream economics profession is with these concepts, but “they” ought to be.

Via Alex Tabarrok.

Aubrey de Grey at Stanford

Sunday, June 12th, 2005

Biogerontologist Aubrey de Grey gave lectures at Stanford Thursday evening and Friday morning. de Grey laid out his argument for life extension research at the first talk. My brief summary:

  • There are seven changes at the cellular level that accumulate and eventually cause pathology.
  • No new such changes have been discovered in over twenty years despite massively increased capability to study organisms at the cellular level in that time; seven is probably it.
  • All seven changes should be repairable at the cellular level.
  • Repair damage below levels that cause pathology, and goodbye aging.
  • If an individual lives through the first breakthrough that extends lifespan by decades they will probably survive through the next, and the next… Hello, indefinite lifespan.
  • Aging and death are barbaric and must be stopped.

de Grey calls the second to last point “escape velocity” and presented at least two arguments for it:

  • Once a breakthrough is made, science progresses in a relatively straightforward and rapid fashion for awhile.
  • Other primates’ aging is very similar to humans’, only at least twice as fast, so any new disease of extended life would show up in them first, leaving time to discover and cure it before any humans get it.

The second talk, apparently more intended for biologists, was a repeat of the first to a disappointing extent. I was prepared to understand very little, but de Grey spoke for a while on only one of his proposed solutions to one of the seven types of damage–extracellular junk. The solution takes a cue from bioremediation: find microbes that break down the extracellular junk. Where? Human remains of course. From Appropriating microbial catabolism: a proposal to treat and prevent neurodegeneration:

Soil microbes display astonishing catabolic diversity, something exploited for decades in the bioremediation industry. Environments enriched in human remains impose selective pressure on the microbial population to evolve the ability to degrade any recalcitrant, energy-rich human material. Thus, microbes may exist that can degrade these lysosomal toxins. If so, it should be possible to isolate the genes responsible and modify them for therapeutic activity in the mammalian lysosome.

Neat idea. Later de Grey said that this idea is the easiest to explain to non-specialists and that the others that he has personally worked on would have required far longer to introduce than the hour lecture format allowed.

de Grey is attempting to jump start anti-aging interventions with the Methuselah Mouse Prize[s] for extending the lifespan of mice, inspired by the X Prize. His “engineering” approach sounds good to me and I wholly endorse the goal of defeating aging. I will donate more once more information is provided about the participating scientists and their mice–not much is available at this point.

There are four (unfortunately not real money) claims related to the M Prize. Three directly concern the prize:

Methuselah Mouse Postponement. Predicting a 2929-day-old mouse by 2010/01/01. The current record is 1819 days.

Methuselah Mouse Post up 1 yr. Says there’s a 2/3 chance of a 2284-day-old mouse by 2012/11/01. That doesn’t seem to jibe with MMPost above (time to short MMPost I think). [Correction 20060102: I misread the claim. As of 20050612 it predicted a 2284-day-old mouse on 2010/02/01, which still didn’t jibe with MMPost, though the discrepancy was not as bad as I thought]

Methuselah Mouse Reversal<2015. The wording of this claim could be better (the current prize-winning mouse lived 1551 days; the claim would pay 1 if 3102-day-old mice were obtained by 2015/01/01). Last trade at .67, predicting, within 10 years, a 2590-day-old mouse with anti-aging interventions begun only late in life.

Immortality in mammal by 2015. Not really immortality, but three times a species’s maximum life span as of 1996. Possibly the world’s oldest mouse in 1996 was just shy of four years. If so, this claim would predict a less than one-in-five chance of a 4380-day-old mouse by 2015/12/31. (Another mammalian species could meet this claim.)
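As a quick back-of-the-envelope check (my own arithmetic, assuming 365-day years), converting the day counts above into years makes the claims easier to compare at a glance:

```python
# Back-of-the-envelope conversion of the claim thresholds above into years
# (my own arithmetic, assuming 365-day years).
DAYS_PER_YEAR = 365

thresholds = {
    "Current record": 1819,
    "MMPost (Postponement), by 2010/01/01": 2929,
    "MMPost up 1 yr, as corrected, by 2010/02/01": 2284,
    "Reversal<2015 payout threshold (2 x 1551)": 2 * 1551,
    "Immortality by 2015 (3 x ~4-year 1996 record)": 3 * 4 * DAYS_PER_YEAR,
}

for label, days in thresholds.items():
    print(f"{label}: {days} days ~= {days / DAYS_PER_YEAR:.1f} years")
```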

It would be very interesting to see versions of the above claims conditioned on the M Prize reaching some fundraising goal.

Typing International Apartheid

Saturday, June 11th, 2005

I claim that legal restrictions on the ability of people to travel, work and live across national borders are equivalent to apartheid, so naturally I’m intrigued by Randy McDonald’s Towards A Typology of Apartheid in response to a query from Jonathan Edelstein. McDonald lists six characteristics of an apartheid regime. Let’s see how the international version stacks up (read McDonald’s post for descriptions, I only reproduce openings below):

The group favouring apartheid is either a minority population or about to become a minority population.

In the case of the U.S., anti-immigration activists see an imminent threat of Anglo culture being swamped and ruined by Hispanics, and harbor fears that Mexican “elites” plan, with the help of Mexicans living in the U.S., to reconquer the southwestern U.S., lost by Mexico in the war that ended in 1848.

The group favouring apartheid believes itself to be indigenous.

In spades.

The group favouring apartheid believes that it must act immediately.

Anti-immigration activists want a “temporary” moratorium on all immigration and immediate “sealing” of the U.S.-Mexico border.

Under apartheid, each group must develop separately.

Check. They should fix their own country instead of coming here and stealing our jawbs and living off welfare, natch.

The group behind the apartheid system must establish as complete a monopoly over power as possible.

This may be a stretch, but consider the extent to which U.S. relations with and interventions in Mexico, Haiti, Cuba, and others are aimed at assuring that “they” don’t come “here” in masses.

Defending the apartheid system requires constant vigilance.

Of course. This feels like a throwaway, but I’ll note that anti-immigration activists often claim that “we” face an invasion. What but vigilance could be required?

I think McDonald may have missed two characteristics:

The apartheid system is natural. The regime only gives the force of law to the natural ordering of things. People naturally live and work in their homelands and are most comfortable in their own culture.

The apartheid system is moral. People who are not born into a culture cannot really buy into a culture and introducing these people leads to moral rot and cultural decline.

I apologize for the U.S.-centric nature of the above. Something similar could be written about anywhere borders are not open, particularly where the freedom and economic opportunities available to individuals differ greatly across them.

Zocalo experiment

Saturday, June 11th, 2005

Friday afternoon I saw a demo by Chris Hibbert of Zocalo, which is to be an open source platform for running markets. The demo involved playing an apparently classic experimental economics game originally run by Charles Plott.

The game was extremely simple, but it taught me more about the methods of experimental economics than the occasional popular accounts I have read over the years. I imagine that such games could be useful in basic education. The dynamics of power seem more intuitive than the dynamics of exchange, yet the former (politics, war and history seen through their lens) gets far more time (possibly this has something to do with the phenomenon of overestimating market failure and underestimating political failure). Perhaps in the near future youth participation in virtual world economies will help fill this educational gap.

Also of note: As of the demo, Zocalo is built on mod_pubsub (roughly, a JavaScript client in the browser keeps an HTTP connection open to the server, allowing real-time updates with no polling and no Flash, Java or similar required) and has a cool logo. I look forward to the results of further development.
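For the curious, here is a minimal Python sketch of the general push-over-HTTP idea described in that parenthetical. It is not mod_pubsub’s actual protocol or API, just an illustration of a client holding a request open until the server has something new to publish:

```python
# Minimal long-polling sketch (an illustration, not mod_pubsub): the client
# issues a GET that the server holds open until an event is published, so the
# client gets updates in near real time without tight polling.
import http.server
import threading
import time
import urllib.request

EVENTS = []                      # shared "topic" the server publishes to
EVENT_READY = threading.Event()  # signals waiting requests that data arrived


class LongPollHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Block (up to 30 seconds) until an event is published, then return it.
        EVENT_READY.wait(timeout=30)
        body = (EVENTS[-1] if EVENTS else "timeout").encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet


def publish(message):
    EVENTS.append(message)
    EVENT_READY.set()


if __name__ == "__main__":
    server = http.server.ThreadingHTTPServer(("localhost", 8765), LongPollHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # Publish a price update after two seconds, as a market server might.
    threading.Timer(2.0, publish, args=["bid: 42"]).start()

    start = time.time()
    with urllib.request.urlopen("http://localhost:8765/") as response:
        print(f"update after {time.time() - start:.1f}s: {response.read().decode()}")
    server.shutdown()
```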

Read the white paper: Zocalo: An Open-Source Platform for Deploying Prediction Markets.

Betting Policy Consequences

Thursday, June 9th, 2005

Michael Stastny quoting a closed Financial Times column:

President John F. Kennedy helped to revive the City of London in 1963 by imposing a tax on US investment in foreign securities. That made the international bond market move to London, allowing the City to regain its 19th century status as Wall Street’s rival in capital markets.

I did not know this bit of history. It seems like a perfect illustration of some obvious but often ignored truth, perhaps simply that policies have consequences. Consideration of more than policy advocates’ lies may be in order. Betting market prices may be one valuable source of more information.

A small irony then, that U.S. regulation is ensuring that the leading betting markets are located outside the U.S., largely in London. Eventually this may be a big deal:

It does not sound like a very worrying loss for Wall Street given its strong position in equities, bonds and derivatives. But Mr Bloomberg should watch out: in an arena of financial innovation that is rapidly converging with other forms of trading and investment, New York is drifting behind London.

As an anti-nationalist, I don’t care much where the leading markets locate; I just hate to see stupid policy implemented anywhere, including the U.S. If I were betting on the consequences of this policy I’d short New York.

Financial markets too gauche? Think through the likely consequences of heavy-handed cloning regulation.

Ugly metadata deployed

Friday, June 3rd, 2005

Peter Saint-André, a good person for preferring the public domain and much else, writes about Creative Commons metadata:

It’d be cool if smart search engines could automagically find web pages that are offered under one of the Creative Commons licenses.

I agree, which is why we (I work for Creative Commons, though I do not speak for them in this publication) built a prototype in early 2004 and a more robust beta based on Nutch later that year. March this year brought Yahoo! Search for Creative Commons, very recently also added to Yahoo! Advanced Search. I predict more and better for CC and other potentially metadata-enhanced searches.

For reasons unknown to mere mortals like me, CC recommends placing some RDF in an HTML comment as the proper way to “tag” a web page (Uche explains more here). Well, gosh, who thought that up? Are we not in possession of fine XHTML metadata technologies like the <meta/> tag?

Aaron Swartz thought it up, for good reasons. You can find a brief explanation, I believe written by Aaron, here (linked at the Wayback Machine for reference, as the current documentation may change). However, this doesn’t capture the most important reason, which I’ve had the pleasure of explaining a gazillion times, e.g., here

A separate RDF file is a nonstarter for CC. After selecting a license a user gets a block of HTML to put in their web page. That block happens to include RDF (unfortunately embedded in comments). Users don’t have to know or think about metadata. If we need to explain to them that you need to create a separate file, link to it in the head of the document, and by the way the separate file needs to contain an explicit URI in rdf:about … forget about it.

and here

Requiring metadata placed in the HEAD of an HTML page will dramatically decrease metadata adoption. The only reason so much CC metadata is out there now is that including it is a zero-cost operation. When the user selects a license and copies&pastes the HTML with a license statement and button into their web page, they get the embedded RDF without having to know anything about it. Getting people to take extra steps to include or produce metadata is very hard, perhaps futile. I tend to believe that good metadata must either be a side effect of some other process (e.g., selecting a license) or a collaborative effort by an interested community (e.g., Amazon book reviews, Bitzi, DMoz, MusicBrainz) (leaving out the case of $$$ for knowledge workers).

in reply to people who want CC metadata included with web documents in various fashions. On that, see my recent reply to someone else suggesting the same method Peter proposes:

There are zillions of options for sticking metadata into a [X]HTML document. If you must, use whatever you prefer. It is my concern to encourage dominant uses so that software can reliably find metadata. IMO there are now three fairly widely deployed schemes for CC licenses, not all mutually exclusive:

1. Embed RDF in HTML comment
2. rel="license" attribute on <a href="license-uri">
3. <link> to an external RDF file

#1 is our legacy format, the default produced by the licensing engine, very widely deployed
#2 is also now produced by the licensing engine, has support of small-s semantic web/semantic XHTML people, and will be RDF-compatible via GRDDL eventually
#3 is used by other RDF apps and is the only non-controversial means of including RDF with an XHTML document. Wikipedia publishes CC-compatible metadata using this method

In the future we’ll probably add a fourth, which will replace #1 and #2 in license engine output, when it gets baked into a W3C standard, which is ongoing — http://www.formsplayer.com/notes/rdf-a.html

Yes, RDF embedded in HTML comments is a horribly ugly hack. Eventually it’ll be superseded. In the meantime, massive deployment wins. Sorry.
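As a rough illustration of why converging on a few dominant schemes matters, here is a sketch (my own, not Creative Commons’ actual tooling) of how a crawler might test a fetched page for each of the three deployments listed above. The regular expressions are deliberately crude; a real implementation would use proper HTML and RDF parsers:

```python
# Crude detector (an illustration only) for the three CC metadata deployment
# schemes discussed above; a real crawler would parse HTML and RDF properly.
import re

def detect_cc_schemes(html):
    """Return the set of CC-license metadata schemes a page appears to use."""
    schemes = set()
    # 1. Legacy: an RDF block embedded inside an HTML comment.
    if re.search(r"<!--.*?<rdf:RDF.*?-->", html, re.DOTALL | re.IGNORECASE):
        schemes.add("rdf-in-comment")
    # 2. Semantic XHTML: rel="license" on an anchor pointing at the license URI.
    if re.search(r"<a[^>]+rel=[\"']license[\"']", html, re.IGNORECASE):
        schemes.add("rel-license")
    # 3. A <link> in the head pointing at an external RDF file.
    if re.search(r"<link[^>]+type=[\"']application/rdf\+xml[\"']", html, re.IGNORECASE):
        schemes.add("external-rdf-link")
    return schemes


if __name__ == "__main__":
    sample = """<html><head>
      <link rel="meta" type="application/rdf+xml" href="index.rdf" />
    </head><body>
      <a rel="license" href="http://creativecommons.org/licenses/by/2.0/">CC BY</a>
      <!-- <rdf:RDF xmlns="http://web.resource.org/cc/">...</rdf:RDF> -->
    </body></html>"""
    print(detect_cc_schemes(sample))  # all three schemes detected
```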

Kragen Sitaker on Dominant Assurance Contracts

Thursday, June 2nd, 2005

Kragen Sitaker thinks out loud about dominant assurance contracts for funding public goods, especially free software. My first post on dominant assurance contracts is here. A few thoughts regarding Kragen’s analysis follow.

On public goods:

Generally public goods tend to be underprovided

Almost by definition, but my intuition is that there are important and almost universally unacknowledged exceptions where the good is nonrival, production generates large private benefits, consumption opportunities are limited, or perhaps some combination of these, e.g., recorded music. However, I have no rigorous backing for this intuition. Todo: read existing literature on socially optimal copyright.

[Richard Stallman] would be a happier man today had he spent those years [writing free software] not working with computers at all

I don’t know whether Stallman is happy, but this sounds suspect. He has gained tremendous personal benefits through his programming that he probably couldn’t have obtained otherwise (though perhaps this does not matter, as he shouldn’t have expected to become famous and the leader of a very significant movement, unless he was a megalomaniac). It would be more interesting and clearer to make a case that the modal free software contributor acts selflessly, but that would be a long argument and beside the point, which I suppose is simply that unselfish action can produce some public goods.

On dominant assurance contracts:

I suspect that the analysis extends to a more general case, in which each contributor chooses the amount of their own contribution $S, the escrow agent performs the project if the total contributions are over some basic amount, and the extra refund is a specified percentage of the contribution rather than a specified dollar amount; but Tabarrok does not mention this in his paper.

Looks like a very useful extension.
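To make the generalized mechanism concrete, here is a toy sketch (my own illustration, not code from Tabarrok’s paper or Kragen’s post) in which contributors pledge arbitrary amounts and, on failure, are refunded with a bonus proportional to their pledges:

```python
# Toy settlement rule for the generalized dominant assurance contract sketched
# above (illustrative only): pledges of any size, and a percentage failure bonus.

def settle(pledges, threshold, failure_bonus_rate=0.05):
    """Return (funded, payouts), where payouts maps contributor -> cash returned.

    pledges: dict of contributor name -> pledged amount
    threshold: total needed for the entrepreneur to perform the project
    failure_bonus_rate: extra refund, as a fraction of each pledge, paid by the
        entrepreneur if the threshold is not reached
    """
    total = sum(pledges.values())
    if total >= threshold:
        # Success: pledges are kept by the escrow agent and the project proceeds.
        return True, {name: 0.0 for name in pledges}
    # Failure: everyone gets their pledge back plus the entrepreneur's penalty.
    return False, {name: amount * (1 + failure_bonus_rate)
                   for name, amount in pledges.items()}


if __name__ == "__main__":
    pledges = {"alice": 40.0, "bob": 25.0, "carol": 20.0}
    funded, payouts = settle(pledges, threshold=100.0)
    print("funded:", funded)     # False: 85 < 100
    print("payouts:", payouts)   # each pledge refunded plus the 5% bonus
```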

However, copyright places the risk on the artist, while dominant assurance contracts place the risk on the artist’s fans.

I think here the risk is of a worse than expected work. It ought to be possible for an artist to assume more risk by making fulfillment of the contract (and thus not having to refund contributions plus a penalty) contingent on some agreed and hopefully minimally gameable quality measure.

[Update 20050605: On second thought I’m confusing (or extending) the dominant assurance contract idea, which only stipulates that a failure penalty be paid when not enough resources are raised, not when a successfully funded project is not successfully completed.]

Someone also asked whether it was possible to model a dominant assurance contract as a normal assurance contract with a separate prediction market, like the Iowa Electronic Markets, in which people traded idea futures on the likelihood of the completion of the funding. I don’t know how to model it in those terms, although it might be possible.

I don’t know how to model an assurance contract plus prediction market hedging either, but I suspect it may not work as well as a dominant assurance contract.

First, with a dominant assurance contract only contributors receive a payoff in the case of failure. If contribution and failure payoff are unbundled, how are incentives to contribute any different from those of a plain assurance contract? One can hedge against failure without contributing to success.

Second, risk and the management of risk are transferred from the entrepreneur to the contributor. Managing risk by hedging securities is hard and costly. The entrepreneur offering the contract may be far more capable of managing risk than contributors.

Prediction market prices may prove helpful to entrepreneurs and potential contributors in deciding what contracts to offer and accept, but this is orthogonal to the structure of dominant assurance contracts, which attack contribution problems rather than revelation problems.
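Returning to the first point, a toy payoff comparison (my own numbers, ignoring the price of the hedge) may make it concrete: the dominant assurance contract’s failure payoff goes only to contributors, while an ordinary assurance contract plus a separate “project fails” market pays anyone holding the hedge, contributor or not:

```python
# Toy payoff comparison (illustrative numbers only) between a dominant
# assurance contract and an ordinary assurance contract plus a failure hedge.
VALUE = 20.0   # what the public good is worth to this person if produced
PLEDGE = 10.0  # the contribution (refunded on failure in both schemes)
BONUS = 0.5    # failure payoff per contributor (DAC) or per hedge held

def dac_payoff(contributes, funded):
    if funded:
        return VALUE - (PLEDGE if contributes else 0.0)  # good produced either way
    return BONUS if contributes else 0.0  # pledge refunded; bonus only to contributors

def assurance_plus_hedge_payoff(contributes, holds_hedge, funded):
    if funded:
        return VALUE - (PLEDGE if contributes else 0.0)
    return BONUS if holds_hedge else 0.0  # hedge pays whether or not one contributed

if __name__ == "__main__":
    # On failure, the DAC rewards having contributed; the unbundled version does not.
    print(dac_payoff(contributes=True, funded=False),    # 0.5
          dac_payoff(contributes=False, funded=False))   # 0.0
    print(assurance_plus_hedge_payoff(True, False, funded=False),   # 0.0
          assurance_plus_hedge_payoff(False, True, funded=False))   # 0.5
```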

Finally, Tabarrok suggests that the market for escrow agents should be highly competitive because there are low barriers to entry — all you have to do is write a three-line contract and hold some money, assuming that the possible contributors first hold some kind of competition to select which escrow agent they want to use. I think that’s a big assumption, and that escrow agents are likely to wield substantial market power by virtue of network effects, and consequently extract substantial profits from this business.

A well-known escrow agent will be able to attract many more contributors, and so will be able to require much less money from each, which is likely to be a large incentive to use the well-known agent.

Tabarrok does not mention escrow agents, who may well be involved, but I see no reason to assume the market for such services should be any less competitive than any other market for financial intermediaries. He says that he expects the market for contract providers to be competitive. Presumably these will be entrepreneurs with expertise in producing a particular public good, or aggregators. We have examples of these, from contractors to the United Way or eBay. How would dominant assurance contracts alter the competitive landscape, for better or worse?

[Update 20050605: The distinction I draw between escrow agents and contract providers may not be relevant. It appears that Fundable acts as an aggregator/marketplace and an escrow agent. Also, citing eBay may not inspire confidence. I’ve read, but cannot find a cite for, a claim that it has 85% market share in the US person-to-person online auction market. Whether this is something to worry about will be in the eye of the beholder, e.g., what “market” is relevant — eBay faces indirect competition from garage sales, new goods at retail, and everything in between. Kragen will “just” have to work on zFundable.]

Kragen also has good thoughts on how dominant assurance contracts could prove useful in several fields, potential problems, and responses to several irrelevant objections. Read the whole thing and see Tabarrok’s paper and recent post without which none of the current discussants would be aware of the idea.

Public Goods Rent Seeking

Wednesday, June 1st, 2005

Bryan Caplan points to a fascinating paper on the economics of extreme religious groups which explains the relationship between the public goods produced by such groups and the sacrifices demanded by them. Caplan writes:

The upshot is that economists overestimate the severity of public goods problems but underestimate the severity of rent-seeking.

I think Caplan probably has the upshot of this particular paper wrong (I haven’t read the whole paper carefully yet, more later perhaps) but I suspect he’s correct about a bias to overestimate public goods problems and underestimate rent seeking. I wonder whether anyone has attempted to detect such a bias, either experimentally (in an economics lab) or through a painful survey of various popular and academic literatures.

I’m pleased that Ernest Miller made the connection to copyright, though he riffs off the weaker part of Caplan’s post.

Copyright is (should be) the textbook case of wildly overestimating the public goods problem while ignoring rent seeking problems (NB “how can an artist make a full time living doing only art” is not a public goods problem). Witness massive production of art where expected profit from sales of copies and licensing is nil, both outside the content industry and where restrictions on copying are not enforced. Consider who benefits from perpetual copyright — not the public.