Post Open Source

Predict what will be free

Thursday, August 4th, 2005

Jimmy Wales, guest blogging at Lessig’s, has started what promises to be an interesting series of posts on ten things that will be free (as in free software):

[T]his is not a dream list of things which I hope through some magic to become free, but a list of things which I believe are solvable in reality, things that will be free. Anyone whose business model for the next 100 years depends on these things remaining proprietary better watch out: free culture is coming to get you.

For each of the ten, I will try to give some basic (and hopefully not too ambiguous) definitions for what it will mean for each of them to be “solved”, and we can all check back for the next 25 or 50 years to see how we are doing.

In a subsequent post Wales is even more explicit:

[T]he point of naming the list “will be free” rather than “should be free” or “must be free” is that I am making concrete predictions rather than listing a pie in the sky list of things I wish to see.

I’d love to see similar (but shorter term and more thoroughly specified) predictions as claims on a prediction market. With the right set of claims we can more easily talk about, and plan for, which things are more likely to be free, and when.

Thus far Wales has predicted encyclopedias and curricula will be free. I can’t think of any segments that I am fairly certain will be free, are associated with large businesses, and have not already been alluded to in the comments on his first post.

However, regarding widely deployed software (e.g., operating systems, productivity applications) I have a theory explaining why it will be free: Microsoft Windows and Office have a half life–eventually a release of each will be a failure, at which point the only viable alternatives will be free, and any non-free alternatives will face slow death–think commercial Unixes in the face of Linux. I’m not going to stand by this theory–it probably assumes too little change, of any sort.

EFF15

Monday, August 1st, 2005

The Electronic Frontier Foundation is 15 and wants “to hear about your ‘click moment’–the very first step you took to stand up for your digital rights.”

I don’t remember. It mustn’t have been a figurative “click moment.” Probably not a literal “click moment” either–I doubt I used a mouse.

A frequent theme of other EFF15 posts seems to be “how I became a copyfighter” or “how I became a digital freedom activist.” I’ve done embarrassingly little (the occasional letter to a government officeholder, Sklyarov protests, the odd mailing list or blog post, running non-infringing P2P nodes, a more often lapsed than not EFF membership), but that’s the tack I’ll take here.

As a free speech absolutist I’ve always found the concept of “digital rights” superfluous. Though knowledge of computers may have helped me understand “the issues,” I needed none to oppose crypto export laws, the clipper chip, CDA, DMCA, perpetual copyright extension and the like. Still, I hold “digital rights,” for lack of a better term, near and dear. So how I became a copyfighter of sorts: four “click themes,” one with a “click moment.” All coalesced around 1988-1992, happily matching my college years, which otherwise were a complete waste of time.

First, earliest, and most important, I’d had an ear for “experimental” music since before college. At college I scheduled and skipped classes and missed sleep around the WEFT schedule. Nothing was better than great music, and from my perspective, big record companies provided none of it. There was and is more mind-blowingly ecstatic music made for peanuts than I could hope to experience in many lifetimes. I didn’t have the terms just yet, but it was intuitively obvious that there was no public goods provisioning problem for art, at least not for anything I appreciated, while there was a massive oversupply of abominable anti-art.

Second, somewhere between reading libertarian tracts and studying economics, I hit upon the idea that “intellectual property” may be neither. Those are likely sources anyway–I don’t remember where I first came across the idea. I kept an eye out for confirmation and somewhere, also forgotten, I found a reference to Tom Palmer‘s Intellectual Property: A Non-Posnerian Law and Economics Approach. Finding and reading the article, which describes intellectual property as a state-granted monopoly privilege developed through rent seeking by publishers and non-monopoly means of producing intangible goods, at my university’s law library was my “click moment.”

Third, I saw great promise in the nascent free software movement, and I wanted to run UNIX on my computer. I awaited 386BSD with bated breath and remember when Torvalds announced Linux on Usenet. I prematurely predicted world domination a few times, but regardless, free software was and is the most concrete, compelling and hopeful sign that large scale non-monopoly production of non-rivalrous goods is possible and good, and that the net facilitates such production, and that freedom on the net and free software together render each other more useful, important, and defensible.

Fourth, last, and least important, I followed the cypherpunks list for some time, where the ideas of crypto anarchy and BlackNet were developed. In the ten years or so since, the net has not turned inside out nor overturned governments and corporations, yet we are very early in its history. Cypherpunk outcomes may remain vaporware indefinitely, but nonetheless are evocative of the transformational potential of the net. I do not know what ends will occur, but I’ll gladly place my bets on, and defend, the means of freedom and decentralization rather than control and protectionism.

The EFF has done an immense amount of great work over the past 15 years. You should join, and I will update my membership. However, my very favorite thing about the EFF is indirect–I’ve seen co-founder and board member John Gilmore at both drug war and DMCA protests. If you care about digital rights or any rights at all and do not understand the destruction of individuals, rights, and societies wreaked by the drug war, there’s no time like the present to learn–the first step needed in order to stand up for your rights.


Reverse bounties improved

Thursday, July 21st, 2005

Gordon Mohr suggested in a comment that as a name Dominant Assurance Contract is no good and perhaps “refund bonus” would be better. That may be right, though “refund bonus” seems to only describe part of the arrangement.

I conducted a brief search for alternatives, unsatisfactory apart from discovering that the term reverse bounty was recently (April) used to describe an offer by a programmer to develop a feature when a certain amount of money is raised (a bounty is offered by someone requesting a feature). At least one reverse bounty successfully raised the amount requested. I cannot tell what happens to contributed funds in the case where the amount requested is not raised. Notably, for the successful reverse bounty the developer said they would ‘top up’ the money and ensure the feature got built. A subsequent reverse bounty seems to have raised nothing so far, perhaps in part because it does not appear to come with any guarantee (also the proposed feature probably has a narrow audience and the reverse bounty is being called a “request for funding”–true, but very dull).

An assurance contract returns contributions (or cancels pledges) in case the amount requested is not raised. A dominant assurance contract returns contributions plus a failure payoff or refund bonus, making it worthwhile for interested parties to contribute even if they believe the contribution threshold will not be met. Both concepts could easily be applied to reverse bounties.
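The settlement rule is simple enough to sketch in a few lines of JavaScript. The function shape and the flat refund bonus below are my own illustration, not taken from any existing implementation; setting the bonus to zero yields a plain assurance contract:

```javascript
// Settle an assurance contract: if pledges reach the threshold the
// project is funded; otherwise every pledge is returned, plus an
// extra refundBonus under the dominant variant.
function settle(pledges, threshold, refundBonus) {
  const total = pledges.reduce((sum, p) => sum + p.amount, 0);
  if (total >= threshold) {
    // Goal met: funds go to the provider; contributors get the good.
    return { funded: true, payouts: pledges.map(p => ({ who: p.who, refund: 0 })) };
  }
  // Goal missed: full refund, sweetened by the failure payoff.
  return {
    funded: false,
    payouts: pledges.map(p => ({ who: p.who, refund: p.amount + refundBonus }))
  };
}
```

The same rule applies unchanged to a reverse bounty: the developer names the threshold, and the refund bonus is what would make pledging attractive even to skeptics.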

Realtime Wiki PileUp

Friday, July 15th, 2005

Although I’ve long thought collaborative realtime editing a useful concept, I’d never tried it hands on until a month ago when Christopher Allen got me to use MoonEdit. Very useful, but it is not free software, and as far as I know it has no OS X support.

At the iCommons summit someone wanted participants to collaboratively take notes on a wiki. Wikis so far are great for asynchronous distributed collaboration, unworkable for synchronous distributed collaboration. Luis Villa pointed out ☠, an experimental collaborative realtime editor hosted in MediaWiki pages (but not very tightly integrated with MediaWiki–only new chunks of a page may be collaboratively edited in realtime, and the server side of this AJAX application is a separate server written in Java). Great idea, and the whiteboarding sub-application is neat, but I’d really prefer to be able to collaboratively edit an entire page.

In the past week I’ve noticed a couple more additions to the Wikipedia collaborative realtime editor article, one of them web-based (Oxyd), then yesterday JotSpot Live.

Seems to me that almost any content creation application could benefit from optional realtime collaboration, legacy desktop applications included. Word processors could get their first useful new feature in many years.

Update: Also see ting-wiki for a hybrid web-local editor approach.

Dominant assurance contract implementations?

Friday, July 8th, 2005

None yet, but Russ Nelson left a comment on a Slashdot article about using Fundable for open source software saying that he has plans to modify the Public Software Fund so it allows for dominant assurance contracts.

Just in case the people behind Fundable were not aware of the concept I just suggested that they also offer dominant assurance contracts.

Separately, see Kragen Sitaker’s explanation of assurance contracts as put options inspired by Anton Sherwood’s description of using call options rather than eminent domain to acquire land for a road or pipeline.

Where is server side JavaScript?

Thursday, July 7th, 2005

Nearly a decade ago Netscape released Enterprise Server 2.0 with LiveWire, their name for JavaScript used as a server side web scripting language, much as PHP is most commonly used today. LiveWire was extremely buggy, but Netscape was golden in large organizations, so I had the opportunity to develop or support development of several large web applications written in LiveWire. The world’s buggiest webmail client ever was a component of one of them.

Thankfully Netscape’s server products faded over the next few years. As far as I know LiveWire is almost completely forgotten.

The only uses of server side JavaScript that I’m aware of today are in Helma Object Publisher and as an alternative scripting language for Active Server Pages (though I understand the vast majority of ASP development uses VBScript). Some Java-based web applications may embed the Rhino JavaScript engine (that’s what the Helma framework does, prominently).

I’m mildly surprised that server side JavaScript isn’t more popular, given the opportunity for sharing skills and code with client side JavaScript. Data validation is one obvious opportunity for the same code to be executed on both the web browser and server, but the one that prompted me to write this post is web services. Suppose you want to offer a “web service” that can be consumed directly by browsers, i.e., a JavaScript application without a UI. You also want to offer approximately the same service as a traditional web service for consumption by non-JavaScript clients, and you don’t want to write much of the same code twice.
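The data validation case is easy to picture: a rule written once as a plain function (this function and its rules are hypothetical, purely for illustration) could run in the browser before a form is submitted and again on the server before the input is accepted:

```javascript
// Hypothetical shared validation rule. Define it once; ship the same
// file to the browser (run from an onsubmit handler) and to the
// server side engine (LiveWire, Rhino, etc.) before persisting.
function validateUsername(name) {
  if (typeof name !== 'string') return 'username must be a string';
  if (name.length < 3 || name.length > 20) return 'username must be 3-20 characters';
  if (!/^[a-z0-9_]+$/i.test(name)) return 'username may contain only letters, digits, and _';
  return null; // null means valid
}
```

The client side check gives instant feedback; the server side check, using the identical code, remains authoritative for clients that bypass the browser.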

The only page I could find about sharing JavaScript code between client and server applications is a terse article on Shared Sides JavaScript.

So why hasn’t JavaScript seen more use on the server side? Possibilities:

  • JavaScript is unfairly looked down upon by server developers. (The success of PHP is a counterexample.)
  • Client side JavaScript is typically spaghetti code written by designers. The cost of sharing code between client and server applications in this context is too high.
  • No obvious way to deploy JavaScript code in Apache. There was work on mod_javascript/mod_js in the late nineties, but I see no evidence it went anywhere.
  • It’s easier for developers to handle different languages on the client and server. In my experience with LiveWire last decade I did encounter a few developers unclear on the concept that some JavaScript executed on the server, some on the client.

Perhaps the recent hype around AJAX will attract a critical mass of good programmers to JavaScript, some of whom will want to reuse well structured code on the server, leading to server side JavaScript’s first renaissance.

Pre-posting update: As I was about to post this, Brian Heung told me about TrimJunction, which has more or less the motivation I was thinking of:

With an eye towards Don’t Repeat Yourself, spiced up with a little bit of Ajax, the grand vision of the TrimPath Junction project is to be able to write web application logic just once, in familiar languages like JavaScript.

A Junction-based web application should run, validate input, process data, generate output and do its thing on both the server AND the client. We intend to use the Rhino JavaScript runtime for the server and use your favorite, modern web browser for the client.

Check out the Tales on TrimPath blog for some interesting JavaScript ideas. The Junction announcement is here, two months old.

Update 20050716: OpenMocha is ready for a spin:

The goal of OpenMocha is to maximize the fun and productivity of Javascript development by blending the gap between browser and server based scripting.

Sort of open source economic models

Tuesday, June 14th, 2005

Mark Thoma is building an “open source” repository for economic models. Well, sort of open source. Unfortunately none of the four models included so far, nor the initial post, which Thoma says is open source, says anything about copyright or licenses.

Unfortunately, under this default copyright regime, explicit licensing (or dedication to the public domain) is required for an open source project to scale. If five people contribute to a model posted to Thoma’s repository, neither the contributors (including the original author) nor anyone else has any right to distribute the resulting model, or to allow others to modify it further.

That’s why open source projects use explicit open source licenses and open source repositories require each project in the repository to use an explicit license. That’s what an open source economic models repository, or indeed any repository that wants to emulate the open source model, should also do.

NB creators of open source economic models may wish to consider an open source-like license intended for “content” rather than code, e.g., the Free Documentation License (that’s what Wikipedia uses) or a liberal Creative Commons license (e.g., Attribution or Attribution-ShareAlike).

Also see the open access movement, commons-based peer production and Science Commons. I don’t know how familiar the mainstream economics profession is with these concepts, but “they” ought to be.

Via Alex Tabarrok.

Kragen Sitaker on Dominant Assurance Contracts

Thursday, June 2nd, 2005

Kragen Sitaker thinks out loud about dominant assurance contracts for funding public goods, especially free software. My first post on dominant assurance contracts is here. A few thoughts regarding Kragen’s analysis follow.

On public goods:

Generally public goods tend to be underprovided

Almost by definition, but my intuition is that there are important and almost universally unacknowledged exceptions where the good is nonrival, production generates large private benefits, consumption opportunities are limited, or perhaps some combination of these, e.g., recorded music. However, I have no rigorous backing for this intuition. Todo: read existing literature on socially optimal copyright.

[Richard Stallman] would be a happier man today had he spent those years [writing free software] not working with computers at all

I don’t know whether Stallman is happy, but this sounds suspect. He has gained tremendous personal benefits through his programming that he probably couldn’t have obtained otherwise (though perhaps this does not matter, as he shouldn’t have expected to become famous and leader of a very significant movement, unless he was a megalomaniac). It would be more interesting and clearer to make a case that the modal free software contributor acts selflessly, but that would be a long argument and beside the point, which I suppose is simply that unselfish action can produce some public goods.

On dominant assurance contracts:

I suspect that the analysis extends to a more general case, in which each contributor chooses the amount of their own contribution $S, the escrow agent performs the project if the total contributions are over some basic amount, and the extra refund is a specified percentage of the contribution rather than a specified dollar amount; but Tabarrok does not mention this in his paper.

Looks like a very useful extension.

However, copyright places the risk on the artist, while dominant assurance contracts place the risk on the artist’s fans.

I think here the risk is of a worse than expected work. It ought to be possible for an artist to assume more risk by making fulfillment of the contract (and thus not having to refund contributions plus a penalty) contingent on some agreed and hopefully minimally gameable quality measure.

[Update 20050605: On second thought I’m confusing (or extending) the dominant assurance contract idea, which only stipulates that a failure penalty be paid when not enough resources are raised, not when a successfully funded project is not successfully completed.]

Someone also asked whether it was possible to model a dominant assurance contract as a normal assurance contract with a separate prediction market, like the Iowa Electronic Markets, in which people traded idea futures on the likelihood of the completion of the funding. I don’t know how to model it in those terms, although it might be possible.

I don’t know how to model an assurance contract plus prediction market hedging either, but I suspect it may not work as well as a dominant assurance contract.

First, with a dominant assurance contract only contributors receive a payoff in the case of failure. If contribution and failure payoff are unbundled, how are incentives to contribute any different than a plain assurance contract? One can hedge against failure without contributing to success.

Second, risk and management of risk are transferred from the entrepreneur to the contributor. Managing risk by hedging securities is hard and costly. The entrepreneur offering the contract may be far more capable of managing risk than contributors.

Prediction market prices may prove helpful to entrepreneurs and potential contributors in deciding what contracts to offer and accept, but this is orthogonal to the structure of dominant assurance contracts, which attack contribution problems rather than revelation problems.

Finally, Tabarrok suggests that the market for escrow agents should be highly competitive because there are low barriers to entry — all you have to do is write a three-line contract and hold some money, assuming that the possible contributors first hold some kind of competition to select which escrow agent they want to use. I think that’s a big assumption, and that escrow agents are likely to wield substantial market power by virtue of network effects, and consequently extract substantial profits from this business.

A well-known escrow agent will be able to attract many more contributors, and so will be able to require much less money from each, which is likely to be a large incentive to use the well-known agent.

Tabarrok does not mention escrow agents, who may well be involved, but I see no reason to assume the market for such services should be any less competitive than any other market for financial intermediaries. He says that he expects the market for contract providers to be competitive. Presumably these will be entrepreneurs with an expertise in producing a particular public good or aggregators. We have examples of these, from contractors to the United Way or eBay. How would dominant assurance contracts alter the competitive landscape, for better or worse?

[Update 20050605: The distinction I draw between escrow agents and contract providers may not be relevant. It appears that Fundable acts as an aggregator/marketplace and an escrow agent. Also, citing eBay may not inspire confidence. I’ve read, though I cannot find a cite, that it has 85% market share in the US person-to-person online auction market. Whether this is something to worry about will be in the eye of the beholder, e.g., what “market” is relevant — eBay faces indirect competition from garage sales, new goods at retail, and everything in between. Kragen will “just” have to work on zFundable.]

Kragen also has good thoughts on how dominant assurance contracts could prove useful in several fields, potential problems, and responses to several irrelevant objections. Read the whole thing, and see Tabarrok’s paper and recent post, without which none of the current discussants would be aware of the idea.

Public Goods Group Shopping

Friday, May 13th, 2005

Discussing Fundable, Alex Tabarrok explains assurance contracts and cites his improvement, dominant assurance contracts (emphasis in original, link added):

In a dominant assurance contract if the group goal is not met then everyone who offered to contribute is given their money back plus a bonus. It turns out that it then becomes a dominant strategy to contribute and the public good is always provided!

Very interesting. I’ve mentioned before in passing that many political problems can be thought of as public goods problems.
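A toy payoff calculation makes the quoted claim concrete. The numbers and function below are my own illustration rather than anything from Tabarrok’s paper, and they cover only the stylized case where the goal cannot be met without your contribution:

```javascript
// Your net payoff under a dominant assurance contract, in the
// stylized case where the goal fails unless you accept.
// V = your value for the good, S = your contribution, B = refund bonus.
function netPayoff(accept, everyOtherAccepts, V, S, B) {
  if (!accept) return 0;               // without you the goal fails; you get nothing
  if (everyOtherAccepts) return V - S; // goal met: enjoy the good, pay your share
  return B;                            // goal missed: full refund plus the bonus
}
```

With, say, V = 12, S = 10, and B = 2, accepting pays 2 whether or not the others come through, while declining always pays 0, so accepting is the better move in every state of the world; that is the sense in which contributing becomes a dominant strategy.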

I’d really like to see some analysis of what sorts of public goods are amenable to provision via dominant assurance contracts and then implementation. The only other instances of “dominant assurance contract(s)” I can find are in Zane Spindler‘s pedagogical A Tale of Twin Cities: A Parable in Urban Political Economy! and this, which indicates that Tabarrok was working on a book on the subject ten years ago. I hope someone else follows up.

I’m also interested in further analysis of other proposed mechanisms for funding the production of public goods, including how their “payoff” would be affected by the addition of a failure bonus:

Fundable reminds me a little of the (failed: mobshop.com, mercata.com, etc.) group shopping phenomenon, but far more general.

Imperial Public License

Friday, April 8th, 2005

This is too stupid to blog, but I’m going to go ahead and expose my inability to exercise self restraint on my moron level intelligence.

CNET reports on Sun executive Jonathan Schwartz criticizing the GPL as a tool of U.S. imperialism:

The GPL purports to have freedom at its core, but it imposes on its users “a rather predatory obligation to disgorge all their IP back to the wealthiest nation in the world,” the United States, where the GPL originated, Schwartz said. “If you look at the difference between the license we elected to use and GPL, there are no obligations to economies or universities or manufacturers that take the source code and embed it in (their own) code.”

This has got to be one of the more wrongheaded statements by software executives about free software (though I haven’t followed SCO in a long time).

Should one choose to incorporate GPL’d code in their software, there is an obligation to release the derived software’s code under the GPL. Anyone in the world may use the code under the GPL’s terms. Only in the sense that the U.S. is part of the world is there a requirement to “disgorge” relevant IP (the derived software’s code) to the U.S.

This is predatory and imperialistic in approximately the same manner that trade between people in different nations is considered by some to be predatory and imperialistic — it isn’t, except in the clouded heads of Schwartz and economic neanderthals.

Oh, and the geographic origin of the GPL is completely irrelevant.

Reported in the same story, Schwartz makes another wrongheaded argument. At least this one isn’t a complete non sequitur:

“Economies and nations need intellectual property (IP) to pull themselves up by their own bootstraps. I’ve talked to developing nations, representatives from academia and manufacturing companies that had begun to incorporate GPL software into their products, then…found they had an obligation to deliver their IP back into the world,” Schwartz said.

To the contrary, ignoring IP has proven a great way to develop quickly. The U.S. did not enforce European copyright claims until the 1890s. More recently all of the Asian tigers have engaged in copycat development. Imitation is simply a great way to quickly close the technology gap with the most advanced economies. IP owners in the U.S. and other advanced economies want governments of developing economies to enforce strong IP — because that is in the IP owners’ interest, not because it is a reasonable development strategy.

By the way, ignoring IP can mean ignoring the requirements of the (copyright dependent) GPL as well.

Via Dana Blankenhorn.

Also today, read about Jonathan Schwartz, visionary.