Post Public Goods

Yet Another Biaxial Political Spectrum

Sunday, November 21st, 2010

Thought of while microblogging (emphasis added):

@glynmoody … I find it nice that movement gelling on both govt-skeptic and market-skeptic sides eg c4sif.org and p2pfoundation.net

As one moves toward increasing skepticism of both mechanisms, one might focus more on institutional design (wherein there is a huge space for exploration: two areas I’ve occasionally rambled about are commons and futarchy, both applicable to arrangements across state and market), as everything is broken and needs fixing. If one is much more skeptical about one mechanism than the other, one will assume the more confidence-inspiring mechanism will adequately check any problems of the other — e.g., so-called Masonomics:

At the University of Chicago, economists lean to the right of the economics profession. They are known for saying, in effect, “Markets work well. Use the market.”

At MIT and other bastions of mainstream economics, most economists are to the left of center but to the right of the academic community as a whole. These economists are known for saying, in effect, “Markets fail. Use government.”

Masonomics says, “Markets fail. Use markets.”

Presumably the prototypical Masonomist on the above spectrum would be far on the left (extremely skeptical of the state) and in the middle (somewhat skeptical of markets), leading such a person to always favor market solutions (the state being a lost cause), with more emphasis on the design of market institutions than someone merely confident in the market and skeptical of the state might. Schools of socialism that roughly mirror Masonomics must exist — “Governments fail. Use government [carefully].” — I just don’t know their names, so I put “rational socialism” on the spectrum.

It seems that from many places on the spectrum, one might beneficially increase skepticism of one’s preferred mechanism, so as to focus on making that mechanism work better, and thus “win” more in the long term. Admittedly, this might seem an awful tradeoff for an activist focused on bashing (whatever they see as) evil in the short term. Further, one genuinely interested in improving the world as opposed to making ideological points might focus on improving mechanisms that make criticism and improvement of all mechanisms easier (nothing remotely new about this observation) — these are public goods that facilitate the provision of more public goods.

Completely coincidentally (noted while writing this post), David Nolan died today. His name is associated with a fairly well known biaxial political spectrum, the Nolan Chart. Nolan also founded the strategically unsound U.S. Libertarian Party. If it mattered at all, and weren’t in bad taste, I’d suggest it die with him!

Micropatronage 1.0

Monday, November 9th, 2009

I last looked closely at a new micropatronage/crowdfunding site 2 years ago, having resolved that nobody was likely to take an interesting or well executed approach to the idea that would end up making a significant impact. Since then I’ve heard in passing of a number of new projects that fit my low expectations, but also two that appear very well executed and successful on a scale large enough that it isn’t ridiculous to imagine this sort of mechanism becoming important at least for cultural production — one French site (a few English links gathered here) and Kickstarter.

The occasion of this post is Fred Benenson’s announcement that he’s joining Kickstarter after having done outreach and product management for Creative Commons for the last year and a half (and having been involved as an intern and activist for much longer). It’s sad to see him go, but great to see recent CC alumni start or join projects that at least have the potential to be important enablers of the free and open world — in addition to Fred, also Asheesh Laroia (OpenHatch) and Jon Phillips ().

Congratulations all!

It also feels good to hire people at Creative Commons who have demonstrated some commitment and capacity nearby — Fred, Asheesh, and Jon were all examples of that, and more recently Chris Webber, who was a hacker before coming to CC.

Elinor Ostrom’s Nobel-winning Commons

Monday, October 12th, 2009

On the Creative Commons blog I highlight the connection between 2009 Nobel Prize in Economics winner Elinor Ostrom’s work on the governance of the commons and relatively recent work on knowledge commons, including a 2003 paper she co-authored addressing the connection.

Great choice. There are countless posts in the econoblogosphere about the prize — I’ll mention two. Paul Romer (a favorite to win the Nobel himself) praises her practice of economics, essentially as being based on an investigation of reality rather than wishful thinking (what Romer calls a “skyhook”):

They, more than anyone else in the profession, spelled out the program that economists should follow. To make the rules that people follow emerge as an equilibrium outcome instead of a skyhook, economists must extend our models of preferences and gather field and experimental evidence on the nature of these preferences.

Economists who have become addicted to skyhooks, who think that they are doing deep theory but are really just assuming their conclusions, find it hard to even understand what it would mean to make the rules that humans follow the object of scientific inquiry. If we fail to explore rules in greater depth, economists will have little to say about the most pressing issues facing humans today – how to improve the quality of bad rules that cause needless waste, harm, and suffering.

Cheers to the Nobel committee for recognizing work on one of the deepest issues in economics. Bravo to the political scientist who showed that she was a better economist than the economic imperialists who can’t tell the difference between assuming and understanding.

Alex Tabarrok (who I’ve mentioned before on the related problem of private provision of public goods) provides a summary of Ostrom’s work on the well-governed commons. Here’s Tabarrok’s excellent closing paragraph:

For Ostrom it’s not the tragedy of the commons but the opportunity of the commons. Not only can a commons be well-governed but the rules which help to provide efficiency in resource use are also those that foster community and engagement. A formally government protected forest, for example, will fail to protect if the local users do not regard the rules as legitimate. In Hayekian terms legislation is not the same as law. Ostrom’s work is about understanding how the laws of common resource governance evolve and how we may better conserve resources by making legislation that does not conflict with law.

This speaks directly to common-pool (rivalrous, non-excludable) goods, but applies analogously to public (non-rivalrous, non-excludable) goods.

Content layer infrastructure

Saturday, July 25th, 2009

Last Sunday I appeared (mp4 download) on a tech interview program called Press: Here. It went ok. Most of the questions were softball and somewhat repetitive. Lots more could have been said about any of them, but I think I did a pretty good job of hitting a major point on each and not meandering. However, one thing I said (emphasized below) sounds like pure bs:

this has been done in the open source software world for a couple decades now and now that people are more concerned about the content layer that’s really part of the infrastructure having a way to clear those permissions without the lawyer-to-lawyer conversation happen every single time is necessary

I could’ve omitted the bolded words above and retained the respect of any viewer with a brain. What the heck did I mean? I was referring to an argument, primarily made by Joi Ito over the last year or so, using a stylized version of the layers of a protocol stack. David Weinberger’s live-blogging of Ito provides a good summary:

Way back when, it was difficult to connect computers. Then we got Ethernet, then TCP/IP, and then HTTP (the Web). These new layers allow participation without permission. The cost of sending information and the cost of innovation have gone down (because the cost of failure has gone down). Now we’re getting another layer: Creative Commons. “By standardizing and simplifying the legal layer … I think we will lower the costs and create another explosion of innovation.”

Protocol geeks may object, but I think it’s a fairly compelling argument, at least for explaining why what Creative Commons does is “big”. The problems of not having a top layer (I called it “content”, the slide photographed above says “knowledge” — what it calls “content” is usually called “application” — and the note above says “legal”, referring to one required mechanism for opening up permissions around content, knowledge, or whatever one wishes to call it) in which a commons can be taken for granted (i.e., like infrastructure) are evident, for example in the failure by lawsuit of most interesting online music services, or the inaccessibility of much of the scientific literature to most humans and machines (e.g., for data mining), as are powerful hints as to what is possible where such a commons exists, for example the vast ecology enabled by Wikipedia’s openness, such as DBpedia.

I didn’t make that argument on-screen. Probably a good thing, given the previous paragraph’s tortured language. I shall practice. Critique welcome.

Press: Here is broadcast from its SF bay area home station (NBC) and, I’ve heard, is syndicated to many other stations. However, its website says nothing about how to view the program on TV, even on its home station. I even had a hard time finding any TV schedule on the NBC Bay Area website — a tiny link in the footer takes one to subpages for the station with lame schedule information syndicated from TV Guide. I found this near total disconnect between TV and the web very odd, but then again, I don’t really care where the weird segment of the population that watches TV obtains schedule information. Press: Here ought to release its programs under a liberal CC license as soon as the show airs. Its own website gets very little traffic, many of the interviews would be relevant for uploading to Wikimedia Commons, and the ones that got used in Wikipedia would drive significant traffic back to the program website.

Conjectured impact of Wikipedia license interoperability?

Sunday, May 31st, 2009

Wikipedians voted overwhelmingly against kryptonite — for using Creative Commons Attribution-ShareAlike (CC BY-SA) as the main content license for Wikipedias and their sibling projects, permitting these to incorporate work offered under CC BY-SA, the main non-software copyleft license used outside of Wikipedia, and other CC BY-SA licensed projects to incorporate content from Wikipedia. The addition of CC BY-SA to Wikimedia sites should happen in late June and there is an outreach effort to encourage non-Wikimedia wikis under the Free Documentation License (FDL; usually chosen for Wikipedia compatibility) to also migrate to CC BY-SA by August 1.

This change clearly ought to over time increase the proportion of content licensed under free-as-in-freedom copyleft licenses. More content licensed under a single or interoperable copyleft licenses increases the reasons to cooperate with that regime — to offer new work under the dominant copyleft license (in the non-software case, now unambiguously CC BY-SA) in order to have access to content under that regime — and decreases the reasons to avoid copylefted work, one of which is the impossibility of incorporating works under multiple and incompatible copyleft licenses (when relying on the permissions of those licenses, modulo fair use). Put another way, the unified mass and thus gravitational pull of the copylefted content body is about to increase substantially.
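To make the gravitational-pull argument concrete, here is a toy sketch of why incompatible copyleft licenses fragment the pool of combinable content while interoperability grows it. The compatibility rules below are deliberately simplified, hypothetical stand-ins, not a faithful statement of what any of these licenses actually permit:

```python
# Toy model: edges mean "content under the first license may be
# incorporated into a work distributed under the second license".
# These rules are illustrative only, loosely echoing the pre-migration
# situation described above (two incompatible copylefts).
COMPATIBLE = {
    ("CC BY", "CC BY-SA"): True,    # permissive into copyleft: fine
    ("CC BY", "FDL"): True,
    ("FDL", "CC BY-SA"): False,     # incompatible copylefts: kryptonite
    ("CC BY-SA", "FDL"): False,
}

def can_combine(source_license, target_license):
    """Can work under source_license be mixed into a target_license work,
    relying on the licenses' permissions (i.e., ignoring fair use)?"""
    if source_license == target_license:
        return True
    return COMPATIBLE.get((source_license, target_license), False)
```

Under these toy rules, `can_combine("FDL", "CC BY-SA")` is false: before the migration, the two copyleft pools could not be merged at all, which is exactly the fragmentation the licensing change removes.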

Sounds good — but what can we expect from the actual impact of making the mass of Free Culture and its exemplar, Wikipedia, legally interoperable? How can we gauge that impact, short of access to a universe where Wikipedians reject CC BY-SA? A few ideas:

(1) Wikimedia projects will be dual licensed after the addition of CC BY-SA — content will continue to be available under the FDL, until CC BY-SA content is mixed in, at which point the article or other work in question is only available under CC BY-SA. One measure of the licensing change’s direct impact on Wikimedia projects would be the number and proportion of CC BY-SA-only articles over time, assuming an effort to keep track.

I suspect it will take a long time (years?) for a non-negligible proportion of Wikipedia articles to be CC BY-SA-only, i.e., to have directly incorporated external CC BY-SA content. However, although most direct, this is probably the least significant impact of the change, and my suspicion could be upset if other impacts (below) turn out to be large, creating lots of CC BY-SA content useful for incorporating into Wikipedia articles.

(2) Content from Wikipedias and other Wikimedia projects could be incorporated into non-Wikimedia projects more often. The difficulty here is measurement, but given academic interest in Wikipedia and the web generally, it wouldn’t be surprising to see the requisite data sets (historical and ongoing) and expertise brought together to analyze the use of Wikimedia project content elsewhere over time. Note that a larger than expected (there’s the rub) increase in such use could be the result of CC BY-SA being more straightforward for users than the FDL (indeed, a major reason for the change) as much as or more than the result of license interoperability.

(3) New and existing projects could adopt or switch to CC BY-SA when they otherwise wouldn’t have in order to gain compatibility with Wikimedia projects. One sure indication of this would involve major projects using a CC license with a “noncommercial” term switching to CC BY-SA and giving interoperability with Wikipedia as the reason for the switch. Another indicator would simply be an increase in the use of CC BY-SA (and even more permissive instruments such as CC BY and CC0, to the extent the motivation is primarily to create content that can be used in Wikipedia rather than to use content from Wikipedia) relative to more restrictive (and non-interoperable with Wikipedia) licenses.

(4) Apart from needing to be compatible with Wikipedia because one desires to incorporate its content, one might want to be compatible with Wikipedia because it is “cool” to be so. I don’t know that this has occurred on a significant scale to date, so if it begins to, one possible factor in such a development would be the change to CC BY-SA. How could this be? As cool as Wikipedia compatibility sounds, having to adopt a hard-to-understand license intended for software documentation (the FDL) makes attaining this coolness seem infeasible. Consideration of the FDL just hasn’t been on the radar of many outside the spaces of documentation, encyclopedias, and perhaps educational materials, while consideration and oftentimes use of CC licenses is active in many segments. However, in most of these segments the more restrictive CC licenses (i.e., those prohibiting commercial use or adaptation) are most popular. So if we see an upsurge in the use of CC BY-SA for popular culture works (music, film) whose beginning coincides with the Wikimedia licensing change, it may not be unreasonable to guess that the latter caused the former.

(5) The weight of Wikipedia and relative accessibility of CC BY-SA could further consensus that the freedoms demanded by Wikimedia projects are some combination of “good”, “correct”, “moral”, and “necessary” — if some of these can be distinguished from “cool”. In the long term, this could be indicated by the sidelining of terms for content that do not qualify as free and open, as they have been for software, where and similar obvious competitors for important free software niches are strategically irrelevant.

Obviously 3, 4, and 5 overlap somewhat.

(6) I conjecture that making more cultural production more wiki-like (or gaining WikiNature) is probably the biggest determinant of the success of Free Culture. More interplay between Wikipedia, both the most significant free culture project and the most significant wiki, and the rest of the free culture and open content universe can only further this trend — though I have no idea how to measure the possible impact of the licensing change here, and wouldn’t want to ascribe too much weight to it.

(7) Last, the attention of the Wikipedia community ought to have a positive impact on the quality of future versions of Creative Commons licenses (there shouldn’t be another version until 2011 or so, and hopefully there won’t be another for a long time after that). Presumably Wikipedians also would have had a positive impact on future versions of the FDL, but arguably less so given the Free Software Foundation’s (excellent) focus on software freedom.

Will any of the above play out in a significant way? How much will it be reasonable to attribute to the license change? Will researchers bother to find out? Here’s to hoping!

Prior to the Wikipedia community vote on adopting CC BY-SA, it crossed my mind to set up several play money prediction market contracts concerning the above outcomes, conditioned on Wikipedia adopting CC BY-SA by August 1, 2009 — the basic adoption question being the one contract I did set up. It is just as well that I didn’t set up the rest — or rather, if I had, I would have had to heavily promote all of the contracts in order to stimulate any play trading: the basic adoption contract at this point hasn’t budged from 56% since the vote results were announced, which means nobody is paying attention to the contract on Hubdub.
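(For the curious: the pricing behind play-money prediction markets is often modeled with Hanson’s logarithmic market scoring rule. The sketch below is a generic illustration of that rule, not Hubdub’s actual implementation; the function names and the liquidity parameter `b` are my own.)

```python
import math

def lmsr_price(quantities, b=100.0):
    """Instantaneous LMSR prices (interpretable as probabilities) given
    outstanding share counts per outcome. Larger b = deeper liquidity,
    so prices move less per trade."""
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

def lmsr_cost(quantities, b=100.0):
    """Market maker's cost function C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def trade_cost(quantities, i, delta, b=100.0):
    """Price a trader pays to buy delta shares of outcome i: C(q') - C(q)."""
    after = list(quantities)
    after[i] += delta
    return lmsr_cost(after, b) - lmsr_cost(quantities, b)
```

With no trades, a two-outcome market starts at 50/50; buying “yes” shares costs play money and pushes the “yes” price above 50%, which is how a contract stuck at 56% signals that nobody has traded since the last move.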

CC6+

Wednesday, December 17th, 2008

December 16 marked six years since the release of the first Creative Commons licenses. Most of the celebrations around the world have already taken place or are going on right now, though San Francisco’s is on December 18. (For CC history before 2002-12-16, see video of a panel recorded a few days ago featuring two of CC’s founding board members and first executive director or read the book Viral Spiral, available early next year, though my favorite is this email.)

I’ve worked for CC since April, 2003, though as I say in the header of this blog, I don’t represent any organization here. However, I will use this space to ask for your support of my and others’ work at CC. We’re nearing the end of our fourth annual fall public fundraising campaign and about halfway to our goal of raising US$500,000. We really need your support — past campaigns have closed out with large corporate contributions, though one has to be less optimistic about those given the financial meltdown and widespread cutbacks. Over the longer term we need to steadily decrease reliance on large grants from visionary foundations, which still contribute the majority of our funding.

Sadly I have nothing to satisfy a futarchist donor, but take my sticking around as a small indicator that investing in Creative Commons is a highly leveraged way to create a good future. A few concrete examples follow.

RDFa became a W3C Recommendation on October 14, the culmination of a 4+ year effort to integrate the Semantic Web and the Web that everyone uses. There were several important contributors, but I’m certain that it would have taken much longer (possibly never happened) or produced a much less useful result without CC’s leadership (our motivation was first to describe CC-licensed works on the web, but we’re also now using RDFa as infrastructure for building decoupled web applications and as part of a strategy to make all scientific research available and queryable as a giant database). For a pop version (barely mentioning any specific technology) of why making the web semantic is significant, watch Kevin Kelly on the next 5,000 days of the web.

Wikipedia seems to be on a path to migrating to the CC BY-SA license, clearing up a major legal interoperability problem resulting from Wikipedia starting before CC launched, when there was no really appropriate license for the project. The GNU FDL, which is now Wikipedia’s (and most other Wikimedia Foundation projects’) primary license, and CC BY-SA are both copyleft licenses (altered works must be published under the same copyleft license, except when not restricted by copyright), and incompatible widely used copyleft licenses are kryptonite to the efficacy of copyleft. If this migration happens, it will increase the impact of Wikipedia, Creative Commons, free culture, and the larger movement for free-as-in-freedom on the world and on each other, all for the good. While this has basically been a six year effort on the part of CC, FSF, and the Wikimedia Foundation, there’s a good chance that without CC, a worse (fragmented, at least) copyleft landscape for creative works would have resulted. Perhaps not so coincidentally, I like to point out that since CC launched, there has been negative license proliferation in the creative works space, the opposite of the case in the software world.

Retroactive copyright extension cripples the public domain, but there are relatively unexplored options for increasing the effective size of the public domain — instruments to increase certainty and findability of works in the public domain, to enable works not in the public domain to be effectively as close as possible, and to keep facts in the public domain. CC is pursuing all three projects, worldwide. I don’t think any other organization is placed to tackle all of these thorny problems comprehensively. The public domain is not only tremendously important for culture and science, but the only aesthetically pleasing concept in the realm of intellectual protectionism (because it isn’t) — sorry, copyleft and other public licensing concepts are just necessary hacks. (I already said I’m giving my opinion here, right?)

CC is doing much more, but the above are a few examples where it is fairly easy to see its delta. CC’s Science Commons and ccLearn divisions provide several more.

I would see CC as a wild success if all it ever accomplished was to provide a counterexample to be used by those who fight against efforts to cripple digital technologies in the interest of protecting ice delivery jobs, because such crippling harms science and education (against these massive drivers of human improvement, it’s hard to care about marginal cultural production at all), but I think we’re on the way to accomplishing much more, which is rather amazing.

More abstractly, I think the role of creating “commons” (what CC does and free/open source software are examples) in nudging the future in a good direction (both discouraging bad outcomes and encouraging good ones) is horribly underappreciated. There are a bunch of angles to explore this from, a few of which I’ve sketched.

While CC has some pretty compelling and visible accomplishments, my guess is that most of the direct benefits of its projects (legal, technical, and otherwise) may be thought of in terms of lowering transaction costs. My guess is those benefits are huge, but almost never perceived. So it would be smart and good to engage in a visible transaction — contribute to CC’s annual fundraising campaign.

So, how could programmers make a living?

Saturday, April 12th, 2008

Richard Stallman in Gnu’s Bulletin Vol. 1 No. 1, February 1986:

There are plenty of ways that programmers could make a living without selling the right to use a program. This way is customary now because it brings programmers and businessmen the most money, not because it is the only way to make a living. It is easy to find other ways if you want to find them. Here are a number of examples.

A manufacturer introducing a new computer will pay for the porting of operating systems onto the new hardware.

The sale of teaching, hand-holding and maintenance services could also employ programmers.

People with new ideas could distribute programs as freeware, asking for donations from satisfied users, or selling hand-holding services. I have met people who are already working this way successfully.

Users with related needs can form users’ groups, and pay dues. A group would contract with programming companies to write programs that the group’s members would like to use.

In the intervening twentysomething years much practical experience has been gained, evidenced by large businesses employing many programmers following these models. Well, except for the last one, which has turned out to be insignificant so far, though perhaps lots of experimentation remains before it plays out.

What the above misses is that most software is not created for licensing (commercial or public) and most programmers’ jobs do not depend on licensing, much as most musicians are not in the pay of the recorded music distribution business.

The purpose-driven voluntary sector

Sunday, January 27th, 2008

I’ve always had reservations about and similar phrasings. Nathan Smith’s alternative delights me:

I like to call this the “purpose-driven voluntary sector,” as distinct from (a) the profit-driven voluntary sector, i.e. the private sector, and (b) the purpose-driven coercive sector, i.e., the public sector.

Don’t forget the (AKA , to varying degrees). Of course there’s a fair amount of overlap.

The most exciting parts of the purpose-driven voluntary sector involve peer production.

Smith also used this terminology in an excellent comment on the nonprofit boom last October:

Some labor economists have distinguished the “intrinsic rewards” (love of the work itself) and the “extrinsic rewards” (money, benefits) from working.

By working for a non-profit, you may sacrifice some extrinsic rewards for some intrinsic rewards. As people get more and more affluent, it makes sense that more and more people will be willing to make that trade-off.

I think of non-profits as the “purpose-driven voluntary sector.” It’s distinct from the pure profit sector, officially dedicated to profits, and the government sector, which is ultimately financed through coercion. If more and more public goods can be provided through the purpose-driven voluntary sector, government can shrink.

Patri Friedman’s basic views on copyright and patents

Tuesday, December 25th, 2007

Patri Friedman just posted a nice essay concerning his basic views on copyright and patents, which I’ll summarize as “Policy should aim for economic efficiency …”:

So an economically optimal regime would have different rules for different industries, protecting some but not others, based on their exact supply/demand curves.

“… but don’t forget about enforcement costs.”:

But really, it doesn’t matter. There is just no fucking way that IP protection is worth the police state it would take to enforce it. And unenforced/unenforceable laws poison society by teaching people not to respect the law.

This leads more or less to my understanding of the sentiment, something like “There’s nothing wrong with copyright per se, but any civil liberties infringement in the name of copyright protection is totally unacceptable.”

I recommend Friedman’s essay, but of course the reason I write is to complain … about the second half of the essay’s last sentence:

Therefore I favor accepting the inevitable as soon as possible, so that we can find new ways to compensate content producers.

This closing both gives comfort to producerists (but in the beginning of the essay Friedman says that people love to create — I agree, see paying to create — and Tom W. Bell has a separate argument that should result in less concern for producers that I’ve been meaning to blog about, but should be obvious from the title — Outgrowing Copyright: The Effect of Market Size on Copyright Policy) and is a stretch — copyright might make alternatives less pressing and interesting, but it certainly does not prevent experimentation.

While I’m complaining, enforcement costs aren’t the only often forgotten problem.

Requirements for community funding of open source

Saturday, November 24th, 2007

Last month another site for aggregating donation pledges to open source software projects launched.

I’m not sure there’s anything significant that sets Cofundos apart from microPledge feature-wise. Possibly a step where bidders (pledgers) vote on which developer bid to accept; however, I’m not certain how a developer is chosen on microPledge. Their FAQ says “A quote will be chosen that delivers the finished and paid product to the pledgers most quickly based on their current pledging rate (not necessarily the shortest quote).” microPledge’s scheme for in-progress payments may also set it apart.

In terms of marketing and associations, Cofundos comes from the Agile Knowledge Engineering and Semantic Web (AKSW) research group at the University of Leipzig, producers of DBpedia, about which I’ve written. Many of the early proposed projects are directly related to AKSW research. Their copyright policy is appreciated.

microPledge is produced by three Christian siblings who don’t push their religion.

Cofundos lists 61 proposed projects after one month; microPledge lists about 160 after about three and a half months. I don’t see any great successes on either site, but both are young, and perhaps I’m not looking hard enough.

Cofundos and microPledge are both welcome experiments, though I don’t expect either to become huge. On the other hand, even modest success would set a valuable precedent. In that vein, although I’ve been pretty skeptical about the chances of Fundable, they seem to have attracted a steady stream of users. Although most projects seem to be uninteresting (pledges for bulk purchases, group trips, donations to an individual’s college fund, etc.), some production of public goods does seem to be getting funded, including several film projects in the small thousands of dollars range. Indeed, “My short film” is the default project name in their form for starting a project.

It seems to me that creating requirements and getting in front of interested potential donors are the main challenges for sites focused on funding open source software like Cofundos and microPledge (both say they are only starting with software). Requirements are just hard, and there’s little incentive for anyone to visit an aggregator who hasn’t aggregated anything of interest.

I wonder if integrating financial donations into project bug tracking systems would address both challenges? Of course doing so would have risks: increasing bureaucracy around processing bugs and feature requests, the necessity of implementing new features (and bugs) in the relevant bug tracking software, and altering the incentives of volunteer contributors.
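As a thought experiment, such an integration might look like the following minimal sketch, in which pledges attach to a tracker issue and nobody is charged until a bounty target is met (an assurance contract). All names here are hypothetical, not the API of any real tracker:

```python
# Hypothetical sketch: donation pledges attached to bug-tracker issues,
# collected only if a bounty target is reached (assurance-contract style),
# which limits risk for individual pledgers.

class Issue:
    def __init__(self, title, target):
        self.title = title
        self.target = target   # bounty needed before anyone is charged
        self.pledges = {}      # pledger -> total amount pledged

    def pledge(self, who, amount):
        if amount <= 0:
            raise ValueError("pledge must be positive")
        self.pledges[who] = self.pledges.get(who, 0) + amount

    def total_pledged(self):
        return sum(self.pledges.values())

    def funded(self):
        # Until this returns True, pledges are promises, not payments.
        return self.total_pledged() >= self.target
```

The assurance-contract structure matters here: a pledger risks nothing unless enough other people also care about the bug, which is one way to get past the "nobody visits an aggregator that hasn't aggregated anything" problem.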

Via Open Knowledge Foundation Blog.