API commons

Thursday, May 29th, 2014

Notes for panel The API Copyright Emergency: What’s Next? today at API Con SF. The “emergency” is the recent decision in Oracle v. Google, which I don’t discuss directly below, though I did riff on the ongoing case last year.

I begin with, and come back to a few times, Creative Commons licenses, as I was on the panel as a “senior fellow” for that organization; but apart from such emphasis and framing, this is more or less what I think. I got about 80% of the below in on the panel, but hopefully it is still worth reading, even for attendees.

A few follow-up thoughts after the notes.

Creative Commons licenses, like other public licenses, grant permissions around copyright, but as CC’s statement on copyright reform concludes, licenses “are not a substitute for users’ rights, and CC supports ongoing efforts to reform copyright law to strengthen users’ rights and expand the public domain.” In the context of APIs, default policy should be that independent implementation of an API never require permission from the API’s designer, previous implementer, or other rightsholder.

Without such a default policy of permission-free innovation, interoperability and competition will suffer, and the API community invites late and messy regulation at other levels intending to protect consumers from resulting lock-in.

Practically, there are things API developers, service providers, and API consumers can do and demand of each other, both to protect the community from a bad turn in default policy, and to go further in creating a commons. But using tools such as those CC provides, and choosing the right tools, requires looking at what an API consists of, including:

  1. API specification
  2. API documentation
  3. API implementations, server
  4. API implementations, client
  5. Material (often “data”) made available via API
  6. API metadata (e.g., as part of an API directory)

(depending on construction, these could all be generated from an annotated implementation, or could each be separate works; see the sketch below)

and what restrictions can be pertinent:

  1. Copyright
  2. Patent

(many other issues can arise from providing an API as a service, e.g., privacy, though those are usually not in the range of public licenses and are orthogonal to API “IP”, so I’ll ignore them here)

1-4 are clearly works subject to copyright, while 5 and 6 may or may not be (e.g., hopefully not if purely factual data). Typically only 3 and 4 might be restricted by patents.
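To make the parenthetical note above concrete, here is a minimal sketch of generating 1 and 2 from a single annotated server implementation (3). Everything in it (the Endpoint shape, the toSpec function, the example path) is invented for illustration, not any particular framework’s API.

```typescript
// Hypothetical sketch: one annotated implementation from which the
// specification (1) and documentation (2) can both be generated.
interface Endpoint {
  method: "GET" | "POST";
  path: string;
  doc: string;                      // source of generated documentation
  handler: (id: string) => unknown; // the server implementation itself
}

const endpoints: Endpoint[] = [
  {
    method: "GET",
    path: "/things/{id}",
    doc: "Fetch a single thing by its identifier.",
    handler: (id) => ({ id }),
  },
];

// Derive a machine-readable specification from the same annotations.
function toSpec(eps: Endpoint[]): object {
  return {
    paths: Object.fromEntries(
      eps.map((e) => [e.path, { [e.method.toLowerCase()]: { summary: e.doc } }])
    ),
  };
}

console.log(JSON.stringify(toSpec(endpoints), null, 2));
```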

Standards bodies typically do their work primarily around 1. Relatively open ones, like the W3C, obtain agreement from all contributors to the standard to permit royalty-free implementation of the standard by anyone, typically including a patent license and permission to prepare and perform derivative works (i.e., copyright, to the extent such permission is necessary). One option you have is to put your API through an existing standards organization. This may be too heavyweight, but may be appropriate if your API is really a multi-stakeholder thing with multiple peer implementations; the W3C now has a lightweight community group venue which might fit. The Open Web Foundation’s agreements allow you to take this approach for your API without involvement of an existing standards body. Lawrence Rosen has/will talk about this.

Another approach is to release your API specification (and necessarily 2-4 to the extent they comprise one work, ideally even if they are separate) under a public copyright license, such as one of the CC licenses, the CC0 public domain dedication, or an open source software license. Currently the most obvious choice is the Apache License 2.0, which grants copyright permission and includes a patent peace clause. One or more of the CC licenses are sometimes suggested, perhaps because specification and documentation are often one work, and the latter seems like a “creative” work. But keep in mind that CC does not recommend using its licenses for software, and instead recommends using an open source software license (such as Apache): no CC license includes explicit patent permission, and depending on the specific CC license chosen, it may not be compatible with software licenses, contrary to the goal of granting clear permission for independent API implementation, even in the face of a bad policy turn.

One way to go beyond mitigating “API copyrightability” is to publish open source implementations, preferably production ones, though reference implementations are better than nothing. These implementations would be covered by whatever copyright and patent permissions are granted by the license they are released under — again, Apache 2.0 is a good choice, and for software implementations CC licenses should not be used; other software licenses such as [A]GPL might be pertinent depending on business and social goals.
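As a practical aside, a standard SPDX identifier at the top of each file is an unambiguous, machine-readable way to record such a license choice for specification, documentation, and implementation files alike; the file contents below are otherwise hypothetical.

```typescript
// SPDX-License-Identifier: Apache-2.0
// Copyright 2014 Example API Project contributors
//
// The identifier above records the copyright (and, for Apache 2.0,
// patent) grant in a form both humans and tooling can rely on.
export const API_NAME = "example-things-api"; // hypothetical project name
```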

Another way to create a “thick” API commons is to address material made available via APIs, and metadata about APIs. There, CC tools are likely pertinent, e.g., use CC0 for data and metadata to ensure that “facts are free”, as they ought be in spite of other bad policy turns.

To get even thicker, consider the architecture, for lack of a better term, around API development, services, and material accessed and updated via APIs. Just some keywords: Linked Open Data, P2P, federation, Lots of Copies Keep Stuff Safe, collaborative curation.

The other panelists were Pamela Samuelson, Lawrence Rosen, and Annette Hurst, moderated by David Berlind.

I’m fairly familiar with Samuelson’s and Rosen’s work and don’t have comments on what they said on the panel. If you want to read more, I recommend among Samuelson’s papers The Strange Odyssey of Software Interfaces and Intellectual Property Law which shows that the “API copyright emergency” of the panel title is recurrent and intertwined with patent, providing several decades of the pertinent history up to 2008. Contrary to my expectation in the notes above, Rosen didn’t get a chance to talk about the Open Web Foundation agreements, but you can read his 2010 article Implementing Open Standards in Open Source which covers OWF.

Hurst is a lawyer for Orrick representing Oracle in the Oracle v. Google case, so understandably advocated for API copyright, but in the process made several deeply flawed assertions that could have consumed the entire duration of the panel; Berlind did a good job of keeping the conversation moving forward. Still, I want to mention two high level ones here, with my paraphrases and responses:

Without software copyright the software economy would go away. This is refuted by software development not for the purposes of selling licenses (which is the vast majority of it), especially free/open source software development, and services (e.g., API provision, the source of which is often never published, though it ought be, see “going beyond” recommendations above). Yes the software economy would change, with less winner-take-all monopoly and less employment for Intellectual Parasite lawyers. But the software economy would be huge and very competitive. Software is eating the world, remember? One way to make it help rather than pejoratively eat the world is to eject the parasites along for the ride.

Open source can’t work without software copyright. This is refuted by 1) software source sharing before software copyright; 2) preponderance of permissively licensed open source software, in which the terms do not allow suing downstream developers who do not share back; 3) the difficulty of enforcing copyleft licenses which do allow for suing downstream developers who do not share back; 4) the possibility of non-copyright regulation to force sharing of source (indeed I see the charitable understanding of copyleft as prototyping such regulation; for perspective on the Oracle v. Google case from someone with a more purely charitable understanding of copyleft, see Bradley Kuhn); and 5) demand and supply mechanisms for mandating sharing of source (e.g., procurement policies, distribution policies such as Debian’s).

These came up because Hurst seemed to really want the audience to conflate software copyright in general (not at issue in the case, settled in a bad place since the early 1980s) and API copyright specifically. Regarding the latter, another point which could have been made is the extent to which free/open source software has been built around providing alternatives to proprietary software, often API-compatible. If API copyright could prevent compatible implementation without permission of any sort, open source, competition, and innovation would all be severely hampered.

There is a recent site called API Commons, which seems to be an API directory (Programmable Web, which ran the conference, also has one). My general suggestion to both would be to implement and facilitate putting all elements of APIs listed above in my notes in the commons. For example, they could clarify that API metadata they collect is in the public domain, publish it as Linked Open Data, and encourage API developers and providers they catalog to freely license specifications, documentation, implementations, and data, and note such in the directories.
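To sketch what that could look like (a hypothetical record, with property names loosely following schema.org’s WebAPI vocabulary), a directory entry published as Linked Open Data with CC0 asserted might be:

```typescript
// Hypothetical API directory entry as Linked Open Data, with CC0
// asserted so the metadata is unambiguously in the public domain.
const directoryEntry = {
  "@context": "https://schema.org",
  "@type": "WebAPI",
  name: "Example Things API",
  documentation: "https://api.example.com/docs",
  license: "https://creativecommons.org/publicdomain/zero/1.0/",
};

console.log(JSON.stringify(directoryEntry, null, 2));
```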

In order to get a flavor for the conference, I listened to yesterday morning’s keynotes, both of which made valiant attempts to connect big picture themes to day to day API development and provision. Allow me to try to make connections back to “API commons”.

Sarah Austin, representing the San Francisco YMCA, pointed out that the conference is near the Tenderloin neighborhood, the poorest in central San Francisco. Austin asked whether kids from the Tenderloin would be able to find jobs in the “API economy”, or whether they would be priced out of the area (many tech companies have moved nearby in recent years, Twitter perhaps the best known).

Keith Axline claimed The Universe Is Programmable. We Need an API for Everything, or to some extent, that learning about the universe and how to manipulate it is like programming. Axline’s talk seemed fairly philosophical, but could be made concrete with reference to the Internet of Things, programmable matter, robots, nanobots, software eating the world … much about the world will indeed soon be software (programmable) or obsolete.

Axline’s conclusion was in effect largely about knowledge policy, including mourning energy wasted on IP, and observing that we should figure out public support for science or risk a programmable world dominated by IP. That might be part of it, but keeps the focus on funding, which is just where IP advocates want it — IP is an off-the-balance-sheets, “free” taking. A more direct approach is needed — get the rules of knowledge policy right, put freedom and equality as its top goals, reject freedom infringing regimes, promote commons (but mandating all these as a condition of public and publicly interested funding is a reasonable starting place) — given these objectives and constraints, then argue about market, government, or other failure and funding.

Knowledge policy can’t directly address Austin’s concerns in the Tenderloin, but it does indirectly affect them, and over the long term will tremendously affect them, in the Tenderloin and many other places. As the world accelerates its transition from an industrial to a knowledge dominated economy, will that economy be dominated by monopoly and inequality, or freedom and equality? Will the former concentrations continue to abet instances of what Jane Jacobs called “catastrophic money” rushing into ill-prepared neighborhoods, or will the latter tendencies spread knowledge, wealth, and opportunity?

Innovation Policy in a World With Less Scarcity

Friday, March 28th, 2014

Mark Lemley’s new paper IP in a World Without Scarcity provides good overviews of the case “that on the Internet, we increasingly get creativity in spite of, rather than because of, IP law” — the exclusivity incentive for creation story, if it were ever true, is drowning in non-exclusive creativity, and theories that distribution and revelation also require an exclusivity incentive also seem quaint given the Internet — and of 3D printing, general purpose robotics, and synthetic biology, which “share two essential characteristics with the Internet: they radically reduce the cost of production and distribution of things, and they separate the informational content of those things (the design) from their manufacture.” Thus, Lemley argues, economics and policy need increasingly to grapple with an end to scarcity, IP will be increasingly important, and we can draw lessons from the Internet about how this all will and should play out.

The paper is a quick read at 55 double-spaced pages. I recommend it to anyone interested in near future technology and policy. The paper’s final sentence:

Thinking about such questions has been the province of science fiction authors, but understanding what a post-scarcity economy will look like is the great task of economics for the next century.

Lemley cites two SF books very familiar to many readers: Down and Out in the Magic Kingdom by Cory Doctorow (my positive review) and The Diamond Age: Or, A Young Lady’s Illustrated Primer by Neal Stephenson, which just a few days ago I exploited in a private communication: “…the primer is an interactive learning notebook which adapts as the owner learns, informing a generation of geeks’ vision of education and development. Such tools are increasingly feasible. Will all humans have full access to, and ability to participate in the development of such tools? Only if they are developed in the commons, which will only happen with intentional action.” That’s probably a good segue into my disagreements with and additional idiosyncratic observations about IP in a World Without Scarcity.

By IP, Lemley means intellectual property: mostly copyright, patent, trademark. That has been and will be increasingly a terrible frame for thinking about policy. It gives away the future to owners of the past, who, as Lemley notes, “will fight the death of scarcity” as they have fought the Internet — with more criminalization, more lawsuits, more attempts to fundamentally alter technologies in order to protect their rents. This seems rather suboptimal given that we know the theory upon which IP rests is largely bunk. The alternative, assuming we still only wish to maximize innovation, is to make innovation policy the frame. This makes turning the enclosure dial up or down a sideshow, and pulls in non-enclosure incentives and a host of more indirect and probably much more important determinants of innovation, e.g., education and governance.

The paper provides a couple reasons for focusing on the enclosure version of IP (Lemley doesn’t need any reason; he’s an IP scholar, and though I wish such people would reconceptualize themselves as commons scholars, I have no expectation; in any case the “reasons” are my reading). First, the framing isn’t as harmful as I made it out to be, because IP owners’ fight against the Internet “didn’t work. Copyright infringement remains rampant” and against other democratizing technologies, “IP owners will (probably) lose that fight.” But winning isn’t binary, nor is the continued existence of rampant copyright infringement a good indicator.

Given that network effects are highly relevant for many kinds of knowledge products — a tool or experience is much more valuable if other people can be expected to know it — a significant level of piracy can be profit-maximizing for an IP rent collector. Better indicators might be the collapse of profits from IP rents (the movie industry continues to grow, and while the recorded music industry has declined from its peak, this is nothing like an icehouse collapse, and many other IP rent sectors continue to grow) and the displacement of IP rent collectors as the marketers of the dominant knowledge products of the age by other entities better adapted to a world in which fighting against the Internet doesn’t work (the mass and high-status markets are dominated by IP rent collectors in nearly all fields, exceptions being encyclopedias and certain kinds of infrastructure software). These might be minor, highly debatable quibbles (maybe the music industry will soon recommence a full collapse, and be joined by movies, both displaced by crowdfunding and crowdmarketing; I doubt it, given the properties controlled by IP rent collectors and other entities’ unchanged desperation to cut unfavorable deals with them) — if, that is, the IP owners’ “losing” fight against the Internet hadn’t significantly damaged the Internet.

But the Internet has been damaged by the IP owners’ fight. Absent an academic characterization of how significant that damage is (which I would love to read), here are some of the ways:

  • Chilling effect on P2P research, result: more centralization;
  • Services police user content; expensive, barrier to entry, result: more centralization, near monopoly platforms;
  • Services cut rare and unfavorable deals with IP owners, result: same;
  • Innovative services fail to cut deals, or sustainable deals, with IP owners, result: less innovation, more Internet as TV;
  • Monopoly abets monopoly; creates opportunities for bundling monopolies, result: threat to net neutrality;
  • Copyright-based censorship provides cover for all kinds of political censorship, result: political censors have additional justification, doing what Hollywood does;
  • All of above centralization and monopoly makes dominant entities a target for compromise, result: mass surveillance and non-state cybercrime abetted;
  • Our imagination and expectation of what the Internet makes possible is diminished, result: DRM TV and radio and silos organized for spying are seen as the norm, information organized for public benefit such as Wikipedia, unusual; this flipping of democratic hopes for the Internet, a partial AOL scenario, is collateral damage from the IP owners’ war on the Internet.

Similar damage will be done to the potential of new technologies with Internet-like characteristics (in addition to those discussed in the paper, others add the Internet of Things, distributed energy generation, and educational technologies, e.g., Jeremy Rifkin in his new book The Zero Marginal Cost Society, which I plan to review soon) by incumbents. This makes Lemley’s policy recommendations seem overly tentative and timid:

[It] is hard to translate this skepticism into immediate policy prescriptions, both because the whole point is that the need for IP will be sensitive to individual industry characteristics and because the technologies I am discussing are still in their infancy […] “we should resist the tendency to expand IP reflexively to meet every new technological challenge” […] “IP owners should not be allowed to reach beyond suing infringers and seek to shut down or modify the technology itself” […] “IP law needs to make it easier for creators to opt out of the IP regime.”

IP rent collectors will not hold off protecting their interests pending idealized analysis of more fully developed technologies. The damage they do will be built into another generation of technology and society, with IP scholars and activists left to worry that policy is contrary to evidence and to take rearguard actions to protect the level of openness they’ve become accustomed to, but fail to imagine what would have been possible had the stranglehold of IP rent collectors been broken. Just like the Internet today. I’ll come back to less timid and more proactive policy response in a bit.

Second reason for focus on the enclosure version of IP, the usual — big budget movies (and regulated pharma, mentioned earlier in the paper):

There is still a role for IP on the Internet. There are some works that are so costly to create even in the digital world that they are unlikely to be made without effective IP protection. Big-budget movies and video games cost hundreds of millions of dollars to make. No amount of creative fire will drive someone who doesn’t have hundreds of millions of dollars to make Peter Jackson’s Lord of the Rings trilogy. They need corporate backing, and the corporate backers need a revenue stream. But in the Internet era those works are increasingly the exception, not the rule.

My usual response — we should allow enclosure of our freedom, equality, and the democratic potential of the Internet in order to ensure an ongoing supply of spectacle provided in the same way it has for decades? Spectacle over freedom, really? Of course the “reason” is far more pessimal than that, as the cost of producing and distributing spectacle is going down fast, as is the cost of coordinating distributed patrons who want product, not rent collection. Further, because culture is also so dominated by network effects, we’ll all love whatever spectacle is produced, whether it took 15 or 500 months of work per minute of spectacle. It’s not as insane to contemplate threatening liberal values in order to get new drugs as it is to get new movies — but then considering non-enclosure mechanisms for developing and evaluating new drugs, and the issues of access and equality are more pressing…

More Lemley:

IP is essentially a form of government regulation. The government restricts entry into the market, or alternatively controls the price at which that entry can occur, in order to serve valuable social ends. But regulation is not a moral entitlement or something that we must take for granted. In the past, government regulated all sorts of industries – railroads, trucking, electric power, gas, telephones – because it could not see given the economics of those industries how a free market could produce socially optimal results. But in a surprising number of cases, when we deregulated those industries we found that the market could indeed find a way to supply goods we thought would be provided only with government rule-making. IP is no different in this respect than any other form of regulation. Regulation as a whole shouldn’t disappear, but regulation of particular industries often turns out to be a reflexive response to a failure of imagination, something we do because we have done it for so long that we cannot imagine how a market in that industry could function without it.

This is certainly superior to the rights/owner/property characterization inherent in IP — it recasts “owners” as beneficiaries of regulation — and I think implicitly makes the case for switching one’s frame from intellectual property to innovation policy. That leads us to what the goal of “innovation policy” regulation ought be, and sufficiently proactive policies to achieve that. Should the goal be to maximize “innovation”, “creativity”, the “progress of science and useful arts”, or the like? It would be a huge improvement to sideline enclosure as the primary mechanism and retain the same top objective. But even that improvement would be short sighted, given how systematically innovation policy regulation has and will increasingly shape society. A success of imagination would be to make freedom and equality the top objectives of and constraints on innovation policy, and only then maximize innovation. The innovations generated by a free and equal society are the ones I want. Others are to be gawked at with dismay and guilt.

On proactive policies required, in brief they are pro-commons policies, and I return to Benkler:

Regulators concerned with fostering innovation may better direct their efforts toward providing the institutional tools that would help thousands of people to collaborate without appropriating their joint product, making the information they produce freely available rather than spending their efforts to increase the scope and sophistication of the mechanisms for private appropriation of this public good as they now do.

That we cannot fully understand a phenomenon does not mean that it does not exist. That a seemingly growing phenomenon refuses to fit our longstanding perceptions of how people behave and how economic growth occurs counsels closer attention, not studied indifference and ignorance. Commons-based peer production presents a fascinating phenomenon that could allow us to tap substantially underutilized reserves of human creative effort. It is of central importance that we not squelch peer production, but that we create the institutional conditions needed for it to flourish.

Which implies that commons scholarship ought displace intellectual property scholarship (except as a historical investigation of commons malgovernance).

I realize that I haven’t provided any specific pro-commons policy recommendations in this post, nevermind any that are especially pertinent in a world with less scarcity. I’m deeply skeptical that lower, different costs substantially change innovation policy or knowledge commons arguments — the same ones have recurred since at least the 1800s — and am extremely doubtful that the usual assumption that digital networks fundamentally change desirable policy (or here, that further technologies with digital-network-like characteristics further change desirable policy) is true or non-harmful — these assumptions give away (legitimize) the past to those who now use it to control the future. Some short term and narrow but valuable pro-commons policy suggestions arise from the Wikimedia movement; the free software movement offers others, if we take some of its practices as prototypes for regulation enforced by mechanisms other than copyright holder whim, more powerful and better aligned with its claims of software freedom as a human right.

A few final quotes from Lemley’s IP in a World Without Scarcity, first two from footnotes:

The challenge posed to copyright by collective production sites like Wikipedia is not just one of the need for incentives. Collective production challenges the whole concept of authorship.

Indeed, and as I keep repeating, effective product competition from the commons (such as Wikipedia) re-imagines the range of desirable policy and reduces the resources available to enclosure industries to lobby for protectionism — in sum, shifting what is politically possible.

It is possible that creators create in hopes of being one of the few superstars whose work is actually rewarded by copyright law. It is well known that people systematically overvalue the prospect of a large but unlikely reward; that’s why they buy lottery tickets. Some scholars have suggested that the same effect may be at work in IP. But if so, the incentive on which we rely is, as Kretschmer puts it, “based on a systematic cognitive mistake.” In effect, we are coaxing works out of these creators by lying to them about their chances of getting paid.

This has long struck me as being the case. The question is then (in addition to considerations above), do we really want a culture dominated by fools and sell-outs?

A world without scarcity requires a major rethinking of economics, much as the decline of the agrarian economy did in the 19th century. How will our economy function in a world in which most of the things we produce are cheap or free? We have lived with scarcity for so long that it is hard even to begin to think about the transition to a post-scarcity economy. IP has allowed us to cling to scarcity as an organizing principle in a world that no longer demands it. But it will no more prevent the transition than agricultural price supports kept us all farmers. We need a post-scarcity economics, one that accepts rather than resists the new opportunities technology will offer us. Developing that economics is the great task of the 21st century.

But we should aim for much better than the travesty of developed country agricultural policy (even before considering its baneful intersection with IP) as the legacy of this transition! But the consequences of continued capture of innovation policy have the potential to be far worse. Even if few are employed in information industries, there is no transition on the way to displace arranging information as the dominant mode of the economy (however measured; previous modes being hunting/gathering, agriculture, and industry); if the mode is largely controlled by rent collectors, the result could be a very unfree and unequal society — perhaps on the order of pre-industrial agricultural societies.

Keep Fighting Forward

Tuesday, February 11th, 2014

Today is the day to mass call for regulation of mass surveillance. I did, please do it too.

I’m still underwhelmed by the rearguard nature of such actions, wonder how long they continue to be effective (e.g., when co-opted, or when policymakers realize mass calls don’t translate into votes, or forever…since at least 1996), and am even enraged by their focus on symptoms. But my feelings are probably wrong. Part of me applauds those who enjoy fighting the shortest term and broadest appeal possible battles. Such probably helps prevent things from getting worse, at least for a time, and that’s really valuable. Anyone who believes things must get worse before they get better is dangerous, because that’s when real trolls take over, damn your revolution.

I enjoyed Don Marti’s imperfect but perfectly provocative analogy, which I guess implies (he doesn’t say) the correct response to mass surveillance is to spend on end-to-end crypto, rejection of private tracking, decentralization, and other countermeasures, sealing net communications from security state poison. I’m all for that, and wish advocacy for same were a big part of mass calls to action like today’s. But I see the two as mostly complementary, as much as I’d like to scream “you’re doing it entirely wrong!”

Also QuestionCopyright’s assertion that Copyright + Internet = Surveillance. Or another version: Internet, Privacy, Copyright; Choose Two. I could quibble that these are too weak (freedom was infringed by copyright before the net) and too strong (not binary), but helpfully provocative.

Addendum: Also, Renata Avila:

For me is . Otherwise, we will be in serious trouble. Donate to resistance tools like or

Sleepwalking past Freedom’s Commons, or how peer production could increase democracy, equality, freedom, and innovation, all of them!

Sunday, February 9th, 2014

2007:

The most interesting parts of Yochai Benkler’s The Wealth of Networks concern how peer production facilitates liberal values. I’ll blog a review in the fullness of time.

In lieu of that which may never come, some motivated notes on Coase’s Penguin, or Linux and the Nature of the Firm (2002, 78 pages) and Freedom in the Commons: Towards a Political Economy of Information (2003, 32 pages; based on a 2002 lecture). A friend wanted to trial a book group with the former. Re-reading that led me to the latter, which I hadn’t read before. Reading them together, or even just the latter, might be a good alternative to reading The Wealth of Networks: How Social Production Transforms Markets and Freedom (2006, 473 pages).

As might be expected from decade-plus-old internet research, some of the examples in the papers and book are a bit stale, but sadly their fundamental challenge remains largely unacknowledged, and only taken up as a byproduct. I would love to be convinced otherwise. Is the challenge (or my extrapolation) wrong, unimportant, or being met satisfactorily?

Excerpts from Freedom in the Commons (emphasis added by me in all quotes that follow):

[Commons-based peer production] opens a range of new opportunities for pursuing core political values of liberal societies—democracy, individual freedom, and social justice. These values provide three vectors of political morality along which the shape and dimensions of any liberal society can be plotted. Because, however, they are often contradictory rather than complementary, the pursuit of each of these values places certain limits on how we conceive of and pursue the others, leading different liberal societies to respect them in different patterns.

An underlying efficient limit on how we can pursue any mix of arrangements to implement our commitments to democracy, autonomy, and equality, however, has been the pursuit of productivity and growth.

[Commons-based peer production] can move the boundaries of liberty along all three vectors of liberal political morality.

There is no benevolent historical force, however, that will inexorably lead the technological-economic moment to develop towards an open, diverse, liberal equilibrium. If the transformation occurs, it will lead to substantial redistribution of power and money from the twentieth-century, industrial producers of information, culture, and communications—like Hollywood, the recording industry, and the telecommunications giants—to a widely diffuse population around the globe. None of the industrial giants of yore are going to take this redistribution lying down. Technology will not overcome their resistance through some insurmountable progressive impulse. The reorganization of production, and the advances it can bring in democracy, autonomy, and social justice will emerge, if it emerges, only as a result of social and political action. To make it possible, it is crucial that we develop an understanding of what is at stake and what are the possible avenues for social and political action. But I have no illusions, and offer no reassurances, that any of this will in fact come to pass. I can only say that without an effort to focus our attention on what matters, the smoke and mirrors of flashy toys and more convenient shopping will be as enlightening as Aldous Huxley’s soma and feelies, and as socially constructive as his orgy porgy.

Let us think, then, of our being thrust into this moment as a challenge. We are in the midst of a technological, economic, and organizational transformation that allows us to renegotiate the terms of freedom, justice, and productivity in the information society. How we shall live in this new environment will largely depend on policy choices that we will make over the next decade or two. To be able to understand these choices, to be able to make them well, we must understand that they are part of a social and political choice—a choice about how to be free, equal, and productive human beings under a new set of technological and economic conditions. As economic policy, letting yesterday’s winners dictate the terms of economic competition tomorrow is disastrous. As social policy, missing an opportunity to enrich democracy, freedom, and equality in our society, while maintaining or even enhancing our productivity, is unforgivable.

Although the claim that the Internet leads to some form or another of “decentralization” is not new, the fundamental role played in this transformation by the emergence of non-market, nonproprietary production and distribution is often overlooked, if not willfully ignored.

First, if the networked information economy is permitted to emerge from the institutional battle, it will enable an outward shift of the limits that productivity places on the political imagination. Second, a society committed to any positive combination of the three values needs to adopt robust policies to facilitate these modes of production, because facilitating these modes of production does not represent a choice between productivity and liberal values, but rather an opportunity actually to relax the efficient limit on the plausible set of political arrangements available given the constraints of productivity.

We are at a moment in our history at which the terms of freedom and justice are up for grabs. We have an opportunity to improve the way we govern ourselves—both as members of communities and as autonomous individuals. We have an opportunity to be more just at the very core of our economic system. The practical steps we must take to reshape the boundaries of the possible in political morality and to improve the pattern of liberal society will likely improve productivity and growth through greater innovation and creativity. Instead of seizing these opportunities, however, we are sleepwalking.

What arrangements favor reorganization towards commons-based peer production? From Coase’s Penguin:

This suggests that peer production will thrive where projects have three characteristics. First, they must be modular. That is, they must be divisible into components, or modules, each of which can be produced independently of the production of the others. This enables production to be incremental and asynchronous, pooling the efforts of different people, with different capabilities, who are available at different times. Second, the granularity of the modules is important and refers to the sizes of the project’s modules. For a peer production process to pool successfully a relatively large number of contributors, the modules should be predominately fine-grained, or small in size. This allows the project to capture contributions from large numbers of contributors whose motivation levels will not sustain anything more than small efforts toward the project. Novels, for example, at least those that look like our current conception of a novel, are likely to prove resistant to peer production. In addition, a project will likely be more efficient if it can accommodate variously sized contributions. Heterogeneous granularity will allow people with different levels of motivation to collaborate by making smaller- or larger-grained contributions, consistent with their levels of motivation. Third, and finally, a successful peer production enterprise must have low-cost integration, which includes both quality control over the modules and a mechanism for integrating the contributions into the finished product.

Regulators concerned with fostering innovation may better direct their efforts toward providing the institutional tools that would help thousands of people to collaborate without appropriating their joint product, making the information they produce freely available rather than spending their efforts to increase the scope and sophistication of the mechanisms for private appropriation of this public good as they now do.

That we cannot fully understand a phenomenon does not mean that it does not exist. That a seemingly growing phenomenon refuses to fit our longstanding perceptions of how people behave and how economic growth occurs counsels closer attention, not studied indifference and ignorance. Commons-based peer production presents a fascinating phenomenon that could allow us to tap substantially underutilized reserves of human creative effort. It is of central importance that we not squelch peer production, but that we create the institutional conditions needed for it to flourish.

There’s been some progress on institutional tools (i.e., policy arrangements writ large, the result of “political action” above) in the 11 or so years since (e.g., Open Access mandates), but not nearly enough to outweigh global ratcheting of intellectual freedom infringing regimes, despite the occasional success of rearguard actions against such ratcheting. Neither these rearguard actions, nor mainstream (nor reformist) discussion of “reform” put commons at the center of their concerns. The best we can expect from this sleepwalking is to muddle through, with policy protecting and promoting commons where such is coincidentally aligned with some industrial interest (often simplified to “Google” in the past several years, but that won’t last forever).

My extrapolation (again, tell me if facile or wrong): shifting production arrangements so as to favor commons-based peer production is as important as, complementary to, and almost necessary for positive policy change. Commons-based product competition simultaneously changes the facts on the ground and the range of policies imaginable, and potentially creates a commons “industrial” interest group which is recognizably important to regulators and makes commons-based peer production favoring policy central to its demands — the likely Wikimedia response to the European Commission copyright consultation is a hopeful example.

There has been lots of progress on improving commons-based peer production (e.g., some trends), but also not nearly enough to keep up with proprietary innovation, particularly missing huge opportunities where proprietary incumbents’ real advantages sit — not production per se, but funding and distribution/marketing/cultural relevance making. Improving commons-based peer production, shifting the commanding heights (i.e., Hollywood premium video and the massively expensive and captured pharma regulatory apparatus) to forms more amenable to commons-based peer production, and expanding the scope of commons-based peer production to include funding and relevance making are among the most potent political projects of our time.

Wake up. ^_^

“I would love it if all patents evaporated” (WebRTC)

Monday, November 11th, 2013

I’ve been following WebRTC (Real Time Communications) because (1) it is probably the most significant addition to the web in terms of enabling a new class of applications at least since the introduction of Ajax (1998, standardized by 2006), and perhaps since the introduction of Javascript (1995, standardized by 1997). The IETF working group charter puts it well (another part of the work is at W3C):

There are a number of proprietary implementations that provide direct interactive rich communication using audio, video, collaboration, games, etc. between two peers’ web-browsers. These are not interoperable, as they require non-standard extensions or plugins to work. There is a desire to standardize the basis for such communication so that interoperable communication can be established between any compatible browsers. The goal is to enable innovation on top of a set of basic components. One core component is to enable real-time media like audio and video, a second is to enable data transfer directly between clients.

(See pad.textb.org (source) for one simple application; simpleWebRTC seems to be a popular library for building WebRTC applications.)
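For flavor, here is a minimal browser-side sketch of that standardized core, using the real RTCPeerConnection API; the signaling step (relaying the offer/answer and ICE candidates between peers through some server) is elided, as any transport can carry it.

```typescript
// Create a peer connection and a data channel for direct
// client-to-client data transfer.
const pc = new RTCPeerConnection();
const channel = pc.createDataChannel("chat");

channel.onopen = () => channel.send("hello, peer");
channel.onmessage = (e) => console.log("peer says:", e.data);

// Begin negotiation; the resulting description goes to the remote
// peer via your signaling channel, and its answer comes back through
// pc.setRemoteDescription(...).
pc.createOffer().then((offer) => pc.setLocalDescription(offer));
```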

And (2) because WebRTC is the scene of the latest fight to protect open web standards from rent seekers.

The IETF working group is choosing between H.264 Constrained Baseline Profile Level 1.2 and VP8 as the Mandatory To Implement (MTI) video codec (meaning all applications can count on that codec being available) for WebRTC. H.264 cannot be included in free and open source software, VP8 can, due to their respective patent situations. (For audio-only WebRTC applications, the free Opus codec seems to be a non-controversial requirement.)
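As an aside, browsers later gained an API for inspecting which codecs the local WebRTC stack can negotiate; a minimal sketch using RTCRtpSender.getCapabilities (a real API, though it postdates this post, when support had to be inferred from SDP):

```typescript
// List the video codecs the local WebRTC stack offers to negotiate.
const caps = RTCRtpSender.getCapabilities("video");
for (const codec of caps?.codecs ?? []) {
  if (codec.mimeType === "video/VP8" || codec.mimeType === "video/H264") {
    console.log("negotiable:", codec.mimeType);
  }
}
```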

Cisco has recently promised that in 2014 they will make available a binary implementation of H.264 for which they will pay license fees for all comers (there is an annual cap on fees, allowing them to do this). That’s nice of them, but the offer is far from ideal for any software (a binary must be downloaded from Cisco servers for each user), and a nonstarter for applications without some kind of plugin system, and for free and open source software distributions, which must be able to modify source code.

Last week I remotely attended a meeting on the MTI video codec choice. No consensus was reached; discussion continues on the mailing list. One interesting thing about the non-consensus was the split between physical attendees (50% for H.264 and 30% for VP8) and remote attendees (20% for H.264, 80% for VP8). A point mentioned several times was the interest of “big players” (mostly fine with paying H.264 fees, and using it in various other products) and “little players” (for whom fees are significant, e.g., startups, or impossible, e.g., free and open source projects); depending on one’s perspective, the difference shows how venue biases participation in one or both directions.

Jonathan Rosenberg, the main presenter for H.264, at about 22 minutes into a recording segment:

I would love it if all patents evaporated, if all the stuff was open source in ways that we could use, and we didn’t have to deal with any of this mess.

The argument for why H.264 is the best choice for dealing with “this mess” boils down to H.264 having a longer history and broader adoption than VP8 (in other applications; the two implementations of WebRTC so far, in recent versions of Chrome and Firefox, exclusively use VP8).

Harald Alvestrand, the main presenter for VP8, at about 48 minutes into another recording segment:

Development of codecs has been massively hampered and held back by the fact that it has been done in a fashion that has served to maximize the patent encumbrances on codecs. Sooner or later, we should see a way forward to abandon the dependence on encumbered codecs also for video software. My question, at this juncture, is if not now, when?

Unsurprisingly, I find this (along with the unworkability of H.264 for free and open source software) a much more compelling argument. The first step toward making patents evaporate (or at least irrelevant for digital video) is to select a codec which has been developed to maximize freedom, rather than developed to maximize encumbrances and rent collection.

What are individuals and entities pushing H.264 as the best codec for now, given the mess, doing for the longer term? Are they working on H.265, in order to bake in rents for the next generation? Or are they contributing to VP9, the next-next generation Daala, and the elimination of software patents?

Addendum: Version of this post sent to rtcweb@ietf.org (and any followups).

Economics and the Commons Conference [knowledge stream] report

Wednesday, October 30th, 2013

Economics and the Common(s): From Seed Form to Core Paradigm. A report on an international conference on the future of the commons (pdf) by David Bollier. Section on the knowledge stream (which I coordinated; pre-conference post) copied below, followed by an addendum with thanks and vague promises. First, video of the stream keynote (slides) by Carolina Botero (introduced by me; archive.org copy).

III. “Treating Knowledge, Culture and Science as Commons”

Science, and recently, free software, are paradigmatic knowledge commons; copyright and patent paradigmatic enclosures. But our vision may be constrained by the power of paradigmatic examples. Re-conceptualization may help us understand what might be achieved by moving most provisioning of knowledge to the commons; help us critically evaluate our commoning; and help us understand that all commons are knowledge commons. Let us consider, what if:

  • Copyright and patent are not the first knowledge enclosures, but only “modern” enforcement of inequalities in what may be known and communicated?
  • Copyright and patent reform and licensing are merely small parts of a universe of knowledge commoning, including transparency, privacy, collaboration, all of science and culture and social knowledge?
  • Our strategy puts commons values first, and views narrow incentives with skepticism?
  • We articulate the value of knowledge commons – qualitative, quantitative, ethical, practical, other – such that knowledge commons can be embraced and challenged in mainstream discourse?

These were the general questions that the Knowledge, Culture and Science Stream addressed.

Knowledge Stream Keynote Summary

Carolina Botero Cabrera, a free culture activist, consultant and lawyer from Colombia, delivered a plenary keynote for the Knowledge Stream entitled, “What If Fear Changes Sides?” As an author and lecturer on free access, free culture and authors’ rights, Botero focused on the role of information and knowledge in creating unequal power relationships, and how knowledge and cultural commons can rectify such problems.

“If we assume that information is power and acknowledge the power of knowledge, we can start by saying that controlling information and knowledge means power. Why does this matter?” she asked. “Because the control of information and knowledge can change sides. The power relationship can be changed.”

One of the primary motives of contemporary enclosures of information and knowledge, said Botero, is to instill fear in people – fear of violating copyright law, fear of the penalties for doing so. This inhibits natural tendencies to share and re-use information. So the challenge facing us is to imagine if fear could change sides. Can we imagine a switch in power relationships over the control of knowledge – how we produce, distribute and use knowledge? Botero said we should focus on the question: “How can we switch the tendency of knowledge regulation away from enclosure, so that commons can become the rule and not the exception?”

“There are still many ways to produce things, to gain knowledge,” said Botero, who noted that those who use the word “commons” [in the context of knowledge production] are lucky because it helps name these non-market forms of sharing knowledge. “In Colombia, we don’t even have that word,” she said.

To illustrate how customary knowledge has been enclosed in Colombia, Botero told the story of parteras, midwives, who have been shunted aside by doctors, mostly men, who then asserted control over women’s bodies and childbirth, and marginalized the parteras and their rich knowledge of childbirth. This knowledge is especially important to those communities in remote areas of Colombia that do not have access to doctors. There is currently a huge movement of parteras in Colombia who are fighting for the recognition of their knowledge and for the legal right to act as midwives.

Botero also described how copyright laws have made it illegal to reproduce sheet music for songs written in 18th and 19th century Colombia. In those times, people simply shared the music among each other; there was no market for it. But with the rise of the music industry in the 20th century, especially in the North, it is either impossible or unaffordable to get this sheet music because most of it is copyrighted. So most written music in Colombia consists of illegally photocopied versions. Market logic has criminalized the music that was once natural and freely flowing in Colombian culture. Botero noted that this has increased inequality and diminished public culture.

She showed a global map illustrating which nations received royalties and fees from copyrights and patents in 2002; the United States receives more than half of all global revenues, while Latin America, Africa, India and other countries of the South receive virtually nothing. These are the “power relationships” that Botero was pointing to.

Botero warned, “We have trouble imagining how to provision and govern resources, even knowledge, without exclusivity and control.” Part of the problem is the difficulty of measuring commons values. Economists are not interested, she said, which makes it difficult to go to politicians and persuade them why libraries matter.

Another barrier is our reliance on individual incentives as a core value in the system for regulating knowledge, Botero said. “Legal systems of ‘intellectual property’ place individual financial incentives at the center for knowledge regulation, which marginalizes commons values.” Our challenge is to find ways to switch from market logics by showing that there are other logics.

One reason that it is difficult to displace market logics is because we are reluctant or unable to “introduce the commons discourse from the front door instead of through the back door,” said Botero. She confessed that she herself has this problem because most public debate on this topic “is based on the premise that knowledge requires enclosure.” It is difficult to displace this premise by talking about the commons. But it is becoming increasingly necessary to do so as new policy regimes, such as the Trans-Pacific Partnership (TPP) agreement, seek to intensify enclosures. The TPP, for example, seeks to raise minimum levels of copyright restriction, extend the terms of copyrights, and increase the prison terms for copyright violations.

One way to reframe debate, suggested Botero, is to see the commons “not as the absence of exclusivity, but the presence of non-exclusivity. This is a slight but important difference,” she said, “that helps us see the plenitude of non-exclusivity” – an idea developed by Séverine Dussolier, professor and director of the Revue Droit des Technologies de l’Information (RDTI, France). This shift “helps us to shift the discussion from the problems with the individual property and market-driven perspective, to a framework and society that – as a norm – wants its institutions to be generative of sharing, cooperation and equality.”

Ultimately, what is needed are more “efficient and effective ways to protect the ethic and practice of sharing,” or as she put it, “better commoning.” Reforming “intellectual property” is only one small part of the universe of knowledge commoning, Botero stressed. It also includes movements for “transparency, privacy, collaboration, and potentially all of science and culture.”

“When and how did we accept that the autonomy of all is subservient to control of knowledge by the few?” asked Botero. “Most important, can we stop this? Can we change it? Is the current tragedy our lack of knowledge of the commons?” Rediscovering the commons is an important challenge to be faced “if fear is going to change sides.”

An Account of the Knowledge, Culture and Science Stream’s Deliberations

There were no presentations in the Knowledge Stream breakout sessions, but rather a series of brief provocations. These were intended to spur a lively discussion and to go beyond the usual debates heard at free and open software/free culture/open science conferences. A primary goal of the breakout discussions was to consider what it means to regard knowledge as a commons, rather than as a “carve-out” exception from a private property regime. The group was also asked to consider how shared knowledge is crucial to all commoning activity. Notes from the Knowledge Stream breakout sessions were compiled through a participatory titanpad, from which this account is adapted.

The Knowledge Stream focused on two overarching themes, each taking advantage of the unique context of the conference:

  1. Why should commoners of all fields care about knowledge commons?
  2. If we consider knowledge first as commons, can we be more visionary, more inclusive, more effective in commoning software, science, culture, seeds … and much more?

The idea of the breakout session was to contextualize knowledge as a commons, first and foremost: knowledge as a subset of the larger paradigm of commons and commoning, as something far more than domain-specific categories such as software, scientific publication and educational materials.

An overarching premise of the Knowledge Stream was the point made by Silke Helfrich in her keynote, that all commons are knowledge commons and all commons are material commons. Seeds saved in the Svalbard seed bank are of no use if we forget how to cultivate them, for example, and various digital commons are ultimately grounded in the material reality of computers, electricity infrastructures and the food that computer users need to eat.

There is a “knowledge commons” at the center of each commons. This means that interest in a “knowledge commons” isn’t confined to those people who only care about software, scientific publication, and so on. It also means that we should refrain from classifying commons into categories such as “natural resources” and “digital,” and begin to make the process of commoning itself the focal point.

Of course, one must immediately acknowledge that digital resources do differ in fundamental ways from finite natural resources, and therefore the commons management strategies will differ. Knowledge commons can make cheap or virtually free copies of intangible information and creative works, and this knowledge production is often distributed at very small scales. For cultural commons, noted Philippe Aigrain, a French analyst of knowledge governance and CEO of Sopinspace, a maker of free software for collaboration and participatory democracy, “the key challenge is that average attention becomes scarcer in a world of abundant production.” This means that more attention must be paid to “mediating functions” – curating – and to “revising our cultural expectations about ‘audiences’.”

It is helpful to see the historical roots of Internet-enabled knowledge commons, said Hilary Wainwright, the editor behind the UK political magazine Red Pepper and a researcher at the Transnational Institute. The Internet escalated the practice of sharing knowledge that began with the feminist movement’s recognition of a “plurality of sources.” It also facilitated the socialization of knowledge as a kind of collective action.

That these roots are not widely appreciated points to the limited vision of many knowledge commons, which tend to rely on a “deeply individualistic ethical ontology,” said Talha Syed, a professor of law at the University of California, Berkeley. This worldview usually leads commoners to focus on coercion – enclosures of knowledge commons – as the problem, he said. But “markets are problematic even if there is no monopoly,” he noted, because “we need to express both threats and positive aspirations in a substantive way. Freedom is more than people not coercing us.”

Shun-Ling Chen, a Taiwanese professor of law at the University of Arizona, noted that even free, mass-collaboration projects such as Wikipedia tend to fall back on western, individualistic conceptions of authorship and authority. This obscures the significance of traditional knowledge and history from the perspective of indigenous peoples, whose knowledge is less often recorded by “reliable sources.”

As the Stream recorded in its notes, knowledge commons are not just about individual freedoms, but about “marginalized people and social justice.” “The case for knowledge commons as necessary for social justice is an undeveloped theme,” the group concluded. But commons of traditional knowledge may require different sorts of legal strategies than those used to protect the collective knowledge embodied in free software or open access journals. The latter are both based on copyright law and its premises of individual rights, whereas traditional knowledge is not recognized as the sum of individual creations, but as a collective inheritance and resource.

This discussion raised the question of whether provisioning knowledge through commons can produce different sorts of “products” than those produced by corporate enclosures, or whether it will simply create similar products with less inequality. Big budget movies and pharmaceuticals are often posited as impossibilities for commons provision (wrongly, by the way). But should these industries be seen as the “commanding heights” of culture and medicine, or would a commons-based society create different commanding heights?

One hint at an answer comes from seeing informality as a kind of knowledge commons. “Constructed commons” that rely upon copyright licenses (the GPL for software, Creative Commons licenses for other content) and upon policy reforms are generally seen as the most significant, reputable knowledge commons. But just as many medieval commons relied upon informal community cooperation such as “beating the bounds” to defend themselves, so too many contemporary knowledge commons are powerful because they are based on informal social practice and even illegality.

Alan Toner of Ireland noted that commoners who resist enclosures often “start from a position of illegality” (a point made by Ugo Mattei in his keynote talk). It may be better to frankly acknowledge this reality, he said. After all, remix culture would be impossible without civil disobedience to various copyright laws that prohibit copying, sharing and re-use – even if free culture people sometimes have a problem with such disrespectful or illegal resistance. “Piracy” is often a precursor to new social standards and even new legal rules. “What is legal is contingent,” said Toner, because practices we spread now set traditions and norms for the future. We therefore must be conscious about the traditions we are creating. “The law is gray, so we must push new practices and organizations need to take greater risks,” eschewing the impulse to be “respectable” in order to become a “guiding star.”

Felix Stalder, a professor of digital culture at Zurich University of the Arts, agreed that civil disobedience and piracy are often precisely what is needed to create a “new normal,” which is what existing law is explicitly designed to prevent. “Piracy is building a de facto commons,” he added, “even if it is unaware of this fact. It is a laboratory of the new that can enrich our understanding of the commons.”

One way to secure the commons for the future, said Philippe Aigrain of Sopinspace, is to look at the specific challenges facing the commons rather than idealizing them or over-relying on existing precedents. As the Stream discussion notes concluded, “Given a new knowledge commons problem X, someone will state that we need a ‘copyleft for X.’ But is copyleft really effective at promoting and protecting the commons of software? What if we were to re-conceptualize copyleft as a prototype for effective, pro-commons regulation, rather than a hack on enclosure?”

Mike Linksvayer, the former chief technology officer of Creative Commons and the coordinator of the Knowledge Stream, noted that copyleft should be considered as “one way to force sharing of information, i.e., of ensuring that knowledge is in the commons. But there may be more effective and more appropriate regulatory mechanisms that could be used and demanded to protect the commons.”

One provocative speculation was that there is a greater threat to the commons than enclosure – and that is obscurity. Perhaps new forms of promotion are needed to protect the commons from irrelevance. It may also be that excluding knowledge that doesn’t really contribute to a commons is a good way to protect a commons. For example, projects like Wikipedia and Debian mandate that only free knowledge and software be used within their spaces.


Addendum

Thanks to everyone who participated in the knowledge stream, and especially to all who prepared and delivered deep and critical provocations in the very brief time allotted:
Bodó Balázs
Shun-Ling Chen
Rick Falkvinge
Marco Fioretti
Charlotte Hess
Gaëlle Krikorian
Glyn Moody
Mayo Fuster Morrell
Prabir Purkayastha
Felix Stalder
Talha Syed
Wouter Tebbens
Alan Toner
Chris Watkins

Also thanks to Mayo Fuster Morrell and Petros for helping coordinate during the stream, and, though neither could attend, to Tal Niv and Leonhard Dobusch for helpful conversations about the stream and its goals. I enjoyed working with and learned much from the other stream coordinators: Saki Bailey (nature), Heike Löschmann (labor & care), Ludwig Schuster (money), and especially Miguel Said Vieira (infrastructure; early collaboration kept both infrastructure and knowledge streams relatively focused); and stream keynote speaker Carolina Botero; and conference organizers/Commons Strategy Group members: David Bollier, Michel Bauwens, and Silke Helfrich (watch their post-conference interview).

See the conference wiki for much more documentation on each of the streams, the overall conference, and related resources.

If a much more academic and apolitical approach is of interest, note that the International Association for the Study of the Commons held its 2013 conference about 10 days after ECC. I believe there was not much overlap among attendees, one exception being Charlotte Hess (who also chaired a session on Governance of the Knowledge and Information Commons at the IASC conference).

ECC only strengthened my feeling (though of course I designed the knowledge stream to confirm my biases…) that we need a much more bold, deep, inclusive (in domains and methods of commoning, including informality, and in populations), critical (including self-critical; a theme broached by several of the people thanked above), and competitive (product: displacing enclosure; policy: putting equality & freedom first) knowledge commons movement, or vanguard of those movements. Or as Carolina Botero put it in the stream keynote: bring the commons in through the front door. I promise to contribute to this project.

ECC also made me reflect much more on commons and commoning as a “core paradigm” for understanding and participating in the arrangements studied by social scientists. My thoughts are half baked at best, but that will not stop me from making pronouncements, time willing.

Exit Skype loyalty

Thursday, July 18th, 2013

Why Doesn’t Skype Include Stronger Protections Against Eavesdropping?

At the EFF blog, Seth Schoen speculates that Microsoft could be under continuous secret court orders, which could possibly be interpreted to not allow it to add privacy-protecting features to Skype. Maybe, but this can’t explain why Skype did not protect users prior to its acquisition by Microsoft.

Schoen’s post closes with (emphasis in original):

That’s certainly not the case today, legally or technically—today, different kinds of calls offer drastically different levels of privacy and security. On some mobile networks, calls aren’t encrypted at all and hence are even broadcast over the air. Some Internet calls are encrypted in a way that protects users against some kinds of interception and not others. Some calls are encrypted with tools that include privacy and security features that Skype is lacking. Users deserve to understand exactly how the communications technologies they use do or don’t protect them. If Microsoft has reasons to think this situation is going to change, we need to know what those reasons are.

I’ll throw out some definite reasons users aren’t getting the protection and information they deserve (secret court orders may be additional reasons):

  • Features have costs (engineering, UX, support); why should a developer bother with any feature when:
  • Few users have expressed demand for such features through either exit or voice;
  • Advocates who believe users deserve protection and information have failed to adequately increase actual user and policy demand for such features;
  • Advocates and would-be providers of tools that give users what they deserve have failed to adequately deliver such tools (especially to market: few users know these tools exist).

In short, Skype has not protected users or informed them about the lack of protection because it faces near zero threat (regulatory or competitive) that would interest it in doing so.

EFF is doing as well and as much as any entity at generally informing users who probably already care a little bit (they’re reached by the EFF’s messages), and is a whole lot more deserving of support. Keep that voice up, but please always include exit instructions. Name “tools that include privacy and security features”; I see a screenshot of Pidgin in the EFF post, give them some love! Or better, Jitsi, the most feasible complete Skype replacement for all platforms. Otherwise your good efforts will be swamped by Skype user loyalty/network-effect lock-in.

Related argument: Realize Document Freedom Day; on topic: Free, open, secure and convenient communications: Can we finally replace Skype, Viber, Twitter and Facebook?

Economics and The Wealth of the Commons Conference

Thursday, May 9th, 2013

The Wealth of the Commons: A world beyond market & state is finally available online in its entirety.

I’ll post a review in the fullness of time, but for now I recommend reading the 73 essays in the book (mine is not the essay I’d contribute today, but I think it useful anyway) not primarily as critiques of market, state, their combination, or economics — it’s very difficult to say anything new concerning these dominant institutions. Instead read the essays as meditations, explorations, and provocations for expanding the spaces in human society — across a huge range of activity — which are ruled not via exclusivity (of property or state control) but are nonetheless governed to the extent needed to prevent depredation.

The benefits of moving to commons regimes might be characterized in any number of ways, e.g., reducing transaction costs, decreasing alienation and rent seeking, increasing autonomy and solidarity. Although a Nobel prize in economics has been awarded for research on certain kinds of commons, my feeling is that the class is severely under-characterized and under-valued by social scientists, and thus by almost everyone else. At the extreme we might consider all of civilization and nature as commons upon which our seemingly dominant institutions are merely froth.

Another thing to keep in mind when reading the book’s diverse essays is that the commons “paradigm” is pluralistic. I wonder to what extent reform of any institution, dominant or otherwise, away from capture and enclosure, toward the benefit and participation of all its constituents, might be characterized as commoning.

Whatever the scope of commoning, we don’t know how to do it very well. How to provision and govern resources, even knowledge, without exclusivity and control can boggle the mind. I suspect there is tremendous room to increase the freedom and equality of all humans through learning-by-doing (and researching) more activities in a commons-oriented way. One might say our lack of knowledge about the commons is a tragedy.

Later this month the Economics and the Commons Conference, organized by Wealth of the Commons editors David Bollier and Silke Helfrich, with Michel Bauwens, will bring together 240 researchers, practitioners, and advocates deeply enmeshed in various commons efforts. There will be overlapping streams on nature, work, money, infrastructure, and the one I’m coordinator for, knowledge.

I agreed to coordinate the stream because I found exchanges with Bollier and Helfrich stimulating (concerning my book essay, a panel on the problematic relationship of Creative Commons and commons, and subsequently); because I’m eager to consider knowledge commoning (e.g., free software, culture, open access, copyright reform) outside of its usual venues and endlessly repeated debates; because I feel that knowledge commons movements have failed dismally to communicate their pertinence to each other and to the rest of the world — thus I welcome the challenge and test case of communicating the pertinence of all knowledge commons movements to other self-described commoners — and finally, because I want to learn from them.

Here are the key themes I hope we can explore in the stream:

  • All commons as knowledge commons, e.g., the shared knowledge necessary to do anything in a commons-oriented way, easily forgotten once exclusivity and control take hold.
  • Knowledge enclosure and commoning throughout history, pre-dating copyright and patent, let alone computers.
  • How to think about and collaborate with contemporary knowledge commoners outside of the contractually constructed and legal reform paradigms, e.g., transparency and filesharing activists.
  • How can we characterize the value of knowledge commons in ways that can be critiqued and thus are possibly convincing? What would a knowledge commons research agenda look like?
  • If we accept moving the provisioning of almost all knowledge to the commons as an achievable and necessary goal, what strategies and controversies of existing knowledge commons movements (tuned to react against burgeoning enclosure and make incremental progress, while mostly accepting the dominant “intellectual property” discourse) might be reconsidered?

This may appear to be vastly too much material to cover in approximately 5 hours of dedicated stream sessions, but the methodology consists of brief interventions and debates, not long presentations, and the goal is provocation of new, more commons-oriented, and more cross-cutting strategies and collaborations among knowledge commoners and others, not firm conclusions.

I aim for plenty of stream documentation and followup, but to start the public conversation (the conference has not been publicized thus far due to a hard limit on attendees; now those are settled), I’m asking each of the “knowledge commoner” participants to recommend a resource (article, blog post, presentation, book, website…) that will inform the conversation on one or more of the themes above. Suggestions are welcome from everyone, attending or not; leave a comment or add to the wiki. Critiques of any of the above also wanted!

“Circulations of culture” in Poland and everyland

Thursday, March 21st, 2013

My comments on/included in The Circulations of Culture. On Social Distribution of Content. (PDF), an English translation of Obiegi kultury. Społeczna cyrkulacja treści.

Thanks to researcher Alek Tarkowski for asking for my comments. I enjoyed reading and thinking about the report, and recommend it to all.

A top-posted postscript to my comments:

I’ve been slightly keeping my eye out for offline circulations since reading the paper. I recently chatted with some people who live in the middle of the US outside a small city and found that people there swap artists’ recorded output on DVD-R. These are elderly people who have a poor understanding of how email or the web works, are on satellite or dialup, and wouldn’t be able to use a filesharing program at all, if they knew what one was. Of course this is just an anecdote, and maybe I’m seeing what I’m looking for.

With respect to informal cultures: document, understand, predict; policy last

This report is an important contribution to the all too new genre of research treating informal circulations of information as socially interesting phenomena to be accurately described rather than exploited for policy advocacy, whether pejorative or apologetic in nature. Such accurate descriptions may help society understand what constitutes good policy, but are still problematic to the extent they are created and consumed for policy reasons rather than as social research.

Even with accurate descriptions, these at best provide indications, but not proof, of the extent to which informal circulations substitute for and complement formal circulations. Nor are such questions, the bait of so much writing on the subject of filesharing, the most interesting for either policy or the study of culture. One of the most enjoyable and informative aspects of this study is its focus on the nuances of the culture and market in a particular country, and its historic context. If I may grotesquely exploit this context a bit: would anyone consider the most interesting social and economic aspects of informal circulations during the communist period to be the extent to which these circulations impacted the output and employment prospects of state propagandists?

I submit that the anthropology of informal circulations, in either context, is more interesting and challenging than conjecture about their effect on “industry”. But this anthropology may help build intuitions about the first order questions important for policy to consider (for example, access and freedom), even if it does not provide proofs of effects on entities that deeply influence policy.

It is my hope that the genre of this report will continue to grow rapidly, for informal circulations are changing rapidly. As they are hard to study, every temporal and cultural context not surveyed is a crucial link in the history of human culture that is lost forever.

Consider three observations made in this report:

  • “sharing digital content outside of the Internet is negligible”
  • “over the age of 50 the percentage of [active Internet] users in the population drops dramatically and we thus did not include them in the sample”
  • “the data collected in our country clearly points out that the development of new communications technologies has not resulted in a radical increase in bottom-up creativity”

Each of these will certainly change in interesting ways, e.g.:

  • “All culture on a thumb drive” each day comes closer to reality, with capacities increasing and prices falling quickly enough that differences in cultural context and infrastructure could swamp a 3x or even greater difference in wealth; in other words, physical sharing of digital content may become pertinent again, and it could easily happen first outside of the wealthiest geographies. (A rough sketch of the arithmetic follows this list.)
  • The current generations of active Internet users will continue to use the net as they age; will even younger generations be even more connected? And don’t discount slow but steadily increasing use by long-lived older generations. How will each of these affect existing, and create new, informal circulations?
  • Bottom-up creativity may well increase, but we also have to consider, especially with respect to informal circulations, that curation is a form of creativity. What is the future of peer-produced cultural relevance (popularity) and preservation?
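To make the thumb drive point concrete, here is a rough back-of-envelope sketch. Every figure in it (flash price per GB, file sizes, budgets) is a hypothetical illustration, not data from the report:

    # Back-of-envelope: once a cheap drive holds more media than anyone
    # actively consumes, a 3x spread in spending power matters less than
    # cultural context and infrastructure. All figures are hypothetical.
    PRICE_PER_GB = 0.50   # USD per GB of flash storage (assumed)
    SONG_MB = 5           # typical compressed song size in MB (assumed)
    MOVIE_GB = 1.5        # typical compressed movie size in GB (assumed)

    for budget_usd in (20, 60):  # a 3x spread in what buyers can spend
        gb = budget_usd / PRICE_PER_GB
        songs = gb * 1000 / SONG_MB
        movies = gb / MOVIE_GB
        print(f"${budget_usd}: {gb:.0f} GB holds ~{songs:,.0f} songs or ~{movies:,.0f} movies")

On these toy numbers, even the smaller budget already holds more music than most people actively listen to, which is the sense in which context and infrastructure, rather than a 3x wealth gap, become the binding constraints.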

Relatedly, if I may close with questions that may be interpreted as ones of policy: How does and will informality affect bottom-up production, including of relevance and preservation? How does informality affect the ability of researchers to document and understand the development of our culture? For it is impossible to fully escape the underlying social policy question by characterizing an activity with the relatively neutral term of “informal”: should this activity be legalized, or crushed?

Video hosting longevity experiment

Friday, October 12th, 2012

Some friends have been working on a to-be-federated media hosting web application called MediaGoblin. They’re running a crowdfunding campaign to pay for more developer time. Please join me in feeding the developers.

For irony and goading (noted and fixed before I could get this post up), an appreciation (but not implementation) of POSSE (probably complementary to federation, but a different take on not entirely the same problem), and a test of hosting (which includes identifiers) permanence, I uploaded their campaign video to various places. I’ve ordered them below by my guess at longevity, from high to low (* marks those I did not upload).
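An aside on why I’d guess the BitTorrent magnet link below fares well on identifier permanence: a magnet link names content by a hash of the torrent’s metadata rather than by a hosting location, so the identifier outlives any particular server. A minimal sketch of how such an identifier is derived, using placeholder values rather than the campaign video’s actual metadata:

    # Toy derivation of a BitTorrent "infohash" magnet link. The
    # identifier is the SHA-1 of the bencoded "info" dictionary from a
    # .torrent file, so it depends only on the content's metadata, not
    # on any hosting server. All values below are placeholders.
    import hashlib

    def bencode(value):
        # Bencode the subset of types that torrent files use.
        if isinstance(value, int):
            return b"i%de" % value
        if isinstance(value, bytes):
            return b"%d:%s" % (len(value), value)
        if isinstance(value, list):
            return b"l" + b"".join(bencode(v) for v in value) + b"e"
        if isinstance(value, dict):
            items = sorted(value.items())  # bencoded dict keys must be sorted
            return b"d" + b"".join(bencode(k) + bencode(v) for k, v in items) + b"e"
        raise TypeError("cannot bencode %r" % type(value))

    # A toy "info" dictionary; a real one comes from the .torrent file.
    info = {
        b"name": b"example-campaign-video.webm",  # placeholder file name
        b"piece length": 262144,                  # bytes per piece
        b"pieces": b"\x00" * 20,                  # placeholder piece hashes
        b"length": 1048576,                       # placeholder total size
    }

    infohash = hashlib.sha1(bencode(info)).hexdigest()
    print("magnet:?xt=urn:btih:" + infohash)

Any client that can find peers (via trackers or the DHT) can resolve the same identifier, so the link’s longevity depends on the swarm’s interest rather than on any single host.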

  1. Internet Archive.
  2. YouTube.
  3. BitTorrent magnet link, pirate bay listing.
  4. Commented out FSF static hosting found in source of the campaign page.*
  5. MediaGoblin instance at goblin.se.*
  6. My MediaGoblin instance.
  7. CDN hosted files found in source of the campaign page.*