Post Public Goods

Snowdrift

Sunday, November 30th, 2014

Co-founders David Thomas and Aaron Wolf (the Woz and Jobs of the project) have been working on Snowdrift.coop for at least 2 years (project announcement thread). I’ve been following their progress since then, and have occasionally offered advice (including on the linked thread).

Snowdrift is a crowdfunding platform for ongoing (as opposed to one-off) funding, with scaled (as opposed to thresholded or unqualified) contributions, exclusively for free/open/libre (as opposed to unconditioned, mostly non-open) outputs. These features raise my interest:

  • I’ve been eager to see more nuanced crowdfunding arrangements tried since before relatively simple one-off threshold systems became popular — probably in part due to their simplicity. Snowdrift’s mechanism is both interesting, and has been criticized (see linked thread) for its complexity. It’ll be fun to see it tried out, and simplified, or even made more complex, as warranted.
  • If Snowdrift were to become a dominant platform for funding free/libre/open projects, scaling (contributors increase their contributions as more people contribute) could help create clear winners among the proliferation of such projects (see the pledge-model sketch after this list).
  • Today’s crowdfunding platforms were influenced (by now, mostly indirectly) by Kelsey and Schneier’s “Street Performer Protocol” paper, which set out to devise an alternative funding system for public domain works. But most crowdfunded works are not in the commons, indicating a need for better coordination of street patrons.
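To make the contrast concrete, here is a toy sketch (in Python, assuming nothing about Snowdrift’s implementation) of the two pledge models mentioned above: a one-off threshold campaign in the Street Performer Protocol / Kickstarter mold, versus an ongoing scaled pledge in which each patron’s monthly contribution grows with the number of fellow patrons. The per-patron rate and the simple quadratic matching rule are illustrative assumptions of mine, not Snowdrift’s actual formula.

  # Toy contrast of pledge models; the rates and rules are made-up illustrations,
  # not Snowdrift's actual mechanism.

  def threshold_campaign(pledges, goal):
      """One-off: collect every pledge only if the goal is met, else collect nothing."""
      total = sum(pledges)
      return total if total >= goal else 0.0

  def scaled_monthly_total(patrons, rate_per_patron=0.001):
      """Ongoing: each patron gives rate * number of patrons each month,
      so the pool grows quadratically as patrons join."""
      return patrons * (rate_per_patron * patrons)

  if __name__ == "__main__":
      print(threshold_campaign([25, 50, 100], goal=200))  # 0.0: goal unmet, nothing collected
      print(threshold_campaign([25, 50, 150], goal=200))  # 225: goal met, collected once
      for n in (100, 1000, 10000):
          print(n, scaled_monthly_total(n))                # 10.0, 1000.0, 100000.0 per month

The toy numbers show the difference in shape: a threshold campaign pays out once or not at all, while a scaled pledge pool grows superlinearly as patrons join, which is the property that could help create clear winners.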

Snowdrift has additional interesting features, including organization as a cooperative, an honor code that goes beyond free/libre/open requirements, and being developed in the programming language Haskell. I’ve barely mentioned these things in the past, but they’re all interesting — alternative institutional arrangements, post-software-freedom, safety. The Snowdrift wiki has pages covering many of these topics and more in depth. They’ve also generally chosen to develop an integrated platform rather than to use existing software (e.g., for wiki, discussion, issues, mailing list) except for revision control hosting. Clearly Snowdrift is not trying to innovate in only one dimension.

Now, Snowdrift is doing a “traditional” one-off crowdfunding drive in order to get itself to production, such that the project and other free/libre/open projects can be funded in an ongoing fashion using the Snowdrift platform and mechanism.

Donate, share, and critique if you’re a fan of interesting mechanisms and freedom.

Open policy for a secure Internet-N-Life

Saturday, June 28th, 2014

In (In)Security in Home Embedded Devices, Jim Gettys says software needs to be maintained for decades considering where it is being deployed (e.g., embedded in products with multi-decade lifetimes, such as buildings) and the criticality of some of that software, an unpredictable attribute — a product might become unplanned “infrastructure”, for example, if it is widely deployed and other things come to depend on it. Without maintenance, including deployment of updates in the field, software (and thus the systems it is embedded in) becomes increasingly insecure as vulnerabilities are discovered (he cites a honeymoon period enjoyed by new systems).

This need for long-term maintenance and field deployment implies open source software and devices that users can upgrade — maintenance needs to continue beyond the expected life of any product or organization. “Upgrade” can also mean “replace” — perhaps some kinds of products should be more modular and with open designs so that parts that are themselves embedded systems can be swapped out. (Gettys didn’t mention, but replacement can be total. Perhaps “planned obsolescence” and “throwaway culture” have some security benefits. I suspect the response would be that many things continue to be used for a long time after they were planned to be obsolete and most of their production run siblings are discarded.)

But these practices are currently rare. Product developers do not demand source from chip and other hardware vendors, and thus ship products with “binary blob” hardware drivers for the Linux kernel which cannot be maintained, often based on a kernel years out of date when the product ships. There is a near-monoculture of the Linux kernel across embedded systems, increasing the security threat. Many problems do not depend on hardware vendor cooperation, ranging from unintentionally or lazily not providing source needed for the rest of the system, to intentionally shipping proprietary software, to intentionally locking down devices to prevent user updates. Product customers do not demand long-term secure devices from product developers. There is little effort to fund commons-oriented embedded development (in contrast with Linux kernel and other systems development for servers, which many big companies fund).

Gettys is focused on embedded software in network devices (e.g., routers) as network access is critical infrastructure much else depends on, including the problem at hand: without network access, many other systems cannot feasibly be updated. He’s working on CeroWrt, a cutting-edge version of the OpenWrt firmware, either of which is several years ahead of what typically ships on routers. A meme Gettys wishes to spread (the earliest instance of which I could find is on cerowrt-devel, with a harsh example coming the next week):

Friends don’t let friends run factory firmware.

Cute. This reminds me of something a friend said in a group discussion that touched on security and embedded-in-body (or perhaps it was mind-embedded-in) systems, along the lines of “I wouldn’t run (on) an insecure system.” Or malware would give you a bad trip.

But I’m ambivalent. Most people, thus most friends, don’t know what factory firmware is. Systems need to be much more secure (for the long term, including all that implies) as shipped. Elite friend advice could help drive demand for better systems, but I doubt “just say no” will help much — its track record for altering mass outcomes, e.g., with respect to proprietary software or formats, seems very poor.

In Q&A someone asked about centralized cloud silos. Gettys doesn’t like them, but said without long-term secure alternatives that can be deployed and maintained by everyone there isn’t much hope. I agree.

You may recognize open source software and devices that users can upgrade above as roughly the conditions of GPL-3.0. Gettys mentioned this and noted:

  • It isn’t clear that copyright-based conditions are an effective mechanism for enforcing these conditions. (One reason I say copyleft is a prototype for more appropriate regulation.)
  • Of “life, liberty, and pursuit of happiness”, free software has emphasized the latter two, but nobody realized how important free software would be for living one’s life given the extent to which one interacts with and depends on (often embedded) software. In my experience people have realized this for many years, but it should indeed move to the fore.

Near the end Gettys asked what role industry and government should have in moving toward safer systems (and skip the “home” qualifier in the talk title; these considerations are at least as important for institutions and large-scale infrastructure). One answer might be in open policy. Public, publicly-interested, and otherwise coordinated funders and purchasers need to be convinced there is a problem and that it makes sense for them to demand their resources help shift the market. The Free Software Foundation’s Respects Your Freedom criteria (ignoring the “public relations” item) are a good start on what should be demanded for embedded systems.

Obviously there’s a role for developers too. Gettys asked how to get beyond the near Linux kernel monoculture, mentioning BSD. My ignorant wish is that developers wanting to break the monoculture instead try to build systems using better tools, at least better languages (not that any system will reduce the need for security in depth).

Here’s to a universal, secure, and resilient web and technium. Yes, these features cost. But I’m increasingly convinced that humans underinvest in security (not only computer, and at every level), especially in making sure investments aren’t theater or worse.

Without Intellectual Property Day

Saturday, April 26th, 2014

Without Intellectual Property Day by Parker Higgins of the EFF is quite good, and released under CC-BY. Clearly deserving of adaptation. Mine below, followed by a diff.

April 26 is the day marked each year since 2000 by the World Intellectual Property Organization (WIPO) as “World Intellectual Property Day”, in which WIPO tries to associate its worldwide pushes for more enclosure with creativity.

Celebrating creativity is a good thing, but when you’re a hammer, everything looks like a nail. For the World Intellectual Property Organization, it may seem like creativity and “intellectual property” are inextricably linked. That’s not the case. In the spirit of adding to the conversation, let’s honor all the creativity and industry that is happening without a dependence on a system of intellectual property.

There’s an important reason to encourage and promote creativity outside the bounds of increasingly restrictive laws: to the extent such creativity succeeds, it helps us re-imagine the range of desirable policy and reduces the resources available to enclosure industries to lobby for protectionism — in sum shifting what is politically possible. It’s incumbent on all of us who want to encourage creativity to continue to explore and utilize structures that reward creators without also restricting speech.

Comedy, Fashion, Cooking, Magic, and More

In the areas in which intellectual freedom is not typically infringed, there is tremendous innovation and consistent creativity outside of the intellectual property system. Chefs create new dishes, designers imagine new styles, comedians write new jokes, all without a legal enforcement mechanism to restrict others from learning and building on them.

There may be informal systems that discourage copying—the comedy community, to take one example, will call out people who are deemed to be ripping off material—but for the most part these work without expensive litigation, threats of ruinous fines, and the creation of systems of surveillance and censorship.

Contributing to a Creative Commons

The free software movement pioneered the practice of creating digital media that can legally and freely be shared and expanded, building a commons. The digital commons idea is being pushed in more areas than ever before, including culture, education, government, hardware design, and research. There are some projects we’re all familiar with — Wikipedia is perhaps the most prominent, creating an expansive and continuously updated encyclopedia that is freely accessible under permissive terms to the entire world.

Focusing on this year’s World IP Day theme of movies, there have been some impressive contributions to the commons over the years. Nina Paley’s feature animation Sita Sings The Blues, which she released into the public domain, has spread widely, inspired more work, and earned her money. The short films from the Blender Foundation have demonstrated cutting-edge computer graphics made with free software and, though they’ve sometimes been on the receiving end of bogus copyright takedowns, have been watched many millions of times.

Kickstarting and Threshold Pledges

Finally, crowdfunding platforms like Kickstarter and Indie-Go-Go have made a major splash in the last few years as another fundraising model that can complement, or even replace, copyright exclusivity. These platforms build on the theoretical framework laid out by scholars like John Kelsey and Bruce Schneier in the influential “Street Performer Protocol” paper, which set out to devise an alternative funding system for public domain works. But most crowdfunded works are not in the commons, indicating a need for better coordination of street patrons.

Looking at movies in particular: Kickstarter alone has enabled hundreds of millions of dollars of pledges, hundreds of theatrical releases, and seven Oscar-nominated films (including Inocente, winner of the Best Documentary Short category). Blender Foundation is currently crowdfunding its first feature length film, Gooseberry.

***

The conceit of copyright and other “intellectual property” systems is that they can be calibrated to promote the progress of science and the useful arts. But the reality of these systems is corruption and rent seeking, not calibration. The cost is not just less creativity and innovation, but less freedom and equality.

It’s clear from real world examples that other systems can achieve the goal of promoting creativity, progress, and innovation. We must continue to push for both practice and policy that favors these systems, ultimately rendering “intellectual property” a baffling anachronism. In a good future, a policy-oriented celebration of creativity and innovation would be called World Intellectual Freedom Day.

wdiff -n eff-wipd.html eff-wipd-edit.html |colordiff |aha -w > eff-wipd-diff.html
[-<p>Today, April 26,-]{+<p>April 26+} is the day marked each year since 2000 [-as "Intellectual Property Day"-] by the <a href="https://www.eff.org/issues/wipo">World Intellectual Property Organization [-(WIPO)</a>. There are many areas where EFF has not historically agreed with WIPO,-] {+(WIPO)</a> as "World Intellectual Property Day", in+} which [-has traditionally pushed-] {+WIPO tries to associate its <a href="https://www.eff.org/deeplinks/2013/03/ustr-secret-copyright-agreements-worldwide">worldwide pushes+} for more [-restrictive agreements and served as a venue for <a href="https://www.eff.org/deeplinks/2013/03/ustr-secret-copyright-agreements-worldwide">domestic policy laundering</a>, but we agree that celebrating-] {+enclosure</a> with creativity.</p>+}
{+<p>Celebrating+} creativity is a good [-thing.</p>-]
[-<p>As the saying goes, though:-] {+thing, but+} when you're a hammer, everything looks like a nail. For the World Intellectual Property Organization, it may seem like creativity and <a href="https://www.eff.org/issues/intellectual-property/the-term">"intellectual property"</a> are inextricably linked. That's not the case. In the spirit of adding to the conversation, [-we'd like to-] {+let's+} honor all the creativity and industry that is happening <i>without</i> a dependence on a system intellectual property.</p>
<p>There's an important reason to encourage {+and promote+} creativity outside the bounds of increasingly restrictive [-laws, too. As Ninth Circuit Chief Justice Alex Kozinski eloquently explained in <a href="http://notabug.com/kozinski/whitedissent">a powerful dissent</a> some 20 years ago, pushing only for more IP restrictions tips a delicate balance against creativity:</p>-]
[-<blockquote><p>Overprotecting intellectual property is as harmful as underprotecting it. Creativity is impossible without a rich public domain. Nothing today, likely nothing since we tamed fire, is genuinely new: Culture, like science and technology, grows by accretion, each new creator building on-] {+laws: to+} the [-works-] {+extent such creativity succeeds, it helps us re-imagine the range+} of [-those who came before. Overprotection stifles the very creative forces it's supposed-] {+desirable policy <i>and</i> reduces the resources available+} to [-nurture.</p></blockquote>-]
[-<p>It's-] {+enclosure industries to lobby for protectionism -- in sum shifting what is politically possible. It's+} incumbent on all of us who want to encourage creativity to continue to explore {+and utilize+} structures that reward creators without also restricting speech.</p>
<h3>Comedy, Fashion, Cooking, Magic, and More</h3>
<p>In the areas [-known as copyright's "negative spaces,"-] {+in which intellectual freedom is not typically infringed,+} there is tremendous innovation and consistent creativity outside of the intellectual property system. Chefs create new dishes, designers imagine new styles, comedians write new jokes, all without a legal enforcement mechanism to restrict others from learning and building on them.</p>
<p>There may be informal systems that discourage copying—the comedy community, to take one example, <a href="http://www.slate.com/articles/arts/culturebox/features/2014/the_humor_code/joke_theft_can_a_comedian_sue_if_someone_steals_his_material.html">will call out people</a> who are deemed to be ripping off material—but for the most part these work without expensive litigation, threats of ruinous fines, and the creation of systems [-that can be abused to silence lawful non-infringing speech.</p>-] {+of surveillance and censorship.</p>+}
<h3>Contributing to a Creative Commons</h3>
<p>The free software movement [-may have popularized-] {+pioneered+} the [-idea-] {+practice+} of creating digital media that can legally and freely be shared and expanded, [-but the free culture movement has pushed the-] {+building a commons. The digital commons+} idea [-further-] {+is being pushed in more areas+} than ever [-before.-] {+before, including culture, education, government, hardware design, and research.+} There are some projects we're all familiar [-with—Wikipedia-] {+with -- Wikipedia+} is perhaps the most prominent, creating an expansive and continuously updated encyclopedia that is freely accessible under permissive terms to the entire world.</p>
<p>Focusing on this year's World IP Day theme of movies, there have been some impressive contributions the commons over the years. Nina Paley's feature animation <i><a href="http://www.sitasingstheblues.com/">Sita Sings The Blues</a></i>, which she released into the public domain, has spread widely, inspired more work, and earned her money. The <a href="http://www.techdirt.com/articles/20101002/20174711259/open-source-animated-movie-shows-what-can-be-done-today.shtml">short films from the Blender Foundation</a> have demonstrated cutting-edge computer graphics made with free software and, though they've sometimes been on <a href="http://www.techdirt.com/articles/20140406/07212626819/sony-youtube-take-down-sintel-blenders-open-source-creative-commons-crowdfunded-masterpiece.shtml">the receiving end of bogus copyright takedowns</a>, have been watched many millions of times.</p>
<h3>Kickstarting and Threshold Pledges</h3>
<p>Finally, crowdfunding platforms like Kickstarter and Indie-Go-Go have made a major splash in the last few years as another fundraising model that can complement, or even replace, [-traditional-] copyright exclusivity. These platforms build on theoretical framework laid out by scholars like John Kelsey and [-EFF board member-] Bruce Schneier in <a href="https://www.schneier.com/paper-street-performer.html">the influential "Street Performer Protocol" paper</a>, which set out to devise an alternative funding system for public [-works.</p>-] {+domain works. But most crowdfunded works are not in the commons, indicating an need for better <a href="https://gondwanaland.com/mlog/2013/08/10/street-patrons-missing-coordination-protocol/">coordination of street patrons</a>.</p>+}
<p>Looking at movies in particular: Kickstarter alone has <a href="https://www.kickstarter.com/blog/a-big-day-for-film">enabled hundreds of millions of dollars of pledges</a>, hundreds of theatrical releases, and seven Oscar-nominated films (including <i>Inocente</i>, winner of the Best Documentary Short category). [-Along with other-] {+Blender Foundation is currently+} crowdfunding [-sites, it has allowed the development of niche projects that might never have been possible under the traditional copyright system.&nbsp;</p>-] {+its first feature length film, <em><a href="http://gooseberry.blender.org/">Gooseberry</a></em>.</p>+}
<h3>***</h3>
[-<p>As the Constitution tells us,-]
{+<p>The conceit of+} copyright and other "intellectual property" systems [-can, when-] {+is that they can be+} calibrated [-correctly,-] {+to+} promote the progress of science and the useful arts. [-We continue to work pushing for a balanced law that would better achieve that end.</p>-]
[-<p>But it's also-] {+But the reality of these systems is corruption and rent seeking, not calibration. The cost is not just less creativity and innovation, but less freedom and <a href="https://gondwanaland.com/mlog/2014/01/30/tech-wealth-ip/">equality</a>.</p>+}
{+<p>It's+} clear from [-these-] real world examples that other systems can achieve [-that-] {+the+} goal [-as well. Promoting-] {+of promoting+} creativity, progress, and [-innovation is an incredibly valuable mission—it's good to know that it doesn't have-] {+innovation. We must continue+} to [-come through systems-] {+push for both practice and <a href="https://gondwanaland.com/mlog/2014/02/09/freedoms-commons/#regulators">policy+} that [-can-] {+favors these systems</a>, ultimately rendering "intellectual property" a baffling anachronism. In a good future, a policy-oriented celebration of creativity and innovaion would+} be [-abused to stifle valuable speech.</p>-] {+called World Intellectual Freedom Day.</p>+}

Gov[ernance]Lab impressions

Friday, March 7th, 2014

First, two excerpts of my previous posts to explain my rationale for this one. 10 months ago:

I wonder the extent to which reform of any institution, dominant or otherwise, away from capture and enclosure, toward the benefit and participation of all its constituents, might be characterized as commoning?

Whatever the scope of commoning, we don’t know how to do it very well. How to provision and govern resources, even knowledge, without exclusivity and control, can boggle the mind. I suspect there is tremendous room to increase the freedom and equality of all humans through learning-by-doing (and researching) more activities in a commons-orientated way. One might say our lack of knowledge about the commons is a tragedy.

26 months ago:

Other than envious destruction of power (the relevant definition and causes of which being tenuous, making effective action much harder) and gradual construction of alternatives, how can one be a democrat? I suspect more accurate information and more randomness are important — I’ll sometimes express this very specifically as enthusiasm for futarchy and sortition — but I’m also interested in whatever small increases in accurate information and randomness might be feasible, at every scale and granularity — global governance to small organizations, event probabilities to empirically validated practices.

I read about the Governance Lab @ NYU (GovLab) in a forward of a press release:

Combining empirical research with real-world experiments, the Research Network will study what happens when governments and institutions open themselves to diverse participation, pursue collaborative problem-solving, and seek input and expertise from a range of people.

That sounded interesting, perhaps not deceivingly — as I browsed the site, open tabs accumulated. Notes on some of those follow.

GovLab’s hypothesis:

When institutions open themselves to diverse participation and collaborative problem solving, they become more effective and the decisions they make are more legitimate.

I like this coupling of effectiveness and legitimacy. Another way of saying politics isn’t about policy is that governance isn’t about effectiveness, but about legitimizing power. I used to scoff at the concept of legitimacy, and my mind still boggles at arrangements passing as “legitimate” that enable mass murder, torture, and incarceration. But our arrangements are incredibly path dependent and hard to improve; now I try to charitably consider legitimacy a very useful shorthand for arrangements that have some widely understood and accepted level of effectiveness. Somewhat less charitably: at least they’ve survived, and one can do a lot worse than copying survivors. Arrangements based on open and diverse participation and collaborative problem solving are hard to legitimate: not only do they undermine what legitimacy is often really about, it is hard to see how they can work in theory or practice, relative to hierarchical command and control. Explicitly tackling effectiveness and legitimacy separately and together might be more useful than assuming one implies the other, or ignoring one of them. Refutation of the hypothesis would also be useful: many people could refocus on increasing the effectiveness and legitimacy of hierarchical, closed systems.

If We Only Knew:

What are the essential questions that if answered could help accelerate the transformation of how we solve public problems and provide for public goods?

The list of questions isn’t that impressive, but not bad either. The idea that such a list should be articulated is great. Every project ought maintain such a list of essential questions pertinent to the project’s ends!

Proposal 13 for ICANN: Provide an Adjudication Function by Establishing “Citizen” Juries (emphasis in original):

As one means to enhance accountability – through greater engagement with the global public during decision-making and through increased oversight of ICANN officials after the fact – ICANN could pilot the use of randomly assigned small public groups of individuals to whom staff and volunteer officials would be required to report over a given time period (i.e. “citizen” juries). The Panel proposes citizen juries rather than a court system, namely because these juries are lightweight, highly democratic and require limited bureaucracy. It is not to the exclusion of other proposals for adjudicatory mechanisms.

Anyone interested in random selection and juries has to be at least a little interesting, and on the right track. Or so I’ve thought since hearing about the idea of science courts and whatever my first encounter with sortition advocacy was (forgotten, but see most recent), both long ago.

Quote in a quote:

“The largest factor in predicting group intelligence was the equality of conversational turn-taking.”

What does that say about:

  • Mailing lists and similar fora used by projects and organizations, often dominated by loudmouths (to say nothing of meetings dominated by high-status talkers);
  • Mass media, including social media dominated by power law winners?

Surely it isn’t pretty for the intelligence of relevant groups. But perhaps it provides impetus to actually implement measures often discussed when a forum gets out of control (e.g., with volume or flamewars), such as automated throttling, among many other things. On the bright side, there could be lots of low hanging fruit. On the dim side, I’m surely making extrapolations (second bullet especially) unsupported by research I haven’t read!
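For illustration only, a minimal sketch (in Python) of the kind of measures gestured at above: a Gini-style score of how evenly a forum’s messages are spread across authors, and a per-author daily cap as crude automated throttling. Both the metric and the cap value are assumptions of mine, not anything prescribed by the research quoted above or by any particular mailing list software.

  from collections import Counter

  def turn_share_gini(messages_by_author):
      """Gini coefficient of per-author message counts:
      0 means perfectly equal turn-taking; values near 1 mean a few authors dominate."""
      counts = sorted(messages_by_author.values())
      n, total = len(counts), sum(counts)
      if n == 0 or total == 0:
          return 0.0
      cumulative = sum((i + 1) * c for i, c in enumerate(counts))
      return (2 * cumulative) / (n * total) - (n + 1) / n

  def allowed_to_post(author, todays_messages, daily_cap=5):
      """Crude throttle: reject a post once an author hits the daily cap."""
      return Counter(todays_messages)[author] < daily_cap

  if __name__ == "__main__":
      balanced = {"a": 5, "b": 4, "c": 5, "d": 6}
      dominated = {"a": 40, "b": 2, "c": 1, "d": 1}
      print(turn_share_gini(balanced))        # close to 0: even turn-taking
      print(turn_share_gini(dominated))       # much higher: one loudmouth dominates
      print(allowed_to_post("a", ["a"] * 5))  # False: throttled for the day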

Coordinating the Commons: Diversity & Dynamics in Open Collaborations, excerpt from a dissertation:

Learning from Wikipedia’s successes and failures can help researchers and designers understand how to support open collaborations in other domains — such as Free/Libre Open Source Software, Citizen Science, and Citizen Journalism. […] To inquire further, I have designed a new editor peer support space, the Wikipedia Teahouse, based on the findings from my empirical studies. The Teahouse is a volunteer-driven project that provides a welcoming and engaging environment in which new editors can learn how to be productive members of the Wikipedia community, with the goal of increasing the number and diversity of newcomers who go on to make substantial contributions to Wikipedia.

Interesting for a few reasons:

  • I like the title, cf. commons coordination (though I was primarily thinking of inter-project/movement coordination);
  • OpenHatchy;
  • I like the further inquiry’s usefulness for research and the researched community;
  • Improving the effectiveness of mass collaboration is important, including for its policy effects.

Back to the press release:

Support for the Network from Google.org will be used to build technology platforms to solve problems more openly and to run agile, real-world, empirical experiments with institutional partners such as governments and NGOs to discover what can enhance collaboration and decision-making in the public interest.

I hope those technology platforms will be open to audit and improvement by the public, i.e., free/open source software. GovLab’s site being under an open license (CC-BY-SA) could be a small positive indicator (perhaps not rising to the level of an essential question for anyone, but I do wonder how release and use of “content” or “data” under an open license correlates with release and use of open source software, if there’s causality in either direction, and if there could be interventions that would usefully reinforce any such).

I’m glad that NGOs are a target. Seems it ought be easier to adopt and spread governance innovation among NGOs (and businesses) than among governments, if only because there’s more turnover. But I’m not impressed. I imagine this could be due, among other things, to my ignorance: perhaps over a reasonable time period non-state governance has improved more rapidly than state governance, or to non-state governance being even less about effectiveness and more about power than is state governance, or to governance being really unimportant for survival, thus a random walk.

Something related I’ll never get around to blogging separately: the 2-year-old New Ambiguity of ‘Open Government’ (summary), concerning the danger of allowing the term to denote a government that publishes data, even merely politically insensitive data around service provision, rather than politically sensitive transparency and ability to demand accountability. I agree about the danger. The authors recommend maintaining distinctions between accountability, service provision, and adaptability of data. I find these distinctions aren’t often made explicit, and perhaps they shouldn’t be: it’d be a pain. But on the activist side, I think most really are pushing for politically sensitive transparency (and some focused on data about service provision might fairly argue such is often deeply political); certainly none want open data to be a means of openwashing. For one data point, I recommend the Oakland chapter of Beyond Transparency. Finally, Stop Secret Contracts seems like a new campaign entirely oriented toward politically sensitive transparency and accountability rather than data release. I hope they get beyond petitions, but I signed.

Sleepwalking past Freedom’s Commons, or how peer production could increase democracy, equality, freedom, and innovation, all of them!

Sunday, February 9th, 2014

2007:

The most interesting parts of Benkler’s The Wealth of Networks concern how peer production facilitates liberal values. I’ll blog a review in the fullness of time.

In lieu of that which may never come, some motivated notes on Coase’s Penguin, or Linux and the Nature of the Firm (2002, 78 pages) and Freedom in the Commons: Towards a Political Economy of Information (2003, 32 pages; based on a 2002 lecture). A friend wanted to trial a book group with the former. Re-reading that led me to the latter, which I hadn’t read before. Reading them together, or even just the latter, might be a good alternative to reading The Wealth of Networks: How Social Production Transforms Markets and Freedom (2006, 473 pages).

As might be expected from decade-plus-old internet research, some of the examples in the papers and book are a bit stale, but sadly their fundamental challenge remains largely unacknowledged, and met only as a byproduct. I would love to be convinced otherwise. Is the challenge (or my extrapolation) wrong, unimportant, or being met satisfactorily?

Excerpts from Freedom in the Commons (emphasis added by me in all quotes that follow):

[Commons-based peer production] opens a range of new opportunities for pursuing core political values of liberal societies—democracy, individual freedom, and social justice. These values provide three vectors of political morality along which the shape and dimensions of any liberal society can be plotted. Because, however, they are often contradictory rather than complementary, the pursuit of each of these values places certain limits on how we conceive of and pursue the others, leading different liberal societies to respect them in different patterns.

An underlying efficient limit on how we can pursue any mix of arrangements to implement our commitments to democracy, autonomy, and equality, however, has been the pursuit of productivity and growth.

[Commons-based peer production] can move the boundaries of liberty along all three vectors of liberal political morality.

There is no benevolent historical force, however, that will inexorably lead the technological-economic moment to develop towards an open, diverse, liberal equilibrium. If the transformation occurs, it will lead to substantial redistribution of power and money from the twentieth-century, industrial producers of information, culture, and communications—like Hollywood, the recording industry, and the telecommunications giants—to a widely diffuse population around the globe. None of the industrial giants of yore are going to take this redistribution lying down. Technology will not overcome their resistance through some insurmountable progressive impulse. The reorganization of production, and the advances it can bring in democracy, autonomy, and social justice will emerge, if it emerges, only as a result of social and political action. To make it possible, it is crucial that we develop an understanding of what is at stake and what are the possible avenues for social and political action. But I have no illusions, and offer no reassurances, that any of this will in fact come to pass. I can only say that without an effort to focus our attention on what matters, the smoke and mirrors of flashy toys and more convenient shopping will be as enlightening as Aldous Huxley’s soma and feelies, and as socially constructive as his orgy porgy.

Let us think, then, of our being thrust into this moment as a challenge. We are in the midst of a technological, economic, and organizational transformation that allows us to renegotiate the terms of freedom, justice, and productivity in the information society. How we shall live in this new environment will largely depend on policy choices that we will make over the next decade or two. To be able to understand these choices, to be able to make them well, we must understand that they are part of a social and political choice—a choice about how to be free, equal, and productive human beings under a new set of technological and economic conditions. As economic policy, letting yesterday’s winners dictate the terms of economic competition tomorrow is disastrous. As social policy, missing an opportunity to enrich democracy, freedom, and equality in our society, while maintaining or even enhancing our productivity, is unforgivable.

Although the claim that the Internet leads to some form or another of “decentralization” is not new, the fundamental role played in this transformation by the emergence of non-market, nonproprietary production and distribution is often over-looked, if not willfully ignored.

First, if the networked information economy is permitted to emerge from the institutional battle, it will enable an outward shift of the limits that productivity places on the political imagination. Second, a society committed to any positive combination of the three values needs to adopt robust policies to facilitate these modes of production, because facilitating these modes of production does not represent a choice between productivity and liberal values, but rather an opportunity actually to relax the efficient limit on the plausible set of political arrangements available given the constraints of productivity.

We are at a moment in our history at which the terms of freedom and justice are up for grabs. We have an opportunity to improve the way we govern ourselves—both as members of communities and as autonomous individuals. We have an opportunity to be more just at the very core of our economic system. The practical steps we must take to reshape the boundaries of the possible in political morality and to improve the pattern of liberal society will likely improve productivity and growth through greater innovation and creativity. Instead of seizing these opportunities, however, we are sleepwalking.

What arrangements favor reorganization towards commons-based peer production? From Coase’s Penguin:

This suggests that peer production will thrive where projects have three characteristics. First, they must be modular. That is, they must be divisible into components, or modules, each of which can be produced independently of the production of the others. This enables production to be incremental and asynchronous, pooling the efforts of different people, with different capabilities, who are available at different times. Second, the granularity of the modules is important and refers to the sizes of the project’s modules. For a peer production process to pool successfully a relatively large number of contributors, the modules should be predominately fine-grained, or small in size. This allows the project to capture contributions from large numbers of contributors whose motivation levels will not sustain anything more than small efforts toward the project. Novels, for example, at least those that look like our current conception of a novel, are likely to prove resistant to peer production. In addition, a project will likely be more efficient if it can accommodate variously sized contributions. Heterogeneous granularity will allow people with different levels of motivation to collaborate by making smaller- or larger-grained contributions, consistent with their levels of motivation. Third, and finally, a successful peer production enterprise must have low-cost integration, which includes both quality control over the modules and a mechanism for integrating the contributions into the finished product.

Regulators concerned with fostering innovation may better direct their efforts toward providing the institutional tools that would help thousands of people to collaborate without appropriating their joint product, making the information they produce freely available rather than spending their efforts to increase the scope and sophistication of the mechanisms for private appropriation of this public good as they now do.

That we cannot fully understand a phenomenon does not mean that it does not exist. That a seemingly growing phenomenon refuses to fit our longstanding perceptions of how people behave and how economic growth occurs counsels closer attention, not studied indifference and ignorance.  Commons-based peer production presents a fascinating phenomenon that could allow us to tap substantially underutilized reserves of human creative effort. It is of central importance that we not squelch peer production, but that we create the institutional conditions needed for it to flourish.

There’s been some progress on institutional tools (i.e., policy arrangements writ large, the result of “political action” above) in the 11 or so years since (e.g., Open Access mandates), but not nearly enough to outweigh global ratcheting of intellectual freedom infringing regimes, despite the occasional success of rearguard actions against such ratcheting. Neither these rearguard actions, nor mainstream (nor reformist) discussion of “reform” put commons at the center of their concerns. The best we can expect from this sleepwalking is to muddle through, with policy protecting and promoting commons where such is coincidentally aligned with some industrial interest (often simplified to “Google” in the past several years, but that won’t last forever).

My extrapolation (again, tell me if facile or wrong): shifting production arrangements so as to favor commons-based peer production is as important as, complementary to, and almost necessary for positive policy change. Commons-based product competition simultaneously changes the facts on the ground, the range of policies imaginable, and potentially creates a commons “industrial” interest group which is recognizably important to regulators and makes commons-based peer production favoring policy central to its demands — the likely Wikimedia response to the European Commission copyright consultation is a hopeful example.

There has been lots of progress on improving commons-based peer production (e.g., some trends), but also not nearly enough to keep up with proprietary innovation, particularly lacking and missing huge opportunities where proprietary incumbents’ real advantages sit — not production per se, but funding and distribution/marketing/cultural relevance making. Improving commons-based peer production, shifting the commanding heights (i.e., Hollywood premium video and massively expensive and captured pharma regulatory apparatus) to forms more amenable to commons-based peer production, and expanding the scope of commons-based peer production to include funding and relevance making are among the most potent political projects of our time.

Wake up. ^_^

Hierarchy of mechanisms for limiting copyright and copyright-like barriers to use of Public Sector Information, or More or Less Universal Government License(s)

Sunday, November 24th, 2013

This sketch is in part motivated by a massive proliferation of copyright and copyright-like licenses for government/public sector information, e.g., sub- and sub-sub-national jurisdiction licenses and sector- and jurisdiction-specific licenses intended to combat license proliferation within a sector within a jurisdiction. Also by longstanding concern about coordination among entities working to limit barriers to use of PSI and knowledge commons governance generally.

Everything following concerns PSI only relative to copyright and copyright-like barriers. There are other pertinent regulations and considerations to follow when publishing or using PSI (e.g., privacy and fraud; as these are pertinent even without copyright, it is silly and unnecessarily complicating to include them in copyright licenses) and other important ways to make PSI more useful technically and politically (e.g., open formats, focusing on PSI that facilitates accountability rather than openwashing).

Eliminate copyright and copyright-like restrictions

No longer barriers to use of PSI, because no longer barriers to use of information. May be modulated down to any general copyright or copyright-like barrier reduction, where the barrier is pertinent to use of PSI. Examples: eliminate sui generis database restrictions where they exist, increase threshold of originality required for information to be subject to copyright restriction, expand exceptions and limitations to copyright restrictions, expand affirmative user rights.

Eliminate copyright and copyright-like restrictions for PSI

For example, works produced by employees of the U.S. federal government are not subject to copyright restrictions in the U.S. Narrower exclusions from copyright restrictions (e.g., of laws, court rulings) are fairly common worldwide. These could be generalized to eliminate copyright and copyright-like restrictions for PSI worldwide, and expanded to include PSI produced by contractors or other non-government but publicly funded entities. PSI could be expanded to include any information produced with public funding, e.g., research and culture funded by public grants.

“Standard” international licenses for PSI

Public copyright licenses not specifically intended only for PSI are often used for PSI, and could be used more. CC0 is by far the best such license, but other Creative Commons (CC) and Open Data Commons (ODC) licenses are frequently used. Depending on the extent to which the licenses used leave copyright and copyright-like restrictions in place (e.g., CC0: none; CC-BY-NC-ND: lots, thus considered non-open) and how they are applied (from legislative mandate for all PSI to one-off use for individual reports and datasets at the discretion of an agency), they could have an effect similar to eliminating copyright and copyright-like restrictions for PSI, or almost zero effect.

Universal Government License

Governments at various levels have chosen to make up their own licenses rather than use a standard international license. Some of the better reasons for doing so will be eliminated by the forthcoming version 4.0 of 6 of the CC licenses (though again, CC0 has been the best choice, since 2009, and will remain so). But some of the less good reasons (uncharitable characterization: vanity) can’t be addressed by a standard international license, and furthermore seem to be driving the proliferation of sub-sub-national licenses, down to licenses specific to an individual town.

Ideally this extreme license proliferation trend would terminate with mass implementation of one of the above options, though this seems unlikely in the short term. Maybe yet another standard license would help! The idea of an “open government license” which various governments would have a direct role in creating and stewarding has been casually discussed in the past, particularly several years ago when the current proliferation was just beginning, the CC 4.0 effort had not begun, and CC and ODC were not on the same page. Nobody is particularly incented to make this unwieldy project happen, but nor is it an impossibility — due to the relatively small world of NGOs (such as CC and the Open Knowledge Foundation, of which ODC is a project) and government people who really care and know about public licenses, and the possibility their collective exhaustion and exasperation over license details, incompatibility, and proliferation could reach a tipping point into collective action. There’s a lot to start from, including the research that went into CC-BY-4.0, and the OGL UK 2.0, which is a pretty good open license.

But why think small? How many other problems could be addressed simultaneously?

  • Defend the traditional meaning of ‘open government’ by calling the license something else, e.g., Universal/Uniform/Unified Government License.
  • Rallying point for public sector worldwide to commit more firmly and broadly to limiting copyright and copyright-like barriers to use of PSI, more rapidly establishing global norm, and leading to mandates. The one thing to be said for massive PSI license proliferation could be increased commitment from proliferating jurisdictions to use their custom licenses (I know of no data on this). A successful UGL would swamp any increased local commitment due to local vanity licenses through much higher level expectation and mandate.
  • Make the license work well for software (including being approved by the Open Source Initiative), as:
    • Generically “open” licenses are inevitably used for software, whether the steward apparently intends this (OGL UK 2.0) or does not (CC).
    • The best modern permissive license for software (Apache 2.0) is relatively long and unreadable for what it does, and has a discomfiting name (not nearly as bad as certain pro sports organizations, but still); it ought be superseded.
  • Ensure the license works for other domains (e.g., open hardware) which don’t really require domain-specific licenses, are headed down the path of proliferation and incompatibility, and which governments have obvious efficiency, regulatory, security, and welfare interests in.
  • Foster broader “open innovation community” engagement with government and public policy and vice versa, and more knowledge transfer across OIC domains, on legal instruments at the least.
  • Uniform Public License may be a better name than UGL in some respects (whatever the name, it ought be usable by the public sector, and the general public), but Government may be best overall, a tip of the hat to both the vision within governments that would be necessary to make the license succeed, and to the nature of copyright and copyright-like barriers as government regulatory regimes.

National jurisdiction licenses for PSI

A more likely mechanism for license proliferation deceleration and harm reduction in the near term is for governments within a national jurisdiction to use a single license, and follow various license stewardship and use best practices. Leigh Dodds recently blogged about the problem and highlighted this mechanism in a post titled The Proliferation of Open Government Licences.

Sub-national jurisdiction licenses for PSI

Each province/state and sub-jurisdiction thereof, down to towns and local districts, could use its own vanity license. This appears to be the trend in Canada. It would be possible to push further in this direction with multiple vanity licenses per jurisdiction, e.g., various licenses for various kinds of data, reports, and other materials.

Licenses for each PSI dataset or other work

Each and every government dataset or other publication could come with its own bespoke license. Though these licenses would grant permissions around some copyright and copyright-like restrictions, I suspect their net effect would be to heighten copyright and copyright-like restrictions as a barrier to both the use and publication of PSI, on an increased cost basis alone. This extreme highlights one of the downsides of copyright licenses, even unambiguously open ones — implementing, understanding, and using them can be seen as significant cost centers, creating an additional excuse for not opening materials, and encouraging the small number of people who really understand the mechanisms to be jealous and wary of any other reform.

None

Included for completeness.

Privatization of PSI copyright

Until now, I’ve assumed that copyright and copyright-like restrictions are barriers to use of PSI. But maybe there aren’t enough restrictions, or they aren’t allocated to the right entities, such that maximum value is realized from use of PSI. Control of copyright and copyright-like restrictions in PSI could be auctioned off to entities with the highest ability to extract rents from PSI users. These businesses could be government-owned, with various public-private partnerships in between. This would increase the direct contribution of PSI to GDP, incent the creation and publication of more PSI, ensure PSI is maintained and marketed, reaching citizens that can afford/need it, and provide a solid business model for Government 2.0, academia, cultural heritage, and all other publicly funded and publicly interested sectors, which would otherwise fail to produce an optimal level of PSI and related materials and innovations.

Do not let any of the above trick you into paying more attention to possible copyright and copyright-like barriers and licenses than actually doing stuff, especially with PSI, especially with “data”, doubly with “government data”.

I agree with Denny Vrandečić’s paradoxical sounding but correct directive:

Data is free. Free the data!

I tried to communicate the same in a chapter of the Data Journalism Handbook, but lacked the slogan.

Data is free. Free the data!

And what is not data? ☻

Addendum: Entirely by coincidence (in response to a European Commission consultation on PSI, which I had already forgotten about), today posts by Timothy Vollmer for the Communia Association and Creative Commons call out the license proliferation problem and endorse public domain as the default for PSI.

Economics and the Commons Conference [knowledge stream] report

Wednesday, October 30th, 2013

Economics and the Common(s): From Seed Form to Core Paradigm. A report on an international conference on the future of the commons (pdf) by David Bollier. Section on the knowledge stream (which I coordinated; pre-conference post) copied below, followed by an addendum with thanks and vague promises. First, video of the stream keynote (slides) by Carolina Botero (introduced by me; archive.org copy).

III. “Treating Knowledge, Culture and Science as Commons”

Science, and recently, free software, are paradigmatic knowledge commons; copyright and patent paradigmatic enclosures. But our vision may be constrained by the power of paradigmatic examples. Re-conceptualization may help us understand what might be achieved by moving most provisioning of knowledge to the commons; help us critically evaluate our commoning; and help us understand that all commons are knowledge commons. Let us consider, what if:

  • Copyright and patent are not the first knowledge enclosures, but only “modern” enforcement of inequalities in what may be known and communicated?
  • Copyright and patent reform and licensing are merely small parts of a universe of knowledge commoning, including transparency, privacy, collaboration, all of science and culture and social knowledge?
  • Our strategy puts commons values first, and views narrow incentives with skepticism?
  • We articulate the value of knowledge commons – qualitative, quantitative, ethical, practical, other – such that knowledge commons can be embraced and challenged in mainstream discourse?

These were the general questions that the Knowledge, Culture and Science Stream addressed.

Knowledge Stream Keynote Summary

Carolina Botero Cabrera, a free culture activist, consultant and lawyer from Colombia, delivered a plenary keynote for the Knowledge Stream entitled, “What If Fear Changes Sides?” As an author and lecturer on free access, free culture and authors’ rights, Botero focused on the role of information and knowledge in creating unequal power relationships, and how knowledge and cultural commons can rectify such problems.

“If we assume that information is power and acknowledge the power of knowledge, we can start by saying that controlling information and knowledge means power. Why does this matter?” she asked. “Because the control of information and knowledge can change sides. The power relationship can be changed.”

One of the primary motives of contemporary enclosures of information and knowledge, said Botero, is to instill fear in people – fear of violating copyright law, fear of the penalties for doing so. This inhibits natural tendencies to share and re-use information. So the challenge facing us is to imagine if fear could change sides. Can we imagine a switch in power relationships over the control of knowledge – how we produce, distribute and use knowledge? Botero said we should focus on the question: “How can we switch the tendency of knowledge regulation away from enclosure, so that commons can become the rule and not the exception?”

“There are still many ways to produce things, to gain knowledge,” said Botero, who noted that those who use the word “commons” [in the context of knowledge production] are lucky because it helps name these non-market forms of sharing knowledge. “In Colombia, we don’t even have that word,” she said.

To illustrate how customary knowledge has been enclosed in Colombia, Botero told the story of parteras, midwives, who have been shunted aside by doctors, mostly men, who then asserted control over women’s bodies and childbirth, and marginalized the parteras and their rich knowledge of childbirth. This knowledge is especially important to those communities in remote areas of Colombia that do not have access to doctors. There is currently a huge movement of parteras in Colombia who are fighting for the recognition of their knowledge and for the legal right to act as midwives.

Botero also told about how copyright laws have made it illegal to reproduce sheet music for songs written in 18th and 19th century Colombia. In those times, people simply shared the music among each other; there was no market for it. But with the rise of the music industry in the 20th century, especially in the North, it is either impossible or unaffordable to get this sheet music because most of it is copyrighted. So most written music in Colombia consists of illegally photocopied versions. Market logic has criminalized the music that was once natural and freely flowing in Colombian culture. Botero noted that this has increased inequality and diminished public culture.

She showed a global map illustrating which nations received royalties and fees from copyrights and patents in 2002; the United States receives more than half of all global revenues, while Latin America, Africa, India and other countries of the South receive virtually nothing. These are the “power relationships” that Botero was pointing to.

Botero warned, “We have trouble imagining how to provision and govern resources, even knowledge, without exclusivity and control.” Part of the problem is the difficulty of measuring commons values. Economists are not interested, she said, which makes it difficult to go to politicians and persuade them why libraries matter.

Another barrier is our reliance on individual incentives as a core value in the system for regulating knowledge, Botero said. “Legal systems of ‘intellectual property’ place individual financial incentives at the center for knowledge regulation, which marginalizes commons values.” Our challenge is to find ways to switch from market logics by showing that there are other logics.

One reason that it is difficult to displace market logics is that we are reluctant or unable to “introduce the commons discourse from the front door instead of through the back door,” said Botero. She confessed that she herself has this problem because most public debate on this topic “is based on the premise that knowledge requires enclosure.” It is difficult to displace this premise by talking about the commons. But it is becoming increasingly necessary to do so as new policy regimes, such as the Trans-Pacific Partnership (TPP) agreement, seek to intensify enclosures. The TPP, for example, seeks to raise minimum levels of copyright restriction, extend the terms of copyrights, and increase the prison terms for copyright violations.

One way to reframe debate, suggested Botero, is to see the commons “not as the absence of exclusivity, but the presence of non-exclusivity. This is a slight but important difference,” she said, “that helps us see the plenitude of non-exclusivity” – an idea developed by Séverine Dussolier, professor and director of the Revue Droit des Technologies de l’Information (RDTI, France). This shift “helps us to shift the discussion from the problems with the individual property and market-driven perspective, to a framework and society that – as a norm – wants its institutions to be generative of sharing, cooperation and equality.”

Ultimately, what is needed are more “efficient and effective ways to protect the ethic and practice of sharing,” or as she put it, “better commoning.” Reforming “intellectual property” is only one small part of the universe of knowledge commoning, Botero stressed. It also includes movements for “transparency, privacy, collaboration, and potentially all of science and culture.”

“When and how did we accept that the autonomy of all is subservient to control of knowledge by the few?” asked Botero. “Most important, can we stop this? Can we change it? Is the current tragedy our lack of knowledge of the commons?” Rediscovering the commons is an important challenge to be faced “if fear is going to change sides.”

An Account of the Knowledge, Culture and Science Stream’s Deliberations

There were no presentations in the Knowledge Stream breakout sessions, but rather a series of brief provocations. These were intended to spur a lively discussion and to go beyond the usual debates heard at free and open software/free culture/open science conferences. A primary goal of the breakout discussions was to consider what it means to regard knowledge as a commons, rather than as a “carve-out” exception from a private property regime. The group was also asked to consider how shared knowledge is crucial to all commoning activity. Notes from the Knowledge Stream breakout sessions were compiled in a shared TitanPad, from which this account is adapted.

The Knowledge Stream focused on two overarching themes, each taking advantage of the unique context of the conference:

  1. Why should commoners of all fields care about knowledge commons?
  2. If we consider knowledge first as commons, can we be more visionary, more inclusive, more effective in commoning software, science, culture, seeds … and much more?

The idea of the breakout session was to contextualize knowledge as a commons, first and foremost: knowledge as a subset of the larger paradigm of commons and commoning, as something far more than domain-specific categories such as software, scientific publication and educational materials.

An overarching premise of the Knowledge Stream was the point made by Silke Helfrich in her keynote, that all commons are knowledge commons and all commons are material commons. Seeds saved in the Svalbard seed bank are of no use if we forget how to cultivate them, for example, and various digital commons are ultimately grounded in the material reality of computers, electricity infrastructures and the food that computer users need to eat.

There is a “knowledge commons” at the center of each commons. This means that interest in a “knowledge commons” isn’t confined to those people who only care about software, scientific publication, and so on. It also means that we should refrain from classifying commons into categories such as “natural resources” and “digital,” and begin to make the process of commoning itself the focal point.

Of course, one must immediately acknowledge that digital resources do differ in fundamental ways from finite natural resources, and therefore the commons management strategies will differ. Knowledge commons can make cheap or virtually free copies of intangible information and creative works, and this knowledge production is often distributed at very small scales. For cultural commons, noted Philippe Aigrain, a French analyst of knowledge governance and CEO of Sopinspace, a maker of free software for collaboration and participatory democracy, “the key challenge is that average attention becomes scarcer in a world of abundant production.” This means that more attention must be paid to “mediating functions” – curating – and to “revising our cultural expectations about ‘audiences’.”

It is helpful to see the historical roots of Internet-enabled knowledge commons, said Hilary Wainwright, the editor behind the UK political magazine Red Pepper and a researcher at the Transnational Institute. The Internet escalated the practice of sharing knowledge that began with the feminist movement’s recognition of a “plurality of sources.” It also facilitated the socialization of knowledge as a kind of collective action.

That these roots are not widely appreciated points to the limited vision of many knowledge commons, which tend to rely on a “deeply individualistic ethical ontology,” said Talha Syed, a professor of law at the University of California, Berkeley. This worldview usually leads commoners to focus on coercion – enclosures of knowledge commons – as the problem, he said. But “markets are problematic even if there is no monopoly,” he noted, because “we need to express both threats and positive aspirations in a substantive way. Freedom is more than people not coercing us.”

Shun-Ling Chen, a Taiwanese professor of law at the University of Arizona, noted that even free, mass-collaboration projects such as Wikipedia tend to fall back on western, individualistic conceptions of authorship and authority. This obscures the significance of traditional knowledge and history from the perspective of indigenous peoples, where less knowledge is recorded by “reliable sources.”

As the Stream recorded in its notes, knowledge commons are not just about individual freedoms, but about “marginalized people and social justice.” “The case for knowledge commons as necessary for social justice is an undeveloped theme,” the group concluded. But commons of traditional knowledge may require different sorts of legal strategies than those used to protect the collective knowledge embodied in free software or open access journals. The latter are both based on copyright law and its premises of individual rights, whereas traditional knowledge is not recognized as the sum of individual creations, but as a collective inheritance and resource.

This discussion raised the question of whether provisioning knowledge through commons can produce different sorts of “products” than those produced by corporate enclosures, or whether they will simply create similar products with less inequality. Big budget movies and pharmaceuticals are often posited as impossibilities for commons provision (wrongly, by the way). But should these industries be seen as the ‘commanding heights’ of culture and medicine, or would a commons-based society create different commanding heights?

One hint at an answer comes from seeing informality as a kind of knowledge commons. “Constructed commons” that rely upon copyright licenses (the GPL for software, Creative Commons licenses for other content) and upon policy reforms, are generally seen as the most significant, reputable knowledge commons. But just as many medieval commons relied upon informal community cooperation such as “beating the bounds” to defend themselves, so many contemporary knowledge commons are powerful because they are based on informal social practice and even illegality.

Alan Toner of Ireland noted that commoners who resist enclosures often “start from a position of illegality” (a point made by Ugo Mattei in his keynote talk). It may be better to frankly acknowledge this reality, he said. After all, remix culture would be impossible without civil disobedience to various copyright laws that prohibit copying, sharing and re-use – even if free culture people sometimes have a problem with such disrespectful or illegal resistance. “Piracy” is often a precursor to new social standards and even new legal rules. “What is legal is contingent,” said Toner, because practices we spread now set traditions and norms for the future. We therefore must be conscious about the traditions we are creating. “The law is gray, so we must push new practices, and organizations need to take greater risks,” eschewing the impulse to be “respectable” in order to become a “guiding star.”

Felix Stalder, a professor of digital culture at Zurich University of the Arts, agreed that civil disobedience and piracy are often precisely what is needed to create a “new normal,” which is what existing law is explicitly designed to prevent. “Piracy is building a de facto commons,” he added, “even if it is unaware of this fact. It is a laboratory of the new that can enrich our understanding of the commons.”

One way to secure the commons for the future, said Philippe Aigrain of Sopinspace, is to look at the specific challenges facing the commons rather than idealizing them or over-relying on existing precedents. As the Stream discussion notes concluded, “Given a new knowledge commons problem X, someone will state that we need a ‘copyleft for X.’ But is copyleft really effective at promoting and protecting the commons of software? What if we were to re-conceptualize copyleft as a prototype for effective, pro-commons regulation, rather than a hack on enclosure?”

Mike Linksvayer, the former chief technology officer of Creative Commons and the coordinator of the Knowledge Stream, noted that copyleft should be considered as “one way to force sharing of information, i.e., of ensuring that knowledge is in the commons. But there may be more effective and more appropriate regulatory mechanisms that could be used and demanded to protect the commons.”

One provocative speculation was that there is a greater threat to the commons than enclosure – and that is obscurity. Perhaps new forms of promotion are needed to protect the commons from irrelevance. It may also be that excluding knowledge that doesn’t really contribute to a commons is a good way to protect a commons. For example, projects like Wikipedia and Debian mandate that only free knowledge and software be used within their spaces.


Addendum

Thanks to everyone who participated in the knowledge stream. All who prepared and delivered deep and critical provocations in the very brief time allotted:
Bodó Balázs
Shun-Ling Chen
Rick Falkvinge
Marco Fioretti
Charlotte Hess
Gaëlle Krikorian
Glyn Moody
Mayo Fuster Morrell
Prabir Purkayastha
Felix Stalder
Talha Syed
Wouter Tebbens
Alan Toner
Chris Watkins

Also thanks to Mayo Fuster Morrell and Petros for helping coordinate during the stream, and though neither could attend, Tal Niv and Leonhard Dobusch for helpful conversations about the stream and its goals. I enjoyed working with and learned much from the other stream coordinators: Saki Bailey (nature), Heike Löschmann (labor & care), Ludwig Schuster (money), and especially Miguel Said Vieira (infrastructure; early collaboration kept both infrastructure and knowledge streams relatively focused); and stream keynote speaker Carolina Botero; and conference organizers/Commons Strategy Group members: David Bollier, Michel Bauwens, and Silke Helfrich (watch their post-conference interview).

See the conference wiki for much more documentation on each of the streams, the overall conference, and related resources.

If a much more academic and apolitical approach is of interest, note the International Association for the Study of the Commons held its 2013 conference about 10 days after ECC. I believe there was not much overlap among attendees, one exception being Charlotte Hess (who also chaired a session on Governance of the Knowledge and Information Commons at the IASC conference).

ECC only strengthened my feeling (but, of course I designed the knowledge stream to confirm my biases…) that we need a much more bold, deep, inclusive (in domains and methods of commoning, including informality, and in populations), critical (including self-critical; a theme broached by several of the people thanked above), and competitive (product: displacing enclosure; policy: putting equality & freedom first) knowledge commons movement, or vanguard of those movements. Or as Carolina Botero put it in the stream keynote: bring the commons in through the front door. I promise to contribute to this project.

ECC also made me reflect much more on commons and commoning as a “core paradigm” for understanding and participating in the arrangements studied by social scientists. My thoughts are half baked at best, but that will not stop me from making pronouncements, time willing.

5 fantasy Internet Archive announcements

Thursday, October 24th, 2013

Speaking of public benefit spaces on the internet, tonight the Internet Archive is having its annual celebration and announcements event. It’s a top contender for the long-term most important site on the internet. The argument for it might begin with it having many copies at many points in time of many sites, mostly accessible to the public (Google, the NSA and others must have vast dark archives), but would not end there.

I think the Internet Archive is awesome. Brewster Kahle, its founder, is too. It is clear to me that he’s the most daring and innovative founder or leader in the bay area/non-profit/open/internet field and adjacencies. And he calls himself Digital Librarian. Hear, hear!

But, the Internet Archive could be even more awesome. Here’s what I humbly wish they would announce tonight:

  • A project to release all of the code that runs their websites and all other processes, under free/open source software licenses, and do their work in public repositories, issue trackers, etc. Such crucial infrastructure ought be open to public audit, and welcoming to public contribution. Obviously much of the code is ancient, crufty, and likely has security issues. No reason for embarrassment or obscurity. The code supporting the recording of this era of human communication is itself a dark archive. Danger! Fix it.
  • WikiNurture media collections. I believe media item metadata is now unversioned. It should be versioned, and the public should be able to enhance and correct metadata. Currently media in the Internet Archive is much less useful than it could be due to poor metadata (e.g., I expect music I download from the archive to not have good artist/album/title tags, making it a huge pain to integrate into my listening habits, including telling the world about it and helping make it popular) and very limited relations among media items.
  • Aggressively support new free media formats, specifically Opus and WebM right now. This is an important issue for the free and open internet, and requires collective action. The Internet Archive is in a key position, and should exploit its strong position.
  • On top of existing infrastructure and much richer data, above, build Netflix-level experiences around the highest quality media in the archive, and perhaps all media with high quality metadata. This could be left to third parties, but centralization is powerful.
  • Finally, and perhaps the deadly combination of most contentious and least exciting: stop paying DRM vendors and publishers. Old posts on this: 1, 2, 3. The Internet Archive is not in the position Mozilla apparently thinks it is, of tolerating DRM out of fear of losing relevance. Physical libraries may think they are in such a position, but only to the extent they think of themselves as book vendors, and lack vision. Please, show leadership to the digital libraries we want in the future, not grotesque compromises, Digital Librarian!

These enhancements would elevate the Internet Archive to its proper status, and mean nobody could ever again justifiably say that ‘Aside from Wikipedia, there is no large, popular space being carved out for the public good.’

Addendum: The actual announcements were great, and mostly hinted at on the event announcement post. The Wayback Machine now can instantly archive any URL (“Save Page Now”). I expect to use that all the time, replacing webcitation.org. This post pre-addendum, including many spelling errors (written on the 38 Geary…). JavaScript MESS and the software archive are tons of fun: “Imagine every computer that ever existed, in your browser.” No talk of DRM, but also no talk of books, unless I missed something.

Addendum 20131110: “What happened to the Library of Alexandria?” as a lead-in to explaining why the Internet Archive has multiple data centers takes on new meaning after a fire at its scanning center a few days ago (no digital records were lost). Donate.

What’s *really* wrong with the free and open internet — and how we could win it

Thursday, October 24th, 2013

A few days ago Sue Gardner, ED of the Wikimedia Foundation, posted What’s *really* wrong with nonprofits — and how we can fix it. Judging by seeing the link sent around, it has been read to confirm various conflicting biases that different people in the SF bay area/internet/nonprofit space and adjacencies already had. May I? Excerpt-based summary:

A major structural flaw of many nonprofits is that their revenue is decoupled from mission work, which pushes them to focus on providing a positive donor experience often at the expense of doing their core work.

WMF makes about 95% of its money from the many-small-donors model
…
I spend practically zero time fundraising. We at the WMF get to focus on our core work of supporting and developing Wikipedia, and when donors talk with us we want to hear what they say, because they are Wikipedia readers
…
I think the usefulness of the many-small-donors model, ultimately, will extend far beyond the small number of nonprofits currently funded by it.
…
[Because Internet.]
…
For organizations that can cover their costs with the many-small-donors model I believe there’s the potential to heal the disconnect between fundraising and core mission work, in a way that supports nonprofits being, overall, much more effective.

I agree concerning extended potential. I thought (here comes confirmation of biases) that Creative Commons should make growing its small donor base its number one fundraising effort, with the goal of having small donors provide the majority of funding as soon as possible — realistically, after several years of hard work on that model. While nowhere close to that goal, I recall that about 2006-2009 individual giving grew rapidly, in numbers and diversity (started out almost exclusively US-based), even though it was never the number one fundraising priority. I don’t think many, perhaps zero, people other than me believed individual giving could become CC’s main source of support. Wikimedia’s success in that, already very evident, and its unique circumstance, was almost taken as proof that CC couldn’t. I thought instead Wikimedia’s methods should be taken as inspiration. The “model” had already been proven by nearby organizations without Wikimedia’s eyeballs; e.g., the Free Software Foundation.

An organization that wants to rely on small donors will have to work insanely hard at it. And, if it has been lucky enough to be in a network affording it access to large foundation grants, it needs to be prepared to shrink if the foundations tire of it before individual giving supplants them, which may never fully happen. (But foundations might tire of the organization anyway, resulting in collapse without individual donors.) This should not be feared. If an organization has a clear vision and operating mission, increased focus on core work by a leaner team, less distracted by fundraising, ought be more effective than a larger, distracted team.

But most organizations don’t have a clear vision and operating mission (I don’t mean words found in vision and mission statements; rather the shared and deep knowing-what-we’re-trying-to-do-and-how that allows all to work effectively, from governance to program delivery). This makes any coherent strategic change more difficult, including transitioning to small donor support. It also gives me pause concerning some of the bits of Gardner’s post that I didn’t excerpt above. For most organizations I’d bet that real implementation of nonprofit “best practices” regarding compliance, governance, management, reporting, etc, though boring and conservative, would be a big step up. Even trying to increase the much-maligned program/(admin+fundraising) ratio is probably still a good general rule. I’d like to hear better ones. Perhaps near realtime reporting of much more data than can be gleaned from the likes of a Form 990 will help “big data scientists” find better rules.

It also has to be said that online small donor fundraising can be just as distracting and warping (causing an organization to focus on appearing appealing to donors) as other models. We (collectively) have a lot of work to do on practices, institutions, and intermediaries that will make the extended potential of small donor support possible (read Gardner’s post for the part I lazily summarized as [Because Internet.]) in order for the outcome to be good. What passes as savvy advice on such fundraising (usually centered around “social media”) has for years been appalling and unrealistic. And crowdfunding has thus far been disappointing in some ways as a method of coordinating public benefit.

About 7 months ago Gardner announced she would be stepping down as ED after finding a replacement (still in progress), because:

I’ve always aimed to make the biggest contribution I can to the general public good. Today, this is pulling me towards a new and different role, one very much aligned with Wikimedia values and informed by my experiences here, and with the purpose of amplifying the voices of people advocating for the free and open internet. I don’t know exactly what this will look like — I might write a book, or start a non-profit, or work in partnership with something that already exists.

My immediate reaction to this was exactly what Виктория wrote in reply to the announcement:

I cannot help but wonder what other position can be better for fighting consumerisation, walling-in and freedom curtailment of the Internet than the position of executive director of the Wikimedia Foundation.

I could take this as confirming another of my beliefs: that the Wikimedia movement (and other constructive free/open movements and organizations) do not realize their potential political potency — for changing the policy narrative and environment, not only taking rear guard actions against the likes of SOPA. Of course then, the Wikimedia ED wouldn’t think Wikimedia the most effective place from which to work for a free and open internet. But, my beliefs are not widely held, and likely incorrect. So I was and am mostly intrigued, and eager to see what Gardner does next.

After reading the What’s *really* wrong with nonprofits post above, I noticed that 4 months ago Gardner had posted The war for the free and open internet — and how we are losing it, which I eagerly read:

[non-profit] Wikipedia is pretty much alone. It’s NOT the general rule: it’s the exception that proves the rule.
…
The internet is evolving into a private-sector space that is primarily accountable to corporate shareholders rather than citizens. It’s constantly trying to sell you stuff. It does whatever it wants with your personal information. And as it begins to be regulated or to regulate itself, it often happens in a clumsy and harmful way, hurting the internet’s ability to function for the benefit of the public. That for example was the story of SOPA.
…
[Stories of how Wikipedia can fight censorship because it is both non-profit and very popular]
…
Aside from Wikipedia, there is no large, popular space being carved out for the public good. There are a billion tiny experiments, some of them great. But we should be honest: we are not gaining ground.
…
The internet needs serious help if it is to remain free and open, a powerful contributor to the public good.

Final exercise in confirming my biases (this post): yes, what the internet needs is more spaces carved out for the public good — more Wikipedias — categories other than encyclopedia in which a commons-based product out-competes proprietary incumbents, increasing equality and freedom powerfully in both the short and long (capitalization aligned with rent seeking demolished) term. Wikipedia is unique in being wildly successful and first and foremost a website, but not alone (free software collectively must be many times more liberating by any metric, some of it very high profile, e.g., Firefox; Open Access is making tremendous progress, and I believe PLOS may have one of the strongest claims to operating not just to make something free, but to compete directly with and eventually displace incumbents).

A free and open internet, and society, needs intense competition from commons-based initiatives in many more categories, including those considered the commanding heights of culture and commerce, e.g., premium video, advertising, social networking, and many others. Competition does not mean just building stuff, but making it culturally relevant, meaning making it massively popular (which Wikipedia lucked into, being the world’s greatest keyword search goldmine). Nor does it necessarily mean recapitulating proprietary products exactly; e.g., some product expectations might be moved to ones more favorable to mass collaboration.

Perhaps Gardner’s next venture will aim to carve out a new, popular space for the public good on the internet. Perhaps it will be to incubate other projects with exactly that aim (there are many experiments, as her post notes, but not many with “take over/liberate the world” vision or resources; meanwhile there is a massive ecosystem churning out and funding attempts to take over the world with new proprietary products). Perhaps it will be to build something which helps non-profits leverage the extended potential of the small donor model, in a way that maximizes public good. Most likely, something not designed to confirm my biases. ☺ But, many others should do just that!

Pro-DRM stories

Tuesday, October 22nd, 2013

Microsoft Thinks DRM Can Solve the Privacy Problem:

Under the model imagined by Mundie, applications and services that wanted to make use of sensitive data, such as a person’s genome sequence or current location, would have to register with authorities. A central authority would distribute encryption keys to applications, allowing them to access protected data in the ways approved by the data’s owners.

The use of cryptographic wrappers would ensure that an application or service couldn’t use the data in any other way. But the system would need to be underpinned by new regulations, said Mundie: “You want to say that there are substantial legal penalties for anyone that defies the rules in the metadata. I would make it a felony to subvert those mechanisms.”

If I understand correctly, this idea really is calling for DRM. The only difference is the use case: instead of intending to restrict an individual user’s control over their computing device in order to prevent them from doing certain things with some “content” on/accessed via their device, Mundie wants applications (i.e., organizations) to be prevented from doing certain things with some “data” on/accessed via their computers.

Sounds great. Conceivably could even be well intentioned. But, just as “consumer” DRM abets monopoly and does not prevent copying, this data DRM would…do exactly the same thing.
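To make the shape of the proposal concrete, here is a minimal sketch, entirely my own illustration rather than any actual Microsoft or Mundie design: sensitive data is encrypted (“wrapped”), and a hypothetical central KeyAuthority releases the decryption key only to applications registered for an approved use (the class, method names, and use labels are all invented for this example). The sketch also makes the weakness obvious: once a key is released, nothing technical stops the application from doing something else with the data, which is why the proposal leans on felony penalties rather than cryptography for actual enforcement.

```python
# Hypothetical sketch of the "cryptographic wrapper" idea described above --
# an illustration only, not a real Microsoft/Mundie design.
# Requires the "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet


class KeyAuthority:
    """Central authority that wraps data and tracks applications' approved uses."""

    def __init__(self):
        self._keys = {}       # data_id -> symmetric key
        self._approved = {}   # (app_id, data_id) -> set of allowed use labels

    def protect(self, data_id, plaintext):
        """Encrypt ("wrap") sensitive data and retain the key."""
        key = Fernet.generate_key()
        self._keys[data_id] = key
        return Fernet(key).encrypt(plaintext)

    def register(self, app_id, data_id, uses):
        """Record which uses an application is approved for (the policy metadata)."""
        self._approved[(app_id, data_id)] = set(uses)

    def request_key(self, app_id, data_id, use):
        """Release the key only if the requested use was approved."""
        if use not in self._approved.get((app_id, data_id), set()):
            raise PermissionError(f"{app_id} not approved to use {data_id} for {use!r}")
        return self._keys[data_id]


authority = KeyAuthority()
wrapped = authority.protect("alice-genome", b"...genome sequence...")
authority.register("research-app", "alice-genome", ["aggregate-statistics"])

# Approved use: the key is released and the wrapped data can be read.
key = authority.request_key("research-app", "alice-genome", "aggregate-statistics")
print(Fernet(key).decrypt(wrapped))

# Unapproved use: refused. But the key released above could be reused for
# anything; restricting *use* depends on legal penalties, not the cryptography.
try:
    authority.request_key("ad-network", "alice-genome", "ad-targeting")
except PermissionError as error:
    print(error)
```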

Meanwhile, law enforcement, politicians, and media see devices locked down by a vendor, rather than controlled by users, as the solution to device theft (rendering the device relatively unsalable, and data inaccessible).

I want, but don’t recall encountering, any anti-info-freedom (not how it would self-describe, anyway) speculative/science fiction/fantasy, dystopian, utopian, or otherwise. The above gives some hint about how to go about it: imagine a world in which DRM+criminal law works great, and tell stories about how various types of bad actors are thwarted by the combination. Or, where society falls apart because it hasn’t been implemented.

Another pro-IP story idea: the world faces some intractable problem that requires massive intellectual input, and cannot coordinate to solve it. Maybe a disease. Maybe in the form of an alien invasion that can only be defeated by creating an alien disease. Or everyone is enslaved because all is known, and everyone knows that no privacy means no freedom. But someone has the bright idea to re-introduce or strengthen IP mechanisms which save the day.

One story I’d like to think wouldn’t work in even cardboard form is that nobody produces and promotes big budget cultural artifacts due to lack of IP or its enforcement, and as a result everyone is sad. The result is highly unlikely as people love whatever cultural works they’re surrounded by. But, maybe the idea could work as a discontinuity: suddenly there are no more premium video productions. People have grown up with such being the commanding heights of culture, and without this, they are sad. They have nothing to talk to friends about, and society breaks down. If this story were a film, people could appear smart by informing their friends that maybe the director really intended to question our dependence on premium video such as the film in question.