Post Patents

Annual thematic doubt

Friday, January 10th, 2014

As promised, my first annual thematic doubt post, expressing doubts I have about themes I blogged about during 2013.

Intellectual Freedom

If this blog were to have a main purpose other than serving as a depository for my tangents, it’d be protecting and promoting intellectual freedom, in particular through the mechanisms of free/open/knowledge commons movements, and in reframing information and innovation policy with freedom and equality as the top outcomes. Some representative posts: Economics and the Commons Conference [knowledge stream] report, Flow ∨ incentive 2013 anthology winner, z3R01P. I’m also fond of pointing out where these issues surface in unusual places and surfacing them where they are latent.

I’m fairly convinced on this theme: regimes infringing on intellectual freedom are individual and collective mind-rot, and “merely” accentuate the tendencies toward inequality and control of whatever systems they are embedded in. Mitigating, militating against, outcompeting, and abolishing such regimes are trivially for the good, low risk, and non-revolutionary. But sure, I have doubts:

  • Though I see their accentuation of inequality and control as increasingly important, and high leverage for determining future outcomes, copyright and patent could instead be froth. The cause of intellectual freedom might be better helped by fighting for traditional free speech issues, for tolerance, against mass incarceration, against the drug war, against war, against corruption, for whatever one’s favored economic system is…
  • The voluntarily constructed commons that I emphasize (e.g., free software, open access) could be a trap: everything seems to grow fast as population (and faster, internet population) grows, but this could obscure these commons being systematically outcompeted. Rather than underselling proprietary production, the commons will never outgrow their dwarfish forms, will never shift nor take the commanding heights (e.g., premium video, pharma), and hence are a burden to both policy and beating-of-the-bounds competition. Plus, copyright and the like are mind-rot: generations of commons activists’ minds have been rotted and co-opted by learning to work within protectionist regimes rather than fighting and ignoring them.
  • An intellectual freedom infringing regime which produced faster technical innovation than an intellectual freedom respecting regime could render the latter irrelevant, like industrial societies rendered agricultural societies irrelevant, and agricultural societies rendered hunter-gatherer societies irrelevant, whatever the effects of those transitions on freedom and other values were. I don’t believe the current regime is anywhere close to being such a thing, nor are the usual “IP maximalism” reforms taking it in that direction. But it is possible that innovation policy is all that matters. Neither freedom and equality nor the rents of incumbents matter, except as obstacles and diversions from discovering and implementing innovation policy optimized to produce the most technical innovation.

I’m not won over by these doubts, but I can easily imagine being so. Each merits engagement, which could result in much stronger arguments for intellectual freedom, especially knowledge commons.

Critical Cheering

Unplanned, unnoticed by me until late in the year, my most pervasive subtheme was criticism-embedded-in-praise of free/open/commons entities and actions. Representative posts, title replaced with main target: Creative Commons, crowdfunding, Defensive Patent License, Document Freedom Day, DRM-in-HTML5 outrage, EFF, federated social web, Internet Archive, Open Knowledge Foundation, SOPA/ACTA online protests, surveillance outrage, and the Wikimedia movement.

This is an old theme: examples from 2004, 2005, 2006, 2007, 2008, 2011, and 2012. 2009 and 2010 are absent, but the reason for my light blogging here bears some relation to the theme: those are the years I was, in theory, most intensely trying to “walk my talk” at Creative Commons (and mostly failed, side-tracked by trying to get the organization to follow much more basic best practices, and by vast amounts of silliness).

Doubts about the cheering part are implied in the previous section. I’ll focus on the criticism here, but cheering is the larger component, and real: of entities criticized in the above links, in 2013 I donated money to at least EFF, FSF, and Internet Archive, and uncritically promoted all of them at various points. The criticism part amounts to:

  • Gains could be had from better coordination among entities and across domains, ranging from collaboration toward a short term goal (e.g., free format adoption) to diffuse mutual reinforcement that comes from shared knowledge, appreciation, and adoption of free/open/commons tools and materials across domains (e.g., open education people use open source software as inherent part of their practice of openness, and vice versa).
  • The commons are politically potent, in at least two ways: minimally, as existence proof for creativity and innovation in an intellectual freedom respecting regime (carved out); and vastly underappreciated, as destroyer of rents dependent on the intellectual freedom infringing regime, and of resources available for defending those rents and the regime. Commons are not merely to be protected from further bad policy, but are actors in creating a good policy environment, and should be promoted at every turn.

To be clear, my criticism is not usually a call for more “radical” or “extreme” steps or messages, rather more fulsome and coordinated ones. Admittedly, sometimes it may be hard to tell the difference — and this leads to my doubts:

  • Given that coordination is hard, gaining knowledge is expensive, and optimization path dependent, the entities and movements I criticize may not have room to improve, at least not in the direction I want them to improve in. The cost of making “more fulsome and coordinated” true might be greater than mutual reinforcement and other gains.
  • See the second doubt in the previous section — competition from the commons might be futile. Rather than promoting them at every turn, they should sometimes be held up as victims of bad policy, to be protected, and sometimes hidden from policy discourse.

The first doubt is surely merited, at least for many entities on many issues. For any criticism I have in this space, it makes sense to give the criticized the benefit of the doubt; they know their constraints pretty well, while I’m just making abstract speculations. Still, I think it’s worthwhile to call for more fulsome and coordinated strategy in the interstices of these movements, e.g., conversation and even this blog, in the hope of long-term learning, played out over years in existing entities and movements, and new ones. I will try henceforth to do so more often in a “big picture” way, or through example, and less often through criticism of specific choices made by specific entities — in retrospect the stream of the latter on this blog over the last year has been tedious.

International Apartheid

For example: Abolish Foreignness, Do we have any scrap of evidence that [the Chinese Exclusion Act] made us better off?, and Opposing “illegal” immigration is xenophobic, or more bluntly, advocating for apartheid “because it’s the law”. I hinted at a subtheme about the role of cities, to be filled out later.

The system is grossly unjust and ought be abolished, about that I have no doubt. Existing institutions and arrangements must adapt. But, two doubts about my approach:

  • Too little expression of empathy with those who assume the goodness of current policy. Fear of change, competition, and the “other” all run deep. Too little about how the current unjust system can be unwound in a way that mitigates any reality behind these fears. Too little about how benefits attributed to the current unjust system can be maintained under a freedom respecting regime. (This doubt also applies to the intellectual freedom theme.)
  • Figuring out development might be more feasible, and certainly would have more impact on human welfare and individual autonomy, than smashing the international apartheid system. Local improvements to education, business, and governance are what all ought focus on — though development economics has a dismal record, it at least has the right target. Migration is a sideshow.

As with the intellectual freedom theme, these doubts merit engagement, and such will strengthen the case for freedom. But even more so than in the case of intellectual freedom infringing regimes, the unconscionable and murderous injustice of the international apartheid regime must be condemned from the rooftops. It is sickening and unproductive to allow discourse on this topic to proceed as if the regime is anything but an abomination, however unfeasible its destruction may seem in the short term.

Politics

Although much of what I write here can be deemed political, one political theme not subsumed by others is inadequate self-regulation of the government “market”, e.g., What to do about democratically elected terrorist regimes, Suppose they gave a war on terror and a few exposed it as terror, and Why does the U.S. federal government permit negative sum competition among U.S. states and localities?

The main problem with this theme is omission rather than doubt — no solutions proposed. Had I done so, I’d have plenty to doubt.

Refutation

I fell behind, refuting only posts from the first and second quarters of 2005. My doubt about this enjoyable exercise is that it is too contrived. Many of the refutations are flippant and don’t reflect any real doubts or knowledge gained in the last 8 years. That doubt is what led me to the exercise of this post. How did I do?

Clubbing out of the vicious circle of bad policy (patents)

Thursday, January 2nd, 2014

Glyn Moody in Defensive Patent Licence: Nice Idea; Not Much Use:

The rest of Linksvayer’s thoughtful post explores these ideas and their background, and in particular looks at how they fit with other aspects of free software.

My fascinating post (thanks).

It’s well worth reading, even if the DPL itself is likely to have relatively little impact. That’s because it only applies to those who join the DPL club, which creates a typical vicious circle: few entities in the club to start with mean that few patents are made available on a royalty-free basis, and so there’s little incentive for more entities to join.

The vicious circle can be overcome. Joining the club is very low barrier: gratis, and an entity doesn’t even have to hold any patents. Royalty-free patents from club members are only part of the reason for joining. Another is expression — taking advantage of the patent skepticism of many people, and exploiting it for ethical branding and recruitment. These patent pool and expressive incentives could be mutually reinforcing: the more entities join, the larger the pool, and the stronger the expectation that non-evil entities join.

Whether the vicious circle will be overcome comes down to sales. The DPL people have put in place a lot of groundwork that will help — seemingly a large amount of work by credible people has gone into making the DPL a robust legal instrument, there is a credible group of people as advisors (and presumably an impressive board when it reaches that stage), and presumably some amount of funding. This combination of gravitas and resources would make it possible for a tireless campaigner (the pre-conditions do remind me of Creative Commons, whose tireless campaigner was Lawrence Lessig) or a sales team befitting the target market to succeed in getting lots of entities to join the club.

One indicator after the DPL’s public launch next month will be whether the next columns and stories by journalists continue to focus on the barrier of lack of network effects, or on celebrating early joiners and urging other entities to follow as an urgent matter of public policy or industry best practice. This will be an indicator in large part because the DPL people’s efforts right now can shape these stories.

Still, it’s nice to see people thinking innovatively in this space as we work towards the ultimate goal of full abolition of software patents everywhere.

Indeed, though the DPL applies to all patents, and all patents everywhere should be fully abolished, as I’m pretty sure Moody agrees (but probably not the DPL people; that’s OK, they made a useful tool).

You can attend the DPL launch conference in Berkeley: November 7, 2014 (originally announced for February 28), gratis registration. Your organization should join the club, now!

Video of the DPL birthday is up on the Internet Archive.

[Semi]Commons Coordinations & Copyright Choices 4.0

Monday, December 9th, 2013

CC0 is superior to any of the Creative Commons (CC) 4.0 licenses, because CC0 represents a superior policy (public domain). But if you’re unable or unwilling to upgrade to CC0, the CC 4.0 licenses are a great improvement over the 3.0 licenses. The people who did the work, led by Diane Peters (who also led CC0), many CC affiliates (several of whom were also crucial in making CC0 a success), and Sarah Pearson and Kat Walsh, deserve much praise. Bravo!

Below read my idiosyncratic take on issues addressed and not addressed in the 4.0 licenses. If that sounds insufferable, but you want to know about details of the 4.0 licenses, skip to the excellent version 4 and license versions pages on the CC wiki. I don’t bother linking to sections of those pages pertinent to issues below, but if you want detailed background beyond my idiosyncratic take on each issue, it can be found there.

Any criticism I have of the 4.0 licenses concerns policy choices and is not a criticism of the work done or people involved, other than myself. I fully understand that the feasible choices were and are highly constrained by previous choices and conditions, including previous versions of the CC licenses, CC’s organizational history, users of CC licenses, and the overall states of knowledge commons and info regulation and CC’s various positions within these. I always want CC and other “open” organizations to take as pro-commons of a stance as possible, and generally judge what is possible to be further than that of the conventional wisdom of people who pay any attention to this scene. Sometimes I advocated for more substantial policy changes in the 4.0 licenses, though just as often I deemed such advocacy futile. At this point I should explain that I worked for CC until just after the 4.0 licenses process started, and have consulted a bit on 4.0 licenses issues since then as a “fellow”. Not many people were in a better position to influence the 4.0 licenses, so any criticisms I have are due to my failure to convince, or perhaps incorrect decision to not try in some cases. As I’ve always noted on this blog, I don’t represent any organization here.

Desiderata

Pro-commons? As opposed to what? The title of the CC blog post announcing the formal beginning of work on the new licenses:

Copyright Experts Discuss CC License Version 4.0 at the Global Summit

My personal blog post:

Commons experts to develop version 4.0 of the CC licenses

The expertise that CC and similar organizations ought to bring to the world is commons coordination. There are many copyright experts in the world, and understanding public copyright licenses, and drafting more, are no great intellectual challenges. The copyright expertise needed to do so ought be purely instrumental, serving the purpose of commons coordination. Or so I think.

Throughout CC’s existence, it has presented itself, and been perceived as, to varying extents, an organization which provides tools for copyright holders to exercise their copyrights, and an organization which provides tools for building a commons. (What it does beyond providing tools adds another dimension, not unrelated to “copyright choice” vs. “commons coordination”; there’s some discussion of these issues in a video included in my personal post above.)

I won’t explain in this post, but I think the trend through most of CC’s history has been very slow movement in the “commons coordination” direction, and the explicit objectives of the 4.0 versioning process fit that crawl.

“Commons coordination” does not directly imply the usual free/open vs. proprietary/closed dichotomy. I think it does mostly fall out that way, in small part due to “license interoperability” practicalities, but probably mostly because I think the ideal universal copyregulation policy corresponds to the non-discriminatory commons that “free/open” terms and communities carve out on a small scale, including the pro-sharing policy that copyleft prototypes, and excluding any role for knowledge enclosure, monopoly, property, etc. But it is certainly possible, indeed usual, to advocate for a mixed regime (I enjoy the relatively new term “semicommons”, but if you wish to see it everywhere, try every non-demagogic call for “balance”), in which case [semi]commons tools reserving substantial exclusivity (e.g., “commercial use”) make perfect sense for [semi]commons coordination.

Continuing to ignore the usual [non-]open dichotomy, I think there still are a number of broad criteria for would-be stewards of any new commons coordinating license (and make no mistake, a new version of a license is a new license; CC introduced 6 new licenses with 4.0) to consider carefully, and which inform my commentary below:

  • Differentiation: does the new license implement some policy not currently available in existing licenses, or at least offer a great improvement in implementation (not to provide excuses for new licenses, but the legal text is just one part of implementation; also consider branding/positioning, understandability, and stewardship) of policy already available?
  • Permissions: does the new license grant all permissions needed to realize its policy objective?
  • Regulation: how does the license’s policy objective model regulation that ought be adopted at a wider scale, e.g., how does it align with usual “user rights” and “copyright reform” proposals?
  • Interoperability: is the new license maximally compatible with existing licenses, given the constraints of its policy objectives, and indeed even at the expense of its immediate policy objectives, given that incompatibility, non-interoperability, and proliferation must fragment and diminish the value of commons?
  • Cross-domain impact: how does the license impact license interoperability and knowledge sharing across fields/domains/communities (e.g., software, data, hardware, “content”, research, government, education, culture…)? Does it further silo existing domains, a tragedy given the paucity of knowledge about governing commons in the world, or facilitate sharing and collaboration across domains?

Several of these are merely a matter of good product design and targeting, and would also apply to an organization that really had a primary goal of offering copyright holders additional choices the organization deems are under-provided. I suspect there is plenty of room for innovation in “copyright choice” tools, but I won’t say more in this post, as such have little to do with commons, and whatever CC’s history of copyright choice rhetoric and offering a gaggle of choices, creating such tools is distant from its immediate expertise (other than just knowing lots about copyright) and light years from much of its extended community.

Why bother?

Apart from amusing myself and a few others, why this writeup? The CC 4.0 licenses won’t change, and hopefully there won’t be CC 4.1 or 4.5 or 5.0 licenses for many years. Longevity was an explicit goal for 4.0 (cf. 1.0: 17 months; 2.0: 12 months; 2.5: 20 months; 3.0: 81 months). Still, some of the issues covered here may be interesting to people choosing to use one of the CC 4.0 licenses, and to people creating other licenses. Although nobody wants more licenses (often called license proliferation) as an end in itself, many more licenses are the long term trend, of which the entire history of CC is just a part. Further, more licenses can be a good thing, to the extent they are significantly different from and better than, and as compatible as possible with, existing licenses.

To be totally clear: many new licenses will be created and used over the next 10 years, intended for various domains. I would hope, some for all domains. Proliferators, take heed!

Development tools

A 4.0 wiki page and a bunch of pages under that were used to lay out objectives, issues and options for resolution, and link to drafts. Public discussion was on the cc-licenses list, with tangential debate pushed to cc-community. Drafts and changes from previous drafts were published as redlined word processor files. This all seems to have worked fairly well. I’d prefer drafts as plain text files in a git repository, and an issue tracker, in addition to a mailing list. But that’s a substantially different workflow, and word processor documents with track changes and inline comments do have advantages, not limited to lawyers being familiar with those tools.

100% wiki would also work, with different tradeoffs. In the future additional tools around source repositories, or wikis, or wikis in source repositories, will finally displace word processor documents, but the tools aren’t there yet. Or in the bad future, all licenses will be drafted in word processors in the cloud.

(If it seems that I’m leaving a lot out, e.g., methodology for gathering requirements and feedback, in-person meetings and teleconferences, etc., I merely have nothing remotely interesting to say, and used “tools” rather than “process” to narrow the scope intentionally.)

Internationalization

The 4.0 licenses were drafted to be jurisdiction neutral, and there will be official, equivalent, verbatim language translations of the licenses (the same as CC0, though I don’t think any translations have been made final yet). Legal “porting” to individual jurisdictions is not completely ruled out, but I hope there will be none. This is a wholly positive outcome, and probably the most impactful change for CC itself (already playing out over the past few years, e.g., in terms of scope and composition of CC affiliates), though it is of small direct consequence to most users.

Now, will other license drafters and would-be drafters follow CC’s lead and stop with the vanity jurisdiction license proliferation already?

Databases

At least the EU, Mexico, Russia, and South Korea have created “database rights” (there have been attempts in other jurisdictions), copyright-like mechanisms for entities that assemble databases to persecute others who would extract or copy substantial portions of said databases. Stupid policies that should be abolished, copyright-like indeed.

Except for CC0 and some minor and inconsistent exceptions (certain within-EU jurisdiction “port” versions), CC licenses prior to 4.0 have not “covered” database rights. This means, modulo any implied license which may or may not be interpreted as existing, that a prior-to-4.0 (e.g., CC-BY-3.0) licensee using a database subject to database restrictions (when this occurs is a complicated question) would have permission granted by the licensor around copyright restrictions, but not around database restrictions. This is a pretty big fail, considering that the first job of a public license is to grant adequate permissions. Actual responses to this problem:

  • Tell all database publishers to use CC0. I like this, because everyone should just use CC0. But, it is an inadequate response, as many will continue to use less permissive terms, often in the form of inadequate or incompatible licenses.
  • Only waive or license database restrictions in “ports” of licenses to jurisdictions in which database restrictions exist. This is wholly inadequate, as in the CC scheme, porting involves tailoring the legal language of a license to a jurisdiction, but there’s no guarantee a licensor or licensee in such jurisdictions will be releasing or using databases under one of these ports, and in fact that’s often not the case.
  • Have all licenses waive database restrictions. This sounds attractive, but is mostly confusing — it’s very hard to discern when only database and not copyright restrictions apply, such that a licensee could ignore a license’s conditions — and like “tell database publishers to use CC0” would just lead many to use different licenses that do purport to conditionally license database rights.
  • Have all licenses grant permissions around database restrictions, under whatever conditions are present in the license, just like copyright.

I think the last is the right approach, and it’s the one taken with the CC 4.0 licenses, as well as by other licenses which would not exist but for CC 3.0 licenses not taking this approach. I’m even more pleased with their generality, because other copyright-like restrictions are to be expected (emphasis added):

Copyright and Similar Rights means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights.

The exclusions of 2(b)(1)-(2) are a mixed bag; see moral and personality rights, and patents below.

CC0 also includes a definition with some generality:

Copyright and Related Rights include, but are not limited to, the following:

  1. the right to reproduce, adapt, distribute, perform,
    display, communicate, and translate a Work;
  2. moral rights retained by the original author(s) and/or
    performer(s);
  3. publicity and privacy rights pertaining to a person’s
    image or likeness depicted in a Work;
  4. rights protecting against unfair competition in regards
    to a Work, subject to the limitations in paragraph 4(a),
    below;
  5. rights protecting the extraction, dissemination, use and
    reuse of data in a Work;
  6. database rights (such as those arising under Directive
    96/9/EC of the European Parliament and of the Council of 11
    March 1996 on the legal protection of databases, and under
    any national implementation thereof, including any amended
    or successor version of such directive); and
  7. other similar, equivalent or corresponding rights
    throughout the world based on applicable law or treaty, and
    any national implementations thereof.

As does GPLv3:

“Copyright” also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.

Do CC0 and CC 4.0 licenses cover semiconductor mask restrictions (best not to use for this purpose anyway, see patents)? Does GPLv3 cover database restrictions? I’d hope the answer is yes in each case, and if the answer is no or ambiguous, future licenses further improve on the generality of restrictions around which permissions are granted.

There is one risk in licensing everything possible, and it seems, culturally, specifically in licensing database rights — the impression that licenses which do so ‘create obligations’ related to those rights. I find it odd to think of a conditional permission as the creation of an obligation, when the user’s situation without said permission is unambiguously worse, i.e., no permission. Further, this impression is a problem for non-maximally-permissive licenses around copyright, not only database or other copyright-like rights.

In my opinion the best a public license can do is to grant permissions (conditionally, if not a maximally permissive license) around restrictions with as much generality as possible, and expressly state that a license is not needed (and therefore its conditions do not apply) if a user can ignore the underlying restrictions for some other reason. Can the approach of the CC version 4.0 licenses to the latter be improved?

For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions.

These are all trivialities for license nerds. For publishers and users of databases: Data is free. Free the data!

Moral and personality rights

CC 4.0 licenses address them well:

Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise.

To understand just how well, CC 3.0 licenses say:

Except as otherwise agreed in writing by the Licensor or as may be otherwise permitted by applicable law, if You Reproduce, Distribute or Publicly Perform the Work either by itself or as part of any Adaptations or Collections, You must not distort, mutilate, modify or take other derogatory action in relation to the Work which would be prejudicial to the Original Author’s honor or reputation. Licensor agrees that in those jurisdictions (e.g. Japan), in which any exercise of the right granted in Section 3(b) of this License (the right to make Adaptations) would be deemed to be a distortion, mutilation, modification or other derogatory action prejudicial to the Original Author’s honor and reputation, the Licensor will waive or not assert, as appropriate, this Section, to the fullest extent permitted by the applicable national law, to enable You to reasonably exercise Your right under Section 3(b) of this License (right to make Adaptations) but not otherwise.

Patents and trademark

Prior versions were silent, CC 4.0 licenses state:

Patent and trademark rights are not licensed under this Public License.

Perhaps some potential licensor will be reassured, but I consider this unnecessary and slightly harmful, replicating the main deficiency of CC0. The explicit exclusion makes it harder to see an implied license. This is especially troublesome when CC licenses are used in fields in which patents can serve as a barrier. Software is one, for which CC has long disrecommended use of CC licenses, largely because software is already well covered by licenses with which CC licenses are mostly incompatible; the explicit patent exclusion in the CC 4.0 licenses makes them even less suitable. Hardware design is another such field, but one with fragmented licensing, including use of CC licenses. CC should now explicitly disrecommend using CC licenses for hardware designs and declare CC-BY-SA-4.0 one-way compatible with GPLv3+ so that projects using one of the CC-BY-SA licenses for hardware designs have a clear path to a more appropriate license.

Patents of course can be licensed separately, and as I pointed out before regarding CC0, there could be curious arrangements for projects using such licenses with patent exclusions, such as only accepting contributions from Defensive Patent License users. But the better route for “open hardware” projects and the like to take advantage of this complementarity is to do both, that is use a copyright and related rights license that includes a patent peace clause, and join the DPL club.

DRM

CC 4.0 licenses:

The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures.

This is a nice addition, which had been previously suggested for CC 3.0 licenses and rejected — the concept copied from GPLv3 drafts at the time. I would have preferred to also remove the limited DRM prohibition in the CC licenses.

Attribution

The CC 4.0 licenses slightly streamline and clarify the substance of the attribution requirement, all to the good. The most important bit, itself only a slight streamlining and clarification of similar in previous versions:

You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information.

This pulls to-the-letter compliance of use in the wild from near zero to fairly high.

I’m not fond of the requirement to remove attribution information if requested by the licensor, especially accurate information. I don’t know whether a licensor has ever made such a request; if none has, that makes the clause merely pointless rather than harmful. Not quite, though, as it does make for a talking point.

NonCommercial

not primarily intended for or directed towards commercial advantage or private monetary compensation. For purposes of this Public License, the exchange of the Licensed Material for other material subject to Copyright and Similar Rights by digital file-sharing or similar means is NonCommercial provided there is no payment of monetary compensation in connection with the exchange.

Not intended to be a substantive change, but I’ll take it. I’d have preferred a probably more significantly narrowed definition and a re-branding so as to increase the range of and differentiation among the licenses that CC stewards. But at the beginning of the 4.0 licenses process, I expected no progress, so am not disappointed. Branding and other positioning changes could come post-launch, if anyone is so inclined.

I think the biggest failure of the range of licenses with an NC term (and there are many preceding CC) is not confusion and pollution of commons, very roughly the complaints of people who would like NC to have a more predictable meaning and those who think NC offers inadequate permissions, respectively, but lack of valuable use. Licenses with the NC term are certainly used for hundreds of millions of photos and web pages, and some (hundreds of?) thousands of songs, videos, and books, but few where either the licensor or the public gains significant value above what would have been achieved if the licensor had simply offered gratis access (i.e., put stuff on the web, which is incredibly valuable even with no permissions granted). As far as I know, NC licenses haven’t played a significant role in enabling (again, relative to gratis access) any disruptive product or policy, and their use by widely recognized artists and brands is negligible (cf. CC-BY-SA, which Wikipedia and other mass collaboration projects rely on to exist, and CC-BY and CC0, which are part of disruptive policy mandates).

CC is understandably somewhat stuck between free/open norms, which make licenses with the NC term an embarrassment, and their numerically large but low value uses. A license steward or would-be steward that really believed a semicommons license regime could do much more would try to break out of this rut by doing a complete rethink of the product (or that part of the product line), probably resulting in something much more different from the current NC implementation than the mere definitional narrowing and rebranding that I started out preferring. This could be related to my commentary on innovation in “copyright choice” tools above; whether the two are really the same thing would be a subject for inquiry.

NoDerivatives

If there were licenses that should not have been brought to version 4.0, at least not under the CC brand, they would have been CC-BY-NC-ND and CC-BY-ND.

Instead, an express permission to make derivatives so long as they are not shared was added. This change makes so-called text/content/data mining of any work under any of the CC version 4.0 licenses unambiguously permitted, and makes ND stick out a tiny bit less as an aberration from the CC license suite modeling some moderate copyright reform baseline.

There are some costs to this approach: surprise that a “no derivatives” license permits derivatives, a slight reduction in scope and differentiation among the licenses that CC stewards, giving credence to ND licenses as acceptable for scholarship, and abetting the impression that text/content/data mining requires permission at all. The last is most worrisome, but (as with similar worries around licensing databases) can be turned into a positive to the extent CC and everyone knowledgeable emphasizes that you ought not, and probably don’t, need a license; we’re just making sure you have the freedoms around CC licensed works that you ought to have anyway, in case the info regulation regime gets even worse — but please, mine away.

ShareAlike

This is the most improved of the named (BY/NC/ND/SA) elements in the CC 4.0 licenses, and the work is not done yet. But first, I wish it had been improved even more, by making more uses unambiguously “trigger” the SA provision. This has been done once, starting in 2.0:

For the avoidance of doubt, where the Work is a musical composition or sound recording, the synchronization of the Work in timed-relation with a moving image (“synching”) will be considered a Derivative Work for the purpose of this License.

The obvious next expansion would have been use of images (still or moving) in contextual relation to other material, e.g., illustrations used in a text. Without this expansion, CC-BY-SA and CC-BY-NC-SA are essentially identical to CC-BY and CC-BY-NC respectively for the vast majority of actual “reuse” instances. Such an expansion would have substantially increased the range of and differentiation among licenses that CC stewards. The main problem with such an expansion (apart from specifying it exactly) would be increasing the cost of incompatibility, where texts and images use different licenses. This problem would be mitigated by increasing compatibility among copyleft licenses (below), or could be eliminated by broadening the SA licensing requirement for uses triggered by expansion, e.g., any terms granting at least equivalent permissions, such that a CC-BY-SA illustration could still be used in a text licensed under CC-BY or CC0. Such an expansion did not make the cut, but I think together with aforementioned broadening of licensing requirements, such a modulation (neither strictly “stronger” nor “weaker”) would make for an interesting and heretofore unimplemented approach to copyleft, in some future license.

Apart from a subtle improvement that brings SA closer to a full “or later versions” license, and reflects usual practice and understanding (incidentally, “no sublicensing” in non-SA licenses remains pointless, is not to be found in most non-CC permissive licenses, and should not be replicated), the big improvements in CC 4.0 licenses with the SA element are the addition of the potential for one-way compatibility to CC-BY-SA, adding the same compatibility mechanism to CC-BY-NC-SA, and discussions with stewards of potentially compatible licenses which make the realization of compatibility more likely. (I would have included a variation on the more complex but in my view elegant and politically advisable mechanism introduced in MPL 2.0, which allows for continued use under the donor compatible license as long as possible. Nobody demanded such, so not adding the complexity was perhaps a good thing.)

I hope that in 2014 CC-BY-SA-4.0 will be declared bilaterally compatible with the Free Art License 1.3, or if a new FAL version is required, it is being worked on, with achieving bilateral compatibility as a hard requirement, and more importantly, that CC-BY-SA-4.0 is declared one-way compatible (as a donor) with GPLv3+. An immediate step toward those ends will be finalizing an additional statement of intent regarding the stewardship of licenses with the ShareAlike element.

Though I’ll be surprised if any license appears as a candidate for compatibility with CC-BY-NC-SA-4.0, adding the mechanism to that license is a good thing: as a matter of general license stewardship, reducing the barriers to someone else creating a better NC license (see above), and keeping “porting” completely outside the 4.0 license texts (hopefully there will be no porting, but if there is any, compatibility with the international versions in licenses with the SA element would be exclusively via the compatibility mechanism used for any potentially compatible license).

Tech

All license clauses have id attributes, allowing direct linking to a particular clause. These direct links are used for references within the licenses. These are big usability improvements.

I would have liked to see an expansive “tech” (including to some extent design) effort synchronized with the 4.0 licenses, from the practical (e.g., a canonical format for license texts, from which HTML, plain text, and others are generated; that may be HTML, but the current license HTML is inadequate for the task) to the impractical (except for increasing CC’s reputation, e.g., investigating whether any semantic annotation and structure, preferably building on existing research, would be useful, in theory, for the license texts, and possibly even a practical aid to translation), to testing further upgrades to the ‘legal user interface’ constituted by the license texts and “deed” summaries (e.g., combining these), to just bringing various CC tooling and documentation up to date with RDFa 1.1 Lite. But, some of these things could be done post-launch if anyone is so inclined, and my understanding is that CC has only a single technology person on staff, dedicated to creating other products, and most importantly, the ability to directly link to any license clause probably has more practical benefits than anything on my wishlist.

Readability

One of the best things about the CC 4.0 licenses is their increased understandability. This is corroborated by the crude automated readability metrics below, though I suspect these do not adequately characterize the improvement: the 4.0 texts include three paragraphs of explanatory text not present in previous versions, the metrics probably don’t fully reflect the improvement of splitting hairball paragraphs into lists, and they have no mechanism for accounting for how the improved usability of linking to individual clauses contributes to understandability.

CC-BY-NC-SA (the license with the most stuff in it, usually used as a drafting template for others) from version 1.0 through 4.0, including 4.0 drafts (lower numbers indicate better readability, except in the case of Flesch; Chars/(Flesch>=1) is my gross metric for how painful it is to read a document; see license automated readability metrics for an explanation):

SHA1 License Characters Kincaid ARI Coleman-Liau Fog Lix SMOG Flesch Chars/(Flesch>=1)
39b2ef67be9e5b4e743e5269a31ad1691515eede CC-BY-NC-SA-1.0 10228 13.3 16.3 14.2 17.0 59.7 14.2 48.4 211
5800ac2d32e35ace035cdcae693423cd9ff5bb6f CC-BY-NC-SA-2.0 11927 13.3 16.2 14.7 17.1 60.0 14.4 47.0 253
e5f44c2df6b1391d1ddb6efb2db6f90670e4ae67 CC-BY-NC-SA-2.5 12013 13.1 16.0 14.6 16.9 59.6 14.2 47.7 251
a63b7e81e7b9e30df5d253aed1d2991af47992df CC-BY-NC-SA-3.0 17134 16.4 19.7 14.2 20.6 67.0 16.3 38.8 441
8b36c30ed0510d9ca9c69a2ef826b9fd52992474 by-nc-sa-4.0d1 12465 13.0 15.0 14.9 16.3 57.4 14.0 43.9 283
4a87c7af5cde7729e2e456ee0e8958f8632e3005 by-nc-sa-4.0d2 11583 13.1 14.8 14.2 16.8 56.2 14.4 44.7 259
bb6f239f7b39343d62440bff00de24da2b3d256f by-nc-sa-4.0d3 14422 14.1 15.8 15.1 18.2 61.0 15.4 38.6 373
cf5629ae38a745f4f9eca429f7b26af2e71eb109 by-nc-sa-4.0d4 14635 13.8 15.6 15.5 17.8 60.2 15.2 38.6 379
a5e1b9829fd287cbe255df71eb9a5aad7fb19dbc by-nc-sa-4.0d4v2 14808 14.0 15.8 15.5 18.0 60.6 15.2 38.1 388
887f9a5da675cf681421eab3ac6d61f82cf34971 CC-BY-NC-SA-4.0 14577 13.1 14.7 15.7 17.1 58.6 14.7 40.1 363

Versions 1.0 through 4.0 of each of the six CC licenses brought to version 4.0, and CC0:

SHA1 License Characters Kincaid ARI Coleman-Liau Fog Lix SMOG Flesch Chars/(Flesch>=1)
74286ae0dfea38c489437bf659b209737945145c CC0-1.0 5116 16.2 19.5 15.0 19.5 66.3 15.6 36.8 139
c766cc6d5e63277e46a3d83c6254e3528082587b CC-BY-1.0 8867 12.6 15.5 14.1 16.4 57.8 13.8 51.3 172
bf23729bec8ffd0de4d319fb33395c595c5c762b CC-BY-2.0 9781 12.1 14.9 14.3 16.1 56.7 13.7 51.9 188
024bb6d37d0a17624cf532bd14fbd42e15c5a963 CC-BY-2.5 9867 11.9 14.7 14.2 15.8 56.3 13.6 52.6 187
20dc61b94cfe1f4ba5814b340095b4c3fa23e801 CC-BY-3.0 14956 16.1 19.4 14.1 20.4 66.1 16.2 40.0 373
00b29551deee9ced874ffb9d29379b92f1487045 CC-BY-4.0 13003 13.0 14.5 15.4 16.9 57.9 14.6 41.1 316
e0c4b13ec5f9b5702d2e8b88d98b803e07d65cf8 CC-BY-NC-1.0 9313 13.2 16.2 14.3 17.0 59.3 14.1 49.3 188
970421995789d2e8189bb12071ab838a3fcf2a1a CC-BY-NC-2.0 10635 13.1 16.1 14.6 17.2 59.5 14.4 48.1 221
08773bb9bc13959c6f00fd49fcc081d69bda2744 CC-BY-NC-2.5 10721 12.9 15.8 14.5 16.9 59.0 14.2 48.9 219
9639556280637272ace081949f2a95f9153c0461 CC-BY-NC-3.0 15732 16.5 19.9 14.1 20.8 67.2 16.4 38.7 406
afcbb9791897e1e2f949d9d56ba64164746e0828 CC-BY-NC-4.0 13520 13.2 14.8 15.6 17.2 58.6 14.8 39.8 339
9ab2a3818e6ccefbc6ffdd48df7ecaec25e32e41 CC-BY-NC-ND-1.0 8729 12.7 15.8 14.4 16.4 58.6 13.8 51.0 171
966c97357e3b529e9c8bb8166fbb871c5bc31211 CC-BY-NC-ND-2.0 10074 13.0 16.1 14.7 17.0 59.7 14.3 48.8 206
c659a0e3a5ee8eba94aec903abdef85af353f11f CC-BY-NC-ND-2.5 10176 12.8 15.9 14.6 16.8 59.2 14.2 49.3 206
ad4d3e6d1fb6f89bbd28a44e263a89430b575dfa CC-BY-NC-ND-3.0 14356 16.3 19.7 14.1 20.5 66.8 16.2 39.7 361
68960bdf512ff5219909f932b8a81fdb255b4642 CC-BY-NC-ND-4.0 13350 13.3 14.8 15.7 17.2 58.4 14.8 39.4 338
39b2ef67be9e5b4e743e5269a31ad1691515eede CC-BY-NC-SA-1.0 10228 13.3 16.3 14.2 17.0 59.7 14.2 48.4 211
5800ac2d32e35ace035cdcae693423cd9ff5bb6f CC-BY-NC-SA-2.0 11927 13.3 16.2 14.7 17.1 60.0 14.4 47.0 253
e5f44c2df6b1391d1ddb6efb2db6f90670e4ae67 CC-BY-NC-SA-2.5 12013 13.1 16.0 14.6 16.9 59.6 14.2 47.7 251
a63b7e81e7b9e30df5d253aed1d2991af47992df CC-BY-NC-SA-3.0 17134 16.4 19.7 14.2 20.6 67.0 16.3 38.8 441
887f9a5da675cf681421eab3ac6d61f82cf34971 CC-BY-NC-SA-4.0 14577 13.1 14.7 15.7 17.1 58.6 14.7 40.1 363
e4851120f7e75e55b82a2c007ed98ffc962f5fa9 CC-BY-ND-1.0 8280 12.3 15.5 14.3 16.1 57.9 13.6 52.4 158
f1aa9011714f0f91005b4c9eb839bdb2b4760bad CC-BY-ND-2.0 9228 11.9 14.9 14.5 15.8 56.9 13.5 52.7 175
5f665a8d7ac1b8fbf6b9af6fa5d53cecb05a1bd3 CC-BY-ND-2.5 9330 11.8 14.7 14.4 15.6 56.5 13.4 53.2 175
3fb39a1e46419e83c99e4c9b6731268cbd1591cd CC-BY-ND-3.0 13591 15.8 19.2 14.1 20.0 65.6 15.9 41.2 329
ac747a640273815cf3a431be0afe4ec5620493e3 CC-BY-ND-4.0 12830 13.0 14.4 15.4 16.9 57.6 14.6 40.7 315
dda55573a1a3a80d294b1bb9e1eeb3a6c722968c CC-BY-SA-1.0 9779 13.1 16.1 14.2 16.8 59.1 14.0 49.5 197
9cceb80d865e52462983a441904ef037cf3a4576 CC-BY-SA-2.0 11044 12.5 15.3 14.4 16.2 57.9 13.8 50.2 220
662ca9fce7fed61439fcbc27ca0d6db0885718d9 CC-BY-SA-2.5 11130 12.3 15.0 14.4 16.0 57.5 13.6 50.9 218
4a5bb64814336fb26a9e5d36f22896ce4d66f5e0 CC-BY-SA-3.0 17013 16.4 19.8 14.1 20.5 67.2 16.2 38.9 437
8632363dcc2c9fc44f582b14274259b3a35744b2 CC-BY-SA-4.0 14041 12.9 14.4 15.4 16.8 57.8 14.5 41.4 339

It speaks well of the automated readability metrics that, from 3.0 to 4.0, CC-BY-SA is the most improved (the relevant clause was a hairball paragraph; CC-BY-NC-SA should have improved less, as it gained the compatibility mechanism) and CC-BY-ND is the least improved (it gained express permission for private adaptations).
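For anyone who wants to recompute something like the columns above, here is a minimal sketch. To be clear, this is not the tooling behind the tables (see the linked license automated readability metrics post for that); it assumes the third-party Python textstat package, omits the Lix column, and interprets Chars/(Flesch>=1) as character count divided by the Flesch reading-ease score floored at 1.

```python
# Minimal sketch (not the original tooling) for recomputing most of the
# readability columns above for a given license text.
# Assumes: pip install textstat
import sys
import textstat

def license_metrics(path):
    text = open(path, encoding="utf-8").read()
    chars = len(text)
    flesch = textstat.flesch_reading_ease(text)
    return {
        "Characters": chars,
        "Kincaid": textstat.flesch_kincaid_grade(text),
        "ARI": textstat.automated_readability_index(text),
        "Coleman-Liau": textstat.coleman_liau_index(text),
        "Fog": textstat.gunning_fog(text),
        "SMOG": textstat.smog_index(text),
        "Flesch": flesch,
        # my gross "pain" metric: characters per point of Flesch score, floored at 1
        "Chars/(Flesch>=1)": round(chars / max(flesch, 1)),
    }

if __name__ == "__main__":
    # e.g.: python license_metrics.py legalcode-by-sa-3.0.txt legalcode-by-sa-4.0.txt
    for path in sys.argv[1:]:
        print(path, license_metrics(path))
```

Different implementations count syllables and sentences differently, so expect the absolute numbers to diverge somewhat from the tables; the relative comparison across license versions is what matters.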

Next

I leave a list of recommendations (many already mingled in or implied by above) to a future post. But really, just use CC0.

Software Patent NATO, 1993

Tuesday, November 26th, 2013

In my thoughts on the Defensive Patent License, I neglected to note in the history section a similar proposal made in 1993 by John Walker, founder of Autodesk, PATO: Collective Security In the Age of Software Patents:

[T]he trend toward increased litigation, constraining innovation in the software industry, is accelerating. The U.S. government is using trade negotiations to force other countries to institute software patents in their own markets.

While eliminating software patents would be the best solution, changing the law takes a long time and is uncertain to succeed. I’ve been trying to puzzle out how the software industry might rescue itself from immolation through litigation and came up with the following proposal.

Could have been written in 2013.

I’ve been thinking about using NATO as a model of a patent defence consortium. Suppose a bunch of big software companies (perhaps led by Oracle, who’s already taken the point on this) were to form PATO–Patent And Technology Organisation–and contribute all their current software patents, and all new software patents they were granted as long as they remained a member of PATO, to its “cross-licensing pool”. To keep the lawyers and shareholders from going nuts, the patents would be licensed through PATO but would remain the property of the member–a member could withdraw with appropriate notice and take the patents back from the pool.

Any member of PATO would be granted an automatic, royalty-free license to use any patent in the cross-licensing pool. Thus, by putting your patents in the pool, you obtain access to all the others automatically (but if you withdraw and pull your patents, of course you then become vulnerable for those you’ve used, which creates a powerful disincentive to quit).

The basic principle of NATO is that an attack on any member is considered an attack on all members. In PATO it works like this–if any member of PATO is alleged with infringement of a software patent by a non-member, then that member may counter-sue the attacker based on infringement of any patent in the PATO cross-licensing pool, regardless of what member contributed it. Once a load of companies and patents are in the pool, this will be a deterrent equivalent to a couple thousand MIRVs in silos–odds are that any potential plaintiff will be more vulnerable to 10 or 20 PATO patents than the PATO member is to one patent from the aggressor. Perhaps the suit will just be dropped and the bad guy will decide to join PATO….

Differences with the DPL, two decades hence:

  • PATO was to cover software patents only; a challenge to define.
  • PATO members could counter-sue attackers with patents from any other member; I have no idea whether this is legally feasible.
  • PATO never moved beyond raw idea stage, as far as I know, while legal work on the DPL has gone on for a few years, DPL 1.0 is complete, and the project is set for a public launch in February.

In 1993, software patents were new, and still opposed by Oracle and Microsoft. Since then both have become software patent aggressors and defend the idea of software patents.

Many companies that claim to dislike software patent aggression in 2013 will become aggressors over the next years, or their patents will be obtained and used by trolls and other aggressors. Becoming a DPL user now may be an effective way for such companies to avoid this fate, and avoid contributing to the stifling of equality, freedom, and innovation.

Addendum 20131202: Another difference between the PATO sketch and the DPL implementation is that the former includes “US$25/year” to be a member, while the latter is gratis. I assume that the nascent DPL Foundation will be able to attract adequate grants and other support, perhaps more than could be obtained through a membership fee, but the choice is at the least an interesting and important one.

Innovation Pending

Wednesday, November 20th, 2013

Does the U.S. Patent System Stifle Innovation? Pro: Christopher Kelty, Laura Sydell. Con: Jaz Banga, Scott Snibbe. Moderator: Eric Goldman. Video:

The moderator was by far the best performer. Watch above, or read his introduction and audience voting instructions.

The pro side’s opening statement was funny, involving the definition of “stifle”, freedom as the oxygen of innovation, and innovation occurring within the iron lungs of large corporations, due to the patent system. Otherwise they stuck to a narrow argument: the current U.S. patent system is beset by trolls (Sydell was a reporter for When Patents Attack and II) and lawsuits and some would-be inventors do give up after realizing they are in a heavily patented field, ergo, the U.S. patent system stifles innovation.

The con side often seemed to make contradictory arguments that didn’t support their side. At one point the moderator interrupted to ask if they were really making a claim they seemed to be; nobody was fazed, though I could swear at various points the pro side was looking incredulously at the con side (the recording is at the wrong angle to really see). But their fundamental argument was that there’s lots of innovation happening, patents and IP generally are as American as apple pie, and trolls, while bad, aren’t a big deal for companies like Apple with many billions of dollars; ergo, the U.S. patent system does not stifle innovation.

The audience voted for the con side.

In my previous post noting that this debate was coming up, I concluded with “I hope they also consider equality and freedom.” They did a bit with regard to innovators — “freedom to innovate” and how “small” and “large” innovators fare in the system. But I had in mind expanding the discourse to include the effects of innovation policy on the freedom and equality of all humans.

“Patent” and “stifle” were expertly and humorously defined by Goldman and Kelty, but “innovation” remained undefined. The closest the debate came to exploring the contours of what innovation means, or ought mean, may have been in points made about the triviality of some patents, and the contrast between “small” and “large” innovators. Is innovation ‘done in a fashion that has served to maximize the patent encumbrances’ so it can be controlled by Apple, Microsoft, IBM, Monsanto, et al, the innovation we want?

Both the pro and con sides seemed to dislike patent trolls (while disagreeing on their importance). I wonder if any of the participants (particularly the con side) will endorse, or better yet, sign up for the Defensive Patent License (my discussion)? Or any of the other reforms reviewed by Goldman in Fixing Software Patents?

The debate was part of ZERO1 Garage’s Patent Pending exhibition, open through December 20. Each of the exhibited works is somehow related to a patent held or filed for by the artist.

One patent related to a work is pending, thus the work required an NDA for viewing:

[image: nda]

The handful of people I showed this image to were each appalled. But, in the context of the show, I have to admit it is cute. And, perhaps unintended, a critique of patent theory — which claims that patents encourage revelation.

Each of the pieces is interesting to experience. I particularly enjoyed the sounds made and shadows cast by (con side debater) Snibbe’s fan work (controlled by blowing through a smaller fan):

[image: fans]

My only disappointment from the exhibition is that there wasn’t a sample of these bricks, apparently made in part from fungus, available to touch:

[image: fungus brick]

Bonus link: Discussions On The Abolition Of Patents In The UK, France, Germany And The Netherlands, From 1869. As I’ve mentioned before, these debates are nothing new, though it’s popular even for “reformers” to claim that current innovation policy is somehow mismatched with the “digital age”. The only difference between old and current debates is that the public interest is far more buried in the current ones.

Defensive Patent License 1.0 birthday

Saturday, November 16th, 2013

Defensive Patent License version 1.0 turned 0 yesterday. The Internet Archive held a small celebration. The FAQ says the license may be used now:

Sign up and start using the DPL by emailing defensivepatent@gmail.com.

There will be a launch conference 2014-11-07 (originally announced for 2014-02-28) in Berkeley: gratis registration. By that time I gather there should be a list of launch DPL users, a website for registering and tracking DPL users, and a non-profit organization to steward the license, for which the Internet Archive will serve as a 501(c)3 fiscal sponsor.

Loosely organized thoughts follow. But in short:

  • DPL users grant a royalty-free license (except for the purpose of cloning products) for their entire patent portfolio, to all other DPL users. This grant is irrevocable, unless the licensee (another DPL user) withdraws from the DPL or initiates patent litigation against any DPL user — but note that the withdrawing or aggressing entity’s grant of patents to date to all other DPL users remains in force forever (see the toy sketch after this list).
  • Participation is on an entity basis, i.e., a DPL user is an organization or individual. All patents held or gained while a DPL user are included. But the irrevocable license to other DPL users then travels with individual patents, even when transferred to a non-DPL user entity.
  • An entity doesn’t need any patents to become a DPL user.
  • DPL doesn’t replace or conflict with patent peace provisions in modern free/open source licenses (e.g., Apache2, GPLv3, MPL2); it’s a different, complementary approach.
  • It may take years for the pool of DPL users’ patents to be significant enough to gain strong network effects and become a no-brainer for businesses in some industries to join. It may never. But it seems possible, and well worth trying.
  • Immediately, the DPL seems like something that organizations wanting to make a strong but narrow commitment to patent non-aggression (narrow in that it extends only to others making the same commitment) ought to get on board with. Entities that want to make a broader commitment, including those that have already made complementary commitments through free/open source licenses or non-aggression pledges for certain uses (e.g., implementing a standard), should also get on board.
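
To make the grant mechanics above concrete, here is a toy model (TypeScript; every name and structure is mine, and this is only my reading of the summary above, not the license text itself, and certainly not legal advice) of who holds a license to what after joins, withdrawals, and patent transfers:

```typescript
// Toy model of the DPL grant mechanics summarized above. Illustrative only.

type Entity = string;

interface Patent {
  id: string;
  owner: Entity;
  // Entities holding a DPL grant to this patent. The set survives transfer of
  // the patent to a non-DPL-user owner, and survives the grantor leaving.
  grantees: Set<Entity>;
}

class DplPool {
  private users = new Set<Entity>();
  private patents = new Map<string, Patent>();

  // On joining, an entity licenses its whole portfolio to current users and
  // receives grants covering every patent owned by current users.
  join(entity: Entity): void {
    for (const p of this.patents.values()) {
      if (this.users.has(p.owner)) p.grantees.add(entity);
      if (p.owner === entity) for (const u of this.users) p.grantees.add(u);
    }
    this.users.add(entity);
  }

  // Patents gained while a DPL user are licensed to all other current users.
  registerPatent(id: string, owner: Entity): void {
    const grantees = new Set<Entity>();
    if (this.users.has(owner)) {
      for (const u of this.users) if (u !== owner) grantees.add(u);
    }
    this.patents.set(id, { id, owner, grantees });
  }

  // Withdrawal (or aggression): the leaver loses its licenses from remaining
  // users, but grants already made on its own patents stay in force forever.
  withdraw(entity: Entity): void {
    this.users.delete(entity);
    for (const p of this.patents.values()) {
      if (this.users.has(p.owner)) p.grantees.delete(entity);
    }
  }

  // Transferring a patent does not shake off existing grantees.
  transfer(patentId: string, newOwner: Entity): void {
    const p = this.patents.get(patentId);
    if (p) p.owner = newOwner;
  }

  hasLicense(entity: Entity, patentId: string): boolean {
    return this.patents.get(patentId)?.grantees.has(entity) ?? false;
  }
}
```

For example, if A registers a patent while B is a user and A later withdraws (or the patent is sold), hasLicense for B on that patent stays true, while A loses its licenses to patents owned by the remaining users.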

History

Last year I’d read Protecting Open Innovation: The Defensive Patent License as a New Approach to Patent Threats, Transaction Costs, and Tactical Disarmament (by Jennifer Urban and Jason Schultz, also main authors of the DPL 1.0) with interest and skepticism, and sent some small comments to the authors. The DPL 1.0, available for use now, incorporates some changes suggested in A Response to a Proposal for a Defensive Patent License (DPL) (and probably elsewhere; quite a few people worked on the license). Both papers are pretty good reads for understanding the idea and some of the choices made in DPL 1.0.

Two new things I learned yesterday are that the DPL was Internet Archive founder Brewster Kahle’s idea, and that work on the license started in 2009. Kahle had been disturbed that patents with his name on them, which he had been told were obtained for defensive purposes while he was an engineer at Thinking Machines, were later used offensively by an entity that had acquired them. This made him wonder whether there could be a way for an entity to commit to using patents only defensively. Kahle acknowledged that others have had similar ideas, but the DPL is now born, and it just may be the idea that works.

(No specific previous ideas were mentioned, but a recent one that comes to mind is Paul Graham’s 2011 suggestion of a pledge not to initiate patent litigation against organizations with fewer than 25 employees. Intentionally imprecise, not legally binding, and offering no benefit other than appearing on a web page, it is probably not surprising that it didn’t take off. Another is Twitter’s Innovator’s Patent Agreement (2012), in which a company promises an employee to seek their permission for any non-defensive uses of patents in the employee’s name; uptake is unclear. Additional concepts are covered at End Soft Patents.)

Kahle, Urban, and Schultz acknowledged inspiration from the private ordering/carving out of free spaces (for what Urban and Schultz call “open innovation communities” to practice) through public licenses such as the GPL and various Creative Commons licenses. But the DPL is rather different in a few big ways (and in details which fall out of these):

  1. Subject of grant: patent vs. copyright
  2. Scope of grant: all subject rights controlled by an entity vs individual things (patents or works subject to copyright)
  3. Offered to: club participants vs. general public

I guess there will be a tendency to assume the second and third follow strictly from the first. I’m not so sure — I can imagine free/open source software and/or free culture/open content/data worlds which took the entity and club paths (still occasionally suggested) — and I think the assumption would under-appreciate the creativity of the DPL.

DPL and free/open source software

The DPL is not a replacement for patent clauses in free/open source licenses, which are conditions of public copyright licenses with a different subject, scope, and audience (see previous). Additionally, the DPL’s non-grant for cloning products, the scope of which I do not understand, probably further reduces whatever overlap may exist between modern FLOSS license patent provisions and the DPL. But I see no conflict, and some complementarity.

A curiosity would be DPL users releasing software under free software licenses without patent provisions, or even with explicit patent non-grants, like CC0. A complementary curiosity would be free/open source projects which only accept contributions from DPL users. Yet another would be a new software license only granting copyright permissions to DPL users (this would almost certainly not be considered free/open source), or releasing DPL users from some license conditions (this could be done as an exception to an existing license).

The DPL isn’t going to directly solve any patent problems faced by free/open source software (e.g., encumbered ‘standards’) any time soon. But, to the extent the DPL decreases the private value (expected rents) of patents and encourages more entities to not see patents as useful for collecting rents, this ought push the problems faced away, just a bit. Even if software patents were to evaporate tomorrow (as they should!), users of free/open source software would encounter patents impacting all sorts of devices running said software; patents would still be a problem for software freedom.

I hope that many free/open source software entities become DPL users, for the possible slowly accruing benefits above, but also to make common cause with others fighting for (or reforming slightly towards) intellectual freedom. Participation in broader discourse by free/open source software entities is a must, for the health of free software, and the health of free societies.

End Soft Patents’ entry on the DPL will probably be a good place to check years hence on how the DPL is viewed from the perspective of free/open source software.

DPL “enforcement”

In one sense, the DPL requires no enforcement — it is a grant of permission, which one either takes or not by also becoming a DPL user. But, although it contains provisions to limit obvious gaming, if it becomes significant, doubtless some entities will try to push its boundaries, perhaps by obfuscating patent ownership, or interpreting “cloning” expansively. Or, the ability to leave with 180 days notice could prove to be a gaping hole, with entities taking advantage of the pool until they are ready to file a bunch of patents. Or, the lack of immediate termination of licenses from all DPL users and the costliness of litigation may mean the DPL pool does little to restrain DPL users from leaving, or worse, initiating litigation (or threatening to do so, or some other extortion) against other DPL users.

Perhaps the DPL Foundation, with a public database of DPL users, will play a strong coordinating function: facilitating the uncovering of obfuscated ownership, the dissemination of notice of bad behavior, and the revocation of licenses to litigators and leavers.

DPL copyleft?

In any discussion of an X remotely similar to free/open source software, the question “what is the copyleft for X?” comes up (and one of the birthday presenters mentioned that the name DPL is a hat tip to the GPL). So: is the DPL “copyleft for patents”?

It does have reciprocity: only DPL users get DPL grants from other DPL users. I will be surprised if at some point someone doesn’t pejoratively call the DPL “viral”, because the license to DPL users stays with patents even if they are transferred to a non-DPL user entity. A hereditary effect more directly analogous to the GPL might involve a grant conditioned on a licensee’s other patents which read on the licensed patent being similarly licensed, but this seems ineffective at first blush (and has been thought of and discarded innumerable times).

The DPL doesn’t have a regulatory side. Forced revelation, directly analogous to the GPL’s primary regulatory side, would be the obvious thing to investigate for a DPL flavor, but the most naive requirement (entity must reveal all patentable inventions in order to remain a DPL user in good standing) would be nearly impossible to comply with, or enforce. It may be more feasible to require revelation of designs and documentation for products or services (presumably source code, for software) that read on any patents in the DPL pool. This would still constitute a huge compliance and enforcement challenge, and it would probably be very difficult to bootstrap a significant pool, but it would be an extremely interesting regulatory experiment if it gained any traction.

DPL “Troll-proof”?

The slogan must be taken with a mountain of salt. Still, the DPL, if widely adopted, would mitigate the troll problem. Because grants to DPL users are irrevocable, and follow a patent upon changes of ownership, any patent with a grant to DPL users will be less valuable for a troll to acquire, because there are fewer entities for the troll to sue. To the extent DPL adoption reduces patenting in an industry, or overall, there will be less ammunition available for trolls to buy and use to hold anyone up. In the extreme of success, all practicing entities become DPL users. Over a couple decades, the swamp is drained.

Patents are still bad

The only worrisome thing I heard yesterday (and I may have missed some nuance) was the idea that it is unfortunate that many engineers, and participants in open innovation communities in particular, see patents as unethical; and that, just as free/open source software people learned to use public copyright licenses (software was not subject to copyright until 30-40 years ago), they and others should learn to use appropriate patent tools, i.e., the DPL.

First, the engagement of what has become free/open source software, open access, open data, etc., with copyright tools, has not gone swimmingly. Yes, much success is apparent, but compared to what? The costs beg to be analyzed: isolation, conservatism, internal fighting, gaming of tools used, disengagement from policy and boundary-pushing, reduction (and stunting) of ethics to license choice. My ideal, as hinted above, would be for engagement with the DPL to help open innovation communities escape this trap, rather than adding to its weight.

Second, in part because extreme “drain the swamp” level of success is almost certainly not going to be achieved, abolition (of software patents) is the only solution. And beyond software, the whole system should be axed. Of course this means not merely defending innovators, including open innovation communities, from some expense and litigation, but moving freedom and equality to the top of our innovation policy ordering.

DPL open infrastructure?

In part to make the DPL attractive to existing open innovation communities, I really hope the DPL Foundation will make everything it does free and open with traditional public copyright and publishing tools:

  • Open content: the website and all documentation ought be licensed under CC0 (though CC-BY or CC-BY-SA would be acceptable).
  • Open source/open service: source code of the eventual website, including applications for tracking DPL users, should be developed in a public repository, and licensed under either Apache2 or AGPLv3 (latter if the Foundation wishes to force those using the software elsewhere to reveal their modifications).
  • Open data: all data concerning DPL users, licensed patents, etc., should be machine-readable, downloadable in bulk, and released under CC0.
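
Purely to illustrate the open data point above, a bulk-downloadable record for a DPL user might look something like the following sketch (every field name and value here is invented; nothing comes from an actual DPL Foundation plan):

```typescript
// Hypothetical, illustrative shape for a bulk-downloadable DPL user record.
// The point is only that the data should be machine-readable, complete, and
// itself released under CC0.

interface DplUserRecord {
  entity: string;             // legal name of the DPL user
  joined: string;             // ISO 8601 date the DPL commitment took effect
  withdrawn?: string;         // ISO 8601 date of any withdrawal notice
  patents: { number: string; jurisdiction: string; title: string }[];
  contact: string;            // where DPL-related notices should be sent
  metadataLicense: "CC0-1.0"; // license of this record itself
}

const example: DplUserRecord = {
  entity: "Example Widgets LLC",
  joined: "2015-01-01",
  patents: [{ number: "US0000000", jurisdiction: "US", title: "Example widget" }],
  contact: "patents@example.com",
  metadataLicense: "CC0-1.0",
};
```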

DPL readability

I found the DPL surprisingly brief and readable. My naive guess, given a description of how it works, would have been something far longer and more inscrutable. But the DPL actually compares very favorably to public copyright licenses on automated readability metrics. The table below shows these for DPL 1.0 and some well-known public copyright licenses (lower numbers indicate better readability, except in the case of Flesch; Chars/(Flesch>=1) is my gross metric for how painful it is to read a document; see license automated readability metrics for an explanation):

SHA1                                      License     Characters  Kincaid  ARI   Coleman-Liau  Fog   Lix   SMOG  Flesch  Chars/(Flesch>=1)
8ffe2c5c25b85e52f42fcde68c2cf6a88b7abd69  Apache-2.0  8310        16.8     19.8  15.1          20.7  64.6  16.6  33.6    247
20dc61b94cfe1f4ba5814b340095b4c3fa23e801  CC-BY-3.0   14956       16.1     19.4  14.1          20.4  66.1  16.2  40.0    373
bbf850220781d9423be9e478fbc07098bfd2b5ad  DPL-1.0     8256        15.1     18.9  15.7          18.4  65.9  15.0  40.6    203
0473f7b5cf37740d7170f29232a0bd088d0b16f0  GPL-2.0     13664       13.3     16.2  12.5          16.2  57.0  12.7  52.9    258
d4ec7d0b46077b89870c66cb829457041cd03e8d  GPL-3.0     27588       13.7     16.0  13.3          16.8  57.5  13.8  47.2    584
78fe0ed5d283fd1df26be9b4afe8a82124624180  MPL-2.0     11766       14.7     16.9  14.5          17.9  60.5  14.9  40.1    293
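
For concreteness, here is a rough sketch (TypeScript) of how the Flesch Reading Ease score and the Chars/(Flesch>=1) column are computed. The published numbers come from dedicated tools, so the naive syllable counter I use below will not reproduce them exactly, but the ratio itself checks out against the table, e.g. 8310 / 33.6 ≈ 247 for Apache-2.0.

```typescript
// Flesch Reading Ease and the gross Chars/(Flesch>=1) "pain" metric used in
// the table above. The syllable counter is a crude heuristic (count vowel
// groups), so scores will differ somewhat from dedicated readability tools.

function countSyllables(word: string): number {
  const groups = word.toLowerCase().match(/[aeiouy]+/g);
  return groups ? groups.length : 1;
}

function licenseReadability(text: string): { flesch: number; charsPerFlesch: number } {
  const sentences = Math.max(1, (text.match(/[.!?]+/g) ?? []).length);
  const words = text.match(/[A-Za-z']+/g) ?? [];
  const wordCount = Math.max(1, words.length);
  const syllables = words.reduce((sum, w) => sum + countSyllables(w), 0);

  // Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
  const flesch = 206.835 - 1.015 * (wordCount / sentences) - 84.6 * (syllables / wordCount);

  // Characters divided by the Flesch score, floored at 1 so a very hard text
  // cannot produce a negative or infinite ratio (e.g. 8310 / 33.6 ≈ 247).
  const charsPerFlesch = text.length / Math.max(1, flesch);

  return { flesch, charsPerFlesch };
}

// Example usage with any license text loaded as a string:
// console.log(licenseReadability(licenseText));
```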

Automated readability metrics are probably at best an indicator for license drafters, but offer no guidance on actually improving readability. Last month Luis Villa (incidentally, on the DPL’s advisory board) reviewed a manual of style for contract drafting by editing Twitter’s Innovator’s Patent Agreement per the manual’s advice. I enjoyed Villa’s post, but have not attempted to discern (and discernment may be beyond my capability) how closely DPL 1.0 follows the manual’s advice. By the way, Villa’s edit of the IPA per the manual did improve its automated readability metrics:

SHA1                                      License   Characters  Kincaid  ARI   Coleman-Liau  Fog   Lix   SMOG  Flesch  Chars/(Flesch>=1)
8774cfcefbc3b008188efc141256b0a8dbe89296  IPA       4778        19.6     24.0  15.5          22.7  75.8  17.0  27.1    176
b7a39883743c7b1738aca355c217d1d14c511de6  IPA-MSCD  4665        17.4     21.2  15.6          20.4  70.2  16.0  32.8    142

Net

Go back to the top, read the DPL, and get your entity (and others) in the queue to be DPL users at its launch! Or, explain to me why this is a bad idea.

“I would love it if all patents evaporated” (WebRTC)

Monday, November 11th, 2013

I’ve been following WebRTC (Real Time Communications) because (1) it is probably the most significant addition to the web in terms of enabling a new class of applications at least since the introduction of Ajax (1998, standardized by 2006), and perhaps since the introduction of Javascript (1995, standardized by 1997). The IETF working group charter puts it well (another part of the work is at W3C):

There are a number of proprietary implementations that provide direct interactive rich communication using audio, video, collaboration, games, etc. between two peers’ web-browsers. These are not interoperable, as they require non-standard extensions or plugins to work. There is a desire to standardize the basis for such communication so that interoperable communication can be established between any compatible browsers. The goal is to enable innovation on top of a set of basic components. One core component is to enable real-time media like audio and video, a second is to enable data transfer directly between clients.

(See pad.textb.org (source) for one simple application; simpleWebRTC seems to be a popular library for building WebRTC applications.)
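
For a sense of the API surface involved, here is a minimal in-page loopback sketch (TypeScript, assuming a WebRTC-capable browser): two peer connections exchange the offer/answer and ICE candidates directly, then pass a message over a data channel. A real application would carry the SDP and candidates over a signaling channel of its choosing, and would add audio/video tracks via addTrack/ontrack, which is where the codec question below bites.

```typescript
// Minimal in-page WebRTC loopback: two peer connections exchange offer/answer
// and ICE candidates directly, then send a message over a data channel.

async function loopbackDemo(): Promise<void> {
  const pcA = new RTCPeerConnection();
  const pcB = new RTCPeerConnection();

  // Trickle ICE candidates straight to the other peer (no signaling server).
  pcA.onicecandidate = (e) => { if (e.candidate) pcB.addIceCandidate(e.candidate); };
  pcB.onicecandidate = (e) => { if (e.candidate) pcA.addIceCandidate(e.candidate); };

  // One core WebRTC component: direct data transfer between clients.
  const channel = pcA.createDataChannel("chat");
  channel.onopen = () => channel.send("hello over a peer-to-peer channel");
  pcB.ondatachannel = (e) => {
    e.channel.onmessage = (msg) => console.log("received:", msg.data);
  };

  // Standard offer/answer dance, normally relayed by a signaling server.
  const offer = await pcA.createOffer();
  await pcA.setLocalDescription(offer);
  await pcB.setRemoteDescription(offer);
  const answer = await pcB.createAnswer();
  await pcB.setLocalDescription(answer);
  await pcA.setRemoteDescription(answer);
}

loopbackDemo().catch(console.error);
```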

And (2) because WebRTC is the scene of the latest fight to protect open web standards from rent seekers.

The IETF working group is choosing between H.264 Constrained Baseline Profile Level 1.2 and VP8 as the Mandatory To Implement (MTI) video codec (meaning all applications can count on that codec being available) for WebRTC. H.264 cannot be included in free and open source software, VP8 can, due to their respective patent situations. (For audio-only WebRTC applications, the free Opus codec seems to be a non-controversial requirement.)

Cisco has recently promised that in 2014 they will make available a binary implementation of H.264 for which they will pay license fees for all comers (there is an annual cap on fees, allowing them to do this). That’s nice of them, but the offer is far from ideal for any software (a binary must be downloaded from Cisco servers for each user), and a nonstarter for applications without some kind of plugin system, and for free and open source software distributions, which must be able to modify source code.

Last week I remotely attended a meeting on the MTI video codec choice. No consensus was reached; discussion continues on the mailing list. One interesting thing about the non-consensus was the split between physical attendees (50% for H.264, 30% for VP8) and remote attendees (20% for H.264, 80% for VP8). A point mentioned several times was the differing interests of “big players” (who are mostly fine with paying H.264 fees, and already use the codec in various other products) and “little players” (for whom the fees are significant, e.g., startups, or impossible, e.g., free and open source projects); depending on one’s perspective, the difference shows how venue biases participation in one or both directions.

Jonathan Rosenberg, the main presenter for H.264, at about 22 minutes into a recording segment:

I would love it if all patents evaporated, if all the stuff was open source in ways that we could use, and we didn’t have to deal with any of this mess.

The argument for why H.264 is the best choice for dealing with “this mess” boils down to H.264 having a longer history and broader adoption than VP8 (in other applications; the two implementations of WebRTC so far, in recent versions of Chrome and Firefox, exclusively use VP8).

Harald Alvestrand, the main presenter for VP8, at about 48 minutes into another recording segment:

Development of codecs has been massively hampered and held back by the fact that it has been done in a fashion that has served to maximize the patent encumbrances on codecs. Sooner or later, we should see a way forward to abandon the dependence on encumbered codecs also for video software. My question, at this juncture, is if not now, when?

Unsurprisingly, I find this (along with the unworkability of H.264 for free and open source software) a much more compelling argument. The first step toward making patents evaporate (or at least irrelevant for digital video) is to select a codec which has been developed to maximize freedom, rather than developed to maximize encumbrances and rent collection.

What are individuals and entities pushing H.264 as the best codec for now, given the mess, doing for the longer term? Are they working on H.265, in order to bake in rents for the next generation? Or are they contributing to VP9, the next-next generation Daala, and the elimination of software patents?

Addendum: Version of this post sent to rtcweb@ietf.org (and any followups).

Economics and the Commons Conference [knowledge stream] report

Wednesday, October 30th, 2013

Economics and the Common(s): From Seed Form to Core Paradigm. A report on an international conference on the future of the commons (pdf) by David Bollier. Section on the knowledge stream (which I coordinated; pre-conference post) copied below, followed by an addendum with thanks and vague promises. First, video of the stream keynote (slides) by Carolina Botero (introduced by me; archive.org copy).

III. “Treating Knowledge, Culture and Science as Commons”

Science, and recently, free software, are paradigmatic knowledge commons; copyright and patent paradigmatic enclosures. But our vision may be constrained by the power of paradigmatic examples. Re-conceptualization may help us understand what might be achieved by moving most provisioning of knowledge to the commons; help us critically evaluate our commoning; and help us understand that all commons are knowledge commons. Let us consider, what if:

  • Copyright and patent are not the first knowledge enclosures, but only “modern” enforcement of inequalities in what may be known and communicated?
  • Copyright and patent reform and licensing are merely small parts of a universe of knowledge commoning, including transparency, privacy, collaboration, all of science and culture and social knowledge?
  • Our strategy puts commons values first, and views narrow incentives with skepticism?
  • We articulate the value of knowledge commons – qualitative, quantitative, ethical, practical, other – such that knowledge commons can be embraced and challenged in mainstream discourse?

These were the general questions that the Knowledge, Culture and Science Stream addressed.

Knowledge Stream Keynote Summary

Carolina Botero Cabrera, a free culture activist, consultant and lawyer from Colombia, delivered a plenary keynote for the Knowledge Stream entitled, “What If Fear Changes Sides?” As an author and lecturer on free access, free culture and authors’ rights, Botero focused on the role of information and knowledge in creating unequal power relationships, and how knowledge and cultural commons can rectify such problems.

“If we assume that information is power and acknowledge the power of knowledge, we can start by saying that controlling information and knowledge means power. Why does this matter?” she asked. “Because the control of information and knowledge can change sides. The power relationship can be changed.”

One of the primary motives of contemporary enclosures of information and knowledge, said Botero, is to instill fear in people – fear of violating copyright law, fear of the penalties for doing so. This inhibits natural tendencies to share and re-use information. So the challenge facing us is to imagine if fear could change sides. Can we imagine a switch in power relationships over the control of knowledge – how we produce, distribute and use knowledge? Botero said we should focus on the question: “How can we switch the tendency of knowledge regulation away from enclosure, so that commons can become the rule and not the exception?”

“There are still many ways to produce things, to gain knowledge,” said Botero, who noted that those who use the word “commons” [in the context of knowledge production] are lucky because it helps name these non-market forms of sharing knowledge. “In Colombia, we don’t even have that word,” she said.

To illustrate how customary knowledge has been enclosed in Colombia, Botero told the story of parteras, midwives, who have been shunted aside by doctors, mostly men, who then asserted control over women’s bodies and childbirth, and marginalized the parteras and their rich knowledge of childbirth. This knowledge is especially important to those communities in remote areas of Colombia that do not have access to doctors. There is currently a huge movement of parteras in Colombia who are fighting for the recognition of their knowledge and for the legal right to act as midwives.

Botero also told about how copyright laws have made it illegal to reproduce sheet music for songs written in 18th and 19th century Colombia. In those times, people simply shared the music among each other; there was no market for it. But with the rise of the music industry in the 20th century, especially in the North, it is either impossible or unaffordable to get this sheet music because most of it is copyrighted. So most written music in Colombia consists of illegally photocopied versions. Market logic has criminalized the music that was once natural and freely flowing in Colombian culture. Botero noted that this has increased inequality and diminished public culture.

She showed a global map illustrating which nations received royalties and fees from copyrights and patents in 2002; the United States receives more than half of all global revenues, while Latin America, Africa, India and other countries of the South receive virtually nothing. These are the “power relationships” that Botero was pointing to.

Botero warned, “We have trouble imagining how to provision and govern resources, even knowledge, without exclusivity and control.” Part of the problem is the difficulty of measuring commons values. Economists are not interested, she said, which makes it difficult to go to politicians and persuade them why libraries matter.

Another barrier is our reliance on individual incentives as core value in the system for regulating knowledge, Botero said. “Legal systems of ‘intellectual property’ place individual financial incentives at the center for knowledge regulation, which marginalizes commons values.” Our challenge is to find ways to switch from market logics by showing that there are other logics.

One reason that it is difficult to displace market logics is that we are reluctant or unable to “introduce the commons discourse from the front door instead of through the back door,” said Botero. She confessed that she herself has this problem because most public debate on this topic “is based on the premise that knowledge requires enclosure.” It is difficult to displace this premise by talking about the commons. But it is becoming increasingly necessary to do so as new policy regimes, such as the Trans-Pacific Partnership (TPP) Agreement, seek to intensify enclosures. The TPP, for example, seeks to raise minimum levels of copyright restriction, extend the terms of copyrights, and increase the prison terms for copyright violations.

One way to reframe debate, suggested Botero, is to see the commons “not as the absence of exclusivity, but the presence of non-exclusivity. This is a slight but important difference,” she said, “that helps us see the plenitude of non-exclusivity” – an idea developed by Séverine Dussolier, professor and director of the Revue Droit des Technologies de l’Information (RDTI, France). This shift “helps us to shift the discussion from the problems with the individual property and market-driven perspective, to a framework and society that – as a norm – wants its institutions to be generative of sharing, cooperation and equality.”

Ultimately, what is needed are more “efficient and effective ways to protect the ethic and practice of sharing,” or as she put it, “better commoning.” Reforming “intellectual property” is only one small part of the universe of knowledge commoning, Botero stressed. It also includes movements for “transparency, privacy, collaboration, and potentially all of science and culture.”

“When and how did we accept that the autonomy of all is subservient to control of knowledge by the few?” asked Botero. “Most important, can we stop this? Can we change it? Is the current tragedy our lack of knowledge of the commons?” Rediscovering the commons is an important challenge to be faced “if fear is going to change sides.”

An Account of the Knowledge, Culture and Science Stream’s Deliberations

There were no presentations in the Knowledge Stream breakout sessions, but rather a series of brief provocations. These were intended to spur a lively discussion and to go beyond the usual debates heard at free and open software/free culture/open science conferences. A primary goal of the breakout discussions was to consider what it means to regard knowledge as a commons, rather than as a “carve-out” exception from a private property regime. The group was also asked to consider how shared knowledge is crucial to all commoning activity. Notes from the Knowledge Stream breakout sessions were compiled through a participatory titanpad, from which this account is adapted.

The Knowledge Stream focused on two overarching themes, each taking advantage of the unique context of the conference:

  1. Why should commoners of all fields care about knowledge commons?
  2. If we consider knowledge first as commons, can we be more visionary, more inclusive, more effective in commoning software, science, culture, seeds … and much more?

The idea of the breakout session was to contextualize knowledge as a commons, first and foremost: knowledge as a subset of the larger paradigm of commons and commoning, as something far more than domain-specific categories such as software, scientific publication and educational materials.

An overarching premise of the Knowledge Stream was the point made by Silke Helfrich in her keynote, that all commons are knowledge commons and all commons are material commons. Saving seeds in the Svalbard Seedbank is of no use if we forget how to cultivate them, for example, and various digital commons are ultimately grounded in the material reality of computers, electricity infrastructures and the food that computer users need to eat.

There is a “knowledge commons” at the center of each commons. This means that interest in a “knowledge commons” isn’t confined to those people who only care about software, scientific publication, and so on. It also means that we should refrain from classifying commons into categories such as “natural resources” and “digital,” and begin to make the process of commoning itself the focal point.

Of course, one must immediately acknowledge that digital resources do differ in fundamental ways from finite natural resources, and therefore the commons management strategies will differ. Knowledge commons can make cheap or virtually free copies of intangible information and creative works, and this knowledge production is often distributed at very small scales. For cultural commons, noted Philippe Aigrain, a French analyst of knowledge governance and CEO of Sopinspace, a maker of free software for collaboration and participatory democracy, “the key challenge is that average attention becomes scarcer in a world of abundant production.” This means that more attention must be paid to “mediating functions” – curating – and “revising our cultural expectations about ‘audiences’.”

It is helpful to see the historical roots of Internet-enabled knowledge commons, said Hilary Wainwright, the editor behind the UK political magazine Red Pepper and a researcher at the Transnational Institute. The Internet escalated the practice of sharing knowledge that began with the feminist movement’s recognition of a “plurality of sources.” It also facilitated the socialization of knowledge as a kind of collective action.

That these roots are not widely appreciated points to the limited vision of many knowledge commons, which tend to rely on a “deeply individualistic ethical ontology,” said Talha Syed, a professor of law at the University of California, Berkeley. This worldview usually leads commoners to focus on coercion – enclosures of knowledge commons – as the problem, he said. But “markets are problematic even if there is no monopoly,” he noted, because “we need to express both threats and positive aspirations in a substantive way. Freedom is more than people not coercing us.”

Shun-Ling Chen, a Taiwanese professor of law at the University of Arizona, noted that even free, mass-collaboration projects such as Wikipedia tend to fall back on western, individualistic conceptions of authorship and authority. This obscures the significance of traditional knowledge and history from the perspective of indigenous peoples, where less knowledge is recorded by “reliable sources.”

As the Stream recorded in its notes, knowledge commons are not just about individual freedoms, but about “marginalized people and social justice.” “The case for knowledge commons as necessary for social justice is an undeveloped theme,” the group concluded. But commons of traditional knowledge may require different sorts of legal strategies than those that are used to protect the collective knowledge embodied in free software or open access journals. The latter are both based on copyright law and its premises of individual rights, whereas traditional knowledge is not recognized as the sum of individual creations, but as a collective inheritance and resource.

This discussion raised the question of whether provisioning knowledge through commons can produce different sorts of “products” than those produced by corporate enclosures, or whether it will simply create similar products with less inequality. Big budget movies and pharmaceuticals are often posited as impossibilities for commons provision (wrongly, by the way). But should these industries be seen as the ‘commanding heights’ of culture and medicine, or would a commons-based society create different commanding heights?

One hint at an answer comes from seeing informality as a kind of knowledge commons. “Constructed commons” that rely upon copyright licenses (the GPL for software, Creative Commons licenses for other content) and upon policy reforms, are generally seen as the most significant, reputable knowledge commons. But just as many medieval commons relied upon informal community cooperation such as “beating the bounds” to defend themselves, so many contemporary knowledge commons are powerful because they are based on informal social practice and even illegality.

Alan Toner of Ireland noted that commoners who resist enclosures often “start from a position of illegality” (a point made by Ugo Mattei in his keynote talk). It may be better to frankly acknowledge this reality, he said. After all, remix culture would be impossible without civil disobedience to various copyright laws that prohibit copying, sharing and re-use – even if free culture people sometimes have a problem with such disrespectful or illegal resistance. “Piracy” is often a precursor to new social standards and even new legal rules. “What is legal is contingent,” said Toner, because practices we spread now set traditions and norms for the future. We therefore must be conscious about the traditions we are creating. “The law is gray, so we must push new practices and organizations need to take greater risks,” eschewing the impulse to be “respectable” in order to become a “guiding star.”

Felix Stalder, a professor of digital culture at Zurich University of the Arts, agreed that civil disobedience and piracy are often precisely what is needed to create a “new normal,” which is what existing law is explicitly designed to prevent. “Piracy is building a de facto commons,” he added, “even if it is unaware of this fact. It is a laboratory of the new that can enrich our understanding of the commons.”

One way to secure the commons for the future, said Philippe Aigrain of Sopinspace, is to look at the specific challenges facing the commons rather than idealizing them or over-relying on existing precedents. As the Stream discussion notes concluded, “Given a new knowledge commons problem X, someone will state that we need a ‘copyleft for X.’ But is copyleft really effective at promoting and protecting the commons of software? What if we were to re-conceptualize copyleft as a prototype for effective, pro-commons regulation, rather than a hack on enclosure?”

Mike Linksvayer, the former chief technology officer of Creative Commons and the coordinator of the Knowledge Stream, noted that copyleft should be considered as “one way to force sharing of information, i.e., of ensuring that knowledge is in the commons. But there may be more effective and more appropriate regulatory mechanisms that could be used and demanded to protect the commons.”

One provocative speculation was that there is a greater threat to the commons than enclosure – and that is obscurity. Perhaps new forms of promotion are needed to protect the commons from irrelevance. It may also be that excluding knowledge that doesn’t really contribute to a commons is a good way to protect a commons. For example, projects like Wikipedia and Debian mandate that only free knowledge and software be used within their spaces.


Addendum

Thanks to everyone who participated in the knowledge stream. All who prepared and delivered deep and critical provocations in the very brief time allotted:
Bodó Balázs
Shun-Ling Chen
Rick Falkvinge
Marco Fioretti
Charlotte Hess
Gaëlle Krikorian
Glyn Moody
Mayo Fuster Morrell
Prabir Purkayastha
Felix Stalder
Talha Syed
Wouter Tebbens
Alan Toner
Chris Watkins

Also thanks to Mayo Fuster Morrell and Petros for helping coordinate during the stream, and though neither could attend, Tal Niv and Leonhard Dobusch for helpful conversations about the stream and its goals. I enjoyed working with and learned much from the other stream coordinators: Saki Bailey (nature), Heike Löschmann (labor & care), Ludwig Schuster (money), and especially Miguel Said Vieira (infrastructure; early collaboration kept both infrastructure and knowledge streams relatively focused); and stream keynote speaker Carolina Botero; and conference organizers/Commons Strategy Group members: David Bollier, Michel Bauwens, and Silke Helfrich (watch their post-conference interview).

See the conference wiki for much more documentation on each of the streams, the overall conference, and related resources.

If a much more academic and apolitical approach is of interest, note the International Association for the Study of the Commons held its 2013 conference about 10 days after ECC. I believe there was not much overlap among attendees, one exception being Charlotte Hess (who also chaired a session on Governance of the Knowledge and Information Commons at the IASC conference).

ECC only strengthened my feeling (but, of course, I designed the knowledge stream to confirm my biases…) that we need a much more bold, deep, inclusive (in domains and methods of commoning, including informality, and in populations), critical (including self-critical; a theme broached by several of the people thanked above), and competitive (product: displacing enclosure; policy: putting equality & freedom first) knowledge commons movement, or vanguard of those movements. Or as Carolina Botero put it in the stream keynote: bring the commons in through the front door. I promise to contribute to this project.

ECC also made me reflect much more on commons and commoning as a “core paradigm” for understanding and participating in the arrangements studied by social scientists. My thoughts are half baked at best, but that will not stop me from making pronouncements, time willing.

Why does the U.S. federal government permit negative sum competition among U.S. states and localities?

Monday, October 14th, 2013

I dimly recall learning that the point of the second paragraph of Article 1, Section 10 of the U.S. Constitution was to avoid ruinous trade competition among the states:

No State shall, without the Consent of the Congress, lay any Imposts or Duties on Imports or Exports, except what may be absolutely necessary for executing it’s inspection Laws: and the net Produce of all Duties and Imposts, laid by any State on Imports or Exports, shall be for the Use of the Treasury of the United States; and all such Laws shall be subject to the Revision and Controul of the Congress.

Any remotely modern conception of trade competition includes non-tariff barriers.* To what extent have U.S. states and localities been prohibited from implementing such barriers, and why hasn’t civic extortion — large businesses negotiating with several jurisdictions for ever larger public subsidy — been outlawed?

Of course I’m thinking of the professional sports racket. Another example in today’s media: $285m public subsidy for Detroit pro sports teams, while the city is bankrupt. But there’s also a probably much larger practice of states and localities being goaded to offer huge subsidies to businesses to move their headquarters or other facilities, sometimes only a matter of blocks, as in the case of Kansas-Missouri competition in the Kansas City metro area. What could be more clearly negative sum?

*Internationally, non-tariff barrier removal by treaty and other negotiation is often cover for spreading other anti-competitive and inequality promoting practices. I’m not a fan, especially considering that non-treaty autonomous liberalization has for decades been the main source of trade barrier reduction. I’m amused that contributors to the English Wikipedia article on non-tariff barriers to trade have listed “Intellectual property laws (patents, copyrights)” as examples of such barriers. This should be taken literally.

Freedom At Stake As Oracle Clings To Java API Copyrights In Google Fight

Monday, April 29th, 2013

Developer Freedom At Stake As Oracle Clings To Java API Copyrights In Google Fight (dated 2013-03-30; I failed to complete this post in one sitting and let it sit…):

Oracle lost in their attempt to protect their position using patents. They lost in their attempt to claim Google copied anything but a few lines of code. If they succeed in claiming you need their permission to use the Java APIs that they pushed as a community standard, software developers and innovation will be the losers. Learning the Java language is relatively simple, but mastering its APIs is a major investment you make as a Java developer. What Android did for Java developers is to allow them to make use of their individual career and professional investment to engage in a mobile marketplace that Sun failed to properly engage in.

Johan Söderberg, Hackers GNUnited! (2008; appeared as a chapter in a book I also contributed to; Söderberg’s text stuck with me, as I’ve quoted an extended bit of it before):

Intellectual property rights prevent mobility of employees in so forth that their knowledge are locked in in a proprietary standard that is owned by the employer. This factor is all the more important since most of the tools that programmers are working with are available as cheap consumer goods (computers, etc.). The company holds no advantage over the worker in providing these facilities (in comparison to the blue-collar operator referred to above whose knowledge is bound to the Fordist machine park). When the source code is closed behind copyrights and patents, however, large sums of money is required to access the software tools. In this way, the owner/firm gains the edge back over the labourer/programmer.

These kinds of critiques of intellectual protectionism, made from the perspective of developers’ freedom to practice their trade (in addition to their freedom to modify and control their computing environment, to tinker), are too rare. I’m also reminded of the fun title Noncompete Agreements Are The DRM Of Human Capital. So are copyright and patent.

Back to Developer Freedom At Stake…:

Will our economy thrive and be more competitive because companies can easily switch from one service provider to the other by leveraging identical APIs? Or will our economy be throttled by allowing vendors to inhibit competition through API lock-in? And should this happen only because a handful of legacy software vendors wanted to protect their franchises for a few more years?

Clearly this isn’t just about developer freedom. Nor is it just about user freedom — non-users are affected by anti-competitive practices — and the freedom of all is put at risk.
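
A small sketch of the point in the quoted passage (TypeScript; all names invented): code written against an API can move between providers only if reimplementing that API does not require the incumbent’s permission.

```typescript
// Illustration only: two services exposing the same API. Code written against
// the shared interface can switch providers freely; if reimplementing the
// interface itself required the incumbent's permission, that freedom would
// belong to the API owner rather than to developers.

interface BlobStore {
  put(key: string, data: Uint8Array): Promise<void>;
  get(key: string): Promise<Uint8Array | undefined>;
}

// One vendor's implementation: a simple in-memory map.
class ProviderA implements BlobStore {
  private store = new Map<string, Uint8Array>();
  async put(key: string, data: Uint8Array): Promise<void> { this.store.set(key, data); }
  async get(key: string): Promise<Uint8Array | undefined> { return this.store.get(key); }
}

// A competitor's independent implementation of the same interface.
class ProviderB implements BlobStore {
  private store = new Map<string, Uint8Array>();
  async put(key: string, data: Uint8Array): Promise<void> {
    this.store.set(key, data.slice()); // e.g., defensively copies the data
  }
  async get(key: string): Promise<Uint8Array | undefined> { return this.store.get(key); }
}

// Application code depends only on the interface, so switching providers is a
// one-line change at the call site.
async function backup(store: BlobStore): Promise<void> {
  await store.put("notes.txt", new TextEncoder().encode("hello"));
}

backup(new ProviderA()).catch(console.error);
backup(new ProviderB()).catch(console.error);
```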

Bonus: What do APIs have in common with advertising?