
Software Patent NATO, 1993

Tuesday, November 26th, 2013

In my thoughts on the Defensive Patent License, I neglected to note in the history section a similar proposal made in 1993 by John Walker, founder of Autodesk, PATO: Collective Security In the Age of Software Patents:

[T]he trend toward increased litigation, constraining innovation in the software industry, is accelerating. The U.S. government is using trade negotiations to force other countries to institute software patents in their own markets.

While eliminating software patents would be the best solution, changing the law takes a long time and is uncertain to succeed. I’ve been trying to puzzle out how the software industry might rescue itself from immolation through litigation and came up with the following proposal.

Could have been written in 2013.

I’ve been thinking about using NATO as a model of a patent defence consortium. Suppose a bunch of big software companies (perhaps led by Oracle, who’s already taken the point on this) were to form PATO–Patent And Technology Organisation–and contribute all their current software patents, and all new software patents they were granted as long as they remained a member of PATO, to its “cross-licensing pool”. To keep the lawyers and shareholders from going nuts, the patents would be licensed through PATO but would remain the property of the member–a member could withdraw with appropriate notice and take the patents back from the pool.

Any member of PATO would be granted an automatic, royalty-free license to use any patent in the cross-licensing pool. Thus, by putting your patents in the pool, you obtain access to all the others automatically (but if you withdraw and pull your patents, of course you then become vulnerable for those you’ve used, which creates a powerful disincentive to quit).

The basic principle of NATO is that an attack on any member is considered an attack on all members. In PATO it works like this–if any member of PATO is alleged with infringement of a software patent by a non-member, then that member may counter-sue the attacker based on infringement of any patent in the PATO cross-licensing pool, regardless of what member contributed it. Once a load of companies and patents are in the pool, this will be a deterrent equivalent to a couple thousand MIRVs in silos–odds are that any potential plaintiff will be more vulnerable to 10 or 20 PATO patents than the PATO member is to one patent from the aggressor. Perhaps the suit will just be dropped and the bad guy will decide to join PATO….

Differences with the DPL, two decades hence:

  • PATO was to cover software patents only; a challenge to define.
  • PATO members could counter-sue attackers with patents from any other member; I have no idea whether this is legally feasible.
  • PATO never moved beyond raw idea stage, as far as I know, while legal work on the DPL has gone on for a few years, DPL 1.0 is complete, and the project is set for a public launch in February.

In 1993, software patents were new, and still opposed by Oracle and Microsoft. Since then both have become software patent aggressors and defend the idea of software patents.

Many companies that claim to dislike software patent aggression in 2013 will become aggressors over the next years, or their patents will be obtained and used by trolls and other aggressors. Becoming a DPL user now may be an effective way for such companies to avoid this fate, and avoid contributing to the stifling of equality, freedom, and innovation.

Addendum 20131202: Another difference between the PATO sketch and the DPL implementation is that the former includes “US$25/year” to be a member, while the latter is gratis. I assume that the nascent DPL Foundation will be able to attract adequate grants and other support, perhaps more than could be obtained through a membership fee, but the choice is at the least an interesting and important one.

Hierarchy of mechanisms for limiting copyright and copyright-like barriers to use of Public Sector Information, or More or Less Universal Government License(s)

Sunday, November 24th, 2013

This sketch is in part motivated by a massive proliferation of copyright and copyright-like licenses for government/public sector information, e.g., sub- and sub-sub-national jurisdiction licenses and sector- and jurisdiction-specific licenses intended to combat license proliferation within a sector within a jurisdiction. Also by longstanding concern about coordination among entities working to limit barriers to use of PSI and knowledge commons governance generally.

Everything following concerns PSI only relative to copyright and copyright-like barriers. There are other pertinent regulations and considerations to follow when publishing or using PSI (e.g., privacy and fraud; as these are pertinent even without copyright, it is silly and unnecessarily complicating to include them in copyright licenses) and other important ways to make PSI more useful technically and politically (e.g., open formats, focusing on PSI that facilitates accountability rather than openwashing).

Eliminate copyright and copyright-like restrictions

No longer barriers to use of PSI, because no longer barriers to use of information. May be modulated down to any general copyright or copyright-like barrier reduction, where the barrier is pertinent to use of PSI. Examples: eliminate sui generis database restrictions where they exist, increase threshold of originality required for information to be subject to copyright restriction, expand exceptions and limitations to copyright restrictions, expand affirmative user rights.

Eliminate copyright and copyright-like restrictions for PSI

For example, works produced by employees of the U.S. federal government are not subject to copyright restrictions in the U.S. Narrower exclusions from copyright restrictions (e.g., of laws, court rulings) are fairly common worldwide. These could be generalized to eliminate copyright and copyright-like restrictions for PSI worldwide, and expanded to include PSI produced by contractors or other non-government but publicly funded entities. PSI could be expanded to include any information produced with public funding, e.g., research and culture funded by public grants.

“Standard” international licenses for PSI

Public copyright licenses not specifically intended only for PSI are often used for PSI, and could be used more. CC0 is by far the best such license, but other Creative Commons (CC) and Open Data Commons (ODC) licenses are frequently used. Depending on the extent to which the licenses used leave copyright and copyright-like restrictions in place (e.g., CC0: none; CC-BY-NC-ND: lots, thus considered non-open) and how they are applied (from a legislative mandate covering all PSI to one-off use for individual reports and datasets at the discretion of an agency), this could have an effect similar to eliminating copyright and copyright-like restrictions for PSI, or almost zero effect.

Universal Government License

Governments at various levels have chosen to make up their own licenses rather than use a standard international license. Some of the better reasons for doing so will be eliminated by the forthcoming version 4.0 of 6 of the CC licenses (though again, CC0 has been the best choice, since 2009, and will remain so). But some of the less good reasons (uncharitable characterization: vanity) can’t be addressed by a standard international license, and furthermore seem to be driving the proliferation of sub-sub-national licenses, down to licenses specific to an individual town.

Ideally this extreme license proliferation trend would terminate with mass implementation of one of the above options, though this seems unlikely in the short term. Maybe yet another standard license would help! The idea of an “open government license” which various governments would have a direct role in creating and stewarding has been casually discussed in the past, particularly several years ago when the current proliferation was just beginning, the CC 4.0 effort had not begun, and CC and ODC were not on the same page. Nobody is particularly incented to make this unwieldy project happen, but nor is it an impossibility — due to the relatively small world of NGOs (such as CC and the Open Knowledge Foundation, of which ODC is a project) and government people who really care and know about public licenses, and the possibility that their collective exhaustion and exasperation over license details, incompatibility, and proliferation could reach a tipping point into collective action. There’s a lot to start from, including the research that went into CC-BY-4.0, and the OGL UK 2.0, which is a pretty good open license.

But why think small? How many other problems could be addressed simultaneously?

  • Defend the traditional meaning of ‘open government’ by calling the license something else, e.g., Universal/Uniform/Unified Government License.
  • Rallying point for the public sector worldwide to commit more firmly and broadly to limiting copyright and copyright-like barriers to use of PSI, more rapidly establishing a global norm, and leading to mandates. The one thing to be said for massive PSI license proliferation could be increased commitment from proliferating jurisdictions to use their custom licenses (I know of no data on this). A successful UGL would swamp any increased local commitment due to local vanity licenses through much higher level expectation and mandate.
  • Make the license work well for software (including being approved by the Open Source Initiative), as:
    • Generically “open” licenses are inevitably used for software, whether the steward apparently intends this (OGL UK 2.0) or does not (CC).
    • The best modern permissive license for software (Apache 2.0) is relatively long and unreadable for what it does, and has a discomfiting name (not nearly as bad as certain pro sports organizations, but still); it ought be superseded.
  • Ensure the license works for other domains, e.g., open hardware, which don’t really require domain-specific licenses, are headed down the path of proliferation and incompatibility, and that governments have obvious efficiency, regulatory, security, and welfare interests in.
  • Foster broader “open innovation community” engagement with government and public policy and vice versa, and more knowledge transfer across OIC domains, on legal instruments at the least.
  • Uniform Public License may be a better name than UGL in some respects (whatever the name, it ought be usable by the public sector, and the general public), but Government may be best overall, a tip of the hat to both the vision within governments that would be necessary to make the license succeed, and to the nature of copyright and copyright-like barriers as government regulatory regimes.

National jurisdiction licenses for PSI

A more likely mechanism for license proliferation deceleration and harm reduction in the near term is for governments within a national jurisdiction to use a single license, and follow various license stewardship and use best practices. Leigh Dodds recently blogged about the problem and highlighted this mechanism in a post titled The Proliferation of Open Government Licences.

Sub-national jurisdiction licenses for PSI

Each province/state and sub-jurisdiction thereof, down to towns and local districts, could use its own vanity license. This appears to be the trend in Canada. It would be possible to push further in this direction with multiple vanity licenses per jurisdiction, e.g., various licenses for various kinds of data, reports, and other materials.

Licenses for each PSI dataset or other work

Each and every government dataset or other publication could come with its own bespoke license. Though these licenses would grant permissions around some copyright and copyright-like restrictions, I suspect their net effect would be to heighten copyright and copyright-like restrictions as a barrier to both the use and publication of PSI, on an increased cost basis alone. This extreme highlights one of the downsides of copyright licenses, even unambiguously open ones — implementing, understanding, and using them can be seen as significant cost centers, creating an additional excuse for not opening materials, and encouraging the small number of people who really understand the mechanisms to be jealous and wary of any other reform.

None

Included for completeness.

Privatization of PSI copyright

Until now, I’ve assumed that copyright and copyright-like restrictions are barriers to use of PSI. But maybe there aren’t enough restrictions, or they aren’t allocated to the right entities, such that maximum value is realized from use of PSI. Control of copyright and copyright-like restrictions in PSI could be auctioned off to entities with the highest ability to extract rents from PSI users. These businesses could be government-owned, with various public-private partnerships in between. This would increase the direct contribution of PSI to GDP, incent the creation and publication of more PSI, ensure PSI is maintained and marketed, reaching citizens that can afford it, and provide a solid business model for Government 2.0, academia, cultural heritage, and all other publicly funded and publicly interested sectors, which would otherwise fail to produce an optimal level of PSI and related materials and innovations.

Do not let any of the above trick you into paying more attention to possible copyright and copyright-like barriers and licenses than actually doing stuff, especially with PSI, especially with “data”, doubly with “government data”.

I agree with Denny Vrandečić’s paradoxical sounding but correct directive:

Data is free. Free the data!

I tried to communicate the same in a chapter of the Data Journalism Handbook, but lacked the slogan.

Data is free. Free the data!

And what is not data? ☻

Addendum: Entirely by coincidence (in response to a European Commission consultation on PSI, which I had already forgotten about), today posts by Timothy Vollmer for the Communia Association and Creative Commons call out the license proliferation problem and endorse public domain as the default for PSI.

Innovation Pending

Wednesday, November 20th, 2013

Does the U.S. Patent System Stifle Innovation? Pro: Christopher Kelty, Laura Sydell. Con: Jaz Banga, Scott Snibbe. Moderator: Eric Goldman. Video:

The moderator was by far the best performer. Watch above, or read his introduction and audience voting instructions.

The pro side’s opening statement was funny, involving the definition of “stifle”, freedom as the oxygen of innovation, and innovation occurring within the iron lungs of large corporations, due to the patent system. Otherwise they stuck to a narrow argument: the current U.S. patent system is beset by trolls (Sydell was a reporter for When Patents Attack and II) and lawsuits and some would-be inventors do give up after realizing they are in a heavily patented field, ergo, the U.S. patent system stifles innovation.

The con side often seemed to make contradictory arguments that didn’t support their side. At one point the moderator interrupted to ask if they were really making a claim they seemed to be; nobody was fazed, though I could swear at various points the pro side was looking incredulously at the con side (the recording is at the wrong angle to really see). But their fundamental argument was that there’s lots of innovation happening, patents and IP generally are American as apple pie, and trolls, while bad, aren’t a big deal for companies like Apple with many billions of dollars, ergo, the U.S. patent system does not stifle innovation.

The audience voted for the con side.

In my previous post noting that this debate was coming up, I concluded with “I hope they also consider equality and freedom.” They did a bit with regard to innovators — “freedom to innovate” and how “small” and “large” innovators fare in the system. But I had in mind expanding the discourse to include the effects of innovation policy on the freedom and equality of all humans.

“Patent” and “stifle” were expertly and humorously defined by Goldman and Kelty, but “innovation” remained undefined. The closest the debate came to exploring the contours of what innovation means, or ought mean, may have been in points made about the triviality of some patents, and the contrast between “small” and “large” innovators. Is innovation ‘done in a fashion that has served to maximize the patent encumbrances’ so it can be controlled by Apple, Microsoft, IBM, Monsanto, et al, the innovation we want?

Both the pro and con sides seemed to dislike patent trolls (while disagreeing on their importance). I wonder if any of the participants (particularly the con side) will endorse, or better yet, sign up for the Defensive Patent License (my discussion)? Or any of the other reforms reviewed by Goldman in Fixing Software Patents?

The debate was part of ZERO1 Garage’s Patent Pending exhibition, open through December 20. Each of the exhibited works is somehow related to a patent held or filed for by the artist.

One patent related to a work is pending, thus the work required an NDA for viewing:

nda

The handful of people I showed this image to were each appalled. But, in the context of the show, I have to admit it is cute. And, perhaps unintended, a critique of patent theory — which claims that patents encourage revelation.

Each of the pieces is interesting to experience. I particularly enjoyed the sounds made and shadows cast by (con side debater) Snibbe’s fan work (controlled by blowing through a smaller fan):

fans

My only disappointment from the exhibition is that there wasn’t a touching sample of these bricks, apparently made in part from fungus:

fungus brick

Bonus link: Discussions On The Abolition Of Patents In The UK, France, Germany And The Netherlands, From 1869. As I’ve mentioned before, these debates are nothing new, though it’s popular even for “reformers” to claim that current innovation policy is somehow mismatched with the “digital age”. The only difference between old and current debates is that the public interest is far more buried in the current ones.

Defensive Patent License 1.0 birthday

Saturday, November 16th, 2013

Defensive Patent License version 1.0 turned 0 yesterday. The Internet Archive held a small celebration. The FAQ says the license may be used now:

Sign up and start using the DPL by emailing defensivepatent@gmail.com.

There will be a launch conference 2014-11-07 in Berkeley: gratis registration. By that time I gather there should be a list of launch DPL users, a website for registering and tracking DPL users, and a non-profit organization to steward the license, for which the Internet Archive will serve as a 501(c)3 fiscal sponsor.

Loosely organized thoughts follow. But in short:

  • DPL users grant a royalty-free license (except for the purpose of cloning products) for their entire patent portfolio, to all other DPL users. This grant is irrevocable, unless the licensee (another DPL user) withdraws from the DPL or initiates patent litigation against any DPL user — but note that the withdrawing or aggressing entity’s grant of patents to date to all other DPL users remains in force forever.
  • Participation is on an entity basis, i.e., a DPL user is an organization or individual. All patents held or gained while a DPL user are included. But the irrevocable license to other DPL users then travels with individual patents, even when transferred to a non-DPL user entity.
  • An entity doesn’t need any patents to become a DPL user.
  • DPL doesn’t replace or conflict with patent peace provisions in modern free/open source licenses (e.g., Apache2, GPLv3, MPL2); it’s a different, complementary approach.
  • It may take years for the pool of DPL users’ patents to be significant enough to gain strong network effects and become a no-brainer for businesses in some industries to join. It may never. But it seems possible, and well worth trying.
  • Immediately, the DPL seems like something that organizations which want to make a strong but narrow commitment to patent non-aggression (narrow in that it runs only to others making the same commitment) ought to get on board with. Entities that want to make a broader commitment, including those that have already made complementary commitments through free/open source licenses or non-aggression pledges for certain uses (e.g., implementing a standard), should also get on board.
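The grant mechanics summarized above can be sketched as a toy model in Python. Everything here is illustrative: entity and patent names are hypothetical, and the model captures only the pool semantics (irrevocable grants that survive withdrawal, no new grants to or from leavers), not the license’s legal detail.

```python
class DPLPool:
    """Toy model of DPL grant mechanics; an illustration, not legal advice."""

    def __init__(self):
        self.members = set()   # current DPL users
        self.owner = {}        # patent -> contributing entity
        self.licenses = {}     # patent -> entities holding an irrevocable grant

    def join(self, entity, patents=()):
        # A new user licenses its whole portfolio to all current users,
        # and receives grants to every pooled patent whose owner is
        # still a member (leavers stop granting to newcomers).
        self.members.add(entity)
        for p in patents:
            self.owner[p] = entity
            self.licenses[p] = set(self.members) - {entity}
        for p, holders in self.licenses.items():
            o = self.owner[p]
            if o in self.members and o != entity:
                holders.add(entity)

    def withdraw(self, entity):
        # Withdrawal (or aggression) stops future grants both ways, but
        # grants already made remain in force forever.
        self.members.discard(entity)

    def licensed(self, entity, patent):
        return entity in self.licenses.get(patent, set())
```

For example, if "a" contributes P1, then withdraws, existing users keep their license to P1, but entities joining afterwards do not receive one, and "a" receives no grants to newly contributed patents.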

History

Last year I’d read Protecting Open Innovation: The Defensive Patent License as a New Approach to Patent Threats, Transaction Costs, and Tactical Disarmament (by Jennifer Urban and Jason Schultz, also main authors of the DPL 1.0) with interest and skepticism, and sent some small comments to the authors. The DPL 1.0, available for use now, incorporates some changes suggested in A Response to a Proposal for a Defensive Patent License (DPL) (and probably elsewhere; quite a few people worked on the license). Both papers are pretty good reads for understanding the idea and some of the choices made in DPL 1.0.

Two new things I learned yesterday are that the DPL was Internet Archive founder Brewster Kahle’s idea, and that work on the license started in 2009. Kahle had been disturbed that patents with his name on them, which he had been told were obtained for defensive purposes while he was an engineer at Thinking Machines, were later used offensively by an entity that had acquired the patents. This made him wonder if there could be a way for an entity to commit to using patents only defensively. Kahle acknowledged that others have had similar ideas, but the DPL is now born, and it just may be the idea that works.

(No specific previous ideas were mentioned, but a recent one that comes to mind is Paul Graham’s 2011 suggestion of a pledge to not initiate patent litigation against organizations with fewer than 25 employees. Intentionally imprecise, not legally binding, and offering no benefit other than appearing on a web page, it is probably not surprising it didn’t take off. Another is Twitter’s Innovator’s Patent Agreement (2012), in which a company promises an employee to seek their permission for any non-defensive uses of patents in the employee’s name; uptake is unclear. Additional concepts are covered at End Soft Patents.)

Kahle, Urban, and Schultz acknowledged inspiration from the private ordering/carving out of free spaces (for what Urban and Schultz call “open innovation communities” to practice) through public licenses such as the GPL and various Creative Commons licenses. But the DPL is rather different in a few big ways (and details which fall out of these):

  1. Subject of grant: patent vs. copyright
  2. Scope of grant: all subject rights controlled by an entity vs individual things (patents or works subject to copyright)
  3. Offered to: club participants vs. general public

I guess there will be a tendency to assume the second and third follow strictly from the first. I’m not so sure — I can imagine free/open source software and/or free culture/open content/data worlds which took the entity and club paths (still occasionally suggested) — and I think the assumption would under-appreciate the creativity of the DPL.

DPL and free/open source software

The DPL is not a replacement for patent clauses in free/open source licenses, which are conditions of public copyright licenses with a different subject, scope, and audience (see previous). Additionally, the DPL’s non-grant for cloning products, the scope of which I do not understand, probably further reduces any overlap that may exist between modern FLOSS license patent provisions and the DPL. But I see no conflict, and some complementarity.

A curiosity would be DPL users releasing software under free software licenses without patent provisions, or even with explicit patent non-grants, like CC0. A complementary curiosity would be free/source projects which only accept contributions from DPL users. Yet another would be a new software license only granting copyright permissions to DPL users (this would almost certainly not be considered free/open source), or releasing DPL users from some license conditions (this could be done as an exception to an existing license).

The DPL isn’t going to directly solve any patent problems faced by free/open source software (e.g., encumbered ‘standards’) any time soon. But, to the extent the DPL decreases the private value (expected rents) of patents and encourages more entities to not see patents as useful for collecting rents, it ought push those problems away, just a bit. Even if software patents were to evaporate tomorrow (as they should!), users of free/open source software would encounter patents impacting all sorts of devices running said software; patents would still be a problem for software freedom.

I hope that many free/open source software entities become DPL users, for the possible slowly accruing benefits above, but also to make common cause with others fighting for (or reforming slightly towards) intellectual freedom. Participation in broader discourse by free/open source software entities is a must, for the health of free software, and the health of free societies.

End Soft Patents’ entry on the DPL will probably be a good place to check years hence on how the DPL is viewed from the perspective of free/open source software.

DPL “enforcement”

In one sense, the DPL requires no enforcement — it is a grant of permission, which one either takes or not by also becoming a DPL user. But, although it contains provisions to limit obvious gaming, if it becomes significant, doubtless some entities will try to push its boundaries, perhaps by obfuscating patent ownership, or interpreting “cloning” expansively. Or, the ability to leave with 180 days notice could prove to be a gaping hole, with entities taking advantage of the pool until they are ready to file a bunch of patents. Or, the lack of immediate termination of licenses from all DPL users and the costliness of litigation may mean the DPL pool does little to restrain DPL users from leaving, or worse, initiating litigation (or threatening to do so, or some other extortion) against other DPL users.

Perhaps the DPL Foundation with a public database of DPL users will play a strong coordinating function, facilitating uncovering obfuscated ownership, disseminating notice of bad behavior, and revocation of licenses to litigators and leavers.

DPL copyleft?

In any discussion of X remotely similar to free/open source software, the question of “what is copyleft for X?” comes up — and one of the birthday presenters mentioned that the name DPL is a hat tip to the GPL — is the DPL “copyleft for patents”?

It does have reciprocality — only DPL users get DPL grants from other DPL users. I will be surprised if at some point someone doesn’t pejoratively say the DPL is “viral” — because the license to DPL users stays with patents even if they are transferred to a non-DPL user entity. A hereditary effect more directly analogous to the GPL might involve a grant conditioned on a licensee’s other patents which read on the licensed patent being similarly licensed, but this seems ineffective at first blush (and has been thought of and discarded innumerable times).

The DPL doesn’t have a regulatory side. Forced revelation, directly analogous to the GPL’s primary regulatory side, would be the obvious thing to investigate for a DPL flavor, but the most naive requirement (entity must reveal all patentable inventions in order to remain a DPL user in good standing) would be nearly impossible to comply with, or enforce. It may be more feasible to require revelation of designs and documentation for products or services (presumably source code, for software) that read on any patents in the DPL pool. This would constitute a huge compliance and enforcement challenge, and probably very difficult to bootstrap a significant pool, but would be an extremely interesting regulatory experiment if it gained any traction.

DPL “Troll-proof”?

The slogan must be taken with a mountain of salt. Still, the DPL, if widely adopted, would mitigate the troll problem. Because grants to DPL users are irrevocable, and follow a patent upon changes of ownership, any patent with a grant to DPL users will be less valuable for a troll to acquire, because there are fewer entities for the troll to sue. To the extent DPL adoption reduces patenting in an industry, or overall, there will be less ammunition available for trolls to buy and use to hold anyone up. In the extreme of success, all practicing entities become DPL users. Over a couple decades, the swamp is drained.

Patents are still bad

The only worrisome thing I heard yesterday (and I may have missed some nuance) was the idea that it is unfortunate that many engineers, and participants in open innovation communities in particular, see patents as unethical, and that as free/open source software people learned to use public copyright licenses (software was not subject to copyright until 30-40 years ago), they and others should learn to use appropriate patent tools, i.e., the DPL.

First, the engagement of what has become free/open source software, open access, open data, etc., with copyright tools, has not gone swimmingly. Yes, much success is apparent, but compared to what? The costs beg to be analyzed: isolation, conservatism, internal fighting, gaming of tools used, disengagement from policy and boundary-pushing, reduction (and stunting) of ethics to license choice. My ideal, as hinted above, would be for engagement with the DPL to help open innovation communities escape this trap, rather than adding to its weight.

Second, in part because extreme “drain the swamp” level of success is almost certainly not going to be achieved, abolition (of software patents) is the only solution. And beyond software, the whole system should be axed. Of course this means not merely defending innovators, including open innovation communities, from some expense and litigation, but moving freedom and equality to the top of our innovation policy ordering.

DPL open infrastructure?

In part to make the DPL attractive to existing open innovation communities, I really hope the DPL Foundation will make everything it does free and open with traditional public copyright and publishing tools:

  • Open content: the website and all documentation ought be licensed under CC0 (though CC-BY or CC-BY-SA would be acceptable).
  • Open source/open service: source code of the eventual website, including applications for tracking DPL users, should be developed in a public repository, and licensed under either Apache2 or AGPLv3 (latter if the Foundation wishes to force those using the software elsewhere to reveal their modifications).
  • Open data: all data concerning DPL users, licensed patents, etc., should be machine-readable, downloadable in bulk, and released under CC0.

DPL readability

I found the DPL surprisingly brief and readable. My naive guess, given a description of how it works, would have been something far longer and more inscrutable. But the DPL actually compares to public licenses very favorably on automated readability metrics. Table below shows these for DPL 1.0 and some well known public copyright licenses (lower numbers indicate better readability, except in the case of Flesch; Chars/(Flesch>=1) is my gross metric for how painful it is to read a document; see license automated readability metrics for an explanation):

SHA1 License Characters Kincaid ARI Coleman-Liau Fog Lix SMOG Flesch Chars/(Flesch>=1)
8ffe2c5c25b85e52f42fcde68c2cf6a88b7abd69 Apache-2.0 8310 16.8 19.8 15.1 20.7 64.6 16.6 33.6 247
20dc61b94cfe1f4ba5814b340095b4c3fa23e801 CC-BY-3.0 14956 16.1 19.4 14.1 20.4 66.1 16.2 40.0 373
bbf850220781d9423be9e478fbc07098bfd2b5ad DPL-1.0 8256 15.1 18.9 15.7 18.4 65.9 15.0 40.6 203
0473f7b5cf37740d7170f29232a0bd088d0b16f0 GPL-2.0 13664 13.3 16.2 12.5 16.2 57.0 12.7 52.9 258
d4ec7d0b46077b89870c66cb829457041cd03e8d GPL-3.0 27588 13.7 16.0 13.3 16.8 57.5 13.8 47.2 584
78fe0ed5d283fd1df26be9b4afe8a82124624180 MPL-2.0 11766 14.7 16.9 14.5 17.9 60.5 14.9 40.1 293
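For the curious, the Chars/(Flesch>=1) column can be reproduced from standard formulas: Flesch reading ease is 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words), and my gross metric is simply character count divided by the Flesch score clamped to at least 1. Below is a minimal Python sketch, assuming a crude regex-based syllable counter; real readability tools use more careful heuristics, so exact scores will differ from the table:

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count vowel groups, subtracting one for a trailing silent 'e'.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text: str) -> float:
    # 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    nwords = max(len(words), 1)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (nwords / sentences) - 84.6 * (syllables / nwords)

def chars_per_flesch(text: str) -> float:
    # The "gross pain" metric from the table: total characters divided by the
    # Flesch score clamped to >= 1, so long *and* hard-to-read documents score worst.
    return len(text) / max(flesch_reading_ease(text), 1.0)
```

Lower Chars/(Flesch>=1) is better; a short, plainly written license wins on both the length and readability axes at once.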

Automated readability metrics are probably at best an indicator for license drafters, but offer no guidance on actually improving readability. Last month Luis Villa (incidentally, on the DPL’s advisory board) reviewed a manual of style for contract drafting by editing Twitter’s Innovator’s Patent Agreement per the manual’s advice. I enjoyed Villa’s post, but have not attempted to discern (and discernment may be beyond my capability) how closely DPL 1.0 follows the manual’s advice. By the way, Villa’s edit of the IPA per the manual did improve its automated readability metrics:

SHA1 License Characters Kincaid ARI Coleman-Liau Fog Lix SMOG Flesch Chars/(Flesch>=1)
8774cfcefbc3b008188efc141256b0a8dbe89296 IPA 4778 19.6 24.0 15.5 22.7 75.8 17.0 27.1 176
b7a39883743c7b1738aca355c217d1d14c511de6 IPA-MSCD 4665 17.4 21.2 15.6 20.4 70.2 16.0 32.8 142

Net

Go back to the top, read the DPL, and get your entity and others in the queue to be DPL users at its launch! Or, explain to me why this is a bad idea.

NFL IP II

Friday, November 8th, 2013

In an imperial capital city, expect to see the heads of conquered people on display.

For the second time in about a month, someone has suggested that US professional football businesses’ ability to censor be modulated if they continue to act against the public interest. First, copyright and civic extortion; now, trademark and display of the heads of conquered people.

The first is fantasy at this point. The second may well happen, and not soon enough.

That modulating pro sports businesses’ ability to censor is deemed a potentially powerful incentive to stop bad behavior highlights the role of copyright and trademark in fostering a culture of spectacle and inequality — without these rents, team owners’ wealth and power would decrease significantly.

If professional sport is one of the things that brings classes and cultures in a community together, let’s enhance that by allowing everyone to view, share, make, and vend bits and atoms featuring elements of this togetherness, in their own way, without legal threat from ultra rich business owners.

Yes, let’s bring the heads down; that’ll get us some distance into modernity. But the empire, and its killing and torture, goes on. End that.

Economics and the Commons Conference [knowledge stream] report

Wednesday, October 30th, 2013

Economics and the Common(s): From Seed Form to Core Paradigm. A report on an international conference on the future of the commons (pdf) by David Bollier. Section on the knowledge stream (which I coordinated; pre-conference post) copied below, followed by an addendum with thanks and vague promises. First, video of the stream keynote (slides) by Carolina Botero (introduced by me; archive.org copy).

III. “Treating Knowledge, Culture and Science as Commons”

Science, and recently, free software, are paradigmatic knowledge commons; copyright and patent paradigmatic enclosures. But our vision may be constrained by the power of paradigmatic examples. Re-conceptualization may help us understand what might be achieved by moving most provisioning of knowledge to the commons; help us critically evaluate our commoning; and help us understand that all commons are knowledge commons. Let us consider, what if:

  • Copyright and patent are not the first knowledge enclosures, but only “modern” enforcement of inequalities in what may be known and communicated?
  • Copyright and patent reform and licensing are merely small parts of a universe of knowledge commoning, including transparency, privacy, collaboration, all of science and culture and social knowledge?
  • Our strategy puts commons values first, and views narrow incentives with skepticism?
  • We articulate the value of knowledge commons – qualitative, quantitative, ethical, practical, other – such that knowledge commons can be embraced and challenged in mainstream discourse?

These were the general questions that the Knowledge, Culture and Science Stream addressed.

Knowledge Stream Keynote Summary

Carolina Botero Cabrera, a free culture activist, consultant and lawyer from Colombia, delivered a plenary keynote for the Knowledge Stream entitled, “What If Fear Changes Sides?” As an author and lecturer on free access, free culture and authors’ rights, Botero focused on the role of information and knowledge in creating unequal power relationships, and how knowledge and cultural commons can rectify such problems.

“If we assume that information is power and acknowledge the power of knowledge, we can start by saying that controlling information and knowledge means power. Why does this matter?” she asked. “Because the control of information and knowledge can change sides. The power relationship can be changed.”

One of the primary motives of contemporary enclosures of information and knowledge, said Botero, is to instill fear in people – fear of violating copyright law, fear of the penalties for doing so. This inhibits natural tendencies to share and re-use information. So the challenge facing us is to imagine if fear could change sides. Can we imagine a switch in power relationships over the control of knowledge – how we produce, distribute and use knowledge? Botero said we should focus on the question: “How can we switch the tendency of knowledge regulation away from enclosure, so that commons can become the rule and not the exception?”

“There are still many ways to produce things, to gain knowledge,” said Botero, who noted that those who use the word “commons” [in the context of knowledge production] are lucky because it helps name these non-market forms of sharing knowledge. “In Colombia, we don’t even have that word,” she said.

To illustrate how customary knowledge has been enclosed in Colombia, Botero told the story of parteras, midwives, who have been shunted aside by doctors, mostly men, who then asserted control over women’s bodies and childbirth, and marginalized the parteras and their rich knowledge of childbirth. This knowledge is especially important to those communities in remote areas of Colombia that do not have access to doctors. There is currently a huge movement of parteras in Colombia who are fighting for the recognition of their knowledge and for the legal right to act as midwives.

Botero also told about how copyright laws have made it illegal to reproduce sheet music for songs written in 18th and 19th century Colombia. In those times, people simply shared the music among each other; there was no market for it. But with the rise of the music industry in the 20th century, especially in the North, it is either impossible or unaffordable to get this sheet music because most of it is copyrighted. So most written music in Colombia consists of illegally photocopied versions. Market logic has criminalized the music that was once natural and freely flowing in Colombian culture. Botero noted that this has increased inequality and diminished public culture.

She showed a global map illustrating which nations received royalties and fees from copyrights and patents in 2002; the United States received more than half of all global revenues, while Latin America, Africa, India and other countries of the South received virtually nothing. These are the “power relationships” that Botero was pointing to.

Botero warned, “We have trouble imagining how to provision and govern resources, even knowledge, without exclusivity and control.” Part of the problem is the difficulty of measuring commons values. Economists are not interested, she said, which makes it difficult to go to politicians and persuade them why libraries matter.

Another barrier is our reliance on individual incentives as a core value in the system for regulating knowledge, Botero said. “Legal systems of ‘intellectual property’ place individual financial incentives at the center for knowledge regulation, which marginalizes commons values.” Our challenge is to find ways to switch from market logics by showing that there are other logics.

One reason that it is difficult to displace market logics is because we are reluctant or unable to “introduce the commons discourse from the front door instead of through the back door,” said Botero. She confessed that she herself has this problem because most public debate on this topic “is based on the premise that knowledge requires enclosure.” It is difficult to displace this premise by talking about the commons. But it is becoming increasingly necessary to do so as new policy regimes, such as the Transpacific Trade (TPP) Agreement, seek to intensify enclosures. The TPP, for example, seeks to raise minimum levels of copyright restriction, extend the terms of copyrights, and increase the prison terms for copyright violations.

One way to reframe debate, suggested Botero, is to see the commons “not as the absence of exclusivity, but the presence of non-exclusivity. This is a slight but important difference,” she said, “that helps us see the plenitude of non-exclusivity” – an idea developed by Séverine Dussolier, professor and director of the Revue Droit des Technologies de l’Information (RDTI, France). This shift “helps us to shift the discussion from the problems with the individual property and market-driven perspective, to a framework and society that – as a norm – wants its institutions to be generative of sharing, cooperation and equality.”

Ultimately, what is needed are more “efficient and effective ways to protect the ethic and practice of sharing,” or as she put it, “better commoning.” Reforming “intellectual property” is only one small part of the universe of knowledge commoning, Botero stressed. It also includes movements for “transparency, privacy, collaboration, and potentially all of science and culture.”

“When and how did we accept that the autonomy of all is subservient to control of knowledge by the few?” asked Botero. “Most important, can we stop this? Can we change it? Is the current tragedy our lack of knowledge of the commons?” Rediscovering the commons is an important challenge to be faced “if fear is going to change sides.”

An Account of the Knowledge, Culture and Science Stream’s Deliberations

There were no presentations in the Knowledge Stream breakout sessions, but rather a series of brief provocations. These were intended to spur a lively discussion and to go beyond the usual debates heard at free and open software/free culture/open science conferences. A primary goal of the breakout discussions was to consider what it means to regard knowledge as a commons, rather than as a “carve-out” exception from a private property regime. The group was also asked to consider how shared knowledge is crucial to all commoning activity. Notes from the Knowledge Stream breakout sessions were compiled through a participatory titanpad, from which this account is adapted.

The Knowledge Stream focused on two overarching themes, each taking advantage of the unique context of the conference:

  1. Why should commoners of all fields care about knowledge commons?
  2. If we consider knowledge first as commons, can we be more visionary, more inclusive, more effective in commoning software, science, culture, seeds … and much more?

The idea of the breakout session was to contextualize knowledge as a commons, first and foremost: knowledge as a subset of the larger paradigm of commons and commoning, as something far more than domain-specific categories such as software, scientific publication and educational materials.

An overarching premise of the Knowledge Stream was the point made by Silke Helfrich in her keynote, that all commons are knowledge commons and all commons are material commons. Saving seeds in the Svalbard Global Seed Vault is of no use if we forget how to cultivate them, for example, and various digital commons are ultimately grounded in the material reality of computers, electricity infrastructures and the food that computer users need to eat.

There is a “knowledge commons” at the center of each commons. This means that interest in a “knowledge commons” isn’t confined to those people who only care about software, scientific publication, and so on. It also means that we should refrain from classifying commons into categories such as “natural resources” and “digital,” and begin to make the process of commoning itself the focal point.

Of course, one must immediately acknowledge that digital resources do differ in fundamental ways from finite natural resources, and therefore the commons management strategies will differ. Knowledge commons can make cheap or virtually free copies of intangible information and creative works, and this knowledge production is often distributed at very small scales. For cultural commons, noted Philippe Aigrain, a French analyst of knowledge governance and CEO of Sopinspace, a maker of free software for collaboration and participatory democracy, “the key challenge is that average attention becomes scarcer in a world of abundant production.” This means that more attention must be paid to “mediating functions” – curating – and “revising our cultural expectations about ‘audiences’.”

It is helpful to see the historical roots of Internet-enabled knowledge commons, said Hilary Wainwright, the editor behind the UK political magazine Red Pepper and a researcher at the Transnational Institute. The Internet escalated the practice of sharing knowledge that began with the feminist movement’s recognition of a “plurality of sources.” It also facilitated the socialization of knowledge as a kind of collective action.

That these roots are not widely appreciated points to the limited vision of many knowledge commons, which tend to rely on a “deeply individualistic ethical ontology,” said Talha Syed, a professor of law at the University of California, Berkeley. This worldview usually leads commoners to focus on coercion – enclosures of knowledge commons – as the problem, he said. But “markets are problematic even if there is no monopoly,” he noted, because “we need to express both threats and positive aspirations in a substantive way. Freedom is more than people not coercing us.”

Shun-Ling Chen, a Taiwanese professor of law at the University of Arizona, noted that even free, mass-collaboration projects such as Wikipedia tend to fall back on western, individualistic conceptions of authorship and authority. This obscures the significance of traditional knowledge and history from the perspective of indigenous peoples, where less knowledge is recorded by “reliable sources.”

As the Stream recorded in its notes, knowledge commons are not just about individual freedoms, but about “marginalized people and social justice.” “The case for knowledge commons as necessary for social justice is an undeveloped theme,” the group concluded. But commons of traditional knowledge may require different sorts of legal strategies than those that are used to protect the collective knowledge embodied in free software or open access journals. The latter are both based on copyright law and its premises of individual rights, whereas traditional knowledge is not recognized as the sum of individual creations, but as a collective inheritance and resource.

This discussion raised the question whether provisioning knowledge through commons can produce different sorts of “products” than those produced by corporate enclosures, or whether they will simply create similar products with less inequality. Big budget movies and pharmaceuticals are often posited as impossibilities for commons provision (wrongly, by the way). But should these industries be seen as the ‘commanding heights’ of culture and medicine, or would a commons-based society create different commanding heights?

One hint at an answer comes from seeing informality as a kind of knowledge commons. “Constructed commons” that rely upon copyright licenses (the GPL for software, Creative Commons licenses for other content) and upon policy reforms, are generally seen as the most significant, reputable knowledge commons. But just as many medieval commons relied upon informal community cooperation such as “beating the bounds” to defend themselves, so many contemporary knowledge commons are powerful because they are based on informal social practice and even illegality.

Alan Toner of Ireland noted that commoners who resist enclosures often “start from a position of illegality” (a point made by Ugo Mattei in his keynote talk). It may be better to frankly acknowledge this reality, he said. After all, remix culture would be impossible without civil disobedience to various copyright laws that prohibit copying, sharing and re-use – even if free culture people sometimes have a problem with such disrespectful or illegal resistance. “Piracy” is often a precursor to new social standards and even new legal rules. “What is legal is contingent,” said Toner, because practices we spread now set traditions and norms for the future. We therefore must be conscious about the traditions we are creating. “The law is gray, so we must push new practices and organizations need to take greater risks,” eschewing the impulse to be “respectable” in order to become a “guiding star.”

Felix Stalder, a professor of digital culture at Zurich University of the Arts, agreed that civil disobedience and piracy are often precisely what is needed to create a “new normal,” which is what existing law is explicitly designed to prevent. “Piracy is building a de facto commons,” he added, “even if it is unaware of this fact. It is a laboratory of the new that can enrich our understanding of the commons.”

One way to secure the commons for the future, said Philippe Aigrain of Sopinspace, is to look at the specific challenges facing the commons rather than idealizing them or over-relying on existing precedents. As the Stream discussion notes concluded, “Given a new knowledge commons problem X, someone will state that we need a ‘copyleft for X.’ But is copyleft really effective at promoting and protecting the commons of software? What if we were to re-conceptualize copyleft as a prototype for effective, pro-commons regulation, rather than a hack on enclosure?”

Mike Linksvayer, the former chief technology officer of Creative Commons and the coordinator of the Knowledge Stream, noted that copyleft should be considered as “one way to force sharing of information, i.e., of ensuring that knowledge is in the commons. But there may be more effective and more appropriate regulatory mechanisms that could be used and demanded to protect the commons.”

One provocative speculation was that there is a greater threat to the commons than enclosure – and that is obscurity. Perhaps new forms of promotion are needed to protect the commons from irrelevance. It may also be that excluding knowledge that doesn’t really contribute to a commons is a good way to protect a commons. For example, projects like Wikipedia and Debian mandate that only free knowledge and software be used within their spaces.


Addendum

Thanks to everyone who participated in the knowledge stream. All who prepared and delivered deep and critical provocations in the very brief time allotted:
Bodó Balázs
Shun-Ling Chen
Rick Falkvinge
Marco Fioretti
Charlotte Hess
Gaëlle Krikorian
Glyn Moody
Mayo Fuster Morrell
Prabir Purkayastha
Felix Stalder
Talha Syed
Wouter Tebbens
Alan Toner
Chris Watkins

Also thanks to Mayo Fuster Morrell and Petros for helping coordinate during the stream, and though neither could attend, Tal Niv and Leonhard Dobusch for helpful conversations about the stream and its goals. I enjoyed working with and learned much from the other stream coordinators: Saki Bailey (nature), Heike Löschmann (labor & care), Ludwig Schuster (money), and especially Miguel Said Vieira (infrastructure; early collaboration kept both infrastructure and knowledge streams relatively focused); and stream keynote speaker Carolina Botero; and conference organizers/Commons Strategy Group members: David Bollier, Michel Bauwens, and Silke Helfrich (watch their post-conference interview).

See the conference wiki for much more documentation on each of the streams, the overall conference, and related resources.

If a much more academic and apolitical approach is of interest, note the International Association for the Study of the Commons held its 2013 conference about 10 days after ECC. I believe there was not much overlap among attendees, one exception being Charlotte Hess (who also chaired a session on Governance of the Knowledge and Information Commons at the IASC conference).

ECC only strengthened my feeling (but, of course I designed the knowledge stream to confirm my biases…) that what is needed is a much more bold, deep, inclusive (domains and methods of commoning, including informality, and populations), critical (including self-critical; a theme broached by several of the people thanked above), and competitive (product: displacing enclosure; policy: putting equality & freedom first) knowledge commons movement, or vanguard of those movements. Or as Carolina Botero put it in the stream keynote: bring the commons in through the front door. I promise to contribute to this project.

ECC also made me reflect much more on commons and commoning as a “core paradigm” for understanding and participating in the arrangements studied by social scientists. My thoughts are half baked at best, but that will not stop me from making pronouncements, time willing.

What’s *really* wrong with the free and open internet — and how we could win it

Thursday, October 24th, 2013

A few days ago Sue Gardner, ED of the Wikimedia Foundation, posted What’s *really* wrong with nonprofits — and how we can fix it. Judging by seeing the link sent around, it has been read to confirm the various conflicting biases that different people in the SF bay area/internet/nonprofit space and adjacent already had. May I? Excerpt-based summary:

A major structural flaw of many nonprofits is that their revenue is decoupled from mission work, which pushes them to focus on providing a positive donor experience often at the expense of doing their core work.

WMF makes about 95% of its money from the many-small-donors model
…
I spend practically zero time fundraising. We at the WMF get to focus on our core work of supporting and developing Wikipedia, and when donors talk with us we want to hear what they say, because they are Wikipedia readers
…
I think the usefulness of the many-small-donors model, ultimately, will extend far beyond the small number of nonprofits currently funded by it.
…
[Because Internet.]
…
For organizations that can cover their costs with the many-small-donors model I believe there’s the potential to heal the disconnect between fundraising and core mission work, in a way that supports nonprofits being, overall, much more effective.

I agree concerning extended potential. I thought (here comes confirmation of biases) that Creative Commons should make growing its small donor base its number one fundraising effort, with the goal of having small donors provide the majority of funding as soon as possible — realistically, after several years of hard work on that model. While nowhere close to that goal, I recall that about 2006-2009 individual giving grew rapidly, in numbers and diversity (started out almost exclusively US-based), even though it was never the number one fundraising priority. I don’t think many, perhaps zero, people other than me believed individual giving could become CC’s main source of support. Wikimedia’s success in that, already very evident, and its unique circumstance, was almost taken as proof that CC couldn’t. I thought instead Wikimedia’s methods should be taken as inspiration. The “model” had already been proven by nearby organizations without Wikimedia’s eyeballs; e.g., the Free Software Foundation.

An organization that wants to rely on small donors will have to work insanely hard at it. And, if it had been lucky enough to be in a network affording it access to large foundation grants, it needs to be prepared to shrink if the foundations tire of the organization before individual giving supplants them, and it may never fully do so. (But foundations might tire of the organization anyway, resulting in collapse without individual donors.) This should not be feared. If an organization has a clear vision and operating mission, increased focus on core work by a leaner team, less distracted by fundraising, ought be more effective than a larger, distracted team.

But most organizations don’t have a clear vision and operating mission (I don’t mean words found in vision and mission statements; rather the shared and deep knowing-what-we’re-trying-to-do-and-how that allows all to work effectively, from governance to program delivery). This makes any coherent strategic change more difficult, including transitioning to small donor support. It also gives me pause concerning some of the bits of Gardner’s post that I didn’t excerpt above. For most organizations I’d bet that real implementation of nonprofit “best practices” regarding compliance, governance, management, reporting, etc, though boring and conservative, would be a big step up. Even trying to increase the much-maligned program/(admin+fundraising) ratio is probably still a good general rule. I’d like to hear better ones. Perhaps near realtime reporting of much more data than can be gleaned from the likes of a Form 990 will help “big data scientists” find better rules.

It also has to be said that online small donor fundraising can be just as distracting and warping (causing organizations to focus on appearing appealing to donors) as other models. We (collectively) have a lot of work to do on practices, institutions, and intermediaries that will make the extended potential of small donor support possible (read Gardner’s post for the part I lazily summarized as [Because Internet.]) in order for the outcome to be good. What passes as savvy advice on such fundraising (usually centered around “social media”) has for years been appalling and unrealistic. And crowdfunding has thus far been disappointing in some ways as a method of coordinating public benefit.

About 7 months ago Gardner announced she would be stepping down as ED after finding a replacement (still in progress), because:

I’ve always aimed to make the biggest contribution I can to the general public good. Today, this is pulling me towards a new and different role, one very much aligned with Wikimedia values and informed by my experiences here, and with the purpose of amplifying the voices of people advocating for the free and open internet. I don’t know exactly what this will look like — I might write a book, or start a non-profit, or work in partnership with something that already exists.

My immediate reaction to this was exactly what Виктория wrote in reply to the announcement:

I cannot help but wonder what other position can be better for fighting consumerisation, walling-in and freedom curtailment of the Internet than the position of executive director of the Wikimedia Foundation.

I could take this as confirming another of my beliefs: that the Wikimedia movement (and other constructive free/open movements and organizations) do not realize their potential political potency — for changing the policy narrative and environment, not only taking rear guard actions against the likes of SOPA. Of course then, the Wikimedia ED wouldn’t think Wikimedia the most effective place from which to work for a free and open internet. But, my beliefs are not widely held, and likely incorrect. So I was and am mostly intrigued, and eager to see what Gardner does next.

After reading the What’s *really* wrong with nonprofits post above, I noticed that 4 months ago Gardner had posted The war for the free and open internet — and how we are losing it, which I eagerly read:

[non-profit] Wikipedia is pretty much alone. It’s NOT the general rule: it’s the exception that proves the rule.
…
The internet is evolving into a private-sector space that is primarily accountable to corporate shareholders rather than citizens. It’s constantly trying to sell you stuff. It does whatever it wants with your personal information. And as it begins to be regulated or to regulate itself, it often happens in a clumsy and harmful way, hurting the internet’s ability to function for the benefit of the public. That for example was the story of SOPA.
…
[Stories of how Wikipedia can fight censorship because it is both non-profit and very popular]
…
Aside from Wikipedia, there is no large, popular space being carved out for the public good. There are a billion tiny experiments, some of them great. But we should be honest: we are not gaining ground.
…
The internet needs serious help if it is to remain free and open, a powerful contributor to the public good.

Final exercise in confirming my biases (this post): yes, what the internet needs is more spaces carved out for the public good — more Wikipedias — categories other than encyclopedia in which a commons-based product out-competes proprietary incumbents, increasing equality and freedom powerfully in both the short and long (capitalization aligned with rent seeking demolished) term. Wikipedia is unique in being wildly successful and first and foremost a website, but not alone (free software collectively must be many times more liberating by any metric, some of it very high profile, eg Firefox; Open Access is making tremendous progress, and I believe PLOS may have one of the strongest claims to operating not just to make something free, but to compete directly with and eventually displace incumbents).

A free and open internet, and society, needs intense competition from commons-based initiatives in many more categories, including those considered the commanding heights of culture and commerce, eg premium video, advertising, social networking, and many others. Competition does not mean just building stuff, but making it culturally relevant, meaning making it massively popular (which Wikipedia lucked into, being the world’s greatest keyword search goldmine). Nor does it necessarily mean recapitulating proprietary products exactly, eg some product expectations might be moved to ones more favorable to mass collaboration.

Perhaps Gardner’s next venture will aim to carve out a new, popular space for the public good on the internet. Perhaps it will be to incubate other projects with exactly that aim (there are many experiments, as her post notes, but not many with “take over/liberate the world” vision or resources; meanwhile there is a massive ecosystem churning out and funding new proprietary products attempting to take over the world). Perhaps it will be to build something which helps non-profits leverage the extended potential of the small donor model, in a way that maximizes public good. Most likely, something not designed to confirm my biases. ☺ But, many others should do just that!

z3R01P

Monday, October 14th, 2013

Video from my conversation with Stephanie Syjuco on “intellectual property & the future of culture” at ZERO1 Garage 11 months ago is available at YouTube and archive.org (direct link to theora encoding).

As expected (see my pre-event post) the setting was great: nice space, thoughtful, well-executed and highly appropriate installation. I enjoyed the conversation; perhaps you will too.

With more time it would’ve been great to talk about some of Syjuco’s other works, many of which deal more or less directly with copying (see also interviews with Syjuco). I don’t think either of us even used the word appropriation. Nor the term “open source”, despite its being in the installation title — for example, why is the intersection of formal Open Source (and similar legally/voluntarily constructed commons) and art, appropriation or otherwise, vanishingly small?

ZERO1 Garage presently holds another “IP” related exhibition, Patent Pending, featuring “artworks by contemporary artists that have either resulted from, or led to, a patent that the artist has either received a patent for or is patent pending.” Sounds problematic! If you’re anywhere near San Jose, I recommend checking out the exhibition and one of its upcoming events — October 17 Post-Natural Properties: The Art of Patented Life Forms and November 1 Does the U.S. Patent System stifle innovation? As I say in the video above, and elsewhere, I hope they also consider equality and freedom.

Why does the U.S. federal government permit negative sum competition among U.S. states and localities?

Monday, October 14th, 2013

I dimly recall learning that the point of the second paragraph of Article 1, Section 10 of the U.S. Constitution was to avoid ruinous trade competition among the states:

No State shall, without the Consent of the Congress, lay any Imposts or Duties on Imports or Exports, except what may be absolutely necessary for executing it’s inspection Laws: and the net Produce of all Duties and Imposts, laid by any State on Imports or Exports, shall be for the Use of the Treasury of the United States; and all such Laws shall be subject to the Revision and Controul of the Congress.

Any remotely modern conception of trade competition includes non-tariff barriers.* To what extent have U.S. states and localities been prohibited from implementing such barriers, and why hasn’t civic extortion — large businesses negotiating with several jurisdictions for ever larger public subsidy — been outlawed?

Of course I’m thinking of the professional sports racket. Another example in today’s media: $285m public subsidy for Detroit pro sports teams, while the city is bankrupt. But there’s also a probably much larger practice of states and localities being goaded into offering huge subsidies to businesses in order to move their headquarters or other facilities. Sometimes only a matter of blocks, as in the case of Kansas-Missouri competition in the Kansas City metro area. What could be more clearly negative sum?

*Internationally, non-tariff barrier removal by treaty and other negotiation is often cover for spreading other anti-competitive and inequality promoting practices. I’m not a fan, especially considering that non-treaty autonomous liberalization has for decades been the main source of trade barrier reduction. I’m amused that contributors to the English Wikipedia article on non-tariff barriers to trade have listed “Intellectual property laws (patents, copyrights)” as examples of such barriers. This should be taken literally.

Wikipedia’s economic values

Tuesday, October 8th, 2013

Jonathan Band and Jonathan Gerafi have written a survey of papers estimating Wikipedia’s Economic Value (pdf), where Wikipedia is all Wikipedia language editions, about 22 million articles total. I extracted the ranges of estimates of various types in a summary.

Valuation if Wikipedia were for-profit:

  • $10b-$30b based on valuation of sites with similar visitor and in-link popularity
  • $21.1b-$340b based on revenue if visitors had to pay, akin to Britannica
  • $8.8b-$86b based on potential revenue if Wikipedia ran ads

One-time replacement cost:

  • $6.6b-$10.25b based on freelance writer rates

Ongoing maintenance cost:

  • $630m/year based on hiring writers

Annual consumer surplus:

  • $16.9b-$80b based on potential revenue if visitors had to pay
  • $54b-$720b based on library estimates of value of answering reference inquiries

Conclusion: “Wikipedia demonstrates that highly valuable content can be created by non-professionals not incentivized by the copyright system.”

Though obvious and underwhelming, it’s great to see that conclusion stated. Wikipedia and similar are not merely treasures threatened by even more bad policy, but at the very least evidence for other policy, and shapers of the policy conversation and environment.


Much about the ranges above, the estimates they include, and their pertinence to the “economic value of Wikipedia”, is highly speculative. Even more speculative, difficult, and interesting would be estimates of the value due to Wikipedia being a commons. The winning online encyclopedia probably would’ve been a very popular site, even if it had been proprietary, rather than Wikipedia or other somewhat open contenders. Consider that Encarta, not Wikipedia, mostly killed Britannica, and that people are very willing to contribute freely to proprietary products.

A broader (than just Wikipedia) take on this harder question was at the core of a research program on the welfare impact of Creative Commons that was in very early stages, and sadly ended coincident with lots of people leaving (including me).

How do we characterize the value (take your pick of definition of value) of knowledge systems that promote freedom and equality relative to those that promote enclosure? I hope many pick up that challenge, and activists use the results offensively (pdf, slideshare).