Post Patents

Pat Choate and Intellectual Protectionism

Saturday, April 13th, 2013

From at least the mid-1980s through the mid-1990s, Pat Choate seemed to me the go-to source for anti-foreign (where “foreign” means “not USian”) punditry. His basic view seemed to be that foreign businesses, governments, and people were bad and sought to undermine everything USian. Hence he was opposed to trade and immigration, and sought a variety of nationalist and nativist policies to fight this conspiracy. I hated everything he wrote. “Protectionist” was a charitable description of him.

He ran for VP with Ross Perot in 1996. I ceased to notice him from about that time, probably largely because I started to cut back on following the spectacle of current events around then.

Today I learned via two posts at Techdirt that Choate had by 2005 (date of a book he wrote titled Hot Property, with hilarious burning compact disc book cover art) added intellectual protectionism to his repertoire:

We recently posted about an absolutely ridiculous NY Times op-ed piece in which Pat Choate argued both that patent laws have been getting weaker, and that if we had today’s patent laws in the 1970s that Apple and Microsoft wouldn’t have survived since bigger companies would just copy what they were doing and put them out of business. We noted that this was completely laughable to anyone who knew the actual history. A day or so ago, someone (and forgive me, because I can no longer find the tweet) pointed me on Twitter to a 45 minute excerpt from a documentary about the early days of Microsoft and Apple and it’s worth watching just to show how laughably wrong Choate obviously is.

I’m sorry to report that I get some dim satisfaction from learning that Choate’s trajectory led him to intellectual protectionism and feel some additional validation for using that term to describe copyright, patent, trademark, and nearby.

I also noticed today, in searching for “intellectual protectionism”, that Rick Falkvinge is thinking about using the term. I endorse that, though more recently my preferred expansion of “IP” is Inequality Promotion — “intellectual” and “protect” each sound nice, and there’s precious little about equality in “IP” discourse. But there is a bit about inequality in the first use I can find of “intellectual protectionism” more or less in contrast to “intellectual property”: a 1999 OECD publication, The Future of the Global Economy: Towards a Long Boom?, in a description of a “high friction world” scenario:

This is a winner-take-all economy where a small knowledge elite captures most of the economic value. The economic structure rewards a few and leaves the great majority behind. The resulting social friction of a two-tier society consisting of “knows” and “know-nots” consumes much of the economy’s potential in a vicious cycle.

The fruits of innovation drive economic growth in some parts of the world, creating local islands of prosperity. Highly educated knowledge workers do very well, but a modest education produces little economic benefit. Low wages characterise most service and manufacturing work. Overall, organisations evolve very slowly and remain mainly traditional in form. The “fast” gradually pull away from the “slow”. Highly divergent outcomes result as a few countries do well behind high-security shields and others fall behind. Intellectual protectionism is rife and the free flow of ideas is highly constrained by those who want to protect the value of their intellectual property and those who want to prevent the informational “pollution” of their populations.


Saturday, November 10th, 2012

Last week I attended CODATA 2012 in Taipei, the biennial conference of the Committee on Data for Science and Technology. I struggled a bit with deciding to go: I am not a “data scientist” nor a scientist; while I know a fair amount about some of the technical and policy issues of data management, their specific application to science has never been my expertise; all of this is away from my current focus; and I’m skeptical of travel.

I finally went in order to see through a session on mass collaboration data projects and policies that I developed with Tyng-Ruey Chuang and Shun-Ling Chen. A mere rationalization as they didn’t really need my presence, but I enjoyed the conference and trip anyway.

My favorite moments from the panel:

  • Mikel Maron said approximately “not only don’t silo your data, don’t silo your code” (see a corresponding bullet in his slides), a point woefully and consistently underestimated and ignored by “open” advocates.
  • Chen’s eloquent polemic closing with approximately “mass collaboration challenges not only Ⓒ but distribution of power, authority, credibility”; I hope she publishes her talk content!

My slides from the panel (odp, pdf, slideshare) and from an open data workshop following the conference (odp, pdf, slideshare).

Tracey Lauriault summarized the mass collaboration panel (all of it, check out the parts I do not mention), including:

Mike Linksvayer, was provocative in stating that copyright makes us stupider and is stupid and that it should be abolished all together. I argued that for traditional knowledge where people are seriously marginalized and where TK is exploited, copyright might be the only way to protect themselves.

I’m pretty sure I only claimed that including copyright in one’s thinking about any topic, e.g., data policy, effectively makes one’s thinking about that topic more muddled and indeed stupid. I’ve posted about this before, but consider forthcoming a post enumerating the ways copyright makes people stupid, individually and collectively.

I didn’t say anything about abolishing copyright, but I’m happy for that conclusion to be drawn — I’d be even happier for the conclusion to be drawn that abolition is a moderate reform and boring (in no-brainer and non-interesting senses) among the possibilities for information and innovation policies — indeed, copyright has made society stupid about these broader issues. I sort of make these points in my future of copyright piece that Lauriault linked to, but will eventually address them directly.

Also, Traditional Knowledge: I’ve never posted about it, unless you count my claim that malgovernance of the information commons is ancient, for example cult secrets (mentioned in the first paragraph of the previous link), though I didn’t have contemporary indigenous peoples in mind, and TK covers a wide range of issues. Indeed, my instinct is to divide those issues between ones where traditional communities are being excluded from their heritage (e.g., plant patents, institutionally-held data and items, perhaps copyrestricted cultural works building on traditional culture) and ones where they would like a collective right to exclude information from the global public domain.

The theme of CODATA 2012 was “Open Data and Information for a Changing Planet” and the closing plenary appropriately aimed to place the entire conference in that context, and to question its impact and followup. That included the inevitable question of whether anyone would notice. At the beginning of the conference, attendees were excitedly encouraged to tweet, and if I understood correctly, some conference staff were dedicated to helping people tweet. As usual, I find this sort of exhortation and dedication of resources to social media scary. But what about journalists? How can we make the media care?

Fortunately for (future) CODATA and other science and data related events, there’s a great answer (usually there isn’t one), but one I didn’t hear mentioned at all outside of my own presentation: invite data journalists. They could learn a lot from other attendees, have a meta story about exactly the topic they’re passionate about, and an inside track on general interest data-driven stories developing from data-driven science in a variety of fields — for example the conference featured a number of sessions on disaster data. Usual CODATA science and policy attendees would probably also learn a lot about how to make their work interesting for data journalists, and thus be able to celebrate rather than whinge when talking about media. A start on that learning, and maybe ideas for people to invite might come from The Data Journalism Handbook (disclaimer: I contributed what I hope is the least relevant chapter in the whole book).

Someone asked how to move forward and David Carlson gave some conceptually simple and very good advice, paraphrased:

  • Adopt an open access data publishing policy at the inception of a project.
  • Invest in data staff — human resources are the limiting factor.
  • Start publishing and doing small experiments with data very early in a project’s life.

Someone also asked about “citizen science”, to which Carlson also had a good answer (added to by Jane Hunter and perhaps others), in sum roughly:

  • Community monitoring (data collection) may be a more accurate name for part of what people call citizen science;
  • but the community should be involved in many more aspects of some projects, up to governance;
  • don’t assume “citizen scientists” are non-scientists: often they’ll have scientific training, sometimes full-time scientists contributing to projects outside of work.

Bringing this full circle (and very much aligned with the conference’s theme and Carlson’s first recommendation above) would have required consideration of the scientist-as-citizen. Fortunately I had serendipitously titled my “open data workshop” presentation for the next day “Open data policy for scientists as citizens and for citizen science”.

Finally, “data citation” was another major topic of the conference, but semantic web/linked open data was not explicitly mentioned much, as someone observed in the plenary. I tend to agree, though I may have missed the most relevant sessions; they would have been my focus if I were actually working in the field. I did really enjoy happening to sit next to Curt Tilmes at a dinner, and catching up a bit on W3C Provenance (which I’ve mentioned briefly before), of which he is a working group member.

I got to spend a little time outside the conference. I’d been to Taipei once before, but failed to notice its beautiful setting — surrounded and interspersed with steep and very green hills.

I visited National Palace Museum with Puneet Kishor. I know next to nothing about feng shui, but I was struck by what seemed to be an ultra-favorable setting taking advantage of some of the aforementioned hills (it made me think of feng shui, which I never have before in my life without someone else bringing it up). I think the more one knows about Chinese history, the more one would get out of the museum, but for someone who loves maps, the map room alone is worth the visit.

It was also fun hanging out a bit with Christopher Adams and Sophie Chiang, catching up with Bob Chao and seeing the booming Mozilla Taiwan offices, and meeting Florence Ko, Lucien Lin, and Rock of Open Source Software Foundry and Emily from Creative Commons Taiwan.

Finally, thanks to Tyng-Ruey Chuang, one of the main CODATA 2012 local organizers, and instigator of our session and workshop. He is one of the people I most enjoyed working with while at Creative Commons (e.g., a panel from last year) and given some overlapping technology and policy interests, one of the people I could most see working with again.


Tuesday, September 11th, 2012

Opus is now an open source, royalty-free IETF standard. See Mozilla and Xiph announcements and congratulations to all involved.

This is a pretty big deal. It seems that Opus is superior to all existing audio codecs in quality and latency for any given bitrate. I will guess that for some large number of years it will be the no-brainer audio codec to use in any embedded application.

Will it replace the ancient (almost ancient enough for relevant patents to expire) but ubiquitous MP3 for non-embedded uses (i.e., where users can interact with files via multiple applications, such as on-disk music libraries)? If I were betting I’d have to bet no, but surely long-term it has a better chance than any free audio codec since Vorbis in the late 1990s. Vorbis never gained wide use outside some classes of embedded applications and free software advocates, but it surely played a big role in suppressing licensing demands from MP3 patent holders. Opus puts a stake through the heart of future audio codec licensing demands, unless some other monopoly can be leveraged (by Apple) to make another codec competitive.

Also, Opus is a great brand. Which doesn’t include an exclamation point. The title of this post merely expresses excitement.

I published an Opus-encoded file on July 30. Firefox ≥15 supports Opus, which at the time meant the beta channel, and now means the general release.

To publish your own Opus encoded audio files, use opus-tools for encoding, and add a line like the below to your web server’s .htaccess file (or equivalent configuration):

AddType audio/ogg .opus
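Pulling these pieces together, a minimal publishing sketch might look like the following (a sketch only: the filenames and bitrate are illustrative choices, and opusenc is the encoder shipped with opus-tools):

```
# Encode a WAV or FLAC source to Opus; 64 kbit/s is an illustrative bitrate
opusenc --bitrate 64 song.wav song.opus

# Serve with the right MIME type (Apache .htaccess, as above):
#   AddType audio/ogg .opus

# Embed for Opus-capable browsers (Firefox 15 and later):
#   <audio controls src="song.opus"></audio>
```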

Hopefully the obvious large community sites (Wikimedia Commons and Internet Archive) will accept and support Opus uploads as soon as possible, unlike their slow action on WebM. Speaking of which, the Mozilla announcement mentions “working on the same thing for video”. I can’t tell whether this means submitting WebM (probably more specifically the VP8 codec) to the IETF or something else, but good luck and thank you in all cases. [Update: The proposed video codec charter starts from some requirements not mentioning any particular codec; my wholly uninformed wild guess is that it will be another venue for the VP8 and H.264 camps to argue.] [Update 20120913: Or maybe “same thing for video” means Daala.] [Update 20120914: Greg Maxwell comments with a precise answer below.]

Copyright mitigation, not balance

Monday, September 10th, 2012

EU Commission VP Neelie Kroes gave a speech on copyright reform that while surely among the best on the subject from a high level politician (Techdirt coverage) is fundamentally broken.

Kroes argues that a lot has changed in the last 14 years about how information is consumed, distributed, produced, and used in research, and that copyright needs to adapt to these changes. If that argument eventually obtains significant mitigation of copyright, great. But it’s mostly wrong, and I suspect it questions far too little and gives away far too much to everyone invested in the current regime. For example:

And now let’s remind ourselves what our objectives as policymakers should be for the creative sector.

We should help artists live from their art. Stimulate creativity and innovation. Improve consumer choice. Promote our cultural heritage. And help the sector drive economic growth.

We can’t look at copyright in isolation: you have to look at how it fits into the real world. So let’s ask ourselves: how well is the current system achieving those objectives, in the world we live in today?

What about freedom? Equality?

Regarding new technologies in the last 14 years: there have been some (and Kroes was not so bold as to even hint at Napster and its successors, nor at the broad offenses against these and the web), but they are not at all what makes copyright mitigation interesting, except down in the weeds of how specific regulations interact with specific technologies and practices. That is the view of the universe from the vantage of administrators and agitators of the current regime; understandably, as this is where most day-to-day battles are fought.

Instead, mitigation of anti-commons information policy is interesting and desirable, and has been especially pertinent at various times throughout human history (e.g., the 1800s), because free speech is always desirable and always under threat from the embarrassments of control, corruption, and rent seeking. These are not qualities to be “balanced”, but diseases to be mitigated as much and for as long as possible.

The objectives Kroes says policymakers should have are fine, if secondary. Copyright (and patents, and sadly more) simply should not be seen as relevant to any of them, except as a barrier to be mitigated, not balanced nor adapted.

Future of Copyright

Monday, April 30th, 2012

“Copyright” (henceforth, copyrestriction) is merely a current manifestation of humanity’s malgovernance of information, of commons, of information commons (the combination being the most pertinent here). Copyrestriction was born of royal censorship and monopoly grants. It has acquired an immense retinue of administrators, advocates, bureaucrats, goons, publicists, scholars, and more. Its details have changed and especially proliferated. But its concept and impact are intact: grab whatever revenue and control you can, given your power, and call your grabbing a “right” and necessary for progress. As a policy, copyrestriction is far from unique in exhibiting these qualities. It is only particularly interesting because it, or more broadly, information governance, is getting more important as everything becomes information intensive, increasingly via computation suffusing everything. Before returning to the present and future, note that copyrestriction is also not temporally unique among information policies. Restriction of information for the purposes of control and revenue has probably existed since the dawn of agriculture, if not longer, e.g., cults and guilds.

Copyrestriction is not at all a right to copy a work, but a right to persecute others who distribute, perform, etc., a work. Although it is often said that a work is protected by copyrestriction, this is strictly not true. A work is protected through the existence of lots of copies and lots of curators. The same is true for information about a work, i.e., metadata, e.g., provenance. Copyrestriction is an attack on the safety of a work. Instead, copyrestriction protects the revenue and control of whoever holds copyrestriction on a work. In some cases, some elements of control remain with a work’s immediate author, even if they no longer hold copyrestriction: so-called moral rights.

Copyrestriction has become inexorably more restrictive. Technology has made it increasingly difficult for copyrestriction holders and their agents to actually restrict others’ copying and related activity. Neither trend has to give. Neither abolition nor a police state in service of copyrestriction is likely in the near future. Nor is the strength of copyrestriction the only dimension to consider.

Free and open source software has demonstrated the ethical and practical value of the opposite of copyrestriction, which is not its absence, but regulation mandating the sharing of copies, specifically in forms suitable for inspection and improvement. This regulation most famously occurs in the form of source-requiring copyleft, e.g., the GNU General Public License (GPL), which allows copyrestriction holders to use copyrestriction to force others to share works based on GPL’d works in their preferred form for modification, e.g., source code for software. However, this regulation occurs through other means as well, e.g., communities and projects refusing to curate and distribute works not available in source form, funders mandating source release, and consumers refusing to buy works not available in source form. Pro-sharing regulation (using the term “regulation” maximally broadly to include government, market, and others; some will disbelieve in the efficacy or ethics of one or more, but realistically a mix will occur) could become part of many policies. If it does not, society will be put at great risk by relying on security through obscurity, and will lose many opportunities to scrutinize, learn about, and improve society’s digital infrastructure and the computing devices individuals rely on to live their lives, and to live, period.

Information sharing, and regulation promoting and protecting the same, also ought to play a large role in the future of science. Science, as well as required information disclosure in many contexts, long precedes free and open source software. The last has only put a finer point on pro-sharing regulation in relation to copyrestriction, since the most relevant works (mainly software) are directly subject to both. But the extent to which pro-sharing regulation becomes a prominent feature of information governance, and more narrowly, the extent to which people have software freedom, will depend mostly on the competitive success of projects that reveal or mandate revelation of source, the success of pro-sharing advocates in making the case that pro-sharing regulation is socially desirable, and their success in getting pro-sharing regulation enacted and enforced (again, whether in customer and funding agreements, government regulation, community constitutions, or other) — much more so than on copyrestriction-based enforcement of the GPL and similar. But it is possible that the GPL is setting an important precedent for pro-sharing regulation, even though the pro-sharing outcome is conceptually orthogonal to copyrestriction.

Returning to copyrestriction itself, if neither abolition nor totalism are imminent, will humanity muddle through? How? What might be done to reduce the harm of copyrestriction? This requires a brief review of the forces that have resulted in the current muddle, and whether we should expect any to change significantly, or foresee any new forces that will significantly impact copyrestriction.

Technology (itself, not the industry as an interest group) is often assumed to be making copyrestriction enforcement harder and driving demands for harsher restrictions. In detail, that’s certainly true, but for centuries copyrestriction has been resilient to technical changes that make copying ever easier. Copying will continue to get easier. In particular, “all culture on a thumb drive” (for some very limited definition of “all”) approaches, or is here if you only care about a few hundred feature-length films, or if you are willing to use a portable hard drive and only care about a few thousand films (or much larger numbers of books and songs). But steadily more efficient copying isn’t going to destroy copyrestriction-sector revenue. More efficient copying may be necessary merely to maintain current levels of unauthorized sharing, given steady improvement in the authorized availability of content-industry-controlled works, and little effort to make unauthorized sharing easy and worthwhile for most people (thanks largely to suppression of anyone who tries, and media management not being an easy problem). Also, most collection from businesses and other organizations has not and probably will not become much more difficult due to easier copying.

National governments are the most powerful entities in this list, and the biggest wildcards. Although most of the time they act roughly as administrators or follow the cue of more powerful national governments, copyrestriction laws and enforcement are ultimately in their courts. As industries that could gain from copyrestriction grow in developing nations, those national governments could take on leadership of increasing restriction and enforcement, and with less concern for civil liberties, could have few barriers. At the same time, some developing nations could decide they’ve had enough of copyrestriction’s inequality promotion. Wealthy national governments could react to these developments in any number of ways. Trade wars seem very plausible, actual war prompted by a copyrestriction or related dispute not unimaginable. Nations have fought stupid wars over many perceived economic threats.

The traditional copyrestriction industry is tiny relative to the global economy, and even the U.S. economy, but its concentration and cachet make it a very powerful lobbyist. It will grab all of the revenue and control it possibly can, and it isn’t fading away. As alluded to above, it could become much more powerful in currently developing nations. Generational change within the content industry should lead to companies in that industry better serving customers in a digital environment, including conceivably attenuating persecution of fans. But it is hard to see any internal change resulting in support for positive legal changes.

Artists have always served as exhibit one for the content industry, and have mostly served as willing exhibits. This has been highly effective: every category of actor genuflects to the need for artists to be paid, and generally assumes that copyrestriction is mandatory to achieve this. Artists could cause problems for copyrestriction-based businesses and other organizations by demanding better treatment under the current system, but that would only affect the details of copyrestriction. Artists could significantly help reform if more were convinced of the goodness of reform and the usefulness of speaking up. Neither seems very likely.

Other businesses, web companies most recently, oppose copyrestriction directions that would negatively impact their businesses in the short term. Their goal is not fundamental reform, but continuing whatever their current business is, preferably with increasing profits. Just the same as content industries. A fundamental feature of muddling through will be tests of various industries and companies to carve out and protect exceptions. And exploit copyrestriction whenever it suits them.

Administrators, ranging from lawyers to WIPO, though they work constantly to improve or exploit copyrestriction, will not be the source of significant change.

Free and open source software and other constructed commons have already disrupted a number of categories, including server software and encyclopedias. This is highly significant for the future of copyrestriction, and more broadly, information governance, and a wildcard. Successful commons demonstrate feasibility and desirability of policy other than copyrestriction, help create a constituency for reducing copyrestriction and increasing pro-sharing policies, and diminish the constituency for copyrestriction by reducing the revenues and cultural centrality of restricted works and their controlling entities. How many additional sectors will opt-in freedom disrupt? How much and for how long will the cultural centrality of existing restricted works retard policy changes flowing from such disruptions?

Cultural change will affect the future of copyrestriction, but probably in detail only. As with technology change, copyrestriction has been incredibly resilient to tremendous cultural change over the last centuries.

Copyrestriction reformers (which includes people who would merely prevent additional restrictions, abolitionists, and those between and beyond, with a huge range of motivations and strategies among them) will certainly affect the future of copyrestriction. Will they only mitigate dystopian scenarios, or cause positive change? So far they have mostly failed, as the political economy of diffuse versus concentrated interests would predict. Whether reformers succeed going forward will depend on how central and compelling they can make their socio-political cause, and thus swell their numbers and change society’s narrative around information governance — a wildcard.

Scholars contribute powerfully to society’s narrative over the long term, and constitute a separate wildcard. Much scholarship has moved from a property- and rights-based frame to a public policy frame, but this shift as yet is very shallow, and will remain so until a property- and rights-basis assumption is cut out from under today’s public policy veneer, and social scientists rather than lawyers dominate the conversation. This has occurred before. Over a century ago economists were deeply engaged in similar policy debates (mostly regarding patents, mostly contra). Battles were lost, and tragically economists lost interest, leaving the last century’s policy to be dominated by grabbers exploiting a narrative of rights, property, and intuitive theory about incentives as cover, with little exploration and explanation of public welfare to pierce that cover.

Each of the above determinants of the future of copyrestriction largely hinges on changing (beginning with engaging, in many cases) people’s minds, with partial exceptions for disruptive constructed commons and largely exogenous technology and culture change (partial, as how these develop will be affected by copyrestriction policy and debate to some extent). Even those who cannot be expected to affect more than details as a class are worth engaging — much social welfare will be determined by details, under the safe assumption that society will muddle through rather than make fundamental changes.

I don’t know how to change or engage anyone’s mind, but close with considerations for those who might want to try:

  • Make copyrestriction’s effect on wealth, income, and power inequality, across and within geographies, a central part of the debate.
  • Investigate assumptions of beneficent origins of copyrestriction.
  • Tolerate no infringement of intellectual freedom, nor that of any civil liberty, for the sake of copyrestriction.
  • Do not assume optimality means “balance” nor that copyrestriction maximalism and public domain maximalism are the poles.
  • Make pro-sharing, pro-transparency, pro-competition, and anti-monopoly policies, orthogonal to the above dimension, part of the debate.
  • Investigate and celebrate the long-term policy impact of constructed commons such as free and open source software.
  • Take into account market size, oversupply, network effects, non-pecuniary motivations, and the harmful effects of pecuniary motivations on creative work, when considering supply and quality of works.
  • Do not grant that copyrestriction-based revenues are or have ever been the primary means of supporting creative work.
  • Do not grant big-budget movies as a failsafe argument for copyrestriction; wonderful films will be produced without it, and even if not, we will love whatever cultural forms exist and should be ashamed to accept any reduction of freedom for want of spectacle.
  • Words are interesting and important but trivial next to substance. Replace all occurrences of “copyrestriction” with “copyright” as you see fit. There is no illusion concerning our referent.

This work takes part in the Future of Copyright Contest and is published under the CC BY-SA 3.0 license.


Intellectual Protectionism’s regressive double taxation of the real economy

Sunday, April 29th, 2012

How Apple Sidesteps Billions in Taxes:

Almost every major corporation tries to minimize its taxes, of course. For Apple, the savings are especially alluring because the company’s profits are so high. Wall Street analysts predict Apple could earn up to $45.6 billion in its current fiscal year — which would be a record for any American business.

For anyone slightly concerned about inequality, this record ought to raise another red flag concerning the effect of copyright and patent monopolies. (Similarly, review a list of the wealthiest individuals.)

Apple serves as a window on how technology giants have taken advantage of tax codes written for an industrial age and ill suited to today’s digital economy. Some profits at companies like Apple, Google, Amazon, Hewlett-Packard and Microsoft derive not from physical goods but from royalties on intellectual property, like the patents on software that makes devices work. Other times, the products themselves are digital, like downloaded songs. It is much easier for businesses with royalties and digital products to move profits to low-tax countries than it is, say, for grocery stores or automakers. A downloaded application, unlike a car, can be sold from anywhere.

The growing digital economy presents a conundrum for lawmakers overseeing corporate taxation: although technology is now one of the nation’s largest and most valued industries, many tech companies are among the least taxed, according to government and corporate data. Over the last two years, the 71 technology companies in the Standard & Poor’s 500-stock index — including Apple, Google, Yahoo and Dell — reported paying worldwide cash taxes at a rate that, on average, was a third less than other S.& P. companies’. (Cash taxes may include payments for multiple years.)

First tax: monopoly pricing. Second tax: burden shifted to entities less able to move profits. Remove monopolies for much good, then resume debate about all aspects of taxation per usual, as you wish.


  • Real economy usually refers to non-financial sector. Suggestions welcome for non-IP sector.
  • I may be double counting: without copyright and patent, “real” economy share of profits would increase, tax burden concomitantly.
  • Not all profits that are easy to move result from copyright and patent; e.g., I suspect only a small proportion of Google’s profits result even indirectly from such.
  • There are more non-IP than IP-related entities on record wealth and profit lists, in particular natural resource entities. I don’t claim IP is the dominant source of inequality — but surely an increasing one — and more easily mitigated than natural resource entities, or for that matter, dictators and other state entities, which I wish were included on rich lists.

Permissions are job 0 for public licenses

Saturday, February 25th, 2012

Copyright permission is the only mechanism that is almost unambiguously required to maximize the social value realized from sharing and collaboration around intangible goods (given that copyright exists):

  • Some people think the addition of conditions that are in effect non-copyright regulation is also required, but others disagree, and given widespread ignorance about and noncompliance with copyleft regulation, I put it in the class of probably important (is there anyone conducting serious research around this question?) rather than that of unambiguously required. In any case, current copyleft conditions would be nonsensical if not layered on top of permissions.
  • I’ve heard the argument made that no mechanism is needed: culture aided by the net will route around copyright and other restrictions, just ignore them. I can’t find a good example, but some exhortations and the like of copyheart and kopimi are a subset of the genre. But unless one can make the case that the participation of wealthy litigation targets (any significant organization, from IBM to Wikimedia) is a net negative (and that’s only the first hurdle for such an argument to clear), a mechanism for permissions that appear legally sound to the copyright regime seems unambiguously necessary.
  • There are lots of other real and potential restrictions that it may be possible to grant permissions around, but so much progress has been made with only copyright permissions explicitly granted, and how other restrictions will play out is so largely a matter of speculation, that I put other permissions also in the class of probably important rather than unambiguously required.

Each of these merits much more experimentation and critique, but while any progress on the first two will inevitably be controversial, progress on the third ought be celebrated and demanded. (For completeness’ sake, progressive changes in social policy must also be celebrated and demanded, but are out of scope for this post.) I see few excuses for new licenses and dedications to not aggressively grant every permission that might be possible or needed, nor for new projects to use instruments that are not so aggressive (with the gigantic constraints that use of existing works and the non-existence of perfect instruments impose), nor for communities that vet instruments to give a stamp of approval to such instruments — indeed if politics and path dependencies were not issues, such communities ought to push non-aggressive instruments to some kind of legacy status.

In this context I am happy with the outcome of the submission of CC0 to the Open Source Initiative for approval: due not only to the lack, but the explicit exclusion, of patent permissions, Creative Commons has withdrawn the submission. Richard Fontana’s and Bruce Perens’ contributions to the thread are instructive.

I still think that CC0 is the best thing Creative Commons has ever done — indeed I think that largely because of the above considerations; I don’t know of an instrument that makes as thorough an attempt to grant permission around all copyright, related, and neighboring restrictions (patents aren’t in any of those classes) — and remain very happy that the Free Software Foundation considers CC0 to be GPL-compatible (I put GPL-incompatibility in a class of avoidable failure separate from but not wholly unlike not granting all permissions that may be possible, unless one is experimenting with some really novel copyleft regulation).

From the OSI submission thread, I also highly recommend Carl Boettiger’s plea for a public domain instrument appropriate for heterogeneous (code/data/other) products. It will (and ought to) take Creative Commons a long time to vet any potential new version of CC0, but fortunately as I’ve pointed out before, there is plenty of room for experimentation with public domain mechanisms, especially around branding (as incompatibility is less of an issue; compare with copyleft (although if one made explicit compatibility a requirement, there is plenty of potentially beneficial exploration to be done there, too)). An example of such that attempts to include a patent waiver is the Ampify Unlicense (background post).

I hope that the CC0/OSI discussion prompts a race to the bottom for public domain instruments, as new ones attempt to carve out every possible permission. This also ought beneficially affect future permissive and copyleft licenses, which also ought grant every permission possible, whatever conditions they layer on top. Note that adding one such permission, around sui generis database restrictions, is probably the most pressing reason for Creative Commons to have started working on version 4.0 of its licenses. I also hope that the discussion leads to increased collaboration and knowledge sharing (at the very least) across domains in which public licenses are used, taking into account Boettiger’s plea and the realities that such licenses are very often used across several domains (a major point of my recent FOSDEM talk, see especially slides 8-11) and that knowledge concerning commons governance is very thin in every domain.

But keep in mind that most of this post concerns very small potential gains relative to merely granting copyright permission (assuming no non-free conditions are present) and even those are quite a niche subject.☻

FOSDEM 2012 Legal Issues DevRoom

Thursday, February 9th, 2012

I attended and spoke at the FOSDEM 2012 Legal Issues DevRoom (Update 20120217: slides, blog posts) organized by Tom Marble, Bradley Kuhn, Karen Sandler, and Richard Fontana. I understand the general idea was to gather people for advanced discussions of free/libre/open source software legal and policy issues, bypassing the “what is copyright?” panel that apparently afflicts such conferences (I haven’t noticed, but don’t go to many FLOSS conferences; I bet presenters usually get the answer only superficially correct). I thought the track mostly succeeded (consider this high praise) — presentations did cover contemporary issues that mostly only people following FLOSS policy would have heard of, but I wished for just a bit more that would be news or really provocative to such people. In part I think 30-minute time slots were to blame — long enough for presenters to belabor background points, short enough for no substantive discussion. Given only 30 minutes, I personally probably would have benefited from a 15-minute speaking limit, thus being forced to state only important points, and leaving a little time for participants to tear those apart. Yes, I should have imposed that discipline on myself, but did not think of it until now.

Philippe Laurent gave an overview of cases involving “Open Licences before European Courts”. He did not list one recent “open content” case, Gerlach vs. DVU.

Ambjörn Elder on “The Methods of FOSS Activism” spoke about political activism, a worthy topic, but I hoped for more discussion of activism for software freedom, rather than against ever-worse policy.

In place of Armijn Hemel’s “What Goes into an Executable? Identifying a Binary’s Sources by Tracing Build Processes” (missed flight), Kuhn and Sandler excerpted from a presentation on, and took questions regarding, nonprofit homes for free software projects. Writing this reminded me to make a donation to Software Freedom Conservancy, of which Kuhn and Sandler are respectively ED and Secretary. Somewhat tangentially, I don’t find the topic boring, but I do find the lack of information, informed-ness (including mine), and tools regarding it boring. I don’t know of any libre documentation on running a nonprofit — I’d love to see a series of FLOSS Manuals on this. OneClickOrgs is a fairly new free software project to handle some aspects of governing a small organization, but I don’t know how useful it is at this point. Related to lack of documentation, some of the Q&A emphasized how little people know of these topics across jurisdictions — never mind rule minutiae, even the existence of relevant “home” organizations.

Dave Neary on “Grey Areas of Software Licensing” questioned whether one could legally do various things, using examples largely drawn from GIMP development. The answer is always maybe. Fortunately developers sometimes take that as yes.

Allison Randal gave an overview of FLOSS history with a focus on legal arrangements in “FLOSSing for Good Legal Hygiene: Stories from the Trenches”.

Michael Meeks on “Risks vs. Benefits on Copyright Assignment” made the case that assignment (and some non-assignment contributor agreements) is harmful to participation, and proprietary re-licensing has not proven a good business, so a corporate-sponsored software project ought to either be free (sans assignment and potential for proprietary relicensing) or proprietary, and fully enjoy the benefits of one or the other, rather than neither. He also indicated that permissive licensing can be better than copyleft for a free software project with copyrights held by a corporation, as the former gives all effectively equal rights, while the latter abets proprietary relicensing and ridiculous claims that the corporate sponsor will protect the community. Meeks repeatedly called on the FSF to abandon assignment, as for-profits disingenuously cite FSF’s practice in support of their own (FSF ED John Sullivan responded that they are getting corrections made where FSF practice is inappropriately cited and will work on explaining their practice better). Finally, Meeks requested an “ALGPL” which would require sharing of modified sources used to provide a network service, like the AGPL, but allow modifications that only link to (or the equivalent) the ALGPL codebase to not be shared. I don’t know whether he wants GPL or LGPL behavior if such modifications are distributed. I was somewhat chagrined (but understanding; just not enough time, and maybe nobody submitted a decent proposal) that this was the only1 discussion of network services!

Loïc Dachary on “Can for-profit companies enforce copyleft without becoming corrupt like MySQL AB?” said yes, if they aren’t the sole copyright holders; on projects he is hired to work on, he seeks out additional contributors who will hold copyright independently.

John Sullivan in “Is copyleft being framed?” presented some new data, apparently replicable (based on Debian package metadata), showing that GPL-family licenses are used in the vast majority (did I hear 87%?) of Debian packages. Update 20120217: I did hear 87%, in 2009, and 93% in 2011. Note that some software is available under multiple licenses. Slides.

Richard Fontana on “The (possible) decline of the GPL, and what to do about it” suggested the need to start thinking about GPLv4, but I’m not sure for what issues2 — doesn’t matter; if the particulars of licenses can make a big difference, requirements for the next version of important ones should always be a relevant topic, even if there is no expectation of creating another version for many years. Fontana also indicated that perhaps the next (massively adopted, presumably) copyleft might not be created by an existing steward3 (meaning the FSF, or obviously CC in many non-software fields), which I take as an indication that license innovation is possibly more important than compatibility and non-proliferation.

I don’t remember much of panels with Hugo Roy, Giovanni Battista Gallus, Bradley Kuhn, Richard Fontana on application stores and Ciarán O’Riordan, Benjamin Henrion, Deb Nicholson, Karen Sandler on software patents, as I was probably preparing for my talk, but I trust that free software is still important if mode of delivery changes slightly and that software patents ought be abolished.

I spoke on “⊂ (FLOSS legal/policy ∩ CC [4.0])” (slides: odp, pdf, slideshare). Contrary to my apology I didn’t blog much of the talk beforehand. I will get to all of the topics eventually.

Most of the slides from the day should be available soon on the DevRoom’s page. Some audio might be available as well eventually.

Kuhn demonstrated his qualifications for another fallback career: crowd control. Fontana blogged a summary of the devroom. Sandler gave the most important talk on FLOSS policy (but not at FOSDEM). Marble apparently did almost all the organizing. Thanks to all! There will be another legal/policy devroom next year.

Addendum 20120210: Richard Fontana offered these corrections:

1“re network services, I mentioned rise as factor in possible GPL decline, coupled with AGPL pwned by dual-license hucksters”

2“main reason for GPLv4 right now is GPLv3 is needlessly complex, limiting popularity of strong copyleft.”

3“growing concern that anti-license-proliferationism concentrates power in privileged Establishment organizations”

Someday knowing the ins and outs of copyright will be like knowing the intricate rules of internal passports in Communist East Germany

Thursday, January 26th, 2012

Said Evan Prodromou, who I keep quoting.

I repeat Evan as a reminder and apology. I’ve blogged many times about copyright licenses in the past, and will have a few detailed posts on the subject soon in preparation for a short talk at FOSDEM.

Given current malgovernance of the intellectual commons, public copyright licenses are important for freedom. They’re probably also important trials for post-copyright regulation (meant in the broadest sense, including at least “market” and “government” regulatory mechanisms), e.g., of the ability to inspect and modify complete and corresponding source.

At the same time, the totemic and contentious role copyright licenses (and sometimes assignment or contributor agreements, and sometimes covering related wrongs and patents) play in free/libre/open works, projects, and communities often seems an unfortunate misdirection of energy at best, and probably looks utterly ridiculous to casual observers. I suspect copyright also takes at least some deserved limelight, and perhaps much more, from other aspects of governance, plain old getting things done, and activism around other issues (regarding the first, some good recent writings include those by Simon Phipps and Bradley Kuhn, but the prominence of copyright arrangements therein reinforces my point). But this all amounts to an additional reason it is important to get the details of public copyright licenses right, in particular compatibility between them where it can be achieved — so as to minimize the amount of time and energy projects put into considering and arguing about the options.

Obviously the energy put into public licenses is utterly insignificant against that spent on other copyright/patent/trademark complex activities. But I’m not going to write about that in the near future, so it isn’t part of my apology and rationalization.

Someday I hope that knowing the ins and outs of both Internal Passports of the mind and international passports will be like knowing the rules of internal passports in Communist East Germany (presumably intricate; I did not look for details, but hopefully they exist not many hops from a Wikipedia article on Eastern Bloc emigration and defection).

Years of open hardware licenses

Tuesday, January 10th, 2012

Last in a list of the top 10 free/open source software legal developments in 2011 (emphasis added):

Open Hardware License. The open hardware movement received a boost when CERN published an Open Hardware License (“CERN OHL”). The CERN OHL is drafted as a documentation license which is careful to distinguish between documentation and software (which is not licensed under the CERN OHL). The license is “copyleft” and, thus, similar to GPLv2 because it requires that all modifications be made available under the terms of the CERN OHL. However, the license to patents, particularly important for hardware products, is ambiguous. This license is likely to be the first of a number of open hardware licenses, but, hopefully, the open hardware movement will keep the number low and avoid “license proliferation” which has been such a problem for open source software.

But the CERN OHL isn’t the first “open hardware license”. Or perhaps it is the nth first. Several free software inspired licenses intended specifically for design and documentation have been created over the last decade or so. I recall encountering one dating back to the mid-1990s, but can’t find a reference now. Discussion of open hardware licenses was hot at the turn of the millennium, though most open hardware projects from that time didn’t get far, and I can’t find a license that made it to “1.0”.

People have been wanting to do for hardware what the GNU General Public License has done for software and trying to define open hardware since that timeframe. They keep on wanting (2006) and trying (2007, 2011 comments).

Probably the first arguably “high quality” license drafted specifically for open hardware is the TAPR Open Hardware License (2007). The CERN OHL might be the second such. There has never been consensus on the best license to use for open hardware. Perhaps this is why CERN saw fit to create yet another (incompatible copyleft at that — incompatible with TAPR OHL, GPL, and BY-SA), but there still isn’t consensus in 2012.

Licenses primarily used for software (usually [L]GPL, occasionally BSD, MIT, or Apache) have also been used for open hardware since at least the late 1990s — and much more so than any license created specifically for open hardware. CC-BY-SA has been used by Arduino since at least 2008 and since 2009.

In 2009 the primary drafter of the TAPR OHL published a paper with a rationale for the license. By my reading of the paper, the case for a license specific to hardware seems pretty thin — hardware design and documentation files, and distribution of printed circuit boards seem a lot like program source and executables, and mostly subject to copyright. It also isn’t clear to me why the things TAPR OHL handles differently than most open source software licenses (disclaims strictly being a copyright license, instead wanting to serve as a clickwrap contract; attempts to describe requirements functionally, instead of legally, to avoid describing explicitly the legal regime underlying requirements; limited patent grant applies to “possessors” not just contributors) might not be interesting for software licenses, if they are interesting at all, nor why features generally rejected for open source software licenses shouldn’t also be rejected for open hardware (email notification to upstream licensors; a noncommercial-only option — thankfully deprecated late last year).

Richard Stallman’s 1999 note about free hardware seems more clear and compelling than the TAPR paper, but I wish I could read it again without knowing the author. Stallman wrote:

What this means is that anyone can legally draw the same circuit topology in a different-looking way, or write a different HDL definition which produces the same circuit. Thus, the strength of copyleft when applied to circuits is limited. However, copylefting HDL definitions and printed circuit layouts may do some good nonetheless.

In a thread from 2007 about yet another proposed open hardware license, three people who generally really know what they’re talking about each wondered why a hardware-specific license is needed: Brian Behlendorf, Chris DiBona, and Simon Phipps. The proposer withdrew and decided to use the MIT license (a popular non-copyleft license for software) for their project.

My bias, as with any project, would be to use a GPL-compatible license. But my bias may be inordinately strong, and I’m not starting a hardware project.

One could plausibly argue that there are still zero quality licenses specific to open hardware, as the TAPR OHL’s upstream notification requirement is arguably non-open, and the CERN OHL also contains an upstream notification requirement. Will history repeat?

Addendum: I just noticed the existence of an open hardware legal mailing list, probably a good venue to follow if you’re truly interested in these issues. The organizer is Bruce Perens, who is involved with TAPR and is convinced non-copyright mechanisms are absolutely necessary for open hardware. His attempt to bring rigor to the field and his decades of experience with free and open source software are to be much appreciated in any case.