Post Creative Commons

Copyleft.spin

Monday, July 9th, 2012

My post on 5 years of GPLv3 got picked up by LWN.net; see a medium-sized comment thread there. There and elsewhere some people thought I was spinning or apologizing for GNU GPLv3. I can see why; I should’ve included more disclaimers. I certainly was spinning, but for some of my hobby horses, not for GPLv3. I shouldn’t expect anyone except maybe a few faithful readers and friends to accurately detect my hobby horses. Here are the relevant ones, sans GPLv3 anniversary dressing:

  • Copyleft is pro-sharing regulation.
  • But job 0 of all public licenses is negation of bad default regulation (copyright and patent).
  • Incompatibility, including across magisteria, hampers both the pull to accept pro-sharing regulation and the negation of bad default regulation (I don’t recall addressing the latter explicitly prior to my 5 years of GPLv3 post).

Thus (one of) my stated “metrics” for judging GPLv3, “number and preponderance of works under GPLv3-compatible terms.” I should have generalized: a marker for a license’s success is the extent to which it contributes to increasing the pool of works which may be used together without copyright as a barrier (but possibly as a pro-sharing regulatory enforcement mechanism). It makes sense to talk about this generalization specifically in terms of the pool of GPLv3-compatible works because it is the closest thing we have to a universal recipient (technically AGPLv3 is that, but it is less well known, I don’t know of any license that is AGPLv3 compatible but not GPLv3 compatible, and I’ll cover it further in a future post).
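As a rough illustration of why a near-universal recipient matters, one-way license compatibility can be modeled as a directed graph, with the “pool” being everything that has a path into the recipient. This is a hypothetical sketch in Python with a simplified subset of edges (not legal advice; real compatibility carries many conditions):

```python
# Illustrative sketch (not legal advice): one-way license compatibility
# modeled as a directed graph. An edge A -> B means a work under A may
# be incorporated into a work distributed under B. The edges below are
# a simplified, partial subset chosen for illustration only.

COMPAT = {
    "MIT": ["BSD2", "Apache2", "GPLv2", "GPLv3"],
    "BSD2": ["GPLv2", "GPLv3"],
    "Apache2": ["GPLv3"],      # Apache2 is GPLv3- but not GPLv2-compatible
    "GPLv2": [],               # GPLv2-only works cannot move to GPLv3
    "GPLv3": ["AGPLv3"],
}

def pool(target):
    """Licenses whose works can (transitively) be combined under target."""
    inbound = {a for a, outs in COMPAT.items() if target in outs}
    # Expand transitively: if A -> B and B is already in the pool,
    # then works under A can also reach the target via B.
    changed = True
    while changed:
        changed = False
        for a, outs in COMPAT.items():
            if a not in inbound and any(b in inbound for b in outs):
                inbound.add(a)
                changed = True
    return inbound | {target}
```

With this toy table, `pool("GPLv3")` includes the permissive licenses and Apache2 but excludes GPLv2-only, which is exactly the structural point: a license’s success can be read off the size of the pool flowing into it.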

I said by this metric GPLv3 is doing pretty well. But that depends on where one wants to start from. GPLv3 allowed for the preexisting Apache2 to be compatible with it, and for MPL2 to be developed such that it could be. But maybe it was ridiculous for Apache2 to have been released without being GPLv2 compatible. And I consider GPLv2’s option for incompatibility with future versions a gross error (which GPLv3 repeated; it just isn’t felt right now as there probably will be no GPLv4 for a very long time). One could reasonably argue that the only useful development in the history of free and open source licensing is BSD3 dropping BSD4’s advertising clause (and to a lesser extent BSD2 also dropping a no endorsement clause — note that BSD# refers to the number of clauses present, not versions, and note that one could argue that people should’ve used similar licenses like MIT and ISC without the problematic clauses all along). But given that previous history could not be rewritten in 2006 and 2007, I think GPLv3 has done pretty well, and at the very least could have done a whole lot worse.

Could GPLv3 have done a whole lot better? I appreciate that some people would’ve preferred no versioning, or perhaps a conservative versioning which could have achieved Apache2 and eventually MPL2 compatibility and been uncontroversial for all GPLv2 licensors to move to. That would have to be weighed against the additional (and controversial, e.g., anti-Tivoization) measures GPLv3 actually has, and as I said in my previous post, I think it will take a long time to be able to judge how big of an instrumental effect those have toward software freedom. But again, in 2012 previous history can’t be rewritten, though consideration of such hypotheticals could possibly help us make improved choices in the future, which is being written now: version 4.0 of the various Creative Commons licenses is in development (in addition to the somewhat abstract hobby horses listed above, my previous post can be read as an abstract case for specific CC BY-SA 4.0->GPLv3 compatibility) and Richard Fontana has just started drafting Copyleft.next.

There’s much interesting about Copyleft.next, but regarding compatibility, I like that it is explicitly compatible with any version of GNU GPL. Copyleft license innovation (which admittedly one ought be extremely skeptical of; see above) and critique-in-a-license is possible while avoiding the obvious harm of incompatibility — declare explicit compatibility. Such can even be a sort of pro-compatibility agitation, of which I consider the Italian Open Data License 1.0 (explicitly compatible with both CC BY-SA and ODbL, which are mostly incompatible with each other) an example.

In case anyone thinks I’m still spinning or apologizing for GPLv3, let me state that over the past 5 years I’ve grown increasingly skeptical about copyleft-via-copyright-licenses (but I still think it has a useful role, especially given inability to rewrite history, and cheer its enforcement; if you don’t like enforcement, use a permissive instrument). I’ll write a post about why in the fullness of time, but for now, the executive summary: institutions, obscurity, politics, stupidity.

To close on an appreciative, celebratory note: forget analysis of GPLv3’s net impact — it is a fine, elegant, and surprisingly readable document. Especially in comparison to extant Creative Commons licenses, which lack both clarity of purpose and readability (according to various readability metrics, CC 4.0 licenses will probably catch up to and maybe surpass GPLv3 on readability). Compatibility aside, one of my major tendencies in advising on CC 4.0 is to advocate for copying GPLv3 in a number of places. I probably won’t ever get to it, but I could do a post or series of posts on the sentences and strategies in GPLv3 that I think are great. Also, check out the FSF’s GPLv3 turns 5 post featuring a cake.

5 years of GPLv3

Friday, June 29th, 2012


Version 3 of the GNU GPL was released 5 years ago today. How successful the license is and will be may become more clear over the next 5 years. Use relative to other free software licenses? Good data and analysis are difficult. The importance of v3’s innovations in protecting and promoting users’ freedoms in practice? Will play out over many years. More software freedom and indeed, general welfare, than in a hypothetical world without GPLv3? Academic questions, and well worth considering.

I suggest that number (add qualifiers of, and scaling by, importance, quality, etc, as you wish) of works under GPLv3 or use of GPLv3 relative to other licenses are less important markers of GPLv3’s success, and that of the broader FLOSS community, than the number and preponderance of works under GPLv3-compatible terms. Although it is a relatively highly regulatory license, its first and most important job is the same as that of permissive and public domain instruments — grant all permissions possible around default restrictions imposed by current and future bad public policy.

Incompatibility among free licenses means that the licenses have failed at their most important jobs for any case in which one wishes to use works under incompatible terms together in a way that default bad policy restricts. That such cases may currently be edge cases, or even unknown, is a poor excuse for incompatibility. Remember that critique of current bad policy includes the restrictions it places on serendipitous uses and uses in the distant future!

On this number-and-preponderance-of-GPLv3-compatible-works metric, the license and free software community look pretty good (note that permissive licenses such as MIT and BSD, visibly popular among web developers, are GPL-compatible). Probably the most important incompatible terms are GPLv2-only and EPL. But software is suffusing everything, including hardware design, cultural/scientific/documentation works, and data. I hope to see major progress toward eliminating barriers across these overlapping domains in the next years.

Future of Intellectual Protectionism and not much Innovation Policy

Wednesday, May 23rd, 2012

I read all of the pieces selected for a „Future of copyright” anthology resulting from a contest run by the Modern Poland Foundation (apparently the winner of a small cash prize will be announced tomorrow; I highly recommend all of the pieces below and commend the judges for their selections):

7 are fiction (the 3 exceptions are me, Spitzlinger, and Togi). 5 of these are dystopian (exceptions: Binns, Mansoux), 4 of which (exception: Żyła) involve some kind of fundamental loss of personal control as a result of intellectual protectionism (even more fundamental than drug war style enforcement involves, which Żyła’s does concern). 3 of these (exception: Eddie) involve extrapolations of DRM, 2 of which (exception: Melin) involve DRM implants.

I’d like to see versions of the dystopian stories written as IP propaganda, e.g., recast as RIAA/MPAA pieces from the future (several of the stories have funnily named future enforcement organizations in that vein). Such could be written as satire, apology, or even IP totalist advocacy (utopian rather than dystopian).

Of the dystopian stories, Solís is probably most dystopian, Eddie most humorous, and Betteridge overall best executed. Żyła needs a bit of development — the trend posited is incongruous and unexplained — but maybe due to an unknown factor to be suggested by fictional future freakonomics, or perhaps I just missed it. Melin ends with some hope, but annoys me for contemporary reasons — why would the recipient of a body part artificially grown with “open” methods be constrained in the disposition of that part by a “Creative Commons license” on those methods? Another reason to discourage use of CC licenses for hardware design.

The two non-dystopian stories take the form of a “letter from the future” in which various “open” movements and “models” win (Binns; if I had to bet on a winner of the contest, I’d put my money on this one) and an allegory for the history and projected future of copyright (Mansoux; probably the piece I enjoyed reading most).

Of the 3 non-fiction pieces, Togi is most non-standard — a rant in the form of lemmas — and fun, though briefly goes off the rails in asserting that “those entities which represent the greatest tax gain will be preferred by government.” If that were the case, all that is prohibited would instead be taxed. Statements about “revenue” leave little wiggle room, but I suppose a charitable interpretation would include in “tax gain” all rents to those influencing power, be they bootleggers, Baptists, or those directly obtaining tax revenue. Spitzlinger accepts the stories my piece rejects and suggests something like the Creative Commons Attribution-NonCommercial-ShareAlike license be the default for new works, with the possibility of additional temporary restriction (a one-year usufruct, perhaps?).

All of the pieces evince unhappiness with the current direction of information governance. Of those that reveal anything about where they stand on the reform spectrum (admitting that one dimension makes for an impoverished description of reform possibilities; that’s one of the points I hoped to communicate in my piece) I’d place Binns, Melin, and Spitzlinger away from abolition, and me, Mansoux, and Togi toward abolition.

I expect the contest and anthology to be criticized for only representing reform viewpoints. Sadly, no maximalist pieces were submitted. The most moderate reform submission didn’t follow contest rules (not a new piece, no license offered). More than alternate perspective versions of IP dystopias, I’d like to see attempts to imagine future systems which increase private returns to innovation, perhaps looking nothing like today’s copyright, patent, etc., and increase overall social welfare — I’m dubious, but please try.

Update 20120524: The two most fun and non-standard entries won: Mansoux, with an honorable mention to Togi. I now must also congratulate the judges on their good taste. Read those two, or the whole anthology (pdf).

technicaldebt.xsl

Thursday, May 17th, 2012

Former colleague Nathan Yergler has a series of posts on technical debt (1, 2, 3). I’m responsible for some of the debt described in the third posting:

We had the “questions” used for selecting a license modeled as an XSLT transformation (why? don’t remember; wish I knew what we were thinking when we did that)

In 2004, I was thinking:

The idea is to encapsulate the “choose license” process in a file or a few files that can be reused in different environments (e.g., standalone apps) without having those apps reproduce the core language surrounding the process or the rules for translating user answers into a license choice and associated metadata.

Making the “questions” available as XML (questions.xml) and “rules” as XSL (chooselicense.xsl) attempts to maximize accessibility and minimize reimplementation of logic across multiple implementations.

I also thought about XSLT as an interesting mechanism for distributing untrusted code. Probably too complex, or just too unconventional and ill-supported, and driven by bad requirements. I’ll probably say more about the last in a future refutation post.
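The underlying idea (questions and rules as data, consumed rather than reimplemented by each front end) can be sketched in Python. This is a hypothetical illustration with made-up question and rule names, not the actual questions.xml/chooselicense.xsl logic:

```python
# Hypothetical sketch: keep the "choose license" questions and rules in
# one place as data, so that any front end (web form, standalone app)
# can present the questions and map answers to a license choice plus
# metadata without reimplementing the logic.

QUESTIONS = [
    ("commercial", "Allow commercial uses of your work?", ["yes", "no"]),
    ("derivatives", "Allow modifications of your work?",
     ["yes", "sharealike", "no"]),
]

def choose_license(answers):
    """Translate user answers into a license code, name, and URI."""
    modules = ["by"]  # attribution is assumed in every choice here
    if answers.get("commercial") == "no":
        modules.append("nc")
    if answers.get("derivatives") == "no":
        modules.append("nd")
    elif answers.get("derivatives") == "sharealike":
        modules.append("sa")
    code = "-".join(modules)
    return {
        "code": code,
        "name": "Creative Commons " + code.upper(),
        "uri": "http://creativecommons.org/licenses/%s/3.0/" % code,
    }
```

For example, `choose_license({"commercial": "no", "derivatives": "sharealike"})` yields the by-nc-sa code and URI. The actual implementation expressed this mapping as an XSLT transformation over an XML answers document, which is what made it reusable from any environment with an XSLT processor.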

Anyway, I’m sorry for that bit. I recommend Nathan’s well written series.

Open Source Semiconductor Core Licensing → GPL hardware?

Saturday, May 12th, 2012

In Open Source Semiconductor Core Licensing (pdf; summary) Eli Greenbaum considers when use of semiconductor core designs licensed under the GPL would make the designs of chips and devices, and possibly physical objects based on those designs, trigger GPL requirements to distribute the design of the derived work under the GPL.

It depends of course, but overall Greenbaum’s message for proprietary hardware is exactly the same as innumerable commentators’ messages for proprietary software:

  • If you use any GPL work, be extremely careful to isolate that use in ways that minimize the chances one could successfully claim your larger work triggers GPL requirements;
  • Excluding GPL work would be easier; if you want to incorporate open source works, consider only LGPL (I don’t understand why Greenbaum didn’t mention permissive licenses, but typically they’d be encouraged here).

Greenbaum concludes:

The semiconductor industry has been moving further toward the use of independently developed cores to speed the creation of new devices and products. However, the need for robustly maintained and supported cores and the absence of clear rules and licenses appropriate for the industry’s structure and practice have stymied the development of an open source ecosystem, which might otherwise have been a natural outgrowth of the use of independently developed cores. The development of a context-specific open source license may be the simplest way to clarify the applicable legal rules and encourage the commercial use of open source cores.

That’s something like what John Ackermann wanted to show more generally for hardware designs in a paper I’ve written about before. Each leaves me unconvinced:

  • If one wants copyleft terms, whether to protect a community or proprietary licensing revenue, use the GPL, which gives you plenty of room to aggressively enforce as and if you wish;
  • If you don’t want copyleft terms, use a permissive license such as the Apache License 2.0 (some people understand this but still think a version tweaked for hardware is necessary; I’m skeptical of that too).

Greenbaum does mention Ackermann’s paper and TAPR license and other “open hardware” licenses I previously discussed in a footnote:

While “open hardware” licenses do exist, they do not take account of many of the complexities of the semiconductor device manufacturing process. For example, the TAPR Open Hardware License does not address the use of technology libraries, the incorporation of soft cores in a device design, or the use of independent contractors for parts of the design process.

I think this highlights a difference of perspective. “Open hardware” people inclined toward copyleft want licenses which even more clearly than the GPL impose copyleft obligations on entities that build on copylefted designs. Greenbaum doesn’t even sketch what a license he’d consider appropriate for the industry would look like, but I’m doubtful that a license tailored to enabling some open collaboration but protecting revenues in industry-specific ways would be considered free or open by many people, or be used much.

I suspect the reason open hardware has only begun taking off recently (and will be huge soon) and open semiconductor design not yet (though for both broad and narrow categories people have been working on it for well over a decade) has almost nothing to do with the applicability of widely used licenses (which are far from ideal even for software, but network effects rule) and everything to do with design and production technologies that make peer production a useful addition.

Although I think the conclusion is weak (or perhaps merely begs for a follow-up explaining the case), Greenbaum’s paper is well worth reading, in particular section VI. Distribution of Physical Devices, which makes the case the GPL applies to such based on copyright, contract, and copyright-like restrictions and patent. These are all really important issues for info/innovation/commons governance to grapple with going forward. My hope is that existing license stewards take this to heart (e.g., do serious investigations of how GPLv3+ and Apache 2.0 can be best used for designs, and take what is learned and what the relevant communities say when in the fullness of time the next versions of those licenses are developed; the best contribution Creative Commons can probably make is to increase compatibility with software licenses and disrecommend direct use of CC licenses for designs as it has done for software) and that newer communities not operate in an isolated manner when it comes to commons governance.

[e]Book escrow

Thursday, May 10th, 2012

I had no intention of writing yet another post about DRM today. But a new post on Boing Boing, Libraries set out to own their ebooks, has some of the same flavor as some of the posts I quoted yesterday and is a good departure (for making a few more points, and not writing any more about the topic for some time).

Today’s Boing Boing post (note their Day Against DRM post from last week) says a library in Colorado is:

buying eBooks directly from publishers and hosting them on its own platform. That platform is based on the purchase of content at discount; owning—not leasing—a copy of the file; the application of industry-standard DRM on the library’s files; multiple purchases based on demand; and a “click to buy” feature.

I think that’s exactly what Open Library is doing (maybe excepting “click to buy”; not sure what happened to “vending” mentioned when BookServer was announced). A letter to publishers from the library is fairly similar to the Internet Archive’s plea of a few days ago. Excerpt:

  • We will attach DRM when you want it. Again, the Adobe Content Server requires us to receive the file in the ePub format. If the file is “Creative Commons” and you do not require DRM, then we can offer it as a free download to as many people as want it. DRM is the default.
  • We will promote the title. Over 80% of our adult checkouts (and we checked out over 8.2 million items last year) are driven by displays. We will present e-content data (covers and descriptions) on large touch screens, computer catalogs, and a mobile application. These displays may be “built” by staff for special promotions (Westerns, Romances, Travel, etc.), automatically on the basis of use (highlighting popular titles), and automatically through a recommendation engine based on customer use and community reviews.
  • We will promote your company. See a sample press release, attached.

I did not realize libraries were so much like retail (see “driven by displays”). Disturbing, but mostly off-topic.

The letter lists two concerns, both financial. Now: give libraries discounts. Future: allow them to sell used copies. DRM is not a concern now, nor for the future. As I said a couple days ago, I appreciate the rationale for making such a deal. Librarian (and Wikimedian, etc) Phoebe Ayers explained it well almost exactly two years ago: benefit patrons (now). Ok. But this seems to me to fit what ought to be a canonical definition of non-visionary action: choosing to climb a local maximum which will be hard to climb down from, with higher peaks in full view. Sure, the trails are not known, but must exist. This “vision” aspect is one reason Internet Archive’s use of DRM is more puzzling than local libraries’ use.

Regarding “owning—not leasing—a copy of the file”, I now appreciate more a small part of the Internet Archive’s recent plea:

re-format for enduring access, and long term preservation

Are libraries actually getting books from publishers in formats ideal for these tasks? I doubt it, but if they are, that’s a very significant plus.

I dimly recall source code escrow being a hot topic in software around 25 years ago. (At which time I was reading industry rags…at my local library.) I don’t think it has been a hot topic for a long time, and I’d guess that’s because the ability to run the software without a license manager, and to inspect, fix, and share the software right now, on demand, rather than as a failsafe mechanism, is a much, much better solution. Good thing lots of people and institutions over the last decades demanded the better solution.

DRM and the Churches of Universal Access to All Knowledge’s strategic plans

Friday, May 4th, 2012


Over 2.5 years ago (2009-10-19) the Internet Archive celebrated its move into a former church (I know it’s a cheap shot, but my immediate reaction was “yay, monument to ignorance made into a monument to knowledge; more like that please (if we must have monuments)!”) and the launch of BookServer. The latter was described as “like the web, but for books” illustrated with a slide featuring a cloud in the middle surrounded by icons representing various devices and actors (see the same or similar image at the previous link). I was somewhat perplexed — if a less credible entity had described their project as “like the web, but for Foo” as illustrated by a picture of a cloud labeled “FooServer”, my bullshit alarm would’ve been going crazy.

For the remainder of the event a parade of people associated in some way with books endorsed the project on stage. I only remember a few of them. One was Adam Hyde, who recently drafted a book called A Webpage is a Book. Somewhere in the middle of this parade someone stood out — tall and slick, salesperson slick — and gave a spiel about how Adobe was excited about BookServer and using technology to maximize getting content to consumers. In any case, it was obvious from what the Adobe person said that BookServer, whatever it was, would be using DRM. I nearly fell out of my seat, but I don’t think anyone else noticed — everyone just clapped, same as for all other endorsers — and the crowd was filled with people who ought to have understood and been alarmed.

Over the past couple years I occasionally wondered what became of BookServer and its use of DRM, but was reminded to look by Mako Hill’s post in March concerning how it often isn’t made clear whether a particular offer is made with DRM. I didn’t see anything on the Internet Archive site, but a few days ago Peter Brantley’s writeup of a Digital Public Library of America meeting included:

Kahle announced his desire to broaden access to 20th Century literature, much of it still in copyright, by digitizing library collections and making them available for a 1-copy/1-user borrowing system, such as that provided by the Internet Archive’s Open Library, in concert with State libraries.

Right, OpenLibrary in addition to book metadata (“one web page for every book”; do we obtain recursion if we take Hyde literally? a mere curiosity, as we probably shouldn’t) now offers downloading, reading, and borrowing in various combinations for some books. Downloading includes the obvious formats. Reading is via the excellent web-based Internet Archive BookReader, and is available for books that may be downloaded as well as a borrowing option. In the borrowing case, only one person at a time may read a particular book on the OpenLibrary site. The other digital borrowing option is where DRM comes in — Adobe Digital Editions is required. (This is for books that can be borrowed via OpenLibrary; some may be borrowed digitally from traditional libraries via OverDrive, which probably also uses DRM.)

This and screens leading up to this are clear to me, but I don’t know about most people. That there’s DRM involved is just not deemed to be pertinent; some particular software is needed, that’s all. For myself, the biggest improvement not involving a big policy change would be to split up the current “Show only eBooks” search option. Maybe “Show only downloadable eBooks”.


OpenLibrary is looking to expand its ebook “lending” offerings according to a post made just two days ago, We want to buy your books! Internet Archive Letter to Publishers:

We currently buy, lend, and preserve eBooks from publishers and booksellers, but we have not found many eBooks for sale at any price. The Internet Archive is running standard protection systems to lend eBooks from our servers through our websites, openlibrary.org and archive.org. In this way, we strive to provide a seamless experience for our library patrons that replicates a traditional library check-out model, but now with eReaders and searching.

By buying eBooks from you, we hope to continue the productive relationship between libraries and publishers. By respecting the rights and responsibilities that have evolved in the physical era, we believe we will all know how to act: one patron at a time, restrictions on copying, re-format for enduring access, and long term preservation.

Rather than begging to buy books with restrictions, I’d prefer the Internet Archive, and indeed everyone, to demand books without restrictions, software or legal (of course they’re mixed given current malgovernance — anticircumvention laws). But that’s a different strategy, possibly requiring a lower discount rate. I can appreciate the Internet Archive’s dedication to being a library, and getting its patrons — everyone — access to knowledge, right now.

Still, it would be nice if libraries were to participate (even more, I know many librarians do) in anti-DRM activism, such as a Day Against DRM, which is today. Also see my Day Against DRM post from last year.

Speaking of different strategies, Creative Commons licenses so far include a regulatory clause prohibiting distribution with DRM. Some people have been dissatisfied with this clause since the beginning, and it is again being debated for version 4.0 of the licenses. I still don’t think the effectiveness (in promoting the desired outcome, a more free world; enforcement, enforceability, etc, all ought be subsidiary) of the options has really been discussed, though I did try:

I suspect that anyone who has or will bother to participate in discussions about CC and DRM is a bitter opponent of DRM (I can say this with certainty about most of the participants so far). My guess is that the disagreement comes from not one or other set of people hating or misunderstanding freedom or accepting DRM, but from different estimations of the outcomes of different strategies.

Keeping or strengthening the DRM prohibition fights DRM by putting DRM-using platforms at a disadvantage (probably not significant now, but could become substantial if more CC-licensed works become culturally central and significant enforcement efforts commence) and by putting CC’s reputation unambiguously against DRM, making the license an expression of the world we aspire to live in, and giving policy advocates a talking point against mandating DRM anywhere (“it breaks this massive pool of content”).

Weakening through parallel distribution or removing altogether the DRM prohibition fights DRM indirectly, by removing a barrier (probably small now, given widespread non-compliance) to CC-licensed works becoming culturally central (ie popular) and thus putting DRM-using platforms at a disadvantage – the defect being useless to gain access to content, thus being merely a defect.

Personally, I find the second more compelling, but I admit it is simply the sort of story that usually appeals to me. Also, I find it congruent with the conventional wisdom a broad “we” tell to people who just don’t get it, supposedly: obscurity is a bigger threat than piracy. But I don’t expect anyone to change their minds as a result. Especially since this is in concept more or less what Evan Prodromou was saying in 2006 http://evan.prodromou.name/Free_content_and_DRM :-)

I do think that expression is important, and whatever gets baked into 4.0, CC could do more in a couple ways:

1. Communicate the DRM prohibition, especially on license deeds (where applicable, at least in <=3.0); suggested by Luis Villa in http://lists.ibiblio.org/pipermail/cc-licenses/2012-January/006663.html
2. Make anti-DRM advocacy a bigger part of CC's overall message; a bit at http://creativecommons.org/tag/drm but IIRC something like Day Against DRM has never been featured on the home page.

Day Against DRM is featured on the CC home page today.

Future of Copyright

Monday, April 30th, 2012

“Copyright” (henceforth, copyrestriction) is merely a current manifestation of humanity’s malgovernance of information, of commons, of information commons (the combination being the most pertinent here). Copyrestriction was born of royal censorship and monopoly grants. It has acquired an immense retinue of administrators, advocates, bureaucrats, goons, publicists, scholars, and more. Its details have changed and especially proliferated. But its concept and impact are intact: grab whatever revenue and control you can, given your power, and call your grabbing a “right” and necessary for progress. As a policy, copyrestriction is far from unique in exhibiting these qualities. It is only particularly interesting because it, or more broadly, information governance, is getting more important as everything becomes information intensive, increasingly via computation suffusing everything. Before returning to the present and future, note that copyrestriction is also not temporally unique among information policies. Restriction of information for the purposes of control and revenue has probably existed since the dawn of agriculture, if not longer, e.g., cults and guilds.

Copyrestriction is not at all a right to copy a work, but a right to persecute others who distribute, perform, etc, a work. Although it is often said that a work is protected by copyrestriction, this is strictly not true. A work is protected through the existence of lots of copies and lots of curators. The same is true for information about a work, i.e., metadata, e.g., provenance. Copyrestriction is an attack on the safety of a work. Instead, copyrestriction protects the revenue and control of whoever holds copyrestriction on a work. In some cases, some elements of control remain with a work’s immediate author, even if they no longer hold copyrestriction: so-called moral rights.

Copyrestriction has become inexorably more restrictive. Technology has made it increasingly difficult for copyrestriction holders and their agents to actually restrict others’ copying and related activity. Neither trend has to give. Neither abolition nor a police state in service of copyrestriction is likely in the near future. Nor is the strength of copyrestriction the only dimension to consider.

Free and open source software has demonstrated the ethical and practical value of the opposite of copyrestriction, which is not its absence, but regulation mandating the sharing of copies, specifically in forms suitable for inspection and improvement. This regulation most famously occurs in the form of source-requiring copyleft, e.g., the GNU General Public License (GPL), which allows copyrestriction holders to use copyrestriction to force others to share works based on GPL’d works in their preferred form for modification, e.g., source code for software. However, this regulation occurs through other means as well, e.g., communities and projects refusing to curate and distribute works not available in source form, funders mandating source release, and consumers refusing to buy works not available in source form. Pro-sharing regulation (using the term “regulation” maximally broadly to include government, market, and others; some will disbelieve in the efficacy or ethics of one or more, but realistically a mix will occur) could become part of many policies. If it does not, society will be put at great risk by relying on security through obscurity, and lose many opportunities to scrutinize, learn about, and improve society’s digital infrastructure and the computing devices individuals rely on to live their lives, and to live, period.

Information sharing, and regulation promoting and protecting the same, also ought to play a large role in the future of science. Science, as well as required information disclosure in many contexts, long precedes free and open source software. The last has only put a finer point on pro-sharing regulation in relation to copyrestriction, since the most relevant works (mainly software) are directly subject to both. But the extent to which pro-sharing regulation becomes a prominent feature of information governance, and more narrowly, the extent to which people have software freedom, will depend mostly on the competitive success of projects that reveal or mandate revelation of source, the success of pro-sharing advocates in making the case that pro-sharing regulation is socially desirable, and their success in getting pro-sharing regulation enacted and enforced (again, whether in customer and funding agreements, government regulation, community constitutions, or other) much more so than copyrestriction-based enforcement of the GPL and similar. But it is possible that the GPL is setting an important precedent for pro-sharing regulation, even though the pro-sharing outcome is conceptually orthogonal to copyrestriction.

Returning to copyrestriction itself, if neither abolition nor totalism are imminent, will humanity muddle through? How? What might be done to reduce the harm of copyrestriction? This requires a brief review of the forces that have resulted in the current muddle, and whether we should expect any to change significantly, or foresee any new forces that will significantly impact copyrestriction.

Technology (itself, not the industry as an interest group) is often assumed to be making copyrestriction enforcement harder and driving demands for harsher restrictions. In detail, that’s certainly true, but for centuries copyrestriction has been resilient to technical changes that make copying ever easier. Copying will continue to get easier. In particular the “all culture on a thumb drive” scenario (for some very limited definition of “all”) approaches, or is here if you only care about a few hundred feature-length films, or are willing to use a portable hard drive and only care about a few thousand films (or much larger numbers of books and songs). But steadily more efficient copying isn’t going to destroy copyrestriction sector revenue. More efficient copying may be necessary merely to maintain current levels of unauthorized sharing, given steady improvement in authorized availability of content industry controlled works, and little effort to make unauthorized sharing easy and worthwhile for most people (thanks largely to suppression of anyone who tries, and media management not being an easy problem). Also, most revenue collection from businesses and other organizations has not, and probably will not, become much more difficult due to easier copying.
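The thumb-drive arithmetic above is easy to check. A back-of-envelope sketch, assuming roughly 1.5 GB per compressed feature-length film and 5 MB per song; the drive sizes and per-work sizes are rough assumptions for illustration, not measurements:

```python
# Back-of-envelope check of "all culture on a thumb drive",
# assuming ~1.5 GB per compressed feature-length film and
# ~5 MB per song (rough assumptions, not measurements).

THUMB_DRIVE_GB = 256     # a large thumb drive
PORTABLE_HDD_GB = 4000   # a portable hard drive

films_per_thumb = THUMB_DRIVE_GB // 1.5        # a few hundred films
films_per_hdd = PORTABLE_HDD_GB // 1.5         # a few thousand films
songs_per_thumb = THUMB_DRIVE_GB * 1000 // 5   # tens of thousands of songs
```

Even generous estimates cover only a narrow slice of “all” culture, which is the point of the parenthetical above.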

National governments are the most powerful entities in this list, and the biggest wildcards. Although most of the time they act roughly as administrators or follow the cue of more powerful national governments, copyrestriction laws and enforcement are ultimately in their courts. As industries that could gain from copyrestriction grow in developing nations, those national governments could take on leadership of increasing restriction and enforcement, and with less concern for civil liberties, could have few barriers. At the same time, some developing nations could decide they’ve had enough of copyrestriction’s inequality promotion. Wealthy national governments could react to these developments in any number of ways. Trade wars seem very plausible, actual war prompted by a copyrestriction or related dispute not unimaginable. Nations have fought stupid wars over many perceived economic threats.

The traditional copyrestriction industry is tiny relative to the global economy, and even the U.S. economy, but its concentration and cachet make it a very powerful lobbyist. It will grab all of the revenue and control it possibly can, and it isn’t fading away. As alluded to above, it could become much more powerful in currently developing nations. Generational change within the content industry should lead to companies in that industry better serving customers in a digital environment, including conceivably attenuating persecution of fans. But it is hard to see any internal change resulting in support for positive legal changes.

Artists have always served as exhibit one for the content industry, and have mostly served as willing exhibitions. This has been highly effective, and every category genuflects to the need for artists to be paid, and generally assumes that copyrestriction is mandatory to achieve this. Artists could cause problems for copyrestriction-based businesses and other organizations by demanding better treatment under the current system, but that would only affect the details of copyrestriction. Artists could significantly help reform if more were convinced of the goodness of reform and usefulness of speaking up. Neither seems very likely.

Other businesses, web companies most recently, oppose copyrestriction directions that would negatively impact their businesses in the short term. Their goal is not fundamental reform, but continuing whatever their current business is, preferably with increasing profits, just the same as the content industries. A fundamental feature of muddling through will be attempts by various industries and companies to carve out and protect exceptions, and to exploit copyrestriction whenever it suits them.

Administrators, ranging from lawyers to WIPO, though they work constantly to improve or exploit copyrestriction, will not be the source of significant change.

Free and open source software and other constructed commons have already disrupted a number of categories, including server software and encyclopedias. This is highly significant for the future of copyrestriction, and more broadly, information governance, and a wildcard. Successful commons demonstrate feasibility and desirability of policy other than copyrestriction, help create a constituency for reducing copyrestriction and increasing pro-sharing policies, and diminish the constituency for copyrestriction by reducing the revenues and cultural centrality of restricted works and their controlling entities. How many additional sectors will opt-in freedom disrupt? How much and for how long will the cultural centrality of existing restricted works retard policy changes flowing from such disruptions?

Cultural change will affect the future of copyrestriction, but probably in detail only. As with technology change, copyrestriction has been incredibly resilient to tremendous cultural change over the last centuries.

Copyrestriction reformers (which includes people who would merely prevent additional restrictions, abolitionists, and those between and beyond, with a huge range of motivations and strategies among them) will certainly affect the future of copyrestriction. Will they only mitigate dystopian scenarios, or cause positive change? So far they have mostly failed, as the political economy of diffuse versus concentrated interests would predict. Whether reformers succeed going forward will depend on how central and compelling they can make their socio-political cause, and thus swell their numbers and change society’s narrative around information governance — a wildcard.

Scholars contribute powerfully to society’s narrative over the long term, and constitute a separate wildcard. Much scholarship has moved from a property- and rights-based frame to a public policy frame, but this shift as yet is very shallow, and will remain so until a property- and rights-basis assumption is cut out from under today’s public policy veneer, and social scientists rather than lawyers dominate the conversation. This has occurred before. Over a century ago economists were deeply engaged in similar policy debates (mostly regarding patents, mostly contra). Battles were lost, and tragically economists lost interest, leaving the last century’s policy to be dominated by grabbers exploiting a narrative of rights, property, and intuitive theory about incentives as cover, with little exploration and explanation of public welfare to pierce that cover.

Each of the above determinants of the future of copyrestriction largely hinges on changing (beginning with engaging, in many cases) people’s minds, with partial exceptions for disruptive constructed commons and largely exogenous technology and culture change (partial as how these develop will be affected by copyrestriction policy and debate to some extent). Even those who cannot be expected to effect more than details as a class are worth engaging — much social welfare will be determined by details, under the safe assumption that society will muddle through rather than make fundamental changes.

I don’t know how to change or engage anyone’s mind, but close with considerations for those who might want to try:

  • Make copyrestriction’s effect on wealth, income, and power inequality, across and within geographies, a central part of the debate.
  • Investigate assumptions of beneficent origins of copyrestriction.
  • Tolerate no infringement of intellectual freedom, nor that of any civil liberty, for the sake of copyrestriction.
  • Do not assume optimality means “balance” nor that copyrestriction maximalism and public domain maximalism are the poles.
  • Make pro-sharing, pro-transparency, pro-competition and anti-monopoly policies orthogonal to the above dimension part of the debate.
  • Investigate and celebrate the long-term policy impact of constructed commons such as free and open source software.
  • Take into account market size, oversupply, network effects, non-pecuniary motivations, and the harmful effects of pecuniary motivations on creative work, when considering supply and quality of works.
  • Do not grant that copyrestriction-based revenues are or have ever been the primary means of supporting creative work.
  • Do not grant big budget movies as a failsafe argument for copyrestriction; wonderful films will be produced without, and even if not, we will love whatever cultural forms exist and should be ashamed to accept any reduction of freedom for want of spectacle.
  • Words are interesting and important but trivial next to substance. Replace all occurrences of “copyrestriction” with “copyright” as you see fit. There is no illusion concerning our referent.

This work is published under the CC BY-SA 3.0 license.


Libre Planet 2012

Tuesday, April 10th, 2012


A couple weeks ago I attended the Free Software Foundation’s annual conference, Libre Planet, held at UMass Boston a bit south of downtown. I enjoyed the event considerably, but can only give brief impressions of some of the sessions I saw.

John Sullivan, Matt Lee, Josh Gay started with a welcome and talk about some recent FSF campaigns. I think Sullivan said they exceeded their 2011 membership goal, which is great. Join. (But if I keep to my refutation schedule, I’m due to tell you why you shouldn’t join in less than 5 years.)

Rubén Rodríguez spoke about Trisquel, a distribution that removes non-free software and recommendations from Ubuntu (lagging those releases by about 5 months) and makes other changes its developers consider user-friendly, such as running GNOME 3 in fallback mode and some Web (an IceWeasel-like de-branded Firefox) privacy settings. I also saw a lightning talk from someone associated with ThinkPenguin, which sells computers pre-loaded with Trisquel.

Asheesh Laroia spoke about running events that attract and retain newcomers. You can read about OpenHatch (the organization he runs) events or see a more specific presentation he recently gave at PyCon with Jessica McKellar. The main point of humor in the talk concerned not telling potential developers to download a custom built VM to work with your software: it will take a long time, and often not work.

Joel Izlar’s talk, Digital Justice: How Technology and Free Software Can Build Communities and Help Close the Digital Divide, covered his work with Free IT Athens.

Alison Chaiken gave the most important talk of the conference, Why Cars need Free Software. I was impressed by how many manufacturers are using at least some free software in vehicles and distressed by the state of automotive security and proprietary vendors pitching security through obscurity. Get Chaiken in front of as many people as possible.

Brett Smith gave an update on the FSF GPL Compliance Lab, including mentioning MPL 2.0 and potential CC-BY-SA 4.0 compatibility with GPLv3 (both of which I’ve blogged about before), but the most interesting part of the talk concerned his participation in Trans-Pacific Partnership Stakeholder Forums; it sounded like software freedom concerns got a more welcome reception than expected.

ginger coons spoke about Libre Graphics Magazine, a graphic arts magazine produced entirely with free software. I subscribed.

Deb Nicholson gave a great, funny presentation on Community Organizing for Free Software Activists. If the topic weren’t free software, Nicholson could make a lot of money as a motivational speaker.

Evan Prodromou spoke on the Decentralized Social Web, using slides the same or very similar to his SXSW deck, which is well worth flipping through.

Chris Webber and I spoke about Creative Commons 4.0 licenses and free software/free culture cooperation. You can view our picture-only slides (odp; pdf; slideshare) but a recent interview with me and post about recent developments in MediaGoblin (Webber’s project) would be more informative and cover similar ground. We also pre-announced an exciting project that Webber will spam the world about tomorrow and sort of reciprocated for an award FSF granted Creative Commons three years ago — the GNU project won the Free Software Project for the Advancement of Free Culture Social Benefit Award 0, including the amount of 100BTC, which John Sullivan said would be used for the aforementioned exciting project.

Yukihiro ‘matz’ Matsumoto spoke on how Emacs changed his life, including introducing him to programming, free software, and influencing the design of Ruby.

Matthew Garrett spoke on Preserving user freedoms in the 21st century. Perhaps the most memorable observation he made concerned how much user modification of software occurs without adequate freedom (making the modifications painful), citing CyanogenMod.

I mostly missed the final presentations in order to catch up with people I wouldn’t have been able to otherwise, but note that Matsumoto won the annual Advancement of Free Software award, and GNU Health the Free Software Award for Projects of Social Benefit. Happy hacking!

Announcing RichClowd: crowdfunding with a $tatus check

Sunday, April 1st, 2012

RichClowd

Oakland, California, USA — 2012 April 1

Today, RichClowd pre-announces the launch of RichClowd.com, an exclusive “crowdfunding” service for the wealthy. Mass crowdfunding sites like Kickstarter have demonstrated a business model, but are held back by the high transaction costs of small funds and non-audacious projects proposed by under-capitalized creators. RichClowd will be open exclusively to funders and creators with already substantial access to capital.

The wealthy can fund and create audacious projects without joining together, but mass crowdfunding points to creative, marketing, networking, and status benefits to joint funding. So far mass crowdfunding has improved the marketplace for small projects and trinkets. The wealthy constitute a different stratum of the marketplace — in the clouds, relatively — and RichClowd exists to improve the marketplace for monuments, public and personal, and other monumental projects.

“Through exclusivity RichClowd will enable projects with higher class, bigger vision, and ultimately longer-lasting contributions to society”, said RichClowd founder Mike Linksvayer, who continued: “Throughout human history great people have amassed and created the infrastructure, artifacts and knowledge that survives and is celebrated. As the Medicis were to the renaissance, RichClowders will be to the next stage of global society.”

RichClowd will initially have a membership fee of $100,000, which may be applied to project funding pledges. To ensure well-capitalized projects, RichClowd will implement a system called Dominant Assurance Contracts, which align the interests of funders and creators via a refund above the pledged amount for unsuccessful projects. This system will require creators to deposit the potential additional refund amount prior to launching a RichClowd project.
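The settlement rule of a dominant assurance contract is simple enough to sketch. The following is a minimal illustration of the refund-plus-bonus mechanism described above, assuming a 5% failure bonus; the function names, rate, and escrow rule are illustrative assumptions, not details of RichClowd’s (pre-launch) system:

```python
# Illustrative sketch of dominant assurance contract settlement.
# The 5% bonus_rate and the escrow rule are assumptions for
# illustration, not RichClowd's actual terms.

def settle(goal, pledges, bonus_rate=0.05):
    """Return refunds per funder once the funding window closes.

    If total pledges meet the goal, the creator keeps all pledges
    (refunds are zero). If not, each funder gets a full refund plus
    a bonus, paid from a deposit the creator escrowed before launch.
    """
    total = sum(pledges.values())
    if total >= goal:
        return {funder: 0.0 for funder in pledges}
    # Failure: refund plus bonus, rounded to cents.
    return {funder: round(amount * (1 + bonus_rate), 2)
            for funder, amount in pledges.items()}

def required_deposit(goal, bonus_rate=0.05):
    """Creator's escrow: the maximum possible total bonus payout."""
    return goal * bonus_rate
```

The bonus is what makes pledging a dominant strategy: funders profit whether the project funds or fails, while the creator’s escrowed deposit keeps the bonus promise credible.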

For the intellectual products of RichClowd projects, use of a forthcoming RichClowd Club License (RCCL) will be encouraged, making outputs maximally useful to funders, while maintaining exclusivity. Egalitarian projects will have the option of using a free public license.

The technology powering RichClowd.com will be developed openly and available under an AGPL open source badgeware intellectual property license. “RichClowd believes in public works. In addition to the many that will be created via the RichClowd service, open development of the RichClowd.com technology is the company’s own direct contribution to the extraordinary public work that is the Internet”, said Linksvayer.

About RichClowd

RichClowd is a pre-launch exclusive crowdfunding service with a mission of increasing the efficiency of bringing together great wealth and great projects to make an amazing world. Based in Oakland, California, a city with a reputation for poverty and agitation, RichClowd additionally takes on the local civic duty of pointing out Oakland’s incredible wealth and wealthy residents: to begin with, look up at the hills.

Contact

Mike Linksvayer, Founder
biginfo@richclowd.com